CN113239722B - Deep learning based strong convection extrapolation method and system under multi-scale

Info

Publication number
CN113239722B
CN113239722B (application CN202110345106.5A)
Authority
CN
China
Prior art keywords
radar
time
state
network
extrapolation
Prior art date
Legal status
Active
Application number
CN202110345106.5A
Other languages
Chinese (zh)
Other versions
CN113239722A (en)
Inventor
文立玉
罗飞
柴文涛
卫霄飞
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Information Technology
Priority to CN202110345106.5A (patent CN113239722B)
Publication of CN113239722A
Application granted
Publication of CN113239722B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/13: Satellite images
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01W: METEOROLOGY
    • G01W 1/00: Meteorology
    • G01W 1/10: Devices for predicting weather conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention provides a deep-learning-based strong convection extrapolation method and system at multiple scales. The method comprises the following steps: receiving radar image data and extracting its hidden-state features; convolving the hidden-state features, feeding the convolution result into a TrajGRU network, and performing strong convection extrapolation to obtain a radar chart; and performing a second convolution on the radar chart together with batch normalization to obtain the extrapolated image data. The method trains on radar image data to obtain extrapolated images, which are used to forecast extreme strong convection weather such as rainstorms, thunderstorms, and hail.

Description

Deep learning based strong convection extrapolation method and system under multi-scale
Technical Field
The invention belongs to the technical field at the intersection of artificial intelligence and meteorology, and particularly relates to a deep-learning-based strong convection extrapolation method and system at multiple scales.
Background
Strong convection weather generally refers to disastrous extreme weather phenomena, such as convective gales, hail, and short-term heavy precipitation, accompanying thunderstorms. Such weather is typically sudden in onset, fast-moving, severe, and extremely destructive. It occurs mainly in meso- and small-scale weather systems of limited spatial extent: the horizontal range is generally under two hundred kilometers, and in the smallest systems only tens of meters to tens of kilometers. Its life cycle is short and markedly paroxysmal, lasting from about one hour to tens of hours, and in the shortest cases only a few minutes to an hour. It often arises in cumulonimbus or isolated convective clouds. When strong convection weather strikes, it is often accompanied by severe phenomena such as thunder and lightning, gales, and heavy rain, which destroy houses, crops, and trees, damage telecommunications and transportation, and can even cause casualties.
Forecasting such destructive strong convection weather is therefore particularly important: predicting the phenomenon makes its movement trend and occurrence conditions known in advance, so that meteorological staff can report to the relevant departments before the weather arrives and take effective measures to avoid unnecessary disasters as far as possible. At present, radar image extrapolation mainly relies on the optical flow method and on deep-neural-network extrapolation based on LSTM and its variants.
The optical flow method was among the earliest applied to the extrapolation problem. Its essence is to exploit the correlation between adjacent frames of a serialized image sequence to establish a relation between two consecutive frames, and from that relation to compute the motion of image regions between adjacent frames.
Extrapolation based on LSTM/ConvLSTM: researchers have used LSTM to realize radar extrapolation prediction. LSTM can fit sequence data and alleviates the vanishing-gradient problem by discarding part of the information through its forget and output gates; it can handle sequences of modest length. Scholars in Hong Kong added convolution operations on top of the LSTM and proposed ConvLSTM, which extracts features spatially, is better suited to image-based time series, and is more effective for image feature extraction. ConvLSTM is a variant of LSTM in which the weight computations become convolution operations, so that image features can be extracted.
Improvements based on ConvGRU and VGGNet: referring to the ConvLSTM structure, some scholars proposed the ConvGRU model, which combines a convolutional neural network (CNN) with the GRU and its control gates. The GRU gate structure is simpler than that of the LSTM, so the ConvGRU model trains faster and needs less memory than the ConvLSTM structure. Related work further improved the ConvGRU convolution layers based on the VGGNet network: the improved architecture replaces one large convolution kernel with a stack of several small kernels, reducing the number of trainable parameters while strengthening the network's feature-extraction capability. The advantage of this model is that it uses ConvGRU structures instead of ConvLSTM structures and stacks them into a multi-level framework, combining the spatial feature-extraction capability of convolution with the memory capability of the GRU for time-series problems. Experiments comparing this model against the optical flow method verified the applicability of the improved model to short-term rainfall forecasting.
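The VGGNet-style substitution described above, replacing one large convolution kernel with a stack of small ones, can be made concrete with a parameter count. A minimal Python sketch follows, with a hypothetical channel width of 64 that is not taken from the patent:

```python
def conv_params(k, c_in, c_out, bias=True):
    """Parameter count of a single k x k 2-D convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

c = 64  # hypothetical channel width

# One 7x7 convolution ...
big = conv_params(7, c, c)

# ... versus three stacked 3x3 convolutions, which cover the same 7x7
# receptive field while inserting two extra non-linearities in between.
small = sum(conv_params(3, c, c) for _ in range(3))

print(big, small)  # 200768 110784
```

The same arithmetic explains the design choice: for an equal receptive field, the stack needs roughly 45% fewer trainable parameters, and the extra non-linearities between layers strengthen feature extraction.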
Radar echo extrapolation based on the GAN algorithm: scholars have applied generative adversarial networks (GANs) to short-term prediction and carried out related experiments. The GAN method extracts image features from a series of radar observations on the principle of the convolutional neural network, builds a prediction-network model, and optimizes it through a loss function. Extrapolation experiments on four related weather processes in one area and year showed that the echo shape, intensity, and position extrapolated by the GAN in short-term convective forecasts are in most cases basically consistent with observations, so the GAN technique works well for radar extrapolation. However, the literature also notes that the echo range extrapolated by the GAN method is too large, and in particular that it predicts stratiform-cloud precipitation poorly. Tests on three levels of echo intensity forecast 1 h ahead for 18 cases, covering precipitation from an easterly system, the southwest monsoon, a westerly-zone system, and typhoons, showed that the GAN model forecasts medium-intensity echoes well but still leaves room for improvement on strong echoes.
In GAN-based radar echo extrapolation, the network works as follows: the first-generation generator G1 produces predicted data from the historical serialized data, and the first-generation discriminator D1 then learns from the generated and real echoes until it can reliably distinguish generated radar images from real ones. A second-generation generator G2 is then trained whose radar images can fool D1; at that point a second-generation discriminator D2 is retrained, and so on. The generator G and discriminator D play a min-max game: through their interaction during training, both continuously improve until a dynamic equilibrium is reached in which neither can get better, i.e. fake samples are completely indistinguishable from real ones, and the output echo is used as the prediction.
It should be noted that during training, the historical radar image sequence and the related data of the prediction target within 1 h are used as training samples. Since the GAN method itself has no explicit "forecasting" step, what the model learns to output during training is the forecast for 1 h into the future.
Traditional interpolation methods such as inverse distance weighting, kriging, and trend-surface analysis suffer from large errors, complex computation, strict requirements on data distribution, and low applicability.
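For reference, the first of these traditional methods, inverse distance weighting, is simple enough to sketch in a few lines of numpy; the station coordinates and values below are purely illustrative:

```python
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0):
    """Inverse distance weighting: the interpolated value is a
    distance-weighted average of the sampled values."""
    d = np.linalg.norm(sample_xy - query_xy, axis=1)
    if np.any(d == 0):                 # query coincides with a sample point
        return float(sample_z[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * sample_z) / np.sum(w))

# Toy gauge readings at three hypothetical stations.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([10.0, 20.0, 30.0])

mid_val = idw(xy, z, np.array([0.5, 0.5]))
print(mid_val)  # a distance-weighted blend of the three stations
```

The weakness criticized above is visible even here: the result depends strongly on the power parameter and on how evenly the samples are distributed around the query point.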
a) Disadvantages of the optical flow method
The prior art has several obvious problems. The optical flow method struggles to achieve both timeliness and accuracy. Its theory rests on the assumption that the brightness of the same object stays constant, which reality rarely satisfies fully; this is a major drawback of the method. The literature also notes that the optical-flow estimation step and the extrapolation step are separate, which makes parameter selection difficult.
b) Disadvantages of LSTM extrapolation
In existing RNN-based models, LSTM still struggles with long or large-magnitude sequences; when the time span is large and the network is deep, computation becomes heavy and time-consuming.
c) Scale singleness problem for extrapolated networks
The deep-learning extrapolation tasks known to us so far are all based on single-scale studies. On this basis, the present invention proposes a multi-scale extrapolation task.
d) Extrapolation model level simplicity
Related scholars have combined VGG with ConvGRU to deepen the network hierarchy, but the effect of building an extrapolation model on ConvGRU with a VGG network still leaves room for improvement.
e) MSE does not perform well on the extrapolation task
Most extrapolation studies use the MSE loss, which makes the quality of successive extrapolations degrade rapidly; we therefore need to propose an optimization method better suited to this task.
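The degradation under MSE has a simple explanation: when the future is uncertain, the prediction that minimizes expected MSE is the pixel-wise mean of the possible outcomes, which smears sharp echoes. A toy numpy illustration, with 1-D fields standing in for reflectivity rows:

```python
import numpy as np

# Two equally likely "futures" for a small radar echo: the cell moves
# either left or right.
future_a = np.array([0., 1., 0., 0., 0.])   # echo moved left
future_b = np.array([0., 0., 0., 1., 0.])   # echo moved right

# The prediction minimizing expected MSE over both outcomes is their mean:
mse_optimal = (future_a + future_b) / 2
print(mse_optimal)  # half-intensity at both candidate positions

# The smear matches neither plausible future and its peak intensity is
# halved, which is exactly the blur seen when models trained with pure
# MSE are rolled out over several extrapolation steps.
assert mse_optimal.max() == 0.5
```

The expected MSE of this blurred prediction is nevertheless lower than that of either sharp guess, so gradient descent on an MSE objective is driven toward the blur.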
Disclosure of Invention
In view of the above, an object of the present invention is to provide a deep-learning-based strong convection extrapolation method at multiple scales that can be trained to produce extrapolated images for forecasting extreme strong convection weather such as rainstorms, thunderstorms, and hail.
To this end, the technical scheme of the invention is as follows: a deep-learning-based strong convection extrapolation method at multiple scales comprises the following steps:
receiving radar image data and extracting its hidden-state features;
convolving the hidden-state features, feeding the convolution result into a TrajGRU network, and performing strong convection extrapolation to obtain a radar chart;
performing a second convolution on the radar chart together with batch normalization to obtain the extrapolated image data, where the loss function used with the batch normalization is:

$$L = \xi_1 \cdot \frac{1}{m}\sum_{i=1}^{m}\lvert y_i - \hat{y}_i\rvert + \xi_2 \cdot \Big(1 - [l_M(x,y)]^{\alpha_M}\prod_{j=1}^{M}[c_j(x,y)]^{\beta_j}\,[s_j(x,y)]^{\gamma_j}\Big) + \theta \cdot L_{\mathrm{MMD}}$$

where $\xi_1 + \xi_2 = 1$, $m$ is the number of true and predicted values, $M$ is the number of scales of the MS-SSIM loss, $\xi_2$ is the weight of the MS-SSIM loss, $l_M(x,y)$ is the luminance comparison of the MS-SSIM loss at coordinate $(x,y)$ at scale $M$ and $\alpha_M$ its luminance exponent, $y_i$ is the true value, $\hat{y}_i$ is the predicted value, $c_j(x,y)$ is the contrast comparison of the MS-SSIM loss at coordinate $(x,y)$ at scale $j$ and $\beta_j$ its contrast exponent, $s_j(x,y)$ is the structure comparison of the MS-SSIM loss at coordinate $(x,y)$ at scale $j$ and $\gamma_j$ its structure exponent, $\xi_1$ is the weight of the loss between true and predicted values, $L_{\mathrm{MMD}}$ is the offset MMD loss, and $\theta$ is its offset coefficient.
Further, the step of receiving radar image data and extracting its hidden-state features specifically includes:
extracting Doppler radar base data from the radar image data through a VGG16 network, the Doppler radar base data comprising radar base reflectivity, radar composite reflectivity, radar base radial velocity, and composite radial velocity;
and extracting the hidden-state features of the radar base data from the Doppler radar base data.
Further, the VGG16 network builds its convolutional layers by stacking convolution kernels of small size.
Further, the hidden-state features are convolved by a ConvGRU, whose new-information structure is:

$$H'_{t,:,i,j} = \sum_{l=1}^{L} W_l \cdot H_{t-1,:,p(l,i,j),q(l,i,j)}$$

where $H'_{t,:,i,j}$ is the new information at location $(i,j)$ at time $t$, $L$ is the total number of allowed connection layers, $W_l$ is the weight, $H_{t-1,:,p(l,i,j),q(l,i,j)}$ is the memory state at time $t-1$ of the point $(p,q)$ obtained over the neighborhood set of $(i,j)$, and $\theta$ parameterizes the operation that generates the neighborhood connections.
Further, the structural formulas of the TrajGRU network are:

$$u_t, v_t = \gamma(x_t, H_{t-1})$$
$$Z_t = \sigma\Big(W_{xz} * x_t + \sum_{l=1}^{L} W^{l}_{hz} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$R_t = \sigma\Big(W_{xr} * x_t + \sum_{l=1}^{L} W^{l}_{hr} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$H'_t = f\Big(W_{xh} * x_t + R_t \circ \sum_{l=1}^{L} W^{l}_{hh} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$H_t = (1 - Z_t) \circ H'_t + Z_t \circ H_{t-1}$$

where $*$ is the convolution operator, $\circ$ is the Hadamard product, $u_t$ and $v_t$ are the direction and velocity of the flow field at time $t$, $\gamma$ is the connection-generating structure, $x_t$ is the network input at time $t$, $H_t$, $R_t$, $Z_t$, and $H'_t$ are respectively the memory state, reset gate, update gate, and new information at time $t$, $\sigma$ is the gate (sigmoid) function, $W_{xz}$, $W_{xr}$, and $W_{xh}$ are the input weights of the update gate, reset gate, and new information, $W^{l}_{hz}$, $W^{l}_{hr}$, and $W^{l}_{hh}$ are the weights applied to the warped state $H_{t-1}$ in the update gate, reset gate, and new information for connection layer $l$, $H_{t-1}$ is the memory state at time $t-1$, $L$ is the total number of allowed connection layers, $\mathrm{warp}$ is the warping function, and $u_{t,l}$ and $v_{t,l}$ are the direction and velocity of connection layer $l$ at time $t$.
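The gating equations above can be sketched numerically. The minimal numpy version below replaces the learned convolutions with 1x1 scalar weights, the bilinear warp with integer shifts, and hard-codes L = 2 connection layers with fixed flow offsets; every number is illustrative rather than taken from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def warp(h, u, v):
    """Shift the state field by integer offsets (u, v). TrajGRU proper
    warps with bilinear sampling on a learned continuous flow field;
    integer np.roll is a deliberate simplification."""
    return np.roll(np.roll(h, u, axis=0), v, axis=1)

def trajgru_step(x, h_prev, w, flows):
    """One TrajGRU update with scalar (1x1) weights for readability.
    w: dict of scalar weights; flows: one (u, v) offset per connection
    layer l = 1..L, standing in for gamma(x_t, H_{t-1})."""
    agg = lambda key: sum(w[key][l] * warp(h_prev, u, v)
                          for l, (u, v) in enumerate(flows))
    z = sigmoid(w['xz'] * x + agg('hz'))           # update gate Z_t
    r = sigmoid(w['xr'] * x + agg('hr'))           # reset gate R_t
    h_new = np.tanh(w['xh'] * x + r * agg('hh'))   # new information H'_t
    return (1.0 - z) * h_new + z * h_prev          # memory state H_t

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))
h = np.zeros((8, 8))
w = {'xz': 0.5, 'xr': 0.5, 'xh': 0.5,
     'hz': [0.3, 0.2], 'hr': [0.3, 0.2], 'hh': [0.3, 0.2]}
flows = [(1, 0), (0, 1)]                           # L = 2 connection layers

for _ in range(2):                                 # two recurrent steps
    h = trajgru_step(x, h, w, flows)
print(h.shape)  # the state keeps the spatial grid shape
```

The point of the trajectory connections is visible in `agg`: each gate aggregates the previous state along several location-dependent flows rather than along a fixed convolutional neighborhood as in ConvGRU.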
Another object of the invention is to provide a deep-learning-based strong convection extrapolation system at multiple scales, applicable to forecasting extreme weather such as rainstorms, thunderstorms, and hail.
To this end, the technical scheme of the invention is as follows: a deep-learning-based strong convection extrapolation system at multiple scales comprises:
the characteristic extraction module is used for receiving radar image data and extracting hidden state characteristics of the radar image data;
the strong convection extrapolation module is connected with the feature extraction module and used for carrying out convolution on the hidden state features and carrying out strong convection extrapolation on a convolution result to obtain a radar map;
the extrapolation optimization module, connected to the strong convection extrapolation module and used to perform a second convolution on the radar chart together with batch normalization to obtain the extrapolated image data, where the loss function used with the batch normalization is:

$$L = \xi_1 \cdot \frac{1}{m}\sum_{i=1}^{m}\lvert y_i - \hat{y}_i\rvert + \xi_2 \cdot \Big(1 - [l_M(x,y)]^{\alpha_M}\prod_{j=1}^{M}[c_j(x,y)]^{\beta_j}\,[s_j(x,y)]^{\gamma_j}\Big) + \theta \cdot L_{\mathrm{MMD}}$$

where $\xi_1 + \xi_2 = 1$, $m$ is the number of true and predicted values, $M$ is the number of scales of the MS-SSIM loss, $\xi_2$ is the weight of the MS-SSIM loss, $l_M(x,y)$ is the luminance comparison of the MS-SSIM loss at coordinate $(x,y)$ at scale $M$ and $\alpha_M$ its luminance exponent, $y_i$ is the true value, $\hat{y}_i$ is the predicted value, $c_j(x,y)$ is the contrast comparison of the MS-SSIM loss at coordinate $(x,y)$ at scale $j$ and $\beta_j$ its contrast exponent, $s_j(x,y)$ is the structure comparison of the MS-SSIM loss at coordinate $(x,y)$ at scale $j$ and $\gamma_j$ its structure exponent, $\xi_1$ is the weight of the loss between true and predicted values, $L_{\mathrm{MMD}}$ is the offset MMD loss, and $\theta$ is its offset coefficient.
Further, the feature extraction module comprises a VGG16 network used to extract the Doppler radar base data of the radar image data and then extract the hidden-state features of the radar base data from it; the Doppler radar base data comprise radar base reflectivity, radar composite reflectivity, radar base radial velocity, and composite radial velocity.
Further, the VGG16 network builds its convolutional layers by stacking convolution kernels of small size.
Further, the strong convection extrapolation module includes a ConvGRU network connected to the feature extraction module and used to convolve the hidden-state features; the new-information structure of the ConvGRU is:

$$H'_{t,:,i,j} = \sum_{l=1}^{L} W_l \cdot H_{t-1,:,p(l,i,j),q(l,i,j)}$$

where $H'_{t,:,i,j}$ is the new information at location $(i,j)$ at time $t$, $L$ is the total number of allowed connection layers between network layers, $W_l$ is the weight, $H_{t-1,:,p(l,i,j),q(l,i,j)}$ is the memory state at time $t-1$ of the point $(p,q)$ obtained over the neighborhood set of $(i,j)$, and $\theta$ parameterizes the operation that generates the neighborhood connections.
Further, the strong convection extrapolation module further comprises a TrajGRU network that performs strong convection extrapolation on the convolution result; the structural formulas of the TrajGRU network are:

$$u_t, v_t = \gamma(x_t, H_{t-1})$$
$$Z_t = \sigma\Big(W_{xz} * x_t + \sum_{l=1}^{L} W^{l}_{hz} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$R_t = \sigma\Big(W_{xr} * x_t + \sum_{l=1}^{L} W^{l}_{hr} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$H'_t = f\Big(W_{xh} * x_t + R_t \circ \sum_{l=1}^{L} W^{l}_{hh} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$H_t = (1 - Z_t) \circ H'_t + Z_t \circ H_{t-1}$$

where $*$ is the convolution operator, $\circ$ is the Hadamard product, $u_t$ and $v_t$ are the direction and velocity of the flow field at time $t$, $\gamma$ is the connection-generating structure, $x_t$ is the network input at time $t$, $H_t$, $R_t$, $Z_t$, and $H'_t$ are respectively the memory state, reset gate, update gate, and new information at time $t$, $\sigma$ is the gate (sigmoid) function, $W_{xz}$, $W_{xr}$, and $W_{xh}$ are the input weights of the update gate, reset gate, and new information, $W^{l}_{hz}$, $W^{l}_{hr}$, and $W^{l}_{hh}$ are the weights applied to the warped state $H_{t-1}$ in the update gate, reset gate, and new information for connection layer $l$, $H_{t-1}$ is the memory state at time $t-1$, $L$ is the total number of allowed connection layers, $\mathrm{warp}$ is the warping function, and $u_{t,l}$ and $v_{t,l}$ are the direction and velocity of connection layer $l$ at time $t$.
Compared with the prior art, the invention has the following advantages:
1. The invention combines the TrajGRU network with the VGG network for the first time, further improving the performance and generalization of the extrapolation model, and applies it for the first time to strong convection weather forecasting (extreme weather such as rainstorms, thunderstorms, and hail). Unlike short-term nowcasting, strong convection forecasting involves a wider range of data types and a larger problem scale, so the invention combines TrajGRU with the VGG16 network structure to build a new model fitting the problem and, based on Doppler radar base data of different types, obtains serialized extrapolated images through model training.
2. A multi-scale image-similarity structure is introduced into the optimization of extrapolation-model training, optimizing the objective function at multiple scales in an extrapolation task for the first time, unlike traditional methods such as MSE. On this basis, structure and contrast multi-scales that accord with human visual characteristics are constructed, and these two scale variations are used creatively in the extrapolation network so that scale variation and image features are considered simultaneously during training and extrapolation. Compared with traditional methods such as MSE, this optimization more faithfully depicts the physical characteristics of the actual image, so the extrapolation result better matches the imaging structure of the physical world.
3. The objective function is constructed as a mixture that weighs the advantages of its different components: the multi-scale component serves as one factor while the MAE and L1 terms are mixed in and normalized, so the model fits better.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive exercise.
FIG. 1 is a block diagram of a strong-convection extrapolation system based on deep learning at multiple scales according to the present invention;
FIG. 2 is a flow chart of a method for strong convection extrapolation at multiple scales based on deep learning according to the present invention;
FIG. 3 is another block diagram of the flow of the method for strong convection extrapolation at multiple scales based on deep learning according to the present invention;
FIG. 4 is a diagram of a VGG network model architecture of the present invention;
FIG. 5 is a diagram of a convolutional RNN cyclic concatenation scheme in accordance with the present invention;
FIG. 6 is a schematic diagram of the TrajGRU loop connection of the present invention;
FIG. 7 is a diagram of the model structure of the present invention for improving the combination of VGG and TrajGRU;
FIG. 8 is a block diagram of the multi-scale structural similarity (MS-SSIM) of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The examples are given for better illustration of the invention, but the invention is not limited to them. Insubstantial modifications and adaptations made by those skilled in the art in light of the above teachings therefore remain within the scope of the invention.
Example 1
Referring to fig. 1, a diagram of a strong convection extrapolation system based on deep learning in multiple scales according to the present invention is shown, specifically, the system includes:
the characteristic extraction module 1 is used for receiving radar image data and extracting the hidden state characteristics of the radar image data;
in this embodiment, the feature extraction module 1 includes a VGG16 network, where the VGG16 network is configured to extract doppler radar base data of radar image data, and then extract implicit state features of the radar base data according to the doppler radar base data; wherein the content of the first and second substances,
the Doppler radar base data comprises radar basic reflectivity, radar combined reflectivity, radar basic radial velocity and combined radial velocity.
Further, the VGG16 network builds its convolutional layers by stacking convolution kernels of small size.
The strong convection extrapolation module 2 is connected with the feature extraction module and is used for convolving the hidden state features and carrying out strong convection extrapolation on the convolution result to obtain a radar map;
further, the strong convection extrapolation module includes a ConvGRU network 21, the ConvGRU network is connected to the feature extraction module, and is configured to convolve the hidden state feature, and a new information structural formula of the ConvGRU is:
Figure BDA0003000426320000131
wherein, H' t,:,i,j Is new information at location (i, j) at time t, L is the total number of connection layers allowed for the connection,
Figure BDA0003000426320000132
is the weight value of the weight value,
Figure BDA0003000426320000133
to calculate the memory state of the acquired point (p, q) at time t-1 on the neighborhood set (i, j), θ is an operation parameter.
Further, the strong convection extrapolation module further includes a TrajGRU network 22, which performs strong convection extrapolation on the convolution result; the structural formulas of the TrajGRU network 22 are:

$$u_t, v_t = \gamma(x_t, H_{t-1})$$
$$Z_t = \sigma\Big(W_{xz} * x_t + \sum_{l=1}^{L} W^{l}_{hz} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$R_t = \sigma\Big(W_{xr} * x_t + \sum_{l=1}^{L} W^{l}_{hr} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$H'_t = f\Big(W_{xh} * x_t + R_t \circ \sum_{l=1}^{L} W^{l}_{hh} * \mathrm{warp}(H_{t-1}, u_{t,l}, v_{t,l})\Big)$$
$$H_t = (1 - Z_t) \circ H'_t + Z_t \circ H_{t-1}$$

where $*$ is the convolution operator, $\circ$ is the Hadamard product, $u_t$ and $v_t$ are the direction and velocity of the flow field at time $t$, $\gamma$ is the connection-generating structure, $x_t$ is the network input at time $t$, $H_t$, $R_t$, $Z_t$, and $H'_t$ are respectively the memory state, reset gate, update gate, and new information at time $t$, $\sigma$ is the gate (sigmoid) function, $W_{xz}$, $W_{xr}$, and $W_{xh}$ are the input weights of the update gate, reset gate, and new information, $W^{l}_{hz}$, $W^{l}_{hr}$, and $W^{l}_{hh}$ are the weights applied to the warped state $H_{t-1}$ in the update gate, reset gate, and new information for connection layer $l$, $H_{t-1}$ is the memory state at time $t-1$, $L$ is the total number of allowed connection layers, $\mathrm{warp}$ is the warping function, and $u_{t,l}$ and $v_{t,l}$ are the direction and velocity of connection layer $l$ at time $t$.
The extrapolation optimization module 3 is connected with the strong convection extrapolation module and is used for performing a second convolution on the radar map together with batch regularization to obtain extrapolated image data; wherein the loss function of the batch regularization is:

Loss = ξ_1 · (1/m) Σ_{i=1}^{m} |y_i − ŷ_i| + ξ_2 · (1 − [l_M(x,y)]^{α_M} · Π_{j=1}^{M} [c_j(x,y)]^{β_j} · [s_j(x,y)]^{γ_j}) + θ‖w‖_1

wherein ξ_1 + ξ_2 = 1; m is the number of true-value/predicted-value pairs; M is the number of scales of the MS-SSIM loss; ξ_2 is the weight occupied by the MS-SSIM loss; [l_M(x,y)] is the luminance comparison of the MS-SSIM loss at scale M and α_M is the luminance exponent; y_i is a true value and ŷ_i is a predicted value; [c_j(x,y)] is the contrast comparison of the MS-SSIM loss at scale j and β_j is the contrast exponent; s_j(x,y) is the structure comparison of the MS-SSIM loss at scale j and γ_j is the structure exponent; ξ_1 is the weight of the true-value/predicted-value loss; and θ‖w‖_1 is the L1 regularization term that offsets the MMD loss, with θ the offset coefficient.
Example 2
Referring to fig. 2 and fig. 3, there is shown a flowchart of the strong convection extrapolation method based on deep learning under multiple scales according to this embodiment; in fig. 3, BN denotes batch normalization applied to each batch of data, and tanh denotes the hyperbolic tangent function. The method includes the following steps:

S1: receiving radar image data, and extracting implicit state features of the radar image data;

In this step, Doppler radar base data of the radar image data, such as the radar base reflectivity at an elevation angle of 0.5°, the radar composite reflectivity, the radar base radial velocity at an elevation angle of 0.5°, and the composite radial velocity, are extracted through a VGG16 network.
Referring to fig. 4, there is shown a structure diagram of the VGG16 network model in this embodiment, where the first block is the original image. The VGG16 network in this embodiment differs from a conventional one: it performs implicit-state feature extraction of the radar base data on the input Doppler radar base data without passing through a softmax layer.

Further, the VGG16 network originally studied the influence of convolutional-network depth on the accuracy of large-scale image recognition. The VGG16 network in this embodiment modifies the convolution kernel sizes on the basis of AlexNet, replacing large kernels with stacks of smaller kernels to obtain a better effect. Compared with the conventional AlexNet, the convolution kernels used by VGGNet are smaller, and in this embodiment stacking small-size convolutional layers achieves a better effect than a single large-size convolution kernel.
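The saving from stacking small kernels can be checked with a little arithmetic. The sketch below is an illustration (the channel count C = 64 is assumed for the example, not taken from the patent): two stacked 3 × 3 convolutions cover the same 5 × 5 receptive field as one 5 × 5 convolution, with fewer weights.

```python
def conv_params(k, c_in, c_out, bias=True):
    """Number of parameters in a single k x k convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def stacked_receptive_field(kernels):
    """Receptive field of a stack of stride-1 convolutions."""
    rf = 1
    for k in kernels:
        rf += k - 1
    return rf

C = 64  # channel count, chosen only for illustration
two_3x3 = 2 * conv_params(3, C, C, bias=False)  # 2 * 9 * C^2 = 73728
one_5x5 = conv_params(5, C, C, bias=False)      # 25 * C^2 = 102400

print(stacked_receptive_field([3, 3]))  # 5, same as a single 5x5 kernel
print(two_3x3 < one_5x5)                # True: the stack is cheaper
```

The stack also interposes an extra non-linearity between the two 3 × 3 layers, which is part of why VGG-style networks outperform single large kernels.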
S2: carrying out convolution on the hidden state characteristics, inputting a convolution result into a TrajGRU network, and carrying out strong convection extrapolation to obtain a radar chart;
in this step, the hidden state features are further feature extracted through the convolution ConvGRU, which has the advantages of improving the calculation efficiency and the feature extraction performance, so that the network can more effectively process the input and hidden state information.
Referring to fig. 5, when used to capture spatio-temporal correlations, the connection structure and weights of ConvGRU and other ConvRNNs are fixed for all positions. In this step, the hidden-state features are convolved by the ConvGRU, whose formulas are:

Z_t = σ(W_xz * X_t + W_hz * H_{t-1})
R_t = σ(W_xr * X_t + W_hr * H_{t-1})
H′_t = f(W_xh * X_t + R_t ∘ (W_hh * H_{t-1}))
H_t = (1 − Z_t) ∘ H′_t + Z_t ∘ H_{t-1}

wherein * is the convolution operator and ∘ is the Hadamard (element-wise) product; H_t, R_t, Z_t and H′_t are respectively the memory state, reset gate, update gate and new information at time t, all in R^{C_h×H×W}; X_t ∈ R^{C_i×H×W} is the input; f is the activation function, e.g. Leaky ReLU; H and W are the height and width of the state tensor and input tensor, respectively; C_h and C_i are the numbers of channels of the state tensor and the input tensor; σ is the gate (sigmoid) function; W_xz is the weight of the update gate on the input, where X_t is the input of the network at time t; W_xr is the weight of the reset gate on the input; W_xh is the weight of the new information on the input; W_hz is the weight of the update gate on the state; W_hr is the weight of the reset gate on the state; W_hh is the weight of the state H_{t-1} in the new information; and H_{t-1} is the memory state at time t−1. Each time a new input arrives, the reset gate of the ConvGRU determines whether the previous state is cleared, while the update gate determines how much new information is written into the state information.
When spatio-temporal correlation is captured, ConvGRU and other ConvRNNs have the drawback that the connection structure and weights are fixed. A convolution-based operation is essentially a computation that applies a position-invariant filter to the input: if the update and reset gates take the values 0 and 1, respectively, ConvGRU overwrites the new information at a specific position (i, j) of timestamp t. Specifically, when the hyper-parameters of the convolution are fixed, the neighborhood set N_{i,j} is the same for all positions; however, most motion patterns require different neighborhood sets at different positions. For example, rotation and scaling generate flow fields pointing in different directions at different angles. In this embodiment, a position-variant connection structure is therefore used on the ConvGRU. With a fixed connection structure, the new information H′_{t,:,i,j} has the structural formula:

H′_{t,:,i,j} = W_h · concat(⟨H_{t-1,:,p,q} | (p, q) ∈ N_{i,j}⟩) = Σ_{l=1}^{L} W_h^l · H_{t-1,:,p_{l,i,j}, q_{l,i,j}}

wherein H′_{t,:,i,j} is the new information at location (i, j) at time t; N_{i,j} is the ordered neighborhood set at location (i, j), defined by the hyper-parameters of the convolution operation; (p_{l,i,j}, q_{l,i,j}) is the l-th element of the neighborhood of position (i, j); concat(·) is the concatenation function; and W_h^l is a weight.
Unlike convolution with fixed hyper-parameters, in the position-variant operation the neighborhood set N_{i,j} differs for different locations (i, j). The following structure therefore describes a dynamically varying connection:

H′_{t,:,i,j} = Σ_{l=1}^{L} W_h^l · H_{t-1,:,p_{l,i,j}(θ), q_{l,i,j}(θ)}

wherein H′_{t,:,i,j} is the new information at location (i, j) at time t, L is the total number of allowed connection links, W_h^l is the weight of the l-th link, H_{t-1,:,p_{l,i,j}(θ), q_{l,i,j}(θ)} is the memory state at time t−1 of the point (p, q) drawn from the neighborhood set of (i, j), and θ is an operation parameter.
In this example, the warp(H_{t-1}, U_{t,l}, V_{t,l}) function selects, via a bilinear sampling kernel, the positions indicated by U_{t,l}, V_{t,l} from H_{t-1}; specifically, warp applies the flow U_{t,l}, V_{t,l} to the state H_{t-1} at time t−1, i.e. to a plurality of points in H_{t-1}. If M = warp(I, U, V), where M, I ∈ R^{C×H×W} and U, V ∈ R^{H×W}, then:

M_{c,i,j} = Σ_{m=1}^{H} Σ_{n=1}^{W} I_{c,m,n} · max(0, 1 − |i + V_{i,j} − m|) · max(0, 1 − |j + U_{i,j} − n|)

wherein M_{c,i,j} is the sampled value at channel c and position (i, j); H and W are the height and width of the state tensor; I_{c,m,n} is the value of I at channel c and position (m, n), with m ranging from 1 to H and n from 1 to W; and V_{i,j}, U_{i,j} are the flow components at position (i, j).
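The bilinear sampling formula above can be transcribed directly into numpy. This sketch is a literal (and deliberately unoptimized) evaluation of the double sum; the 4 × 4 test size is an arbitrary choice for the example. A useful sanity check follows from the formula itself: with zero flow, the kernel max(0, 1 − |i − m|) is 1 only at m = i, so warp is the identity.

```python
import numpy as np

def warp(I, U, V):
    """M[c, i, j] = sum_{m,n} I[c, m, n]
       * max(0, 1 - |i + V[i, j] - m|) * max(0, 1 - |j + U[i, j] - n|)."""
    C, H, W = I.shape
    M = np.zeros_like(I, dtype=float)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                km = np.maximum(0.0, 1.0 - np.abs(i + V[i, j] - np.arange(H)))
                kn = np.maximum(0.0, 1.0 - np.abs(j + U[i, j] - np.arange(W)))
                # separable bilinear kernel: row weights, image, column weights
                M[c, i, j] = km @ I[c] @ kn
    return M

rng = np.random.default_rng(1)
I = rng.normal(size=(1, 4, 4))
zero = np.zeros((4, 4))
print(np.allclose(warp(I, zero, zero), I))  # True: zero flow is the identity
```

Because the kernel is piecewise linear in U and V, the sampled output is differentiable with respect to the flow, which is what lets the structure-generating network γ be trained end to end.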
After the hidden-state features are convolved by the ConvGRU, the convolution result is also input into the TrajGRU network, shown in the TrajGRU recurrent connection structure diagram of fig. 6. The TrajGRU network can actively learn the connection structure: for each position of each timestamp, it uses the current input and the previous state to generate the neighborhood set of that position. Because the network has the ability to learn its recurrent connection structure, it captures spatio-temporal correlation more effectively than ConvGRU. Since the position indices are discrete and non-differentiable, the invention uses a set of continuous optical flows to represent these "indices". Specifically, the structure of the TrajGRU network is:
U_t, V_t = γ(X_t, H_{t-1})
Z_t = σ(W_xz * X_t + Σ_{l=1}^{L} W_hz^l * warp(H_{t-1}, U_{t,l}, V_{t,l}))
R_t = σ(W_xr * X_t + Σ_{l=1}^{L} W_hr^l * warp(H_{t-1}, U_{t,l}, V_{t,l}))
H′_t = f(W_xh * X_t + R_t ∘ (Σ_{l=1}^{L} W_hh^l * warp(H_{t-1}, U_{t,l}, V_{t,l})))
H_t = (1 − Z_t) ∘ H′_t + Z_t ∘ H_{t-1}

wherein * is the convolution operator and ∘ is the Hadamard (element-wise) product; U_t is the flow direction at time t and V_t is the flow velocity at time t; γ is the structure-generating network; X_t is the input of the network at time t; H_t, R_t, Z_t and H′_t are respectively the memory state, reset gate, update gate and new information at time t; σ is the gate (sigmoid) function; W_xz is the weight of the update gate on the input; W_xr is the weight of the reset gate on the input; W_xh is the weight of the new information on the input; W_hz^l, W_hr^l and W_hh^l are respectively the weights of the update gate, reset gate and new information on the warped state H_{t-1} for the l-th link; H_{t-1} is the memory state at time t−1; L is the total number of allowed connection links; warp is the bilinear sampling function; and U_{t,l}, V_{t,l} are the flow direction and flow velocity of the l-th link at time t.
Preferably, the warp(H_{t-1}, U_{t,l}, V_{t,l}) function selects positions from H_{t-1} via the bilinear sampling kernel according to U_{t,l}, V_{t,l}. If M = warp(I, U, V), the definition is the same as for warp in the ConvGRU network, where M, I ∈ R^{C×H×W} and U, V ∈ R^{H×W}; then:

M_{c,i,j} = Σ_{m=1}^{H} Σ_{n=1}^{W} I_{c,m,n} · max(0, 1 − |i + V_{i,j} − m|) · max(0, 1 − |j + U_{i,j} − n|)
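Putting the warp and the gate equations together gives a compact TrajGRU step. The sketch below is a heavily simplified illustration, not the patented network: the convolutions W_xz, W_hz^l, etc. are replaced by scalar (1 × 1) weights, the flows are supplied directly instead of being produced by the structure-generating network γ, tanh stands in for the activation f (as fig. 3 suggests), and L = 2 links with identity flows are assumed.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def warp(I, U, V):
    """Bilinear sampling of a single-channel map I by flow (U, V)."""
    H, W = I.shape
    M = np.zeros_like(I, dtype=float)
    for i in range(H):
        for j in range(W):
            km = np.maximum(0.0, 1.0 - np.abs(i + V[i, j] - np.arange(H)))
            kn = np.maximum(0.0, 1.0 - np.abs(j + U[i, j] - np.arange(W)))
            M[i, j] = km @ I @ kn
    return M

def trajgru_step(x_t, h_prev, flows, w):
    """One TrajGRU update with L flow links and scalar (1x1) weights.
    flows: list of (U, V) pairs, here supplied directly for brevity
    instead of being generated by gamma(x_t, h_prev)."""
    # aggregate the warped previous state over the L links
    agg = lambda ws: sum(wl * warp(h_prev, U, V) for wl, (U, V) in zip(ws, flows))
    z = sigmoid(w["xz"] * x_t + agg(w["hz"]))
    r = sigmoid(w["xr"] * x_t + agg(w["hr"]))
    h_new = np.tanh(w["xh"] * x_t + r * agg(w["hh"]))
    return (1 - z) * h_new + z * h_prev

rng = np.random.default_rng(2)
H = W = 6
x = rng.normal(size=(H, W))
h0 = rng.normal(size=(H, W))
flows = [(np.zeros((H, W)), np.zeros((H, W))) for _ in range(2)]  # identity flows
w = {"xz": 0.5, "xr": 0.5, "xh": 0.5,
     "hz": [0.3, 0.2], "hr": [0.3, 0.2], "hh": [0.3, 0.2]}
h1 = trajgru_step(x, h0, flows, w)
print(h1.shape)  # (6, 6)
```

The design point this illustrates is that TrajGRU differs from ConvGRU only in how the previous state enters the gates: through L learned, per-position warps instead of one fixed convolution.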
S3: performing a second convolution on the radar map together with batch regularization to obtain extrapolated image data;
In this embodiment, referring to fig. 7, the radar map output by the TrajGRU network in step S2 is first convolved by a layer of 40 filters with a 3 × 3 kernel, then by two layers of 60 filters with 3 × 3 kernels, and then by a further layer of 40 filters with a 3 × 3 kernel; the last layer is a convolution with a single filter and a 3 × 3 × 3 kernel. Batch regularization is applied to every layer except the last.
The batch regularization processing in this embodiment adopts the new loss function (MMD Loss) provided by the present invention, wherein the loss function of the batch regularization is:

Loss = ξ_1 · (1/m) Σ_{i=1}^{m} |y_i − ŷ_i| + ξ_2 · (1 − [l_M(x,y)]^{α_M} · Π_{j=1}^{M} [c_j(x,y)]^{β_j} · [s_j(x,y)]^{γ_j}) + θ‖w‖_1

wherein ξ_1 + ξ_2 = 1; m is the number of true-value/predicted-value pairs; M is the number of scales of the MS-SSIM loss; ξ_2 is the weight occupied by the MS-SSIM loss; [l_M(x,y)] is the luminance comparison of the MS-SSIM loss at scale M and α_M is the luminance exponent; y_i is a true value and ŷ_i is a predicted value; [c_j(x,y)] is the contrast comparison of the MS-SSIM loss at scale j and β_j is the contrast exponent; s_j(x,y) is the structure comparison of the MS-SSIM loss at scale j and γ_j is the structure exponent; ξ_1 is the weight of the true-value/predicted-value loss; and θ‖w‖_1 is the L1 regularization term that offsets the MMD loss, with θ the offset coefficient.
The loss function MMD in this embodiment is composed of the true-value/predicted-value loss term ξ_1 · (1/m) Σ_{i=1}^{m} |y_i − ŷ_i| and the MS-SSIM term ξ_2 · (1 − [l_M(x,y)]^{α_M} · Π_{j=1}^{M} [c_j(x,y)]^{β_j} · [s_j(x,y)]^{γ_j}). The MMD Loss comprehensively accounts for the vanishing of the partial derivative that occurs under some conditions and improves the blurring of the extrapolated image; it considers the contrast and structure information of the image simultaneously at multiple scales, considers the luminance change at the last scale, and adds an L1 regularization method to reduce the computational magnitude and optimize model fitting.
In one embodiment, the loss function MMD works best when ξ_1 is 0.84. For each scale, α_j = β_j = γ_j, and the exponents are normalized across scales, that is, Σ_{j=1}^{M} γ_j = 1.
Further, MSE is often used as a loss function in regression problems, but in image-extrapolation experiments, using MSE alone gives poor, very blurry results, especially at the edge positions of the extrapolated image. From the principle of MSE, one significant disadvantage is that the derivative is very small when the output probability value is close to 0 or close to 1, which can make the gradient almost vanish when the model has just started training. From the perspective of the image, when the distance between two pixel values is calculated, the error is measured only as a square, which is a major reason for the blurred image effect; the experimental effect of L1 is much better, mainly because no square term is computed. Besides preventing over-fitting of the model, the L1 regularization in this embodiment has the excellent property of producing sparsity, which drives many entries of W to zero; a sparse solution not only reduces the amount of computation but, more importantly, has interpretability.
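The gradient argument above can be made concrete with a one-pixel example (the specific values of y and p below are arbitrary, chosen only to illustrate the behavior): as the prediction p approaches the target y, the MSE gradient 2(p − y) shrinks to zero, while the L1 gradient sign(p − y) keeps magnitude 1 and continues to push the prediction.

```python
# Gradients of the two pixel losses with respect to the prediction p:
#   d/dp (p - y)^2 = 2 (p - y)    -> vanishes as p approaches y
#   d/dp |p - y|   = sign(p - y)  -> constant magnitude 1
y = 1.0
for p in (0.9, 0.99, 0.999):
    grad_mse = 2 * (p - y)
    grad_l1 = 1.0 if p > y else -1.0
    print(abs(grad_mse), abs(grad_l1))  # MSE gradient shrinks, L1 stays at 1
```

This is consistent with the observation above that L1-based terms sharpen edges that MSE leaves blurred.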
HVS denotes the human visual system. Human eyes are generally sensitive to luminance, contrast, and structural features, so using the structural similarity (SSIM) method as the objective function of the model describes the physical characteristics of the objective image more truthfully, making the extrapolation effect more consistent with the imaging structure of the physical world.
The image is expressed in a multi-scale manner so that the objective can be optimized under different luminance, contrast, and structural feature scales. For example, on a radar base reflectivity image at a certain elevation angle, images at different scales can focus on the shape, contour, local details, and other features of the reflectivity factor. Therefore, more useful image feature information can be extracted using HVS-based multi-scale techniques. Processing images at multiple scales first requires expressing the radar image at multiple scales and finding the relations between the scales; the pyramid structure is one such multi-scale representation of an image. The multi-scale transformation techniques used to obtain a multi-scale representation fall into three broad categories: scale-space techniques, time-scale techniques, and time-frequency techniques. In the extrapolation experiment, following the structural similarity of images, that is, with reference to the human visual system, which extracts structural information from a scene in a highly adaptive way, the distortion of radar images is assessed by comparing changes in the structural information of the images: the distortion of the radar image is compared at the three levels of luminance, contrast, and structure, with structure being the main influencing factor, thereby obtaining an objective quality evaluation. The multi-scale information is mainly used to extract image features based on image structure information, including luminance, contrast, and structure information at different scales, so that the distortion of the image can be evaluated more objectively. The quality of the radar-extrapolated image is evaluated from the luminance l, the contrast c, and the structure information s as follows.
Average luminance comparison function:

l(y, ŷ) = (2 μ_y μ_ŷ + C_1) / (μ_y² + μ_ŷ² + C_1)

wherein μ_y = (1/N) Σ_{i=1}^{N} y_i, i.e. the average gray level is taken as the estimate of the illumination, and μ_ŷ is defined analogously for the prediction ŷ.
Average contrast comparison function:

c(y, ŷ) = (2 σ_y σ_ŷ + C_2) / (σ_y² + σ_ŷ² + C_2)

wherein σ_y = (1/(N − 1) Σ_{i=1}^{N} (y_i − μ_y)²)^{1/2}; the standard deviation is used to measure the average contrast, the average gray value having been removed from the signal by the measurement system.
Average structure comparison function:

s(y, ŷ) = (σ_yŷ + C_3) / (σ_y σ_ŷ + C_3)

C_1 = (K_1 L)², C_2 = (K_2 L)², and C_3 = C_2 / 2

wherein σ_yŷ is the covariance of y and ŷ; C_1, C_2, C_3 are constants introduced to avoid systematic errors when a denominator is 0; K_1 and K_2 are different small parameters and L is the dynamic range of the pixel values; σ_y σ_ŷ is the product of the average contrast of the image to be predicted and that of the prediction; y is the original image to be predicted and ŷ is the prediction; μ_y is the average illumination of the image to be predicted and μ_ŷ is the average illumination of the prediction; σ_y is the average contrast of the image to be predicted and σ_ŷ is the average contrast of the prediction.
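The three comparison functions above can be transcribed directly into numpy. This is an illustrative sketch: the stabilizing constants use K_1 = 0.01, K_2 = 0.03 and dynamic range L = 255, which are common SSIM defaults and an assumption here, not values stated in the patent. A built-in sanity check falls out of the formulas: comparing an image with itself drives each term to its maximum value of 1.

```python
import numpy as np

def luminance(y, yhat, C1=(0.01 * 255) ** 2):
    """l(y, yhat) = (2 mu_y mu_yhat + C1) / (mu_y^2 + mu_yhat^2 + C1)."""
    mu_y, mu_p = y.mean(), yhat.mean()
    return (2 * mu_y * mu_p + C1) / (mu_y ** 2 + mu_p ** 2 + C1)

def contrast(y, yhat, C2=(0.03 * 255) ** 2):
    """c(y, yhat) = (2 s_y s_yhat + C2) / (s_y^2 + s_yhat^2 + C2)."""
    s_y, s_p = y.std(ddof=1), yhat.std(ddof=1)
    return (2 * s_y * s_p + C2) / (s_y ** 2 + s_p ** 2 + C2)

def structure(y, yhat, C3=(0.03 * 255) ** 2 / 2):
    """s(y, yhat) = (cov + C3) / (s_y s_yhat + C3), cov with N-1 normalization."""
    y0, p0 = y - y.mean(), yhat - yhat.mean()
    cov = (y0 * p0).sum() / (y.size - 1)
    return (cov + C3) / (y.std(ddof=1) * yhat.std(ddof=1) + C3)

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, size=(8, 8))
# Comparing an image with itself gives the maximum score 1 for all three terms:
print(luminance(img, img), contrast(img, img), structure(img, img))
```

Raising these terms to the exponents α_M, β_j, γ_j and multiplying across scales yields the MS-SSIM factor used in the MMD loss above.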
Fig. 8 is a block diagram of the multi-scale structural similarity (MS-SSIM) method in this embodiment. Fig. 8 is divided into three parts, illumination, contrast, and structure, and its function is to calculate the MS-SSIM similarity loss. With the "reference image" and "distorted image" signals as inputs, the system iteratively applies a low-pass filter and down-samples the filtered image by a factor of 2. The result is a pair of corresponding copies of the "reference image" and "distorted image" signals at each successive scale.
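The iterative filter-and-downsample step can be sketched with a dyadic pyramid in numpy. The 2 × 2 mean filter below stands in for the low-pass filter of the MS-SSIM pipeline (an assumption for this illustration; the actual filter is not specified in this excerpt), and the 32 × 32 input and 5 scales are arbitrary example choices.

```python
import numpy as np

def downsample2(img):
    """2x2 mean filter (a simple low-pass stand-in) followed by decimation by 2."""
    H, W = img.shape
    img = img[:H - H % 2, :W - W % 2]  # crop to even dimensions
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def pyramid(img, scales):
    """Return the image at `scales` successive dyadic scales, finest first."""
    out = [img]
    for _ in range(scales - 1):
        out.append(downsample2(out[-1]))
    return out

levels = pyramid(np.ones((32, 32)), scales=5)
print([p.shape for p in levels])  # [(32, 32), (16, 16), (8, 8), (4, 4), (2, 2)]
```

The contrast and structure terms are then evaluated at every level, while the luminance term is evaluated only at the coarsest level, matching the MMD loss definition above.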
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A strong convection extrapolation method based on deep learning under multiple scales is characterized by comprising the following steps:
receiving radar image data, and extracting hidden state features of the radar image data, specifically comprising:
extracting Doppler radar base data of the radar image data through a VGG16 network, wherein the Doppler radar base data comprise radar basic reflectivity, radar combined reflectivity, radar basic radial velocity and combined radial velocity;
extracting hidden state features of the radar base data according to the Doppler radar base data;
the VGG16 network employs convolutional layers by stacking convolutional kernels of small size;
convolving the implicit state features by ConvGRU, wherein the new information structural formula of the ConvGRU is:

H′_{t,:,i,j} = Σ_{l=1}^{L} W_h^l · H_{t-1,:,p_{l,i,j}(θ), q_{l,i,j}(θ)}

wherein H′_{t,:,i,j} is the new information at location (i, j) at time t, L is the total number of allowed connection links, W_h^l is the weight of the l-th link, H_{t-1,:,p_{l,i,j}(θ), q_{l,i,j}(θ)} is the memory state at time t−1 of the point (p, q) drawn from the neighborhood set of (i, j), and θ is an operation parameter;
convolving the implicit state characteristics, inputting a convolution result into a TrajGRU network, and carrying out strong convection extrapolation to obtain a radar chart;
the structural formulas of the TrajGRU network are as follows:

U_t, V_t = γ(X_t, H_{t-1})
Z_t = σ(W_xz * X_t + Σ_{l=1}^{L} W_hz^l * warp(H_{t-1}, U_{t,l}, V_{t,l}))
R_t = σ(W_xr * X_t + Σ_{l=1}^{L} W_hr^l * warp(H_{t-1}, U_{t,l}, V_{t,l}))
H′_t = f(W_xh * X_t + R_t ∘ (Σ_{l=1}^{L} W_hh^l * warp(H_{t-1}, U_{t,l}, V_{t,l})))
H_t = (1 − Z_t) ∘ H′_t + Z_t ∘ H_{t-1}

wherein * is the convolution operator and ∘ is the Hadamard product; U_t is the flow direction at time t and V_t is the flow velocity at time t; γ is the structure-generating network; X_t is the input of the network at time t; H_t, R_t, Z_t and H′_t are respectively the memory state, reset gate, update gate and new information at time t; σ is the gate function; W_xz is the weight of the update gate on the input; W_xr is the weight of the reset gate on the input; W_xh is the weight of the new information on the input; W_hz^l, W_hr^l and W_hh^l are respectively the weights of the update gate, reset gate and new information on the warped state H_{t-1} for the l-th link; H_{t-1} is the memory state at time t−1; L is the total number of allowed connection links; warp is the bilinear sampling function; and U_{t,l}, V_{t,l} are the flow direction and flow velocity of the l-th link at time t;
and performing second convolution on the radar graph and batch regularization to obtain extrapolated image data.
2. The method according to claim 1, wherein the step of receiving radar image data and extracting hidden-state features of the radar image data specifically comprises:
extracting Doppler radar base data of the radar image data through a VGG16 network, wherein the Doppler radar base data comprise radar basic reflectivity, radar combined reflectivity, radar basic radial velocity and combined radial velocity;
and extracting the hidden state characteristics of the radar base data according to the Doppler radar base data.
3. The method of claim 2, wherein the VGG16 network employs convolutional layers by stacking convolutional kernels of small size.
4. The method of claim 1, wherein the hidden-state feature is convolved with ConvGRU.
5. A strong convection extrapolation system at multiple scales based on deep learning, comprising:
the feature extraction module is configured to receive radar image data and extract hidden state features of the radar image data, and specifically includes:
extracting Doppler radar base data of the radar image data through a VGG16 network, wherein the Doppler radar base data comprise radar basic reflectivity, radar combined reflectivity, radar basic radial velocity and combined radial velocity;
extracting hidden state features of the radar base data according to the Doppler radar base data;
the VGG16 network employs convolutional layers by stacking convolutional kernels of small size;
convolving the implicit state features by ConvGRU, wherein the new information structural formula of the ConvGRU is:

H′_{t,:,i,j} = Σ_{l=1}^{L} W_h^l · H_{t-1,:,p_{l,i,j}(θ), q_{l,i,j}(θ)}

wherein H′_{t,:,i,j} is the new information at location (i, j) at time t, L is the total number of allowed connection links, W_h^l is the weight of the l-th link, H_{t-1,:,p_{l,i,j}(θ), q_{l,i,j}(θ)} is the memory state at time t−1 of the point (p, q) drawn from the neighborhood set of (i, j), and θ is an operation parameter;
the strong convection extrapolation module is connected with the feature extraction module and used for convolving the implicit state features and inputting the convolution result into a TrajGRU network to perform strong convection extrapolation to obtain a radar map; the structural formula of the TrajGRU network is as follows:
U_t, V_t = γ(X_t, H_{t-1})
Z_t = σ(W_xz * X_t + Σ_{l=1}^{L} W_hz^l * warp(H_{t-1}, U_{t,l}, V_{t,l}))
R_t = σ(W_xr * X_t + Σ_{l=1}^{L} W_hr^l * warp(H_{t-1}, U_{t,l}, V_{t,l}))
H′_t = f(W_xh * X_t + R_t ∘ (Σ_{l=1}^{L} W_hh^l * warp(H_{t-1}, U_{t,l}, V_{t,l})))
H_t = (1 − Z_t) ∘ H′_t + Z_t ∘ H_{t-1}

wherein * is the convolution operator and ∘ is the Hadamard product; U_t is the flow direction at time t and V_t is the flow velocity at time t; γ is the structure-generating network; X_t is the input of the network at time t; H_t, R_t, Z_t and H′_t are respectively the memory state, reset gate, update gate and new information at time t; σ is the gate function; W_xz is the weight of the update gate on the input; W_xr is the weight of the reset gate on the input; W_xh is the weight of the new information on the input; W_hz^l, W_hr^l and W_hh^l are respectively the weights of the update gate, reset gate and new information on the warped state H_{t-1} for the l-th link; H_{t-1} is the memory state at time t−1; L is the total number of allowed connection links; warp is the bilinear sampling function; and U_{t,l}, V_{t,l} are the flow direction and flow velocity of the l-th link at time t;
and the extrapolation optimization module is connected with the strong convection extrapolation module and used for performing second convolution on the radar graph and batch regularization simultaneously to obtain extrapolated image data.
6. The system of claim 5, wherein the feature extraction module comprises a VGG16 network, the VGG16 network is configured to extract Doppler radar base data of the radar image data and then extract implicit state features of the radar base data according to the Doppler radar base data; wherein the content of the first and second substances,
the Doppler radar base data comprise radar basic reflectivity, radar combined reflectivity, radar basic radial velocity and combined radial velocity.
7. The system of claim 6, wherein the VGG16 network employs convolutional layers by stacking convolutional kernels of small size.
8. The system of claim 5, wherein the strong convection extrapolation module comprises a ConvGRU network coupled to the feature extraction module for convolving the implicit state features.
CN202110345106.5A 2021-03-31 2021-03-31 Deep learning based strong convection extrapolation method and system under multi-scale Active CN113239722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345106.5A CN113239722B (en) 2021-03-31 2021-03-31 Deep learning based strong convection extrapolation method and system under multi-scale


Publications (2)

Publication Number Publication Date
CN113239722A CN113239722A (en) 2021-08-10
CN113239722B true CN113239722B (en) 2022-08-30

Family

ID=77130975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110345106.5A Active CN113239722B (en) 2021-03-31 2021-03-31 Deep learning based strong convection extrapolation method and system under multi-scale

Country Status (1)

Country Link
CN (1) CN113239722B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627546A (en) * 2021-08-16 2021-11-09 阳光新能源开发有限公司 Method for determining reflectivity data, method for determining electric quantity and related device
CN114067214B (en) * 2022-01-17 2022-05-13 北京弘象科技有限公司 Rainstorm identification method and device based on multi-model fusion convolutional network
CN114755745B (en) * 2022-05-13 2022-12-20 河海大学 Hail weather identification and classification method based on multi-channel depth residual shrinkage network
CN115857060B (en) * 2023-02-20 2023-05-09 国家海洋局北海预报中心((国家海洋局青岛海洋预报台)(国家海洋局青岛海洋环境监测中心站)) Short-term precipitation prediction method and system based on layered generation countermeasure network
CN116520459B (en) * 2023-06-28 2023-08-25 成都信息工程大学 Weather prediction method

Citations (1)

Publication number Priority date Publication date Assignee Title
CN112180375A (en) * 2020-09-14 2021-01-05 成都信息工程大学 Meteorological radar echo extrapolation method based on improved TrajGRU network

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US6920233B2 (en) * 2001-02-20 2005-07-19 Massachusetts Institute Of Technology Method and apparatus for short-term prediction of convective weather
CN106886023B (en) * 2017-02-27 2019-04-02 中国人民解放军理工大学 A kind of Radar Echo Extrapolation method based on dynamic convolutional neural networks
CN108508505B (en) * 2018-02-05 2020-12-15 南京云思创智信息科技有限公司 Heavy rainfall and thunderstorm forecasting method and system based on multi-scale convolutional neural network
CN111352113B (en) * 2020-04-01 2022-08-26 易天气(北京)科技有限公司 Strong convection weather short-term forecasting method and system, storage medium and terminal
CN111487624A (en) * 2020-04-23 2020-08-04 上海眼控科技股份有限公司 Method and equipment for predicting rainfall capacity
CN112363140B (en) * 2020-11-05 2024-04-05 南京叁云科技有限公司 Thermodynamic constraint extrapolation objective correction method based on cyclic neural network

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN112180375A (en) * 2020-09-14 2021-01-05 成都信息工程大学 Meteorological radar echo extrapolation method based on improved TrajGRU network

Also Published As

Publication number Publication date
CN113239722A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN113239722B (en) Deep learning based strong convection extrapolation method and system under multi-scale
WO2021139069A1 (en) General target detection method for adaptive attention guidance mechanism
Han et al. Convolutional neural network for convective storm nowcasting using 3-D Doppler weather radar data
CN112488210A (en) Three-dimensional point cloud automatic classification method based on graph convolution neural network
CN107016677A (en) A kind of cloud atlas dividing method based on FCN and CNN
CN107563411B (en) Online SAR target detection method based on deep learning
CN112836713A (en) Image anchor-frame-free detection-based mesoscale convection system identification and tracking method
CN106127725A (en) A kind of millimetre-wave radar cloud atlas dividing method based on multiresolution CNN
Jing et al. AENN: A generative adversarial neural network for weather radar echo extrapolation
CN112464911A (en) Improved YOLOv 3-tiny-based traffic sign detection and identification method
CN111666903B (en) Method for identifying thunderstorm cloud cluster in satellite cloud picture
CN108254750B (en) Down-blast intelligent identification early warning method based on radar data
CN113344045A (en) Method for improving SAR ship classification precision by combining HOG characteristics
CN110633633B (en) Remote sensing image road extraction method based on self-adaptive threshold
CN110472514B (en) Adaptive vehicle target detection algorithm model and construction method thereof
CN113989612A (en) Remote sensing image target detection method based on attention and generation countermeasure network
Lu et al. Convolutional neural networks for hydrometeor classification using dual polarization Doppler radars
CN109784376A (en) A kind of Classifying Method in Remote Sensing Image and categorizing system
CN113327301B (en) Strong convection extrapolation method and system based on depth analogy network under multi-dimensional radar data
CN112348062A (en) Meteorological image prediction method, meteorological image prediction device, computer equipment and storage medium
Abraham et al. Image Classification of Natural Disasters Using Different Deep Learning Models
CN117808650B (en) Precipitation prediction method based on Transform-Flownet and R-FPN
Čiurlionis et al. Nowcasting precipitation using weather radar data for Lithuania: The first results
Chen et al. Aviation visibility forecasting by integrating Convolutional Neural Network and long short-term memory network
Woldamanuel Grayscale Image Enhancement Using Water Cycle Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant