CN113327301B - Strong convection extrapolation method and system based on depth analogy network under multi-dimensional radar data - Google Patents


Info

Publication number
CN113327301B
CN113327301B
Authority
CN
China
Prior art keywords
image
radar
extrapolation
images
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110570222.7A
Other languages
Chinese (zh)
Other versions
CN113327301A
Inventor
文立玉
罗飞
卫霄飞
舒红平
曹亮
刘魁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Information Technology filed Critical Chengdu University of Information Technology
Priority claimed from CN202110570222.7A
Publication of CN113327301A
Application granted
Publication of CN113327301B
Legal status: Active

Classifications

    • G06T9/001 Model-based coding, e.g. wire frame (image coding)
    • G06T9/002 Image coding using neural networks
    • G06T5/90
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T2207/10016 Video; image sequence
    • G06T2207/10032 Satellite or aerial image; remote sensing
    • G06T2207/10044 Radar image
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention provides a strong convection extrapolation method and system based on a depth analogy network under multi-dimensional radar data. The method comprises the following steps: (1) feature image coding: encoding the radar images to obtain a plurality of feature images; (2) learning the plurality of radar images and the plurality of feature images with a deep visual analogy network to obtain an extrapolated radar image; (3) feeding the feature images into an extrapolation network to obtain extrapolated feature images; (4) performing a first optimization of the extrapolated feature images and the extrapolated radar image with an optimizer, and performing a second optimization on the output of the first optimization. In this method, two networks simultaneously extrapolate the radar images and the encoded feature images; each network is optimized by its own optimizer, and the two outputs are then jointly optimized a second time. This double optimization of the extrapolation process yields a more accurate strong convection forecast image, and the method is simple to implement.

Description

Strong convection extrapolation method and system based on depth analogy network under multi-dimensional radar data
Technical Field
The invention belongs to the interdisciplinary field of computer artificial intelligence and meteorology, and particularly relates to a strong convection extrapolation method and system based on a depth analogy network under multi-dimensional radar data.
Background
Strong convection weather generally refers to disastrous extreme weather phenomena, such as convective gales, hail, and short-term heavy precipitation, that accompany thunderstorms. Such weather is typically sudden in onset, fast-moving, violent, and extremely destructive. It occurs mainly in meso- and small-scale weather systems of limited spatial extent: the horizontal range is generally from about ten kilometers to two or three hundred kilometers, usually less than two hundred kilometers, and in some cases only tens of meters to a dozen kilometers. Its life cycle is short and marked by abrupt changes, lasting from about one hour to a dozen hours, and sometimes only a few minutes to an hour. It often occurs in cumulonimbus or isolated convective clouds. When strong convection weather strikes, it is frequently accompanied by thunder and lightning, gales, and heavy rain, which can destroy houses, crops, and trees, damage telecommunications and transportation, and even cause casualties.
Forecasting such destructive strong convection weather is therefore critical: predicting the phenomenon reveals its motion trend and likely occurrence in advance, so that meteorological staff can report to the relevant departments before it arrives, take effective measures, and avoid unnecessary losses as far as possible. At present, extrapolation of radar images mainly relies on the optical flow method and on deep neural network methods such as LSTM and its variants. The prior art is introduced below.
Optical flow method
The optical flow method is one of the earlier methods applied to the extrapolation problem. Its essence is to exploit the correlation between adjacent frames of a serialized image sequence to find a relation between two consecutive frames, and from this relation to calculate the motion of regions of the image between adjacent frames.
However, the optical flow method struggles to achieve both timeliness and accuracy, and its theoretical foundation rests on the assumption that the brightness of the same object remains constant, which is rarely fully satisfied in reality; this is a major drawback of the method. The literature also notes that the optical flow estimation step and the extrapolation step are separate, which makes parameter selection difficult.
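As a minimal sketch of the idea, assuming one global integer displacement (real optical flow solves for a per-pixel motion field from the brightness-constancy assumption; the names and brute-force search here are illustrative), extrapolation by motion estimation plus advection can be written as:

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Brute-force search for the integer (dy, dx) that best maps prev onto curr.

    A toy stand-in for optical flow: real methods estimate per-pixel motion
    instead of assuming one global displacement.
    """
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def extrapolate(prev, curr, max_shift=3):
    """Advect the latest frame by the estimated motion to predict the next frame."""
    dy, dx = estimate_shift(prev, curr, max_shift)
    return np.roll(curr, (dy, dx), axis=(0, 1))

# A bright "echo" drifting one pixel down and right per frame:
frame1 = np.zeros((8, 8)); frame1[2:4, 2:4] = 1.0
frame2 = np.roll(frame1, (1, 1), axis=(0, 1))
frame3_pred = extrapolate(frame1, frame2)
```

Note how the estimation step and the advection (extrapolation) step are separate, which is exactly the parameter-selection difficulty noted above.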
Advantages of deep-learning-based methods
A convolutional neural network contains multiple hidden layers with many hidden nodes, giving it very strong expressive power; applying neural network techniques to the radar extrapolation problem has therefore produced better results in later experiments. Common extrapolation networks are described below.
1. Extrapolation based on LSTM/ConvLSTM
Some scholars have achieved radar extrapolation with LSTM (long short-term memory) networks, which fit sequence data, discard part of the information through forget and output gates, and thereby mitigate the vanishing-gradient problem; LSTM can handle sequences of modest length. Scholars in Hong Kong, China added a convolution operation to LSTM and proposed ConvLSTM (convolutional LSTM), which extracts spatial features and is better suited to image-based time-series data. ConvLSTM is a variant of LSTM in which the weight multiplications by W are replaced by convolutions, so that image features can be extracted.
However, LSTM-based extrapolation is not sufficient: in existing RNN-based models, LSTM remains troublesome for long or large-magnitude sequences, and once the time span is large and the network deep, the computation becomes heavy and time-consuming.
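The replacement of LSTM's weight products by convolutions can be sketched as a single-channel, single-step toy cell; the kernel shapes and the W dictionary layout here are illustrative, not taken from the patent:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D convolution (single channel, odd-sized kernel)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, W):
    """One ConvLSTM time step: the LSTM gate equations with the weight
    multiplications replaced by convolutions. W holds 3x3 kernels for the
    input (x) and hidden (h) paths of each gate; biases omitted for brevity."""
    i = sigmoid(conv2d_same(x, W["xi"]) + conv2d_same(h, W["hi"]))  # input gate
    f = sigmoid(conv2d_same(x, W["xf"]) + conv2d_same(h, W["hf"]))  # forget gate
    o = sigmoid(conv2d_same(x, W["xo"]) + conv2d_same(h, W["ho"]))  # output gate
    g = np.tanh(conv2d_same(x, W["xg"]) + conv2d_same(h, W["hg"]))  # candidate
    c_new = f * c + i * g        # forget part of the old state, add new info
    h_new = o * np.tanh(c_new)   # gated output keeps spatial structure
    return h_new, c_new

rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(3, 3))
     for k in ["xi", "hi", "xf", "hf", "xo", "ho", "xg", "hg"]}
x = rng.normal(size=(8, 8))
h, c = np.zeros((8, 8)), np.zeros((8, 8))
h, c = convlstm_step(x, h, c, W)
```

Because every gate is a convolution, the hidden state keeps the same spatial layout as the input image, which is what makes the cell suitable for radar frames.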
2. Improvement based on ConvGRU and VGGNet
Some researchers, drawing on the ConvLSTM network structure, have proposed a ConvGRU model combining convolutional neural networks (CNN) with GRUs; the GRU structure is simpler than LSTM and uses fewer control gates. Although slightly less expressive, the ConvGRU model is often preferable because it trains faster and requires less memory than the ConvLSTM structure.
Related work has also improved the convolution layers of ConvGRU based on the VGGNet architecture: stacking several small convolution kernels in place of a large kernel reduces the number of training parameters while strengthening the network's feature extraction. The advantage of this model is that it uses ConvGRU structures instead of ConvLSTM and stacks them in a multi-level framework, combining the spatial feature extraction of the convolution structure with the memory capability of the GRU, which handles time-series problems well. Experiments comparing this model against the optical flow method verified the improved model's applicability to short-term rainfall forecasting.
Combining VGG with ConvGRU deepens the network hierarchy, but simply building a ConvGRU-based extrapolation model on a VGG backbone still leaves room for improvement.
3. Radar echo extrapolation based on GAN algorithm
Scholars have applied generative adversarial networks (GAN) to short-term prediction and carried out related experiments. The GAN method extracts image features from a series of radar observations using convolutional neural networks to build a prediction model, which is optimized through a loss function. Extrapolation experiments on four relevant weather processes in a given region and year show that the shape, intensity, and echo position extrapolated by the GAN in short-term convective forecasting are in most cases consistent with observations, so the technique works well for radar extrapolation. However, the literature also notes that the echo range extrapolated by the GAN method is too large, and that the extrapolation is poor in particular for stratiform-cloud precipitation. Tests on three levels of echo intensity, forecast 1 h ahead over 18 cases of precipitation driven by an easterly system, the southwest monsoon, a westerly-belt system, and typhoons, show that the GAN model forecasts medium-intensity echoes well but still leaves room for improvement on strong echoes.
In GAN-based radar echo extrapolation, the network works as follows: the first-generation generator G1 produces predicted data from the historical sequence, and the generated echo and the true echo are fed to the first-generation discriminator D1 for learning until D1 can reliably distinguish generated radar images from real ones. A second-generation generator G2 is then trained whose radar images can fool D1; a second-generation discriminator D2 is trained in turn, and so on. Generator G and discriminator D play a min-max game, each optimizing itself through this interaction until a dynamic equilibrium is reached in which neither can improve further; fake samples can then no longer be distinguished from real ones, and the output echoes serve as the near-term forecast.
Note that during training, the historical radar image sequence and the associated data of the forecast target within 1 h are used as training samples. Since the GAN method has no explicit "forecasting" process, the model's learned output during training is the forecast for the next 1 h.
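As background, the dynamic-equilibrium endpoint of the min-max game described above can be illustrated numerically with the well-known closed form of the optimal discriminator; the 1-D Gaussian densities below are purely illustrative and are not part of the patent's method:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def optimal_discriminator(x, p_data, p_gen):
    """For a fixed generator, the discriminator that wins the min-max game is
    D*(x) = p_data(x) / (p_data(x) + p_gen(x))."""
    return p_data(x) / (p_data(x) + p_gen(x))

x = np.linspace(-5, 5, 201)
p_real = lambda t: gaussian_pdf(t, 0.0, 1.0)

# Early in training the generator is off-target and D* separates the two:
p_fake_early = lambda t: gaussian_pdf(t, 2.0, 1.0)
d_early = optimal_discriminator(x, p_real, p_fake_early)

# At dynamic equilibrium the generator matches the data distribution and
# D* collapses to 1/2 everywhere: fake and real can no longer be told apart.
d_equilibrium = optimal_discriminator(x, p_real, p_real)
```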
In addition, conventional interpolation methods such as inverse distance weighting, kriging, and trend-surface analysis suffer from large errors, complex computation, demanding assumptions on data distribution, and narrow applicability.
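For reference, the inverse distance weighting mentioned here can be sketched in a few lines (the function names and toy sample layout are illustrative):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: each known sample contributes with weight
    1 / distance**power. Exact at the sample points themselves."""
    out = np.empty(len(xy_query))
    for q, p in enumerate(xy_query):
        d = np.linalg.norm(xy_known - p, axis=1)
        if np.any(d == 0):                       # query coincides with a sample
            out[q] = values[np.argmin(d)]
            continue
        w = 1.0 / d ** power
        out[q] = np.sum(w * values) / np.sum(w)
    return out

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 20.0, 30.0, 40.0])
center = idw(pts, vals, np.array([[0.5, 0.5]]))  # equidistant, so a plain mean
```

The per-query scan over every known sample, and the "bull's-eye" flattening around samples, are among the cost and accuracy drawbacks noted above.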
In summary, most existing strong convection extrapolation techniques are based on a single scale, with simple model hierarchies, and most extrapolation studies rely on the MSE loss, which causes the quality of repeated extrapolation to degrade rapidly; an optimization method better suited to the application is therefore needed. Moreover, the extrapolation networks proposed so far accept only one type of image data or a single input, and no example has been disclosed of feeding several different feature data into one network for learning; a single image input has limited expressiveness, and the learning ability needs further improvement.
Disclosure of Invention
In view of this, an object of the present invention is to provide a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data, which learns radar images with a deep visual analogy network and optimizes the network in the process, effectively improving the quality of the extrapolated radar images.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data comprises the following steps:
(1) Feature image coding: encoding the radar images to obtain a plurality of feature images;
(2) Learning the plurality of radar images and the plurality of feature images with a deep visual analogy network to obtain an extrapolated radar image;
(3) Feeding the feature images into an extrapolation network to obtain extrapolated feature images;
(4) Performing a first optimization of the extrapolated feature images and the extrapolated radar image with an optimizer, and performing a second optimization on the output of the first optimization; wherein the first optimization uses the following loss model:
$$L(y,\hat{y}) = \xi_1 \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2 + \xi_2\left[1 - \bar{l}(y,\hat{y})^{\alpha}\,\bar{c}(y,\hat{y})^{\beta}\,\bar{s}(y,\hat{y})^{\gamma}\right] + \theta\sum_{i} w_i$$

where $\bar{l}(y,\hat{y})$ is the average illumination comparison function, $\bar{c}(y,\hat{y})$ is the average contrast comparison function, $\bar{s}(y,\hat{y})$ is the average structure comparison function, $y$ is the true value and $\hat{y}$ is the predicted value, $\alpha$, $\beta$ and $\gamma$ are the coefficients of the average illumination, average contrast and average structure comparison functions respectively, $\theta$ is the coefficient of the offset values $w_i$, $m$ is the number of samples, $\xi_1$ is the weight of the sum of differences between true and predicted values, $\xi_2$ is the weight of the three structural loss terms, and $y_i$, $\hat{y}_i$ are the true and predicted values with index $i$.
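The loss model above combines an error term weighted by ξ1 with a luminance/contrast/structure term weighted by ξ2. One hedged reading of it, using the standard SSIM-style comparison functions with the usual stabilizing constants (the constants and the exact combination below are an interpretation, since the patent gives the formula only as an image), can be sketched as:

```python
import numpy as np

C1, C2, C3 = 1e-4, 9e-4, 4.5e-4   # usual SSIM stabilizing constants (assumed)

def luminance(y, p):   # average illumination comparison l(y, p)
    my, mp = y.mean(), p.mean()
    return (2 * my * mp + C1) / (my ** 2 + mp ** 2 + C1)

def contrast(y, p):    # average contrast comparison c(y, p)
    sy, sp = y.std(), p.std()
    return (2 * sy * sp + C2) / (sy ** 2 + sp ** 2 + C2)

def structure(y, p):   # average structure comparison s(y, p)
    cov = ((y - y.mean()) * (p - p.mean())).mean()
    return (cov + C3) / (y.std() * p.std() + C3)

def combined_loss(y, p, alpha=1.0, beta=1.0, gamma=1.0,
                  xi1=0.5, xi2=0.5, theta=0.0, w=None):
    """xi1-weighted MSE plus xi2-weighted structural (SSIM-style) loss, plus an
    optional theta-weighted offset term: an interpretation of the loss model
    described in the text, not a verbatim transcription."""
    mse = np.mean((y - p) ** 2)
    ssim = (luminance(y, p) ** alpha) * (contrast(y, p) ** beta) \
           * (structure(y, p) ** gamma)
    offset = theta * np.sum(w) if w is not None else 0.0
    return xi1 * mse + xi2 * (1.0 - ssim) + offset

y = np.linspace(0, 1, 64).reshape(8, 8)
```

Under this reading the loss vanishes when prediction equals truth, and the structural term penalizes blurring that a pure MSE loss would tolerate.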
Further, the radar image is encoded through a VGG16 network.
Further, the plurality of radar images are continuous sequence images in time, and the plurality of feature images are continuous sequence images corresponding to the plurality of radar images.
Further, the analogy relation of the extrapolated radar image obtained in step (2) is:

(B, B_s) : (A, X)

where A is one of the radar images, B is one of the feature images, B_s is the image following B in the time-ordered sequence of feature images, and X is the extrapolated radar image.
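A minimal sketch of such an analogy relation, using the additive code-space form of deep visual analogy-making with a linear stand-in encoder/decoder (the patent's learned convolutional networks may differ), is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in encoder f and decoder g: a random linear map and its pseudo-inverse.
# In the patent these are deep convolutional networks learned from data.
F = rng.normal(size=(16, 64))
f = lambda img: F @ img.ravel()
g = lambda code: (np.linalg.pinv(F) @ code).reshape(8, 8)

def analogy_extrapolate(A, B, B_s):
    """Additive visual analogy (B, B_s) : (A, X): apply, in code space, the
    transformation that turns B into B_s to the radar image A, then decode."""
    return g(f(A) + (f(B_s) - f(B)))

A   = rng.normal(size=(8, 8))          # radar image at time t
B   = rng.normal(size=(8, 8))          # its encoded feature image
B_s = rng.normal(size=(8, 8))          # feature image at time t+1
X   = analogy_extrapolate(A, B, B_s)   # extrapolated radar image at t+1
```

When the feature sequence does not change (B_s equals B), the code-space difference is zero and the "extrapolation" reduces to re-decoding A, which is the expected degenerate behavior of the relation.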
Further, the extrapolation network in step (3) is a convLSTM extrapolation network.
Further, an LMOptimizer optimizer is adopted in the step (4).
In view of the above, a second object of the present invention is to provide a strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data, which uses its modules to encode, model, and optimize radar images, thereby improving the quality of the extrapolated radar images.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data comprises:
the radar image acquisition module is used for acquiring a plurality of radar images in real time;
the characteristic coding module is connected with the radar image acquisition module and used for coding the radar image to obtain a plurality of characteristic images;
the depth analogy module is connected with the radar image acquisition module and the feature coding module and used for learning the plurality of radar images and the plurality of feature images by using a depth vision analogy network to obtain an extrapolated radar image;
the extrapolation module is connected with the feature coding module and used for feeding the feature image into an extrapolation network to obtain an extrapolated feature image;
a double optimization module connected with the depth analogy module and the extrapolation module for
Receiving the extrapolated characteristic image and the extrapolated radar image, performing first optimization on the extrapolated characteristic image and the extrapolated radar image, and performing second optimization on the output after the first optimization; wherein the first optimization uses the following loss model:
$$L(y,\hat{y}) = \xi_1 \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2 + \xi_2\left[1 - \bar{l}(y,\hat{y})^{\alpha}\,\bar{c}(y,\hat{y})^{\beta}\,\bar{s}(y,\hat{y})^{\gamma}\right] + \theta\sum_{i} w_i$$

where $\bar{l}(y,\hat{y})$ is the average illumination comparison function, $\bar{c}(y,\hat{y})$ is the average contrast comparison function, $\bar{s}(y,\hat{y})$ is the average structure comparison function, $y$ is the true value and $\hat{y}$ is the predicted value, $\alpha$, $\beta$ and $\gamma$ are the coefficients of the average illumination, average contrast and average structure comparison functions respectively, $\theta$ is the coefficient of the offset values $w_i$, $m$ is the number of samples, $\xi_1$ is the weight of the sum of differences between true and predicted values, $\xi_2$ is the weight of the three structural loss terms, and $y_i$, $\hat{y}_i$ are the true and predicted values with index $i$.
Furthermore, the characteristic encoding module is provided with a VGG16 network, and encodes the radar image through the VGG16 network.
Further, the plurality of radar images are continuous sequence images in time, and the plurality of feature images are continuous sequence images corresponding to the plurality of radar images.
Further, in the depth analogy module, the analogy relation of the extrapolated radar image is as follows:
(B, B_s) : (A, X)

where A is one of the radar images, B is one of the feature images, B_s is the image following B in the time-ordered sequence of feature images, and X is the extrapolated radar image.
Further, the extrapolation network is a convLSTM extrapolation network.
Advantageous effects
The invention provides a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data: the feature images obtained by encoding the radar images are fed into an analogy network and into an extrapolation network to obtain an extrapolated radar image and extrapolated feature images respectively; each output is optimized by its own optimizer, the two outputs are then jointly optimized a second time, and the optimizers perform gradient calculations throughout to tune the parameters, finally yielding a high-quality extrapolated radar image. The invention also provides a strong convection extrapolation system based on the depth analogy network under multi-dimensional radar data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive exercise.
FIG. 1 is a schematic structural diagram of an embodiment of a strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data according to the present invention;
FIG. 2 is a schematic diagram of a characteristic image encoding process of a strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data according to the present invention;
FIG. 3 is a flowchart illustrating an embodiment of a strong convection extrapolation method based on a depth analogy network under multidimensional radar data according to the present invention;
FIG. 4 is a schematic diagram of a radar image analogy generation process in a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data according to the present invention;
FIG. 5 is a schematic diagram of an analogy generation process in a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data according to the present invention;
FIG. 6 is a diagram of an analog network structure based on radar images in a strong convection extrapolation method based on a depth analog network under multi-dimensional radar data according to the present invention;
FIG. 7 is a schematic diagram of double optimization based on radar images in a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data according to the present invention;
FIG. 8 is a schematic diagram of a detailed structure of a double optimization extrapolation network in a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data according to the present invention;
FIG. 9 is a schematic diagram of a training flow of an extrapolation network in a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The examples are given to better illustrate the invention, but the invention is not limited to them. Those skilled in the art may make insubstantial modifications and adaptations in light of the above description while remaining within the scope of the invention.
Example 1
Fig. 1 is a schematic structural diagram of an embodiment of the strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data according to the present invention. Specifically, the system comprises:
a radar image acquisition module 1, configured to acquire a plurality of radar images in real time;
further, the radar image acquisition module 1 may include a directional antenna, a transmitter, a receiver, an antenna controller, a display, a camera, an electronic computer, and an image transmission part, and is configured to acquire a plurality of radar images in real time;
the characteristic coding module 2 is connected with the radar image acquisition module 1 and is used for coding the radar image to obtain a plurality of characteristic images;
furthermore, the feature coding module 2 contains a coder f and a decoder g, the coder f and the decoder g are parameterized into a deep convolutional neural network, image features are extracted, and the coder can generate coding information which is easy to predict after training; the feature encoding module 2 is further provided with a VGG16 network, in this embodiment, the VGG16 network is used to perform feature extraction on the radar image to obtain an encoded image, and fig. 2 describes a process of extracting the feature image by using the VGG16, in which first, the obtained radar image is grayed, and then the radar image is fed into the VGG16 network to perform feature extraction and Encoder encoding, and finally, the encoded feature image is obtained.
The depth analogy module 3 is connected with the radar image acquisition module 1 and the feature coding module 2 and is used for learning a plurality of radar images and a plurality of feature images by using a depth vision analogy network to obtain an extrapolated radar image;
furthermore, the plurality of radar images are continuous sequence images according to time, and the plurality of characteristic images are continuous sequence images corresponding to the plurality of radar images;
in one embodiment, in the depth analogy module, the analogy relation for obtaining the extrapolated radar image is:
(B, B_s) : (A, X)

where A is one of the radar images, B is one of the feature images, B_s is the image following B in the time-ordered sequence of feature images, and X is the extrapolated radar image.
The extrapolation module 4 is connected with the feature coding module 2 and used for feeding the feature image into an extrapolation network to obtain an extrapolated feature image;
in this embodiment, the extrapolation network is a convLSTM extrapolation network.
The double optimization module 5 is connected with the depth analogy module 3 and the extrapolation module 4, and is used for receiving the extrapolated feature images and the extrapolated radar image, performing a first optimization on them, and performing a second optimization on the output of the first optimization.
Further, an LMOptimizer optimizer is provided in the double optimization module 5. The module uses the LMOptimizer to optimize, at the same time, the parameters of the depth analogy network in the depth analogy module 3 and of the extrapolation network in the extrapolation module 4, improving model performance synchronously through double optimization. Specifically, the depth analogy network and the extrapolation network each perform gradient calculations with their own optimizer to obtain optimized parameters; a weighted secondary optimization is then carried out, and the Quaoptimizer generates the model.
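A minimal sketch of this double optimization, with plain gradient descent standing in for the LMOptimizer (whose internals are not detailed here) and a weighted blend standing in for the secondary optimization, is:

```python
import numpy as np

def grad_step(params, grad, lr=0.1):
    """One gradient-descent update; a stand-in for the LMOptimizer step."""
    return params - lr * grad

target = np.full(4, 2.0)   # illustrative ground truth

# First optimization: the analogy branch and the extrapolation branch each
# fit the target with their own optimizer on their own quadratic loss.
radar_out, feat_out = np.zeros(4), np.ones(4)
for _ in range(200):
    radar_out = grad_step(radar_out, 2 * (radar_out - target))
    feat_out = grad_step(feat_out, 2 * (feat_out - target))

# Second optimization: a scalar weight blends the two branch outputs, and
# the weight itself is tuned by gradient descent on the blended loss.
w = 0.5
for _ in range(100):
    blended = w * radar_out + (1 - w) * feat_out
    dw = np.sum(2 * (blended - target) * (radar_out - feat_out))
    w = grad_step(w, dw, lr=0.01)
final = w * radar_out + (1 - w) * feat_out
```

The point of the sketch is the structure, not the optimizers: two branches first converge independently, then their combination is optimized once more, mirroring the first and second optimizations described above.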
the LMOptimizer optimizes the extrapolation feature image and the extrapolation radar image for the first time, and a new optimization mode is provided in this embodiment, which specifically includes: the first optimization used the following loss model:
$$L(y,\hat{y}) = \xi_1 \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2 + \xi_2\left[1 - \bar{l}(y,\hat{y})^{\alpha}\,\bar{c}(y,\hat{y})^{\beta}\,\bar{s}(y,\hat{y})^{\gamma}\right] + \theta\sum_{i} w_i$$

where $\bar{l}(y,\hat{y})$ is the average illumination comparison function, $\bar{c}(y,\hat{y})$ is the average contrast comparison function, $\bar{s}(y,\hat{y})$ is the average structure comparison function, $y$ is the true value and $\hat{y}$ is the predicted value, $\alpha$, $\beta$ and $\gamma$ are the coefficients of the average illumination, average contrast and average structure comparison functions respectively, $\theta$ is the coefficient of the offset values $w_i$, $m$ is the number of samples, $\xi_1$ is the weight of the sum of differences between true and predicted values, $\xi_2$ is the weight of the three structural loss terms, and $y_i$, $\hat{y}_i$ are the true and predicted values with index $i$.
Preferably, since the plurality of radar images form a time-continuous sequence, the extrapolation module 4 in this embodiment obtains a time-continuous sequence of extrapolated characteristic images, the depth analogy module 3 obtains a time-continuous sequence of extrapolated radar images, and the double optimization module 5 continuously optimizes the extrapolation module 4 and the depth analogy module 3 over time to form a Training Flow, so that the final model is more reliable.
Example 2
Referring to fig. 3, this embodiment applies the strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data of embodiment 1; the corresponding strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data includes the following steps:
S100: characteristic image coding: encoding a plurality of radar images to obtain a plurality of characteristic images; then, step S200 is executed;
In this embodiment, a radar image acquisition system comprising a directional antenna, a transmitter, a receiver, an antenna controller, a display, a camera, an electronic computer, image transmission components and the like can be used to acquire a plurality of radar images in real time; the plurality of radar images form a time-continuous sequence, and the plurality of characteristic images form the corresponding continuous sequence.
Further, the encoder is trained to generate encoded information that is easy to predict. Feature extraction is performed on each radar image by a VGG16 network to obtain the encoded image; the autoencoder comprises an encoder f and a decoder g.
S200: learning the plurality of radar images and the plurality of characteristic images by using a depth visual analogy network to obtain an extrapolated radar image; then, step S300 is executed;
Further, the analogy relation used to obtain the extrapolated radar image is:

(B, B_s) : (A, X)

wherein A is one of the radar images, B is the corresponding characteristic image, B_s is the image that follows B in the time-ordered sequence of characteristic images, and X is the extrapolated radar image; the radar image analogy generation process is illustrated in fig. 4.
In this embodiment, the key to solving the analogy problem is learning the relations between images, and introducing this idea into the radar image extrapolation task is both feasible and appropriate. Specifically: in the deep visual analogy network, if a certain relation between P and Q is known and the same relation holds between R and X, then, knowing R, finding X amounts to generating a suitable image that completes the analogy. The relation is written as:

(P, Q) : (R, X)

The method in this embodiment applies this idea to the extrapolation task; the specific process is as follows:
First, the radar base reflectivity image at an elevation angle of 0.5° is taken as the original image to be analyzed and is denoted image A; the characteristic image obtained after feature extraction by the VGG network in step S100 is denoted image B.
From the viewpoint of the extrapolation task, given a set of n time-consecutive images I = (i_1, i_2, ..., i_k, ..., i_{n−1}, i_n), if the k-th image is to be predicted, then i_k is the ground truth of that prediction. Accordingly, the next image of A in the sequence is denoted A_s, and the same encoding as in step S100 yields B_s. Using A, B and B_s, the depth visual analogy network obtains X̂, namely (B, B_s) : (A, X̂): for image A, the network computes X̂, which is the predicted image.
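A toy sketch of this analogy step, taking the encoder f and decoder g to be identity maps (an assumption made only for illustration; the embodiment uses the trained VGG encoder and a learned decoder), reduces the additive analogy to X̂ = A + (B_s − B):

```python
import numpy as np

def analogy_predict(A, B, B_s, f=lambda z: z, g=lambda z: z):
    """Additive visual analogy (B, B_s) : (A, X): transfer the change
    from B to B_s onto A. f and g default to identity maps here;
    in the embodiment they are learned encoder/decoder networks."""
    return g(f(B_s) - f(B) + f(A))

# The feature image brightens uniformly by 2.0 from B to B_s;
# the analogy applies the same increment to the radar image A.
B = np.zeros((4, 4))
B_s = B + 2.0
A = np.arange(16.0).reshape(4, 4)
X_hat = analogy_predict(A, B, B_s)
```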
Specifically, referring to fig. 4, the generation process of the analogy network follows the mathematical principle below:

L_add = Σ ‖d − g(f(b) − f(a) + f(c))‖²

L_mul = Σ ‖d − g(W ×₁ (f(b) − f(a)) ×₂ f(c) + f(c))‖²

L_deep = Σ ‖d − g(h([f(b) − f(a); f(c)]) + f(c))‖²

T(x, y, z) = y − x (additive); W ×₁ (y − x) ×₂ z (multiplicative); MLP([y − x; z]) (deep)

R = Σ ‖T(f(a), f(b), f(c)) − (f(d) − f(c))‖²

wherein a, b, c, d form the relation pair (a, b) : (c, d): d is the image to be solved, a, b and c are the three input pictures, g is the decoder, and f is the encoder. In the additive analogy network, the three pictures a, b and c are encoded separately; f(b) − f(a) gives the difference between a and b, which is added to the encoding f(c) of c to obtain the required f(d_prediction); after decoding by g, the norm of the difference between the decoded prediction and d yields the additive analogy network loss L_add. Likewise, for L_mul, the three pictures are encoded separately, the difference between the encodings of a and b interacts multiplicatively with f(c) through the weight tensor W, the result is added to f(c) to obtain d_prediction, and the norm of the difference between the prediction and the true value d yields the multiplicative analogy network loss L_mul. L_deep is the loss of the relation-learning analogy network: the function h learns the relation between the encoded difference of a and b and the encoding of c. R measures the difference between the prediction and the true value in feature space, and the three losses are normalized and integrated into a piecewise loss, giving the increment function T(x, y, z), where MLP is a multilayer neural network. In this embodiment, the structure of the analogy network is shown in fig. 6: a, b and c enter the Encoder network f for encoding and then the increment function T, yielding L_add, L_deep and L_mul; decoding in the Decoder g finally produces the predicted image d.
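Under the simplifying assumptions of an identity encoder/decoder and a random single-layer stand-in for the MLP h (illustration only, not the trained model), the three losses described above can be written directly:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4                                # encoded feature dimension (illustrative)
f = lambda z: z                      # stand-in encoder (identity, an assumption)
g = lambda z: z                      # stand-in decoder (identity, an assumption)
W = rng.normal(0.0, 0.1, (K, K, K))  # multiplicative interaction tensor
M = rng.normal(0.0, 0.1, (K, 2 * K)) # single-layer stand-in for the MLP h

def h(diff, fc):
    # relation-learning map over the concatenated difference and encoding
    return M @ np.concatenate([diff, fc])

def analogy_losses(a, b, c, d):
    diff = f(b) - f(a)
    d_add = g(diff + f(c))
    d_mul = g(np.einsum('ijk,i,j->k', W, diff, f(c)) + f(c))
    d_deep = g(h(diff, f(c)) + f(c))
    L_add = np.sum((d - d_add) ** 2)
    L_mul = np.sum((d - d_mul) ** 2)
    L_deep = np.sum((d - d_deep) ** 2)
    return L_add, L_mul, L_deep

a = rng.normal(size=K); b = a + 1.0  # (a, b): a uniform shift
c = rng.normal(size=K); d = c + 1.0  # (c, d) obeys the same relation
L_add, L_mul, L_deep = analogy_losses(a, b, c, d)
```

With identity maps, the additive loss vanishes exactly when (c, d) obeys the same shift as (a, b); the multiplicative and deep variants only reach zero after their weights are trained.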
In one embodiment, the encoder f and the decoder g are parameterized as deep convolutional neural networks that extract image features. The additive form L_add is simple and easy to implement, but in some situations, such as rotation, it may not perform well; L_mul and L_deep can address these problems. In L_mul, the increment is generated by the interaction between f(b) − f(a) and f(c), with W a three-dimensional tensor; to reduce the number of weights, a parameterized form of the tensor contraction is used:

W ×₁ v ×₂ w ∈ R^K, for vectors v, w ∈ R^K

where W ∈ R^{K×K×K}, K is the dimension of the image space after the encoder, and D is the dimension of the space restored by the decoder, that is:

f : R^D → R^K, g : R^K → R^D

In general K < D. The MLP is a multilayer perceptron.
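The contraction W ×₁ v ×₂ w, which keeps the result in R^K, can be sketched with `numpy.einsum` (dimensions are illustrative):

```python
import numpy as np

K = 8
rng = np.random.default_rng(1)
W = rng.normal(size=(K, K, K))   # interaction tensor W in R^{K x K x K}
v = rng.normal(size=K)           # e.g. v = f(b) - f(a)
w = rng.normal(size=K)           # e.g. w = f(c)

# contract W along modes 1 and 2 with v and w; a vector in R^K remains
out = np.einsum('ijk,i,j->k', W, v, w)
```

Parameterizing W with fewer weights, as the text suggests, would replace this dense K³ tensor with smaller factors; the contraction itself stays the same.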
By introducing a regularizer, the prediction effect is improved; the overall training objective is a weighted combination of the analogy prediction loss and the regularizer. For example, L_deep + α·R is the regularized optimization of L_deep, with α = 0.01. The final training objective is expressed as:

L = L_deep + α·R, α = 0.01

R = Σ ‖T(f(a), f(b), f(c)) − (f(d) − f(c))‖²

where L_deep is the prediction loss of the relation analogy network and the regularizer R is weighted by the influence factor α. The formula is schematically the same as in fig. 4.
The analogy generation process can refer to FIG. 5;
S300: feeding the characteristic image into an extrapolation network to obtain an extrapolated characteristic image; then, step S400 is performed;
S400: performing gradient calculation on the depth analogy network and the extrapolation network with their respective optimizers to obtain optimized parameters, performing a weighted secondary optimization, and generating the model.
Through steps S100-S300, the characteristic image B_s and the extrapolated characteristic image B_s' are obtained and optimized, together with the real radar image A_s and the extrapolated radar image A_s'. An LMOptimizer simultaneously optimizes the parameters of the depth analogy network and of the extrapolation network, improving model performance synchronously in a double optimization manner.
The extrapolated radar image A_s' is obtained by the depth analogy network from the real radar image A and the real characteristic images B and B_s, while the extrapolated characteristic image B_s' is obtained by the extrapolation network. The depth analogy network and the extrapolation network perform gradient computation synchronously with their respective optimizers to obtain optimized parameters; a weighted secondary optimization is then performed, and the model is generated. The specific optimization is shown in fig. 7.
Referring to fig. 8, the detailed structure of the double-optimized extrapolation network in this embodiment (i.e. VGG16, convLSTM and the depth analogy network) is as follows. Let A be a radar image, fed simultaneously into two networks: the VGG network, which yields the characteristic image B, and the Snet depth analogy network. The characteristic image B is then fed into two further networks: the convLSTM extrapolation network, which yields the extrapolated characteristic image, and the Snet depth analogy network. The radar image A_s, the next radar image after A in the sequence, is sent to the VGG network (i.e. the Encoder network) to obtain the characteristic image B_s, which is then fed to the Snet network; by the Snet analogy theory described above, the radar image A_s' can thus be extrapolated, while the convLSTM extrapolation network directly extrapolates the characteristic image B_s'. The advantage is that the radar image and the code-based characteristic image are extrapolated simultaneously by the two networks, which are then separately optimized by the LMOptimizer; the LMOptimizer subsequently performs a secondary optimization on the output of the dual network.
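The double-optimization schedule can be sketched numerically with two scalar stand-ins for the Snet and convLSTM parameters; the learning rate and secondary weights below are invented for illustration. Each network first takes a step with its own optimizer, then a weighted secondary step is applied:

```python
import numpy as np

def grad(loss_fn, p, eps=1e-6):
    # central-difference gradient, a stand-in for autodiff
    return (loss_fn(p + eps) - loss_fn(p - eps)) / (2 * eps)

# toy losses standing in for the depth analogy network and extrapolation network
snet_loss = lambda p: (p - 3.0) ** 2
lstm_loss = lambda p: (p + 1.0) ** 2

p_snet, p_lstm = 0.0, 0.0
lr, w_snet, w_lstm = 0.1, 0.5, 0.5   # illustrative learning rate and weights

for _ in range(200):
    # first optimization: each network's own optimizer step
    p_snet -= lr * grad(snet_loss, p_snet)
    p_lstm -= lr * grad(lstm_loss, p_lstm)
    # second optimization: weighted step on the combined objective
    p_snet -= lr * w_snet * grad(snet_loss, p_snet)
    p_lstm -= lr * w_lstm * grad(lstm_loss, p_lstm)
```

Both parameters converge to their respective minimizers; the point of the sketch is only the two-phase step structure, not the toy losses.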
The L2 loss, i.e. mean square error (MSE), is the most commonly used regression loss function: the sum of the squared differences between the target values and the predicted values. In this embodiment, the errors of the image in illuminance, contrast and structure are additionally evaluated (the QuaOptimizer).
The L2 loss easily produces blur when dealing with images, so using L2 alone as the loss function of the extrapolation problem is not appropriate, because image quality degrades seriously as the extrapolation time grows. A new way of computing the image loss (the LMOptimizer) is therefore proposed in this embodiment, and the two losses are combined to improve the quality of the extrapolated radar image.
In particular, the mean absolute error reflects the actual magnitude of the prediction error well, but it cannot evaluate the errors of the image in illuminance, contrast and structure; the following average comparison functions are therefore introduced:
The average illuminance comparison function is set to:

l(y, ŷ) = (2·μ_y·μ_ŷ + c_1) / (μ_y² + μ_ŷ² + c_1), with μ_y = (1/n)·Σ_{i=1..n} y_i

where i is the summation subscript, y is the true value and ŷ the predicted value, μ_y is the illuminance of the true value and μ_ŷ the illuminance of the predicted value, m is the number of samples, c_1 is a constant, y_i is the true value with subscript i, and n is the number of terms; that is, the average gray level is taken as the estimate of the illuminance.
The average contrast comparison function is set to:

c(y, ŷ) = (2·σ_y·σ_ŷ + c_2) / (σ_y² + σ_ŷ² + c_2), with σ_y = ((1/(n−1))·Σ_{i=1..n} (y_i − μ_y)²)^{1/2}

From measurement-system theory, the average gray value is removed from the signal, and the standard deviation is used to measure the average contrast.
The average structure comparison function is set to:

s(y, ŷ) = (σ_yŷ + c_3) / (σ_y·σ_ŷ + c_3)

where σ_y and σ_ŷ are the standard deviations of y and ŷ respectively, and σ_yŷ is the covariance of y and ŷ; c_1, c_2 and c_3 are constants introduced to avoid the systematic error caused by a zero denominator.
Adding the mean absolute error and a regularization term, the loss is defined as follows:

Loss = ξ_1 · (1/m) · Σ_{i=1..m} |y_i − ŷ_i| + ξ_2 · (1 − l(y, ŷ)^α · c(y, ŷ)^β · s(y, ŷ)^γ) + θ · Σ_i w_i

where ξ_1 + ξ_2 = 1, ξ_1 is the weight of the summed absolute differences between the true and predicted values, ξ_2 is the weight of the three structural losses, α is the coefficient of the average illuminance comparison function, β is the coefficient of the average contrast comparison function, γ is the coefficient of the average structure comparison function, θ is the coefficient of the offset value, and w_i is the offset value. When the parameters α = β = γ = 1 in Loss, the above formula can be expressed as:

Loss = ξ_1 · (1/m) · Σ_{i=1..m} |y_i − ŷ_i| + ξ_2 · (1 − l(y, ŷ) · c(y, ŷ) · s(y, ŷ)) + θ · Σ_i w_i
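The comparison functions and the combined loss described above can be sketched directly; the constants c_1, c_2, c_3 and the weights below are illustrative choices, not values fixed by the patent, and the offset term θ·Σ w_i is omitted by setting θ = 0:

```python
import numpy as np

def lcs(y, y_hat, c1=1e-4, c2=1e-4, c3=1e-4):
    """Average illuminance l, contrast c and structure s comparison functions."""
    n = y.size
    mu_y, mu_p = y.mean(), y_hat.mean()
    sd_y = np.sqrt(np.sum((y - mu_y) ** 2) / (n - 1))
    sd_p = np.sqrt(np.sum((y_hat - mu_p) ** 2) / (n - 1))
    cov = np.sum((y - mu_y) * (y_hat - mu_p)) / (n - 1)
    l = (2 * mu_y * mu_p + c1) / (mu_y ** 2 + mu_p ** 2 + c1)
    c = (2 * sd_y * sd_p + c2) / (sd_y ** 2 + sd_p ** 2 + c2)
    s = (cov + c3) / (sd_y * sd_p + c3)
    return l, c, s

def combined_loss(y, y_hat, xi1=0.84, xi2=0.16, alpha=1, beta=1, gamma=1):
    """xi1 * MAE + xi2 * (1 - l^alpha * c^beta * s^gamma); theta = 0 here."""
    l, c, s = lcs(y, y_hat)
    mae = np.mean(np.abs(y - y_hat))
    return xi1 * mae + xi2 * (1.0 - l ** alpha * c ** beta * s ** gamma)

y_true = np.linspace(0.0, 1.0, 64)
y_pred = y_true + 0.05 * np.sin(np.arange(64))
loss = combined_loss(y_true, y_pred)
```

The choice xi2 = 0.16 follows the preference stated below that ξ_2 should be at least 0.16; identical images give zero loss, and any illuminance, contrast or structure discrepancy increases it.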
Preferably, when the coefficient ξ_2 is not less than 0.16, the effect on the extrapolation task is better.
Preferably, according to the double-optimized extrapolation network structure above, and referring to the extrapolation-network training flow of fig. 9, each unit formed by the dotted-line portion is referred to as a block. Starting from the two input radar images A and A_s, the subsequent two radar images can be predicted; by connecting a plurality of blocks in the manner of fig. 7, extrapolation of further subsequent images is realized. The input images of the third block are the images extrapolated by the convLSTM in the second block. Note that the Snet network is optimized throughout, which also makes the final model reliable.
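The chaining of blocks in the training flow can be sketched schematically; `extrapolate_step` is a hypothetical stand-in for one block's convLSTM extrapolation, not the trained network:

```python
import numpy as np

def extrapolate_step(feature_img):
    # hypothetical stand-in for one block: persistence plus a small trend
    return feature_img + 0.1

def run_training_flow(first_feature, n_blocks):
    """Chain blocks: each block's extrapolated image feeds the next block."""
    outputs = []
    current = first_feature
    for _ in range(n_blocks):
        current = extrapolate_step(current)
        outputs.append(current)
    return outputs

seq = run_training_flow(np.zeros((2, 2)), n_blocks=3)
```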
While the present invention has been described with reference to the particular illustrative embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but is intended to cover various modifications, equivalent arrangements, and equivalents thereof, which may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data is characterized by comprising the following steps:
(1) characteristic image coding: encoding a plurality of radar images to obtain a plurality of characteristic images;
(2) learning the plurality of radar images and the plurality of characteristic images by using a depth visual analogy network to obtain an extrapolated radar image; in the deep visual analogy network, if a certain relation between P and Q is known and the same relation holds between R and X, then, knowing R, finding X amounts to generating a suitable image that completes the analogy, the relation being:
(P, Q) : (R, X);
the analogy relation for the extrapolated radar image is:
(B, B_s) : (A, X)
wherein A is one of the radar images, B is one of the characteristic images, B_s is the image that follows B in the time-ordered sequence of characteristic images, and X is the extrapolated radar image;
(3) Feeding the characteristic image into an extrapolation network to obtain an extrapolation characteristic image; the extrapolation network is a convLSTM extrapolation network;
(4) performing a first optimization on the extrapolated characteristic image and the extrapolated radar image by using an LMOptimizer, and performing a second optimization on the output of the first optimization; the LMOptimizer performs the first optimization on the extrapolated characteristic image and the extrapolated radar image, wherein the first optimization uses the following loss model:

Loss = ξ_1 · (1/m) · Σ_{i=1..m} |y_i − ŷ_i| + ξ_2 · (1 − l(y, ŷ)^α · c(y, ŷ)^β · s(y, ŷ)^γ) + θ · Σ_i w_i

wherein l(y, ŷ) is the average illuminance comparison function, c(y, ŷ) is the average contrast comparison function, s(y, ŷ) is the average structure comparison function, y is the true value, ŷ is the predicted value, α is the coefficient of the average illuminance comparison function, β is the coefficient of the average contrast comparison function, γ is the coefficient of the average structure comparison function, θ is the coefficient of the offset value, w_i is the offset value, m is the number of samples, ξ_1 is the weight of the summed absolute differences between the true and predicted values, ξ_2 is the weight of the three structural losses, y_i is the true value with subscript i, and ŷ_i is the predicted value with subscript i.
2. The method of claim 1, wherein the radar image is encoded through a VGG16 network.
3. The method of claim 1, wherein the plurality of radar images are a time-wise continuous sequence of images, and wherein the plurality of feature images are a continuous sequence of images corresponding to the plurality of radar images.
4. A strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data is characterized by comprising:
the radar image acquisition module is used for acquiring a plurality of radar images in real time;
the characteristic coding module is connected with the radar image acquisition module and used for coding the radar image to obtain a plurality of characteristic images;
the depth analogy module is connected with the radar image acquisition module and the feature coding module and is used for learning the plurality of radar images and the plurality of characteristic images by using a depth visual analogy network to obtain an extrapolated radar image; in the deep visual analogy network, if a certain relation between P and Q is known and the same relation holds between R and X, then, knowing R, finding X amounts to generating a suitable image that completes the analogy, the relation being:
(P, Q) : (R, X);
the analogy relation for the extrapolated radar image is:
(B, B_s) : (A, X)
wherein A is one of the radar images, B is one of the characteristic images, B_s is the image that follows B in the time-ordered sequence of characteristic images, and X is the extrapolated radar image; the extrapolation module is connected with the feature coding module and is used for feeding the characteristic image into an extrapolation network to obtain an extrapolated characteristic image; the extrapolation network is a convLSTM extrapolation network; the double optimization module is connected with the depth analogy module and the extrapolation module and is used for receiving the extrapolated characteristic image and the extrapolated radar image, performing a first optimization on them by using an LMOptimizer, and performing a second optimization on the output of the first optimization; the LMOptimizer performs the first optimization on the extrapolated characteristic image and the extrapolated radar image, wherein the first optimization uses the following loss model:

Loss = ξ_1 · (1/m) · Σ_{i=1..m} |y_i − ŷ_i| + ξ_2 · (1 − l(y, ŷ)^α · c(y, ŷ)^β · s(y, ŷ)^γ) + θ · Σ_i w_i

wherein l(y, ŷ) is the average illuminance comparison function, c(y, ŷ) is the average contrast comparison function, s(y, ŷ) is the average structure comparison function, y is the true value, ŷ is the predicted value, α is the coefficient of the average illuminance comparison function, β is the coefficient of the average contrast comparison function, γ is the coefficient of the average structure comparison function, θ is the coefficient of the offset value, w_i is the offset value, m is the number of samples, ξ_1 is the weight of the summed absolute differences between the true and predicted values, ξ_2 is the weight of the three structural losses, y_i is the true value with subscript i, and ŷ_i is the predicted value with subscript i.
5. The system of claim 4, wherein the feature encoding module is provided with a VGG16 network, and encodes the radar image through the VGG16 network.
6. The system of claim 4, wherein the plurality of radar images are a time-wise continuous sequence of images, and wherein the plurality of feature images are a continuous sequence of images corresponding to the plurality of radar images.
CN202110570222.7A 2021-05-25 2021-05-25 Strong convection extrapolation method and system based on depth analogy network under multi-dimensional radar data Active CN113327301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110570222.7A CN113327301B (en) 2021-05-25 2021-05-25 Strong convection extrapolation method and system based on depth analogy network under multi-dimensional radar data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110570222.7A CN113327301B (en) 2021-05-25 2021-05-25 Strong convection extrapolation method and system based on depth analogy network under multi-dimensional radar data

Publications (2)

Publication Number Publication Date
CN113327301A CN113327301A (en) 2021-08-31
CN113327301B true CN113327301B (en) 2023-04-07

Family

ID=77416642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110570222.7A Active CN113327301B (en) 2021-05-25 2021-05-25 Strong convection extrapolation method and system based on depth analogy network under multi-dimensional radar data

Country Status (1)

Country Link
CN (1) CN113327301B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937078A (en) * 2009-06-30 2011-01-05 深圳市气象局 Nowcasting method and system of thunder cloud cluster based on boundary recognition and tracer technique
CN110824481A (en) * 2019-10-28 2020-02-21 兰州大方电子有限责任公司 Quantitative precipitation prediction method based on radar reflectivity extrapolation
CN112446419A (en) * 2020-10-29 2021-03-05 中山大学 Time-space neural network radar echo extrapolation forecasting method based on attention mechanism

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11128935B2 (en) * 2012-06-26 2021-09-21 BTS Software Solutions, LLC Realtime multimodel lossless data compression system and method
WO2016057859A1 (en) * 2014-10-10 2016-04-14 The Penn State Research Foundation Identifying visual storm signatures form satellite images
US11391819B2 (en) * 2018-07-18 2022-07-19 Qualcomm Incorporate Object verification using radar images
US11169514B2 (en) * 2018-08-27 2021-11-09 Nec Corporation Unsupervised anomaly detection, diagnosis, and correction in multivariate time series data
US11170504B2 (en) * 2019-05-02 2021-11-09 Keyamed Na, Inc. Method and system for intracerebral hemorrhage detection and segmentation based on a multi-task fully convolutional network
CN112418409B (en) * 2020-12-14 2023-08-22 南京信息工程大学 Improved convolution long-short-term memory network space-time sequence prediction method by using attention mechanism


Also Published As

Publication number Publication date
CN113327301A (en) 2021-08-31

Similar Documents

Publication Publication Date Title
Han et al. Convective precipitation nowcasting using U-Net Model
Zhang et al. Weather radar echo prediction method based on convolution neural network and long short-term memory networks for sustainable e-agriculture
CN110568442B (en) Radar echo extrapolation method based on confrontation extrapolation neural network
CN109001736B (en) Radar echo extrapolation method based on deep space-time prediction neural network
CN113239722B (en) Deep learning based strong convection extrapolation method and system under multi-scale
CN112132149B (en) Semantic segmentation method and device for remote sensing image
CN112507793A (en) Ultra-short-term photovoltaic power prediction method
CN110456355B (en) Radar echo extrapolation method based on long-time and short-time memory and generation countermeasure network
CN110738355A (en) urban waterlogging prediction method based on neural network
CN111242351A (en) Tropical cyclone track prediction method based on self-encoder and GRU neural network
Yao et al. Wave height forecast method with multi-step training set extension LSTM neural network
CN113344045A (en) Method for improving SAR ship classification precision by combining HOG characteristics
CN116306203A (en) Intelligent simulation generation method for marine target track
CN113327301B (en) Strong convection extrapolation method and system based on depth analogy network under multi-dimensional radar data
CN111368843B (en) Method for extracting lake on ice based on semantic segmentation
CN117131991A (en) Urban rainfall prediction method and platform based on hybrid neural network
CN116822716A (en) Typhoon prediction method, system, equipment and medium based on space-time attention
CN110751699A (en) Color reconstruction method of optical remote sensing image based on convolutional neural network
CN116148864A (en) Radar echo extrapolation method based on DyConvGRU and Unet prediction refinement structure
CN115877483A (en) Typhoon path forecasting method based on random forest and GRU
CN113341419B (en) Weather extrapolation method and system based on VAN-ConvLSTM
CN114202091A (en) Indian ocean dipole index prediction method
CN117239744B (en) Ultra-short-term photovoltaic power prediction method integrating wind cloud No. 4 meteorological satellite data
CN117172372B (en) Typhoon path prediction method and typhoon path prediction system
CN117706660A (en) Strong convection weather forecast method based on CNN-ViT technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wen Liyu

Inventor after: Luo Fei

Inventor after: Wei Xiaofei

Inventor after: Shu Hongping

Inventor after: Cao Liang

Inventor after: Liu Kui

Inventor before: Wen Liyu

Inventor before: Luo Fei

Inventor before: Wei Xiaofei

Inventor before: Shu Hongping

Inventor before: Cao Liang

Inventor before: Liu Kui

GR01 Patent grant