Background
Strong convection weather generally refers to extreme weather phenomena of a disastrous nature, such as convective gales, hail and short-term heavy precipitation, accompanied by thunderstorms. Such weather is generally characterized by sudden occurrence, rapid movement, severe conditions and extremely strong destructive power. It mainly occurs in meso- and small-scale weather systems of small spatial scale: the horizontal range is generally about ten to two or three hundred kilometers, the horizontal scale is generally less than two hundred kilometers, and in some cases the horizontal range is only tens of meters to a dozen kilometers. Its life history is short, with obvious sudden changes, lasting about one to tens of hours, or as little as a few minutes to an hour. It often occurs in convective cloud systems or in single convective clouds. When strong convection weather arrives, severe phenomena such as thunder and lightning, gales and heavy rain often accompany it, so that houses are destroyed, crops and trees are damaged, telecommunications and traffic are disrupted, and even casualties are caused.
Therefore, the prediction of such extremely destructive strong convection weather is very important: through prediction, the motion trend and occurrence of the strong convection weather can be known in advance, so that meteorological personnel can report to the relevant departments before the strong convection weather arrives, powerful measures can be taken, and unnecessary disasters can be avoided as far as possible. At present, in the extrapolation of radar images, the technologies used mainly include the optical flow method and extrapolation based on deep neural networks such as LSTM and its variants. The prior art is introduced below:
Optical flow method
The optical flow method is an earlier method applied to the extrapolation problem. Its essence is to use the correlation between adjacent frames in a serialized image sequence to find a relation between two consecutive frames, and to calculate from this relation the motion information of regions of the image between adjacent frames.
However, with the optical flow method, timeliness and accuracy are difficult to obtain simultaneously. Moreover, the theoretical foundation of the method rests on the assumption that the brightness of the same object is constant, an assumption that is rarely fully satisfied in reality, which is a major disadvantage of the optical flow method. Scholars have also noted in the related literature that the optical flow estimation step and the extrapolation step are separate, which makes parameter selection difficult.
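The idea behind motion-based extrapolation can be sketched with a crude block-matching stand-in for optical flow (an illustration, not the literature's method): estimate one displacement between two frames by exhaustive search, then advect the latest frame by that displacement under a constant-motion assumption.

```python
# Illustrative sketch: block matching as a crude stand-in for optical flow.
import numpy as np

def estimate_shift(prev_frame, next_frame, max_shift=3):
    """Return the (dy, dx) shift minimising the sum of squared differences."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(prev_frame, (dy, dx), axis=(0, 1)) - next_frame) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def extrapolate(prev_frame, next_frame):
    """Advect the latest frame by the estimated motion (constant-motion assumption)."""
    dy, dx = estimate_shift(prev_frame, next_frame)
    return np.roll(next_frame, (dy, dx), axis=(0, 1))

if __name__ == "__main__":
    frame0 = np.zeros((16, 16)); frame0[4:8, 4:8] = 1.0   # a synthetic "echo" blob
    frame1 = np.roll(frame0, (2, 1), axis=(0, 1))          # blob moved by (2, 1)
    frame2 = extrapolate(frame0, frame1)
    print(np.array_equal(frame2, np.roll(frame1, (2, 1), axis=(0, 1))))  # True
```

The separation criticised above is visible even here: the motion estimate and the advection step are independent stages, each with its own parameters.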
Advantages of deep-learning-based methods
A convolutional neural network comprises a plurality of hidden layers with many hidden nodes, so the expressive capacity of the neural network is very strong. For this reason, neural network techniques have been applied to the radar extrapolation problem, and later experiments have shown better results. The more common related methods in extrapolation networks are described below.
1. Extrapolation based on LSTM/ConvLSTM
Currently, some scholars realize extrapolation prediction of radar using the LSTM (long short-term memory artificial neural network). Fitting sequence data with an LSTM, which discards part of the information through its forget gate and output gate, can alleviate the problem of vanishing gradients, and it can handle sequences of small magnitude. Scholars in Hong Kong, China introduced a convolution operation on the basis of the LSTM and proposed the ConvLSTM (convolutional long short-term memory artificial neural network), which can extract spatial features, is better suited to image-based time-series data, and is more effective for feature extraction from images. ConvLSTM is a variant of LSTM; the main change is that the weight computations with W become convolution operations, so that image features can be extracted.
However, extrapolation with LSTM alone is not sufficient. In existing RNN-based models, LSTM still struggles with sequences of large magnitude or long duration: once the time span of the LSTM is large and the network is deep, the final amount of computation is large and time-consuming.
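The change from LSTM to ConvLSTM described above can be sketched in NumPy: the affine maps of the gates become 2-D convolutions, so the gates themselves are spatial maps and preserve image structure. The single-channel setting and 3×3 kernels are illustrative choices, not the networks used in the literature.

```python
# Minimal single-channel ConvLSTM step (illustrative only).
import numpy as np

def conv2d_same(x, k):
    """'Same'-padded 2-D convolution (kernel assumed square, odd-sized)."""
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, kernels):
    """One ConvLSTM update; `kernels` maps each gate to (input, hidden) kernels."""
    pre = {}
    for name in ("i", "f", "o", "g"):
        kx, kh = kernels[name]
        pre[name] = conv2d_same(x, kx) + conv2d_same(h, kh)
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = np.tanh(pre["g"])
    c_new = f * c + i * g          # cell state: forget old content, write new
    h_new = o * np.tanh(c_new)     # hidden state: gated read-out
    return h_new, c_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ks = {n: (rng.normal(size=(3, 3)) * 0.1, rng.normal(size=(3, 3)) * 0.1)
          for n in ("i", "f", "o", "g")}
    x = rng.normal(size=(8, 8))
    h, c = convlstm_step(x, np.zeros((8, 8)), np.zeros((8, 8)), ks)
    print(h.shape)  # (8, 8)
```

Note that the hidden and cell states keep the spatial shape of the input, which is exactly why ConvLSTM suits image-based time series.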
2. Improvement based on ConvGRU and VGGNet
Currently, with reference to the ConvLSTM network structure, some researchers have proposed the ConvGRU network model combining a convolutional neural network (CNN) with GRUs, the GRU network structure with its control gates being simpler than the LSTM. The ConvGRU network model is preferable here because it has a faster training speed and smaller memory requirements than the ConvLSTM structure.
The related literature also improves the convolution layers of the ConvGRU based on the VGGNet network. The improved architecture replaces large convolution kernels with stacks of several small convolution kernels, which reduces the number of training parameters while enhancing the network's feature extraction capability. The advantage of this model is that it uses ConvGRU structures instead of ConvLSTM structures while stacking them in a multi-level framework, combining the spatial feature extraction capability of the convolution structure with the memory capability of the GRU structure, which readily handles time-series problems. Finally, the forecasting effects of this model and of the optical flow method were compared through experiments, verifying the applicability of the improved model to the short-term rainfall forecasting problem.
Combining VGG with the ConvGRU makes the network hierarchy deeper, but simply building the hierarchy of a ConvGRU-based extrapolation model with a VGG network still leaves room for improvement in effect.
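The parameter saving from the small-kernel substitution above can be checked with simple arithmetic (the channel count C = 64 is an illustrative choice): two stacked 3×3 convolutions cover the same 5×5 receptive field as one 5×5 convolution with fewer weights.

```python
# Parameter count of a kernel: k*k weights per (input channel, output channel) pair.
def conv_params(kernel, c_in, c_out):
    return kernel * kernel * c_in * c_out

C = 64
one_5x5 = conv_params(5, C, C)                         # one 5x5 layer
two_3x3 = conv_params(3, C, C) + conv_params(3, C, C)  # two stacked 3x3 layers
print(one_5x5, two_3x3)  # 102400 73728
```

The stacked variant also inserts an extra nonlinearity between the two 3×3 layers, which is part of why feature extraction improves.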
3. Radar echo extrapolation based on GAN algorithm
Scholars have used the GAN (generative adversarial network) for short-term prediction and carried out relevant experiments and practice. The GAN method mainly extracts image features from a series of radar observations according to the principle of the convolutional neural network, thereby establishing a prediction network model, which is optimized through a loss function. Extrapolation experiments based on 4 relevant weather processes in a certain area and year show that the shape, intensity and echo position extrapolated by the GAN in short-term forecasting of convective weather are basically consistent with the observed conditions in most cases, so the GAN technique works well for radar extrapolation. However, the literature also indicates that the echo range extrapolated by the GAN method is large, and in particular that the extrapolation effect is poor mainly for the prediction of stratiform cloud precipitation. Testing 3 levels of echo intensity forecast 1 h ahead on 18 examples of precipitation caused by an easterly system, a southwest monsoon system, a westerly-belt system and typhoon precipitation, the GAN network model forecasts medium-intensity echoes well, but there is still room for improvement in forecasting strong echoes.
In radar echo extrapolation with the GAN method, the network works as follows: the first-generation generator G1 generates predicted data from the historical serialized data, and the generated echo and the true echo are then fed into the first-generation discriminator D1 for learning, so that D1 can reliably distinguish the generated radar image from the true radar image. Then the second-generation generator G2 is trained so that the radar images it generates can fool the first-generation discriminator D1; the second-generation discriminator D2 is then trained, and so on. The generator G and the discriminator D play a min-max game against each other, continuously optimizing themselves through interaction during training until a dynamic balance is reached, namely, neither G nor D can improve further and the false samples can no longer be distinguished from the true samples; the output echoes are then used for short-term prediction.
It should be noted that during model training, the historical sequence of radar pictures and the relevant data of the prediction target within 1 h are used as training samples. Since the GAN method itself has no explicit "forecasting" process, the learned output of the model during training serves as the forecast for the coming 1 h.
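The alternating min-max schedule described above can be sketched with a deliberately tiny least-squares GAN on scalars (a hypothetical toy, not the patent's radar network): G is a constant generator G(z) = theta, D is a linear scorer D(x) = w·x + b, and each round first improves D on a (real, fake) pair and then improves G against the frozen D.

```python
# Toy scalar least-squares GAN illustrating the alternating G/D updates.
import numpy as np

real = 2.0  # stand-in for a real data statistic

def d_loss(w, b, theta):
    # discriminator objective: D(real) -> 1, D(fake) -> 0
    return (w * real + b - 1.0) ** 2 + (w * theta + b) ** 2

def g_loss(w, b, theta):
    # generator objective: D(fake) -> 1 (fool the frozen discriminator)
    return (w * theta + b - 1.0) ** 2

def train(rounds=300, lr=0.02):
    w, b, theta = 0.1, 0.0, 0.0
    for _ in range(rounds):
        # discriminator step (analytic gradients of d_loss)
        gw = 2 * (w * real + b - 1) * real + 2 * (w * theta + b) * theta
        gb = 2 * (w * real + b - 1) + 2 * (w * theta + b)
        w, b = w - lr * gw, b - lr * gb
        # generator step against the frozen D
        gt = 2 * (w * theta + b - 1) * w
        theta -= lr * gt
    return w, b, theta

if __name__ == "__main__":
    w, b, theta = train()
    print(np.isfinite(theta))  # True
```

Each loop iteration is one "generation" in the G1, D1, G2, D2, ... schedule; dynamic balance corresponds to neither loss improving further.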
In addition, conventional interpolation methods such as the inverse distance weighting method, the kriging method and the trend surface method suffer from large errors, complex calculation, high requirements on data distribution, limited applicability, and the like.
In summary, most existing strong convection extrapolation technologies are based on single-scale research with simple model hierarchies, and most extrapolation research uses the MSE loss, which causes the quality of repeated extrapolation to degrade rapidly, so an optimization method better suited to the service needs to be provided. Moreover, the extrapolation networks proposed by most scholars at present accept only one type of image data, i.e. have only a single input, and no example of feeding several different kinds of feature data into a network for learning has been disclosed; the expressiveness of a single image input is therefore limited, and the learning ability needs further improvement.
Disclosure of Invention
In view of this, an object of the present invention is to provide a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data, which learns radar images using a depth visual analogy network and optimizes the network in the process, so as to effectively improve the quality of the extrapolated radar images.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data comprises the following steps:
(1) Characteristic image encoding: encoding the radar images to obtain a plurality of characteristic images;
(2) Learning the plurality of radar images and the plurality of characteristic images by using a depth vision analogy network to obtain an extrapolation radar image;
(3) Feeding the characteristic image into an extrapolation network to obtain an extrapolation characteristic image;
(4) Performing first optimization on the extrapolated characteristic image and the extrapolated radar image by using an optimizer, and performing second optimization on the output after the first optimization; wherein the first optimization uses the following loss model:
L(y, ŷ) = ξ₁ · (1/m) Σᵢ₌₁ᵐ |yᵢ − ŷᵢ| + ξ₂ · [1 − l(y, ŷ)^α · c(y, ŷ)^β · s(y, ŷ)^γ] + θ · Σᵢ wᵢ
wherein l(y, ŷ) is the average illumination comparison function, c(y, ŷ) is the average contrast comparison function, and s(y, ŷ) is the average structure comparison function; y is the true value and ŷ is the predicted value; α is the coefficient of the average illumination comparison function, β is the coefficient of the average contrast comparison function, γ is the coefficient of the average structure comparison function, and θ is the coefficient of the offset value; wᵢ is an offset value; m is the number of samples; ξ₁ is the weight of the sum of differences between the true values and the predicted values; ξ₂ is the weight of the loss of the three structures; yᵢ is the true value with subscript i, and ŷᵢ is the predicted value with subscript i.
Further, the radar image is encoded through a VGG16 network.
Further, the plurality of radar images are continuous sequence images in time, and the plurality of feature images are continuous sequence images corresponding to the plurality of radar images.
Further, the analogy relation for the extrapolated radar image obtained in step (2) is:
(B, Bₛ) : (A, X)
wherein A is one of the radar images, B is one of the characteristic images, Bₛ is the image subsequent to B in the time-ordered sequence of characteristic images, and X is the extrapolated radar image.
Further, the extrapolation network in step (3) is a convLSTM extrapolation network.
Further, an LMOptimizer optimizer is adopted in the step (4).
In view of the above, a second object of the present invention is to provide a strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data, whose modules encode, model and optimize radar images, thereby improving the quality of the extrapolated radar images.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data comprises:
the radar image acquisition module is used for acquiring a plurality of radar images in real time;
the characteristic coding module is connected with the radar image acquisition module and used for coding the radar image to obtain a plurality of characteristic images;
the depth analogy module is connected with the radar image acquisition module and the feature coding module and used for learning the plurality of radar images and the plurality of feature images by using a depth vision analogy network to obtain an extrapolated radar image;
the extrapolation module is connected with the feature coding module and used for feeding the feature image into an extrapolation network to obtain an extrapolated feature image;
a double optimization module, connected with the depth analogy module and the extrapolation module, for receiving the extrapolated characteristic image and the extrapolated radar image, performing a first optimization on them, and performing a second optimization on the output of the first optimization; wherein the first optimization uses the following loss model:
L(y, ŷ) = ξ₁ · (1/m) Σᵢ₌₁ᵐ |yᵢ − ŷᵢ| + ξ₂ · [1 − l(y, ŷ)^α · c(y, ŷ)^β · s(y, ŷ)^γ] + θ · Σᵢ wᵢ
wherein l(y, ŷ) is the average illumination comparison function, c(y, ŷ) is the average contrast comparison function, and s(y, ŷ) is the average structure comparison function; y is the true value and ŷ is the predicted value; α is the coefficient of the average illumination comparison function, β is the coefficient of the average contrast comparison function, γ is the coefficient of the average structure comparison function, and θ is the coefficient of the offset value; wᵢ is an offset value; m is the number of samples; ξ₁ is the weight of the sum of differences between the true values and the predicted values; ξ₂ is the weight of the loss of the three structures; yᵢ is the true value with subscript i, and ŷᵢ is the predicted value with subscript i.
Furthermore, the characteristic encoding module is provided with a VGG16 network, and encodes the radar image through the VGG16 network.
Further, the plurality of radar images are continuous sequence images in time, and the plurality of feature images are continuous sequence images corresponding to the plurality of radar images.
Further, in the depth analogy module, the analogy relation of the extrapolated radar image is as follows:
(B, Bₛ) : (A, X)
wherein A is one of the radar images, B is one of the characteristic images, Bₛ is the image subsequent to B in the time-ordered sequence of characteristic images, and X is the extrapolated radar image.
Further, the extrapolation network is a convLSTM extrapolation network.
Advantageous effects
The invention provides a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data. Characteristic images obtained by encoding radar images are fed into an analogy network and an extrapolation network to obtain extrapolated characteristic images and extrapolated radar images; these are optimized separately by an optimizer, the outputs are then optimized a second time, and the optimizers perform gradient calculation in the process to optimize the parameters, finally yielding a high-quality extrapolated radar image. Meanwhile, the invention also provides a strong convection extrapolation system based on the depth analogy network under multi-dimensional radar data.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The examples are given for the purpose of better illustration of the invention, but the invention is not limited to the examples. Therefore, those skilled in the art should make insubstantial modifications and adaptations to the embodiments of the present invention in light of the above teachings and remain within the scope of the invention.
Example 1
Fig. 1 is a schematic structural diagram of an embodiment of a strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data according to the present invention. Specifically, the strong convection extrapolation system based on a depth analogy network under multi-dimensional radar data comprises:
a radar image acquisition module 1, configured to acquire a plurality of radar images in real time;
further, the radar image acquisition module 1 may include a directional antenna, a transmitter, a receiver, an antenna controller, a display, a camera, an electronic computer, and an image transmission part, and is configured to acquire a plurality of radar images in real time;
the characteristic coding module 2 is connected with the radar image acquisition module 1 and is used for coding the radar image to obtain a plurality of characteristic images;
furthermore, the feature coding module 2 contains a coder f and a decoder g, the coder f and the decoder g are parameterized into a deep convolutional neural network, image features are extracted, and the coder can generate coding information which is easy to predict after training; the feature encoding module 2 is further provided with a VGG16 network, in this embodiment, the VGG16 network is used to perform feature extraction on the radar image to obtain an encoded image, and fig. 2 describes a process of extracting the feature image by using the VGG16, in which first, the obtained radar image is grayed, and then the radar image is fed into the VGG16 network to perform feature extraction and Encoder encoding, and finally, the encoded feature image is obtained.
The depth analogy module 3 is connected with the radar image acquisition module 1 and the feature coding module 2 and is used for learning a plurality of radar images and a plurality of feature images by using a depth vision analogy network to obtain an extrapolated radar image;
furthermore, the plurality of radar images are continuous sequence images according to time, and the plurality of characteristic images are continuous sequence images corresponding to the plurality of radar images;
in one embodiment, in the depth analogy module, the analogy relation for obtaining the extrapolated radar image is:
(B, Bₛ) : (A, X)
wherein A is one of the radar images, B is one of the characteristic images, Bₛ is the image subsequent to B in the time-ordered sequence of characteristic images, and X is the extrapolated radar image.
The extrapolation module 4 is connected with the feature coding module 2 and used for feeding the feature image into an extrapolation network to obtain an extrapolated feature image;
in this embodiment, the extrapolation network is a convLSTM extrapolation network.
And the double optimization module 5 is connected with the depth analogy module 3 and the extrapolation module 4 and is used for receiving the extrapolation characteristic image and the extrapolation radar image, performing first optimization on the extrapolation characteristic image and the extrapolation radar image and performing second optimization on the output after the first optimization.
Further, an LMOptimizer is arranged in the double optimization module 5. The double optimization module 5 adopts the LMOptimizer to simultaneously optimize the parameters of the depth analogy network in the depth analogy module 3 and of the extrapolation network in the extrapolation module 4, synchronously improving model performance in a double-optimization manner. Specifically, the depth analogy network and the extrapolation network synchronously perform gradient calculation through their respective optimizers to obtain optimized parameters, a weighted secondary optimization is performed again, and a QuaOptimizer generates the model.
the LMOptimizer optimizes the extrapolation feature image and the extrapolation radar image for the first time, and a new optimization mode is provided in this embodiment, which specifically includes: the first optimization used the following loss model:
L(y, ŷ) = ξ₁ · (1/m) Σᵢ₌₁ᵐ |yᵢ − ŷᵢ| + ξ₂ · [1 − l(y, ŷ)^α · c(y, ŷ)^β · s(y, ŷ)^γ] + θ · Σᵢ wᵢ
wherein l(y, ŷ) is the average illumination comparison function, c(y, ŷ) is the average contrast comparison function, and s(y, ŷ) is the average structure comparison function; y is the true value and ŷ is the predicted value; α is the coefficient of the average illumination comparison function, β is the coefficient of the average contrast comparison function, γ is the coefficient of the average structure comparison function, and θ is the coefficient of the offset value; wᵢ is an offset value; m is the number of samples; ξ₁ is the weight of the sum of differences between the true values and the predicted values; ξ₂ is the weight of the loss of the three structures; yᵢ is the true value with subscript i, and ŷᵢ is the predicted value with subscript i.
Preferably, since the plurality of radar images are time-wise continuous sequence images, the extrapolation module 4 in this embodiment obtains a plurality of time-wise continuous extrapolated feature images and the depth analogy module 3 obtains a plurality of time-wise continuous extrapolated radar images, and the double optimization module 5 continuously optimizes the extrapolation module 4 and the depth analogy module 3 over time to form a training flow, making the final model more reliable.
Example 2
Referring to fig. 3, based on the system of embodiment 1, this embodiment provides a strong convection extrapolation method based on a depth analogy network under multi-dimensional radar data, comprising the following steps:
s100: and (3) characteristic image coding: encoding the radar images to obtain a plurality of characteristic images; then, step S200 is executed;
in the embodiment, the radar image acquisition system can be used for acquiring a plurality of radar images in real time through parts such as a directional antenna, a transmitter, a receiver, an antenna controller, a display, a camera, an electronic computer, image transmission and the like; the plurality of radar images are time-wise continuous sequence images, and the plurality of feature images are continuous sequence images corresponding to the plurality of radar images.
Further, the encoder is trained to generate encoded information that is easily predictable. Feature extraction is performed on the radar image using a VGG16 network to obtain an encoded image; the encoding stage comprises an encoder f and a decoder g.
S200: learning a plurality of radar images and a plurality of characteristic images by using a depth vision analogy network to obtain an extrapolation radar image; then, step S300 is executed;
further, the analogy relationship in obtaining the extrapolated radar image is:
(B, Bₛ) : (A, X)
wherein A is one of the radar images, B is one of the characteristic images, Bₛ is the image subsequent to B in the time-ordered sequence of characteristic images, and X is the extrapolated radar image; the specific radar image analogy generation process can refer to fig. 4.
In this embodiment, the key to solving the analogy problem is to learn the relations between images, and this idea is feasible and appropriate when introduced into the task of radar image extrapolation. Specifically: in the deep visual analogy network, if a certain relation between P and Q is known and the same relation holds within the same class between R and X, then, knowing R, finding X is actually to generate a suitable image that completes the analogy; the relation is:
(P,Q):(R,X)
the method in this embodiment applies this idea to the extrapolation task in the following specific process:
firstly, a radar basic reflectivity image at an elevation angle of 0.5 degrees is used as an original image to be analyzed and is marked as an image A, and a characteristic image is obtained after the characteristics are extracted by using a VGG network through the step S100 and is marked as an image B.
From the understanding of the extrapolation task, given a set of n time-wise consecutive sequence images I = (i₁, i₂, ..., iₖ, ..., iₙ₋₁, iₙ), if the k-th image is predicted, the actual k-th image iₖ is the ground truth of that prediction. Accordingly, the image following A in its sequence is denoted Aₛ, and the same encoding as in step S100 is applied to it to obtain Bₛ. Using A, B and Bₛ, the depth visual analogy network solves the relation:
(B, Bₛ) : (A, X)
That is, for image A, the network computes X, i.e. the predicted image.
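Under the simplifying assumption of identity encoder and decoder (a deliberate toy; the patent parameterizes f and g as deep convolutional networks), the additive form of the analogy (B, Bs) : (A, X) reduces to feature arithmetic: X = g(f(Bs) − f(B) + f(A)), i.e. the feature-space change from B to Bs is applied to A.

```python
# Toy additive analogy with identity encoder/decoder.
import numpy as np

f = g = lambda t: t  # identity stand-ins for the learned encoder/decoder

def analogy_add(A, B, B_s):
    """Apply the feature-space change B -> B_s to image A."""
    return g(f(B_s) - f(B) + f(A))

if __name__ == "__main__":
    B = np.zeros((6, 6)); B[1:3, 1:3] = 1.0   # feature image at time t
    B_s = np.roll(B, 1, axis=1)                # feature image at time t+1 (shifted)
    A = B + 0.5                                # radar image at time t (offset copy)
    X = analogy_add(A, B, B_s)                 # extrapolated radar image
    print(np.allclose(X, np.roll(A, 1, axis=1)))  # True
```

With these identity maps the analogy exactly reproduces the shift B underwent, which is the intuition the deep network learns in feature space.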
Specifically, referring to fig. 4, the generation process of the analogy network rests on the following mathematical principle. Let a, b, c, d form the relation pair (a, b) : (c, d), where d is the image to be solved, a, b, c are the three input pictures, g is the decoder and f is the encoder. In the additive analogy network, the three pictures a, b and c are each encoded; the difference between a and b is obtained as f(b) − f(a) and added to the encoding of c, i.e. f(c), to obtain the required prediction f(d_prediction). After decoding by g, the norm of the difference between the decoded result and the true value d gives the additive analogy loss:
L_add = Σ ‖d − g(f(b) − f(a) + f(c))‖²
Similarly, for the multiplicative analogy loss L_mul, the three pictures are each encoded, the difference f(b) − f(a) interacts with f(c) through the weight tensor W, and the result is added to f(c) to obtain d_prediction; taking the norm of the difference between the prediction and the true value d gives:
L_mul = Σ ‖d − g(f(c) + W ×₁ (f(b) − f(a)) ×₂ f(c))‖²
L_deep is the loss of the relation-solving analogy network, in which the function h learns the relation between the encoded difference of a and b and the encoding f(c):
L_deep = Σ ‖d − g(f(c) + h([f(b) − f(a); f(c)]))‖²
Writing the increment as T(x, y) with x = f(b) − f(a) and y = f(c), the three losses are unified through the three forms of T, where the MLP is a multilayer neural network. In this embodiment, the structure of the analogy network can refer to fig. 6: a, b and c enter the Encoder network f for encoding, then enter the increment function T to obtain L_add, L_deep and L_mul, and the result is decoded in the Decoder g to finally obtain the predicted image d.
In one embodiment, the encoder f and decoder g are parameterized as deep convolutional neural networks that extract image features. L_add is simple and easy to implement, but in some situations, such as rotation, L_add may not be ideal; L_mul and L_deep can solve these problems. In L_mul, the increment is generated using the interaction between f(b) − f(a) and f(c); W is a three-dimensional tensor, and in order to reduce the number of weights a parameterized form of the tensor is used, with W ∈ R^(K×K×K), where K is the dimension of the image space after the encoder and D that of the image space after the decoder, that is:
f: R^D → R^K, g: R^K → R^D
and in general K < D. Here W ×₁ v ×₂ w ∈ R^K for vectors v, w ∈ R^K, and the MLP is a multi-layer perceptron.
By introducing a regularizer R, the prediction effect is improved; the overall training target is a weighted combination of the analogy prediction loss and the regularizer. For example, L_deep + αR with α = 0.01 is an optimization of L_deep: the final training target augments the prediction loss L_deep of the relation analogy network with the regularization term R under the influence factor α, schematically as in fig. 4.
The analogy generation process can refer to FIG. 5;
s300: feeding the characteristic image into an extrapolation network to obtain an extrapolation characteristic image; then, step S400 is performed;
s400: and performing gradient calculation on the depth analogy network and the extrapolation network by using an optimizer to obtain optimized parameters, performing weighted quadratic optimization again, and generating a model.
Through steps S100–S300, the characteristic image Bₛ and the extrapolated characteristic image Bₛ' are obtained and optimized against each other, as are the real radar image Aₛ and the extrapolated radar image Aₛ'; an LMOptimizer is adopted to simultaneously optimize the parameters in the depth analogy network and in the extrapolation network, synchronously improving model performance in a double-optimization manner.
The extrapolated radar image Aₛ' is obtained by the depth analogy network from the real radar image A and the real characteristic images B and Bₛ, while Bₛ' is obtained by the extrapolation network. The depth analogy network and the extrapolation network synchronously perform gradient calculation through their respective optimizers to obtain optimized parameters, a weighted secondary optimization is performed again, and the model is generated. The specific optimization can be seen in fig. 7.
Referring to fig. 8, the detailed structure of the double-optimized extrapolation network in this embodiment (i.e. VGG16 plus depth analogy network) is as follows. Let A be a radar image, fed simultaneously into two networks: the VGG network, to obtain the feature image B, and the Snet depth analogy network. The feature image B is then fed in turn into two networks: the convLSTM extrapolation network, to obtain the extrapolated feature image, and the Snet depth analogy network. The radar image Aₛ is the image following A in the sequence; it is sent to the VGG network (i.e. the Encoder coding network) to obtain the characteristic image Bₛ, which is then fed to the Snet network, so that by the Snet analogy theory mentioned above the radar image Aₛ' can be extrapolated, while the convLSTM extrapolation network can directly extrapolate the feature image Bₛ'. The advantage is that the radar image and the code-based feature image are extrapolated simultaneously using two networks; these are then optimized separately with an LMOptimizer, and, as described in detail below, a secondary optimization is performed on the output of the dual network.
The L2 loss, i.e. the mean square error (MSE), is the most commonly used regression loss function; it is the sum of the squared differences between the target values and the predicted values. However, it cannot well evaluate the errors of an image in illuminance, contrast and structure. Moreover, the L2 loss easily produces blurring when dealing with image problems, so it is not appropriate to use L2 alone as the loss function of the extrapolation problem, because the image quality is seriously degraded when the extrapolation time is long. This embodiment therefore proposes a new way (LMOptimizer) to calculate the image loss, combining the two losses to improve the quality of the extrapolated radar image.
Specifically, the mean absolute error can well reflect the actual situation of the prediction error, but it cannot well evaluate the errors of an image in illuminance, contrast and structure, so the following comparison functions for these errors are introduced:
the average illuminance comparison function is set to:

$$l(y,\hat{y}) = \frac{2\mu_y\mu_{\hat{y}} + c_1}{\mu_y^2 + \mu_{\hat{y}}^2 + c_1}, \qquad \mu_y = \frac{1}{n}\sum_{i=1}^{n} y_i$$

where $i$ is the summation subscript, $y$ is the true value and $\hat{y}$ is the predicted value, $\mu_y$ is the illuminance of the true value and $\mu_{\hat{y}}$ is the illuminance of the predicted value, $m$ is the number of samples, $c_1$ is a constant, $y_i$ is the true value with subscript $i$, and $n$ is the number of values; that is, the average gray level is taken as the estimate of the illuminance.
the average contrast comparison function is set to:

$$c(y,\hat{y}) = \frac{2\sigma_y\sigma_{\hat{y}} + c_2}{\sigma_y^2 + \sigma_{\hat{y}}^2 + c_2}$$

where, as is known from measurement systems, the average gray value is removed from the signal and the standard deviation $\sigma$ is used to measure the average contrast;
the average structure comparison function is set to:

$$s(y,\hat{y}) = \frac{\sigma_{y\hat{y}} + c_3}{\sigma_y\sigma_{\hat{y}} + c_3}$$

where $\sigma_y$ and $\sigma_{\hat{y}}$ are the standard deviations of $y$ and $\hat{y}$ respectively, $\sigma_{y\hat{y}}$ is the covariance of $y$ and $\hat{y}$, and $c_1$, $c_2$, $c_3$ are constants introduced to avoid system errors caused by a denominator of 0;
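Under the definitions above, the three comparison functions can be sketched in Python as follows. The constants $c_1$, $c_2$, $c_3$ are assumed small values, with $c_3 = c_2/2$ as in the standard SSIM formulation; the embodiment itself does not fix them.

```python
import numpy as np

# Assumed stabilizing constants (not fixed by the embodiment).
C1, C2 = 1e-4, 9e-4
C3 = C2 / 2

def luminance(y, y_hat):
    """Average illuminance comparison: mean gray level as illuminance estimate."""
    mu_y, mu_p = y.mean(), y_hat.mean()
    return (2 * mu_y * mu_p + C1) / (mu_y**2 + mu_p**2 + C1)

def contrast(y, y_hat):
    """Average contrast comparison: standard deviation as contrast measure."""
    s_y, s_p = y.std(), y_hat.std()
    return (2 * s_y * s_p + C2) / (s_y**2 + s_p**2 + C2)

def structure(y, y_hat):
    """Average structure comparison: covariance over product of std deviations."""
    cov = ((y - y.mean()) * (y_hat - y_hat.mean())).mean()
    return (cov + C3) / (y.std() * y_hat.std() + C3)

y = np.random.default_rng(1).random((64, 64))
# For identical images every component is (numerically) 1.
print(luminance(y, y), contrast(y, y), structure(y, y))
```

Each function attains its maximum of 1 when the true and predicted images agree, which is what makes the $(1 - \cdot)$ form usable as a loss term.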
adding the mean absolute error and a regularization term, the loss is defined as follows:

$$\mathrm{Loss} = \xi_1\cdot\frac{1}{m}\sum_{i=1}^{m}\left|y_i-\hat{y}_i\right| + \xi_2\cdot\left(1-\left[l(y,\hat{y})\right]^{\alpha}\left[c(y,\hat{y})\right]^{\beta}\left[s(y,\hat{y})\right]^{\gamma}\right) + \theta\sum_{i} w_i$$

where $\xi_1 + \xi_2 = 1$, $\xi_1$ is the weight of the sum of differences between the true and predicted values, $\xi_2$ is the weight of the three structural losses, $\alpha$ is the coefficient of the average illuminance comparison function, $\beta$ is the coefficient of the average contrast comparison function, $\gamma$ is the coefficient of the average structure comparison function, $\theta$ is the coefficient of the offset values, and $w_i$ are the offset values. When the parameters $\alpha = \beta = \gamma = 1$ in the Loss (taking $c_3 = c_2/2$), the above formula can be expressed as:

$$\mathrm{Loss} = \xi_1\cdot\frac{1}{m}\sum_{i=1}^{m}\left|y_i-\hat{y}_i\right| + \xi_2\cdot\left(1-\frac{(2\mu_y\mu_{\hat{y}}+c_1)(2\sigma_{y\hat{y}}+c_2)}{(\mu_y^2+\mu_{\hat{y}}^2+c_1)(\sigma_y^2+\sigma_{\hat{y}}^2+c_2)}\right) + \theta\sum_{i} w_i$$
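A minimal sketch of the combined loss follows. The constants, the default weights, and the exact form of the structural term (as $1 - l^{\alpha}c^{\beta}s^{\gamma}$) and of the regularization over the offset values $w_i$ are assumptions of this sketch, not values fixed by the embodiment.

```python
import numpy as np

def lm_loss(y, y_hat, w, xi1=0.84, xi2=0.16, alpha=1.0, beta=1.0, gamma=1.0,
            theta=1e-4, c1=1e-4, c2=9e-4):
    """Sketch of the combined loss: xi1 * MAE + xi2 * structural term
    + theta * regularization over the offset values w (assumed form)."""
    c3 = c2 / 2
    mae = np.abs(y - y_hat).mean()                       # mean absolute error
    mu_y, mu_p = y.mean(), y_hat.mean()                  # illuminance estimates
    s_y, s_p = y.std(), y_hat.std()                      # contrast estimates
    cov = ((y - mu_y) * (y_hat - mu_p)).mean()           # structure (covariance)
    l = (2 * mu_y * mu_p + c1) / (mu_y**2 + mu_p**2 + c1)
    c = (2 * s_y * s_p + c2) / (s_y**2 + s_p**2 + c2)
    s = (cov + c3) / (s_y * s_p + c3)
    return xi1 * mae + xi2 * (1 - l**alpha * c**beta * s**gamma) + theta * np.sum(w)

rng = np.random.default_rng(2)
y = rng.random((64, 64))
print(lm_loss(y, y, w=np.zeros(4)))           # identical images, zero offsets: ~0
print(lm_loss(y, y + 0.1, w=np.zeros(4)) > 0)  # any mismatch increases the loss
```

With $\xi_1 = 0.84$ and $\xi_2 = 0.16$ the sketch also respects the preferred condition $\xi_2 \geq 0.16$ stated below.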
preferably, when the coefficient ξ 2 The effect on the extrapolation task is better when the value is not less than 0.16.
Preferably, according to the above-mentioned double-optimized extrapolation network structure and referring to the extrapolation network training flow of fig. 9, each unit formed by a dotted-line portion is referred to as a block. Starting from the two input radar images A and A_s, the subsequent radar images can be predicted by connecting a plurality of blocks in the manner of fig. 7, realizing the extrapolation of the subsequent images. The input image of the third block is the image extrapolated by the convLSTM in the second block; note that the Snet network is always being optimized, which also makes the final model reliable.
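The block chaining described here can be sketched as a simple loop. `run_blocks` and the damping stand-in for the convLSTM are hypothetical names for illustration; only the control flow (each block consuming the feature image extrapolated by the previous block) follows the description.

```python
def run_blocks(feat_b, n_blocks, extrapolate):
    """Chain n_blocks extrapolation blocks; block k's input is the
    feature extrapolated by block k-1, as in the fig. 9 training flow."""
    feats = [feat_b]
    for _ in range(n_blocks):
        feats.append(extrapolate(feats[-1]))
    return feats[1:]  # the extrapolated sequence, one entry per block

# Stand-in "convLSTM" that damps the feature by 10% per step.
preds = run_blocks(1.0, 3, lambda f: 0.9 * f)
print(preds)  # approximately [0.9, 0.81, 0.729]
```

Because each block reuses the previous block's output, errors compound with depth, which is why the continual optimization of the Snet network across blocks matters for the reliability of the final model.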
While the present invention has been described with reference to the particular illustrative embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but is intended to cover various modifications, equivalent arrangements, and equivalents thereof, which may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.