CN112668615B - Satellite cloud picture prediction method based on depth cross-scale extrapolation fusion - Google Patents

Satellite cloud picture prediction method based on depth cross-scale extrapolation fusion Download PDF

Info

Publication number
CN112668615B
Authority
CN
China
Prior art keywords
scale
cloud picture
cloud
prediction
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011483471.4A
Other languages
Chinese (zh)
Other versions
CN112668615A (en)
Inventor
程文聪
王志刚
王攀峰
邢平
张文军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
93213 Unit Of Pla
Original Assignee
93213 Unit Of Pla
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 93213 Unit Of Pla filed Critical 93213 Unit Of Pla
Priority to CN202011483471.4A priority Critical patent/CN112668615B/en
Publication of CN112668615A publication Critical patent/CN112668615A/en
Application granted granted Critical
Publication of CN112668615B publication Critical patent/CN112668615B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a satellite cloud picture prediction method based on depth cross-scale extrapolation fusion, which comprises the following steps: constructing a training set observation cloud picture sequence; adjusting the size proportion of observation cloud pictures in the training set observation cloud picture sequence to obtain a training set multi-scale cloud picture sequence; inputting the training set multi-scale cloud picture sequence into a depth prediction network for training to obtain multi-scale depth prediction model parameters; taking the training set observation cloud picture sequence as an input, and outputting a training set multi-scale prediction cloud picture by a multi-scale depth prediction model; amplifying the training set multi-scale prediction cloud picture to the size of the observation cloud picture to obtain a training set adjustment prediction cloud picture; obtaining multi-scale prediction cloud picture fusion model parameters; and inputting the real-time observation cloud picture sequence serving as operation data into the multi-scale depth prediction model and the multi-scale prediction cloud picture fusion model, and outputting a prediction result cloud picture. The method can realize the prediction of the satellite cloud picture sequence with high definition and high accuracy.

Description

Satellite cloud picture prediction method based on depth cross-scale extrapolation fusion
Technical Field
The invention relates to the technical field of meteorology, in particular to a satellite cloud picture prediction method based on depth cross-scale extrapolation fusion.
Background
With the rapid development of satellite remote sensing technology, satellite cloud pictures play an increasingly important role in weather analysis and forecasting. Satellite cloud pictures can be used to observe cloud system structures of different scales and their activity patterns, to track the evolution of large-scale weather systems, and to analyze the trend and state of atmospheric motion; they are therefore important data for forecasters analyzing weather phenomena and forecasting in real time, and one of the important means of severe-weather early warning and forecasting. Because the satellite cloud picture is real-time monitoring data, only cloud pictures of past, recent times can be obtained in actual operations; moreover, limited by satellite transmission bandwidth and transmission modes, the collection, organization and transmission of satellite data involve a certain time lag, which cannot meet the requirements of real-time or ahead-of-time meteorological services. Therefore, satellite cloud picture prediction, especially the prediction of satellite cloud picture products within 2 hours, has important guiding significance for forecasters' analysis and prediction of short-term and nowcast weather evolution.
Satellite cloud picture prediction takes a satellite cloud picture sequence of past times as input and produces satellite cloud picture products for several future times; the problem is essentially a spatio-temporal sequence prediction problem, and deep-learning methods in this field outperform traditional methods such as those based on optical flow. The ConvLSTM network model combines convolutional units for spatial feature extraction with a recurrent neural network for temporal feature extraction, merging the convolution operation into the state-transfer process of the recurrent network, and has been used for weather radar echo prediction. The spatio-temporal LSTM (ST-LSTM) model improves on ConvLSTM with a zigzag memory channel, and the subsequent Memory In Memory (MIM) network model further stabilizes the sequence data on this basis, obtaining better prediction results on several data sets. Generative adversarial networks (GANs) are also used for sequence data generation and prediction; although this type of method has weak temporal modelling capability, it can generate prediction results with higher definition.
Two unsolved problems exist when existing spatio-temporal sequence prediction methods such as ConvLSTM or ST-LSTM are applied directly to satellite cloud picture prediction. First, owing to computational resource and prediction-quality considerations, current spatio-temporal sequence prediction generally uses data of a lower size, for example scaling a radar echo map sequence to 100 × 100 for prediction; under the experimental device configuration of the present invention, data of at most 128 × 128 can be processed. However, new-generation satellite cloud pictures at home and abroad have higher resolution: specific channel products of the Fengyun-4A (FY-4A) geostationary satellite reach 500 m resolution, the full-channel products have 4 km resolution, and the 4 km full-channel product covering the northwest Pacific area (10°-51°N, 90°-131°E) has a size of 1024 × 1024. Such cloud picture products have wider coverage and higher resolution, so the full-size satellite cloud picture cannot be predicted directly with existing depth prediction algorithms and must be handled by block prediction and splicing. Secondly, the prediction results of current prediction methods based on deep recurrent neural networks are blurry, while methods based on generative adversarial networks can generate clear results but have weak temporal modelling capability and poor prediction accuracy.
Disclosure of Invention
The technical problem solved by the invention is: overcoming the defects of the prior art and providing a satellite cloud picture prediction method based on depth cross-scale extrapolation fusion.
In order to solve the technical problem, an embodiment of the present invention provides a satellite cloud picture prediction method based on depth cross-scale extrapolation fusion, including:
acquiring a historical observation cloud picture to construct a training set observation cloud picture sequence;
adjusting the size proportion of observation cloud pictures in each training set observation cloud picture sequence to obtain a training set multi-scale cloud picture sequence;
respectively inputting the training set multi-scale cloud picture sequence into corresponding depth prediction networks for training to obtain multi-scale depth prediction model parameters;
taking the training set observation cloud picture sequence as input again, and outputting a training set multi-scale prediction cloud picture by the multi-scale depth prediction model;
amplifying the multi-scale prediction cloud picture of the training set to the size corresponding to the observation cloud picture to obtain a training set adjustment prediction cloud picture;
inputting the training set adjusted prediction cloud pictures and the corresponding training set observation cloud pictures into a depth conditional generative adversarial network for training, and obtaining multi-scale prediction cloud picture fusion model parameters;
and taking the real-time observation cloud picture sequence as operation data, sequentially inputting the multi-scale depth prediction model and the multi-scale prediction cloud picture fusion model, and outputting a prediction result cloud picture of the real-time observation cloud picture sequence.
Optionally, the performing size ratio adjustment on the observation cloud images in each training set observation cloud image sequence to obtain a training set multi-scale cloud image sequence includes:
and performing down-sampling on the observation cloud pictures in each observation cloud picture sequence by adopting an average pooling method to obtain a training set multi-scale cloud picture sequence.
Optionally, the respectively inputting the training set multi-scale cloud image sequences into corresponding depth prediction networks for training, and obtaining multi-scale depth prediction model parameters includes:
partitioning the cloud pictures in each training set multi-scale cloud picture sequence according to a specific scale to obtain a partitioned cloud picture sequence corresponding to the training set multi-scale cloud picture sequence;
and training a depth prediction model for the block cloud picture sequence of each scale, and inputting the block cloud picture sequence of the scale for training to obtain the multi-scale depth prediction model parameters.
Optionally, outputting, by the multi-scale depth prediction model, a training set multi-scale prediction cloud image by taking the training set observation cloud image sequence as an input again, including:
taking the observation cloud picture sequence of the training set as input, and operating the trained multi-scale depth prediction model to obtain a block prediction product of each scale;
and splicing the block prediction products of all scales according to the original positions to obtain a training set multi-scale prediction cloud picture product.
Optionally, the training set multi-scale prediction cloud image is enlarged to the size corresponding to the observation cloud image by an interpolation method, so that a training set adjustment prediction cloud image is obtained.
Optionally, the inputting of the training set adjusted prediction cloud pictures and the corresponding training set observation cloud pictures into the depth conditional generative adversarial network for training to obtain the multi-scale prediction cloud picture fusion model parameters includes:
inputting the training set adjustment prediction cloud picture and the corresponding training set observation cloud picture into the multi-scale prediction cloud picture fusion model;
calling the generator to generate a predicted cloud image fusion product;
calling the discriminator to identify the predicted cloud picture fusion product and the corresponding type of the observed cloud picture product so as to obtain discrimination probability;
calculating loss values of the generator and the discriminator according to the discrimination probability;
updating model parameters according to the loss values;
and iterating the process until the training is completed.
Optionally, the sequentially inputting the multi-scale depth prediction model and the multi-scale prediction cloud picture fusion model with the real-time observation cloud picture sequence as the operation data and outputting the prediction result cloud picture of the real-time observation cloud picture sequence includes:
taking a real-time observation cloud picture sequence as an input;
sequentially inputting the multi-scale depth prediction model and the multi-scale prediction cloud image fusion model;
and obtaining a final predicted cloud picture product.
Compared with the prior art, the invention has the advantages that:
the embodiment of the invention provides a satellite cloud picture prediction method based on depth cross-scale extrapolation fusion, which can realize prediction of a large-scale satellite cloud picture sequence and can improve the definition and accuracy of a prediction result.
Drawings
FIG. 1 is a flowchart illustrating steps of a satellite cloud image prediction method based on depth cross-scale extrapolation fusion according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a cloud picture sequence prediction problem description according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a cross-scale extrapolation fusion prediction method for a cloud picture according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a cloud image multi-scale depth prediction model according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a multi-scale prediction cloud image fusion model according to an embodiment of the present invention;
fig. 6 is a schematic diagram of cloud image down-sampling and blocking according to an embodiment of the present invention.
Detailed Description
Example one
Referring to fig. 1, a flowchart illustrating the steps of a satellite cloud picture prediction method based on depth cross-scale extrapolation fusion according to an embodiment of the present invention is shown. In the embodiment of the invention, the original observation cloud picture is first scaled to several smaller scales by a down-sampling method, and several depth models are trained to perform extrapolation prediction on the cloud pictures of the different scales separately, so that the extrapolation models can extract cloud picture sequence characteristics at different scales; the prediction cloud pictures at the multiple scales are then fused through a conditional generative adversarial network, which also improves the definition of the prediction result. The satellite cloud picture prediction method provided by the invention is divided into two stages. The first stage is multi-scale extrapolation prediction: in order to extract the distribution and motion characteristics of the cloud picture at different scales, the original-scale cloud picture is first reduced to several smaller scales by down-sampling, and extrapolation prediction is performed separately on the cloud picture data of each scale. In the second stage, the multi-scale prediction results are fused by the conditional generative adversarial network to obtain the prediction result product at the original scale. The two-stage prediction process is shown in fig. 3. The satellite cloud picture prediction method specifically comprises the following steps:
step 101: and obtaining a historical observation cloud picture to construct a training set observation cloud picture sequence.
First, the prediction process of the embodiment of the present invention can be defined and described as follows:
in the prediction process, the input is the cloud picture sequence of the j consecutive past times up to and including the current time, and the output is the cloud picture at one or more future times (as shown in FIG. 2). Let the observation region be an M×N grid-point region; a cloud picture is then recorded as a vector v ∈ R^(M×N), and an observed cloud picture sequence is recorded as v_1, v_2, v_3, …. The next-time cloud picture prediction problem can be defined as:
v_{n+1} = argmax p(v_{n+1} | v_{n-j+1}, v_{n-j+2}, ..., v_n)
for the cloud image prediction problem of the subsequent time, the current time prediction result can be used for replacing the actual observation cloud image in an iterative manner:
v_{n+2} = argmax p(v_{n+2} | v_{n-j+2}, v_{n-j+3}, ..., v_{n+1})
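For illustration, the iterative extrapolation defined by the two formulas above can be sketched as follows; this is a minimal sketch under the assumption of a hypothetical `predict_one_step` callable standing in for the trained single-step prediction model, and the function name and default window length are not part of the embodiment.

```python
# Minimal sketch of iterative extrapolation: each prediction is appended to the
# input window and used in place of the missing observation.
from typing import Callable, List
import numpy as np

def extrapolate(history: List[np.ndarray],
                predict_one_step: Callable[[List[np.ndarray]], np.ndarray],
                steps: int,
                window: int = 6) -> List[np.ndarray]:
    """Predict `steps` future cloud pictures from the last `window` observations."""
    frames = list(history)
    outputs: List[np.ndarray] = []
    for _ in range(steps):
        v_next = predict_one_step(frames[-window:])  # argmax p(v_{n+1} | v_{n-j+1..n})
        frames.append(v_next)                        # feed the prediction back as input
        outputs.append(v_next)
    return outputs
```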
the observation cloud sequence refers to a satellite cloud sequence of original size obtained, and the process can be described in detail in conjunction with the following.
A training set observation cloud picture sequence can be constructed from FY-4A geostationary satellite data. The 4 km resolution product of infrared channel 12 (band: 10.3-11.3 μm) of the FY-4A geostationary meteorological satellite is selected as example data; the data cover 10°-51°N, 90°-131°E, and the cloud picture size is 1024 × 1024. The cloud pictures in the training set are hourly on-the-hour products from 00:00 on 12 March 2018 to 21:00 on 30 June 2019, 10664 products in total; sliding the window forward one product at a time yields 10657 training sequences of length 8, where the first 6 cloud picture products in each training sequence are used as the input sequence and the 7th and 8th cloud picture products are used as the model output targets. The cloud pictures in the test set are hourly on-the-hour products from 00:00 on 1 July 2019 to 23:00 on 31 July 2019. There are 727 cloud picture products in the test set; the sequences are formed in the same way as in the training set, giving 720 test cloud picture sequences.
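A brief sketch of this sliding-window sequence construction is given below; the function name `build_sequences`, the `products` list and the default lengths are illustrative assumptions that simply mirror the 6-input/2-target split described above.

```python
import numpy as np

def build_sequences(products, seq_len: int = 8, n_in: int = 6):
    """Slide a length-`seq_len` window over the hourly cloud picture products;
    the first `n_in` frames of each window are inputs, the rest are targets."""
    inputs, targets = [], []
    for i in range(len(products) - seq_len + 1):   # e.g. 10664 products -> 10657 sequences
        seq = products[i:i + seq_len]
        inputs.append(np.stack(seq[:n_in]))        # 6 input cloud pictures
        targets.append(np.stack(seq[n_in:]))       # 7th and 8th cloud pictures as targets
    return np.stack(inputs), np.stack(targets)
```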
The FY-4A geostationary meteorological satellite is the first satellite of China's second-generation geostationary-orbit meteorological satellites, and carries multiple payloads including a multi-channel scanning imager, an interferometric atmospheric vertical sounder, a lightning imager and a space environment monitoring instrument package. The multi-channel scanning imager product carried by FY-4A is selected as the example in this embodiment. The multi-channel scanning imager is one of the main payloads of FY-4A and is mainly used for high-frequency, high-precision, multispectral quantitative remote sensing of the Earth's surface and cloud physical state parameters, directly serving weather analysis and forecasting, short-term climate prediction, and environment and disaster monitoring. Its observation bands cover visible, near-infrared, short-wave infrared, medium-wave infrared and long-wave infrared, so it can observe both the full picture of large-scale weather systems and the rapid evolution of meso- and small-scale weather systems. The multi-channel scanning imager has 14 channels, comprising 7 visible/near-infrared channels and 7 infrared channels. Of the 14 channels, 1 has 500 m ground resolution, 2 have 1 km, 4 have 2 km and 7 have 4 km. The full-disc observation time is 15 minutes.
In this embodiment, the hourly on-the-hour 4 km resolution product of infrared channel 12 (band: 10.3-11.3 μm) is selected as the target product.
After obtaining the historical observation cloud picture and constructing the training set observation cloud picture sequence, step 102 is executed.
Step 102: and adjusting the size proportion of the observation cloud pictures in the observation cloud picture sequence of each training set to obtain a multi-scale cloud picture sequence of the training sets.
The multi-scale cloud picture sequence refers to a cloud picture sequence obtained after the size of an observation cloud picture in the observation cloud picture sequence is adjusted.
For example, in an embodiment where target cloud picture sequences at four scales are needed, the observation cloud picture sequence can be taken as the first-scale cloud pictures, i.e. size 1024 × 1024; each observation cloud picture in the sequence can then be adjusted by down-sampling to the second scale (512 × 512), the third scale (256 × 256) and the fourth scale (128 × 128), so that a multi-scale cloud picture sequence comprising four scales is obtained, as shown in fig. 6.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
The manner in which the satellite clouds in each initial cloud image sequence are resized is described in detail below with reference to the following specific implementation.
In a specific implementation manner of the embodiment of the present invention, the step 102 may include:
substep A1: and performing down-sampling on the observation cloud pictures in each observation cloud picture sequence by adopting an average pooling method to obtain a multi-scale cloud picture sequence.
Substep A2: In this embodiment, the down-sampling factors are powers of 2; that is, for a cloud picture product with an original scale of 1024 × 1024 and a minimum target scale of 128 × 128, the down-sampled cloud picture sizes are 1024 × 1024, 512 × 512, 256 × 256 and 128 × 128 respectively.
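The power-of-two average-pooling down-sampling of substeps A1 and A2 can be sketched as follows; PyTorch is an assumed implementation choice (the embodiment does not prescribe a framework), and `build_pyramid` is an illustrative helper name.

```python
import torch
import torch.nn.functional as F

def build_pyramid(cloud: torch.Tensor, factors=(1, 2, 4, 8)) -> dict:
    """Down-sample a (B, C, 1024, 1024) cloud picture tensor by average pooling.
    Factors 1, 2, 4, 8 give 1024x1024, 512x512, 256x256 and 128x128 versions."""
    pyramid = {}
    for f in factors:
        pyramid[f] = cloud if f == 1 else F.avg_pool2d(cloud, kernel_size=f)
    return pyramid
```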
After the training set multi-scale cloud picture sequence is obtained, step 103 is performed.
Step 103: and respectively inputting the multi-scale cloud picture sequences of the training set into corresponding depth prediction networks for training to obtain multi-scale depth prediction model parameters.
After the training set multi-scale cloud picture sequence is obtained, it can be input into the depth prediction network of the corresponding scale, and the multi-scale depth prediction model parameters are obtained through iterative training. Specifically, the cloud picture sequences of the 4 scales are predicted separately. Limited by computing resources, the depth prediction network of this embodiment can directly process cloud pictures of scale 128 × 128 at most, so the 128 × 128 cloud picture sequence can be used directly as the input of the prediction model to extract the overall features of the cloud picture sequence, while cloud picture sequences of scale 256 × 256 and above cannot be used directly as input. Cloud picture sequences of scale 256 × 256 and above are therefore processed by a blocking method, with the block size set to 128 × 128 according to the processing capability: the 256 × 256 cloud picture is divided into 4 blocks (upper left, upper right, lower left, lower right), the 512 × 512 cloud picture into 16 blocks (4 horizontally by 4 vertically), and the 1024 × 1024 cloud picture into 64 blocks (8 horizontally by 8 vertically). Cloud picture down-sampling and blocking are shown in fig. 6.
In the embodiment of the invention, a depth prediction network (a deep spatio-temporal sequence prediction network such as ConvLSTM, ST-LSTM or MIM can be selected) needs to be trained for the prediction at each specific scale. In the specific training process, two factors may influence the training result: the regional characteristics of the cloud pictures, and the amount of training data per cloud picture block. Cloud pictures may have different evolution and motion characteristics in different regions, so the prediction network could be trained separately for each region; however, considering that cloud cluster motions in different regions share similar physical characteristics, all blocks at the same scale can be used to train the block prediction network of that scale so as to increase the amount of training data. Experimental comparison by the invention shows that the prediction network obtained by taking all block cloud picture sequences at a specific scale as input data is superior to training 4 regional prediction networks separately, indicating that cloud cluster motion mainly exhibits similarity characteristics and that a better prediction effect can be obtained by inputting all block sequences and increasing the amount of training data.
After the multi-scale depth prediction model parameters are obtained by inputting the aforementioned data for training, step 104 may be executed.
Step 104: and taking the training set observation cloud picture sequence as an input, and outputting the training set multi-scale prediction cloud picture by the multi-scale depth prediction model in the step 103.
Specifically, the training set observation cloud images are input into the multi-scale depth prediction model trained in step 103 again, so that the block prediction results of the cloud images of all scales can be obtained, and then the block prediction results of all scales are spliced in situ, so that the training set multi-scale prediction cloud images can be obtained.
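The blocking used in step 103 and the in-place splicing used in step 104 can be sketched as a pair of helper functions; the names `to_blocks` and `from_blocks` and the PyTorch tensor layout are assumptions for illustration only.

```python
import torch

def to_blocks(img: torch.Tensor, block: int = 128) -> torch.Tensor:
    """Split (B, C, H, W) images into non-overlapping block x block tiles,
    returned as (B * n_tiles, C, block, block) in row-major tile order."""
    b, c, h, w = img.shape
    tiles = img.unfold(2, block, block).unfold(3, block, block)   # B, C, nh, nw, blk, blk
    return tiles.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, block, block)

def from_blocks(tiles: torch.Tensor, h: int, w: int, block: int = 128) -> torch.Tensor:
    """Splice tiles produced by `to_blocks` back to (B, C, H, W) at their original positions."""
    nh, nw = h // block, w // block
    c = tiles.shape[1]
    b = tiles.shape[0] // (nh * nw)
    tiles = tiles.reshape(b, nh, nw, c, block, block).permute(0, 3, 1, 4, 2, 5)
    return tiles.reshape(b, c, h, w)
```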
Step 105: and amplifying the multi-scale prediction cloud picture of the training set to the size corresponding to the observation cloud picture to obtain a training set adjustment prediction cloud picture.
Specifically, the training set multi-scale prediction cloud image can be enlarged to the size corresponding to the observation cloud image by a planar bilinear interpolation method, so as to obtain a training set adjustment prediction cloud image, as shown in fig. 4.
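A short sketch of the bilinear enlargement, again assuming PyTorch; `upscale` is an illustrative helper name.

```python
import torch
import torch.nn.functional as F

def upscale(pred: torch.Tensor, size: int = 1024) -> torch.Tensor:
    """Enlarge a multi-scale prediction back to the original cloud picture size
    with planar bilinear interpolation."""
    return F.interpolate(pred, size=(size, size), mode="bilinear", align_corners=False)
```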
After the training set adjustment prediction cloud picture is obtained, step 106 is performed.
Step 106: and adjusting the prediction cloud picture of the training set and the corresponding observation cloud picture of the training set to input depth conditions to generate a confrontation network for training, and obtaining parameters of a multi-scale prediction cloud picture fusion model.
In order to fuse the prediction results of the multiple scales, a depth conditional generative adversarial network is adopted for cross-scale prediction result fusion, which is used to extract the correspondence between the adjusted prediction cloud picture products in the training set and the actual cloud picture products. Specifically, as shown in fig. 5, the generative adversarial network model requires training two different networks, namely a generation network G and a discrimination network D. In this embodiment, an encoder-decoder with a U-Net structure is selected to form the generation network G. The U-Net structure is an encoder-decoder network with added skip connections; the U-Net used in this embodiment has a 64-layer structure. The discrimination network D in this embodiment is a 64-layer convolutional classification network, which identifies whether an input cloud picture product is an actual observation or a fused prediction cloud picture product generated by the generation network by computing the probability that the input is an actual observation. During model training, the discriminator D tries to correctly distinguish real observation cloud picture products from generated prediction cloud picture products, while the generator G tries to generate cloud picture products that are as realistic as possible so that the discriminator D cannot tell real from fake. Let x denote the adjusted prediction cloud picture, y denote the actual observation cloud picture product at the corresponding prediction time, and ŷ denote the generated fused prediction cloud picture product. In order to extract the mapping relationship between the input adjusted prediction cloud picture product x and the actual cloud picture product y, the structure of a conditional generative adversarial network is used as the basic framework of the discriminator, i.e. x together with the actual observation cloud picture product y, or with the generated fused prediction cloud picture product ŷ, is used as the input of the discriminator D (the discriminator of a conventional generative adversarial network uses only y or ŷ as input).
Let D(x, y) be the probability that the discriminator correctly identifies the real cloud picture product, and G(x, z) be the function by which the adjusted prediction cloud picture product x generates the fused prediction cloud picture product (z is random noise). Then for the discriminator D, the objective is to find the model parameters that maximize:
arg max_D [ log(D(x, y)) + log(1 - D(x, G(x, z))) ]
for generator G, the objective function is to find the following optimization parameters:
arg max_G log(D(x, G(x, z)))
in the model implementation, binary cross entropy is used as the loss measure. That is, for the discriminator D, the loss function is:
L_D = L_bce(D(x, y), 1) + L_bce(D(x, G(x, z)), 0)
wherein:
L_bce(â, a) = -(1/N) Σ_{i=1}^{N} [ a_i·log(â_i) + (1 - a_i)·log(1 - â_i) ]
N is the number of samples in a model input batch, a ∈ {0, 1} is the label of the input data (0: generated fused prediction cloud picture product; 1: actual observation cloud picture product), and â is the discrimination value output by the discriminator D; a value close to 0 indicates that the discriminator judges the input to be a generated fused prediction cloud picture product, and a value close to 1 indicates that it judges the input to be an actual observation cloud picture product.
For the generator G, studies have shown that combining the adversarial loss with a conventional loss function yields better results. Therefore, the loss function of G is set to a weighted combination of the adversarial loss and the L1 loss, as follows:
L_G = λ_1·L_bce(D(x, G(x, z)), 1) + λ_2·|y - G(x, z)|
wherein λ_1 and λ_2 are the weighting coefficients of the adversarial loss and the L1 loss respectively.
The training process is performed iteratively. First, the discriminator D is trained: actual observation cloud picture products and fused prediction cloud picture products generated by the generator G are input into the discriminator D in batches, and the parameters of D are updated by back-propagating the loss L_D. Then the parameters of the discriminator D are frozen, batches of the prediction product x and the corresponding actual cloud picture y are input to the generator G and the discriminator D, the generator loss L_G is computed, and the parameters of G are updated by back propagation. The above process is repeated until the capabilities of the generator G and the discriminator D reach a balance.
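The alternating update just described can be sketched as a single training step, under the following assumptions: `G`, `D`, the optimizers and the λ values are placeholders, the random noise z is omitted for brevity (it is often injected implicitly, e.g. via dropout in a U-Net generator), and the discriminator is assumed to accept the (condition, image) pair directly. This is a simplified illustration, not the exact implementation of the embodiment.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, x, y, lambda_adv=1.0, lambda_l1=100.0):
    """One alternating cGAN update. x: adjusted multi-scale prediction cloud pictures
    (the condition), y: the corresponding observed cloud picture."""
    # --- discriminator update: real pair (x, y) -> 1, fake pair (x, G(x)) -> 0 ---
    fake = G(x)
    d_real = D(x, y)
    d_fake = D(x, fake.detach())
    loss_D = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # --- generator update: adversarial term + L1 term against y (only opt_G steps) ---
    d_fake = D(x, fake)
    loss_G = (lambda_adv * F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
              + lambda_l1 * F.l1_loss(fake, y))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```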
Specifically, the step 106 may include:
substep B1: taking the observation cloud pictures and the training set adjusted prediction cloud pictures as the input of the conditional generative adversarial network;
substep B2: calling the generation network to perform fusion processing on the adjusted predicted cloud picture to generate a fused predicted cloud picture;
substep B3: calling the discriminator to identify the cloud picture types corresponding to the observation cloud picture and the fusion prediction cloud picture so as to obtain discrimination probability;
and substep B4: calculating loss values of the generation network and the discriminator according to the discrimination probability;
substep B5: updating the generation network and discriminator parameters by back propagation;
Substep B6: and repeating the process until the loss value is in a preset range or reaches the target cycle number, wherein the obtained generated network parameters and discriminator parameters are the parameters of the multi-scale prediction cloud picture fusion model.
After obtaining the multi-scale prediction cloud image fusion model parameters, step 107 is performed.
Step 107: and taking the real-time observation cloud picture sequence as operation data, sequentially inputting the multi-scale depth prediction model and the multi-scale prediction cloud picture fusion model, and outputting a prediction result cloud picture of the real-time observation cloud picture sequence.
The step 107 may include:
substep C1: and extracting a real-time observation cloud picture sequence.
Substep C2: adjusting the size proportion of the observation cloud pictures in the real-time observation cloud picture sequence according to step 102 to obtain a real-time multi-scale cloud picture sequence.
Substep C3: inputting the real-time multi-scale cloud picture sequences into the corresponding depth prediction networks of step 103, where the depth prediction network parameters are the model parameters trained in step 103, and obtaining the real-time multi-scale prediction cloud pictures according to the method of step 104.
Substep C4: and amplifying the real-time multi-scale prediction cloud picture to the size corresponding to the observation cloud picture by an interpolation method to obtain a real-time adjustment prediction cloud picture.
Substep C5: inputting the real-time adjusted prediction cloud pictures and the corresponding observation cloud pictures into the depth conditional generative adversarial network of step 106, where the network parameters are the model parameters trained in step 106; the output is the prediction result cloud picture of the real-time observation cloud picture sequence.
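Putting substeps C1-C5 together, an operational inference pass might look like the following sketch, reusing the helper functions sketched earlier (`build_pyramid`, `to_blocks`, `from_blocks`, `upscale`); all names, tensor shapes and the per-block model interface are assumptions for illustration.

```python
import torch

@torch.no_grad()
def predict(obs_seq, scale_models, fusion_G, block=128, full=1024):
    """obs_seq: (1, T, 1024, 1024) real-time observation cloud picture sequence,
    with time stacked along the channel axis. scale_models: dict mapping a
    down-sampling factor to a per-block extrapolation model. fusion_G: trained
    fusion generator."""
    adjusted = []
    for factor, model in scale_models.items():               # e.g. factors 1, 2, 4, 8
        scaled = build_pyramid(obs_seq, factors=(factor,))[factor]
        tiles = to_blocks(scaled, block)                      # (n_tiles, T, 128, 128)
        pred_tiles = torch.stack([model(t) for t in tiles])   # per-block extrapolation
        h, w = scaled.shape[-2], scaled.shape[-1]
        pred = from_blocks(pred_tiles, h, w, block)           # splice at original positions
        adjusted.append(upscale(pred, full))                  # back to 1024 x 1024
    x = torch.cat(adjusted, dim=1)                            # stack the scales as channels
    return fusion_G(x)                                        # fused prediction cloud picture
```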
Specifically, the 1-hour and 2-hour predictions issued at 05:00 on 1 July 2019 can be taken as an example. The input is the hourly observation cloud picture sequence consisting of the 6 cloud picture products from 00:00 to 05:00 on 1 July 2019; prediction results for 1 hour ahead (i.e. 06:00 on 1 July 2019) and 2 hours ahead (i.e. 07:00 on 1 July 2019) are obtained respectively with an optical-flow prediction method, a deep-learning prediction method based on the MIM network, and the cross-scale fusion extrapolation prediction method provided by the invention. The example results show that the prediction products obtained by the method of the invention have clear edges, handle detail accurately, and are closer overall to the real cloud picture products. The optical-flow-based prediction exhibits edge blanks and cloud cluster image distortion, while the MIM-based deep-learning prediction exhibits visible blocking artifacts due to block-wise prediction and splicing, and its prediction products are blurry, with the 2-hour product more blurred than the 1-hour product.
The model obtained by training in the embodiment of the invention can realize the prediction of the large-scale cloud picture and can improve the definition and accuracy of the prediction result.
While the invention has been described with reference to specific preferred embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
Those skilled in the art will appreciate that the details of the invention not described in detail in this specification are well within the skill of those in the art.

Claims (6)

1. A satellite cloud picture prediction method based on depth cross-scale extrapolation fusion is characterized by comprising the following steps:
step 1, acquiring a historical observation cloud picture to construct a training set observation cloud picture sequence;
step 2, adjusting the size proportion of the observation cloud pictures in each training set observation cloud picture sequence to obtain a training set multi-scale cloud picture sequence;
step 3, respectively inputting the training set multi-scale cloud picture sequences into corresponding depth prediction networks for training to obtain multi-scale depth prediction model parameters;
step 4, taking the training set observation cloud picture sequence as input again, and outputting the training set multi-scale prediction cloud picture by the multi-scale depth prediction model;
step 5, amplifying the multi-scale prediction cloud picture of the training set to the size corresponding to the observation cloud picture to obtain a training set adjustment prediction cloud picture;
step 6, inputting the training set adjusted prediction cloud pictures and the corresponding training set observation cloud pictures into a depth conditional generative adversarial network for training, and obtaining multi-scale prediction cloud picture fusion model parameters;
and 7, taking the real-time observation cloud picture sequence as operation data, sequentially inputting the multi-scale depth prediction model and the multi-scale prediction cloud picture fusion model, and outputting a prediction result cloud picture of the real-time observation cloud picture sequence.
2. The method of claim 1, wherein the scaling of the size of the observation cloud images in each training set observation cloud image sequence to obtain a training set multi-scale cloud image sequence comprises:
and performing down-sampling on the observation cloud pictures in each observation cloud picture sequence by adopting an average pooling method to obtain a training set multi-scale cloud picture sequence.
3. The method of claim 1, wherein the step of inputting the training set multi-scale cloud image sequences into corresponding depth prediction networks respectively for training to obtain multi-scale depth prediction model parameters comprises:
partitioning the cloud pictures in each training set multi-scale cloud picture sequence according to a specific scale to obtain a partitioned cloud picture sequence corresponding to the training set multi-scale cloud picture sequence;
and training a depth prediction model for each scale of the block cloud image sequence, and inputting the block cloud image sequence for training to obtain the multi-scale depth prediction model parameters.
4. The method of claim 1, wherein the outputting, by the multi-scale depth prediction model, the training set multi-scale prediction cloud image again using the training set observation cloud image sequence as an input comprises:
taking the training set observation cloud picture sequence as input, running the multi-scale depth prediction model trained in step 3, and obtaining the block prediction products of each scale;
and splicing the block prediction products of all scales according to the original positions to obtain a training set multi-scale prediction cloud picture product.
5. The method of claim 1, wherein the inputting of the training set adjusted prediction cloud pictures and the corresponding training set observation cloud pictures into the depth conditional generative adversarial network for training, and the obtaining of the multi-scale prediction cloud picture fusion model parameters, comprises:
inputting the training set adjustment prediction cloud picture and the corresponding training set observation cloud picture into the multi-scale prediction cloud picture fusion model;
calling the generator to generate a predicted cloud image fusion product;
calling the discriminator to identify the predicted cloud picture fusion product and the corresponding type of the observed cloud picture product so as to obtain discrimination probability;
calculating loss values of the generator and the discriminator according to the discrimination probability;
updating model parameters according to the loss values;
and iterating the process until the training is completed.
6. The method of claim 1, wherein the step of inputting the real-time observation cloud picture sequence as operation data, sequentially inputting the multi-scale depth prediction model and the multi-scale prediction cloud picture fusion model, and outputting the prediction cloud picture of the real-time observation cloud picture sequence comprises:
taking a real-time observation cloud picture sequence as an input;
sequentially inputting the multi-scale depth prediction model and the multi-scale prediction cloud image fusion model;
and obtaining a final predicted cloud picture product.
CN202011483471.4A 2020-12-15 2020-12-15 Satellite cloud picture prediction method based on depth cross-scale extrapolation fusion Active CN112668615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011483471.4A CN112668615B (en) 2020-12-15 2020-12-15 Satellite cloud picture prediction method based on depth cross-scale extrapolation fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011483471.4A CN112668615B (en) 2020-12-15 2020-12-15 Satellite cloud picture prediction method based on depth cross-scale extrapolation fusion

Publications (2)

Publication Number Publication Date
CN112668615A CN112668615A (en) 2021-04-16
CN112668615B true CN112668615B (en) 2022-11-18

Family

ID=75405302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011483471.4A Active CN112668615B (en) 2020-12-15 2020-12-15 Satellite cloud picture prediction method based on depth cross-scale extrapolation fusion

Country Status (1)

Country Link
CN (1) CN112668615B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661277B (en) * 2022-10-20 2023-06-02 中山大学 Typhoon cloud picture extrapolation method, system, equipment and medium based on variation self-coding

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364097A (en) * 2018-02-07 2018-08-03 国家海洋局北海预报中心 Based on the typhoon cloud system prediction technique for generating confrontation network
CN111210483A (en) * 2019-12-23 2020-05-29 中国人民解放军空军研究院战场环境研究所 Simulated satellite cloud picture generation method based on generation of countermeasure network and numerical mode product

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815785A (en) * 2018-12-05 2019-05-28 四川大学 A kind of face Emotion identification method based on double-current convolutional neural networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364097A (en) * 2018-02-07 2018-08-03 国家海洋局北海预报中心 Based on the typhoon cloud system prediction technique for generating confrontation network
CN111210483A (en) * 2019-12-23 2020-05-29 中国人民解放军空军研究院战场环境研究所 Simulated satellite cloud picture generation method based on generation of countermeasure network and numerical mode product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Typhoon intensity level prediction from time-series satellite cloud images based on Seq2Seq and Attention; Zheng Zongsheng et al.; Remote Sensing Information; 2020-08-20 (No. 04); full text *

Also Published As

Publication number Publication date
CN112668615A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
Han et al. Convolutional neural network for convective storm nowcasting using 3-D Doppler weather radar data
CN110363327B (en) ConvLSTM and 3D-CNN-based short rainfall prediction method
CN113128134B (en) Mining area ecological environment evolution driving factor weight quantitative analysis method
CN113936142B (en) Precipitation proximity forecasting method and device based on deep learning
CN109840553B (en) Extraction method and system of cultivated land crop type, storage medium and electronic equipment
CN108428220B (en) Automatic geometric correction method for ocean island reef area of remote sensing image of geostationary orbit satellite sequence
CN110728197B (en) Single-tree-level tree species identification method based on deep learning
CN111210483B (en) Simulated satellite cloud picture generation method based on generation of countermeasure network and numerical mode product
CN113469278B (en) Strong weather target identification method based on deep convolutional neural network
CN111104850B (en) Remote sensing image building automatic extraction method and system based on residual error network
CN113592132B (en) Rainfall objective forecasting method based on numerical weather forecast and artificial intelligence
CN117148360B (en) Lightning approach prediction method and device, electronic equipment and computer storage medium
CN112308029A (en) Rainfall station and satellite rainfall data fusion method and system
Peterson et al. Thunderstorm cloud-type classification from space-based lightning imagers
CN116484189A (en) ERA5 precipitation product downscaling method based on deep learning
CN110516552B (en) Multi-polarization radar image classification method and system based on time sequence curve
CN115062527A (en) Geostationary satellite sea temperature inversion method and system based on deep learning
CN112668615B (en) Satellite cloud picture prediction method based on depth cross-scale extrapolation fusion
CN111242028A (en) Remote sensing image ground object segmentation method based on U-Net
Ni et al. Hurricane eye morphology extraction from SAR images by texture analysis
CN117710508A (en) Near-surface temperature inversion method and device for generating countermeasure network based on improved condition
CN112285808A (en) Method for reducing scale of APHRODITE precipitation data
Yang et al. Convective cloud detection and tracking using the new-generation geostationary satellite over South China
CN113779863B (en) Ground surface temperature downscaling method based on data mining
CN115393731A (en) Method and system for generating virtual cloud picture based on interactive scenario and deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant