CN115393731A - Method and system for generating virtual cloud picture based on interactive scenario and deep learning


Info

Publication number
CN115393731A
Authority
CN
China
Prior art keywords
cloud
network
virtual
cloud picture
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210968903.3A
Other languages
Chinese (zh)
Inventor
程文聪 (Cheng Wencong)
黄芳 (Huang Fang)
何红红 (He Honghong)
张文军 (Zhang Wenjun)
王志刚 (Wang Zhigang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unit 93213 of the PLA
Original Assignee
Unit 93213 of the PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unit 93213 of the PLA
Priority to CN202210968903.3A
Publication of CN115393731A
Legal status: Pending

Classifications

    • G06V 20/13: Satellite images (Physics; Computing; Image or video recognition or understanding; Scenes; Terrestrial scenes)
    • G06N 3/084: Backpropagation, e.g. using gradient descent (Computing arrangements based on specific computational models; Neural networks; Learning methods)
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting (Feature-space processing using pattern recognition or machine learning)
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a method and a system for generating a virtual cloud picture based on interactive scenario and deep learning. The method comprises the following steps: constructing a training data set according to historical weather phenomenon observation data; constructing and training, on the training data set, a virtual cloud picture generation model based on a deep generative adversarial network to obtain a trained virtual cloud picture generation model; setting a planned weather scenario through human-computer interaction; and taking the customized cloud cover product carrying the weather phenomenon identification as input to the trained virtual cloud picture generation model, reconstructing the virtual satellite cloud picture under the planned weather scenario, and outputting it as the result. The invention thus realizes the generation of virtual cloud pictures based on interactive scenario and deep learning.

Description

Method and system for generating virtual cloud picture based on interactive scenario and deep learning
Technical Field
The invention belongs to the technical field of meteorology, and particularly relates to a method and a system for generating a virtual cloud picture based on interactive scenario and deep learning.
Background
With the rapid development of satellite remote sensing technology, satellite cloud pictures play an ever-growing role in weather analysis and forecasting. They can be used to analyze cloud-system structures of different scales and their patterns of activity, and because cloud picture data are highly intuitive and easy to understand, they are essential information for weather forecasters analyzing weather phenomena. When weather professionals and users of meteorological information rehearse weather processes, corresponding virtual meteorological information must be generated according to a preset weather scenario. In the past, for lack of technical means, the planned virtual meteorological information could only be marked with abstract symbols; such annotations are not intuitive and cannot be used directly for training weather forecasters or as input data for meteorology-related information systems. Obtaining highly realistic meteorological data through human-computer interaction according to the planned weather scenario is therefore work of considerable practical significance.
Disclosure of Invention
The technical problem solved by the invention: a method and a system for generating a virtual cloud picture based on interactive scenario and deep learning are provided, overcoming the defects of existing weather scenario rehearsal annotation methods.
In order to solve the technical problem, the invention discloses a method for generating a virtual cloud picture based on interactive scenario and deep learning, which comprises the following steps:
constructing a training data set according to historical weather phenomenon observation data;
constructing and training, on the training data set, a virtual cloud picture generation model based on a deep generative adversarial network to obtain a trained virtual cloud picture generation model;
setting a planned weather scenario through human-computer interaction;
and taking the customized cloud cover product carrying the weather phenomenon identification as input to the trained virtual cloud picture generation model, reconstructing the virtual satellite cloud picture under the planned weather scenario and outputting it as the result.
In the method for generating the virtual cloud picture based on interactive scenario and deep learning, constructing a training data set according to historical weather phenomenon observation data comprises the following steps:
acquiring historical weather phenomenon observation data;
sorting and analyzing the historical weather phenomenon observation data, determining the type, position and range of each historical weather phenomenon, and identifying the phenomena on the corresponding cloud cover products using color blocks or weather phenomenon samples; meanwhile, collecting and sorting the real satellite cloud pictures of the corresponding times and regions;
and constructing the training data set from the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products together with the collected real satellite cloud pictures of the corresponding times and regions.
In the method for generating the virtual cloud picture based on interactive scenario and deep learning, setting a planned weather scenario through human-computer interaction comprises:
customizing and modifying the cloud cover product corresponding to a basic satellite cloud picture through human-computer interaction, and marking it with color blocks or weather phenomenon samples to complete the setting of the planned weather scenario.
In the method for generating the virtual cloud picture based on interactive scenario and deep learning, the cloud cover product describes the distribution of clouds over a specific area and reflects the amount of cloud; the types of weather phenomena include: typhoons, thunderstorms, heavy precipitation, hail and fog.
In the method for generating the virtual cloud picture based on interactive scenario and deep learning, the color of a color block is used to represent the intensity characteristics of the weather phenomenon, and the position of the color block is used to represent its affected area; the profile of a weather phenomenon sample is used to represent both the intensity characteristics and the affected area of the weather phenomenon.
In the method for generating the virtual cloud picture based on interactive scenario and deep learning, the satellite cloud picture is meteorological-satellite data covering the same region as the cloud cover product: a specific visible-light channel, a water vapor channel, an infrared channel, or a combination of several channels.
In the method for generating the virtual cloud picture based on interactive scenario and deep learning, the virtual cloud picture generation model based on the deep generative adversarial network comprises: a generation network G and a discrimination network D; the generation network G uses an encoder-decoder network as its backbone, and the discrimination network D uses a binary classification network composed of several convolutional neural network layers as its backbone.
In the method for generating the virtual cloud picture based on interactive scenario and deep learning, training the virtual cloud picture generation model based on the deep generative adversarial network comprises:
determining, from the training data set, the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products and the real satellite cloud pictures of the corresponding times and regions;
taking the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products and the real satellite cloud pictures of the corresponding times and regions as the input of the generation network G;
taking the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products, the real satellite cloud pictures of the corresponding times and regions, and the virtual satellite cloud pictures generated by the generation network G as the input of the discrimination network D;
and performing iterative training to obtain the trained virtual cloud picture generation model.
In the method for generating the virtual cloud picture based on interactive scenario and deep learning, the iterative training process is as follows:
inputting, in batches, the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products, the real satellite cloud pictures of the corresponding times and regions, and the virtual satellite cloud pictures generated by the generation network G into the discrimination network D, and updating the parameters of the discrimination network D by back propagation using the discrimination network loss function L_D;
freezing the parameters of the discrimination network D;
inputting, in batches, the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products and the real satellite cloud pictures of the corresponding times and regions into the generation network G, and updating the parameters of the generation network G by back propagation using the generation network loss function L_G;
and repeating the above process until the capabilities of the generation network G and the discrimination network D reach balance, and taking the model parameters at that point as the trained virtual cloud picture generation model.
Correspondingly, the invention also discloses a system for generating a virtual cloud picture based on interactive scenario and deep learning, comprising:
a training data set construction module for constructing a training data set according to historical weather phenomenon observation data;
a model building module for constructing and training, on the training data set, a virtual cloud picture generation model based on a deep generative adversarial network to obtain a trained virtual cloud picture generation model;
a setting module for setting a planned weather scenario through human-computer interaction;
and a generation module for taking the customized cloud cover product carrying the weather phenomenon identification as input to the trained virtual cloud picture generation model, reconstructing the virtual satellite cloud picture under the planned weather scenario and outputting it as the result.
The invention has the following advantages:
the invention generates a virtual satellite cloud picture product with higher simulation degree through interactive scenario and deep learning, thereby supporting cloud picture identification training of professional weather forecasters, weather scene preset deduction based on the cloud pictures and simulated cloud picture data input of a weather related information system, overcoming the defects of the existing weather scenario deduction marking method, and being used as a basis and a preposition method for generating various virtual weather data.
Drawings
Fig. 1 is a flowchart illustrating steps of a method for generating a virtual cloud based on interactive scenario and deep learning according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the architecture of the generative adversarial network model VCloudGAN, which generates a virtual satellite cloud picture based on interactive scenario and deep learning, in an embodiment of the present invention;
Fig. 3 is a schematic comparison between a simulated visible-light satellite cloud picture and the actual cloud picture product in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in detail with reference to the accompanying drawings.
In order to simulate natural-environment scenes and train weather forecasters, virtual meteorological data needs to be generated interactively according to a planned scenario. The satellite cloud picture is important meteorological data, and related research shows that satellite cloud pictures, together with other data, can be used to generate further virtual meteorological data such as virtual weather radar maps.
Among prior-art approaches, the generative adversarial network model from the deep learning field has been one of the more actively researched directions in artificial intelligence in recent years: through the contest between a generator and a discriminator, the parameters of both are optimized during training so that a data mapping from domain A to domain B is found and conversion from A to B is realized. Such models are widely applied in image generation and can produce images with a high degree of realism.
Through extensive research, a training data set consisting of cloud cover products, weather phenomenon identifications and the satellite cloud pictures of the corresponding times and regions is constructed, and a virtual cloud picture deep learning model is established and trained, so that the data mapping among cloud cover, weather phenomenon identification and the corresponding satellite cloud picture can be discovered. In an actual weather scenario rehearsal, the cloud cover product and weather phenomenon identification are first edited through human-computer interaction according to the planned weather scenario, and the virtual satellite cloud picture product corresponding to that scenario can then be generated with the virtual cloud picture deep learning model; this theory and method can be applied to generating virtual satellite cloud pictures based on the scenario.
As shown in fig. 1, the present invention provides a method for generating a virtual cloud image based on interactive scenario and deep learning, the method comprising the following steps:
step 101, constructing a training data set according to historical weather phenomenon observation data.
In this embodiment, historical weather phenomenon observation data are first obtained; the data are then sorted and analyzed to determine the type, position and range of each historical weather phenomenon, which is identified on the corresponding cloud cover product using color blocks or weather phenomenon samples; meanwhile, the real satellite cloud pictures of the corresponding times and regions are collected and sorted; finally, the training data set is constructed from the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products together with the collected real satellite cloud pictures of the corresponding times and regions.
It should be noted that the cloud cover product is a description of cloud coverage over a given area: it indicates the distribution of clouds in a specific region and reflects the amount of cloud, and it can be obtained by observation, satellite or radar inversion, numerical model products and similar means. The cloud cover value of a single grid point is usually expressed as an integer from 0 to 10 or a one-decimal value from 0.0 to 1.0. Weather phenomena are observed and released by the operational observation work of meteorological institutions; the types commonly used in the field of meteorology include, but are not limited to, typhoons, thunderstorms, heavy precipitation, hail, fog and other observable phenomena, and the data released by the various institutions generally include parameters such as the position, affected area and intensity level of each specific weather phenomenon. When color blocks are used to identify a weather phenomenon on the corresponding cloud cover product, the color of a block can represent the intensity characteristics of the phenomenon and its position can represent the affected area; when a weather phenomenon sample is used, the profile of the sample can represent both the intensity characteristics and the affected area of the phenomenon. The satellite cloud picture is meteorological-satellite data covering the same region as the cloud cover product: a specific visible-light channel, a water vapor channel, an infrared channel, or a combination of several channels.
In addition, the collected satellite cloud picture data, cloud cover products and weather phenomenon observation data may not be aligned in time and region. Since the model to be built is to generate virtual satellite cloud picture products from the cloud cover and weather phenomenon identification products, and the discrimination network must be trained, the cloud cover product, weather phenomenon identification and satellite cloud picture data of the same time and the same region need to be obtained through this sorting.
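As an illustration of this sorting step, the sketch below assembles time- and region-aligned (cloud cover, weather phenomenon identification, satellite cloud picture) triplets. It is a minimal sketch only: the directory layout, file naming, NumPy storage and hourly alignment rule are assumptions of the example, not something specified by the patent.

```python
# Minimal sketch: assemble time/region-aligned training triplets
# {C(t) + W(t) -> FY4A(t)}. Directory layout, file naming and .npy
# storage are illustrative assumptions.
from datetime import datetime, timedelta
from pathlib import Path

import numpy as np


def load_grid(path: Path) -> np.ndarray:
    """Load one gridded product (cloud cover, phenomenon mask or cloud picture)."""
    return np.load(path)


def build_triplets(root: Path, start: datetime, end: datetime):
    """Yield (C(t), W(t), FY4A(t)) only for hours t at which all three
    products exist, discarding unaligned times."""
    t = start
    while t <= end:
        stamp = t.strftime("%Y%m%d%H")
        c = root / "cloud_cover" / f"C_{stamp}.npy"
        w = root / "phenomenon_mask" / f"W_{stamp}.npy"
        y = root / "fy4a" / f"FY4A_{stamp}.npy"
        if c.exists() and w.exists() and y.exists():
            yield load_grid(c), load_grid(w), load_grid(y)
        t += timedelta(hours=1)
```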
Step 102, constructing and training a virtual cloud picture generation model based on the deep generative adversarial network according to the training data set to obtain the trained virtual cloud picture generation model.
In this embodiment, the virtual cloud picture generation model based on the deep generative adversarial network mainly comprises a generation network G and a discrimination network D. The generation network G uses an encoder-decoder network as its backbone, and the discrimination network D uses a binary classification network composed of several convolutional neural network layers as its backbone. The encoder-decoder can adopt a U-Net structure, i.e., an encoder-decoder with skip connections, which is widely used in image segmentation. The discrimination network D is a multilayer convolutional classification network: it judges whether an input image is a real satellite cloud picture or a generated virtual cloud picture by computing the probability that the input is a real satellite cloud picture. During training, the discrimination network D tries to correctly separate real satellite cloud pictures from virtual ones, while the generation network G tries to generate cloud pictures so realistic that D cannot tell true from false. As shown in fig. 2, the input product on the left can be converted by the virtual cloud picture generation model into the simulated cloud picture product of a specific channel on the right; the generation network G corresponds to the generator in fig. 2, and the discrimination network D to the discriminator.
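A heavily simplified sketch of such a generator/discriminator pair is given below, assuming PyTorch. The layer counts, channel widths and the two-channel condition (one cloud cover channel plus one phenomenon-identification channel, with one appended noise channel) are assumptions of the sketch, not the patent's prescribed architecture.

```python
import torch
import torch.nn as nn


class UNetGenerator(nn.Module):
    """Toy encoder-decoder with one skip connection (a full U-Net would
    stack more levels). Input: cloud cover + phenomenon mask + noise."""

    def __init__(self, in_ch: int = 3, out_ch: int = 1, base: int = 64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1),
                                  nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                  nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                  nn.BatchNorm2d(base), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1),
                                  nn.Tanh())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        return self.dec2(torch.cat([d1, e1], dim=1))  # skip connection


class ConvDiscriminator(nn.Module):
    """Stack of convolutional layers ending in per-patch real/fake
    probabilities, realising the binary classification network D."""

    def __init__(self, in_ch: int = 3, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, 2, 1),
            nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, 1, 1), nn.Sigmoid())

    def forward(self, cond: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
        # D sees the condition x (cloud cover + phenomenon mask) together
        # with a real or generated cloud picture, matching D(x, y).
        return self.net(torch.cat([cond, img], dim=1))
```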
Preferably, training the virtual cloud picture generation model based on the deep generative adversarial network comprises the following steps: determining, from the training data set, the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products and the real satellite cloud pictures of the corresponding times and regions; taking the identification results and the real satellite cloud pictures as the input of the generation network G; taking the identification results, the real satellite cloud pictures of the corresponding times and regions, and the virtual satellite cloud pictures generated by the generation network G as the input of the discrimination network D, so that the cloud cover product and the weather phenomenon identification participate in the computation of the discrimination network; and performing iterative training to obtain the trained virtual cloud picture generation model.
Further, the iterative training process is as follows: the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products, the real satellite cloud pictures of the corresponding times and regions, and the virtual satellite cloud pictures generated by the generation network G are input in batches to the discrimination network D, and the parameters of D are updated by back propagation using the discrimination network loss function L_D; the parameters of the discrimination network D are then frozen; the identification results on the corresponding cloud cover products and the real satellite cloud pictures of the corresponding times and regions are input in batches to the generation network G, and the parameters of G are updated by back propagation using the generation network loss function L_G; the above process is repeated until the capabilities of the generation network G and the discrimination network D reach balance (i.e., continued iteration brings no obvious improvement to either network), and the model parameters at that point give the trained virtual cloud picture generation model.
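One round of this alternating update might look as follows in PyTorch, using the illustrative networks sketched above. The noise-channel handling and the loss weights lam1/lam2 are assumptions of the sketch; the two loss terms anticipate formulas (1) and (2) defined below.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()   # L_bce of formula (3)
l1 = nn.L1Loss()     # the |y - G(x, z)| term of formula (2)


def train_step(G, D, opt_G, opt_D, x, y, lam1=1.0, lam2=100.0):
    """One alternating update. x: batch of cloud cover + phenomenon masks;
    y: matching real satellite cloud pictures. lam1/lam2 stand in for the
    scale parameters lambda_1, lambda_2; their values here are assumptions."""
    z = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device)
    fake = G(torch.cat([x, z], dim=1))        # G(x, z)

    # 1) Update the discrimination network D with L_D (formula (1)).
    opt_D.zero_grad()
    d_real = D(x, y)
    d_fake = D(x, fake.detach())              # detach: G is untouched here
    loss_D = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    loss_D.backward()
    opt_D.step()

    # 2) Freeze D (only opt_G steps, so D's parameters stay fixed) and
    #    update G with L_G (formula (2)): adversarial term + L1 distance.
    opt_G.zero_grad()
    d_fake = D(x, fake)
    loss_G = lam1 * bce(d_fake, torch.ones_like(d_fake)) + lam2 * l1(fake, y)
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```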
Further, binary cross entropy may be used as a loss measure.
For the discrimination network D, the loss function (discrimination network loss function $L_D$) is as follows:

$$L_D = L_{bce}(D(x, y), 1) + L_{bce}(D(x, G(x, z)), 0) \quad (1)$$

where $x$ denotes the combined data of the cloud cover product and the weather phenomenon identification, $y$ denotes the corresponding real satellite cloud picture, $G(x, z)$ denotes the virtual satellite cloud picture generated by the generation network G from $x$ and random noise $z$, and $D(x, y)$ denotes the probability, output by the discrimination network D, that the input is a real satellite cloud picture.
For the generation network G, the loss function (generation network loss function $L_G$) is as follows:

$$L_G = \lambda_1 L_{bce}(D(x, G(x, z)), 1) + \lambda_2 \, |y - G(x, z)| \quad (2)$$

where $\lambda_1$ and $\lambda_2$ are two scale parameters, and the second term is the pixel-wise absolute (L1) distance between the real and the generated cloud picture.
where $L_{bce}$ is the binary cross entropy:

$$L_{bce}(\hat{a}, a) = -\frac{1}{N} \sum_{i=1}^{N} \left[ a_i \log \hat{a}_i + (1 - a_i) \log(1 - \hat{a}_i) \right] \quad (3)$$

where $N$ is the number of samples in one batch input to the model; $a \in \{0, 1\}$ is the label of the input data, 0 denoting a simulated satellite cloud picture and 1 a real satellite cloud picture; and $\hat{a}$ is the discrimination value output by the discrimination network D: the closer it is to 0, the more likely the input is a virtual satellite cloud picture, and the closer to 1, the more likely it is a real one.
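For concreteness, formula (3) can be written directly as the following sketch; numerically it matches torch.nn.BCELoss, and the clamping epsilon is an added assumption for numerical stability.

```python
import torch


def bce_loss(a_hat: torch.Tensor, a: torch.Tensor, eps: float = 1e-7):
    """Formula (3): a in {0, 1} are labels (0 = simulated, 1 = real),
    a_hat are the discrimination values output by network D."""
    a_hat = a_hat.clamp(eps, 1 - eps)  # avoid log(0)
    return -(a * torch.log(a_hat) + (1 - a) * torch.log(1 - a_hat)).mean()
```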
In addition, because the resolutions of the cloud cover product, the weather phenomenon identification and the meteorological satellite cloud picture product are not necessarily consistent, an up-sampling module needs to be added to the generative adversarial network model to unify the resolution of the input data. The model is then adversarially trained on the training data set in the usual way for generative adversarial networks, and the resulting model parameters capture the relevant relationships among the cloud cover product, the weather phenomenon identification and the meteorological satellite cloud picture product.
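A minimal form of such an up-sampling module is sketched below; bilinear interpolation is an assumption of the sketch, as the patent only requires that the input resolutions be unified.

```python
import torch
import torch.nn.functional as F


def unify_resolution(product: torch.Tensor, target_hw) -> torch.Tensor:
    """Resample a (B, C, H, W) product grid onto target_hw so that cloud
    cover, phenomenon identification and satellite image share one grid."""
    return F.interpolate(product, size=target_hw, mode="bilinear",
                         align_corners=False)


# Example: bring a coarse 64x64 cloud cover grid onto a 256x256 image grid.
coarse = torch.rand(8, 1, 64, 64)
fine = unify_resolution(coarse, (256, 256))
```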
Step 103, setting a planned weather scenario through human-computer interaction.
In this embodiment, the cloud cover product corresponding to a basic satellite cloud picture can be customized and modified with a graphical editing tool through human-computer interaction, and marked with color blocks or weather phenomenon samples (using the same weather phenomenon identification method as for the training data set in step 101), completing the setting of the planned weather scenario. For example, the cloud cover product can be converted into picture format and its content edited with an existing or self-developed image editing tool: deleting, copying, pasting, translating, rotating, scaling, dimming, enhancing, and so on; the weather phenomenon identification is placed as required, moved to the desired position, and scaled to the intended affected area of the phenomenon.
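The listed editing operations can be reproduced with any off-the-shelf image library; the sketch below uses Pillow, and all file names, coordinates and factors are purely illustrative assumptions.

```python
from PIL import Image, ImageEnhance

# Edit a cloud cover product that has been exported as a grayscale picture.
img = Image.open("cloud_cover_base.png").convert("L")

patch = img.crop((50, 300, 150, 400))            # copy a cloud region
img.paste(patch, (20, 350))                      # paste it elsewhere
img = img.rotate(10, resample=Image.BILINEAR)    # rotate the cloud field
img = ImageEnhance.Brightness(img).enhance(1.2)  # enhance (more cloud)

# Place a typhoon weather phenomenon sample at the planned position
# and scale it to the planned affected area.
typhoon = Image.open("typhoon_sample.png").convert("L")
typhoon = typhoon.resize((120, 120))
img.paste(typhoon, (200, 180))
img.save("cloud_cover_edited.png")
```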
Step 104, taking the customized cloud cover product carrying the weather phenomenon identification as input to the trained virtual cloud picture generation model, reconstructing the virtual satellite cloud picture under the planned weather scenario and outputting it as the result.
In summary, aiming at the problem that existing methods cannot generate highly realistic meteorological products from a scenario, the invention provides a method for generating a virtual satellite cloud picture based on interactive scenario and deep learning. A concrete example is given in which the cloud cover product is edited through human-computer interaction and a typhoon weather phenomenon is identified to generate a virtual Fengyun-4A geostationary satellite product, demonstrating the effectiveness of the method; the method can thus serve as a basis and precursor for generating various kinds of virtual meteorological data.
On the basis of the above-described embodiments, a specific example will be described below.
The specific implementation of the method for generating a virtual satellite cloud picture based on interactive scenario and deep learning is as follows:
(1) Historical weather phenomenon observation data are collected and sorted; according to the types, positions and ranges of the historical weather phenomena, color blocks or historical weather phenomenon samples are used for identification on the corresponding cloud cover products; and the satellite cloud pictures of the corresponding times and regions are collected and sorted, the whole together forming the training data set.
The weather phenomenon chosen is the typhoon. Satellite-inversion cloud cover products corresponding to the occurrence times of historical typhoons within three years are collected; the selected products cover 20°N to 50°N and 100°E to 130°E. Historical weather phenomenon samples are used for identification: typhoon sample grayscale images are stored in sample files, and the satellite cloud pictures of the corresponding times and regions are collected.
This example uses the 4 km resolution products of the Fengyun-4A (FY-4A) geostationary satellite. FY-4A is the first satellite of China's second-generation geostationary-orbit meteorological satellites and carries several payloads, including a multichannel scanning imager, an interferometric atmospheric vertical sounder, a lightning imager and a space environment monitoring instrument package; the products of the multichannel scanning imager carried by FY-4A are selected as the object to be simulated in this work. The multichannel scanning imager is one of the main payloads of FY-4A: it performs high-frequency, high-precision, multispectral quantitative remote sensing of the earth's surface and of the physical state parameters of clouds, directly serving weather analysis and forecasting, short-term climate prediction, and environment and disaster monitoring. Its observation bands cover visible light, near infrared, shortwave infrared, mediumwave infrared and longwave infrared, so it can observe both the full picture of large-scale weather systems and the rapid evolution of meso- and small-scale weather systems. The imager has 14 channels: 7 visible/near-infrared channels and 7 infrared channels. Of the 14 channels, 1 has 500 m ground resolution, 2 have 1 km, 4 have 2 km, and 7 have 4 km; a full-disc observation takes 15 minutes. This embodiment selects visible-light channels 1, 2 and 3 as the target products. To unify product resolution without loss of generality, the 4 km resolution products are used, taken hourly on the hour.
Finally, the constructed training data set is {C(t) + W(t) → FY4A(t)}, where C(t) is the cloud cover product selected in this embodiment at time t, W(t) is the typhoon identification map at time t, and FY4A(t) is the geostationary meteorological satellite channel-combination product at time t.
(2) A virtual cloud picture generation model based on a deep generative adversarial network is constructed from the training data set and trained.
The generative adversarial network model VCloudGAN for simulated satellite cloud pictures, whose architecture is shown in fig. 2, is constructed and trained on the training set {C(t) + W(t) → FY4A(t)} with the loss functions given by formulas (1)-(3), iterating as follows:
selecting a batch { C (t) + W (t) } from { C (t) + W (t) } 8 As input to the model.
Training the discriminator D, and obtaining the generated simulated satellite cloud picture product through the forward calculation of the model VCloudgAN generator G
Figure BDA0003795748980000101
Inputting an original numerical value pattern product, a real satellite cloud picture and a simulation satellite cloud picture generated by a generator G in batches to a discriminator D { (C (t) + W (t), FY4A (t)) } 8 And
Figure BDA0003795748980000102
updating parameters of the discriminator D by means of back propagation by using formula (1); then, parameters of the discriminator D are frozen, and a numerical mode product ECWMF (t) and corresponding real satellite cloud pictures { (C (t) + W (t), FY4A (t)) }are input into the generator G in batches 8 The parameters for updating G by back propagation are calculated as in equation (2). The parameter optimization method selected in the embodiment of the invention is an ADAM method, and after each batch of training is completed, a batch of data is taken down to repeat the process until the capacities of the generator and the discriminator are balanced. After 500 iteration cycles (each cycle comprising one treatment of all batches), the determined model parameters are obtained.
During training, the optimization method is mini-batch stochastic gradient descent (minibatch SGD) with the ADAM optimizer; the learning rate is 0.0002 and the optimizer momentum parameters are $\beta_1 = 0.5$ and $\beta_2 = 0.999$.
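Wiring the embodiment's settings to the earlier sketches might look as follows; UNetGenerator, ConvDiscriminator and train_step refer to the illustrative code above, not to any implementation disclosed by the patent.

```python
import torch

G = UNetGenerator()        # illustrative networks sketched earlier
D = ConvDiscriminator()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
# Batches of 8 aligned samples {C(t)+W(t), FY4A(t)}_8 are then fed to
# train_step() for 500 cycles over the training set.
```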
(3) The cloud cover product corresponding to the basic cloud picture is customized and modified through human-computer interaction, and the weather phenomenon is marked with color blocks or samples using the same weather phenomenon identification method as for the training set, thereby setting up the planned weather scenario.
A time is selected as the base time, and the cloud cover product of that time serves as the basic cloud cover product. It is converted into picture format, and image operations such as deleting, copying, pasting, translating, rotating, scaling, dimming and enhancing are applied to its content with an existing or self-developed image editing tool; the weather phenomenon identification is placed as required, moved to the desired position, and scaled to the intended affected area. Fig. 3 shows an example of cloud picture transformation based on 11:00 on 10 July 2018, with two customized scenarios. In fig. 3, column a shows the products of the basic cloud picture. Column b is customized scenario 1, in which the lower left is changed into thin cirrus on the original basis; the human-computer interaction edits are: in sub-figure b.1, a region is copied and pasted into the lower left of the basic cloud cover product to form thin cirrus, while the typhoon weather phenomenon identification in sub-figure b.2 is unchanged. Column c is customized scenario 2, in which the typhoon moves to the left and thin cirrus replaces it in its original place; the edits are: in sub-figure c.1, the cloud cover of the original typhoon area is moved to the left and thin cirrus is copied and pasted into the typhoon's original position, and in sub-figure c.2 the typhoon weather phenomenon identification is moved to the typhoon cloud position of sub-figure c.1.
(4) The customized cloud cover product carrying the weather phenomenon identification is taken as the input of the virtual satellite cloud picture generation model, and the virtual satellite cloud picture under the planned weather scenario is reconstructed and output as the result.
The customized cloud cover product and typhoon identification set in step (3) are taken as input, the trained virtual cloud picture generation model is run, and the corresponding simulated satellite cloud picture data product is reconstructed as output. The reconstruction effect is shown in fig. 3: sub-figure b.3 is the generated virtual cloud picture under customized scenario 1, and sub-figure c.3 the one under customized scenario 2. Visual inspection shows that the method of the invention can convincingly produce virtual cloud picture products under an interactive scenario.
Meanwhile, the inventors have also carried out virtual satellite cloud picture generation tests with infrared channel data and water vapor channel data.
On the basis of the above embodiments, the invention also discloses a system for generating a virtual cloud picture based on interactive scenario and deep learning, comprising: a training data set construction module for constructing a training data set according to historical weather phenomenon observation data; a model building module for constructing and training, on the training data set, a virtual cloud picture generation model based on a deep generative adversarial network to obtain a trained virtual cloud picture generation model; a setting module for setting a planned weather scenario through human-computer interaction; and a generation module for taking the customized cloud cover product carrying the weather phenomenon identification as input to the trained virtual cloud picture generation model, reconstructing the virtual satellite cloud picture under the planned weather scenario and outputting it as the result.
For the system embodiment, since it corresponds to the method embodiment, the description is relatively simple, and for the relevant points, refer to the description of the method embodiment section.
Although the present invention has been described with reference to preferred embodiments, these are not intended to limit the invention; those skilled in the art may make variations and modifications using the methods and technical content disclosed above without departing from the spirit and scope of the invention.
Those skilled in the art will appreciate that the invention may be practiced without these specific details.

Claims (10)

1. A method for generating a virtual cloud picture based on interactive scenario and deep learning, characterized by comprising the following steps:
constructing a training data set according to historical weather phenomenon observation data;
constructing and training, on the training data set, a virtual cloud picture generation model based on a deep generative adversarial network to obtain a trained virtual cloud picture generation model;
setting a planned weather scenario through human-computer interaction;
and taking the customized cloud cover product carrying the weather phenomenon identification as input to the trained virtual cloud picture generation model, reconstructing the virtual satellite cloud picture under the planned weather scenario and outputting it as the result.
2. The method for generating a virtual cloud picture based on interactive scenario and deep learning of claim 1, wherein constructing a training data set according to historical weather phenomenon observation data comprises:
acquiring historical weather phenomenon observation data;
sorting and analyzing the historical weather phenomenon observation data, determining the type, position and range of each historical weather phenomenon, and identifying the phenomena on the corresponding cloud cover products using color blocks or weather phenomenon samples; meanwhile, collecting and sorting the real satellite cloud pictures of the corresponding times and regions;
and constructing the training data set from the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products together with the collected real satellite cloud pictures of the corresponding times and regions.
3. The method for generating a virtual cloud picture based on interactive scenario and deep learning of claim 1, wherein setting a planned weather scenario through human-computer interaction comprises:
customizing and modifying the cloud cover product corresponding to a basic satellite cloud picture through human-computer interaction, and marking it with color blocks or weather phenomenon samples to complete the setting of the planned weather scenario.
4. The method for generating a virtual cloud picture based on interactive scenario and deep learning of claim 2 or 3, wherein the cloud cover product describes the distribution of clouds over a specific area and reflects the amount of cloud; and the types of weather phenomena include: typhoons, thunderstorms, heavy precipitation, hail and fog.
5. The method for generating a virtual cloud picture based on interactive scenario and deep learning of claim 2 or 3, wherein the color of a color block is used to represent the intensity characteristics of the weather phenomenon and the position of the color block is used to represent its affected area; and the profile of a weather phenomenon sample is used to represent both the intensity characteristics and the affected area of the weather phenomenon.
6. The method for generating a virtual cloud picture based on interactive scenario and deep learning of claim 2 or 3, wherein the satellite cloud picture is meteorological-satellite data covering the same region as the cloud cover product: a specific visible-light channel, a water vapor channel, an infrared channel, or a combination of several channels.
7. The method for generating a virtual cloud picture based on interactive scenario and deep learning of claim 1, wherein the virtual cloud picture generation model based on the deep generative adversarial network comprises: a generation network G and a discrimination network D; the generation network G uses an encoder-decoder network as its backbone, and the discrimination network D uses a binary classification network composed of several convolutional neural network layers as its backbone.
8. The method for generating a virtual cloud picture based on interactive scenario and deep learning of claim 7, wherein training the virtual cloud picture generation model based on the deep generative adversarial network comprises:
determining, from the training data set, the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products and the real satellite cloud pictures of the corresponding times and regions;
taking the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products and the real satellite cloud pictures of the corresponding times and regions as the input of the generation network G;
taking the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products, the real satellite cloud pictures of the corresponding times and regions, and the virtual satellite cloud pictures generated by the generation network G as the input of the discrimination network D;
and performing iterative training to obtain the trained virtual cloud picture generation model.
9. The method for generating a virtual cloud picture based on interactive scenario and deep learning of claim 8, wherein the iterative training process comprises:
inputting, in batches, the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products, the real satellite cloud pictures of the corresponding times and regions, and the virtual satellite cloud pictures generated by the generation network G into the discrimination network D, and updating the parameters of the discrimination network D by back propagation using the discrimination network loss function L_D;
freezing the parameters of the discrimination network D;
inputting, in batches, the identification results of the types, positions and ranges of the historical weather phenomena on the corresponding cloud cover products and the real satellite cloud pictures of the corresponding times and regions into the generation network G, and updating the parameters of the generation network G by back propagation using the generation network loss function L_G;
and repeating the above process until the capabilities of the generation network G and the discrimination network D reach balance, and taking the model parameters at that point as the trained virtual cloud picture generation model.
10. A system for generating a virtual cloud picture based on interactive scenario and deep learning, characterized by comprising:
a training data set construction module for constructing a training data set according to historical weather phenomenon observation data;
a model building module for constructing and training, on the training data set, a virtual cloud picture generation model based on a deep generative adversarial network to obtain a trained virtual cloud picture generation model;
a setting module for setting a planned weather scenario through human-computer interaction;
and a generation module for taking the customized cloud cover product carrying the weather phenomenon identification as input to the trained virtual cloud picture generation model, reconstructing the virtual satellite cloud picture under the planned weather scenario and outputting it as the result.
CN202210968903.3A 2022-08-12 2022-08-12 Method and system for generating virtual cloud picture based on interactive scenario and deep learning Pending CN115393731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210968903.3A CN115393731A (en) 2022-08-12 2022-08-12 Method and system for generating virtual cloud picture based on interactive scenario and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210968903.3A CN115393731A (en) 2022-08-12 2022-08-12 Method and system for generating virtual cloud picture based on interactive scenario and deep learning

Publications (1)

Publication Number Publication Date
CN115393731A 2022-11-25

Family

ID=84118945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210968903.3A Pending CN115393731A (en) 2022-08-12 2022-08-12 Method and system for generating virtual cloud picture based on interactive scenario and deep learning

Country Status (1)

Country Link
CN (1) CN115393731A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437230A (en) * 2023-12-21 2024-01-23 山东科技大学 Photovoltaic power station power prediction method and system based on image restoration
CN117437230B (en) * 2023-12-21 2024-04-05 山东科技大学 Photovoltaic power station power prediction method and system based on image restoration


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination