CN115222837A - True color cloud picture generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115222837A
CN115222837A (application CN202210719485.4A)
Authority
CN
China
Prior art keywords
data
cloud picture
true color
cloud
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210719485.4A
Other languages
Chinese (zh)
Inventor
徐娜
吴佳蔚
张璐
胡奇
Current Assignee
National Satellite Meteorological Center
Original Assignee
National Satellite Meteorological Center
Priority date
Filing date
Publication date
Application filed by National Satellite Meteorological Center filed Critical National Satellite Meteorological Center
Priority to CN202210719485.4A
Publication of CN115222837A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour
    • G06T11/20: Drawing from basic elements, e.g. lines or circles
    • G06T11/203: Drawing of straight lines or curves
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Arrangements using pattern recognition or machine learning using neural networks
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a true color cloud picture generation method and device, electronic equipment and a storage medium, belonging to the technical field of meteorology. The method comprises the following steps: acquiring infrared cloud picture data and numerical mode product data of a target area at the same time; inputting the preprocessed infrared cloud picture data and numerical mode product data into a true color cloud picture generation model to obtain the true color cloud picture of the corresponding time output by the model, wherein the true color cloud picture generation model is generated by pre-training an improved Pix2Pix network. Through reasonable model structure design, the invention provides a true color cloud picture generation model based on an improved Pix2Pix network that can generate true color cloud pictures of any size without changing the overall structure of the model, solving the problem of missing visible-light channel data at night. Experimental verification and quantitative evaluation show that its MAE (mean absolute error) and RMSE (root mean square error) are superior to those of existing simulated cloud picture generation methods.

Description

True color cloud picture generation method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of weather, in particular to a true color cloud picture generation method and device, electronic equipment and a storage medium.
Background
The FY-4A satellite (Fengyun-4A) carries the Advanced Geostationary Radiation Imager (AGRI), whose observation bands cover visible light to long-wave infrared. Cloud images from the visible and near-infrared bands have high resolution and rich information and are very important for observing weather processes such as typhoons; their energy comes from solar radiation reflected by the earth's surface and the atmosphere, so targets can be distinguished by the different reflectivities of different objects. However, because there is no solar radiation at night, AGRI cannot observe this information at night. In addition, the surface radiation characteristics and temperature at night differ from those in the daytime, which greatly increases the difficulty of nighttime cloud prediction, so real-time weather monitoring and satellite remote sensing are severely limited at night. Therefore, simulated generation of the nighttime visible-light true color cloud picture (hereinafter referred to as the true color cloud picture) can make up for the lack of AGRI visible-light channel data at night.
In the field of simulated generation of nighttime visible-light true color cloud pictures, researchers have made certain progress. The main idea is to establish a mapping relation using meteorological products that can still be detected at night, such as satellite infrared cloud pictures of the same time and the same region, to generate the corresponding true color cloud picture. This approach does not require the model to extract time-dimension information and can realize simulated generation of the true color cloud picture at any time of night. In 2021, Cheng Wencong et al. proposed a satellite cloud picture simulation generation method based on a generative adversarial network (GAN) and numerical mode products. The method realizes simulation of the FY-4A satellite infrared 12-channel cloud picture and of the daytime and nighttime fused cloud picture of visible-light channels 1, 2 and 3. In the same year, Cheng Wencong et al. further proposed a nighttime meteorological satellite simulated cloud picture generation method based on a generative adversarial network, in which satellite infrared channel cloud picture data and numerical mode products of the same time and same region are input together into a GAN model to generate the nighttime simulated cloud picture.
However, the true color cloud picture generation method based on the GAN model cannot conveniently switch the size of the simulated cloud picture: once the size changes, the network structure of the discriminator must be redesigned, so the model cannot be flexibly applied to generating simulated cloud pictures for an arbitrary geographic range and has no expansion capability.
Disclosure of Invention
The invention provides a true color cloud picture generation method, a true color cloud picture generation device, electronic equipment and a storage medium, which overcome the defect that prior-art models cannot be flexibly applied to generating simulated cloud pictures for an arbitrary geographic range. The invention has good expandability and can perform all-weather true color cloud picture monitoring of typhoons moving over a large range or of the interaction of multiple typhoon weather systems.
In a first aspect, the present invention provides a method for generating a true color cloud picture, including:
acquiring infrared cloud picture data and numerical mode product data of a target area at the same time;
performing geographic positioning, radiometric calibration and data normalization processing on the infrared cloud chart data, and performing normalization processing on the numerical mode product data;
inputting the processed infrared cloud picture data and the processed numerical mode product data into a true color cloud picture generation model to obtain a true color cloud picture of a corresponding time output by the true color cloud picture generation model;
The true color cloud picture generation model is generated after an improved Pix2Pix network is trained by utilizing a pre-constructed data training set.
According to the method for generating the true color cloud picture provided by the invention, the improved Pix2Pix network specifically comprises the following steps:
adding, at the input ends of the generator and the discriminator of the original Pix2Pix network, an up-sampling module for the numerical mode product data, the up-sampling module being composed of a plurality of transposed convolution layers;
setting the structure of the discriminator to a U-Net structure comprising a down-sampling module and an up-sampling module;
adding a convolution attention module in the generator;
setting all convolution layers of the improved Pix2Pix network as depthwise separable convolution layers.
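The point of depthwise separable convolution layers is parameter efficiency: a standard convolution learns one k x k x C_in kernel per output channel, while a depthwise separable layer learns one k x k kernel per input channel plus a 1 x 1 pointwise mixing step. As an illustrative sketch (not taken from the patent), the weight counts compare as follows:

```python
def conv_params(k, c_in, c_out):
    # Standard 2D convolution: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise separable convolution = depthwise stage (one k x k kernel
    # per input channel) + pointwise 1x1 convolution mixing the channels.
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer with 64 input and 128 output channels (biases omitted).
std = conv_params(3, 64, 128)            # 73,728 weights
sep = separable_conv_params(3, 64, 128)  # 576 + 8,192 = 8,768 weights
print(std, sep, round(std / sep, 1))     # roughly an 8x reduction
```

For a 3x3 kernel the saving approaches a factor of 9 as the channel counts grow, which is why this substitution keeps the improved network lightweight without changing its overall structure.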
According to the method for generating the true color cloud picture, the improved Pix2Pix network is trained by utilizing the pre-constructed data training set, and the method comprises the following steps:
obtaining a plurality of training samples; the sample data of each training sample comprises infrared cloud picture data and numerical mode product data of the same time and the same region, and the label of each training sample is a true color cloud picture of the corresponding time and the corresponding region;
sequentially performing geographic positioning, radiometric calibration, visible-light channel reflectivity correction and data normalization on the infrared cloud picture data in each sample data and on the true color cloud pictures in the corresponding labels, and performing data normalization on the numerical mode product data in each sample data;
Cleaning all training samples, and constructing a data training set by using all cleaned training samples so as to train the improved Pix2Pix network by using the data training set;
performing data cleaning on all training samples, including removing the training samples that contain missing values or invalid filling values, and retaining only the training samples located in a preset daytime period;
and iteratively training the improved Pix2Pix network by using all training samples after data cleaning and the label corresponding to each training sample.
According to the method for generating the true color cloud picture provided by the invention, training the improved Pix2Pix network with the data training set comprises iteratively executing the following steps for each training sample until the mean absolute error determined on a verification set is smaller than a preset threshold:
fixing the parameters of the generator and updating the parameters of the discriminator by back propagation using the discriminator loss function, according to the label input to the discriminator and the true color cloud picture generated by the generator from the sample data corresponding to that label;
and fixing the parameters of the discriminator and updating the parameters of the generator by back propagation using the generator loss function, according to the sample data input to the generator and the label corresponding to that sample data.
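The alternating update scheme and MAE-based stopping rule described above can be sketched as a generic training loop. The update functions here are placeholders standing in for the actual back-propagation steps through the discriminator and generator losses:

```python
def train_gan(update_d, update_g, val_mae, mae_threshold, max_iters=1000):
    """Alternate discriminator/generator updates until the mean absolute
    error on a verification set drops below the preset threshold."""
    for it in range(max_iters):
        update_d()  # generator parameters held fixed
        update_g()  # discriminator parameters held fixed
        if val_mae() < mae_threshold:
            return it + 1  # number of iterations actually run
    return max_iters

# Toy usage: a stand-in "generator update" that halves the validation MAE.
state = {"mae": 1.0}
def fake_update_d():
    pass
def fake_update_g():
    state["mae"] *= 0.5
iterations = train_gan(fake_update_d, fake_update_g,
                       lambda: state["mae"], mae_threshold=0.1)
```

Only the control flow is meaningful here; the real update steps optimize the loss functions described in the surrounding paragraphs.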
According to the method for generating the true color cloud picture, the discriminator loss function is determined based on the loss function of the discriminator of the Pix2Pix network before improvement and the loss function of the U-Net structure; the generator loss function is determined based on the loss function of the generator of the Pix2Pix network before improvement and the discriminator of the U-Net structure.
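The patent does not spell out the exact loss expressions, so the following is a hedged sketch based on the standard Pix2Pix formulation (per-pixel adversarial loss plus a weighted L1 term), extended to a U-Net-style discriminator that outputs a real/fake score for every pixel rather than a single scalar:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy averaged over all pixels of the score map.
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred) +
                           (1 - target) * np.log(1 - pred))))

def discriminator_loss(d_real_map, d_fake_map):
    """U-Net discriminator loss sketch: real images should score 1 at
    every pixel, generated images 0 at every pixel."""
    return (bce(d_real_map, np.ones_like(d_real_map)) +
            bce(d_fake_map, np.zeros_like(d_fake_map)))

def generator_loss(d_fake_map, fake, target, l1_weight=100.0):
    """Generator loss sketch: fool the discriminator at every pixel, plus
    an L1 reconstruction term (the 100x weight follows the original
    Pix2Pix paper, not this patent)."""
    return (bce(d_fake_map, np.ones_like(d_fake_map)) +
            l1_weight * float(np.mean(np.abs(fake - target))))
```

Because the adversarial term is averaged per pixel, the same loss applies regardless of the image size, which is what allows the discriminator to handle cloud pictures of any size.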
According to the true color cloud picture generation method provided by the invention, the infrared cloud picture data comprise full-disk data of the FY-4A satellite AGRI, and the numerical mode product data comprise ERA5 data.
According to the method for generating the true color cloud picture, provided by the invention, the cloud picture data is geographically positioned, and the method comprises the following steps: projecting the cloud picture data to an equal longitude and latitude grid so as to extract corresponding data from the equal longitude and latitude grid according to longitude and latitude information of a region corresponding to the cloud picture data;
radiometric calibration of cloud map data, comprising: taking the digital quantization value of each pixel in the cloud picture of each channel in the cloud picture data as an index; extracting the radiant brightness temperature or reflectivity corresponding to the index of each pixel from the calibration table to assign a value to each pixel;
In the case where the cloud data includes visible light cloud data of channels 1 to 3 and infrared cloud data of channels 7 to 14 of an FY-4A satellite AGRI, performing visible light channel reflectance correction on the cloud data, including: correcting the reflectivity of the visible light cloud image data of the channels 1 to 3 after radiometric calibration to be an apparent reflectivity;
the calculation formula of data normalization is as follows:
x* = (x - min) / (max - min)
wherein x is the data value before normalization and x* is the normalized data value; for the visible-light cloud picture data of channels 1 to 3, max is 1 and min is 0; for the infrared cloud picture data of channels 7 to 14, max is 350 and min is 0; for any level of the ERA5 data, max is the maximum value of that level's data and min is the minimum value of that level's data.
In a second aspect, the present invention further provides a true color cloud picture generating apparatus, including:
the data acquisition unit is used for acquiring infrared cloud picture data and numerical mode product data of a target area at the same time;
the data preprocessing unit is used for carrying out geographic positioning, radiometric calibration and data normalization processing on the infrared cloud picture data and carrying out normalization processing on the numerical mode product data;
The cloud picture generation unit is used for inputting the infrared cloud picture data after the normalization processing and the numerical mode product data after the normalization processing into a true color cloud picture generation model so as to obtain a true color cloud picture which is output by the true color cloud picture generation model and corresponds to the time;
the true color cloud picture generation model is generated by training an improved Pix2Pix network by utilizing a pre-constructed data training set.
In a third aspect, the present invention provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement any one of the true color cloud picture generation methods.
In a fourth aspect, the present invention also provides a non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the true color cloud picture generation method.
The method, device, electronic equipment and storage medium for generating the true color cloud picture provided by the invention, through reasonable model structure design, provide a true color cloud picture generation model based on an improved Pix2Pix network that can generate true color cloud pictures of any size without changing the overall structure of the model, solving the problem of missing visible-light channel data at night. Experimental verification and quantitative evaluation show that its MAE (mean absolute error) and RMSE (root mean square error) are superior to those of existing simulated cloud picture generation methods.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the following briefly introduces the drawings needed for the embodiments or the prior art descriptions, and obviously, the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic flow chart of the true color cloud picture generation method provided by the present invention;
FIG. 2 is a schematic structural diagram of a true color cloud picture generation model based on the original Pix2Pix network;
FIG. 3 is a schematic structural diagram of the true color cloud picture generation model based on the improved Pix2Pix network provided by the present invention;
FIG. 4 is a schematic flow chart of the all-weather typhoon true color cloud picture generation method provided by the present invention;
FIG. 5 is a schematic structural diagram of the true color cloud picture generation apparatus provided by the present invention;
FIG. 6 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that in the description of the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. The terms "upper", "lower", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application may be practiced in sequences other than those illustrated or described herein, and that the terms "first," "second," and the like are generally used herein in a generic sense and do not limit the number of terms, e.g., the first term can be one or more than one.
In the field of simulated generation of nighttime visible-light true color cloud pictures (also called true color cloud pictures), researchers have made certain progress. The main idea is to establish a mapping relation using meteorological products that can still be detected at night, such as satellite infrared cloud pictures of the same time and the same region, to generate the corresponding true color cloud picture. This approach does not require the model to extract time-dimension information and can realize simulated generation of the true color cloud picture at any time of night.
In 2021, Cheng Wencong et al. proposed a satellite cloud picture simulation generation method based on a generative adversarial network (GAN) and numerical mode products. The method realizes simulation of the FY-4A satellite infrared 12-channel cloud picture and of the daytime and nighttime fused cloud picture of visible-light channels 1, 2 and 3.
In the same year, Cheng Wencong et al. further proposed a GAN-based method for generating nighttime meteorological satellite true color simulated cloud pictures, in which satellite infrared channel cloud picture data and numerical mode products (ERA5 data) of the same time and same region are input together into a GAN model to generate the nighttime visible-light simulated cloud picture.
However, the discriminator in the existing GAN model is a typical convolutional classification network that outputs a scalar representing the global real/fake judgment of the image, and the convolution-layer configuration of the discriminator depends on the overall size of the image. As a result, the method cannot conveniently switch the size of the simulated cloud picture: once the size changes, the network structure of the discriminator must be redesigned, so the model cannot be flexibly applied to generating simulated cloud pictures for an arbitrary geographic range and has no expansion capability.
In order to overcome the defects of the existing method completely or partially, the invention improves the prior art and provides a novel true color cloud picture generation method.
The following describes a true color cloud picture generation method, a true color cloud picture generation device, an electronic device, and a storage medium according to embodiments of the present invention with reference to fig. 1 to 6.
Fig. 1 is a schematic flow chart of a true cloud image generation method provided by the present invention, as shown in fig. 1, including but not limited to the following steps:
step 101: and acquiring infrared cloud picture data and numerical mode product data of the target area at the same time.
As an alternative embodiment, the infrared cloud image data mainly includes full disk data of the FY-4A satellite AGRI, and specifically may be 4km resolution full disk data of the FY-4A satellite AGRI, and especially infrared cloud image data of 7-14 channels therein. The numerical model product data mainly comprises ERA5 data.
Specifically, AGRI is one of the main payloads of the FY-4A satellite, and the invention mainly utilizes its 4 km resolution full-disk cloud picture data.
AGRI has 14 detection bands, of which channels 1, 2 and 3 are visible-light channels, so the true color cloud picture can be detected only in the daytime; when the data training set is constructed in advance, these can be used as the label data of the corresponding time. Channels 7 to 14 are medium-wave and long-wave infrared channels whose infrared cloud picture data can be detected day and night; when the data training set is constructed in advance, these can be used as part of the sample data of the corresponding time.
Specifically, when constructing the data training set, the infrared cloud image data of any one of the 7 to 14 channels acquired at a certain time and the numerical mode product data of the time may be used together as one of sample data, and the true color cloud images of the 1, 2, and 3 channels acquired at the time are used as labels to generate a training sample.
By adopting the mode, a plurality of training samples can be generated by collecting the data in a certain research area at different times, and a data training set is constructed.
The full disc data details of the AGRI used are shown in table 1:
TABLE 1 AGRI data List
[Table 1 appears only as an image in the original publication and is not reproduced here.]
Further, the ERA5 data used in the present invention are the fifth-generation global atmospheric reanalysis data (ECMWF Reanalysis v5, ERA5) of the European Centre for Medium-Range Weather Forecasts (ECMWF), currently the most widely used numerical weather prediction (NWP) product.
As an optional embodiment, the invention comprehensively considers the variable types and levels of the ERA5 data required for true color cloud picture simulation, and selects the ERA5 data shown in table 2, together with the infrared cloud picture data of the same time, as the input data of the true color cloud picture generation model:
TABLE 2 ERA5 data List
[Table 2 appears only as an image in the original publication and is not reproduced here.]
It should be noted that, when generating the true color cloud picture, the infrared cloud picture data and numerical mode product data of any time of day or night can be analyzed to obtain the true color cloud picture of each time, and finally the true color cloud pictures of all times are spliced in time order to obtain an all-weather true color cloud picture.
Step 102: and carrying out geographic positioning, radiometric calibration and data normalization processing on the infrared cloud picture data, and carrying out normalization processing on the numerical mode product data.
As an alternative embodiment, the geo-locating of the infrared cloud image data mainly includes: and projecting the infrared cloud picture data to the equal-longitude-latitude grid so as to extract corresponding data from the equal-longitude-latitude grid according to the longitude and latitude information of the area corresponding to the infrared cloud picture data.
Optionally, the radiometric calibration of the infrared cloud image data includes: taking a digital quantization value (DN value) of each pixel in the infrared cloud picture of each channel in the infrared cloud picture data as an index; and extracting the radiant brightness temperature corresponding to the index of each pixel from the calibration table so as to assign the radiant brightness temperature to each pixel.
Optionally, the calculation formula of the data normalization is:
x* = (x - min) / (max - min)
wherein x is the data value before normalization and x* is the normalized data value; for the visible-light cloud picture data of channels 1 to 3, max is 1 and min is 0; for the infrared cloud picture data of channels 7 to 14, max is 350 and min is 0; for any level of the ERA5 data, max is the maximum value of that level's data and min is the minimum value of that level's data.
Before the infrared cloud picture data and the numerical mode product data are recognized by the true color cloud picture generation model, the input data need to be preprocessed according to the recognition requirements of the model, mainly comprising the following steps:
(1) Geographic positioning
The full-disk data can be projected onto an equal latitude-longitude grid using the conversion formula between row/column numbers and latitude/longitude provided on the official website of the National Satellite Meteorological Center. Corresponding data are then extracted from the latitude-longitude grid according to the latitude-longitude range of the target area, realizing the geographic positioning of the cloud picture data of each AGRI channel.
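Once the data sit on an equal latitude-longitude grid, extracting the target area reduces to index arithmetic (the row/column-to-latitude/longitude conversion formula itself is published by the National Satellite Meteorological Center and is not reproduced here). The function below is a hypothetical sketch, assuming latitude decreases down the rows:

```python
import numpy as np

def extract_region(grid, lat0, lon0, dlat, dlon, lat_range, lon_range):
    """Cut the target area out of an equal latitude-longitude grid.

    grid       : 2D array indexed [row, col]; grid[0, 0] lies at (lat0, lon0)
                 and latitude decreases down the rows
    dlat, dlon : grid spacing in degrees
    lat_range  : (lat_max, lat_min) of the target area
    lon_range  : (lon_min, lon_max) of the target area
    """
    lat_max, lat_min = lat_range
    lon_min, lon_max = lon_range
    r0 = int(round((lat0 - lat_max) / dlat))
    r1 = int(round((lat0 - lat_min) / dlat))
    c0 = int(round((lon_min - lon0) / dlon))
    c1 = int(round((lon_max - lon0) / dlon))
    return grid[r0:r1, c0:c1]

# Example: a 1-degree global grid starting at 90N, 0E; cut out 10-30N, 100-120E.
grid = np.arange(181 * 360, dtype=float).reshape(181, 360)
window = extract_region(grid, 90.0, 0.0, 1.0, 1.0, (30.0, 10.0), (100.0, 120.0))
```

The grid origin, spacing and row ordering here are illustrative; the actual AGRI projection parameters come from the product documentation.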
(2) Radiometric calibration
Taking the digital quantization value (DN value) in the cloud picture data of each AGRI channel (NOMChannelxx, where xx is the channel name) as an index, the corresponding radiance brightness temperature or reflectivity at the index position is extracted from the calibration table (CALChannelxx, where xx is the channel name) and assigned to the corresponding position of the cloud picture data.
It should be noted that, in the present invention, the visible light 1, 2, 3 channel data in the AGRI cloud map data is calibrated as the reflectivity, and the infrared 7-14 channel data is calibrated as the radiant brightness temperature.
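The DN-as-index lookup described above can be sketched with plain array indexing. The table contents and fill-value handling below are illustrative assumptions, not taken from the actual AGRI calibration files:

```python
import numpy as np

def calibrate(dn, cal_table):
    """Map each pixel's DN value to a physical value (brightness
    temperature or reflectivity) by indexing into the channel's
    calibration table; DNs beyond the table are treated as fill
    values and mapped to NaN."""
    dn = np.asarray(dn)
    table = np.asarray(cal_table, dtype=float)
    out = np.full(dn.shape, np.nan)
    valid = dn < table.size
    out[valid] = table[dn[valid]]
    return out

# Hypothetical 12-bit calibration table: DN 0..4095 -> 150..350 K.
cal = np.linspace(150.0, 350.0, 4096)
bt = calibrate(np.array([[0, 4095], [100, 60000]]), cal)
```

Vectorized indexing applies the whole table lookup to the image at once, which is why calibration of a full-disk scene stays cheap.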
(3) Visible light channel reflectivity correction
For a fixed geographic area on the earth, there is a certain brightness difference at different times because the Earth-Sun distance and the solar zenith angle differ. This interferes with the training of the true color cloud picture generation model, blurs the time concept and physical meaning of the simulated cloud picture, and produces brightness changes that should not exist at night.
In order to eliminate the brightness difference of the label data, i.e. the true color cloud picture, at different times during model training, the calibrated reflectivity of channels 1, 2 and 3 of the AGRI cloud picture data is corrected to the apparent reflectivity, calculated as follows:
ρ = ref × D_ES^2 / cos(u)

where ref is the reflectivity of channels 1, 2 and 3 after radiometric calibration, ρ is the corresponding apparent reflectivity, D_ES is the Sun-Earth distance, and u is the solar zenith angle.
After the reflectivity correction, the brightness of each pixel in both the label true color cloud picture and the simulated true color cloud picture is equivalent to the brightness under direct sunlight at noon, so the images are brighter, have higher contrast and carry richer information.
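The correction can be sketched in a few lines, assuming the formula reconstructed above (ρ = ref × D_ES² / cos u, with the distance in astronomical units and the angle in degrees — units are an assumption, not stated in the text):

```python
import numpy as np

def apparent_reflectance(ref, d_es, sun_zenith_deg):
    """Correct calibrated reflectance to apparent reflectance:
    rho = ref * d_es**2 / cos(u), with d_es the Sun-Earth distance (AU,
    assumed) and u the solar zenith angle in degrees (assumed)."""
    u = np.deg2rad(sun_zenith_deg)
    return ref * d_es ** 2 / np.cos(u)

# At a 60-degree solar zenith angle and mean Sun-Earth distance (1 AU),
# the correction doubles the reflectance, since cos(60 deg) = 0.5:
rho = apparent_reflectance(0.4, 1.0, 60.0)
```

Because the correction divides by cos(u), it is only meaningful for daytime pixels (u well below 90 degrees), which is consistent with the daytime screening described later.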
It should be noted that the present invention performs visible light channel reflectivity correction only for the true color cloud pictures used as training labels of the improved Pix2Pix network.
(4) Data normalization
The data used for generating the true color cloud picture mainly comprise the AGRI cloud picture data and the ERA5 data. These data contain various physical quantities with different value ranges. To prevent the range differences between the data from affecting the training of the improved Pix2Pix network, the data preprocessing stage of the present invention normalizes the AGRI cloud picture data and the ERA5 data as follows:
x* = (x - min) / (max - min)

where x is the data value before normalization and x* is the normalized data value. For the visible light cloud picture data of channels 1 to 3 in the AGRI cloud picture data, max is 1 and min is 0; for the infrared cloud picture data of channels 7 to 14, max is 350 and min is 0; for any level of the ERA5 data, max and min are the maximum and minimum values of that level. After normalization, all data are mapped into the range [0, 1].
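The per-source ranges above can be applied with one small helper (the ERA5 level values below are toy numbers for illustration):

```python
import numpy as np

def min_max_normalize(x, vmin, vmax):
    """Map x into [0, 1] via x* = (x - min) / (max - min)."""
    return (np.asarray(x, dtype=float) - vmin) / (vmax - vmin)

vis = min_max_normalize([0.2, 0.8], 0.0, 1.0)        # channels 1-3: min=0, max=1
ir = min_max_normalize([280.0, 350.0], 0.0, 350.0)   # channels 7-14: min=0, max=350
level = np.array([990.0, 1000.0, 1010.0])            # one toy ERA5 level
era5 = min_max_normalize(level, level.min(), level.max())
```

Note that the visible and infrared channels use fixed physical ranges, while each ERA5 level is normalized by its own minimum and maximum.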
As an optional embodiment, in the process of constructing the data training set, after the AGRI cloud picture data and ERA5 data of all training samples are normalized, the normalized data may be further cleaned to screen out the data usable for model training. There are two main screening conditions: first, the sample data must not contain missing values or invalid fill values, to avoid interference of incomplete data with model training; second, the sample data must be daytime data for the geographic area where they are located.
The method for screening the daytime data comprises the following steps:
the time zone is determined from the longitude of the sampling area covered by the AGRI cloud picture data and the ERA5 data, and the hourly sample data between 10:00 and 15:00 local time of that time zone are extracted.
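A minimal sketch of this screening step, assuming the nominal time zone is derived as round(longitude / 15) and that the 10:00-15:00 boundaries are inclusive (both assumptions, since the text does not specify them):

```python
from datetime import datetime, timedelta

def is_daytime_sample(utc_time, center_lon, start_hour=10, end_hour=15):
    """Keep hourly samples whose local time (UTC + nominal time-zone offset
    derived from longitude) falls within [start_hour, end_hour]."""
    tz_offset = round(center_lon / 15.0)
    local = utc_time + timedelta(hours=tz_offset)
    return start_hour <= local.hour <= end_hour

samples = [datetime(2021, 7, 20, h) for h in range(24)]      # hourly UTC times
day = [t for t in samples if is_daytime_sample(t, 120.0)]    # area near 120E (UTC+8)
```

For a region near 120°E this keeps the six UTC hours 02:00-07:00, i.e. local 10:00-15:00.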
According to the method for generating the true color cloud picture provided by the present invention, the visible light channel reflectivity of the label data (referred to as the label true color cloud picture) is corrected in the data preprocessing stage, and the daytime period from which the training data set is extracted is shortened, so that the brightness differences of the label true color cloud pictures at different moments are eliminated. As a result, the brightness of each pixel in both the model output and the label true color cloud picture is equivalent to the brightness under direct sunlight at noon, the time concept of the cloud picture is unified, and brightness changes that should not exist are prevented from appearing in the simulated night-time cloud picture. In particular, the output night-time true color cloud picture is bright, has high contrast and rich information, and shows complete land information with clear land margins.
Step 103: and inputting the infrared cloud picture data after the normalization processing and the numerical mode product data after the normalization processing into a true color cloud picture generation model so as to obtain a true color cloud picture of a corresponding time output by the true color cloud picture generation model.
The true color cloud picture generation model is generated by training an improved Pix2Pix network by utilizing a pre-constructed data training set.
The original Pix2Pix network, an important variant of the GAN, is an Image-to-Image Translation (picture translation) network, similar to the CycleGAN network.
Fig. 2 is a schematic structural diagram of a true color cloud picture generation model based on the original Pix2Pix network. When simulated cloud picture generation is performed with the original Pix2Pix network, the following defects mainly exist:
first, the ERA5 data has a lower resolution than the cloud data, and the ERA5 data and the infrared cloud data cannot be directly used together as the input of the original Pix2Pix network.
Secondly, the existing Pix2Pix network extracts and learns the channel features and the spatial features of the input data indiscriminately, and when the number of input data channels is increased or the spatial size is increased, the model cannot be ensured to accurately extract the important features to inhibit redundant features, which is not beneficial to model convergence.
In addition, although the discriminator of the conventional Pix2Pix network can perform true and false discrimination on local blocks of an image, the precision is still insufficient for the details of a cloud image, and the true and false discrimination on the generated image pixel by pixel cannot be realized.
In order to overcome the defects of the prior art, the true color cloud picture generation method provided by the invention improves the original Pix2Pix network, obtains a true color cloud picture generation model after training the improved Pix2Pix network, performs feature extraction on input preprocessed infrared cloud picture data and numerical mode product data by using the true color cloud picture generation model, and outputs a corresponding true color cloud picture.
Fig. 3 is a schematic structural diagram of a true cloud image generation model based on an improved Pix2Pix network provided in the present invention, and as an alternative embodiment, the improved Pix2Pix network provided in the present invention specifically includes:
adding an up-sampling module aiming at the numerical mode product data at the input ends of a generator and a discriminator of the original Pix2Pix network;
setting the structure of the discriminator into a U-Net structure comprising a lower sampling module and an upper sampling module;
adding a convolution attention module in the generator;
all convolutional layers of the modified Pix2Pix network are set as deep separable convolutional layers.
The method for generating the true color cloud picture provided by the invention adopts the Pix2Pix network as the basic framework, and obtains the true color cloud picture generation model by improving and pre-training the original Pix2Pix network. During pre-training, the sample data input to the true color cloud picture generation model may be the infrared cloud pictures of AGRI channels 7 to 14 and the 75-channel ERA5 data at the same moment, and the label is the visible light cloud picture (namely, the true color cloud picture) corresponding to AGRI channels 1 to 3 at that moment.
Specifically, the present invention mainly includes, but is not limited to, the following 4 aspects for improving the original Pix2Pix network:
(1) Introducing an up-sampling module aiming at ERA5 data at the input ends of the generator and the discriminator;
(2) Referring to the structure of U-Net GAN, the discriminator is improved to a U-Net structure, improving both the overall and the detailed generation effect of the simulated cloud picture;
(3) A convolution Attention Module (CBAM) is introduced into the generator, so that the model can capture important channel and space characteristics;
(4) All traditional convolution layers in the model are replaced by depthwise separable convolution layers (DSC), so as to reduce the number of model parameters, improve the training speed and prevent overfitting.
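The parameter saving of point (4) follows directly from counting weights: a standard k×k convolution has k·k·C_in·C_out parameters, while a depthwise separable one has k·k·C_in + C_in·C_out. A short sketch (bias terms omitted; the 64→128 layer is an arbitrary example, not a layer of the actual model):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution layer (bias omitted)."""
    return k * k * c_in * c_out

def dsc_params(c_in, c_out, k):
    """Depthwise separable convolution: a k x k depthwise stage (k*k*c_in)
    followed by a 1 x 1 pointwise stage (c_in*c_out)."""
    return k * k * c_in + c_in * c_out

std = conv_params(64, 128, 3)   # 73728 parameters
dsc = dsc_params(64, 128, 3)    # 8768 parameters, roughly 8.4x fewer
```

The ratio grows with the number of output channels, which is why the replacement noticeably shrinks the model and speeds up training.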
One of the benefits of the original Pix2Pix network using PatchGAN as the discriminator is that the model can handle images of arbitrary size. This is because, unlike the conventional GAN model's arbiter that outputs a scalar representing the global true and false information of the image, the output of the PatchGAN is a matrix in which each element represents the local true and false information of each small region in the image, regardless of the size of the image as a whole. This gives the Pix2Pix network great flexibility, with sufficient computational resources, to enable image generation of arbitrary size.
The present invention uses the Pix2Pix network as the basic model and improves on the original Pix2Pix network. After the improvement, both the U-Net-structured discriminator and the CBAM-equipped generator can operate on cloud pictures of any size. When generating true color cloud pictures of different sizes, the overall structure of the model does not need to change; the only module that must be adjusted according to the actual size of the true color cloud picture is the upsampling module for the ERA5 data (adjusted according to the relationship between the size of the input ERA5 data and the size of the infrared cloud picture and output true color cloud picture).
The present invention adjusts the upsampling module of the ERA5 data to meet the size requirement of the output true color cloud picture as follows:
because the ERA5 data upsampling module consists of several transposed convolution layers, the convolution kernels and the number of layers in the module need to be designed according to the relationship between the output and input sizes of a transposed convolution, given by the following formula:
H_out = (H_in - 1) × stride - 2 × padding + dilation × (kernel_size - 1) + output_padding + 1;
where H_in is the size of the ERA5 data before upsampling; H_out is the size of the cloud picture data, namely the target size the ERA5 data are expected to reach after the upsampling operation; and stride, padding, dilation, kernel_size and output_padding are the transposed convolution parameters.
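The formula can be checked directly. For instance, stacking two transposed convolutions with stride 2, kernel 4 and padding 1 doubles the size at each layer; the 64→256 sizes below are illustrative values, not the actual ERA5 or AGRI grid sizes:

```python
def convtranspose_out(h_in, stride=1, padding=0, dilation=1, kernel_size=3,
                      output_padding=0):
    """Output size of one transposed-convolution layer, per the formula
    H_out = (H_in - 1)*stride - 2*padding + dilation*(kernel_size - 1)
            + output_padding + 1."""
    return ((h_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# Upsampling a 64-point ERA5 grid to a 256-pixel cloud picture in two layers:
h1 = convtranspose_out(64, stride=2, kernel_size=4, padding=1)   # 128
h2 = convtranspose_out(h1, stride=2, kernel_size=4, padding=1)   # 256
```

Choosing the number of such layers and their parameters so that the final H_out matches the cloud picture size is exactly the adjustment the module requires when the target size changes.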
Replacing the ERA5 upsampling modules at the front ends of the generator and the discriminator in the model completes the construction of an improved Pix2Pix network capable of generating cloud pictures of other sizes.
In summary, the improved Pix2Pix network provided by the present invention can be applied to true cloud picture generation of any size.
Fig. 4 is a schematic flow chart of the method for generating an all-weather typhoon true color cloud picture provided by the present invention. As shown in Fig. 4, the method for generating a true color cloud picture provided by the present invention can be applied to the simulated generation of all-weather satellite true color cloud pictures. An ERA5 upsampling module that participates in model training, a U-Net-structured discriminator, a convolutional attention mechanism and depthwise separable convolutions are fused into the original Pix2Pix network; the improved Pix2Pix network is obtained through reasonable model structure design, and the true color cloud picture generation model is then obtained by fully pre-training the improved network. The constructed true color cloud picture generation model realizes the simulated generation of all-day true color cloud pictures and solves the problem that AGRI visible light channel data are missing at night. Experimental verification and quantitative evaluation show that the MAE and RMSE of the simulated true color cloud picture are superior to those of current mainstream simulation methods.
As shown in fig. 4, to obtain the true cloud image generation model, the method for generating the all-weather typhoon true cloud image according to the present invention first needs to construct a data training set, and pre-trains the improved Pix2Pix network for the constructed data training set, including:
firstly, selecting a target area to be subjected to all-weather typhoon true color cloud picture monitoring according to a concerned geographical area range;
sample data are then collected, including 4 km resolution full disk data of the FY-4A satellite AGRI and the fifth-generation reanalysis product ERA5 data from the European Centre for Medium-Range Weather Forecasts.
Further, by using the method provided by the above embodiment, the collected data is preprocessed in sequence, including geographic positioning, radiometric calibration, visible light channel reflectivity correction of the AGRI cloud map data, and normalization operations of the AGRI cloud map data and the ERA5 data.
Because the true color cloud picture generation model is pre-trained in a supervised manner, true color cloud pictures are required as training truth values (labels). Since true color cloud pictures exist only in the daytime and not at night, the data training set can only be constructed from daytime data. Therefore, data cleaning also needs to be performed on the normalized AGRI cloud picture data and ERA5 data, mainly deleting the data that contain missing values or invalid fill values and screening the daytime data (such as hourly data between 10:00 and 15:00 local time). The improved Pix2Pix network is then iteratively trained with the training samples until model training converges (namely, the mean absolute error determined on the validation set is smaller than a preset threshold).
Because the improved Pix2Pix network comprises a generator and a discriminator, pre-training the model comprises iteratively performing the following steps with each training sample until the mean absolute error determined on the validation set is smaller than a preset threshold:
fixing the parameters of the generator, and updating the parameters of the discriminator through back propagation of the discriminator loss function, according to the label input to the discriminator and the true color cloud picture generated by the generator from the sample data corresponding to that label;
and fixing the parameters of the discriminator, and updating the parameters of the generator by using a generator loss function in a back propagation mode according to the sample data input by the generator and the label corresponding to the sample data.
Specifically, an Adam optimizer is adopted during model pre-training, the parameters of the generator and the discriminator are updated alternately, the learning rate is set to 0.0002, and the momentum parameters are β_1 = 0.5 and β_2 = 0.999.
It needs to be emphasized that when the parameters of the discriminator are updated, the parameters of the generator are fixed, and the discriminator loss function is used to calculate the gradient for back propagation; when the parameters of the generator are updated, the parameters of the discriminator are fixed, and the generator loss function is used to calculate the gradient for back propagation.
The data batch size input at each parameter update is set to 16. During training, the mean absolute error (MAE) on the validation set is used as the indicator, and the model from the training round with the smallest MAE on the validation set is saved as the final true color cloud picture generation model.
And finally, acquiring continuous time infrared cloud picture data and ERA5 data in a typhoon period in a target area, preprocessing the infrared cloud picture data and the ERA5 data, and inputting the preprocessed data to the trained true color cloud picture generation model, thereby realizing all-weather true color cloud picture monitoring of the typhoon in any size of geographic range.
The invention combines the loss function of the original Pix2Pix network with the loss function of the U-Net GAN network, and is used for gradient calculation and parameter updating.
Based on the content of the above embodiment, as an optional embodiment, the discriminator loss function of the improved Pix2Pix network adopted in the true color cloud image generation method provided by the present invention is determined based on the loss function of the discriminator of the Pix2Pix network before improvement and the loss function of the U-Net structure; the generator loss function is determined based on the loss function of the generator of the Pix2Pix network before improvement and the U-Net structure discriminator.
Specifically, the loss function of the original Pix2Pix network is combined with the loss function of the U-Net GAN network. The U-Net-structured discriminator is denoted D^U, its encoder part D^U_enc and its decoder part D^U_dec. The discriminator loss function is the sum of the encoder and decoder loss functions:

L(D^U) = L(D^U_enc) + L(D^U_dec)

where the loss functions of the encoder and the decoder are, respectively:

L(D^U_enc) = -E_(x,y)[log D^U_enc(x, y)] - E_x[log(1 - D^U_enc(x, G(x)))]

L(D^U_dec) = -E_(x,y)[Σ_(i,j) log [D^U_dec(x, y)]_(i,j)] - E_x[Σ_(i,j) log(1 - [D^U_dec(x, G(x))]_(i,j))]

The loss function of the generator is:

L(G) = -E_x[log D^U_enc(x, G(x)) + Σ_(i,j) log [D^U_dec(x, G(x))]_(i,j)] + λ · E_(x,y)[||y - G(x)||_1]

where [D^U_dec(·)]_(i,j) denotes the true/false judgment of the discriminator decoder for each pixel (i, j) of the input image, and ||y - G(x)||_1 is the L1 distance between the true color cloud picture generated by the generator and the corresponding label true color cloud picture.
Optionally, λ in the above formula may be set to 100.
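A minimal numerical sketch of the combined generator and discriminator loss terms described above, assuming the discriminator's encoder and decoder outputs are already probabilities in (0, 1) (the small epsilon and the toy 2×2 arrays are assumptions for illustration):

```python
import numpy as np

def bce_real(p):
    """-E[log D(.)] term, averaged over a scalar encoder output or a
    per-pixel decoder map."""
    return float(np.mean(-np.log(np.asarray(p) + 1e-12)))

def bce_fake(p):
    """-E[log(1 - D(.))] term for generated samples."""
    return float(np.mean(-np.log(1.0 - np.asarray(p) + 1e-12)))

def discriminator_loss(enc_real, enc_fake, dec_real, dec_fake):
    """Sum of the encoder (global) and decoder (per-pixel) losses."""
    return (bce_real(enc_real) + bce_fake(enc_fake)
            + bce_real(dec_real) + bce_fake(dec_fake))

def generator_loss(enc_fake, dec_fake, fake_img, label_img, lam=100.0):
    """Adversarial terms on the generated sample plus lam * L1 to the label."""
    l1 = float(np.mean(np.abs(np.asarray(fake_img) - np.asarray(label_img))))
    return bce_real(enc_fake) + bce_real(dec_fake) + lam * l1
```

With λ = 100 the L1 term dominates, which pushes the generator toward pixel-accurate reconstructions while the adversarial terms sharpen global and per-pixel realism.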
According to the method for generating the true color cloud picture provided by the present invention, the true color cloud picture generation model is applied to all-weather typhoon monitoring, realizing continuous all-day observation of typhoons and demonstrating the application value of simulated true color cloud picture generation in real-time monitoring of weather systems, weather forecasting, and night-time disaster prevention and reduction. Because the true color cloud picture generation model is constructed on the improved Pix2Pix network, it is suitable for cloud picture generation of any size: simulated true color cloud pictures of larger areas can be generated without changing the overall structure of the model, all-weather true color cloud picture monitoring can be performed for typhoon weather systems moving over a large range or for interactions among multiple typhoon systems, and an approach is provided for observing full-disk visible light remote sensing images in real time, day and night.
Observation shows that the cloud picture simulation effect of the improved true color cloud picture generation model is greatly improved over that of the original Pix2Pix model: the cloud detail distortion of the daytime simulated cloud picture is resolved, the land area information of the night-time simulated cloud picture is complete, its contrast with the sea area is improved, and the cloud details are richer. The quantitative comparison is shown in Table 3:
TABLE 3 Quantitative comparison

Index                                     | MSE    | RMSE   | MAE    | PSNR (dB)
True color cloud picture generation model | 0.005  | 0.0683 | 0.0454 | 23.6677
Original Pix2Pix                          | 0.0072 | 0.0768 | 0.0516 | 22.9864
Compared with the cloud picture generation model based on the original Pix2Pix network, the model provided by the present invention improves markedly on every index: MSE, RMSE and MAE are reduced by 30.56%, 11.07% and 12.02% respectively, and PSNR is improved by 0.6813 dB.
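The four indices in Table 3 are standard image-quality metrics and can be computed as follows (the `data_range=1.0` assumption matches the [0, 1] normalization used in preprocessing; the toy arrays are for illustration only):

```python
import numpy as np

def image_metrics(pred, target, data_range=1.0):
    """MSE, RMSE, MAE and PSNR for images scaled to [0, data_range]."""
    err = np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)
    mse = float(np.mean(err ** 2))
    return {
        "MSE": mse,
        "RMSE": mse ** 0.5,
        "MAE": float(np.mean(np.abs(err))),
        "PSNR": float(10.0 * np.log10(data_range ** 2 / mse))
                if mse > 0 else float("inf"),
    }

m = image_metrics([0.5, 0.5], [0.4, 0.6])
```

PSNR grows as MSE shrinks, so the MSE reduction and PSNR gain in Table 3 are two views of the same improvement.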
These experimental results fully verify that the improved true color cloud picture generation model performs better than the original Pix2Pix network in simulated true color cloud picture generation.
Fig. 5 is a schematic structural diagram of the true color cloud image generation apparatus provided in the present invention, as shown in fig. 5, the apparatus mainly includes a data acquisition unit 51, a data preprocessing unit 52, and a cloud image generation unit 53, wherein:
the data acquisition unit 51 is mainly used for acquiring infrared cloud picture data and numerical mode product data of a target area at the same time;
The data preprocessing unit 52 is mainly used for performing geographic positioning, radiometric calibration and data normalization processing on the infrared cloud image data, and performing normalization processing on the numerical mode product data;
the cloud picture generation unit 53 is mainly configured to input the infrared cloud picture data after the normalization processing and the numerical mode product data after the normalization processing into a true color cloud picture generation model, so as to obtain a true color cloud picture output by the true color cloud picture generation model at a corresponding time.
The true color cloud picture generation model is generated after an improved Pix2Pix network is trained by utilizing a pre-constructed data training set.
It should be noted that, in operation, the true color cloud picture generation apparatus provided in the embodiment of the present invention may execute the true color cloud picture generation method described in any of the foregoing embodiments, which is not detailed again in this embodiment.
The true color cloud picture generation method provided by the present invention proposes, through reasonable model structure design, a true color cloud picture generation model based on an improved Pix2Pix network. It is suitable for generating true color cloud pictures of any size without changing the overall structure of the model, and solves the problem of missing visible light channel data at night. Experimental verification and quantitative evaluation show that its MAE and RMSE are superior to those of existing simulated cloud picture generation methods.
Fig. 6 is a schematic structural diagram of an electronic device provided in the present invention, and as shown in fig. 6, the electronic device may include: a processor (processor) 610, a communication Interface (Communications Interface) 620, a memory (memory) 630 and a communication bus 640, wherein the processor 610, the communication Interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform a true cloud generation method comprising: acquiring infrared cloud picture data and numerical mode product data of a target area at the same time; performing geographic positioning, radiometric calibration and data normalization processing on the infrared cloud chart data, and performing normalization processing on the numerical mode product data; inputting the infrared cloud picture data after normalization and the numerical mode product data after normalization into a true color cloud picture generation model to obtain a true color cloud picture of a corresponding time output by the true color cloud picture generation model; the true color cloud picture generation model is generated by training an improved Pix2Pix network by utilizing a pre-constructed data training set.
In addition, the logic instructions in the memory 630 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, when the program instructions are executed by a computer, the computer being capable of executing the true cloud picture generation method provided by the above methods, the method including: acquiring infrared cloud picture data and numerical mode product data of a target area at the same time; performing geographic positioning, radiometric calibration and data normalization processing on the infrared cloud chart data, and performing normalization processing on the numerical mode product data; inputting the infrared cloud picture data after normalization and the numerical mode product data after normalization into a true color cloud picture generation model to obtain a true color cloud picture of a corresponding time output by the true color cloud picture generation model; the true color cloud picture generation model is generated by training an improved Pix2Pix network by utilizing a pre-constructed data training set.
In still another aspect, the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented by a processor to perform the method for generating a true color cloud provided by the foregoing embodiments, where the method includes: acquiring infrared cloud picture data and numerical mode product data of a target area at the same time; performing geographic positioning, radiometric calibration and data normalization processing on the infrared cloud chart data, and performing normalization processing on the numerical mode product data; inputting the infrared cloud picture data after normalization and the numerical mode product data after normalization into a true color cloud picture generation model to obtain a true color cloud picture of a corresponding time output by the true color cloud picture generation model; the true color cloud picture generation model is generated by training an improved Pix2Pix network by utilizing a pre-constructed data training set.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A true color cloud picture generation method is characterized by comprising the following steps:
acquiring infrared cloud picture data and numerical mode product data of a target area at the same time;
performing geographic positioning, radiometric calibration and data normalization processing on the infrared cloud chart data, and performing normalization processing on the numerical mode product data;
inputting the processed infrared cloud picture data and the processed numerical mode product data into a true color cloud picture generation model to obtain a true color cloud picture of a corresponding time output by the true color cloud picture generation model;
the true color cloud picture generation model is generated by training an improved Pix2Pix network by utilizing a pre-constructed data training set.
2. The method for generating a true color cloud according to claim 1, wherein the improved Pix2Pix network specifically comprises:
adding an up-sampling module aiming at the numerical value mode product data at the input ends of a generator and a discriminator of an original Pix2Pix network, wherein the up-sampling module is composed of a plurality of transposed convolution layers;
setting the structure of the discriminator into a U-Net structure comprising a down-sampling module and an up-sampling module;
adding a convolution attention module in the generator;
all convolutional layers of the modified Pix2Pix network are set as deep separable convolutional layers.
3. The method for generating a true color cloud picture according to claim 2, wherein the training of the modified Pix2Pix network using the pre-constructed data training set comprises:
obtaining a plurality of training samples; the sample data of each training sample comprises infrared cloud picture data and numerical mode product data of the same time and the same region, and the label of each training sample is a true color cloud picture of the corresponding time and the corresponding region;
sequentially carrying out geographic positioning, radiation calibration, visible light channel reflectivity correction and data normalization processing on the infrared cloud picture data in each sample data and the true cloud pictures in the corresponding labels, and carrying out data normalization processing on the numerical mode product data in each sample data;
Cleaning all training samples, and constructing a data training set by using all cleaned training samples so as to train the improved Pix2Pix network by using the data training set;
performing data cleaning on all training samples, including removing part of training samples with missing values or invalid filling values in all training samples, and screening out all training samples located in a preset daytime period;
and iteratively training the improved Pix2Pix network by using all training samples after data cleaning and the label corresponding to each training sample.
4. The method as claimed in claim 3, wherein training the improved Pix2Pix network with the data training set comprises iteratively performing the following steps with each training sample until the mean absolute error measured on a validation set falls below a predetermined threshold:
fixing the parameters of the generator, and updating the parameters of the discriminator by back-propagating a discriminator loss function, based on the label input to the discriminator and the true color cloud picture generated by the generator from the sample data corresponding to that label;
and fixing the parameters of the discriminator, and updating the parameters of the generator by back-propagating a generator loss function, based on the sample data input to the generator and the label corresponding to that sample data.
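The alternating schedule in claim 4 (fix one network's parameters while updating the other, then swap) can be illustrated on a toy objective. The quadratic losses below are stand-ins for the actual GAN losses, and every value is illustrative; only the fix-one/update-other pattern corresponds to the claim:

```python
# Stand-in losses (not the patent's GAN losses):
#   d_loss(w) = (w - 3)^2 + (w - g)^2   (discriminator stand-in, g held fixed)
#   g_loss(g) = (g - w)^2               (generator stand-in, w held fixed)
w, g = 0.0, 10.0   # "discriminator" and "generator" parameters
lr = 0.1

for _ in range(200):
    # Step 1: fix g, update w with the gradient of d_loss.
    w -= lr * (2 * (w - 3) + 2 * (w - g))
    # Step 2: fix w, update g with the gradient of g_loss.
    g -= lr * 2 * (g - w)

print(round(w, 3), round(g, 3))  # both converge to 3.0
```

With these convex stand-ins the alternation converges to a fixed point; real adversarial training follows the same schedule but, with competing losses, oscillates rather than simply descending.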
5. The method for generating a true color cloud picture according to claim 4, wherein the discriminator loss function is determined from the loss function of the discriminator of the Pix2Pix network before improvement and the loss function of the U-Net structure discriminator; and the generator loss function is determined from the loss function of the generator of the Pix2Pix network before improvement and the loss function of the U-Net structure discriminator.
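A minimal sketch of combining two discriminator terms as claim 5 describes, using binary cross-entropy. The per-patch/per-pixel split and the weight `lam` are assumptions for illustration; the patent does not state the exact combination:

```python
import math

def bce(pred, target):
    """Binary cross-entropy for a single prediction in (0, 1)."""
    eps = 1e-7
    p = min(max(pred, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def combined_d_loss(patch_preds, pixel_preds, is_real, lam=1.0):
    """Sum of an original-Pix2Pix (PatchGAN) term and a U-Net-discriminator
    (per-pixel) term; `lam` is an assumed weighting hyperparameter."""
    t = 1.0 if is_real else 0.0
    patch_term = sum(bce(p, t) for p in patch_preds) / len(patch_preds)
    pixel_term = sum(bce(p, t) for p in pixel_preds) / len(pixel_preds)
    return patch_term + lam * pixel_term
```

The same additive pattern would apply to the generator loss, with the pre-improvement generator loss plus the U-Net discriminator's adversarial term.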
6. The method for generating a true color cloud picture according to any one of claims 1 to 5, wherein the infrared cloud picture data comprises full-disk data from the FY-4A satellite AGRI instrument; and the numerical model product data comprises ERA5 data.
7. The method of claim 6, wherein performing geographic positioning on the cloud picture data comprises: projecting the cloud picture data onto an equal latitude-longitude grid, so that the corresponding data can be extracted from the grid according to the latitude and longitude of the region covered by the cloud picture data;
performing radiometric calibration on the cloud picture data comprises: taking the digital number of each pixel in each channel's cloud picture as an index, and extracting from the calibration look-up table the radiance brightness temperature or reflectivity corresponding to each pixel's index to assign a value to that pixel;
in the case where the cloud picture data comprises visible light cloud picture data of channels 1 to 3 and infrared cloud picture data of channels 7 to 14 of the FY-4A satellite AGRI, performing visible light channel reflectivity correction on the cloud picture data comprises: correcting the radiometrically calibrated reflectivity of the visible light cloud picture data of channels 1 to 3 to apparent reflectivity;
the calculation formula for data normalization is:

x* = (x - min) / (max - min)

wherein x is the data value before normalization and x* is the normalized data value; for the visible light cloud picture data of channels 1 to 3, max is 1 and min is 0; for the infrared cloud picture data of channels 7 to 14, max is 350 and min is 0; for any level of data in the ERA5 data, max is the maximum value of that level's data and min is the minimum value of that level's data.
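The normalization in claim 7 is plain min-max scaling. A short sketch using the channel ranges given above (the input values themselves are illustrative):

```python
def normalize(x, max_val, min_val):
    """Min-max normalization: x* = (x - min) / (max - min)."""
    return (x - min_val) / (max_val - min_val)

# Ranges from claim 7
vis = normalize(0.25, 1.0, 0.0)      # visible channels 1-3: max=1, min=0
ir = normalize(280.0, 350.0, 0.0)    # infrared channels 7-14: max=350, min=0
print(vis, ir)  # 0.25 0.8
```

Fixing max and min per channel type (rather than per image) keeps the scaling consistent across all training samples; only the ERA5 levels are scaled by their own data-dependent extrema.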
8. A true color cloud picture generation device, comprising:
a data acquisition unit configured to acquire infrared cloud picture data and numerical model product data of a target region at the same time;
a data preprocessing unit configured to perform geographic positioning, radiometric calibration and data normalization on the infrared cloud picture data, and to perform normalization on the numerical model product data;
a cloud picture generation unit configured to input the normalized infrared cloud picture data and the normalized numerical model product data into a true color cloud picture generation model, so as to obtain the true color cloud picture output by the model for that time;
wherein the true color cloud picture generation model is generated by training an improved Pix2Pix network with a pre-constructed data training set.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the true color cloud picture generation method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the true color cloud picture generation method according to any one of claims 1 to 7.
CN202210719485.4A 2022-06-23 2022-06-23 True color cloud picture generation method and device, electronic equipment and storage medium Pending CN115222837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210719485.4A CN115222837A (en) 2022-06-23 2022-06-23 True color cloud picture generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210719485.4A CN115222837A (en) 2022-06-23 2022-06-23 True color cloud picture generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115222837A true CN115222837A (en) 2022-10-21

Family

ID=83609939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210719485.4A Pending CN115222837A (en) 2022-06-23 2022-06-23 True color cloud picture generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115222837A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342448A (en) * 2023-03-28 2023-06-27 北京华云星地通科技有限公司 Full-disc visible light fitting method, system, equipment and medium
CN116342448B (en) * 2023-03-28 2023-09-29 北京华云星地通科技有限公司 Full-disc visible light fitting method, system, equipment and medium

Similar Documents

Publication Publication Date Title
Hilburn et al. Development and interpretation of a neural-network-based synthetic radar reflectivity estimator using GOES-R satellite observations
CN109840553B (en) Extraction method and system of cultivated land crop type, storage medium and electronic equipment
CN111383192A (en) SAR-fused visible light remote sensing image defogging method
CN110298211B (en) River network extraction method based on deep learning and high-resolution remote sensing image
CN109815916A (en) A kind of recognition methods of vegetation planting area and system based on convolutional neural networks algorithm
KR20200059085A (en) A Method for Sea Surface Temperature Retrieval using Surface Drifter Temperature Data and Satellite Infrared Images
CN117148360B (en) Lightning approach prediction method and device, electronic equipment and computer storage medium
Yoo et al. Spatial downscaling of MODIS land surface temperature: Recent research trends, challenges, and future directions
CN116778354B (en) Deep learning-based visible light synthetic cloud image marine strong convection cloud cluster identification method
CN116310883B (en) Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment
CN110059745A (en) A kind of Basin Rainfall product correction method based on star merged and system
CN109409014B (en) BP neural network model-based annual illuminable time calculation method
CN111798132B (en) Cultivated land dynamic monitoring method and system based on multi-source time sequence remote sensing depth cooperation
CN116519557B (en) Aerosol optical thickness inversion method
WO2024036739A1 (en) Reservoir water reserve inversion method and apparatus
Oehmcke et al. Creating cloud-free satellite imagery from image time series with deep learning
CN115222837A (en) True color cloud picture generation method and device, electronic equipment and storage medium
CN115601281A (en) Remote sensing image space-time fusion method and system based on deep learning and electronic equipment
Liu et al. Estimating hourly all-weather land surface temperature from FY-4A/AGRI imagery using the surface energy balance theory
CN113705340B (en) Deep learning change detection method based on radar remote sensing data
CN107576399B (en) MODIS forest fire detection-oriented brightness and temperature prediction method and system
Hu et al. Correcting the saturation effect in DMSP/OLS stable nighttime light products based on radiance-calibrated data
Bartkowiak et al. Land surface temperature reconstruction under long-term cloudy-sky conditions at 250 m spatial resolution: case study of Vinschgau/Venosta Valley in the European Alps
CN111273376B (en) Downscaling sea surface net radiation determination method, system, equipment and storage medium
CN115546658B (en) Night cloud detection method combining quality improvement and CNN improvement of data set

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination