CN112669201A - Infrared light-based visible light cloud image conversion method, system and terminal - Google Patents


Info

Publication number
CN112669201A
CN112669201A (application CN202011565575.XA)
Authority
CN
China
Prior art keywords
visible light
model
data
cloud image
satellite
Prior art date
Legal status
Granted
Application number
CN202011565575.XA
Other languages
Chinese (zh)
Other versions
CN112669201B (en)
Inventor
王卓阳
崔传忠
吴家豪
Current Assignee
Zhitian Zhuhai Hengqin Meteorological Technology Co ltd
Original Assignee
Zhitian Zhuhai Hengqin Meteorological Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhitian Zhuhai Hengqin Meteorological Technology Co ltd
Priority to CN202011565575.XA
Publication of CN112669201A
Application granted
Publication of CN112669201B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an infrared light-based visible light cloud image conversion method, system and terminal that can convert infrared light to visible light 24 hours a day at any location worldwide, with high conversion speed and high resolution. The method comprises the following steps. Data acquisition: collect and convert real-time and historical land-use data, satellite infrared data and atmospheric environment data, and collect, resample and convert historical satellite visible light data, where the converted satellite visible light data has a resolution of 500 meters. Model training: import the historical data into a gradient boosting model and reconstruct it to obtain an inversion visible light cloud image model. Inversion: import the acquired real-time data and the output of the reconstructed model into the inversion visible light cloud image model and perform inversion to obtain the visible light cloud image.

Description

Infrared light-based visible light cloud image conversion method, system and terminal
Technical Field
The invention relates to the field of artificial intelligence, and in particular to an infrared light-based visible light cloud image conversion method and system, and a terminal thereof.
Background
The visible light cloud image plays an important role in studying the movement and development of clouds and cloud systems, and in detecting the occurrence and development of typhoons and other weather systems, where it has achieved good results. It offers high resolution, strong contrast, and the ability to directly distinguish cloud layers at any height. However, because it is captured from reflected visible light, its brightness and contrast are limited by the sun angle, and it cannot be captured at night. The infrared cloud image, by contrast, derives its contrast from temperature differences obtained by converting infrared wave intensity, and distinguishes cloud position and height under the assumption that tropospheric temperature decreases with altitude. It does not depend on visible light, so it can be captured 24 hours a day, with brightness and contrast unaffected by the sun angle. At night, however, the land/sea surface temperature may be comparable to, or even lower than, the temperature of low clouds. Forecasters then cannot locate low clouds from the infrared image, which limits nighttime forecasting tasks such as typhoon positioning, fog monitoring and cold-air monitoring.
Disclosure of Invention
The invention aims to provide an infrared light-based visible light cloud image conversion method, system and terminal that can convert infrared light to visible light 24 hours a day at any location worldwide, with high conversion speed and high resolution.
The embodiment of the invention is realized by the following steps:
An infrared light-based visible light cloud image conversion method comprises the following steps:
data acquisition: collect and convert real-time and historical land-use data, satellite infrared data and atmospheric environment data, and collect, resample and convert historical satellite visible light data, where the converted satellite visible light data has a resolution of 500 meters;
model training: import the historical data into a gradient boosting model and reconstruct it to obtain an inversion visible light cloud image model;
inversion: import the acquired real-time data and the output of the reconstructed model into the inversion visible light cloud image model and perform inversion to obtain the visible light cloud image.
In a preferred embodiment of the present invention, the step of reconstructing the gradient boosting model includes: taking the output of the gradient boosting model as the input of a generative adversarial network (GAN) and historical satellite visible light channel data as the output, training to obtain the GAN model, where a pix2pixHD model is built with the PyTorch framework; then taking the GAN output together with historical 500-meter-resolution satellite visible light cloud image data, and training to obtain a super-resolution reconstruction model.
In a preferred embodiment of the present invention, the base model of the gradient boosting model is a tree model, and the base classifier uses the GBDT algorithm.
In a preferred embodiment of the present invention, in the step of constructing the gradient boosting model, stratified random sampling is adopted: the satellite visible light channel data are split into the four intervals 0-0.25, 0.25-0.5, 0.5-0.75 and 0.75-1 and randomly sampled at a ratio of 1:1:1:1.
In a preferred embodiment of the present invention, the super-resolution reconstruction model adopts an RCAN reconstruction algorithm, and the optimizer of the super-resolution reconstruction model is Adam.
In a preferred embodiment of the present invention, the step of reconstructing the gradient boosting model further includes: optimizing the LightGBM model in the gradient boosting algorithm with a genetic algorithm, using the mean absolute error, the root mean square error and the R-squared as the basic tuning metrics.
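The genetic-algorithm tuning step can be sketched as follows: a deliberately tiny GA in NumPy, not the patent's implementation. The names `ga_tune`, `score_fn` and `bounds` are hypothetical; in practice `score_fn` would train a LightGBM model with the candidate hyper-parameters and return its MAE or RMSE on a validation set.

```python
import numpy as np

def ga_tune(score_fn, bounds, pop=20, gens=30, seed=0):
    """Minimal genetic algorithm for hyper-parameter search.
    score_fn maps a parameter vector to a loss (e.g. validation MAE);
    bounds is a list of (low, high) pairs, one per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        fit = np.array([score_fn(v) for v in x])
        order = np.argsort(fit)                      # lower loss is better
        elite = x[order[: pop // 2]]                 # selection
        parents = elite[rng.integers(0, len(elite), size=(pop - 1, 2))]
        a = rng.random((pop - 1, 1))
        child = a * parents[:, 0] + (1 - a) * parents[:, 1]      # crossover
        child += rng.normal(0.0, 0.05, child.shape) * (hi - lo)  # mutation
        x = np.vstack([elite[:1], np.clip(child, lo, hi)])       # keep the best
    fit = np.array([score_fn(v) for v in x])
    return x[np.argmin(fit)]
```

With elitism (the best individual is carried over unchanged), the search never regresses, which is why a few dozen generations suffice for low-dimensional hyper-parameter spaces.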
In a preferred embodiment of the present invention, in the above inversion step, the mean square error, the peak signal-to-noise ratio and the structural similarity are used as metrics for evaluating the inverted visible light cloud image.
In a preferred embodiment of the present invention, the resolution of the historical satellite infrared channel data, atmospheric environment data and satellite visible light channel data is 500-2000 meters.
An infrared light-based visible light cloud image conversion system, the system comprising:
a data acquisition module: collects and converts real-time and historical land-use data, satellite infrared data and atmospheric environment data, and collects, resamples and converts historical satellite visible light data, where the converted satellite visible light data has a resolution of 500 meters;
a model training module: imports the historical data into a gradient boosting model and reconstructs it to obtain an inversion visible light cloud image model;
an inversion module: imports the acquired real-time data and the output of the reconstructed model into the inversion visible light cloud image model and performs inversion to obtain the visible light cloud image.
An infrared light-based visible light cloud image conversion terminal comprises a memory and a processor, wherein the memory stores a computer program implementing the infrared light-based visible light cloud image conversion method, and the processor executes the computer program to realize the steps of the method.
The embodiments of the invention have the following beneficial effects: by combining infrared channel data with atmospheric physics data and global land-use data, the resulting model is more precise, objective and comprehensive. The model is trained through gradient boosting, a generative adversarial network and super-resolution reconstruction, and real-time satellite data, atmospheric environment parameters and land-use data are used to invert the visible light cloud image, so that a full-hemisphere image can be converted in a short time at higher resolution, and infrared-to-visible conversion is possible 24 hours a day anywhere in the world.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic flow chart of a first embodiment of the present invention;
FIG. 2 is a schematic flow chart of a second embodiment of the present invention;
FIG. 3 is a flowchart of the scheme for converting infrared light data into a visible light cloud image.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the prior art, meteorological satellite monitoring is one of the important means of detecting severe convective weather. Weather satellites have existed for more than 50 years, and the mainstream geostationary weather satellites currently in operation include China's FY-2H and FY-4A, the United States' GOES-16 and GOES-17, Japan's Himawari-8, and the European Union's Meteosat-8 and Meteosat-11.
Meteorological satellites typically observe different bands of the electromagnetic spectrum, including visible, near-infrared and thermal infrared light. The visible wavelengths are 0.6-1.6 microns, the near-infrared wavelengths 3.9-7.3 microns, and the thermal infrared wavelengths 8.7-13.4 microns. Generally, a visible cloud image can be understood as a visible-light picture captured by a meteorological satellite during the daytime, showing clouds, cloud systems (e.g. fronts and tropical storms), lakes, forests, mountains, etc.; with continuous shooting, even the arrangement and movement of clouds can be observed. The infrared cloud image comprises pictures taken in the near-infrared and thermal infrared bands, from which professional forecasters can determine the height and type of clouds and calculate land and surface-water temperatures, among other things.
Meteorological radar is also an important means of detecting severe convective weather, but compared with meteorological satellites it has obvious drawbacks: its observation range is often only a few hundred kilometers, it is easily interfered with by terrain, it is difficult to deploy in remote areas, and it is expensive. A satellite, by contrast, can cover an entire hemisphere, and in particular can observe the deep sea, remote mountains and plateaus; it can also observe larger-scale weather systems such as typhoons and tropical cyclones.
To obtain a visible light image at night, the prior art generally adopts an image fusion method based on the wavelet transform, which proceeds as follows:
1. Pre-process the source images and perform multi-wavelet decomposition: resample the low-resolution infrared cloud image so that its resolution matches the visible light cloud image, then pre-filter both images with a pre-filter bank to obtain low-frequency and high-frequency sub-images;
2. Fuse the low-frequency components by weighted fusion;
3. Fuse the high-frequency components: sum over local regions, compare local-region variances, and select between the infrared and visible high-frequency sub-images;
4. Reconstruct the high- and low-frequency components into one image via the inverse multi-wavelet transform, and filter it to obtain the result image.
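As an illustration of steps 1-4, here is a minimal NumPy sketch of the idea. It substitutes a one-level 2-D Haar transform for the multi-wavelet decomposition and pre-filter bank (a simplification for illustration only; the function names are hypothetical):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition -> (low, (h, v, d)) sub-images."""
    a = (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[1::2, 0::2] + img[0::2, 1::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[1::2, 0::2] - img[0::2, 1::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def ihaar2d(a, hvd):
    """Inverse of haar2d (exact reconstruction)."""
    h, v, d = hvd
    img = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    img[0::2, 0::2] = a + h + v + d
    img[1::2, 0::2] = a - h + v - d
    img[0::2, 1::2] = a + h - v - d
    img[1::2, 1::2] = a - h - v + d
    return img

def wavelet_fuse(ir, vis, w=0.5):
    """Steps 2-4 above: weighted low-frequency fusion, magnitude-based
    high-frequency selection, then inverse transform."""
    a1, hf1 = haar2d(ir)
    a2, hf2 = haar2d(vis)
    a = w * a1 + (1 - w) * a2                      # step 2: weighted fusion
    hf = tuple(np.where(x ** 2 >= y ** 2, x, y)    # step 3: pick the stronger
               for x, y in zip(hf1, hf2))          # local high-frequency response
    return ihaar2d(a, hf)                          # step 4: reconstruct
```

A real implementation would use multiple decomposition levels and local-window variances rather than per-pixel magnitudes, but the structure of the algorithm is the same.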
However, the wavelet-transform approach has drawbacks. Image fusion has no uniform quality criterion for the fused result, so current rating methods reflect only one aspect of performance. Because the imaging principles of infrared and visible light differ, the stitching edges are discontinuous and the temperature values fluctuate excessively. A single high-frequency fusion criterion is difficult to apply to all high-frequency sub-bands at once; different criteria are needed for different bands to achieve good fusion, which is complex to implement. Moreover, the hemispheric infrared and visible cloud images cannot be smoothly stitched: the two images are merely fused with each other rather than one being converted into the other, so a 24-hour all-day visible cloud image cannot be produced.
Based on this, this section presents, through two embodiments, a method and system for converting meteorological satellite infrared data into visible light cloud images.
The application fields of the conversion method in this embodiment include but are not limited to: 1. Offshore operation risk management. Offshore operations are high-risk, and the largest risk factor is sudden weather change: instantaneous strong wind and waves can sink a ship, so real-time cloud monitoring is extremely important for offshore work. 2. Aviation weather, which directly affects flight safety; an aircraft that encounters severe convective weather during takeoff or landing is in danger, so this technique is very valuable to the aviation industry. 3. Weather forecasting: professional forecasters can observe the cloud-field distribution of different weather systems in real time from the cloud image, identify the characteristics of various clouds at night in time, and support weather early warning.
First embodiment
Referring to fig. 1 and fig. 3, the present embodiment provides a method for converting a visible light cloud image based on infrared light, the method comprising:
data acquisition: collecting real-time data and historical data of land utilization data, satellite infrared light data and atmospheric environment data, and collecting historical data of satellite visible light;
model training: import the historical data into a gradient boosting model and reconstruct it to obtain an inversion visible light cloud image model;
inversion: import the acquired real-time data and the output of the reconstructed model into the inversion visible light cloud image model and perform inversion to obtain the visible light cloud image.
More specifically, in the data acquisition step of this embodiment, satellite infrared data are acquired by collecting historical Himawari-8 meteorological satellite channel data, namely the infrared channels B08, B09, B10, B11, B12, B13, B14, B15 and B16. In this embodiment, these channels are projection-converted to a resolution of 2 kilometers.
Atmospheric environment data are acquired by obtaining the historical cloud-layer height on spatial grid points from an atmospheric physics model and up-sampling it to a resolution of 2 kilometers. The cloud-top temperature observed by the satellite infrared channels has low penetrating power and is easily affected by season, which can lead to misjudging the cloud-layer height. In general, the higher the cloud top, the lower the cloud-top temperature and the larger the scale of cloud-cluster development; after blending in the atmospheric cloud-top height, the model captures the cloud position at each height better, which increases the inversion accuracy of the visible light cloud image.
The land-use data are obtained by down-sampling, selecting the maximum point in each block, giving a sampled resolution of 2 kilometers. Because the meteorological satellite's infrared channel data can hardly cover all terrains, adding land-use data lets the model capture various terrains effectively and enhances its overall accuracy.
Satellite visible light data are acquired by collecting historical Himawari-8 meteorological satellite channel data, namely the visible channel B03, projection-converted to resolutions of 2 kilometers and 500 meters. To achieve all-weather inversion even though visible light cannot be observed at night, the invention does not use satellite visible channel data as real-time input; it uses infrared data, and the satellite visible data serve only as historical training data.
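The resampling operations described above (up-sampling the coarse atmospheric field, down-sampling land use by taking the maximum point per block) can be sketched in NumPy as follows. This is an illustrative simplification that ignores the map projection step; the function names are hypothetical:

```python
import numpy as np

def upsample_nearest(grid, factor):
    """Up-sample a coarse field (e.g. model cloud-top height) to a finer
    grid by nearest-neighbour repetition."""
    return np.repeat(np.repeat(grid, factor, axis=0), factor, axis=1)

def downsample_max(grid, factor):
    """Down-sample a fine categorical field (e.g. land-use classes) by
    taking the maximum value in each block, as described in the text."""
    h, w = grid.shape
    blocks = grid[:h - h % factor, :w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return blocks.max(axis=(1, 3))
```

In a production pipeline these steps would run on the reprojected longitude-latitude grids so that all inputs share the same 2-kilometer resolution.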
More specifically, in the model training step of this embodiment, historical satellite infrared channel data, historical atmospheric environment data and land-use data are used as the input of the gradient boosting model, and historical satellite visible light channel data as the output. Because the visible light data are imbalanced, in this embodiment they are divided by stratified random sampling into the intervals 0-0.25, 0.25-0.5, 0.5-0.75 and 0.75-1, and the four segments are randomly sampled at a ratio of 1:1:1:1 so that each interval contributes the same number of samples. The main parameters of the gradient boosting model in this embodiment are: the base classifier is GBDT, the total number of trees is 1000, the learning rate is 0.005, the minimum number of samples per leaf is 100, the maximum tree depth is 8, the fraction of data used in each iteration is 0.8, the L1 regularization is 0.01, the number of iterations is 600, and the maximum number of feature bins is 64. Because the base model is a tree model, no data standardization is needed, and the output of the gradient boosting model is converted into a 16-bit RGB image to facilitate subsequent training.
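The stratified sampling and the quoted hyper-parameters can be sketched as follows. The dictionary uses LightGBM-style key names, which are an assumption of this sketch (the text names the values but not the exact keys), and `stratified_sample` is a hypothetical helper:

```python
import numpy as np

# GBDT hyper-parameters quoted in the text, written as a LightGBM-style
# parameter dict; the key names are assumptions of this sketch.
GBM_PARAMS = {
    "boosting_type": "gbdt",     # base classifier is GBDT
    "n_estimators": 1000,        # total number of trees (600 boosting iterations quoted)
    "learning_rate": 0.005,
    "min_child_samples": 100,    # minimum samples per leaf
    "max_depth": 8,
    "subsample": 0.8,            # fraction of data used per iteration
    "reg_alpha": 0.01,           # L1 regularization
    "max_bin": 64,               # maximum number of feature bins
}

def stratified_sample(values, n_per_bin, rng):
    """Draw the same number of samples from each of the four reflectance
    intervals 0-0.25, 0.25-0.5, 0.5-0.75, 0.75-1 (ratio 1:1:1:1)."""
    edges = [0.0, 0.25, 0.5, 0.75, 1.0]
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (values >= lo) & (values < hi)
        if hi == 1.0:                      # include the exact top value
            in_bin |= values == 1.0
        idx = np.flatnonzero(in_bin)
        chosen.append(rng.choice(idx, size=n_per_bin, replace=False))
    return np.concatenate(chosen)
```

The sampled indices would then select the training rows passed to the gradient boosting model.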
In this embodiment, because the input dimensionality is large, training and prediction with a neural network on all of the data would be slow. A machine-learning gradient boosting model is therefore adopted: historical meteorological satellite data, land-use data and atmospheric environment data are reprojected and interpolated to a uniform longitude-latitude grid as input, historical visible light cloud image data are used as output, and a point-to-point mapping is learned. This achieves a preliminary feature dimensionality reduction, and the result is passed as an intermediate variable to the next model as its input.
In the model training step of this embodiment, the final inversion output of the gradient boosting model is used as the input of the generative adversarial network (GAN), the historical satellite visible light channel data as the output, and the network is trained on these pairs. The main function of the GAN here is image translation: given input-output image pairs as training data, it translates an input image from one domain to another. Because the visible light cloud image data require high accuracy, which an ordinary GAN cannot deliver, the base model of this embodiment adopts pix2pixHD, whose multi-scale discriminator is better suited to generating high-resolution images and can produce fine details and realistic textures. The GAN model is built with the PyTorch framework and trained. Its main parameters are: the learning rate is 0.0002, the optimizer is Adam with hyper-parameter 0.5, the learning-rate decay period is 100, the number of discriminators is 2, the number of filters in the first convolution layer is 64, and the feature-matching loss weight is 10.
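The quoted GAN settings can be collected into a configuration fragment. The key names below loosely follow pix2pixHD's command-line options but are assumptions of this sketch, not the patent's exact code:

```python
# pix2pixHD-style training configuration quoted from the text.
GAN_OPTS = {
    "lr": 2e-4,            # learning rate 0.0002
    "optimizer": "Adam",
    "beta1": 0.5,          # Adam hyper-parameter quoted as 0.5
    "niter_decay": 100,    # learning-rate decay period
    "num_D": 2,            # multi-scale discriminators
    "ngf": 64,             # filters in the generator's first conv layer
    "lambda_feat": 10.0,   # feature-matching loss weight
}
```

Such a dict would typically be unpacked into the training script's option parser or optimizer constructors.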
Unlike a conventional convolutional neural network, the main model of the invention uses a generative adversarial network, whose structure consists of two models: a generative model (G) and a discriminative network (D). Compared with a conventional convolutional network, it uses back-propagation without Markov chains and needs no inference over latent variables during training, which speeds up learning. In principle, generators and discriminators can be built from any differentiable functions: the parameter updates of the generative model G come not directly from data samples but from back-propagation through D. This means a GAN can train an arbitrary generator network, whereas other frameworks are restricted to specific functional forms, such as requiring a Gaussian output layer or requiring generators whose sample points have non-zero weight. The output of the gradient boosting model is taken as the GAN input, the historical visible light cloud image data as the corresponding output, and image translation training is carried out.
To obtain a visible light cloud image with higher resolution, this embodiment finally uses RCAN, a super-resolution reconstruction algorithm: the GAN output is used as the input of the super-resolution reconstruction model, the 500-meter-resolution visible light cloud image data as the training target, and the model is trained on these pairs. The principle is to build a very deep residual channel attention network (RCAN) for high-precision image reconstruction. Compared with a conventional convolutional network, RCAN can be built deeper and achieve higher reconstruction performance: its residual-in-residual (RIR) structure provides long and short skip connections that help pass abundant low-frequency information, so the main network can learn more effective information; and its channel attention (CA) mechanism adaptively adjusts features by considering the interdependencies among feature channels, further improving the network's expressive power. Compared with traditional up-sampling algorithms, the model adopted in this embodiment generates clearer, more detailed images. The main parameters of the super-resolution reconstruction model in this embodiment are: batch size 16, 3 input channels, 3 output channels, 64 intermediate channels, 16 blocks, reconstruction magnification 4, Adam optimizer with learning rate 0.0002 and beta1/beta2 of 0.9/0.99, and 1,000,000 total iterations.
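The channel attention (CA) mechanism mentioned above can be sketched in NumPy. This is a single illustrative CA gate, not the full RCAN; the weight matrices `w1` and `w2` stand in for the two 1x1 convolutions of the squeeze-and-excite bottleneck and are hypothetical:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """RCAN-style channel attention: global average pooling per channel,
    a bottleneck with ReLU, then a sigmoid gate that rescales each
    feature channel according to inter-channel dependencies."""
    z = feat.mean(axis=(1, 2))               # feat: (C, H, W) -> pool to (C,)
    s = np.maximum(w1 @ z, 0.0)              # squeeze to (C/r,) with ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # excite + sigmoid gate -> (C,)
    return feat * s[:, None, None]           # rescale channels
```

In RCAN this gate sits inside every residual channel attention block, so informative channels are amplified and redundant low-frequency channels are suppressed.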
To obtain a visible light cloud image with higher resolution, a further adjustment is then made through the image super-resolution network: the GAN output serves as the model input, the 500-meter-resolution visible light cloud image data as the target, and a higher-resolution visible light cloud image is output.
The inversion step in this embodiment specifically includes the following steps:
data processing: apply to the real-time observed satellite infrared data and atmospheric environment data the same processing steps as in training, namely projection conversion for the infrared data and statistical conversion for the atmospheric data, and reuse the land-use data processed during training;
inversion: feed the processed data into the trained gradient boosting model, feed its output into the generative adversarial network, which outputs data at 2-kilometer resolution, then feed that into the image super-resolution network to convert it to 500-meter resolution, completing the inversion of the visible light cloud image;
inversion result metrics: the mean square error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used as metrics for the inverted visible light cloud image, where, for a real image I and an inverted image K of size m x n,

MSE = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I(i,j)-K(i,j)\right]^{2}

The MSE is the expected value of the squared difference between the estimate and the true value; the smaller it is, the smaller the difference between the inverted image and the real image.

PSNR is the ratio of the peak signal energy to the average noise energy, usually expressed in decibels on a log scale; the larger the value, the smaller the noise of the inverted image:

PSNR = 10\log_{10}\left(\frac{MAX_{I}^{2}}{MSE}\right) = 20\log_{10}\left(\frac{MAX_{I}}{\sqrt{MSE}}\right)

where MAX_{I} is the maximum possible pixel value of the image.

The SSIM in this embodiment defines structural information, from the perspective of image composition, as an attribute reflecting the structure of objects in the scene independently of brightness and contrast. Let the real image be X and the inverted image be Y. The luminance term l(x, y) is

l(x,y) = \frac{2\mu_{x}\mu_{y}+c_{1}}{\mu_{x}^{2}+\mu_{y}^{2}+c_{1}}

the contrast term c(x, y) is

c(x,y) = \frac{2\sigma_{x}\sigma_{y}+c_{2}}{\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2}}

and the structure term s(x, y) is

s(x,y) = \frac{\sigma_{xy}+c_{3}}{\sigma_{x}\sigma_{y}+c_{3}}

With c_{3}=c_{2}/2, the product of the three terms simplifies to

SSIM(x,y) = \frac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})}

SSIM(x, y) ranges from -1 to +1; the closer to 1, the more similar the structure is to the real image. Here \mu_{x}, \mu_{y} are the means of images X and Y, \sigma_{x}, \sigma_{y} their standard deviations, and \sigma_{xy} their covariance; c_{1}=(k_{1}L)^{2} and c_{2}=(k_{2}L)^{2} are constants that maintain numerical stability, with k_{1}=0.01, k_{2}=0.03, and L the number of gray levels.
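The three metrics follow directly from the formulas above. In this sketch the SSIM is computed once over the whole image; the common implementation averages it over local sliding windows:

```python
import numpy as np

def mse(i, k):
    """Mean square error between real image i and inverted image k."""
    return float(np.mean((i.astype(float) - k.astype(float)) ** 2))

def psnr(i, k, max_i=255.0):
    """Peak signal-to-noise ratio in decibels; max_i is the maximum
    possible pixel value (MAX_I in the formula)."""
    m = mse(i, k)
    return float("inf") if m == 0 else 10.0 * np.log10(max_i ** 2 / m)

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image, using the simplified
    product form with c3 = c2 / 2."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                     # sigma_x^2, sigma_y^2
    cov = ((x - mx) * (y - my)).mean()            # sigma_xy
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An identical image pair gives MSE 0, infinite PSNR and SSIM 1, which is a quick sanity check for any implementation.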
Compared with conventional schemes, the nighttime satellite cloud image conversion in this embodiment has the following advantages. The method converts infrared light to visible light 24 hours a day anywhere in the world, so forecasters can clearly distinguish the position of low clouds at night, improving their capabilities in nighttime typhoon positioning, fog monitoring, cold-air monitoring and the like. In addition, the converted resolution reaches 500 meters, compared with the 2-kilometer resolution of the infrared data. The reconstruction model in this embodiment uses not only infrared cloud image data but also atmospheric physics data and global land-use data, so the resulting model is more precise, objective and comprehensive; an entire hemisphere image can be converted in one pass in a short time at 500-meter resolution.
This embodiment also provides an infrared-light-based visible light cloud image conversion system, which comprises a data acquisition module, a model training module and an inversion module, wherein:
the data acquisition module is used for acquiring and converting real-time and historical land-use data, satellite infrared data and atmospheric environment data, and for acquiring and converting historical satellite visible light data;
the model training module is used for importing the acquired historical data into a gradient boosting model, which is reconstructed to obtain the visible light cloud image inversion model; the specific reconstruction steps of the gradient boosting model in this embodiment are the model training steps of the first embodiment.
Specifically: historical satellite infrared channel data, historical atmospheric environment data and land-use data are taken as input, historical satellite visible light channel data as output, and the gradient boosting model is trained; the output of the gradient boosting model is then taken as the input of a generative adversarial network, with historical visible light cloud image data as the corresponding output, to train the image-to-image translation; finally, to obtain a higher-resolution visible light cloud image, the output of the adversarial network is fed as input to an image super-resolution network, with the higher-resolution visible light cloud image as output. Trained through these steps, the model outputs visible light cloud images at a resolution of 500 meters.
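The three-stage cascade described above (gradient boosting, then GAN translation, then super-resolution) can be sketched as a simple composition of stages. The stand-in stage functions below (channel averaging, identity-like refinement, nearest-neighbor 4x upscaling) are illustrative assumptions only, not the patent's actual LightGBM, pix2pixHD and RCAN components.

```python
import numpy as np

def gbm_stage(features):
    """Stand-in for the gradient boosting model: per-pixel regression
    from stacked input channels to a coarse visible-light estimate."""
    return features.mean(axis=0)          # (C, H, W) -> (H, W)

def gan_stage(coarse):
    """Stand-in for the pix2pixHD-style image translation stage."""
    return np.clip(coarse, 0.0, 1.0)      # identity-like refinement

def sr_stage(image, scale=4):
    """Stand-in for the super-resolution stage: nearest-neighbor
    upscaling from a ~2 km grid to a ~500 m grid (factor 4)."""
    return np.repeat(np.repeat(image, scale, axis=0), scale, axis=1)

def invert_visible(features):
    return sr_stage(gan_stage(gbm_stage(features)))

ir_channels = np.random.rand(9, 16, 16)   # 9 infrared bands on a 2 km grid
visible = invert_visible(ir_channels)
print(visible.shape)                      # (64, 64): a 4x finer grid
```

The point of the composition is that each stage consumes the previous stage's output, so the three models can be trained sequentially and then chained at inference time.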
An inversion module: the acquired real-time data and the output of the reconstructed model are imported into the visible light cloud image inversion model, and the visible light cloud image is obtained by inversion. For the specific operation of the inversion module, refer to the inversion step of the first embodiment, which is not repeated here.
In view of the above methods, this embodiment further provides an infrared-light-based visible light cloud image conversion terminal, including but not limited to intelligent terminals such as mobile phone terminals and computer terminals. The conversion terminal comprises a memory and a processor: the memory is used for storing a computer program for the infrared-light-based visible light cloud image conversion method, and the processor is used for executing the computer program to realize the steps of that method.
Second embodiment
Referring to fig. 2 and fig. 3, this embodiment provides another infrared-light-based visible light cloud image conversion method, comprising the same three steps as the first embodiment:
data acquisition: real-time and historical land-use data, satellite infrared data and atmospheric environment data are likewise collected, together with historical satellite visible light data;
model training: the historical data are imported into a gradient boosting model, and the visible light cloud image inversion model is obtained after the gradient boosting model is reconstructed;
inversion: the acquired real-time data and the output of the reconstructed model are imported into the inversion model, and the visible light cloud image is obtained by inversion.
The data acquisition and model training steps of this embodiment differ from those of the first embodiment. Specifically, in the data acquisition step this embodiment likewise collects real-time and historical land-use data, satellite infrared data and atmospheric environment data, together with historical satellite visible light data, but all of the above historical data are converted to a resolution of 500 meters.
The steps are as follows. Satellite infrared channel data processing: historical Himawari-8 meteorological satellite channel data are collected, comprising the infrared channels B08, B09, B10, B11, B12, B13, B14, B15 and B16, and each channel is projection-converted to a resolution of 500 meters.
Atmospheric environment data processing: the cloud-layer height on the historical spatial grid points is obtained through an atmospheric physics model and upsampled, the resolution after sampling being 500 meters.
Land-use data processing: the land-use data are downsampled by selecting the maximum value within each block, the resolution after sampling being 500 meters.
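The maximum-value downsampling described here can be sketched with a NumPy reshape trick; the 4x4 block factor and the toy grid below are assumptions for illustration, not values specified by this patent.

```python
import numpy as np

def downsample_max(grid, factor):
    """Downsample a 2-D grid by taking the maximum of each
    factor x factor block (grid dimensions must divide evenly)."""
    h, w = grid.shape
    blocks = grid.reshape(h // factor, factor, w // factor, factor)
    return blocks.max(axis=(1, 3))

land_use = np.arange(64).reshape(8, 8)   # toy land-use grid
coarse = downsample_max(land_use, 4)
print(coarse)                            # [[27 31] [59 63]]: block maxima
```

Taking the block maximum (rather than the mean) keeps the dominant land-use code in each coarse cell, which matters for categorical data.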
Satellite visible light channel data processing: historical Himawari-8 meteorological satellite channel data are collected for the visible light channel B03, and the channel is projection-converted to a resolution of 500 meters.
In the model training step, the resampled land-use data, the historical visible light data, the statistically converted historical atmospheric environment data and the projection-converted historical infrared data are imported into a gradient boosting model; the LightGBM model in the gradient boosting algorithm is optimized with a genetic algorithm, using the mean absolute error, the root mean square error and R-squared as the basic tuning criteria, and the reconstructed visible light cloud image inversion model is output.
In the prior art, because the LightGBM model in the gradient boosting algorithm is complex, its parameters often play a key role in model performance, and parameter tuning depends to some extent on human experience and repeated trial and error. For this reason, this embodiment optimizes LightGBM with a Genetic Algorithm (GA): by simulating the genetic mechanisms of nature and applying the ideas of replication, crossover and mutation, the GA evolves an optimal result suited to the specified environment; the search is random, parallel and global, and can approach the optimum. The model is then further tuned on the basis of the Mean Absolute Error (MAE), the Root Mean Square Error (RMSE) and R-squared. Compared with the original scheme, this greatly improves the speed of image conversion while reducing the conversion error, and the infrared data can be converted into a visible light cloud image model.
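A minimal genetic-algorithm loop over two LightGBM-style hyperparameters can be sketched as follows. The surrogate objective (a synthetic validation-MAE surface with a known optimum) stands in for actual LightGBM training, and the parameter names, ranges and GA settings are assumptions for illustration, not values from this patent.

```python
import random

# Assumed search space: (learning_rate, num_leaves)
BOUNDS = [(0.01, 0.3), (8, 128)]

def surrogate_mae(params):
    """Stand-in for the validation MAE of a trained LightGBM model;
    minimized near learning_rate = 0.1, num_leaves = 31."""
    lr, leaves = params
    return (lr - 0.1) ** 2 * 50 + ((leaves - 31) / 100) ** 2

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.3):
    return [random.uniform(lo, hi) if random.random() < rate else v
            for v, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def ga_tune(generations=40, pop_size=20, seed=0):
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate_mae)            # rank by fitness
        elite = pop[: pop_size // 4]           # selection (replication)
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children                 # next generation
    return min(pop, key=surrogate_mae)

best = ga_tune()
print(best)   # should approach [0.1, 31]
```

In a real tuning run, `surrogate_mae` would be replaced by training a LightGBM model with the candidate parameters and evaluating MAE (with RMSE and R-squared as secondary criteria) on held-out data.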
The inversion step of this embodiment uses the same method as the first embodiment, and specifically comprises the following steps:
data processing: the satellite data and atmospheric environment data observed in real time undergo the same processing steps as in training, while the land-use data reuse the data processed during training;
inversion: the historical data are put into the trained gradient boosting model, which outputs the visible light cloud image inversion model at 500-meter resolution; the processed real-time data are then fed into the inversion model and the visible light cloud image is inverted, completing the inversion.
This embodiment also uses the mean square error, the peak signal-to-noise ratio and the structural similarity as quality indexes for the inverted visible light cloud image, improving its accuracy.
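The first two indexes mentioned above are directly related: PSNR = 10·log10(L² / MSE), where L is the peak pixel value. A hedged NumPy sketch, assuming 8-bit images (L = 255):

```python
import numpy as np

def mse(x, y):
    """Mean square error between two images."""
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the inverted
    cloud image is closer to the real visible-light image."""
    err = mse(x, y)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                     # one pixel off by 10
print(round(psnr(a, b), 2))       # 46.19
```

MSE alone is scale-dependent, which is why PSNR (and the structural similarity of the first embodiment) is reported alongside it.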
For the conversion method of the second embodiment, this embodiment further provides an infrared-light-based visible light cloud image conversion system comprising a data acquisition module, a model training module and an inversion module, wherein:
the data acquisition module is used for acquiring and converting real-time and historical land-use data, satellite infrared data and atmospheric environment data, and for acquiring and converting historical satellite visible light data;
the model training module is used for importing the acquired historical data into a gradient boosting model, which is reconstructed to obtain the visible light cloud image inversion model; the specific reconstruction steps of the gradient boosting model in this embodiment are the model training steps of the second embodiment.
Specifically: a genetic algorithm optimizes the LightGBM model in the gradient boosting algorithm, evolving an optimal result suited to the specified environment; the search is random, parallel and global, and can approach the optimum. The model is then further tuned on the basis of the mean absolute error, the root mean square error and R-squared, and the final optimized model serves as the reconstructed visible light cloud image inversion model.
An inversion module: the acquired real-time data and the output of the reconstructed model are imported into the visible light cloud image inversion model, and the visible light cloud image is obtained by inversion. For the specific operation of the inversion module, refer to the inversion step of the second embodiment, which is not repeated here.
In summary, the invention uses artificial intelligence to realize the conversion from infrared cloud images to visible light cloud images, and compared with traditional methods has the following advantages:
(1) artificial intelligence markedly reduces computation time, which is significant in real-time severe weather monitoring where minutes and seconds count;
(2) during model reconstruction and training, the method takes the land-use rate and the atmospheric environment parameters as inputs, improving accuracy after data conversion and allowing the model's predictions to apply to geographic locations with different terrains, surfaces and land uses;
(3) the method can invert a visible light cloud image at any geographic position and any time, with an image resolution of 500 meters; through artificial intelligence learning over a large body of experience, the accuracy of the visible light cloud image is greatly improved.
This description describes examples of embodiments of the invention, and is not intended to illustrate and describe all possible forms of the invention. It should be understood that the embodiments described in this specification can be implemented in many alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Specific structural and functional details disclosed are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention. It will be appreciated by persons skilled in the art that a plurality of features illustrated and described with reference to any one of the figures may be combined with features illustrated in one or more other figures to form embodiments which are not explicitly illustrated or described. The described combination of features provides a representative embodiment for a typical application. However, various combinations and modifications of the features consistent with the teachings of the present invention may be used as desired for particular applications or implementations.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A visible light cloud image conversion method based on infrared light, characterized by comprising the following steps: data acquisition: collecting and converting real-time and historical land-use data, satellite infrared data and atmospheric environment data, and collecting and resampling historical satellite visible light data, the converted historical satellite visible light data having a resolution of 500 meters;
model training: importing the historical data into a gradient boosting model, and obtaining a visible light cloud image inversion model through reconstruction training of the gradient boosting model;
inversion: importing the acquired real-time data and the output of the reconstructed model into the visible light cloud image inversion model, and obtaining a visible light cloud image by inversion.
2. The visible light cloud image conversion method based on infrared light according to claim 1, wherein the step of reconstructing the gradient boosting model comprises: taking the output of the gradient boosting model as the input of a generative adversarial network, taking the historical satellite visible light data as the output, and training to obtain the generative adversarial network model, which is a pix2pixHD model built with the PyTorch framework; and taking the output of the generative adversarial network together with the 500-meter-resolution historical satellite visible light data as input, and training to obtain a super-resolution reconstruction model.
3. The visible light cloud image conversion method based on infrared light according to claim 2, wherein the gradient boosting model adopts a tree model, and the base classifier adopts the GBDT algorithm.
4. The visible light cloud image conversion method based on infrared light according to claim 2, wherein in the step of reconstructing the gradient boosting model, stratified random sampling is adopted, and the satellite visible light data are randomly sampled in segments at a ratio of 1:1:1:1.
5. The visible light cloud image conversion method based on infrared light according to claim 2, wherein the super-resolution reconstruction model adopts the RCAN reconstruction algorithm, and its optimizer is Adam.
6. The visible light cloud image conversion method based on infrared light according to claim 1, wherein the step of reconstructing the gradient boosting model further comprises: optimizing the LightGBM model in the gradient boosting algorithm with a genetic algorithm, and using the mean absolute error, the root mean square error and R-squared as the basic tuning criteria.
7. The visible light cloud image conversion method based on infrared light according to claim 1, wherein in the inversion step, the mean square error, the peak signal-to-noise ratio and the structural similarity are used as indexes for the inverted visible light cloud image.
8. The visible light cloud image conversion method based on infrared light according to claim 1, wherein the resolution of the historical satellite infrared data, atmospheric environment data and satellite visible light channel data is 500 to 2000 meters.
9. A visible light cloud image conversion system based on infrared light, characterized in that the system comprises: a data acquisition module for collecting and converting real-time and historical land-use data, satellite infrared data and atmospheric environment data, and for collecting and resampling historical satellite visible light data, the converted historical satellite visible light data having a resolution of 500 meters; a model training module for importing the historical data into a gradient boosting model and reconstructing the gradient boosting model to obtain a visible light cloud image inversion model;
an inversion module for importing the acquired real-time data and the output of the reconstructed model into the visible light cloud image inversion model and obtaining a visible light cloud image by inversion.
10. A visible light cloud image conversion terminal based on infrared light, comprising a memory and a processor, characterized in that the memory is used for storing a computer program for the visible light cloud image conversion method based on infrared light; and the processor is used for executing the computer program to realize the method steps of the visible light cloud image conversion method based on infrared light according to any one of claims 1 to 8.
CN202011565575.XA 2020-12-25 2020-12-25 Visible light cloud image conversion method and system based on infrared light and terminal thereof Active CN112669201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011565575.XA CN112669201B (en) 2020-12-25 2020-12-25 Visible light cloud image conversion method and system based on infrared light and terminal thereof

Publications (2)

Publication Number Publication Date
CN112669201A true CN112669201A (en) 2021-04-16
CN112669201B CN112669201B (en) 2023-09-12

Family

ID=75409443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011565575.XA Active CN112669201B (en) 2020-12-25 2020-12-25 Visible light cloud image conversion method and system based on infrared light and terminal thereof

Country Status (1)

Country Link
CN (1) CN112669201B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646272A (en) * 2012-02-23 2012-08-22 南京信息工程大学 Wavelet meteorological satellite cloud image merging method based on local variance and weighing combination
US20150341619A1 (en) * 2013-01-01 2015-11-26 Inuitive Ltd. Method and system for light patterning and imaging
US20160073043A1 (en) * 2014-06-20 2016-03-10 Rambus Inc. Systems and Methods for Enhanced Infrared Imaging
CN106651772A (en) * 2016-11-25 2017-05-10 宁波大学 Super-resolution reconstruction method of satellite cloud picture
CN110544205A (en) * 2019-08-06 2019-12-06 西安电子科技大学 Image super-resolution reconstruction method based on visible light and infrared cross input
CN111368817A (en) * 2020-02-28 2020-07-03 北京师范大学 Method and system for quantitatively evaluating heat effect based on earth surface type
CN111861884A (en) * 2020-07-15 2020-10-30 南京信息工程大学 Satellite cloud image super-resolution reconstruction method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KHIZAR HAYAT: "Multimedia super-resolution via deep learning: A survey", DIGITAL SIGNAL PROCESSING, vol. 81, pages 198 - 217, XP085466988, DOI: 10.1016/j.dsp.2018.07.005 *
SU JINCHENG; HU YONG; GONG CAILAN: "A hybrid super-resolution reconstruction algorithm for infrared cloud images", INFRARED (红外), no. 08, pages 36 - 41 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297174A (en) * 2021-05-24 2021-08-24 中南大学 Land use change simulation method based on deep learning
CN113297174B (en) * 2021-05-24 2023-10-13 中南大学 Land utilization change simulation method based on deep learning
WO2024021225A1 (en) * 2022-07-29 2024-02-01 知天(珠海横琴)气象科技有限公司 High-resolution true-color visible light model generation method, high-resolution true-color visible light model inversion method, and system

Also Published As

Publication number Publication date
CN112669201B (en) 2023-09-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant