CN112669201B - Visible light cloud image conversion method and system based on infrared light and terminal thereof - Google Patents


Info

Publication number
CN112669201B
CN112669201B (application CN202011565575.XA)
Authority
CN
China
Prior art keywords
visible light
model
data
cloud image
satellite
Prior art date
Legal status
Active
Application number
CN202011565575.XA
Other languages
Chinese (zh)
Other versions
CN112669201A (en)
Inventor
王卓阳
崔传忠
吴家豪
Current Assignee
Zhitian Zhuhai Hengqin Meteorological Technology Co ltd
Original Assignee
Zhitian Zhuhai Hengqin Meteorological Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhitian Zhuhai Hengqin Meteorological Technology Co ltd filed Critical Zhitian Zhuhai Hengqin Meteorological Technology Co ltd
Priority to CN202011565575.XA priority Critical patent/CN112669201B/en
Publication of CN112669201A publication Critical patent/CN112669201A/en
Application granted granted Critical
Publication of CN112669201B publication Critical patent/CN112669201B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an infrared-based visible light cloud image conversion method, system and terminal that can convert infrared light to visible light 24 hours a day at any location worldwide, with high conversion speed and high resolution. The method comprises the following steps: data acquisition: collecting and converting real-time and historical land use data, satellite infrared data and atmospheric environment data, and collecting, resampling and converting historical satellite visible light data, where the resolution of the converted satellite visible light data is 500 meters; model training: importing the historical data into a gradient boosting model and reconstructing the gradient boosting model to obtain an inversion model for the visible light cloud image; inversion: importing the acquired real-time data and the output of the reconstructed model into the inversion model, and inverting to obtain a visible light cloud image.

Description

Visible light cloud image conversion method and system based on infrared light and terminal thereof
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a visible light cloud image conversion method and system based on infrared light, and a terminal thereof.
Background
The visible light cloud image plays an important role in studying the movement and development of clouds and cloud systems and in detecting the formation and development of typhoons and other weather systems, and has achieved good results. Visible light cloud images offer higher resolution, clear contrast, and the ability to directly distinguish cloud layers at any height. However, because imaging depends on visible light, brightness and contrast are limited by the angle of sunlight, and no imagery can be captured at night. In contrast, the contrast of an infrared cloud image derives from temperature differences after converting infrared radiation intensity; the position and height of clouds are then resolved under the assumption that tropospheric temperature decreases with height. This approach does not depend on visible light, so imagery can be captured around the clock in all weather, and brightness and contrast are unaffected by the angle of sunlight. However, at night the land or sea surface temperature may be even lower than the temperature of low clouds. In such cases, forecasters cannot distinguish low-cloud positions from the infrared cloud image, which limits weather forecasting tasks such as night-time typhoon positioning, fog monitoring and cold-air monitoring.
Disclosure of Invention
The invention aims to provide a visible light cloud image conversion method and system based on infrared light, and a terminal thereof, which can convert infrared light to visible light 24 hours a day at any location worldwide, with high conversion speed and high resolution.
Embodiments of the present invention are implemented as follows:
A visible light cloud image conversion method based on infrared light, the method comprising:
data acquisition: collecting and converting real-time and historical land use data, satellite infrared data and atmospheric environment data, and collecting, resampling and converting historical satellite visible light data, where the resolution of the converted satellite visible light data is 500 meters;
model training: importing the historical data into a gradient boosting model and reconstructing the gradient boosting model to obtain an inversion model for the visible light cloud image;
inversion: importing the acquired real-time data and the output of the reconstructed model into the inversion model, and inverting to obtain a visible light cloud image.
In a preferred embodiment of the present invention, the specific steps of reconstructing the gradient boosting model include: taking the output of the gradient boosting model as the input of a generative adversarial network, taking historical satellite visible light channel data as the output, and training to obtain a generative adversarial network model, which is built as a pix2pixHD model using the PyTorch framework; then taking the output of the generative adversarial network and historical 500 m resolution satellite visible light cloud image data as input, and training to obtain a super-resolution reconstruction model.
In a preferred embodiment of the present invention, the base model of the gradient boosting model is a tree model, and the base classifier uses the GBDT algorithm.
In the step of constructing the gradient boosting model, stratified random sampling is used to draw satellite visible light channel data from the four intervals 0-0.25, 0.25-0.5, 0.5-0.75 and 0.75-1 at a 1:1:1:1 ratio.
In a preferred embodiment of the present invention, the above-mentioned super-resolution reconstruction model adopts an RCAN reconstruction algorithm, and an optimizer of the super-resolution reconstruction model is Adam.
In a preferred embodiment of the present invention, the step of reconstructing the gradient boosting model further includes: optimizing the LightGBM model in the gradient boosting algorithm with a genetic algorithm, using mean absolute error, root mean square error and R-squared as the basic tuning metrics.
In a preferred embodiment of the present invention, in the inversion step, mean square error, peak signal-to-noise ratio and structural similarity are used as evaluation indexes for the inverted visible light cloud image.
In a preferred embodiment of the present invention, the resolution of the historical satellite infrared channel data, atmospheric environment data and satellite visible light channel data is 500-2000 meters.
A visible light cloud image conversion system based on infrared light, the system comprising:
a data acquisition module: collecting and converting real-time and historical land use data, satellite infrared data and atmospheric environment data, and collecting, resampling and converting historical satellite visible light data, where the resolution of the converted satellite visible light data is 500 meters;
a model training module: importing the historical data into a gradient boosting model and reconstructing the gradient boosting model to obtain an inversion model for the visible light cloud image;
an inversion module: importing the acquired real-time data and the output of the reconstructed model into the inversion model, and inverting to obtain a visible light cloud image.
A visible light cloud image conversion terminal based on infrared light comprises a memory and a processor, wherein the memory is used for storing a computer program implementing the infrared-based visible light cloud image conversion method, and the processor is used for executing the computer program so as to carry out the steps of the method.
The embodiments of the invention have the following beneficial effects: based on infrared channel data combined with atmospheric physical data and global land use data, the resulting model is more precise, objective and comprehensive. Training is performed with a gradient boosting model, a generative adversarial network and super-resolution reconstruction, and real-time satellite data, atmospheric environment parameters and land use data are inverted into a visible light cloud image, so that a full-hemisphere image can be converted in a short time at higher resolution, and infrared light can be converted to visible light 24 hours a day at any location worldwide.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a first embodiment of the present invention;
FIG. 2 is a flow chart of a second embodiment of the present invention;
fig. 3 is a flow chart of a scheme for converting infrared light data into visible light cloud patterns.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the prior art, meteorological satellite monitoring is one of the important means of detecting strong convective weather. Meteorological satellites have been in operation for more than 50 years, and the main geostationary meteorological satellites currently in operation are FY-2H and FY-4A (China), GOES-16 and GOES-17 (United States), Himawari-8 (Japan), and Meteosat-8 and Meteosat-11 (European Union).
Meteorological satellites typically observe different bands of the electromagnetic spectrum, including visible light, near-infrared and thermal infrared. Here the visible light wavelengths are 0.6-1.6 microns, near-infrared 3.9-7.3 microns, and thermal infrared 8.7-13.4 microns. In general, a visible light cloud image can be understood as a visible-band image taken by a meteorological satellite during local daytime, showing clouds, cloud systems (such as fronts and tropical storms), lakes, forests and mountains; with continuous imaging, even the arrangement and movement of clouds can be observed. Infrared cloud images include imagery taken in the near-infrared and thermal infrared bands; a professional forecaster can judge cloud height and type from them and calculate land and surface-water temperatures.
Compared with meteorological satellites, weather radars have obvious drawbacks: their observation range is typically only hundreds of kilometers, they are easily interfered with by terrain, they are difficult to deploy in remote areas, and they are expensive to build. By contrast, a satellite can cover an entire hemisphere, including the deep sea, remote mountain areas and plateaus, and can detect larger-scale weather systems such as typhoons and tropical cyclones.
To obtain a visible light image at night, the prior art generally adopts an image fusion method based on the wavelet transform, which proceeds as follows:
1. preprocess the source images and perform multi-wavelet decomposition: resample the lower-resolution infrared cloud image so that its resolution matches the visible light cloud image, and pre-filter both images with a pre-filter bank to obtain low-frequency and high-frequency sub-images;
2. fuse the low-frequency components by weighted fusion;
3. fuse the high-frequency components: sum over local regions, compare local variances, and select between the infrared and visible high-frequency sub-images;
4. reconstruct from the high-frequency and low-frequency components by the inverse multi-wavelet transform, and filter to obtain the result image.
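As an illustration only, the four fusion steps above can be sketched with a one-level Haar transform standing in for the multi-wavelet decomposition (all function names here are hypothetical, and selection by coefficient magnitude stands in for the local-variance comparison):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH) sub-images."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a - b + c - d) / 4,
            (a + b - c - d) / 4, (a - b - c + d) / 4)

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_wavelet(ir, vis, w_ir=0.5):
    """Weighted fusion of the low-frequency parts; per-coefficient selection
    of the larger-magnitude coefficient for the high-frequency parts."""
    ir_c, vis_c = haar2d(ir), haar2d(vis)
    ll = w_ir * ir_c[0] + (1 - w_ir) * vis_c[0]
    highs = [np.where(np.abs(i) >= np.abs(v), i, v)
             for i, v in zip(ir_c[1:], vis_c[1:])]
    return ihaar2d(ll, *highs)
```

Because the transform pair is exact, fusing an image with itself reconstructs it unchanged, which is a convenient sanity check on the decomposition.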
However, the wavelet-transform approach has drawbacks. Image fusion has no unified quality-evaluation criterion, so current rating methods reflect performance in only one particular aspect. Moreover, because the infrared and visible imaging principles differ, the stitched edges are discontinuous and temperature values fluctuate too much. It is difficult to design a single high-frequency fusion criterion that suits all high-frequency sub-bands; different criteria are needed for different bands to achieve good fusion, which is complex to implement. Hemispheric infrared and visible cloud images therefore cannot be stitched smoothly: the two images are merely fused with each other rather than truly converted, so a visible light cloud image cannot be displayed 24 hours a day.
Based on this, this section presents, through two embodiments, a method and system for converting meteorological satellite infrared data into visible light cloud images.
The application fields of the conversion method in this embodiment include, but are not limited to: 1. risk management of offshore operations: these are high-risk operations whose greatest risk factor is sudden weather changes, since a sudden strong wind and waves can capsize a vessel, so real-time cloud monitoring is extremely important for offshore work; 2. aviation weather, which directly affects flight safety: if an aircraft encounters strong convective weather during take-off or landing, an accident may result, so this technique is very valuable for the aviation industry; 3. weather forecasting: professional forecasters can observe the cloud-field distribution of different weather systems in real time through cloud images, identify various cloud features at night in a timely manner, and support weather early warning.
First embodiment
Referring to fig. 1 and 3, the present embodiment provides a visible light cloud image conversion method based on infrared light, which includes:
data acquisition: collecting real-time and historical land use data, satellite infrared data and atmospheric environment data, and collecting historical satellite visible light data;
model training: importing the historical data into a gradient boosting model and reconstructing the gradient boosting model to obtain an inversion model for the visible light cloud image;
inversion: importing the acquired real-time data and the output of the reconstructed model into the inversion model, and inverting to obtain a visible light cloud image.
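The three steps above can be sketched end-to-end with stub stages standing in for the trained models. Everything here is illustrative: the stubs only mimic the shapes and data flow of the pipeline (nine infrared channels in, a 4x super-resolved reflectance image out), not the learned mappings.

```python
import numpy as np

def gradient_boosting_stage(ir, atmos, landuse):
    """Per-pixel first guess of visible reflectance from stacked inputs
    (a fixed linear blend stands in for the trained gradient boosting model)."""
    return np.clip(ir.mean(axis=0) * 0.5 + atmos * 0.3 + landuse * 0.2, 0.0, 1.0)

def gan_stage(first_guess):
    """Image-to-image translation (pix2pixHD in the text); identity stub."""
    return first_guess

def super_resolution_stage(img, scale=4):
    """2 km -> 500 m upsampling (RCAN in the text); nearest-neighbour stub."""
    return np.kron(img, np.ones((scale, scale)))

def invert_visible(ir, atmos, landuse):
    """Chain the three stages: boosting -> GAN -> super-resolution."""
    return super_resolution_stage(gan_stage(gradient_boosting_stage(ir, atmos, landuse)))
```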
More specifically, in the data acquisition step of this embodiment, satellite infrared data are obtained by collecting historical Himawari-8 meteorological satellite channel data, namely the infrared channels B08, B09, B10, B11, B12, B13, B14, B15 and B16. In this embodiment, these channels are projection-transformed to a resolution of 2 km.
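A minimal sketch of stacking the nine infrared channels onto a common grid; `regrid_nearest` is a hypothetical stand-in for the projection transform described, and the array sizes are illustrative:

```python
import numpy as np

def regrid_nearest(band, out_shape):
    """Nearest-neighbour regrid of one channel onto a target grid."""
    rows = np.linspace(0, band.shape[0] - 1, out_shape[0]).round().astype(int)
    cols = np.linspace(0, band.shape[1] - 1, out_shape[1]).round().astype(int)
    return band[np.ix_(rows, cols)]

# Stack the nine infrared channels (B08-B16) on a common 2 km grid.
channels = {f"B{n:02d}": np.random.rand(550, 550) for n in range(8, 17)}
grid_2km = np.stack([regrid_nearest(b, (500, 500)) for b in channels.values()])
```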
Atmospheric environment data are acquired by obtaining the cloud layer height on historical spatial grid points from an atmospheric physical model and upsampling it to a resolution of 2 km. The cloud-top temperature observed by the satellite infrared channels has low penetration and is easily affected by season, which can lead to misjudgment of cloud height. In general, the higher the cloud top, the lower the cloud-top temperature and the larger the scale of cloud development; after the atmospheric cloud-top height is integrated, the model can better capture the positions of cloud layers at all heights, which increases the inversion accuracy of the visible light cloud image.
Land use data are obtained by downsampling, selecting the maximum value point in each block, with a resolution of 2 km after sampling. Because meteorological satellite infrared channel data can hardly cover all terrain types, adding the land use data lets the model effectively capture various terrains and enhances the overall accuracy of the model.
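The maximum-value downsampling described for the land use data can be sketched as follows (function name and block factor are illustrative):

```python
import numpy as np

def downsample_max(grid, factor):
    """Downsample a land-use grid, keeping the maximum value point in each
    factor x factor block, as described for the land use data."""
    h, w = grid.shape
    trimmed = grid[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor,
                           w // factor, factor).max(axis=(1, 3))
```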
Satellite visible light data are acquired by collecting historical Himawari-8 meteorological satellite channel data for the visible light channel B03, projection-transformed to resolutions of 2 km and 500 m. Because visible light data cannot be obtained at night, in order to achieve all-weather inversion the invention does not use satellite visible light channel data as model input, but uses infrared data; the satellite visible light data serve only as historical training data.
More specifically, in the model training step of this embodiment, the historical satellite infrared channel data, historical atmospheric environment data and land use data are used as inputs of the gradient boosting model, and the historical satellite visible light channel data are used as output. Because the visible light data are imbalanced, this embodiment uses stratified random sampling to divide them into the intervals 0-0.25, 0.25-0.5, 0.5-0.75 and 0.75-1 and randomly samples the four segments at a 1:1:1:1 ratio, so that each interval contributes the same number of samples. The main parameters of the gradient boosting model in this embodiment are: base classifier GBDT, total number of trees 1000, learning rate 0.005, minimum number of records per leaf node 100, maximum tree depth 8, fraction of data used per iteration 0.8, L1 regularization 0.01, number of iterations 600, and maximum number of feature bins 64. Because the base model is a tree model, data standardization is not needed; the output of the gradient boosting model is converted into a 16-bit RGB image to facilitate subsequent training.
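The 1:1:1:1 stratified sampling and the stated hyper-parameters can be sketched as below; `stratified_sample` is illustrative, and the mapping of the stated parameters onto LightGBM option names is an assumption, not taken from the patent:

```python
import numpy as np

def stratified_sample(values, n_per_bin, rng):
    """Equal-count random sampling from the four reflectance intervals
    [0, 0.25), [0.25, 0.5), [0.5, 0.75), [0.75, 1] (1:1:1:1 ratio)."""
    edges = [0.0, 0.25, 0.5, 0.75, 1.0 + 1e-9]
    idx = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        pool = np.flatnonzero((values >= lo) & (values < hi))
        idx.append(rng.choice(pool, size=n_per_bin, replace=False))
    return np.concatenate(idx)

# Stated hyper-parameters, expressed in (assumed) LightGBM naming.
lgbm_params = {
    "boosting_type": "gbdt",    # base classifier GBDT
    "n_estimators": 1000,       # total number of trees
    "learning_rate": 0.005,
    "min_child_samples": 100,   # minimum records per leaf node
    "max_depth": 8,
    "subsample": 0.8,           # fraction of data used per iteration
    "reg_alpha": 0.01,          # L1 regularization
    "max_bin": 64,              # maximum feature bins
}
```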
In this embodiment, because there are many input dimensions, training and prediction would be slow if all of them were fed directly into a neural network model. This embodiment therefore adopts a machine-learning gradient boosting model: historical meteorological satellite data, land use data and atmospheric environment data are re-projected and interpolated onto a uniform longitude-latitude grid and used as inputs, historical visible light cloud image data are used as outputs, and a point-to-point mapping is learned. This preliminarily achieves feature dimensionality reduction, and the result is passed as an intermediate variable to the next model as its input.
In the model training step of this embodiment, the final inversion output of the gradient boosting model is used as the input of the generative adversarial network, and the historical satellite visible light channel data are used as the output for training. The primary function of the generative adversarial network is image translation: given input-output image pairs as training data, it translates an input image from one domain to another. Because the visible light cloud image data have high precision, an ordinary generative adversarial network is not sufficient; the base model of this embodiment is pix2pixHD, whose multi-scale discriminator is better suited to generating high-resolution images and can produce fine details and realistic textures. The model is built and trained using the PyTorch framework. The main parameters of the generative adversarial network model are: learning rate 0.0002, optimizer Adam, optimizer hyper-parameter 0.5, learning-rate decay over 100 epochs, 2 discriminators, 64 filters in the first convolution layer, and feature-matching loss weight 10.
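As a sketch of one stated training detail, the "learning-rate decay over 100 epochs" can be modeled as the linear decay schedule commonly used in pix2pix-style training; this function and its exact schedule are assumptions for illustration, not the patent's recipe:

```python
def linear_decay_lr(base_lr, epoch, niter, niter_decay):
    """Keep base_lr for the first `niter` epochs, then decay linearly to
    zero over the following `niter_decay` epochs (pix2pix-style schedule)."""
    if epoch <= niter:
        return base_lr
    return base_lr * max(0.0, 1.0 - (epoch - niter) / niter_decay)
```

For example, with base_lr=0.0002 and niter=niter_decay=100, the rate is constant for 100 epochs and reaches zero at epoch 200.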
Different from existing convolutional neural networks, the main model of the invention uses a generative adversarial network, whose structure is divided into a generator model (G) and a discriminator network (D). Compared with a traditional convolutional neural network, it uses backpropagation without requiring a Markov chain, and no hidden variables need to be inferred during training, which speeds up learning. In theory, any differentiable function can be used to construct the generator and discriminator: when combined with a deep neural network, the parameter updates of the generator G do not come directly from data samples but from backpropagation through D, so any generator network can be trained within the adversarial framework, whereas other frameworks impose specific functional forms (for example, an output layer that must be Gaussian) or require generators in which every sample point is generated with non-zero weight. The output of the gradient boosting model is taken as the input of the generative adversarial network, historical visible light cloud image data as the corresponding output, and image-translation training is performed.
In this embodiment, to obtain a higher-resolution visible light cloud image, RCAN, a super-resolution reconstruction algorithm, is used at the end: the output of the generative adversarial network is used as the input of the super-resolution reconstruction model, and 500 m resolution visible light cloud image data are used as the target output for training. The principle is to construct a very deep residual channel attention network (RCAN) for high-precision image reconstruction. Compared with previous convolutional neural networks, RCAN can build a deeper network and obtain higher reconstruction performance: its residual-in-residual (RIR) structure enables long and short skip connections, which help pass abundant low-frequency information through so that the main network learns more effective information; and its channel attention (CA) mechanism adaptively re-weights features by taking into account the interdependencies between feature channels, further increasing the expressive power of the network. Compared with traditional upsampling algorithms, the model adopted in this embodiment can generate clearer images with more detail. The main parameters of the super-resolution reconstruction model in this embodiment are: batch size 16, 3 input channels, 3 output channels, 64 intermediate channels, 16 blocks, reconstruction magnification 4, optimizer Adam with learning rate 0.0002 and beta1/beta2 of 0.9 and 0.99, and 1,000,000 total iterations.
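The channel attention (CA) mechanism described above can be sketched in isolation as follows; this is a simplified, framework-free illustration (RCAN's actual CA block uses 1x1 convolutions inside the network), with hypothetical weight arguments:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w_down, w_up):
    """Channel attention sketch: squeeze each channel by global average
    pooling, pass through a two-layer bottleneck with ReLU, then rescale
    the channels with the resulting sigmoid weights.
    feat: (C, H, W); w_down: (C//r, C); w_up: (C, C//r)."""
    squeeze = feat.mean(axis=(1, 2))                             # (C,) pool
    excite = sigmoid(w_up @ np.maximum(w_down @ squeeze, 0.0))   # (C,) weights
    return feat * excite[:, None, None]
```

With all-zero bottleneck weights the sigmoid outputs 0.5 for every channel, so the block uniformly halves the features, a quick way to check the re-weighting path.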
To obtain a higher-resolution visible light cloud image, this embodiment performs a further adjustment through the image super-resolution network: the output of the generative adversarial network and the 500 m resolution visible light cloud image data are used as the model's input and target, and a higher-resolution visible light cloud image is output.
The inversion step in this embodiment specifically includes the following steps:
data processing: apply to the real-time observed satellite infrared data and atmospheric environment data the same processing steps as in training, projection-transforming the infrared data and statistically converting the atmospheric environment data; the land use data processed during training are reused;
inversion: feed the processed data into the trained gradient boosting model, feed its output into the generative adversarial network to produce data at 2 km resolution, then feed that into the image super-resolution network to convert it to 500 m resolution, completing the inversion of the visible light cloud image;
inversion result indexes: mean square error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used as evaluation indexes for the inverted visible light cloud image, where,
for a real image I and an inverted image K of given size m×n, the MSE is the expected squared difference between the estimate and the true value, $\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I(i,j)-K(i,j)\right]^2$; the smaller the value, the smaller the difference between the inverted image and the real image.
PSNR is the ratio of the peak signal energy to the average noise energy, usually expressed in decibels on a logarithmic scale, $\mathrm{PSNR} = 10\log_{10}\!\left(\frac{MAX_I^2}{\mathrm{MSE}}\right)$, where $MAX_I$ is the maximum possible pixel value of the image; a larger value indicates less noise in the inverted image.
The SSIM in this embodiment defines structural information, from the perspective of image composition, as attributes reflecting the structure of objects in the scene independent of brightness and contrast. Let the real image be X and the inverted image be Y. The luminance term is $l(x,y) = \frac{2\mu_x\mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}$, the contrast term is $c(x,y) = \frac{2\sigma_x\sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}$, and the structure term is $s(x,y) = \frac{\sigma_{xy} + c_3}{\sigma_x\sigma_y + c_3}$. SSIM(x,y) ranges from -1 to +1, with values closer to 1 indicating a structure more similar to the real image. Here $\mu_x$, $\mu_y$ are the means of images X and Y, $\sigma_x$, $\sigma_y$ their standard deviations, $\sigma_{xy}$ their covariance, and $c_1$, $c_2$ are constants that maintain stability, with $c_1 = (k_1 L)^2$, $c_2 = (k_2 L)^2$ and $c_3 = c_2/2$, where $k_1$ and $k_2$ take the values 0.01 and 0.03 respectively and L is the number of gray levels.
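The three evaluation indexes can be computed from the formulas above as follows. This is a sketch: the SSIM here is the standard combined single-window form (with $c_3 = c_2/2$ folded in), whereas common implementations average SSIM over local windows.

```python
import numpy as np

def mse(I, K):
    """Mean square error between real image I and inverted image K."""
    return float(np.mean((np.asarray(I, float) - np.asarray(K, float)) ** 2))

def psnr(I, K, max_i=1.0):
    """Peak signal-to-noise ratio in decibels; infinite for identical images."""
    m = mse(I, K)
    return float("inf") if m == 0 else 10.0 * np.log10(max_i ** 2 / m)

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM combining luminance, contrast, structure."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give MSE 0, infinite PSNR, and SSIM 1, which is a quick consistency check on the formulas.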
Compared with conventional schemes, the night-time satellite cloud image conversion method in this embodiment has the following advantages: it can convert infrared light to visible light 24 hours a day at any location worldwide, so that forecasters can clearly distinguish low-cloud positions at night, improving their ability to perform night-time typhoon positioning, fog monitoring, cold-air monitoring and similar tasks. In addition, the converted resolution can reach 500 m, compared with the 2 km resolution of the infrared imagery. The reconstruction model in this embodiment uses not only infrared cloud image data but also atmospheric physical data and global land use data, so the resulting model is more precise, objective and comprehensive, and a full-hemisphere image can be converted in one pass in a short time at 500 m resolution.
The embodiment also provides a visible light cloud image conversion system based on infrared light, which comprises a data acquisition module, a model training module and an inversion module, wherein:
the data acquisition module is used for collecting and converting real-time and historical land use data, satellite infrared data and atmospheric environment data, and for collecting and converting historical satellite visible light data;
the model training module is used for guiding the collected historical data into a gradient lifting model, and the gradient lifting model is reconstructed to obtain an inversion visible light cloud image model; the specific reconstruction step of the gradient lifting model in the present embodiment is as the model training step in the first embodiment.
The method specifically comprises the following steps: historical satellite infrared light channel data, historical atmospheric environment data and land utilization data are taken as inputs and historical satellite visible light channel data as outputs to train a gradient lifting model. The output of the gradient lifting model is then taken as the input of a generative adversarial network, with historical visible light cloud image data as the corresponding output, and image translation training is performed. To obtain a visible light cloud image of higher resolution, the output of the generative adversarial network is passed through an image super-resolution network, which takes it as input and produces a higher-resolution visible light cloud image as output; this training step yields a visible light cloud image model with a resolution of 500 meters.
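The three-stage chain described above can be sketched end to end as follows. All three stage functions are hypothetical stand-ins (the real stages are a trained gradient lifting model, a pix2pixHD network and a super-resolution network), so only the data flow, not the modelling, is representative:

```python
import numpy as np

def gradient_boost_stage(ir, atmos, land_use):
    # Stand-in for the trained gradient lifting model: per-pixel
    # regression from stacked predictors to a first-guess visible field.
    return np.stack([ir, atmos, land_use]).mean(axis=0)

def gan_translate_stage(first_guess):
    # Stand-in for pix2pixHD image-to-image translation.
    return np.clip(first_guess, 0.0, 1.0)

def super_resolution_stage(image, scale=4):
    # Stand-in for the super-resolution network: nearest-neighbour
    # upsampling, e.g. from 2 km to 500 m when scale=4.
    return image.repeat(scale, axis=0).repeat(scale, axis=1)

def invert_visible_cloud_image(ir, atmos, land_use):
    """Chain the three training stages at inference time."""
    first_guess = gradient_boost_stage(ir, atmos, land_use)
    translated = gan_translate_stage(first_guess)
    return super_resolution_stage(translated)
```

The design point the sketch illustrates is that each stage consumes the previous stage's output, so the gradient lifting model, the adversarial network and the super-resolution network can be trained and replaced independently.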
And an inversion module: and importing the acquired real-time data and the output result of the reconstruction model into an inverted visible light cloud image model, and inverting to obtain a visible light cloud image. For specific operation of the inversion module, please refer to the inversion step in the first embodiment, which is not described herein.
In view of the foregoing, this embodiment further provides a visible light cloud image conversion terminal based on infrared light, including but not limited to intelligent terminals such as mobile phone terminals and computer terminals. The conversion terminal comprises a memory and a processor, wherein the memory is used to store a computer program implementing the infrared-light-based visible light cloud image conversion method, and the processor is used to execute that computer program so as to realize the steps of the method.
Second embodiment
Referring to fig. 2 and 3, the present embodiment provides another method for converting visible light cloud images based on infrared light, comprising the same three steps as the first embodiment:
Data acquisition: collecting real-time data and historical data of land utilization data, satellite infrared light data and atmospheric environment data, and collecting historical data of satellite visible light;
model training: the historical data are imported into a gradient lifting model, and the inverted visible light cloud image model is finally obtained after the gradient lifting model is reconstructed;
inversion: and importing the acquired real-time data and the output result of the reconstruction model into an inverted visible light cloud image model, and inverting to obtain a visible light cloud image.
The data acquisition step and the model training step in the present embodiment are different from those of the first embodiment.
Specifically, in the data acquisition step, this embodiment likewise acquires real-time data and historical data of land utilization data, satellite infrared light data and atmospheric environment data, and acquires historical data of satellite visible light, but converts the resolution of the above historical data to 500 meters.
The method specifically comprises the following steps. Satellite infrared light channel data processing: historical Himawari-8 meteorological satellite channel data are collected, including infrared light channels B08, B09, B10, B11, B12, B13, B14, B15 and B16, and projection conversion is performed on these channels so that their resolution is 500 meters.
Atmospheric environment data processing: the cloud layer height on the historical spatial grid points is obtained through an atmospheric physical model and then upsampled, with a post-sampling resolution of 500 meters.
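A minimal sketch of this upsampling step, assuming a factor-of-4 nearest-neighbour resampling from a 2-km grid to 500 m (an operational pipeline would more likely use bilinear or spline interpolation):

```python
import numpy as np

def upsample_cloud_height(grid, src_res_m=2000, dst_res_m=500):
    """Nearest-neighbour upsampling of a coarse cloud-height grid."""
    factor = src_res_m // dst_res_m  # 4: from 2 km to 500 m
    return np.repeat(np.repeat(grid, factor, axis=0), factor, axis=1)
```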
Land utilization data processing: the land utilization data are downsampled by selecting the maximum-value point in each block, with a post-sampling resolution of 500 meters.
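The maximum-value selection described here can be sketched as a block-wise maximum (assuming the grid dimensions are divisible by the downsampling factor):

```python
import numpy as np

def downsample_max(grid, factor):
    """Downsample by taking the maximum-value point in each block."""
    h, w = grid.shape
    assert h % factor == 0 and w % factor == 0
    blocks = grid.reshape(h // factor, factor, w // factor, factor)
    return blocks.max(axis=(1, 3))
```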
Satellite visible light channel data processing: historical Himawari-8 meteorological satellite channel data are collected for visible light channel B03, and projection conversion is performed on this channel to a resolution of 500 meters.
In the model training step, the resampling-converted land utilization data, the historical visible light data, the statistically converted historical atmospheric environment data and the projection-converted historical infrared light data are imported into a gradient lifting model; a genetic algorithm is used to optimize the LightGBM model in the gradient lifting algorithm, with mean absolute error, root mean square error and R-squared (R²) as the basis for tuning, and the reconstructed inverted visible light cloud image model is output.
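A minimal genetic-algorithm sketch for the LightGBM tuning described above. Everything here is illustrative: fitness() is a toy surrogate for the real objective (training a LightGBM model and scoring MAE, RMSE and R² on validation data), and the tuned hyperparameters (num_leaves, learning_rate) are assumed examples, not the patent's actual search space.

```python
import random

random.seed(0)

def fitness(ind):
    leaves, lr = ind
    # Toy surrogate: pretend the optimum is near 63 leaves, lr 0.05.
    return -((leaves - 63) ** 2 / 1000.0 + (lr - 0.05) ** 2 * 100.0)

def mutate(ind):
    leaves, lr = ind
    return (max(2, leaves + random.randint(-8, 8)),
            min(0.3, max(0.005, lr + random.uniform(-0.02, 0.02))))

def crossover(a, b):
    # Single-point crossover: leaves from one parent, rate from the other.
    return (a[0], b[1])

def evolve(generations=40, pop_size=12):
    pop = [(random.randint(2, 255), random.uniform(0.005, 0.3))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]          # replication of the fittest
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children               # next generation
    return max(pop, key=fitness)
```

The replication, crossover and mutation operators mirror the "genetic replication and cross variation" ideas named in the text; elitism keeps the search monotonically non-worsening while mutation preserves the random, global character.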
In the prior art, because the LightGBM model in the gradient lifting algorithm is complex, model parameters often play a key role in model performance, and parameter tuning relies to a certain extent on manual experience and continual trial and error. For this reason, this embodiment optimizes LightGBM with a Genetic Algorithm (GA), which simulates the genetic mechanisms of nature and uses the ideas of genetic replication, crossover and mutation to evolve an optimal result suited to the specified environment; the search is random, parallel and global, and can approach the optimal value. The model is further tuned based on the Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and R-squared (R²). Compared with the original scheme, this scheme greatly improves the speed of image conversion, with a corresponding reduction in accuracy, and can convert infrared light data into a visible light cloud image model.
The inversion step in this embodiment uses the same inversion method as the first embodiment and specifically comprises the following steps:
Data processing: the satellite data and atmospheric environment data observed in real time undergo the same processing steps as during training, and the land utilization data are processed as during training;
Inversion: the historical data are fed into the trained gradient lifting model, which outputs an inverted visible light cloud image model with a resolution of 500 meters; the processed real-time data are then merged into the inverted visible light cloud image model to perform the inversion, completing the visible light cloud image.
In this embodiment, mean square error, peak signal-to-noise ratio and structural similarity are also used as indexes for inverting the visible light cloud image, improving its accuracy.
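The mean square error and peak signal-to-noise ratio indexes can be sketched as follows (assuming 8-bit imagery, so a peak value of 255):

```python
import numpy as np

def mse(x, y):
    """Mean square error between two images."""
    return float(np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```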
Aiming at the conversion method provided by the second embodiment, the embodiment also provides a visible light cloud image conversion system based on infrared light, which comprises a data acquisition module, a model training module and an inversion module, wherein:
the data acquisition module is used for acquiring and converting real-time data and historical data of land utilization data, satellite infrared light data and atmospheric environment data, and for acquiring and converting historical data of satellite visible light;
the model training module is used for importing the collected historical data into a gradient lifting model, which is reconstructed to obtain an inversion visible light cloud image model; the specific reconstruction steps of the gradient lifting model in this embodiment are the same as the model training step of the second embodiment.
The method specifically comprises the following steps: a genetic algorithm is used to optimize the LightGBM model in the gradient lifting algorithm, evolving an optimal result suited to the specified environment; the search is random, parallel and global, and can approach the optimal value. The model is then further tuned on the basis of mean absolute error, root mean square error and R-squared (R²), and the final tuned model is taken as the reconstructed inversion visible light cloud image model.
And an inversion module: and importing the acquired real-time data and the output result of the reconstruction model into an inverted visible light cloud image model, and inverting to obtain a visible light cloud image. For specific operation of the inversion module, please refer to the inversion step in the second embodiment, which is not described herein.
In summary, the invention utilizes artificial intelligence to realize the conversion from infrared light cloud pictures to visible light cloud pictures, and has the following advantages compared with the traditional method:
(1) The use of artificial intelligence significantly reduces the time spent in operation, which is especially valuable in real-time severe weather risk monitoring, where every minute and second counts;
(2) In the process of model reconstruction and training, the invention takes land utilization and atmospheric environment parameters as inputs, which improves the accuracy of the converted data and allows the model's predictions to be applied to geographic locations with different terrain, surfaces and land uses.
(3) The invention can invert to a visible light cloud image at any geographic position and at any time, with an image resolution of 500 meters; by learning from a large amount of experience, the artificial intelligence greatly improves the accuracy of the visible light cloud image.
This description describes examples of embodiments of the invention and is not intended to illustrate and describe all possible forms of the invention. It should be understood that the embodiments in the specification may be embodied in many alternate forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Specific structural and functional details disclosed are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention. Those skilled in the art will appreciate that a plurality of features illustrated and described with reference to any one drawing may be combined with features illustrated in one or more other drawings to form embodiments not explicitly illustrated or described. The illustrated combination of features provides representative embodiments for typical applications. However, various combinations and modifications of the features consistent with the teachings of the present invention may be used in particular applications or implementations as desired.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A visible light cloud image conversion method based on infrared light, the method comprising: data acquisition: collecting and converting real-time data and historical data of land utilization data, satellite infrared light data and atmospheric environment data, and collecting and resampling-converting historical data of satellite visible light, wherein the resolution of the converted historical data of the satellite visible light is 500 meters;
model training: the historical data is imported into a gradient lifting model, and the gradient lifting model is subjected to reconstruction training to obtain an inversion visible light cloud image model;
inversion: importing the acquired real-time data and the output result of the reconstruction model into the inverted visible light cloud image model, and inverting to obtain a visible light cloud image;
the specific steps of reconstructing the gradient lifting model comprise: taking the output result of the gradient lifting model as the input of an antagonistic neural model, taking the historical data of the satellite visible light as the output, training to obtain an antagonistic neural network model, and constructing a pix2pixHD model by using a pyrach frame by using the antagonistic neural network model; and generating output of the antagonistic nerve model and historical data of the satellite visible light with 500 m resolution as input, and training to obtain a super-resolution reconstruction model.
2. The infrared-based visible light cloud image conversion method of claim 1, wherein a base model of the gradient lifting model adopts a tree model, and a base classifier adopts a GBDT algorithm.
3. The method for converting a visible light cloud image based on infrared light according to claim 1, wherein in the step of reconstructing the gradient lifting model, a stratified random sampling method is adopted to sample the converted satellite visible light data from the four intervals 0-0.25, 0.25-0.5, 0.5-0.75 and 0.75-1 at a 1:1:1:1 ratio.
4. The infrared-based visible light cloud image conversion method according to claim 1, wherein the super-resolution reconstruction model adopts an RCAN reconstruction algorithm, and an optimizer of the super-resolution reconstruction model is Adam.
5. The infrared-based visible light cloud image conversion method of claim 1, wherein the step of reconstructing the gradient lifting model further comprises: optimizing the LightGBM model in the gradient lifting algorithm with a genetic algorithm, and tuning the model on the basis of mean absolute error, root mean square error and R-squared (R²).
6. The method of claim 1, wherein in the inverting step, a mean square error, a peak signal-to-noise ratio, and a structural similarity are used as indexes for inverting the visible cloud.
7. The infrared-based visible light cloud image conversion method of claim 1, wherein the resolution of the historical data of the satellite infrared light data, the atmospheric environment data and the satellite visible light channel data is 500-2000 meters.
8. An infrared light-based visible light cloud image conversion system, characterized in that the system is obtained using the visible light cloud image conversion method according to any one of claims 1 to 7, the visible light cloud image conversion system comprising:
and a data acquisition module: collecting and converting real-time data and historical data of land utilization data, satellite infrared light data and atmospheric environment data, collecting and resampling and converting historical data of satellite visible light, wherein the resolution of the converted historical data of the satellite visible light is 500 meters;
model training module: the historical data is imported into a gradient lifting model, and the gradient lifting model is reconstructed to obtain an inversion visible light cloud image model;
and an inversion module: and importing the acquired real-time data and the output result of the reconstruction model into the inverted visible light cloud image model, and inverting to obtain a visible light cloud image.
9. A visible light cloud image conversion terminal based on infrared light, comprising a memory and a processor, characterized in that the memory is used for storing a computer program implementing the infrared-light-based visible light cloud image conversion method; and the processor is used for executing the computer program to implement the method steps of the infrared-light-based visible light cloud image conversion method according to any of claims 1-7.
CN202011565575.XA 2020-12-25 2020-12-25 Visible light cloud image conversion method and system based on infrared light and terminal thereof Active CN112669201B (en)

Publications (2)

Publication Number Publication Date
CN112669201A CN112669201A (en) 2021-04-16
CN112669201B (en) 2023-09-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant