CN117115621A - Satellite cloud image prediction method based on improved U-Net network - Google Patents


Publication number: CN117115621A
Application number: CN202311383722.5A
Authority: CN (China)
Prior art keywords: data, image, cloud, model, satellite
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 殷晓斌, 陈奇, 郑沛楠, 李炎, 徐青
Current assignee: Ocean University of China (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Ocean University of China
Priority date: 2023-10-24 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2023-10-24
Application filed by Ocean University of China


Classifications

    • G06V 10/82 — Image or video recognition using neural networks
    • G06N 3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Neural network learning methods
    • G06V 10/24 — Image preprocessing: aligning, centring, orientation detection or correction
    • G06V 10/40 — Extraction of image or video features
    • G06V 20/13 — Terrestrial scenes: satellite images
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention belongs to the field of deep learning and image processing and relates to a satellite cloud image prediction method based on an improved U-Net network, comprising the following steps: acquiring historical satellite cloud image sequence data; preprocessing the data by geometric correction, radiometric calibration and normalization; improving the U-Net network to construct a cloud image prediction model, ARRU-Net, which comprises an attention mechanism, a residual module and a recurrent convolution module; training the prediction model with the preprocessed data set; and generating the final predicted satellite cloud images with the trained model. The prediction model makes full use of the output feature maps, effectively improving its feature extraction capability and its ability to capture the spatial and channel-wise information of the image; it further raises feature-use efficiency and improves the generalization performance of the model, so that the predicted satellite cloud images are clearer and closer to the real images, and the method achieves higher accuracy in long-range cloud image prediction.

Description

Satellite cloud image prediction method based on improved U-Net network
Technical Field
The invention belongs to the field of deep learning and image processing, and particularly relates to a satellite cloud image prediction method based on an improved U-Net network.
Background
Satellite cloud images are vital meteorological data and play a large auxiliary role in weather analysis and forecasting. Satellite cloud image prediction is a spatio-temporal sequence prediction task: it forecasts, out to a certain lead time, the position, shape and evolution of the infrared-channel brightness-temperature field of cloud clusters.
At present there are three main approaches to satellite cloud image prediction: block matching, optical flow, and artificial intelligence. Block matching divides the cloud image into blocks and processes only the features inside each block, so the influence of the global spatial information is ignored; moreover, it assumes that all pixels in a block share the same displacement, making it a linear model, while the dynamic evolution of clouds is nonlinear, so its prediction accuracy is low. Optical flow estimates motion speed from the changes of pixel values between images in order to predict object positions, but the grey level of an object changes over time, which corrupts the optical-flow field and hence the cloud image prediction. With the continuous iteration of deep learning, network structures with different functions have been proposed in succession: the convolutional long short-term memory network (ConvLSTM), the combination of a generative adversarial network (GAN) with an LSTM, the trajectory gated recurrent unit (TrajGRU), and so on, can also be used for cloud image prediction, but both the traditional methods and the existing artificial-intelligence methods suffer from low resolution and image blurring during cloud image extrapolation.
Disclosure of Invention
The invention provides a satellite cloud image prediction method based on an improved U-Net network, aiming at the low prediction accuracy, inaccurate results, low resolution and blurred images of existing cloud image prediction methods. The L1-level data of the FY-4A satellite multichannel scanning imaging radiometer are used as the data source, and an improved U-Net model, ARRU-Net, is used to predict the movement and development trend of cloud clusters, effectively improving cloud image prediction accuracy.
The technical scheme of the invention is as follows:
a satellite cloud image prediction method based on an improved U-Net network comprises the following steps:
s1, acquiring historical satellite cloud image sequence data;
s2, performing geometric correction, radiometric calibration and data normalization preprocessing on the data;
s3, improving the U-Net network and constructing a cloud image prediction model, ARRU-Net, which comprises an attention mechanism, a residual module and a recurrent convolution module;
s4, training the cloud image prediction model with the preprocessed data set;
s5, generating the final predicted satellite cloud images with the trained cloud image prediction model.
Further, in step S1, the L1-level historical satellite cloud image sequence data of the FY-4A multichannel scanning imaging radiometer are obtained, and the water-vapour and long-wave infrared band data of the radiometer are extracted.
Further, the data preprocessing in the step S2 includes the following steps:
performing radiometric calibration on the extracted data: the digital quantization value at each position in the image data layer is used as an index, and the brightness temperature corresponding to that index is looked up;
performing geometric correction on the extracted data: each pixel of the original satellite image is mapped into the projected image through the equal longitude-latitude projection transformation, x = (lon − lon_min)/res, y = (lat_max − lat)/res, where x represents the abscissa and y the ordinate in the projected image;
the numbers of rows and columns of the projected image are column = (lon_max − lon_min)/res and row = (lat_max − lat_min)/res, where column is the number of columns of the projected image, row the number of rows, lon_max and lon_min the maximum and minimum of the longitude range, lat_max and lat_min the maximum and minimum of the latitude range, and res the spatial resolution;
and normalizing the data with the minimum-maximum normalization method.
Further, the extracted data are normalized: the data range is 124–325, with 124 the minimum and 325 the maximum of the data, and the data are mapped into the interval [0,1] by the formula x* = (x − min)/(max − min), where x is the original value, x* the normalized value, min the minimum of the sample data and max the maximum of the sample data.
Further, the ARRU-Net model comprises an attention mechanism module, a residual module and a recurrent convolution module; wherein:
the attention mechanism module improves the feature extraction capability of the ARRU-Net model and its ability to capture the spatial and channel-wise information of the image; the 2 input feature maps are linearly transformed by 1×1 convolutions and added, and a ReLU activation generates an intermediate feature map; the intermediate feature map then undergoes another 1×1 convolution, a sigmoid function and a resampling operation to obtain the attention coefficient α; multiplying x_l by the attention coefficient α gives the output feature map;
the recurrent convolution module applies convolution operations at every time step to extract the spatial features in the cloud image data, capturing information on different cloud types, including shape, texture and structure, and providing richer input features for the subsequent prediction task; in each recurrent convolution block, the Conv+BN+ReLU operation is repeated t times, t being the total time-step parameter;
and the residual module adds, through a residual connection, the input cloud image directly to the output, supplementing feature information lost during convolution and allowing the convolution layers to extract more comprehensive cloud features at the same resolution.
Further, the model training in the step S4 includes the following steps:
75% of the data set is used as the training set and 25% as the validation set for verifying the training effect of the model, and the model with the minimum validation error is saved;
the model is trained with the mean absolute error loss, L = (1/n) Σ_{i=1}^{n} |f(x_i) − y_i|, where f(x_i) is the predicted value of the i-th sample, y_i the corresponding true value, and n the number of samples.
Further, in step S5, several frames of the sequence cloud images to be predicted are input into the trained cloud image prediction model to obtain the prediction result.
The invention has the beneficial effects that:
(1) The satellite cloud image prediction method based on the improved U-Net network has higher accuracy. The method is based on the U-Net network, introduces an attention mechanism, and adopts residual connections and a recurrent convolution module. The convolution layers of the original U-Net are replaced with recurrent convolution modules, so that multi-scale features of different receptive fields are learned and the output feature maps are fully utilized; the attention mechanism effectively improves the feature extraction capability of the model and its ability to capture the spatial and channel-wise information of the image; adding the residual module avoids the degradation problem of deep networks, further raises feature-use efficiency, and improves the generalization performance of the model, so that the predicted satellite cloud images are clearer and closer to the real images, and the method achieves higher accuracy in long-range cloud image prediction.
(2) The invention uses the L1-level multichannel data of the FY-4A satellite multichannel scanning imaging radiometer as the research data source, proposes a new prediction model, ARRU-Net, carries out cloud image prediction research over the eastern coast of China and the Northwest Pacific region, and accurately predicts the cloud distribution and the channel brightness-temperature data of the cloud images at future times, providing a reliable basis for weather forecasting.
Drawings
FIG. 1 is a flow chart of a satellite cloud image prediction method based on an improved U-Net network;
FIG. 2 is a diagram of a cloud image prediction network based on an improved U-Net model;
FIG. 3 is an attention mechanism module of the ARRU-Net model;
FIG. 4 is a residual block of the ARRU-Net model;
FIG. 5 is a circular convolution module of the ARRU-Net model.
Detailed Description
For further understanding of the present invention, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1, the embodiment provides a satellite cloud image prediction method based on an improved U-Net network, which includes the following steps:
s1, acquiring historical satellite cloud image sequence data; specifically, acquiring wind cloud number four (FY-4A) multi-channel scanning imaging radiometer L1 level historical satellite cloud image sequence data; and then extracting data by using a netCDF4 library in Python3 to obtain the data of water vapor and long-wave infrared wave bands in the FY-4A multichannel scanning imaging radiometer.
S2, performing geometric correction, radiometric calibration and data normalization on the extracted water-vapour and long-wave infrared band data of the FY-4A multichannel scanning imaging radiometer;
the data preprocessing comprises the following steps:
carrying out radiometric calibration on the data: the digital quantization value (DN value) at each position in the image data layer is used as an index, and the corresponding brightness temperature is looked up in the FY-4A calibration table;
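A minimal NumPy sketch of this lookup step (the function name and table length here are illustrative assumptions; in practice the per-channel calibration table is read from the FY-4A L1 file):

```python
import numpy as np

def calibrate(dn, cal_table):
    """Radiometric calibration: each pixel's digital quantization value (DN)
    indexes into the channel's calibration lookup table, yielding the
    brightness temperature in kelvin. Out-of-range DNs become NaN."""
    dn = np.asarray(dn, dtype=np.int64)
    bt = np.full(dn.shape, np.nan)
    valid = (dn >= 0) & (dn < cal_table.size)
    bt[valid] = cal_table[dn[valid]]
    return bt
```

Fill values outside the table (e.g. 65535) simply come out as NaN and can be masked downstream.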
performing geometric correction on the data: each pixel of the original satellite image is mapped into the projected image through the equal longitude-latitude projection transformation. The numbers of rows and columns of the projected image are

column = (lon_max − lon_min)/res,  row = (lat_max − lat_min)/res,

where column is the number of columns of the projected image, row the number of rows, lon_max and lon_min the maximum and minimum of the longitude range, lat_max and lat_min the maximum and minimum of the latitude range, and res the spatial resolution; the spatial resolution of the data used in this example is 0.4°.
Each pixel of the original satellite image is mapped into the projected image by the equal longitude-latitude projection transformation

x = (lon − lon_min)/res,  y = (lat_max − lat)/res,

where x is the abscissa and y the ordinate in the projected image, lon a longitude value and lat a latitude value.
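The two transformations above can be sketched directly in NumPy. The convention that row 0 corresponds to the northern edge (y measured down from lat_max) and the rounding of the grid size are assumptions about the image orientation, not stated in the text:

```python
import numpy as np

def grid_shape(lon_min, lon_max, lat_min, lat_max, res):
    """Rows and columns of the projected image:
    column = (lon_max - lon_min)/res, row = (lat_max - lat_min)/res."""
    column = int(round((lon_max - lon_min) / res))
    row = int(round((lat_max - lat_min) / res))
    return row, column

def latlon_to_grid(lon, lat, lon_min, lat_max, res):
    """Equal longitude-latitude (equirectangular) mapping of a geographic
    coordinate to image coordinates: x = (lon - lon_min)/res (abscissa),
    y = (lat_max - lat)/res (ordinate, row 0 at the northern edge)."""
    x = (np.asarray(lon) - lon_min) / res
    y = (lat_max - np.asarray(lat)) / res
    return x, y
```

For the study area of embodiment 2 (118°E–128°E, 22°N–32°N at 0.4° resolution), these formulas give a 25×25 grid.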
Normalizing the data with the minimum-maximum normalization method: the research data are the water-vapour and long-wave infrared band data of the FY-4A satellite multichannel scanning imaging radiometer; the data range is 124–325, with 124 the minimum and 325 the maximum of the data, and the data are mapped into the interval [0,1] by

x* = (x − min)/(max − min),

where x is the original value, x* the normalized value, min the minimum of the sample data and max the maximum of the sample data.
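A one-line sketch of the min-max mapping with the stated bounds:

```python
import numpy as np

def minmax_normalize(x, lo=124.0, hi=325.0):
    """Min-max normalization: x* = (x - min) / (max - min), mapping the
    brightness-temperature range [124, 325] onto [0, 1]."""
    return (np.asarray(x, dtype=np.float64) - lo) / (hi - lo)
```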
S3, constructing the cloud image prediction model ARRU-Net based on the U-Net network, comprising an attention mechanism, a residual module and a recurrent convolution module.
As shown in fig. 2, the ARRU-Net network is an encoder-decoder architecture.
On the left is the encoder, with four sub-modules, each a downsampling module consisting of a recurrent residual convolution plus a 2×2 max-pooling layer. A 256×256 image is input first; after the first recurrent residual convolution and max pooling, the feature-map resolution is reduced to 128×128, which reduces computation and improves detection efficiency. The second recurrent residual convolution and max pooling reduce the image dimension to 64×64, the third reduces it to 32×32, and the fourth to 16×16.
On the right is the decoder, which uses the same number of levels of convolution operations as the downsampling stage to extract features. The 16×16 feature map is upsampled to 32×32 and channel-concatenated with the earlier 32×32 feature map, so that the features extracted by the downsampling layers are passed directly to the upsampling layers. In this way different levels of feature information are better exploited, improving the performance of the model; between each encoding and decoding module, a skip connection with an Attention Gate (AG) is used. The concatenated feature map is then convolved and upsampled to obtain a 64×64 feature map; this is concatenated with the earlier 64×64 feature map, convolved, and upsampled again. Four upsampling steps yield a 256×256 prediction of the same size as the input image. At the encoding end, the attention module skips over the pooling layer and the following convolution layer and is cascaded, through the skip connection, to the corresponding decoding end, where complementary information is fused and a 1×1 convolution applies a linear transformation.
The attention mechanism module is shown in fig. 3 and operates as follows: first, the 2 input feature maps (g and x_l, respectively) are linearly transformed by 1×1 convolutions to obtain the corresponding feature maps A and B; the two feature maps are then added, and a ReLU activation generates an intermediate feature map; the intermediate feature map undergoes another 1×1 convolution, a sigmoid function and a resampling operation to obtain the attention coefficient α; finally, multiplying x_l by the attention coefficient α gives the output feature map.
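The gating arithmetic can be sketched in NumPy, treating each 1×1 convolution as a per-pixel linear map. The resampling step is omitted by assuming g has already been brought to the size of x_l, and all weight shapes are illustrative:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W); w: (C_out, C_in). A 1x1 convolution is just a linear
    # transformation applied independently at every pixel.
    return np.einsum('oc,chw->ohw', w, x)

def attention_gate(g, x_l, w_g, w_x, psi):
    """Additive attention gate: the gating signal g and the skip feature
    map x_l are linearly transformed and added, ReLU produces the
    intermediate map, and a 1x1 convolution plus sigmoid give the
    attention coefficient alpha, which rescales x_l."""
    a = conv1x1(g, w_g) + conv1x1(x_l, w_x)          # add the two transformed maps
    a = np.maximum(a, 0.0)                           # ReLU -> intermediate feature map
    alpha = 1.0 / (1.0 + np.exp(-conv1x1(a, psi)))   # 1x1 conv + sigmoid, shape (1, H, W)
    return alpha * x_l                               # broadcast gate over channels
```

Because α lies in (0, 1), the gate can only attenuate skip-connection features, letting the decoder focus on salient regions.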
The residual connection is shown in fig. 4. Using the output of the residual structure solves the problem of performance gradually degrading as the number of network layers increases, and accelerates the convergence of the network. The residual unit introduces a skip connection that directly adds the input to the output, supplementing feature information that may be lost during convolution. This skip-connection design transmits gradient signals effectively, alleviates the vanishing-gradient problem, and benefits the optimization and training of the network.
The recurrent convolution module is shown in fig. 5, in which Conv+BN+ReLU is one layer of the network: Conv is a convolution layer, BN is batch normalization, and ReLU is the activation function. Recurrently convolving and combining the feature maps of the layer helps extract richer feature information. However, the recurrent convolution output may carry redundant feature information and network parameters; this is handled by feeding the features into a 1×1 convolution layer, which compresses them, reduces the feature-map dimension and removes redundant information, improving the efficiency and performance of the model. With the total time-step parameter set to t, the Conv+BN+ReLU operation is repeated t times in each recurrent convolution block. Batch normalization increases the stability of the neural network and speeds up its convergence during upsampling.
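A toy NumPy sketch of one recurrent residual convolution unit, shrunk to a single channel. Batch normalization is reduced to a per-map standardization, and the exact recurrence wiring (re-injecting the block input at every step, in the style of R2U-Net-like designs) is an assumption about the module's internals:

```python
import numpy as np

def conv3x3(x, k):
    # 'same' 3x3 convolution (cross-correlation) via zero padding.
    # x: (H, W) single-channel map, k: (3, 3) kernel.
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * xp[i:i + H, j:j + W]
    return out

def recurrent_residual_block(x, k, t=2):
    """Repeat Conv+BN+ReLU t times, re-injecting the block input x at each
    step, then add x back through the residual connection."""
    h = np.zeros_like(x)
    for _ in range(t):
        h = conv3x3(x + h, k)                     # recurrent convolution step
        h = (h - h.mean()) / (h.std() + 1e-5)     # stand-in for batch norm
        h = np.maximum(h, 0.0)                    # ReLU
    return x + h                                  # residual connection
```

With a zero kernel the inner branch contributes nothing and the residual connection passes the input through unchanged, which makes the skip path easy to verify.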
S4, training the cloud image prediction model with the sequence cloud image data set processed in step S2; training proceeds as follows:
1) The visible-light and short-wave infrared bands of the FY-4A satellite multichannel scanning imaging radiometer are not used; channels 9 to 14 of FY-4A, i.e. 6 bands, serve as the input channels, and data at 5 consecutive times are taken as the input for predicting 5 future satellite cloud images, the cloud images being 30 minutes apart;
2) In model training, 75% of the data set is used as the training set and 25% as the validation set for verifying the iterated parameters, and the model with the minimum validation error is saved;
3) The model is trained with the mean absolute error loss,

L = (1/n) Σ_{i=1}^{n} |f(x_i) − y_i|,

where f(x_i) and y_i respectively denote the predicted value and the corresponding true value of the i-th sample, and n is the number of samples.
S5, inputting the 5 frames of the sequence cloud images to be predicted into the trained sequence cloud image prediction model ARRU-Net to obtain the prediction result.
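The loss used in step 3) is the ordinary mean absolute error; a minimal sketch:

```python
import numpy as np

def mae_loss(pred, true):
    """Mean absolute error: L = (1/n) * sum_i |f(x_i) - y_i|."""
    pred = np.asarray(pred, dtype=np.float64)
    true = np.asarray(true, dtype=np.float64)
    return float(np.mean(np.abs(pred - true)))
```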
Example 2
The model training areas in this embodiment are the eastern coastal areas of China and the Northwest Pacific region, with longitude range 118°E–128°E and latitude range 22°N–32°N.
This example selects 5000 time sequences from 29 June 2021 to 16 September 2021 and from 29 June 2022 to 16 September 2022, of which 4000 are the training set, 500 the validation set, and the remaining 500 the test set; to ensure the validity of the experimental results, the training, validation and test data are mutually independent.
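The pairing of 5 input frames with the 5 following target frames, and the split of the resulting samples into training/validation/test sets, can be sketched as a sliding window over the time axis (the helper names are illustrative, and the chronological split is an assumption):

```python
import numpy as np

def make_windows(frames, n_in=5, n_out=5):
    """Slice a (T, C, H, W) cloud-image sequence into (input, target)
    pairs: n_in consecutive frames in, the next n_out frames out."""
    X, Y = [], []
    for s in range(frames.shape[0] - n_in - n_out + 1):
        X.append(frames[s:s + n_in])
        Y.append(frames[s + n_in:s + n_in + n_out])
    return np.stack(X), np.stack(Y)

def split(samples, n_train=4000, n_val=500):
    """Split the samples 4000/500/500 into mutually independent
    training, validation and test sets, as in this example."""
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```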
In this embodiment, the peak signal-to-noise ratio (PSNR), root mean square error (RMSE) and coefficient of determination (R2_score) over all test-set samples are used to evaluate the prediction accuracy of the model. They are computed as

RMSE = sqrt( (1/(MN)) Σ_i Σ_j (X(i,j) − X̂(i,j))² ),
PSNR = 10 · log10( MAX² / MSE ),
R² = 1 − Σ_i Σ_j (X(i,j) − X̂(i,j))² / Σ_i Σ_j (X(i,j) − X̄)²,

where M is the image height, N the image width, X(i,j) the pixel value at row i and column j of the observed image, X̂(i,j) the pixel value at row i and column j of the model-predicted image, X̄ the mean of the observed image, MSE the mean squared error, and MAX the peak value of the image data.
To evaluate the method against other models more objectively, this embodiment compares the accuracy of the constructed ARRU-Net model in satellite cloud image prediction with 3 models: U-Net, U-Net + attention mechanism (U-Net+Attention), and U-Net + residual connection + recurrent convolution (U-Net+Block+Recurrent). The comparison results are shown in Table 1 below.
Table 1 prediction accuracy comparison results obtained with four models
From the comparison data in Table 1 it can be seen that the prediction effect of ARRU-Net is superior to the other models. The average PSNR of the images ARRU-Net predicts for the five future times exceeds 33 dB, the average RMSE over the five future times is below 7 K, and the average R² over the five future times is above 0.88. Predicting the next five times on the test set, the ARRU-Net model lowers the average RMSE by 1.3 K and raises the PSNR by 1.95 dB compared with U-Net. This demonstrates that the method of the invention predicts images that are clearer, closer to the label images, and of higher long-range prediction accuracy.
In summary, the satellite cloud image prediction method based on the improved U-Net network has higher accuracy, predicts images that are clearer and closer to the real images, achieves higher accuracy for long-range prediction, and provides reliable data support for weather forecasting, helping to avoid the huge losses to public safety and socio-economic construction caused by rapidly changing cloud clusters.
The foregoing description is only a preferred embodiment of the present invention and is not intended to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of their technical features. Any modification, equivalent replacement, variation, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (7)

1. The satellite cloud image prediction method based on the improved U-Net network is characterized by comprising the following steps of:
s1, acquiring historical satellite cloud image sequence data;
s2, performing geometric correction, radiometric calibration and data normalization preprocessing on the data;
s3, improving the U-Net network and constructing a cloud image prediction model, ARRU-Net, which comprises an attention mechanism, a residual module and a recurrent convolution module;
s4, training the cloud image prediction model with the preprocessed data set;
s5, generating the final predicted satellite cloud images with the trained cloud image prediction model.
2. The satellite cloud image prediction method based on the improved U-Net network according to claim 1, wherein in step S1, the L1-level historical satellite cloud image sequence data of the FY-4A multichannel scanning imaging radiometer are obtained, and the water-vapour and long-wave infrared band data of the FY-4A satellite multichannel scanning imaging radiometer are extracted.
3. The improved U-Net network based satellite cloud image prediction method according to claim 1, wherein the data preprocessing in step S2 comprises the steps of:
performing radiometric calibration on the extracted data: the digital quantization value at each position in the image data layer is used as an index, and the brightness temperature corresponding to that index is looked up;
performing geometric correction on the extracted data: each pixel of the original satellite image is mapped into the projected image through the equal longitude-latitude projection transformation, x = (lon − lon_min)/res, y = (lat_max − lat)/res, where x represents the abscissa and y the ordinate in the projected image;
the numbers of rows and columns of the projected image are column = (lon_max − lon_min)/res and row = (lat_max − lat_min)/res, where column is the number of columns of the projected image, row the number of rows, lon_max and lon_min the maximum and minimum of the longitude range, lat_max and lat_min the maximum and minimum of the latitude range, and res the spatial resolution; and the data are normalized with the minimum-maximum normalization method.
4. The satellite cloud image prediction method based on the improved U-Net network according to claim 3, wherein the extracted data are normalized: the data range is 124–325, with 124 the minimum and 325 the maximum of the data, and the data are mapped into the interval [0,1] by the formula x* = (x − min)/(max − min), where x is the original value, x* the normalized value, min the minimum of the sample data and max the maximum of the sample data.
5. The satellite cloud image prediction method based on the improved U-Net network according to claim 1, wherein the ARRU-Net model comprises an attention mechanism module, a residual module and a recurrent convolution module; wherein: the attention mechanism module improves the feature extraction capability of the ARRU-Net model and its ability to capture the spatial and channel-wise information of the image; the 2 input feature maps are linearly transformed by 1×1 convolutions and added, and a ReLU activation generates an intermediate feature map; the intermediate feature map then undergoes another 1×1 convolution, a sigmoid function and a resampling operation to obtain the attention coefficient α; multiplying x_l by the attention coefficient α gives the output feature map;
the cyclic convolution module applies convolution operations at each time step to extract spatial features in the cloud image data, capturing information of different cloud types, including their shapes, textures, and structures, and providing richer input features for the subsequent prediction task; in each cyclic convolution block, the Conv+BN+ReLU operation is repeated t times, where t is a total time-step parameter;
and the residual module adds, through a residual connection, the input cloud image directly to the output cloud image, supplementing feature information lost during convolution and allowing more comprehensive cloud features of the same resolution to be extracted in the convolution layers.
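The attention-gate computation of claim 5 can be illustrated in plain NumPy (a minimal sketch under stated assumptions, not the patented implementation: the 1×1 convolutions are written as per-pixel channel mixes, the resampling step is omitted, and the weight matrices Wg, Wx, and psi are hypothetical placeholders for learned parameters):

```python
import numpy as np

def attention_gate(g, x_l, Wg, Wx, psi):
    """Additive attention gate following the module description:
    1x1 convs on the two inputs, add, ReLU, another 1x1 conv,
    Sigmoid -> attention coefficient alpha, then alpha * x_l.

    g, x_l: feature maps of shape (C, H, W)
    Wg, Wx: (F, C) weight matrices (the 1x1 convolutions)
    psi:    (1, F) weight matrix producing the single-channel map
    """
    # A 1x1 convolution on a (C, H, W) tensor is a matrix product
    # over the channel axis at every pixel.
    q = np.einsum('fc,chw->fhw', Wg, g) + np.einsum('fc,chw->fhw', Wx, x_l)
    q = np.maximum(q, 0.0)                                    # ReLU
    a = np.einsum('of,fhw->ohw', psi, q)                      # 1x1 conv
    alpha = 1.0 / (1.0 + np.exp(-a))                          # Sigmoid
    return alpha * x_l                                        # gated output
```

With zero weights, alpha is uniformly Sigmoid(0) = 0.5, so the output is half the skip features, which makes the gating effect easy to verify.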
6. The improved U-Net network based satellite cloud image prediction method according to claim 1, wherein the model training in step S4 comprises the steps of:
75% of the data set is used as the training set for training, and 25% of the data set is used as the validation set to verify the training effect of the model; the model with the minimum validation-set error is saved;
the model is trained with a mean absolute error loss function, whose formula is:

MAE = (1/n) Σ_{i=1}^{n} |f(x_i) − y_i|

wherein f(x_i) represents the predicted value of the i-th sample, y_i represents the corresponding true value of the i-th sample, and n is the number of samples.
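The 75%/25% split and the mean-absolute-error loss of claim 6 can be sketched as follows (illustrative only; the function names and the chronological split are assumptions, since the claim does not state how the split is drawn):

```python
import numpy as np

def mae_loss(pred, true):
    """Mean absolute error: (1/n) * sum_i |f(x_i) - y_i|."""
    pred = np.asarray(pred, dtype=np.float64)
    true = np.asarray(true, dtype=np.float64)
    return np.abs(pred - true).mean()

def split_dataset(samples, train_frac=0.75):
    """Split an ordered sample list into train (75%) and validation (25%)."""
    k = int(len(samples) * train_frac)
    return samples[:k], samples[k:]
```

During training, mae_loss would be evaluated on the validation set after each epoch and the checkpoint with the smallest value kept.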
7. The method for predicting satellite cloud images based on the improved U-Net network according to claim 1, wherein in step S5, a plurality of frame sequence cloud images to be predicted are input into a trained cloud image prediction model to obtain a prediction result.
CN202311383722.5A 2023-10-24 2023-10-24 Satellite cloud image prediction method based on improved U-Net network Pending CN117115621A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311383722.5A CN117115621A (en) 2023-10-24 2023-10-24 Satellite cloud image prediction method based on improved U-Net network

Publications (1)

Publication Number Publication Date
CN117115621A true CN117115621A (en) 2023-11-24

Family

ID=88795246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311383722.5A Pending CN117115621A (en) 2023-10-24 2023-10-24 Satellite cloud image prediction method based on improved U-Net network

Country Status (1)

Country Link
CN (1) CN117115621A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341778A (en) * 2017-07-10 2017-11-10 国家测绘地理信息局卫星测绘应用中心 SAR image ortho-rectification methods based on satellite control point storehouse and DEM
CN114648704A (en) * 2022-03-17 2022-06-21 山东师范大学 Farmland boundary high-precision extraction method and system
CN116070132A (en) * 2022-12-19 2023-05-05 陕西九州遥感信息技术有限公司 Method for predicting transparency of seawater and sea surface temperature based on multi-source remote sensing data
CN116091640A (en) * 2023-04-07 2023-05-09 中国科学院国家空间科学中心 Remote sensing hyperspectral reconstruction method and system based on spectrum self-attention mechanism
CN116128741A (en) * 2022-10-26 2023-05-16 苏州空天信息研究院 Geometric correction method for wide SAR image
CN116185616A (en) * 2023-02-10 2023-05-30 重庆市气象科学研究所(重庆市生态气象和卫星遥感中心、重庆市农业气象中心) FY-3D MERSI L1B data automatic reprocessing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAI, Pengyan: "Research on Cloud Detection and Cloud Image Prediction Methods Based on the FY-4A Satellite", Master's Thesis Electronic Journal, pages 1 - 34 *

Similar Documents

Publication Publication Date Title
CN110570396B (en) Industrial product defect detection method based on deep learning
CN112991354B (en) High-resolution remote sensing image semantic segmentation method based on deep learning
CN112634292B (en) Asphalt pavement crack image segmentation method based on deep convolutional neural network
CN111210435A (en) Image semantic segmentation method based on local and global feature enhancement module
CN110930439B (en) High-grade product automatic production system suitable for high-resolution remote sensing image
CN110599502B (en) Skin lesion segmentation method based on deep learning
CN111861884A (en) Satellite cloud image super-resolution reconstruction method based on deep learning
CN115439751A (en) Multi-attention-fused high-resolution remote sensing image road extraction method
CN112417752B (en) Cloud layer track prediction method and system based on convolution LSTM neural network
CN114549555A (en) Human ear image planning and division method based on semantic division network
CN115484410B (en) Event camera video reconstruction method based on deep learning
CN115113301A (en) Emergency short-term forecasting method and system based on multi-source data fusion
CN113936204A (en) High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network
CN113240169A (en) Short-term rainfall prediction method of GRU network based on multi-mode data and up-down sampling
CN115908805A (en) U-shaped image segmentation network based on convolution enhanced cross self-attention deformer
CN116958827A (en) Deep learning-based abandoned land area extraction method
CN115457043A (en) Image segmentation network based on overlapped self-attention deformer framework U-shaped network
CN116229106A (en) Video significance prediction method based on double-U structure
CN116415730A (en) Fusion self-attention mechanism time-space deep learning model for predicting water level
CN114998373A (en) Improved U-Net cloud picture segmentation method based on multi-scale loss function
CN112598590B (en) Optical remote sensing time series image reconstruction method and system based on deep learning
CN113628180A (en) Semantic segmentation network-based remote sensing building detection method and system
CN117115621A (en) Satellite cloud image prediction method based on improved U-Net network
CN116957921A (en) Image rendering method, device, equipment and storage medium
CN116309213A (en) High-real-time multi-source image fusion method based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination