CN117152637A - Strong convection cloud identification method based on FY-4A satellite cloud image prediction sequence - Google Patents

Strong convection cloud identification method based on FY-4A satellite cloud image prediction sequence

Info

Publication number
CN117152637A
Authority
CN
China
Prior art keywords
cloud
strong convection
bright temperature
satellite
tbb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311419510.8A
Other languages
Chinese (zh)
Inventor
殷晓斌
陈奇
郑沛楠
李炎
徐青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN202311419510.8A priority Critical patent/CN117152637A/en
Publication of CN117152637A publication Critical patent/CN117152637A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention belongs to the fields of deep learning and meteorology, and particularly relates to a strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences, which comprises the following steps: preprocessing the original data; selecting spectral feature quantities for strong convection cloud identification; extracting texture features of the strong convection cloud image in different directions by the Gabor transform; generating strong convection cloud labels by a brightness temperature threshold method and a brightness temperature difference threshold method; improving the U-Net network to construct a model, R2AttU-Net, suitable for strong convection cloud prediction and identification; inputting satellite cloud images into the R2AttU-Net model to obtain predictions of future cloud images; and inputting the predictions into the R2AttU-Net model to segment the strong convection cloud. The invention combines the spectral and texture characteristics of strong convection cloud clusters to improve the accuracy and recognition rate of strong convection cloud identification, providing more accurate guidance for reducing the disaster impact caused by strong convection weather.

Description

Strong convection cloud identification method based on FY-4A satellite cloud image prediction sequence
Technical Field
The invention belongs to the field of deep learning and meteorology, and particularly relates to a strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences.
Background
Strong convection cloud clusters often bring disastrous weather such as short-duration heavy rainfall, thunderstorm gales, hail, squall lines and tornadoes, so fast and accurate monitoring of strong convection cloud clusters is of great significance. However, because strong convection weather has a short life cycle and a small spatial scale, its location is difficult to monitor and forecast accurately and in time with traditional monitoring methods. Geostationary meteorological satellites, with their wide coverage and high temporal resolution, have become an important means of monitoring strong convection weather, and with the rapid development of deep learning their application to remote sensing image recognition is becoming increasingly deep.
One of the main criteria for strong convective cloud identification from satellite images is spectral characteristics. Because the cloud top of a strong convection cloud is high and the cloud layer is dense, its brightness temperature in the thermal infrared band is low. In 1987, Inoue et al. found that strong convective clouds and cirrus can be better distinguished by the brightness temperature difference between the split-window channels. In 1996, Ackerman found that areas of negative brightness temperature difference between the infrared and water vapor bands in cloud images correspond well to overshooting convective cloud tops. In 2006, Mecikalski et al. used the infrared-water vapor brightness temperature difference and the brightness temperature difference between the split-window channels as criteria for strong convection cloud identification. In 2008, Zheng Yongguang et al. analyzed the activity of mesoscale convective systems over China and surrounding areas in summer using many years of hourly infrared brightness temperature data from geostationary meteorological satellites, and found that statistics of brightness temperatures below -52 °C represent well the basic climatological characteristics of the spatio-temporal distribution of summer convective systems in the region. In 2011, Cai Shumei et al. addressed the shortcoming of a single brightness temperature threshold by adopting adaptive temperature thresholds based on infrared images, applying different thresholds to cloud clusters at different development stages and thereby tracking the complete life cycle of the cloud clusters. In 2009, Zhu Yaping et al. used NOAA/AMSU microwave channel data and GOES-9 infrared channel data to identify strong convection cloud clusters, and found that identification based on an infrared brightness temperature threshold depends strongly on the chosen threshold, whereas using the infrared-water vapor brightness temperature difference as the identification criterion makes the result insensitive to the threshold.
Besides spectral characteristics, the shape, size, texture structure and evolution of clouds can also serve as important bases for identification. In 1992, Welch et al. used gray-level co-occurrence matrices to extract texture features from LANDSAT satellite images and obtained good cloud classification results. In 2008, Zinner et al. combined displacement vector fields computed with an image matching algorithm and spectral images at adjacent times to determine areas of strong convective development in cloud images, using images in different bands to identify strong convective clouds at different development stages. In 2013, Jianyang et al. proposed using the gray-level co-occurrence matrix method to identify strong convective precipitation cloud clusters. Other researchers fitted straight lines to the distribution of cloud-cluster pixels in infrared-water vapor spectral space and, by analyzing the distribution characteristics of the data, identified not only strong convective clouds but also convective clouds that had not yet reached the vigorous stage.
However, cloud features on geostationary meteorological satellite images are complex and variable, these methods have limited accuracy in identifying strong convection cloud clusters, and part of the cirrus noise remains difficult to remove. Some methods mistake cirrus for strong convection cloud, while others miss convective clouds that have not yet developed vigorously; both hinder the timely detection of strong convection clouds and affect the early warning of strong convection weather.
Disclosure of Invention
The invention aims to solve the problems of low accuracy of strong convection cloud identification and difficult removal of part of the cirrus noise in existing methods, and provides a strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences.
The technical scheme of the invention is as follows:
a strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences comprises the following steps:
S1, preprocessing the original data;
S2, selecting spectral feature quantities for strong convection cloud identification;
S3, extracting texture features of the strong convection cloud image in different directions by the Gabor transform;
S4, generating strong convection cloud labels by a brightness temperature threshold method and a brightness temperature difference threshold method;
S5, improving the U-Net network and constructing a model, R2AttU-Net, suitable for strong convection cloud prediction and identification;
S6, inputting satellite cloud images into the R2AttU-Net model to obtain predictions of future cloud images;
S7, inputting the predictions into the R2AttU-Net model and segmenting the strong convection cloud.
Further, in step S1, the 4 km resolution Level-1 data of the Advanced Geostationary Radiation Imager (AGRI) on the Fengyun-4A (FY-4A) satellite are preprocessed with data processing techniques including cropping, projection conversion and data normalization.
Further, in step S2, three spectral feature quantities are selected from the water vapor channel and the long-wave infrared channels, namely the brightness temperature TBB₉ and the brightness temperature differences TBB₉ - TBB₁₂ and TBB₁₂ - TBB₁₃.
Further, in step S3, texture features of the image are extracted in the 0°, 45°, 90° and 135° directions using Gabor filters; the two-dimensional Gabor filter function is G(x, y) = exp[-(x′²/Sx² + y′²/Sy²)/2]·cos(2π·f·x′), with x′ = x·cosθ + y·sinθ and y′ = -x·sinθ + y·cosθ, where Sx and Sy are the ranges of variation of the variables along the x and y axes, i.e. the size of the selected Gabor wavelet window; x and y are the spatial coordinates of the filter; f is the frequency of the sinusoidal function; and θ is the orientation of the Gabor filter.
Further, in step S4, the strong convection cloud labels are generated by a water vapor channel brightness temperature threshold method, a water vapor-infrared channel brightness temperature difference threshold method and a split-window brightness temperature difference threshold method, as follows:
binarizing the image with the water vapor channel brightness temperature threshold, removing cloud clusters whose water vapor channel brightness temperature is greater than 220 K;
binarizing the image with the water vapor-infrared channel brightness temperature difference threshold, using the water vapor-infrared window brightness temperature difference TBB₉ - TBB₁₂ > -4 K for preliminary extraction of strong convective cloud;
applying the split-window brightness temperature difference TBB₁₂ - TBB₁₃ < 2 K to remove part of the noise in the preliminarily extracted convective cloud;
performing a morphological closing operation and an intersection operation on the binarized images, and applying an area threshold;
obtaining the final strong convection cloud label.
Further, in the R2AttU-Net model constructed in step S5, an attention module is applied before the features at each encoder resolution are concatenated with the corresponding decoder features, improving the model's feature extraction capability and its ability to capture spatial and channel information of the image;
the convolution layers of the original U-Net are replaced with recurrent residual convolution modules, in which the convolution result of the previous pass is accumulated with that of the current pass and used as the input of the next layer, so that multi-scale features from different receptive fields are learned and the output feature maps are fully used.
Further, in step S6, five satellite cloud images at 30-minute intervals are input into the R2AttU-Net model to obtain predictions of the next five cloud images.
Further, in step S7, the strong convection cloud is segmented by inputting the satellite cloud image predictions obtained in step S6 into the R2AttU-Net model.
The invention has the beneficial effects that:
(1) The strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences provided by the invention makes full use of the multispectral observation capability of the new generation of domestic satellites and of the texture characteristics of strong convection clouds: for each image, both the spectral features and the texture features derived from them are extracted, mining more cloud cluster feature parameters. The invention uses FY-4A AGRI Level-1 data as the data source and, on the basis of the U-Net network, introduces an attention mechanism and residual connections and replaces the original U-Net convolutions with recurrent convolutions, proposing a new network, R2AttU-Net, which effectively improves the accuracy of strong convection cloud identification.
(2) The identification method of the invention analyzes strong convection clouds from the two aspects of spectral and texture characteristics and determines the feature parameters usable for strong convection cloud identification, so that for each image the spectral features TBB₉, TBB₉ - TBB₁₂ and TBB₁₂ - TBB₁₃, together with the texture features derived from them, are extracted. The invention combines the spectral and texture characteristics of strong convection cloud clusters to improve the accuracy and recognition rate of strong convection cloud identification, providing more accurate guidance for reducing the disaster impact caused by strong convection weather in China.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 shows the strong convection cloud label obtained by the present invention;
FIG. 3 is a diagram of the R2AttU-Net network of the present invention.
Detailed Description
For a further understanding of the present invention, reference will now be made to the drawings and examples; it will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
The embodiment provides a strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences, which is shown in fig. 1 and comprises the following steps:
s1, preprocessing data processing technologies such as cutting, projection conversion, data normalization and the like on the data of the FY-4A AGRI 4 km resolution L1 level;
s2, analyzing spectral characteristics of strong convection clouds, and selecting spectral characteristic quantity for strong convection cloud identification as spectral characteristic brightness temperature value TBB 9 Bright temperature difference value TBB 9 -TBB 12 And TBB (Tunnel boring mill) 12 -TBB 13 The method comprises the steps of carrying out a first treatment on the surface of the The process is as follows:
the analysis of the spectral characteristics of strong convection cloud in the water vapor cloud image and the long-wave infrared cloud image comprises the following steps:
the radiation of the water vapor channel can be expressed as:
wherein,the radiation value is represented, and the two items on the right are radiation information emitted by the low-level cloud and the high-level cloud respectively;representing solar radiation;representing wavelength;representing the emissivity of the ground;blackbody radiation representing a cloud;representing the radiation transmittance of the ground to the water vapor band;representing middle and high waterBlack body radiation of steam;a change rate indicating the transmittance of the water vapor wave band, the value of which changes with the change of the height;zrepresenting the weight of the radiation in the water vapor to the top of the atmosphere.
The amount of radiation emitted in the water vapor band that reaches the satellite can be expressed by a contribution function:

K_λ(z) = B_λ[T(z)]·(∂τ_λ(z)/∂z)　(2)

In formula (2), K_λ(z) denotes the amount of radiation emitted by water vapor at height z that reaches the satellite; the meaning of the remaining parameters is consistent with equation (1).
The infrared cloud image mainly uses the infrared channels in the 10.3 μm-12.5 μm band, and the total infrared radiation reflects the temperature of the target. The cloud radiation information received by the meteorological satellite in an infrared channel is

T_SAT = E_λ·T + (1 - E_λ)·ε_λ·T_s　(3)

where T_SAT denotes the brightness temperature; E_λ denotes the emissivity of the cloud in the infrared channel; T denotes the temperature of the cloud (atmosphere); ε_λ denotes the emissivity of the ground; and T_s denotes the temperature of the ground. Because a strong convection cloud is optically deep, it behaves approximately as a blackbody and transmits no radiation from below the cloud base, i.e. E_λ ≈ 1, so its brightness temperature essentially represents the actual cloud-top temperature. When infrared cloud image data are used, the radiance is generally converted into a cloud-top brightness temperature, i.e. the blackbody radiation received by the satellite is converted according to the Planck formula:

T = (h·c/(k·λ)) / ln[1 + 2·h·c²/(λ⁵·L_λ)]　(4)

where λ denotes the wavelength; h denotes the Planck constant; k denotes the Boltzmann constant; L_λ denotes the radiance received by the satellite; T is the brightness temperature to be calculated; and c is the speed of light in vacuum.
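A short sketch of this conversion is given below; it implements the inverse Planck relation of formula (4) in SI units, and the example wavelength in the comment is only illustrative.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light in vacuum, m/s

def radiance_to_brightness_temperature(radiance, wavelength):
    """Invert the Planck function: spectral radiance (W m^-2 sr^-1 m^-1) at the
    given wavelength (m) -> equivalent blackbody (brightness) temperature in K."""
    return (H * C / (K * wavelength)) / np.log(1.0 + 2.0 * H * C ** 2
                                               / (wavelength ** 5 * radiance))

# Example: a long-wave infrared window channel near 10.8 um (illustrative value)
# tbb = radiance_to_brightness_temperature(radiance_ch12, 10.8e-6)
```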
Since the visible and shortwave infrared bands of the FY-4A satellite cannot be used at night, channels of the FY-4A Advanced Geostationary Radiation Imager are used, in which the ninth channel is a water vapor channel and the 12th and 13th channels are long-wave infrared channels. Based on the above analysis, three spectral feature quantities are selected: the brightness temperature TBB₉ and the brightness temperature differences TBB₉ - TBB₁₂ and TBB₁₂ - TBB₁₃.
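The three spectral feature quantities can then be assembled into input channels with simple array arithmetic; the sketch below assumes the three brightness-temperature grids are already co-registered numpy arrays.

```python
import numpy as np

def spectral_features(tbb9, tbb12, tbb13):
    """Stack TBB9, TBB9-TBB12 and TBB12-TBB13 (all in K) as three feature channels."""
    return np.stack([tbb9, tbb9 - tbb12, tbb12 - tbb13], axis=0)
```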
S3, extracting texture features of the strong convection cloud image in different directions by Gabor transformation; specifically, the texture features of the image are extracted in the directions of 0 °, 45 °, 90 °, and 135 ° using Gabor filters.
In two-dimensional image processing, the Gabor filter has good filtering performance; it resembles the human visual system and is well suited to texture detection. The two-dimensional Gabor filter function is

G(x, y) = exp[-(x′²/Sx² + y′²/Sy²)/2]·cos(2π·f·x′), with x′ = x·cosθ + y·sinθ and y′ = -x·sinθ + y·cosθ,

where Sx and Sy are the ranges of variation of the variables along the x and y axes, i.e. the size of the selected Gabor wavelet window; x and y are the spatial coordinates of the filter; f is the frequency of the sinusoidal function; and θ is the orientation of the Gabor filter. The Gabor filter extracts texture features of the image in the 0°, 45°, 90° and 135° directions.
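A minimal sketch of this directional Gabor filtering is shown below using OpenCV's built-in Gabor kernel; the kernel size, standard deviation, wavelength and aspect ratio are illustrative choices and not parameters taken from the original disclosure.

```python
import cv2
import numpy as np

def gabor_texture_features(image, ksize=15, sigma=4.0, lambd=10.0, gamma=0.5):
    """Filter a single-channel image with Gabor kernels at 0, 45, 90 and 135 degrees."""
    responses = []
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma,
                                    psi=0, ktype=cv2.CV_32F)
        responses.append(cv2.filter2D(image.astype(np.float32), cv2.CV_32F, kernel))
    return np.stack(responses, axis=0)   # four directional texture channels
```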
S4, strong convection cloud labels are generated by brightness temperature and brightness temperature difference thresholding; specifically, the labels are produced with a water vapor channel brightness temperature threshold method, a water vapor-infrared channel brightness temperature difference threshold method and a split-window brightness temperature difference threshold method, as follows:
The image is binarized with the water vapor channel brightness temperature threshold, removing cloud clusters whose water vapor channel brightness temperature is greater than 220 K.
The image is binarized with the water vapor-infrared channel brightness temperature difference threshold, using the water vapor-infrared window brightness temperature difference TBB₉ - TBB₁₂ > -4 K for preliminary extraction of strong convective cloud. The main principle of extracting convective cloud with the water vapor-infrared window brightness temperature difference is as follows: water vapor above a cloud in the troposphere absorbs radiation in the water vapor channel, while radiation in the infrared window channel is absorbed much less, so the water vapor-infrared window brightness temperature difference is negative; the higher the cloud top, the less water vapor remains above the cloud, the higher the water vapor channel brightness temperature and the closer it is to the infrared window channel brightness temperature, so the difference approaches 0 K; when strong convection or an overshooting top occurs, water vapor is carried into the inversion layer above the troposphere, the water vapor channel brightness temperature rises further and may even exceed that of the infrared window channel, and the water vapor-infrared window brightness temperature difference becomes positive.
Cirrus and other noise still remain in the convective cloud preliminarily extracted with the water vapor-infrared window brightness temperature difference, so the split-window brightness temperature difference TBB₁₂ - TBB₁₃ < 2 K is applied to further remove part of the cirrus noise. The main principle of removing cirrus with the split-window brightness temperature difference method is as follows: ice clouds have different radiation absorption characteristics in the split-window channels, i.e. water vapor and cloud particles absorb radiation at a wavelength of 12 μm more strongly than at 11 μm. For the cirrus at the edge of a cumulonimbus cloud, although the cloud-top brightness temperature is very low, the optical thickness of the cloud is small and the upward radiation from below the cloud is not sufficiently blocked, so the brightness temperatures on the two channels satisfy TBB₁₂ > TBB₁₃; the higher and thicker the cloud layer, the lower the cloud-top brightness temperature and the smaller the brightness temperature difference between the two channels.
A morphological closing operation and an intersection operation are then performed on the binarized images, and an area threshold is applied.
The final strong convection cloud label is obtained, as shown in FIG. 2, with the abscissa representing the latitude range and the ordinate representing the longitude range.
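A compact sketch of this label-generation procedure, combining the three thresholds, a morphological closing, the intersection of the binary masks and an area filter, is given below; the structuring element and the minimum-area value are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def make_convection_label(tbb9, tbb12, tbb13, min_area=20):
    """Threshold-based strong convection cloud label on co-registered TBB grids (K)."""
    mask_wv   = tbb9 <= 220.0            # water vapor channel brightness temperature
    mask_btd1 = (tbb9 - tbb12) > -4.0    # water vapor-infrared window difference
    mask_btd2 = (tbb12 - tbb13) < 2.0    # split-window difference (cirrus removal)
    label = mask_wv & mask_btd1 & mask_btd2               # intersection of the masks
    label = ndimage.binary_closing(label, structure=np.ones((3, 3)))
    comps, n = ndimage.label(label)                       # connected components
    sizes = ndimage.sum(label, comps, index=np.arange(1, n + 1))
    keep = np.isin(comps, np.where(sizes >= min_area)[0] + 1)   # area threshold
    return keep.astype(np.uint8)
```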
S5, improving a U-Net network, and constructing a model R2AttU-Net suitable for strong convection cloud prediction and identification;
the R2AttU-Net network structure is shown in FIG. 3. Before splicing the features on each resolution of the encoder with the corresponding features in the decoder, the network model uses an attention module, so that the feature extraction capacity of the model and the capacity of capturing the space dimension and channel dimension information of an image can be effectively improved; the convolution layer of the original U-Net is changed into a cyclic residual convolution module, and the convolution result of the front layer and the convolution result of the own layer are accumulated to be used as the input of the lower layer, so that the multi-scale characteristics of different receptive fields are learned, and the output characteristic diagram is fully utilized; and the degradation problem of the deep network can be avoided by adding residual connection, the characteristic use efficiency is further enhanced, and the generalization performance of the model is improved.
S6, five satellite cloud images at 30-minute intervals are input into the R2AttU-Net model to obtain predictions of the next five cloud images;
S7, the satellite cloud image predictions from S6 are input into the R2AttU-Net model to segment the strong convection cloud.
Example 2
The model training area in this embodiment covers the eastern coastal area of China and the northwest Pacific, with a longitude range of 118.52°E-128.72°E and a latitude range of 25.28°N-35.48°N. In this embodiment, 5000 time series are selected from 29 June to 16 September 2021 and from 29 June to 16 September 2022; 4000 time series are used as the training set, 500 as the validation set, and the remaining 500 as the test set, with the training, validation and test data kept mutually independent to ensure the validity of the experimental results.
In this embodiment, the strong convection cloud identification results are quantitatively evaluated with four indexes computed over all samples in the test set: accuracy (Accuracy), precision (Precision), recall (Recall) and F1 score (F1-Score). The four evaluation indexes are calculated as follows:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-Score = 2 × Precision × Recall / (Precision + Recall)
where TP is the number of samples correctly identified as positive; FP is the number of samples incorrectly identified as positive; TN is the number of samples correctly identified as negative; and FN is the number of positive samples incorrectly identified as negative.
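For reference, the four indexes reduce to the following simple count-based computation (a sketch; the pixel counts are assumed to be aggregated over the test set).

```python
def evaluation_metrics(tp, fp, tn, fn):
    """Accuracy, Precision, Recall and F1 score from positive/negative pixel counts."""
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```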
The accuracy of the R2AttU-Net model constructed by the invention in satellite cloud image recognition is compared with that of three other models, namely U-Net, U-Net with an attention mechanism, and U-Net with residual connections and recurrent convolution; the comparison results are shown in Table 1 below.
Table 1 Recognition accuracy comparison results obtained with the four models
Compared with the other models, the R2AttU-Net network model achieves higher precision; on the test set, its average accuracy and recall at the five future times reach 97.62% and 83.34%, respectively.
By adopting the technical scheme of the invention, the ability to capture cloud features is enhanced and the segmentation and identification of strong convection cloud clusters are improved, which to a certain extent reduces or even avoids the harm caused by strong convection weather and has important economic value and social significance.
The foregoing description is only a preferred embodiment of the present invention and is not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents. Any modification, equivalent replacement, variation, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (8)

1. A strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences, characterized by comprising the following steps:
S1, preprocessing the original data;
S2, selecting spectral feature quantities for strong convection cloud identification;
S3, extracting texture features of the strong convection cloud image in different directions by the Gabor transform;
S4, generating strong convection cloud labels by a brightness temperature threshold method and a brightness temperature difference threshold method;
S5, improving the U-Net network and constructing a model, R2AttU-Net, suitable for strong convection cloud prediction and identification;
S6, inputting satellite cloud images into the R2AttU-Net model to obtain predictions of future cloud images;
S7, inputting the predictions into the R2AttU-Net model and segmenting the strong convection cloud.
2. The strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences according to claim 1, wherein in step S1, the 4 km resolution Level-1 data of the Fengyun-4A Advanced Geostationary Radiation Imager are preprocessed with data processing techniques including cropping, projection conversion and data normalization.
3. The strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences according to claim 1, wherein in step S2, three spectral feature quantities are selected from the water vapor channel and the long-wave infrared channels, namely the brightness temperature TBB₉ and the brightness temperature differences TBB₉ - TBB₁₂ and TBB₁₂ - TBB₁₃.
4. The strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences according to claim 1, wherein in step S3, texture features of the image are extracted in the 0°, 45°, 90° and 135° directions using Gabor filters, and the two-dimensional Gabor filter function is G(x, y) = exp[-(x′²/Sx² + y′²/Sy²)/2]·cos(2π·f·x′), with x′ = x·cosθ + y·sinθ and y′ = -x·sinθ + y·cosθ, where Sx and Sy are the ranges of variation of the variables along the x and y axes, i.e. the size of the selected Gabor wavelet window; x and y are the spatial coordinates of the filter; f is the frequency of the sinusoidal function; and θ is the orientation of the Gabor filter.
5. The strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences according to claim 1, wherein in step S4, the strong convection cloud labels are generated by a water vapor channel brightness temperature threshold method, a water vapor-infrared channel brightness temperature difference threshold method and a split-window brightness temperature difference threshold method, as follows:
binarizing the image with the water vapor channel brightness temperature threshold, removing cloud clusters whose water vapor channel brightness temperature is greater than 220 K;
binarizing the image with the water vapor-infrared channel brightness temperature difference threshold, using the water vapor-infrared window brightness temperature difference TBB₉ - TBB₁₂ > -4 K for preliminary extraction of strong convective cloud;
applying the split-window brightness temperature difference TBB₁₂ - TBB₁₃ < 2 K to remove part of the noise in the preliminarily extracted convective cloud;
performing a morphological closing operation and an intersection operation on the binarized images, and applying an area threshold;
obtaining the final strong convection cloud label.
6. The strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences according to claim 1, wherein in the R2AttU-Net model constructed in step S5, an attention module is applied before the features at each encoder resolution are concatenated with the corresponding decoder features, improving the model's feature extraction capability and its ability to capture spatial and channel information of the image;
the convolution layers of the original U-Net are replaced with recurrent residual convolution modules, in which the convolution result of the previous pass is accumulated with that of the current pass and used as the input of the next layer, so that multi-scale features from different receptive fields are learned and the output feature maps are fully used.
7. The strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences according to claim 1, wherein in step S6, predictions of the next five cloud images are obtained by inputting five satellite cloud images at 30-minute intervals into the R2AttU-Net model.
8. The strong convection cloud identification method based on FY-4A satellite cloud image prediction sequences according to claim 1, wherein in step S7, the strong convection cloud is segmented by inputting the satellite cloud image predictions obtained in step S6 into the R2AttU-Net model.
CN202311419510.8A 2023-10-30 2023-10-30 Strong convection cloud identification method based on FY-4A satellite cloud image prediction sequence Pending CN117152637A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311419510.8A CN117152637A (en) 2023-10-30 2023-10-30 Strong convection cloud identification method based on FY-4A satellite cloud image prediction sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311419510.8A CN117152637A (en) 2023-10-30 2023-10-30 Strong convection cloud identification method based on FY-4A satellite cloud image prediction sequence

Publications (1)

Publication Number Publication Date
CN117152637A true CN117152637A (en) 2023-12-01

Family

ID=88906507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311419510.8A Pending CN117152637A (en) 2023-10-30 2023-10-30 Strong convection cloud identification method based on FY-4A satellite cloud image prediction sequence

Country Status (1)

Country Link
CN (1) CN117152637A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678295A (en) * 2016-01-04 2016-06-15 武汉科技大学 Method for real-time monitoring gas heating furnace flame on the basis of ROI average image analysis
CN106845541A (en) * 2017-01-17 2017-06-13 杭州电子科技大学 A kind of image-recognizing method based on biological vision and precision pulse driving neutral net
CN108629297A (en) * 2018-04-19 2018-10-09 北京理工大学 A kind of remote sensing images cloud detection method of optic based on spatial domain natural scene statistics
CN113792756A (en) * 2021-08-17 2021-12-14 西安电子科技大学 SAR sample expansion method based on semi-supervised generation countermeasure network
CN113569810A (en) * 2021-08-30 2021-10-29 黄河水利委员会黄河水利科学研究院 Remote sensing image building change detection system and method based on deep learning
CN114638990A (en) * 2022-03-16 2022-06-17 中国人民解放军61540部队 Satellite cloud picture classification method and classification system
CN116704557A (en) * 2023-04-25 2023-09-05 哈尔滨理工大学 Low-quality fingerprint matching method based on texture information
CN116778354A (en) * 2023-08-08 2023-09-19 南京信息工程大学 Deep learning-based visible light synthetic cloud image marine strong convection cloud cluster identification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
蔡朋艳: "Research on cloud detection and cloud image prediction methods based on the FY-4A satellite", Master's thesis (electronic journal), pages 1-68 *
郑益勤 et al.: "Identifying strong convective cloud clusters over the sea in geostationary satellite images with a deep learning model", Journal of Remote Sensing, pages 97-106 *

Similar Documents

Publication Publication Date Title
Walker et al. An enhanced geostationary satellite–based convective initiation algorithm for 0–2-h nowcasting with object tracking
Simpson et al. A procedure for the detection and removal of cloud shadow from AVHRR data over land
Yang et al. An automated cirrus cloud detection method for a ground-based cloud image
CN109284706B (en) Hot spot grid industrial aggregation area identification method based on multi-source satellite remote sensing data
CN109101955A (en) Industrial heat anomaly area recognizing method based on Multi-sensor satellite remote sensing
Behrangi et al. Daytime precipitation estimation using bispectral cloud classification system
CN112069955B (en) Typhoon intensity remote sensing inversion method based on deep learning
Alonso et al. Sky camera imagery processing based on a sky classification using radiometric data
CN115437036A (en) Sunflower satellite-based convective birth forecasting method
Kan et al. Snow Cover Mapping for Mountainous Areas by Fusion of MODIS L1B and Geographic Data Based on Stacked Denoising Auto-Encoders.
CN109767465B (en) Method for rapidly extracting daytime fog based on H8/AHI
Lee et al. Pre-trained feature aggregated deep learning-based monitoring of overshooting tops using multi-spectral channels of GeoKompsat-2A advanced meteorological imagery
Ni et al. Hurricane eye morphology extraction from SAR images by texture analysis
Jee et al. Development of GK-2A AMI aerosol detection algorithm in the East-Asia region using Himawari-8 AHI data
CN114170503A (en) Processing method of meteorological satellite remote sensing cloud picture
Zhao et al. Cloud identification and properties retrieval of the Fengyun-4A satellite using a ResUnet model
Zheng et al. Quantitative Ulva prolifera bloom monitoring based on multi-source satellite ocean color remote sensing data.
CN107576399A (en) Towards bright the temperature Forecasting Methodology and system of MODIS forest fire detections
CN115639979B (en) High-resolution SPEI data set development method based on random forest regression model
CN117152637A (en) Strong convection cloud identification method based on FY-4A satellite cloud image prediction sequence
Wang et al. Obtaining cloud base height and phase from thermal infrared radiometry using a deep learning algorithm
Rumi et al. Automated cloud classification using a ground based infra-red camera and texture analysis techniques
Hao et al. Arctic sea ice concentration retrieval using the DT-ASI algorithm based on FY-3B/MWRI data
Yang et al. Analyzing of cloud macroscopic characteristics in the Shigatse area of the Tibetan Plateau using the total-sky images
Andreev et al. Cloud detection from the Himawari-8 satellite data using a convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination