CN114170528A - Strong convection region identification method based on satellite cloud picture


Info

Publication number
CN114170528A
Authority
CN
China
Legal status
Pending
Application number
CN202111454780.3A
Other languages
Chinese (zh)
Inventor
Zhang Jun (张军)
Jing Yuyan (荆雨岩)
Wang Ping (王萍)
Yang Zhengling (杨正瓴)
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202111454780.3A
Publication of CN114170528A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

The invention discloses a strong convection region identification method based on satellite cloud pictures. The work centers on three infrared-channel datasets from the FY-2G geostationary satellite and radar combined reflectivity data; combining computer vision techniques with a deep learning algorithm, a semantic segmentation algorithm is designed for the problem of identifying strong convection regions in satellite cloud images. First, satellite and radar data are acquired and aligned in both spatial and temporal resolution; then a feature extraction network and a feature fusion network are constructed; finally, training yields a semantic segmentation model that can identify strong convection regions in satellite cloud images. Compared with two existing algorithms, the proposed algorithm achieves a higher intersection-over-union and a better overall identification effect for strong convection regions.

Description

Strong convection region identification method based on satellite cloud picture
Technical Field
The invention relates to the fields of meteorology and machine learning, and in particular to a method for identifying strong convection regions in satellite cloud pictures using labels derived from radar combined reflectivity images.
Background
With the rapid development of the economy and the continuous modernization of meteorology, people have come to recognize more deeply the important influence of strong convection on daily travel, industrial and agricultural production, national disaster prevention and mitigation, major social activities, and so on [1]. In recent years, big data and artificial intelligence technologies have developed vigorously. Among them, semantic segmentation is one of the key problems in computer vision today [2]. Semantic segmentation classifies images at the pixel level, grouping pixels that belong to the same class; it thus understands and classifies an image at the fine granularity of individual pixels [3]. Complex scene images contain many objects of many types with overlapping information, so accurately identifying the positions and categories of the objects in an image remains challenging. How to effectively extract semantic information from complex scenes with diverse objects, and thereby provide a practical solution for scene semantic analysis, is therefore an urgent need for content perception and intelligent analysis in computer vision.
Strong convection refers to a convective cloud system formed when an unstable atmosphere, under suitable water vapor conditions, is subjected to strong lifting; it can induce severely disastrous convective weather such as thunderstorms, thunderstorm gales, short-duration heavy precipitation, and hail. Because the disasters caused by strong convective weather bring huge losses to people's lives, property, and national economic infrastructure, accurately identifying strong convection clouds is of great significance.
References
[1] Zheng Yongguang, Zhang Xiaoling, Zhou Qingliang, et al. Advances and challenges in nowcasting techniques for severe convective weather [J]. Meteorological Monthly, 2010, 36(7): 33-42.
[2] Everingham M, Eslami S M A, Van Gool L, et al. The Pascal Visual Object Classes Challenge: A Retrospective [J]. International Journal of Computer Vision, 2015, 111(1): 98-136.
[3] Chen Hongxiang. Image Semantic Segmentation Based on Convolutional Neural Networks [D]. Zhejiang University, 2016.
[4] Zhang X, Wang T, Chen G, et al. Convective Clouds Extraction From Himawari-8 Satellite Images Based on Double-Stream Fully Convolutional Networks [J]. IEEE Geoscience and Remote Sensing Letters, 2019, PP(99): 1-5.
[5] Symmetrical Dense-Shortcut Deep Fully Convolutional Networks for Semantic Segmentation of Very-High-Resolution Remote Sensing Images [J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(5): 1633-1644.
Disclosure of Invention
To address the problems in the prior art, the invention provides a strong convection region identification method based on satellite cloud pictures, solving the problem that strong convection clouds cannot be accurately identified and strong convection disasters therefore cannot be effectively guarded against.
The technical scheme of the invention is as follows:
1. alignment of acquisition of satellite and radar data with both spatial and temporal resolutions:
the radar data was from the Tianjin weather service, using data from their radar combined reflectivity maps. Because of the large number, density and diameter of the strong convective precipitation particles, the weather radar chart usually shows a high reflectivity value. Therefore, when strong convection weather occurs, a large area with high intensity usually appears on the reflectivity factor image, so the radar combined reflectivity map can be used as a mark of a satellite cloud map, and the area with the radar combined reflectivity greater than 35dBZ is generally considered to have strong convection. The radar combined reflectivity map belongs to an equal distance lattice map.
The satellite data come from the National Satellite Meteorological Center: the three infrared channels IR1-IR3 of the Fengyun-2G satellite, where IR1 is a long-wave infrared channel, IR2 is an infrared split-window channel, and IR3 is a water vapor channel. The full-disc cloud pictures of the three channels are inverted into satellite cloud pictures on an equal latitude-longitude grid, from which the Tianjin area is cropped according to the latitude-longitude range covered by the Tianjin meteorological radar, forming the original satellite cloud pictures to be identified.
Labeling the convection regions in the satellite cloud picture directly with the radar image is clearly not feasible: on the radar combined reflectivity map adjacent pixels are equidistant (1 km apart), while adjacent pixels in the satellite cloud picture are separated by equal latitude-longitude intervals, so pixels in the two images do not correspond spatially. The two images must therefore be aligned in space, that is, reprojected. Since the latitude and longitude of the radar map's center point are known, the latitude and longitude of every point in the radar map can be derived, and the radar map can then be aligned with the satellite cloud picture in latitude and longitude using a bilinear interpolation algorithm.
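The alignment step can be sketched as follows. This is a minimal illustration only, assuming a 1 km equidistant radar grid whose center latitude/longitude is known; the grid size, site coordinates, and the `radar_rowcol` / `bilinear_sample` helpers are assumed for the example and are not the patent's actual implementation.

```python
import numpy as np

# Assumed radar-site coordinates and grid size; the real values come from
# the Tianjin radar metadata described in the text.
CENTER_LAT, CENTER_LON = 39.04, 117.72
KM_PER_DEG_LAT = 111.0

def radar_rowcol(lat, lon, n=460):
    """Map a latitude/longitude to a fractional (row, col) on the 1 km radar grid."""
    km_per_deg_lon = KM_PER_DEG_LAT * np.cos(np.radians(CENTER_LAT))
    row = n / 2 - (lat - CENTER_LAT) * KM_PER_DEG_LAT  # larger latitude = smaller row
    col = n / 2 + (lon - CENTER_LON) * km_per_deg_lon
    return row, col

def bilinear_sample(grid, row, col):
    """Sample a 2-D grid at a fractional (row, col) by bilinear interpolation."""
    r0 = int(np.clip(np.floor(row), 0, grid.shape[0] - 2))
    c0 = int(np.clip(np.floor(col), 0, grid.shape[1] - 2))
    dr, dc = row - r0, col - c0
    top = grid[r0, c0] * (1 - dc) + grid[r0, c0 + 1] * dc
    bot = grid[r0 + 1, c0] * (1 - dc) + grid[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bot * dr
```

Evaluating `bilinear_sample` at the `radar_rowcol` position of every satellite-grid pixel resamples the radar map onto the satellite's equal latitude-longitude grid.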
2. Construction of feature extraction network based on ResNet algorithm
The first step of a semantic segmentation algorithm is to extract image features, and the performance of the feature extraction network affects the quality of the semantic segmentation results. The invention uses the ResNet (residual neural network) algorithm for feature extraction, introduced below.
Deeper networks suffer from information loss, vanishing gradients, and exploding gradients during training, so earlier deep networks saw their classification performance degrade as more layers were added. ResNet introduces the residual concept from classical computer vision into the construction of deep learning models, yielding the residual module. The idea is to feed the output of an earlier layer directly into the input of a later layer, skipping the layers in between, so that the content of the later feature layer contains a part contributed linearly by the earlier layer. Rather than using parameter layers to learn the input-to-output mapping directly, as a general convolutional neural network does, ResNet uses them to learn the residual between input and output. Mathematically, a neural network fits a residual function more easily than it fits the full mapping directly, and experiments confirm that learning the residual converges faster; moreover, as model depth increases, classification accuracy rises rather than falls.
The invention needs a relatively deep ResNet model to improve feature extraction from satellite cloud pictures, but to keep training fast the depth must not be excessive. Moreover, the strong convection regions to be identified are small and scattered, so three downsampling operations are preferable: more downsampling would reduce sensitivity to small regions. Balancing these considerations, the invention adopts the widely used ResNet50 network, where 50 indicates a depth of 50 layers, to extract image features from the satellite cloud picture.
The ResNet algorithm mainly comprises two basic modules, the Conv Block and the Identity Block. Stacking these two modules according to a fixed pattern yields the required ResNet50 design: the ResNet50 feature extraction network is formed from 4 Conv Blocks and 12 Identity Blocks.
The ResNet neural network effectively solves the information loss, vanishing gradients, and exploding gradients caused by increasing network depth, so classification performance can keep improving as depth grows, and most of the deeper neural network algorithms introduced since, built on the residual network, achieve better results.
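The residual idea described above, y = F(x) + x, can be illustrated with a minimal NumPy sketch. The two matrix multiplications stand in for the convolutional parameter layers of a real residual block; this illustrates only the shortcut connection, not the patent's actual ResNet50.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = F(x) + x: the parameter layers learn the residual F, not the full mapping."""
    f = relu(x @ w1) @ w2      # F(x): two parameter layers (stand-ins for convolutions)
    return relu(f + x)         # identity shortcut added before the final activation

# With zero weights, F(x) = 0 and the block reduces to the identity (for x >= 0),
# which is exactly why deeper residual networks do not degrade: extra blocks can
# default to passing the input through unchanged.
x = np.array([1.0, 2.0, 3.0])
w = np.zeros((3, 3))
y = residual_block(x, w, w)
```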
3. Construction of feature fusion network based on pyramid pooling module
A necessary link in a semantic segmentation algorithm is feature fusion, which mainly restores the feature map obtained through deep learning to a segmentation result map of the original size using methods such as bilinear interpolation and deconvolution, so that the image serves as input and every pixel in it is classified. To obtain global image-level features, a spatial pyramid pooling model is employed, whose spatial statistics provide a good descriptor for overall scene interpretation.
The feature fusion module of the invention embeds a pyramid pooling module after the ResNet feature extraction network; this module is the core of the feature fusion module. FCN-based semantic segmentation models lack the ability to use global context information, yet the global context of an image is important for understanding the entire scene and improving segmentation performance. The pyramid pooling model embeds global context features into the ResNet-based feature extraction framework by fusing information from sub-regions at different scales. It takes the feature map of the last convolutional layer of the convolutional neural network as input and fuses these features at four pyramid scales. Upsampling (bilinear interpolation) is then applied to the features at each scale to bring them to the same size as the original feature map, after which the upsampled features are concatenated with the original feature map to form a final global feature map containing both local and global context information.
The invention aims to identify strong convection regions in the satellite cloud picture, so the task is binary. Common semantic segmentation tasks mostly deal with single objects of uniform size and distribution in the image, whereas strong convection regions in a satellite cloud picture are small and scattered, making segmentation relatively difficult; the invention therefore needs to introduce finer semantic features and more shallow features. In a convolutional neural network, the receptive field is the size of the region of the original image that a pixel on a layer's output feature map corresponds to. Given the particularity of this task, receptive fields larger than the actual picture are not needed; instead, small-region receptive fields must be emphasized so that the identification result is better.
The invention uses the ResNet50 neural network for feature extraction. First, the ResNet50 feature extraction network produces the feature map of the last convolutional layer, with resolution 60 × 60. A pyramid pooling module is then applied to collect representations of different sub-regions: the feature map is pooled at several scales, yielding pooled feature maps of sizes 3 × 3, 6 × 6, 15 × 15, and 30 × 30; a 1 × 1 convolution is applied to each of the 4 feature maps to reduce dimensionality; the maps are then upsampled by bilinear interpolation and concatenated with the original feature map into a new connecting layer, forming the final feature representation, which carries local and global context information. Finally, the merged result is sent to a convolutional layer to obtain the pixel-by-pixel semantic segmentation result.
On the basis of combining multi-scale context information, the feature fusion model gives greater weight to shallow features, enhancing attention to small-region receptive fields. This brings the model closer to the small and scattered strong convection targets of interest, and the targeted feature fusion improves the segmentation effect.
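The pool/upsample/concatenate structure of the pyramid pooling module can be sketched in NumPy. For brevity this sketch uses nearest-neighbor upsampling instead of the bilinear interpolation described in the text, and omits the 1 × 1 dimension-reduction convolutions; it illustrates the structure only, not the actual module.

```python
import numpy as np

def adaptive_avg_pool(fm, out):
    """Average-pool a (C, H, W) feature map down to (C, out, out)."""
    c, h, w = fm.shape
    pooled = np.zeros((c, out, out))
    for i in range(out):
        for j in range(out):
            r0, r1 = i * h // out, (i + 1) * h // out
            c0, c1 = j * w // out, (j + 1) * w // out
            pooled[:, i, j] = fm[:, r0:r1, c0:c1].mean(axis=(1, 2))
    return pooled

def upsample_nearest(fm, h, w):
    """Nearest-neighbor upsampling of a (C, h', w') map to (C, h, w)."""
    c, ph, pw = fm.shape
    rows = np.arange(h) * ph // h
    cols = np.arange(w) * pw // w
    return fm[:, rows][:, :, cols]

def pyramid_pool(fm, scales=(3, 6, 15, 30)):
    """Concatenate the original map with pooled-and-upsampled maps at each scale."""
    c, h, w = fm.shape
    branches = [fm] + [upsample_nearest(adaptive_avg_pool(fm, s), h, w) for s in scales]
    return np.concatenate(branches, axis=0)
```

On a 60 × 60 feature map with C channels, the output has 5C channels: the original plus one branch per pyramid scale.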
4. Training method for fusing transfer learning idea and freezing model training idea
The specific steps of the training of the invention are as follows:
(1) First, load the overall model framework together with the pre-trained model parameters, initializing the parameters of the feature extraction network ResNet50 and of the feature fusion model.
(2) Load the satellite cloud picture training set data, apply numerical normalization preprocessing to the raw data, and load the radar combined reflectivity label data.
(3) Run a feedforward pass over the whole network: the feature extraction network ResNet50 and the feature fusion model produce a segmentation result map, and the Dice Loss function computes the loss between the strong convection segmentation result of the satellite cloud picture and the labeled radar combined reflectivity map.
(4) Update the learning rate of the neural network with an exponential decay strategy, then update the network parameters by stochastic gradient descent (SGD).
(5) Repeat steps (2), (3), and (4) in a loop. To obtain better generalization, an early stopping mechanism is introduced during training: when the generalization loss exceeds a certain threshold, or the generalization loss barely changes over a specified number of consecutive epochs, or the generalization error increases over a specified number of consecutive epochs, training stops and the final training result is output.
(6) To accelerate training, the invention introduces the idea of freeze training: the parameter updates of the feature extraction network are frozen first and the subsequent feature fusion network is updated preferentially; after the early stopping mechanism of step (5) triggers, the feature extraction network's parameters are unfrozen and training continues from the parameters obtained, with the parameters of all networks updated; after the early stopping mechanism of step (5) triggers a second time, all training stops, and the training result is the final required result.
5. Parameter settings centered on an exponential decay strategy, starting from a low learning rate
After debugging, the learning rate during frozen-model training is set to 0.0001 and during unfrozen-model training to 0.00001, with an exponentially decaying learning-rate update strategy; Batch_size is set to 2 to prevent GPU memory overflow. The early stopping mechanism stops training and outputs the training result when the generalization loss exceeds a certain threshold, or the generalization loss barely changes over a specified number of consecutive epochs, or the generalization error increases over a specified number of consecutive epochs.
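The learning-rate settings above (1e-4 while the backbone is frozen, 1e-5 after thawing, exponential decay per epoch) can be sketched as follows; the decay factor 0.96 is an assumed example value, since the text does not specify one.

```python
def lr_schedule(epoch, frozen, decay=0.96):
    """Exponentially decayed learning rate.

    Base rates (1e-4 frozen, 1e-5 unfrozen) follow the text;
    the decay factor is an assumed example value.
    """
    base = 1e-4 if frozen else 1e-5
    return base * decay ** epoch
```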
Advantageous effects:
Zhang et al. [4] proposed in 2020 a novel deep network for spectral feature extraction using only 1 × 1 convolutions (3ONet), and then fused it with the SDFCN [5] (Symmetrical Dense-Shortcut Deep Fully Convolutional Network) feature extraction algorithm to produce a Double-Stream Network (DSNet) algorithm performing better than 3ONet; Zhang applied both algorithms to the semantic segmentation of strong convection regions in satellite cloud images with good results. The invention compares experimental results of these 2 algorithms with its own results, shown in Table 1:
TABLE 1 comparison of results of different semantic segmentation algorithms
Compared with these 2 algorithms, the algorithm proposed by the invention achieves a higher intersection-over-union and a better overall identification effect for strong convection regions.
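The intersection-over-union used in the comparison above can be computed as follows; a minimal sketch for binary masks where 1 marks strong convection.

```python
import numpy as np

def iou(pred, label):
    """Intersection-over-union of two binary masks (1 = strong convection)."""
    pred, label = pred.astype(bool), label.astype(bool)
    inter = np.logical_and(pred, label).sum()
    union = np.logical_or(pred, label).sum()
    return inter / union if union else 1.0  # both masks empty: perfect agreement
```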
Drawings
Fig. 1 shows the satellite full-disc cloud picture for each channel: (a) the long-wave infrared channel IR1, (b) the split-window channel IR2, (c) the water vapor channel IR3, and (d) the medium-wave infrared channel IR4;
FIG. 2 is a pseudo-color graph of radar combined reflectivity;
FIG. 3 is a spatial-aligned satellite cloud and radar combined reflectivity plot: wherein (a) is a radar combined reflectivity map and (b) is a satellite cloud map;
FIG. 4 is a radar data annotation graph;
FIG. 5 is the residual structure of ResNet;
FIG. 6 is a diagram illustrating the structure of Conv Block and Identity Block: wherein (a) is a Conv Block structure diagram, and (b) is an Identity Block structure diagram;
FIG. 7 is a diagram of the neural network architecture of ResNet 50;
FIG. 8 is a diagram of the overall structure of a semantic segmentation network;
FIG. 9 is a training/testing flow diagram;
FIG. 10 is an exemplary diagram of a partial semantic segmentation result: wherein (a) is a satellite cloud picture, (b) is a semantic segmentation result picture, and (c) is a radar mark picture.
Detailed Description
The technical solutions of the present invention are further described in detail with reference to the accompanying drawings and specific embodiments, which are only illustrative of the present invention and are not intended to limit the present invention.
1 data acquisition and processing
The satellite full-disc cloud images of all channels are shown in Fig. 1; these are the most original satellite cloud data provided by the National Satellite Meteorological Center. The radar combined reflectivity original image, a pseudo-color image of the reflectivity data provided by the Tianjin Meteorological Service, is shown in Fig. 2. Because the latitude and longitude of the radar map's center point are known, the latitude and longitude of every point in the radar map can be derived, and the radar map can then be aligned with the satellite cloud picture in latitude and longitude using a bilinear interpolation algorithm. The spatially aligned satellite cloud picture and radar combined reflectivity map are shown in Fig. 3.
Strong convection is considered to occur in areas where the radar combined reflectivity is greater than 35 dBZ. Regions of the radar combined reflectivity map greater than 35 dBZ are therefore labeled 1 and the remaining regions labeled 0, generating the label maps required for the data set, as shown in Fig. 4.
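The labeling rule is a simple threshold on the aligned reflectivity grid; a minimal sketch:

```python
import numpy as np

def make_labels(reflectivity_dbz, threshold=35.0):
    """Binary label map: 1 where radar combined reflectivity exceeds 35 dBZ, else 0."""
    return (reflectivity_dbz > threshold).astype(np.uint8)
```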
2 construction of feature extraction network
The residual structure of the ResNet feature extraction network is shown in Fig. 5. If the input is x and the parameterized network layers implement a function F, a general convolutional neural network learns F directly through training, i.e., the mapping from x to F(x). Residual learning instead uses the parameterized layers to learn the residual between the network's input and output, with the overall goal of learning the mapping from x to F(x) + x, where the x term is the direct input passed along the shortcut and F(x) is the residual between the input and the output of the parameterized layers.
The design of the ResNet algorithm mainly includes two basic modules, the Conv Block and the Identity Block; ResNet increases depth by stacking these two modules, whose structures are shown in Fig. 6. The Conv Block cannot be connected in series because its input and output dimensions differ; its main function is to change the network's dimensionality and accelerate training. The Identity Block has identical input and output dimensions and can be connected in series to deepen the network and enhance the extraction of image pixel features.
Stacking these two modules regularly yields the ResNet50 design required by the invention, as shown in Fig. 7: the ResNet50 feature extraction network used by the invention is formed from 4 Conv Blocks and 12 Identity Blocks, and 50 indicates that the network is 50 layers deep.
3 Construction of the feature fusion network
As shown in Fig. 8, the semantic segmentation network used in the invention consists mainly of two parts: the deep convolutional network ResNet50 in front, which extracts features, and the pyramid pooling model behind, which extracts fused features. For a given input image, the invention first uses the deep convolutional neural network to obtain the feature map of the last convolutional layer, then applies the pyramid pooling module to collect representations of different sub-regions: the feature map is pooled at several scales, yielding pooled feature maps of sizes 3 × 3, 6 × 6, 15 × 15, and 30 × 30; a 1 × 1 convolution reduces the dimensionality of each of the 4 feature maps; the maps are upsampled by bilinear interpolation and merged with the original feature map into a new connecting layer forming the final feature representation, which carries local and global context information. Finally, the merged result is sent to a convolutional layer to obtain the pixel-by-pixel semantic segmentation result. The feature maps of different levels generated by pyramid pooling are combined into the final feature map, and a fully connected layer is added for classification. This global prior removes the fixed-size constraint of the deep convolutional neural network for image classification and further reduces the loss of context information between different sub-regions. Global context information together with sub-region context helps distinguish the various classes; the deeper reason is the fusion of information from sub-regions with these different receptive fields. In the rightmost binary image of Fig. 8, white areas are those the model predicts as strong convection and black areas those predicted as non-strong-convection.
4 model training
Fig. 9 shows the training/testing flow of the invention, where CNN denotes the feature extraction network ResNet50 used by the invention and Loss denotes the loss computed between the neural network's output and the labeled data; the pre-trained parameters come from pre-training the network on other data sets.
In semantic segmentation research, the main loss functions are CE Loss and Dice Loss; since the invention focuses on the two-class problem of strong convection versus non-strong-convection regions, Dice Loss, which works better for two-class problems, is used as the loss function.
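Dice Loss can be sketched in NumPy as follows, where `pred` holds predicted foreground probabilities and `target` the binary radar labels; the smoothing term `eps` is a common implementation detail assumed here, not specified in the text.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice loss = 1 - 2|P.T| / (|P| + |T|); pred in [0, 1], target binary."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction gives a loss near 0; a completely disjoint one gives a loss near 1.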
The specific steps of the training of the invention are as follows:
(1) First, load the overall model framework together with the pre-trained model parameters, initializing the parameters of the feature extraction network ResNet50 and of the feature fusion model.
(2) Load the satellite cloud picture training set data, apply numerical normalization preprocessing to the raw data, and load the radar combined reflectivity label data.
(3) Run a feedforward pass over the whole network: the feature extraction network ResNet50 and the feature fusion model produce a segmentation result map, and the Dice Loss function computes the loss between the strong convection segmentation result of the satellite cloud picture and the labeled radar combined reflectivity map.
(4) Update the learning rate of the neural network with an exponential decay strategy, then update the network parameters by stochastic gradient descent (SGD).
(5) Repeat steps (2), (3), and (4) in a loop. To obtain better generalization, an early stopping mechanism is introduced during training: when the generalization loss exceeds a certain threshold, or the generalization loss barely changes over a specified number of consecutive epochs, or the generalization error increases over a specified number of consecutive epochs, training stops and the final training result is output.
(6) To accelerate training, the invention introduces the idea of freeze training: the parameter updates of the feature extraction network are frozen first and the subsequent feature fusion network is updated preferentially; after the early stopping mechanism of step (5) triggers, the feature extraction network's parameters are unfrozen and training continues from the parameters obtained, with the parameters of all networks updated; after the early stopping mechanism of step (5) triggers a second time, all training stops, and the training result is the final required result.
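The control flow of steps (5) and (6), two training phases (backbone frozen, then unfrozen) each terminated by early stopping, can be sketched as a minimal skeleton. Here `run_epoch` is a hypothetical callback that trains one epoch and returns a validation loss, and only the no-improvement criterion is implemented; the text also allows threshold-based and trend-based stopping criteria.

```python
def train(run_epoch, patience=5):
    """Two-phase freeze training driven by early stopping.

    run_epoch(frozen) -> validation loss for one epoch (hypothetical callback).
    Each phase stops once the loss fails to improve for `patience` epochs.
    """
    history = []
    for frozen in (True, False):          # phase 1: backbone frozen; phase 2: all parameters
        best, wait = float("inf"), 0
        while wait < patience:
            val_loss = run_epoch(frozen)
            history.append(val_loss)
            if val_loss < best:
                best, wait = val_loss, 0  # improvement: reset the patience counter
            else:
                wait += 1
    return history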
Fig. 10 shows a set of test results: (a) is the satellite cloud picture; in (b), gray regions are those the semantic segmentation model predicts as strong convection and black regions those predicted as non-strong-convection; (c) is the binary label map derived from the radar combined reflectivity map, where gray regions are labeled strong convection and black regions non-strong-convection.
While the present invention has been described with reference to the accompanying drawings, it is not limited to the above embodiments, which are illustrative rather than restrictive; those skilled in the art may make various modifications without departing from the spirit of the invention, and all such modifications fall within the protection of the claims of the present invention.

Claims (6)

1. A strong convection region identification method based on a satellite cloud picture, characterized by comprising the following steps:
(1) acquiring satellite and radar data and aligning the spatio-temporal resolution of the two;
(2) constructing a feature extraction network based on the ResNet algorithm;
(3) constructing a feature fusion network based on the pyramid pooling module;
(4) a training method combining the idea of transfer learning with the idea of frozen-model training;
(5) parameter settings centered on the exponential decay strategy, starting from a low learning rate.
2. The method for identifying the strong convection region based on the satellite cloud picture according to claim 1, wherein step (1) comprises: after the satellite cloud picture and the radar combined reflectivity are obtained, the longitude and latitude of each point in the radar image are derived from the known longitude and latitude of the image center point, using the distance and bearing of each point relative to that center; the radar image is then aligned with the satellite cloud picture in longitude and latitude using a bilinear interpolation algorithm.
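The alignment in claim 2 can be sketched as follows. This toy version assumes an equirectangular approximation (constant degrees per pixel around the radar center — an illustrative simplification, not the patent's exact geometry): each radar pixel gets a latitude/longitude from its offset to the center, and the other grid is then sampled there with bilinear interpolation:

```python
import numpy as np

def radar_pixel_latlon(row, col, center_latlon, center_rc, deg_per_px):
    # derive a pixel's lat/lon from its row/col offset to the known center point
    lat = center_latlon[0] - (row - center_rc[0]) * deg_per_px
    lon = center_latlon[1] + (col - center_rc[1]) * deg_per_px
    return lat, lon

def bilinear_sample(img, y, x):
    # sample img at fractional (row, col) coordinates
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x1]
            + dy * (1 - dx) * img[y1, x0] + dy * dx * img[y1, x1])
```

Mapping every radar pixel to a lat/lon and bilinearly sampling the satellite grid at the corresponding fractional coordinates yields the pixel-aligned pair of images used for training.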
3. The method for identifying the strong convection region based on the satellite cloud picture according to claim 1, wherein step (2) refers to constructing a ResNet50 feature extraction network; the ResNet design rests on two basic modules, the Conv Block and the Identity Block, which are stacked according to a fixed rule to form the required ResNet50, consisting of 4 Conv Blocks and 12 Identity Blocks.
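The 4 + 12 block count in claim 3 follows directly from ResNet50's standard stage layout of [3, 4, 6, 3] bottleneck blocks: the first block of each stage is a Conv Block (it changes the feature-map shape via a projection shortcut), and the remaining blocks are Identity Blocks:

```python
# ResNet50 stacks four stages of bottleneck blocks: 3 + 4 + 6 + 3 = 16 in total.
stage_blocks = [3, 4, 6, 3]

conv_blocks = len(stage_blocks)                      # one Conv Block opens each stage
identity_blocks = sum(n - 1 for n in stage_blocks)   # the rest keep the input shape

print(conv_blocks, identity_blocks)
```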
4. The method for identifying strong convection regions based on satellite cloud pictures according to claim 1, wherein step (3) is constructing the pyramid pooling feature fusion network: first, the ResNet50 feature extraction network produces the feature map of its last convolutional layer, with a resolution of 60 × 60; then a pyramid pooling module is applied to collect representations of different sub-regions, i.e., the feature map is pooled at several scales, yielding pooled feature maps of sizes 3 × 3, 6 × 6, 15 × 15 and 30 × 30; a 1 × 1 convolution is applied to each of the 4 feature maps of different sizes to reduce dimensionality, after which they are upsampled by bilinear interpolation and concatenated with the original feature map into a new layer that forms the final feature representation, carrying both local and global context information; finally, the merged result is passed to a convolutional layer to obtain the pixel-by-pixel semantic segmentation result.
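The multi-scale pooling in claim 4 can be sketched in NumPy as adaptive average pooling over a single 60 × 60 channel. This is a toy stand-in: the real module pools multi-channel feature maps and follows with 1 × 1 convolutions, bilinear upsampling, and concatenation, which are omitted here:

```python
import numpy as np

def adaptive_avg_pool(fmap, out_size):
    # average each of the out_size x out_size equal sub-regions of fmap
    h, w = fmap.shape
    pooled = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            r0, r1 = i * h // out_size, (i + 1) * h // out_size
            c0, c1 = j * w // out_size, (j + 1) * w // out_size
            pooled[i, j] = fmap[r0:r1, c0:c1].mean()
    return pooled

# the four pyramid scales named in the claim, applied to a 60 x 60 feature map
fmap = np.arange(60 * 60, dtype=np.float64).reshape(60, 60)
pyramid = [adaptive_avg_pool(fmap, s) for s in (3, 6, 15, 30)]
```

Because 60 is divisible by every pyramid scale, the sub-regions tile the map exactly, so each pooled map preserves the global mean while summarizing progressively finer sub-regions.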
5. The strong convection region identification method based on the satellite cloud picture according to claim 1, wherein step (4) is a training method comprising the following steps:
(1) first loading the overall model architecture together with pre-trained model parameters, thereby initializing the feature extraction network ResNet50 and the feature fusion model;
(2) loading the satellite cloud picture training set data, applying numerical normalization preprocessing to the initial data, and loading the radar combined reflectivity label data;
(3) performing a feedforward pass through the whole network, in which the feature extraction network ResNet50 and the feature fusion model produce a segmentation result map, and computing the loss between the satellite cloud picture strong convection segmentation result and the labeled radar combined reflectivity image with the Dice Loss function;
(4) updating the learning rate of the neural network with an exponential decay strategy, then updating the network parameters by stochastic gradient descent;
(5) repeating steps (2), (3) and (4) in a loop, and introducing an early stopping mechanism during training to obtain better generalization: training stops and the final training result is output when the generalization loss exceeds a certain threshold, when the generalization loss barely changes over a specified number of consecutive epochs, or when the generalization error increases over a specified number of consecutive epochs;
(6) to accelerate training, introducing the idea of freeze training: parameter updates of the feature extraction network are frozen first and the feature fusion network that follows it is updated preferentially; once the early stopping mechanism of step (5) triggers, the feature extraction network is unfrozen, training continues from the parameters obtained so far, and the parameters of the whole network are updated together; after the early stopping mechanism of step (5) has triggered twice, all training stops, and the training result at that point is the final required result.
6. The method for identifying the strong convection region based on the satellite cloud picture according to claim 1, wherein step (5) sets the hyper-parameter details for training: the learning rate during frozen-model training is determined by tuning to be 0.0001 and during unfrozen-model training to be 0.00001; an exponentially decaying learning-rate update strategy is adopted; Batch_size is set to 2 to prevent GPU memory overflow; and an early stopping mechanism is used, whereby training stops and the training result is output when the generalization loss exceeds a certain threshold, when the generalization loss barely changes over a specified number of consecutive epochs, or when the generalization error increases over a specified number of consecutive epochs.
CN202111454780.3A 2021-12-01 2021-12-01 Strong convection region identification method based on satellite cloud picture Pending CN114170528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111454780.3A CN114170528A (en) 2021-12-01 2021-12-01 Strong convection region identification method based on satellite cloud picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111454780.3A CN114170528A (en) 2021-12-01 2021-12-01 Strong convection region identification method based on satellite cloud picture

Publications (1)

Publication Number Publication Date
CN114170528A true CN114170528A (en) 2022-03-11

Family

ID=80482173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111454780.3A Pending CN114170528A (en) 2021-12-01 2021-12-01 Strong convection region identification method based on satellite cloud picture

Country Status (1)

Country Link
CN (1) CN114170528A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778354A (en) * 2023-08-08 2023-09-19 Nanjing University of Information Science and Technology Deep learning-based visible light synthetic cloud image marine strong convection cloud cluster identification method
CN116778354B (en) * 2023-08-08 2023-11-21 Nanjing University of Information Science and Technology Deep learning-based visible light synthetic cloud image marine strong convection cloud cluster identification method

Similar Documents

Publication Publication Date Title
Wang et al. Multiscale visual attention networks for object detection in VHR remote sensing images
CN113780296B (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN111914686B (en) SAR remote sensing image water area extraction method, device and system based on surrounding area association and pattern recognition
CN111738111A (en) Road extraction method of high-resolution remote sensing image based on multi-branch cascade void space pyramid
CN113469278B (en) Strong weather target identification method based on deep convolutional neural network
CN114943876A (en) Cloud and cloud shadow detection method and device for multi-level semantic fusion and storage medium
CN111428862B (en) Polar unbalanced space-time combined convection primary short-term prediction method
CN114022408A (en) Remote sensing image cloud detection method based on multi-scale convolution neural network
Feng et al. Embranchment cnn based local climate zone classification using sar and multispectral remote sensing data
CN115661932A (en) Fishing behavior detection method
CN111191704B (en) Foundation cloud classification method based on task graph convolutional network
CN114170528A (en) Strong convection region identification method based on satellite cloud picture
CN115861756A (en) Earth background small target identification method based on cascade combination network
CN117726954B (en) Sea-land segmentation method and system for remote sensing image
Huan et al. MAENet: multiple attention encoder–decoder network for farmland segmentation of remote sensing images
CN113361528B (en) Multi-scale target detection method and system
Wang et al. YOLO V4 with hybrid dilated convolution attention module for object detection in the aerial dataset
CN114419444A (en) Lightweight high-resolution bird group identification method based on deep learning network
Wang et al. Deep learning in extracting tropical cyclone intensity and wind radius information from satellite infrared images—A review
CN114386654A (en) Multi-scale numerical weather forecasting mode fusion weather forecasting method and device
Kaparakis et al. Wf-unet: Weather fusion unet for precipitation nowcasting
Qian et al. Cloud detection method based on improved deeplabV3+ remote sensing image
CN115984714B (en) Cloud detection method based on dual-branch network model
Liu et al. Integration transformer for ground-based cloud image segmentation
Shi et al. Complex optical remote-sensing aircraft detection dataset and benchmark

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination