CN111666903B - Method for identifying thunderstorm cloud cluster in satellite cloud picture - Google Patents

Method for identifying thunderstorm cloud cluster in satellite cloud picture

Info

Publication number
CN111666903B
Authority
CN
China
Prior art keywords
neural network
convolutional neural
training
cloud picture
satellite
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010521993.2A
Other languages
Chinese (zh)
Other versions
CN111666903A
Inventor
魏祥
陈曦
王峰
严勇杰
毛亿
孙蕊
聂建强
Current Assignee
CETC 28 Research Institute
Original Assignee
CETC 28 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 28 Research Institute
Priority to CN202010521993.2A
Publication of CN111666903A
Application granted
Publication of CN111666903B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention provides a method for identifying thunderstorm cloud clusters in satellite cloud pictures, and belongs to the field of artificial intelligence applications. The method mainly uses the public satellite cloud picture data set of the Japanese Himawari ("sunflower") satellite, compared against the meteorological radar charts currently used for thunderstorm early warning, and trains a neural network system to learn the characteristics of thunderstorm cloud clusters in satellite cloud pictures, thereby achieving automatic identification of thunderstorm cloud clusters. The method can accurately identify the position and extent of a thunderstorm cloud cluster in a satellite cloud picture, and the identification process is fast. The invention also improves the convergence algorithm used when training the neural network system, reducing the time consumed by training and thereby providing effective technical support for the prediction and early warning of meteorological disasters.

Description

Method for identifying thunderstorm cloud cluster in satellite cloud picture
Technical Field
The invention belongs to the field of artificial intelligence application, and particularly relates to a method for identifying a thunderstorm cloud cluster in a satellite cloud picture.
Background
Thunderstorms are locally strong convective weather occurring in tropical and temperate regions. They are often accompanied by lightning strikes, lightning, strong wind and heavy precipitation, and are characterized by sudden onset, high intensity and short duration, making them highly destructive. By distinguishing the geographical position and regional extent of a thunderstorm cloud cluster from meteorological data such as weather radar and cloud pictures, and predicting its travel path, destructive hazards such as lightning strikes in thunderstorm weather can be predicted and guarded against in advance, minimizing the resulting damage and effectively safeguarding lives and property in the affected area. At the present stage, the most effective observation means for thunderstorms is weather radar; the technology for observing cloud clusters in satellite cloud pictures has developed slowly, and radar observation cannot provide continuous coverage in some remote regions. Meanwhile, the resolution of satellite cloud pictures keeps improving and the interval between images keeps shortening, the physical boundary of a thunderstorm cloud cluster can be obtained from them, and they are therefore well suited to thunderstorm cloud cluster identification.
With the rapid development of artificial intelligence, it can be applied in more and more fields. At the present stage, identifying cloud clusters in a satellite cloud picture requires very rich professional knowledge, and the position of a thunderstorm cloud cluster still cannot be located accurately. Applying a neural network system within a meteorological monitoring system to learn the characteristics of thunderstorm cloud clusters in satellite cloud pictures achieves automatic identification of thunderstorm cloud clusters and compensates for the limitations of the current identification process. Using a neural network system to distinguish and predict thunderstorm cloud clusters in satellite cloud pictures therefore has very important practical value.
Disclosure of Invention
In view of the existing technical problems and the latest scientific and technological progress, the invention provides a method for identifying thunderstorm cloud clusters in satellite cloud pictures. It applies a neural network system to the monitoring of meteorological cloud pictures, strengthens the automatic identification of thunderstorm cloud clusters, improves the matching speed of thunderstorm cloud cluster identification, and meets the requirements placed on artificial intelligence application systems in the big-data era.
The method combines meteorological satellite theory with computer image theory to study the detection and classification of meteorological satellite cloud clusters, discusses how computer technology can best be used to realize such detection, verifies the soundness of the designed algorithm through application examples, further refines the detection process, and finally obtains satisfactory detection and classification results, raising the level of the field.
Based on a deep-learning convolutional neural network and aimed at infrared satellite cloud pictures, the method combines the cloud picture characteristics of thunderstorm weather, adopts an improved two-path multi-pooling input-cascade convolutional neural network model, optimizes the model with segmented image features and an improved Adam-based training algorithm, and applies the trained model to identify thunderstorm weather in satellite cloud pictures.
The invention specifically comprises the following steps:
step 1, preprocessing an image;
step 2, marking a meteorological cloud picture;
step 3, establishing an improved convolutional neural network model;
and 4, training the convolutional neural network model, and identifying the thunderstorm cloud cluster based on the trained convolutional neural network model.
The step 1 comprises the following steps:
step 1-1, analyzing satellite cloud picture data:
step 1-2, median filtering is carried out on the image:
and 1-3, performing gray level histogram enhancement.
The step 1-1 comprises the following steps:
According to the electromagnetic wavelengths used by the Japanese Himawari meteorological observation satellite during observation, the original transmitted satellite data are parsed into an infrared cloud picture; the geographic coordinates in the satellite cloud picture are identified according to an algorithm provided with the Himawari data set, positions on the earth's surface are represented continuously in the plane of the satellite cloud picture, and a point-to-point functional correspondence is established between surface positions and pixel points in the satellite cloud picture.
The step 1-2 comprises the following steps:
In a satellite or radar system, noise is inevitably introduced during generation, processing and data analysis. Noise not only reduces image quality but also adversely affects the subsequent image labeling, feature extraction and model training. Median filtering with a nonlinear filter is therefore used to suppress this noise while protecting the edge details of the satellite cloud picture: a filtering weight coefficient is computed inside a local window, and the window is traversed over the image, removing noise while retaining detail, which facilitates the next processing step.
The weighted median filter weight coefficient W (i, j) is calculated using the following formula:
W(i,j)=[W(N+1,N+1)-α·d·D/m]
wherein the local window size is (2N+1) × (2N+1), α is a constant, d is the distance from point (i, j) to the center of the local window (i and j denote the abscissa and ordinate respectively), and D and m are the variance and mean of the local window; median filtering is performed using these weight coefficients. Since α, d, D and m are all non-negative, the weight of the center point is clearly the largest. Generally speaking, in relatively uniform regions an abrupt change is mainly caused by noise; in such regions the local variance is small, α·d·D/m is close to zero, and the weights of the pixels in the window are approximately equal, which is equivalent to ordinary median filtering, so the abrupt point can be removed. In regions containing detail or boundaries the local variance is large, so the weight of a pixel decreases rapidly as its distance from the center grows; gray values near the window center are retained, preserving detail. Experiments show that a window size of 5 × 5 with α = 1 gives good results on most images.
However, in initial processing it was found that in regions where both D and m are small, D/m may still be large, so the expected effect is not achieved and α can only be adjusted manually. Considering the statistical characteristics of satellite cloud picture noise, the calculation of the weighted median filtering weight coefficient is improved (the improved formula appears as an image in the source and is not reproduced here), wherein D_(i,j) is the variance of the local window centered at (i, j) and C is the logarithmic compression range.
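As a concrete illustration, the weighting scheme above can be sketched in Python with NumPy. The center weight W(N+1, N+1) is not specified in the text, so taking it equal to the window size, and rounding weights to integers for the weighted median, are assumptions of this sketch, not part of the patent.

```python
import numpy as np

def weighted_median_filter(img, N=2, alpha=1.0):
    """Weighted median filter: weight = center_weight - alpha * d * D / m,
    where d is the distance to the window center and D, m are the local
    window's variance and mean (per the formula above)."""
    size = 2 * N + 1
    pad = np.pad(img.astype(float), N, mode='edge')
    out = np.empty(img.shape, dtype=float)
    yy, xx = np.mgrid[-N:N + 1, -N:N + 1]
    dist = np.sqrt(yy ** 2 + xx ** 2)      # d for every window position
    center_weight = float(size)            # assumed value of W(N+1, N+1)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + size, j:j + size]
            D, m = win.var(), win.mean() + 1e-9
            w = np.maximum(np.rint(center_weight - alpha * dist * D / m), 1)
            # weighted median: repeat each sample by its integer weight
            out[i, j] = np.median(np.repeat(win.ravel(), w.astype(int).ravel()))
    return out
```

In a smooth region the weights come out nearly equal and the filter behaves like an ordinary median filter, which is exactly the behavior the text describes.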
The steps 1-3 comprise: because meteorological satellite cloud pictures are acquired at different wavelengths, different types of satellite cloud pictures have different characteristics and direct observation is very difficult; an enhancement method based on adaptive gray histogram equalization is therefore used, processing the gray values according to the local statistical characteristics of the pixels.
The adaptive histogram equalization method saves the detail in the original image before equalization and adds it back during equalization. The gray values of the satellite cloud picture are transformed accordingly (the transformation formula appears as an image in the source), wherein x_(i,j) and x'_(i,j) are the gray values before and after transformation, m_(i,j) is the mean of the window neighborhood centered on x_(i,j), and H(·) is the histogram-equalization transform function. The parameter k satisfies the following condition: when x_(i,j) lies in a detail-free region, i.e. its gray value differs from the surrounding pixels by no more than 20, the image is locally smooth with no rapidly changing gray information and k approaches 0; otherwise k approaches 1.
Because the resolution of the original satellite cloud picture is very large, processing the whole image directly gives poor results, so a small window W is selected on the original image for adaptive processing. To achieve the adaptation, the parameter k is computed from the neighborhood gray variance within W: k is driven by the ratio of σ_W², the gray-level variance within window W, to σ_n², the noise variance of the original satellite cloud picture, with a proportionality coefficient (the exact formula appears as an image in the source). Since the image neighborhood variance should be greater than or equal to the noise variance, the details of the image are enhanced in the process.
The step 2 comprises the following steps:
Step 2-1, data selection: the basic information of the satellite cloud pictures obtained in step 1 is compared; it mainly comprises the latitude-longitude range covered by each picture and its cloud-layer range information. Typical thunderstorm weather events are selected for study, the basic information of each satellite cloud picture is recorded using meteorological domain knowledge, and the latitude-longitude information of the satellite cloud picture is compared with that of the radar base reflectivity image to ensure that the classic thunderstorm events are correctly located in the satellite cloud picture. The basic satellite cloud picture information is then organized to facilitate the data labeling of step 2-2 and the construction of the data set used to train and validate the neural network system.
Step 2-2, data annotation: according to the book Doppler Radar and Meteorological Observations, when the intensity of a region in a radar base reflectivity image exceeds 40 dBZ, the region is essentially experiencing thunderstorm weather accompanied by lightning strikes, heavy precipitation and similar phenomena. Using the point-to-point correspondence obtained in step 1-1, the base reflectivity images published on the national weather website are compared to accurately obtain the basic thunderstorm information in the weather cloud picture, including edges and positions, and manual labeling is performed using the satellite cloud picture information organized in step 2-1 to form the data set needed for training and validation. The data set is divided into a training set and a validation set at a ratio of 7:3, where the training set is used to train the proposed model and the validation set is used to test and analyze the trained model.
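The 7:3 split can be made reproducible with a fixed shuffle seed; a minimal helper (the function name and seed are illustrative, not from the patent):

```python
import random

def split_dataset(samples, train_ratio=0.7, seed=42):
    """Shuffle once, then cut at the 7:3 boundary: the first 70% of the
    shuffled samples train the model, the remaining 30% validate it."""
    rng = random.Random(seed)          # fixed seed -> reproducible split
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```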
The step 3 comprises the following steps:
Step 3-1, establishing a convolutional neural network model comprising two convolutional neural network structures: each structure comprises an input layer, convolutional layers, max-pooling layers, a fully connected layer, a final Softmax classification layer and an output layer. The two structures are connected in parallel to obtain the model; they form a local path, which uses a 7 × 7 convolution kernel, and a global path, which uses a 13 × 13 convolution kernel.
A general convolutional neural network comprises an input layer, convolutional layers, max-pooling layers, a fully connected layer, a final Softmax classification layer and an output layer, and classifies targets by extracting their features. Here two such CNNs are connected in parallel and extract features of the target simultaneously: the path with the smaller 7 × 7 convolution kernel and the path with the larger 13 × 13 kernel are referred to as the local path and the global path respectively. Using convolution kernels of different sizes means the predicted label of a pixel is influenced by two aspects: the region of visual detail immediately surrounding the pixel, and its larger surrounding context.
Step 3-2, performing two convolution operations in the local and global paths: in the local path, the first convolution is followed by a convolution with a 3 × 3 kernel; the feature maps of the local path and the global path are concatenated and then sent to the output layer. Convolution stacks of 4, 8 and 12 layers are designed for the local path, images are fed to each, and the results are compared.
To allow concatenation of the top hidden layers of the two paths, the local path uses two layers, with a kernel (filter) size of 3 × 3 in the second layer. Although this means the effective feature maps at the top of each path are the same size, the parameterization of the global path models features in that same region more directly and flexibly. Finally the feature maps of the local and global paths are concatenated and sent to the output layer.
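The text requires that the top hidden layers of the two paths end up the same size so they can be concatenated. The patent does not list the pooling sizes, so the shape arithmetic below borrows the kernel/pool configuration of the well-known two-path segmentation CNN this model resembles (7×7 conv, 4×4 pool, 3×3 conv, 2×2 pool, all stride 1, versus a single 13×13 conv); those pool sizes are an assumption.

```python
def out_size(n, k, stride=1):
    # 'valid' convolution or pooling: output = (n - k) // stride + 1
    return (n - k) // stride + 1

def local_path(n):
    n = out_size(n, 7)   # 7 x 7 convolution (first layer, per the text)
    n = out_size(n, 4)   # 4 x 4 max-pool, stride 1 (assumed)
    n = out_size(n, 3)   # 3 x 3 convolution (second layer, per the text)
    n = out_size(n, 2)   # 2 x 2 max-pool, stride 1 (assumed)
    return n

def global_path(n):
    return out_size(n, 13)  # single 13 x 13 convolution

# Both paths shrink the input by 12 pixels in each dimension, so their
# top feature maps always align and can be concatenated channel-wise.
```

For a 33 × 33 input patch both paths produce 21 × 21 feature maps.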
Step 3-3, the input-cascade structure: through connections between convolutional layers, the output of a conventional convolutional neural network is added as an extra input to the global path. This lets the neural network system exploit the efficiency of CNNs when processing the satellite cloud picture while also directly modeling the dependence between adjacent labels in the segmentation, so the final prediction model is influenced by nearby labels; finally, the improved model of step 3-2 is used as the subsequent connection of the two convolutional neural network structures. In the input-cascade architecture, the output of the first network is provided directly as input to the local path of the second.
CNNs are computationally efficient. To exploit this efficiency while directly modeling the dependencies between adjacent labels in the segmentation, so that the final prediction model is influenced by nearby labels, the cascade is realized through the concatenation of convolutional layers, with the output of the first CNN serving as an additional input to a layer of the second CNN. The same two-path structure is used as the subsequent connection for both CNNs.
The step 4 comprises the following steps:
Step 4-1, capturing the feedback term of the change:
Features of the thunderstorm cloud cluster in the satellite cloud picture are extracted, i.e. identified and classified, according to the relative change of the objective function value. In this process, f_(t-1) and f_(t-2) denote the two most recent values of the objective function. If f_(t-1) ≥ f_(t-2), the change r_t is computed by one formula, and if f_(t-1) < f_(t-2), by another (both formulas appear as images in the source and are not reproduced here). The value of r_t is always non-negative and may be smaller or larger than 1, so it captures both relative increases and decreases.
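The two case formulas for r_t are only images in the source. A reconstruction consistent with the stated properties (r_t ≥ 0, can be below or above 1, measures relative increase or decrease) is the Eve-style relative change below; treat it as an assumption, not the patent's exact formula.

```python
def relative_change(f_prev1, f_prev2, eps=1e-12):
    """r_t computed from the two most recent objective values f_{t-1}
    and f_{t-2}: the absolute change relative to the smaller of the
    two (assumed reconstruction; eps guards against division by zero)."""
    lo, hi = sorted((f_prev1, f_prev2))
    return (hi - lo) / (abs(lo) + eps)
```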
Step 4-2, smoothed tracking makes the algorithm converge: the improved convolutional neural network model established in step 3 is trained; the network must be trained to reach the required accuracy. During training, the feedback term d_t = β·d_(t-1) + (1−β)·r_t is defined, where β ∈ [0, 1) is the decay rate; smooth tracking damps the variation of d_t. A smoothed estimate of the objective function at time t−2 is maintained, initialized at time 0 with f_0, the actual value of the objective at time 0. To keep the training convergent, d_t is confined to a range whose lower and upper bounds, k and K, are both greater than 0, so that k < γ_t ≤ K. The actual feedback value γ_t used by the algorithm is d_t restricted to this range; the coefficients in the intermediate derivation and the resulting expressions for γ_t appear as images in the source and are not reproduced here.
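Putting the smoothing and clipping together, a minimal sketch of the feedback coefficient (the initial value d_0 and the bounds k = 0.1, K = 10 are illustrative; the patent only requires 0 < k < γ_t ≤ K):

```python
def feedback_coefficients(r_values, beta=0.9, k_lo=0.1, K_hi=10.0, d0=1.0):
    """d_t = beta * d_{t-1} + (1 - beta) * r_t, then gamma_t is d_t
    confined to [k, K] so the training stays convergent."""
    d, gammas = d0, []
    for r in r_values:
        d = beta * d + (1 - beta) * r
        gammas.append(min(max(d, k_lo), K_hi))
    return gammas
```

Because γ_t is clamped, a single noisy batch can neither blow up nor stall the effective step size.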
The improved convolutional neural network model established in step 3 is trained with the training set prepared in step 2, using mini-batch updates; the parameters of the neural network are saved every 50 iterations. Training ends when the loss of the neural network has stabilized, and the final parameters are stored. During validation, the images of the validation set are fed into the neural network system, the saved parameters are used to compute the system's labeled cloud picture, and the result is compared with the validation labels from step 2 to obtain the accuracy.
To obtain the best labeling network, the accuracy is analyzed, improvements are made and the training process is repeated until the final network parameters are obtained and put into application.
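The training loop described above — mini-batch updates, a parameter snapshot every 50 iterations, and stopping once the loss is stable — can be skeletonized as follows. The loss curve here is a synthetic stand-in for real model updates, and the stability test (comparing losses a few iterations apart) is an assumption.

```python
def train_with_checkpoints(num_iters, save_every=50, patience=5, tol=1e-4):
    """Mini-batch training skeleton: save a checkpoint every `save_every`
    iterations and stop early once the loss has stabilised."""
    checkpoints, losses = [], []
    loss = 1.0
    for it in range(1, num_iters + 1):
        loss *= 0.95                       # stand-in for one mini-batch update
        losses.append(loss)
        if it % save_every == 0:
            checkpoints.append(it)         # save parameters every 50 iterations
        if len(losses) > patience and abs(losses[-1] - losses[-1 - patience]) < tol:
            break                          # loss stable: end training
    return checkpoints, it
```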
The invention has the following beneficial effects:
the neural network system is used for recognizing and researching the satellite cloud pictures, so that the thunderstorm cloud clusters in the satellite cloud pictures can be automatically labeled, the flight trajectory of the airplane is subjected to auxiliary decision making in the aerospace field, and the safety and the effectiveness of the airplane can be avoided from dangerous conditions caused by thunderstorm weather. In addition, in the identification process of the thunderstorm cloud cluster, professional workers for identifying the satellite cloud picture are not needed, so that the use universality is increased, and the method also has an auxiliary effect on measures for effectively avoiding thunderstorm weather disasters in other fields.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Fig. 1 is an example satellite cloud picture.
Fig. 2 is a diagram of a classical convolutional neural network structure.
Fig. 3 is a diagram of an improved multi-scale two-way convolutional network architecture.
Fig. 4 is a diagram of a modified input cascade structure.
Fig. 5 is a flow chart of a thunderstorm cloud identification process based on a neural network system.
FIG. 6 is a verification diagram and a final labeling information diagram.
Detailed Description
As shown in fig. 5, the present invention provides a method for identifying a thunderstorm cloud cluster in a satellite cloud chart, which comprises the following steps:
step 1, preprocessing an image;
step 2, marking a meteorological cloud picture;
step 3, establishing a convolutional neural network model;
step 4, improving the training model;
the step 1 comprises the following steps:
step 1-1, analyzing satellite cloud picture data:
Analyzing the original transmitted satellite data into an infrared cloud picture according to the electromagnetic wavelengths used by the Japanese Himawari meteorological observation satellite, identifying the geographic coordinates in the satellite cloud picture according to the method provided with the Himawari data set, representing positions on the earth's surface continuously in the plane of the satellite cloud picture, and establishing a point-to-point functional correspondence between surface positions and pixel points. As shown in fig. 1, an example satellite cloud picture covering the territory of China at a resolution of 1961 × 1371: the covered range runs from 73°40′E in the west to 135°2′30″E in the east and from 3°52′N in the south to 53°33′N in the north, the actual territorial extent of China. Taking the upper-left corner of the picture as the coordinate origin, the length as the x axis and the width as the y axis, the upper-left pixel is (0, 0); the east-west extent is divided into 1961 equal parts and the north-south extent into 1371 equal parts, so each part represents 1.88 minutes east-west and 2.17 minutes north-south, giving the actual range represented by each pixel. The surface position is then determined from the pixel position: for example, pixel (200, 400) represents approximately 128°16′E, 39°5′N.
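The linear point-to-point mapping just described can be written out directly. The image size and corner coordinates are taken from the text; placing the longitude origin at the western edge (longitude increasing with x) is an assumption of this sketch — the worked example in the text is consistent with the latitude half of this mapping.

```python
def pixel_to_geo(x, y, width=1961, height=1371,
                 lon_west=73 + 40 / 60, lon_east=135 + 2.5 / 60,
                 lat_south=3 + 52 / 60, lat_north=53 + 33 / 60):
    """Map pixel (x, y), origin at the upper-left corner, to decimal
    degrees (longitude, latitude) with the linear scheme in the text:
    one x step is about 1.88 arc-minutes, one y step about 2.17."""
    lon = lon_west + x * (lon_east - lon_west) / width
    lat = lat_north - y * (lat_north - lat_south) / height
    return lon, lat
```

For example, y = 400 maps to a latitude of about 39°3′N under this sketch, close to the 39°5′N quoted in the text.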
Step 1-2, median filtering is carried out on the image:
In a satellite or radar system, noise is inevitably introduced during generation, processing and data analysis; it reduces image quality and adversely affects the subsequent image labeling, feature extraction and model training. Median filtering with a nonlinear filter is used to suppress this noise while protecting the edge details of the satellite cloud picture. The weighted median filtering weight coefficient is calculated with the following formula:
W(i,j)=[W(N+1,N+1)-α·d·D/m]
wherein the local window size is (2N+1) × (2N+1), α is a constant, d is the distance from point (i, j) to the center of the local window, and D and m are the variance and mean of the local window. Since α, d, D and m are all non-negative, the weight of the center point is clearly the largest. Generally speaking, in relatively uniform regions an abrupt change is mainly caused by noise; in such regions the local variance is small, α·d·D/m is close to zero, and the weights of the pixels in the window are approximately equal, which is equivalent to ordinary median filtering, so the abrupt point can be removed. In regions containing detail or boundaries the local variance is large, so the weight of a pixel decreases rapidly as its distance from the center grows; gray values near the window center are retained, preserving detail.
However, in initial processing it was found that in regions where both D and m are small, D/m may still be large, so the expected effect is not achieved and α can only be adjusted manually. Considering the statistical characteristics of satellite cloud picture noise, the weight calculation is improved (the improved formula appears as an image in the source and is not reproduced here), wherein D_(i,j) is the variance of the local window centered at (i, j) and C is the logarithmic compression range.
Step 1-3, enhancing a gray level histogram:
Because meteorological satellite cloud images are acquired at different wavelengths, different types of satellite cloud image have different characteristics and are very difficult to observe directly. An adaptive gray-histogram equalization enhancement method is therefore used, which processes the gray values of the image according to the local statistical characteristics of the pixels.
Adaptive histogram equalization preserves the detail of the original image before equalization is performed and adds it back during equalization. The specific calculation method is:
x′_{i,j} = H(x_{i,j}) + k·(x_{i,j} − m_{i,j})
where x_{i,j} and x′_{i,j} are the gray values of the image before and after the transformation, m_{i,j} is the mean of the window neighborhood centered on x_{i,j}, and H(·) is the transform function of histogram equalization. The parameter k should satisfy the following condition: when the neighborhood of x_{i,j} is essentially without detail, k approaches 0; otherwise k takes a larger value.
In order to achieve adaptivity, the neighborhood gray variance within the window W is selected as the adaptive variable:
k = k′ · (1 − σ_n² / σ_W²)
where σ_W² is the gray variance within the window W, σ_n² is the noise variance of the whole image, and k′ is a scaling factor. Since the neighborhood variance of the image should be greater than or equal to its noise variance, k is non-negative and the details of the image are enhanced in the process.
The step 2 comprises the following steps:
step 2-1, data selection:
Compare and record the basic information of the satellite cloud picture obtained in step 1 against the corresponding radar base-reflectivity information.
Step 2-2, data labeling:
According to the book Doppler Radar and Meteorological Observations, when the intensity of a region in a radar base-reflectivity image exceeds 40 dBZ, the region is essentially experiencing thunderstorm weather, accompanied by lightning strikes, heavy precipitation and similar phenomena. Using the functional relation between the ground and the cloud picture obtained in step 1-1, the base-reflectivity image is compared to accurately obtain the basic thunderstorm weather information in the meteorological cloud picture, including basic information such as edges and positions, which is then labeled manually. The data set is divided into a training set and a validation set at a ratio of 7:3, where the training set is used to train the proposed model and the validation set is used to test and analyze the trained model.
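The labeling rule (regions above roughly 40 dBZ are treated as thunderstorm weather) and the 7:3 train/validation split can be sketched as follows; the function and parameter names are illustrative.

```python
import numpy as np

def label_thunderstorm(reflectivity_dbz, threshold=40.0):
    """Boolean mask of radar bins at thunderstorm intensity (> ~40 dBZ)."""
    return np.asarray(reflectivity_dbz) > threshold

def split_dataset(samples, train_ratio=0.7, seed=0):
    """Shuffle and split samples into 7:3 training/validation partitions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(round(len(samples) * train_ratio))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```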
The step 3 comprises the following steps:
Step 3-1, as shown in fig. 3, two convolutional network structures:
As shown in fig. 1, a general convolutional neural network architecture comprises an input layer, convolutional layers, a max-pooling layer, a fully connected layer, a final Softmax classification layer and an output layer; it classifies objects by extracting their features. Here, two convolutional neural networks are connected in parallel and extract features of the target simultaneously. That is, the model consists of two CNNs: a path with a smaller 7×7 convolution kernel and a path with a larger 13×13 convolution kernel, referred to as the local path and the global path, respectively. By extracting features with convolution kernels of different sizes, the label predicted for a pixel is influenced by two aspects: the visual details of the area immediately surrounding the pixel and its larger peripheral context.
Step 3-2, improving two convolution models:
As shown in fig. 2, to allow concatenation of the top hidden layers of the two paths, the local path uses two layers, with a kernel (filter) of size 3×3 in the second layer. Although this means the valid feature maps at the top of each path have the same size, the parameterization of the global path models features in that same region more directly and flexibly. Finally, the feature maps of the local path and the global path are concatenated and fed to the output layer. Conv denotes the convolution operation and Pooling the max-pooling operation; Dropout refers to randomly discarding network nodes with a certain probability during training, so that the network structure differs between training iterations, improving training accuracy.
Since the choice of filter and the size of the input image are adjustable, convolution stacks of 4, 8 and 12 layers are designed for the first convolution stage based on this model, in order to find a suitable filter and thereby improve classification accuracy.
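A hedged sketch of the two-path model in PyTorch; channel counts, the dropout rate and the number of output classes are assumptions, and padding is chosen so that the local and global feature maps keep the input's spatial size and can be concatenated.

```python
import torch
import torch.nn as nn

class TwoPathCNN(nn.Module):
    """Sketch of the two-path architecture described above: a local path
    with a 7x7 first-layer kernel plus a 3x3 second layer, and a global
    path with a single large 13x13 kernel, concatenated before the
    output layer.  Channel counts, dropout rate and n_classes are
    illustrative assumptions."""

    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.local = nn.Sequential(                      # fine-texture path
            nn.Conv2d(in_ch, 64, 7, padding=3), nn.ReLU(),
            nn.Dropout2d(0.2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.global_ = nn.Sequential(                    # wide-context path
            nn.Conv2d(in_ch, 160, 13, padding=6), nn.ReLU(),
        )
        # concatenated features -> per-pixel class scores
        # (Softmax is left to the loss, e.g. nn.CrossEntropyLoss)
        self.out = nn.Conv2d(64 + 160, n_classes, 1)

    def forward(self, x):
        return self.out(torch.cat([self.local(x), self.global_(x)], dim=1))
```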
Step 3-3, inputting a series structure:
CNNs are computationally efficient, so the dependence between adjacent labels in the segmentation can be modeled directly while retaining that efficiency: the final prediction is influenced by nearby labels, so the output of the first CNN is used as an additional input to a layer of the second CNN, implemented through the concatenation of convolutional layers. Furthermore, the same two-path structure is used as the subsequent stage for both CNNs.
In the input-cascade configuration, the output of the first CNN is provided directly as input to the second CNN; it is simply treated as an additional image channel of the input patch. Following the design above, the number of convolutional layers in the local path of the two-path structure is increased, since smaller convolution kernels help focus on the texture formed by the finer pixels of the image. The operation flow of fig. 4 is therefore designed, and in the local path of the input-cascade architecture, convolution stacks of 8, 12 and 20 layers are set respectively.
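The input cascade, in which the first CNN's output becomes extra image channels for the second CNN, can be sketched as follows; the two stand-in networks are illustrative placeholders for the full two-path models.

```python
import torch
import torch.nn as nn

def cascade_forward(first_cnn, second_cnn, x):
    """Input-cascade sketch: the first CNN's per-class probability maps
    are appended to the input patch as additional image channels, and
    the combined tensor is fed to the second CNN (whose first layer must
    therefore expect the extra channels)."""
    probs = torch.softmax(first_cnn(x), dim=1)       # (B, n_classes, H, W)
    return second_cnn(torch.cat([x, probs], dim=1))  # stacked as channels

# illustrative stand-ins for the two CNNs (size-preserving convolutions);
# in the patent these would be the two-path networks of steps 3-1/3-2
first = nn.Conv2d(1, 2, 3, padding=1)        # 1 input channel -> 2 classes
second = nn.Conv2d(1 + 2, 2, 3, padding=1)   # input + class maps -> 2 classes
```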
Step 4 comprises the following steps:
step 4-1, capturing the feedback item of the change:
The features of the thunderstorm cloud cluster in the satellite cloud picture are extracted by identifying and classifying the relative change of the objective function value. In this process, f_{t−1} and f_{t−2} respectively denote the two most recent values of the objective function.
If f_{t−1} ≥ f_{t−2}, the change r_t is computed as
r_t = (f_{t−1} − f_{t−2}) / f_{t−2}
If f_{t−1} < f_{t−2}, the change r_t is computed as
r_t = (f_{t−2} − f_{t−1}) / f_{t−1}
The value of r_t is always non-negative and can be smaller or larger than 1, which captures a relative increase or decrease in the change.
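The change-feedback term above can be sketched directly; the formula is a reconstruction of the relative change |f_{t−1} − f_{t−2}| / min(f_{t−1}, f_{t−2}) implied by the text (the original equations are reproduced only as images).

```python
def relative_change(f_prev, f_prev2):
    """Relative change r_t between the two most recent objective values.

    Reconstructed (an assumption) as
        r_t = |f_{t-1} - f_{t-2}| / min(f_{t-1}, f_{t-2}),
    which is non-negative and may be below or above 1, capturing both
    relative increases and decreases of the objective.
    """
    if f_prev >= f_prev2:
        return (f_prev - f_prev2) / f_prev2
    return (f_prev2 - f_prev) / f_prev
```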
Step 4-2, smooth tracking enables the algorithm to be converged:
The neural network requires training to reach the required accuracy. In the training process an objective-function feedback term d_t = β·d_{t−1} + (1−β)·r_t is defined, where β ∈ [0,1) denotes the decay rate. The feedback term is kept well behaved by smooth tracking of the objective function. Let f̂_{t−2} be the smoothed estimate of the objective function at time t−2, with f̂_0 = f_0, i.e. the smoothed estimate at time 0 equals the actual value at time 0. To maintain convergence during neural network training, the clipped relative change γ_t is restricted to a range whose lower and upper bounds are k and K respectively, both greater than 0.
When f_{t−1} ≥ f̂_{t−2}, set
c_t = min(max(1 + k, f_{t−1}/f̂_{t−2}), 1 + K)
so the coefficients in the calculation process are 1 + k and 1 + K; the smoothed estimate is updated as f̂_{t−1} = c_t·f̂_{t−2}, and the actual value γ_t of the relative change is
γ_t = (f̂_{t−1} − f̂_{t−2}) / f̂_{t−2} = c_t − 1
at which time k ≤ γ_t ≤ K.
When f_{t−1} < f̂_{t−2}, k and K are likewise greater than 0, and the coefficients in the calculation process become 1/(1 + K) and 1/(1 + k), so that
c_t = min(max(1/(1 + K), f_{t−1}/f̂_{t−2}), 1/(1 + k)), f̂_{t−1} = c_t·f̂_{t−2}
while the actual value of the relative change is
γ_t = (f̂_{t−2} − f̂_{t−1}) / f̂_{t−1} = 1/c_t − 1
which again lies within [k, K]. The clipped value γ_t is then used as the change term when updating d_t. In this way smooth tracking prevents the feedback term d_t from being distorted by incidental jumps of the objective function during training.
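The smooth-tracking update can be sketched as a single step function; the clipping scheme follows the reconstruction above (the patent's own formulas appear only as images), so the constants and the exact update are assumptions.

```python
def smooth_track(f, f_hat_prev, d_prev, beta=0.9, k=0.1, K=10.0):
    """One step of the smooth-tracking feedback (assumed reconstruction).

    Clips the ratio of the new objective value to the previous smoothed
    estimate, updates the smoothed estimate, and folds the resulting
    bounded relative change gamma (in [k, K]) into the feedback term
    d_t = beta * d_{t-1} + (1 - beta) * gamma.
    Returns the updated (f_hat, d_t).
    """
    if f >= f_hat_prev:                      # objective went up (or held)
        lo, hi = 1.0 + k, 1.0 + K
    else:                                    # objective went down
        lo, hi = 1.0 / (1.0 + K), 1.0 / (1.0 + k)
    c = min(max(lo, f / f_hat_prev), hi)     # clipped ratio
    f_hat = c * f_hat_prev                   # smoothed estimate update
    gamma = abs(f_hat - f_hat_prev) / min(f_hat, f_hat_prev)  # in [k, K]
    d = beta * d_prev + (1.0 - beta) * gamma
    return f_hat, d
```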
In the process of preprocessing the satellite cloud picture data, in order to improve the accuracy, the image data needs to be cut, so that the information content in each image is reduced, and the neural network system can fully extract the image data. As shown in fig. 6, the resolution of the image is 653 × 684, and the original resolution of the image is small, so that excessive information is not compressed in the preprocessing process, and information loss is avoided. After the training process in the step 4, the network is reconstructed by using the stored network node information, the cloud picture is input into the reconstructed neural network, and the labeled result is obtained after calculation. The output of the neural network was compared with the annotated result in step 2 with an accuracy of 71%.
The present invention provides a method for identifying a thunderstorm cloud cluster in a satellite cloud picture, and there are many methods and ways to implement this technical scheme. The above description is only a preferred embodiment of the present invention; it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. All components not specified in this embodiment can be implemented by the prior art.

Claims (1)

1. A method for identifying a thunderstorm cloud cluster in a satellite cloud picture is characterized by comprising the following steps:
step 1, preprocessing an image;
step 2, marking a meteorological cloud picture;
step 3, establishing an improved convolutional neural network model;
step 4, training a convolutional neural network model, and identifying the thunderstorm cloud cluster based on the trained convolutional neural network model;
the step 1 comprises the following steps:
step 1-1, analyzing satellite cloud picture data:
step 1-2, median filtering is carried out on the image:
step 1-3, carrying out gray level histogram enhancement;
the step 1-1 comprises the following steps:
analyzing original transmission data of the satellite cloud picture into an infrared cloud picture, identifying geographic coordinates in the satellite cloud picture, continuously representing all positions of the earth sphere in a plane of the satellite cloud picture, and establishing a point-to-point corresponding function relationship between the earth surface position and pixel points in the satellite cloud picture;
the step 1-2 comprises the following steps:
the weighted median filter weight coefficient W (i, j) is calculated using the following formula:
W(i,j)=[W(N+1,N+1)-α·d·D/m]
wherein, the size of the local window is 2N +1, alpha is a constant, D is the distance from the point (i, j) to the center of the local window, i, j respectively represent the abscissa and the ordinate, and D and m are respectively the variance and the mean of the local window;
the calculation formula of the weighted median filtering weight coefficient is improved as follows:
(the improved formula is given only as an image in the original and is not reproduced here; it replaces the D/m term with a log-compressed function of the local variance)
wherein D_(i,j) is the variance of the local window centered at (i, j); c is the log compression range; median filtering is performed based on the weighted median filtering weight coefficient;
the steps 1-3 comprise:
calculating the gray value of the satellite cloud image by adopting the following formula:
x′_{i,j} = H(x_{i,j}) + k·(x_{i,j} − m_{i,j})
wherein x_{i,j} and x′_{i,j} are the gray values of the satellite cloud image before and after the transformation, m_{i,j} is the mean of the window neighborhood centered on x_{i,j}, and H(·) is the transform function of histogram equalization; the parameter k satisfies the following condition: when x_{i,j} is located where there is no detail, i.e. the difference between the gray value of the pixel and the gray values of its surrounding pixels does not exceed 20, k approaches 0; otherwise, the value of k approaches 1;
selecting the neighborhood gray variance in a window W as the adaptive variable for calculating the parameter k:
k = k′ · (1 − σ_n² / σ_W²)
wherein σ_W² is the gray variance within the window W, σ_n² is the noise variance of the original satellite cloud picture, and k′ is a proportionality coefficient;
the step 2 comprises the following steps:
step 2-1, data selection: comparing and recording basic information of the satellite cloud picture obtained in the step 1, wherein the basic information to be recorded comprises a longitude and latitude range contained in the satellite cloud picture and a cloud layer range of the cloud picture;
step 2-2, data annotation: comparing base-reflectivity images published on national meteorological websites by using the corresponding functional relation obtained in step 1-1, and labeling by using the basic information of the satellite cloud pictures sorted in step 2-1, so as to form the data set required for training and verification; the data set is divided into a training set and a verification set at a ratio of 7:3, wherein the training set is used for training the model and the verification set is used for testing and analyzing the trained model;
the step 3 comprises the following steps:
step 3-1, establishing a convolutional neural network model comprising two convolutional neural network structures: each convolutional neural network structure comprises an input layer, convolutional layers, a max-pooling layer, a fully connected layer, a final Softmax classification layer and an output layer; the two convolutional neural network structures are connected in parallel to obtain a convolutional neural network model comprising the two structures, namely a local path and a global path, wherein the local path is a path using a 7×7 convolution kernel and the global path is a path using a 13×13 convolution kernel;
step 3-2, improving the convolutional neural network model: performing two convolution operations in the local path and the global path, using a convolution kernel of size 3×3; the feature maps of the local path and the global path are concatenated and then sent to the output layer; convolution stacks of 4, 8 and 12 layers are respectively designed in the convolution process of the local path, images are respectively input, and the results are compared;
step 3-3, inputting a series structure: adding the output of a conventional convolutional neural network as the additional input of a global path by depending on the connection of convolutional layers, and finally using the improved convolutional neural network model in the step 3-2 as the subsequent connection of two convolutional neural network structures;
the step 4 comprises the following steps:
step 4-1, capturing the feedback item of the change:
using f_{t−1} and f_{t−2} to respectively represent the two most recent values of the objective function;
if f_{t−1} ≥ f_{t−2}, the change r_t is calculated as
r_t = (f_{t−1} − f_{t−2}) / f_{t−2};
if f_{t−1} < f_{t−2}, the change r_t is calculated as
r_t = (f_{t−2} − f_{t−1}) / f_{t−1};
the value of r_t is always non-negative and can be smaller or larger than 1, which captures an increase or decrease in the change;
step 4-2, smooth tracking enables the algorithm to be converged:
training the improved convolutional neural network model established in step 3, and defining an objective-function feedback term d_t = β·d_{t−1} + (1−β)·r_t in the training process,
wherein β ∈ [0,1) represents the decay rate; the feedback term d_t is kept well behaved by smooth tracking of the objective function: let f̂_{t−2} be the smoothed estimate of the objective function at time t−2, and f̂_0 = f_0 be the smoothed estimate at time 0, equal to the actual value at time 0; to maintain convergence during neural network training, the clipped relative change γ_t is within a range whose upper and lower bounds are K and k respectively, both greater than 0; setting, when f_{t−1} ≥ f̂_{t−2},
c_t = min(max(1 + k, f_{t−1}/f̂_{t−2}), 1 + K),
the coefficients in the calculation process are 1 + k and 1 + K, the smoothed estimate is updated as f̂_{t−1} = c_t·f̂_{t−2}, and finally the actual value γ_t of the relative change is
γ_t = (f̂_{t−1} − f̂_{t−2}) / f̂_{t−2},
at which time k ≤ γ_t ≤ K; when f_{t−1} < f̂_{t−2}, K and k are likewise greater than 0, and the coefficients in the calculation process are 1/(1 + K) and 1/(1 + k), so that
c_t = min(max(1/(1 + K), f_{t−1}/f̂_{t−2}), 1/(1 + k)),
f̂_{t−1} = c_t·f̂_{t−2},
while the actual value γ_t of the relative change is
γ_t = (f̂_{t−2} − f̂_{t−1}) / f̂_{t−1},
which again lies within [k, K];
training the improved convolutional neural network model established in step 3 with the training set of the data set prepared in step 2, wherein a mini-batch update method is used in the training process and the parameters of the neural network are stored every 50 iterations; when the loss of the neural network becomes stable during training, the training is finished and the final parameter information of the process is stored.
CN202010521993.2A 2020-06-10 2020-06-10 Method for identifying thunderstorm cloud cluster in satellite cloud picture Active CN111666903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010521993.2A CN111666903B (en) 2020-06-10 2020-06-10 Method for identifying thunderstorm cloud cluster in satellite cloud picture


Publications (2)

Publication Number Publication Date
CN111666903A CN111666903A (en) 2020-09-15
CN111666903B true CN111666903B (en) 2022-10-04

Family

ID=72386275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010521993.2A Active CN111666903B (en) 2020-06-10 2020-06-10 Method for identifying thunderstorm cloud cluster in satellite cloud picture

Country Status (1)

Country Link
CN (1) CN111666903B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204903B (en) * 2021-04-29 2022-04-29 国网电力科学研究院武汉南瑞有限责任公司 Method for predicting thunder and lightning
CN113570642B (en) * 2021-06-10 2024-01-05 国家卫星气象中心(国家空间天气监测预警中心) Static orbit satellite convection primary early warning method based on background field data and machine learning
CN116188488B (en) * 2023-01-10 2024-01-16 广东省第二人民医院(广东省卫生应急医院) Gray gradient-based B-ultrasonic image focus region segmentation method and device
CN116206163B (en) * 2023-05-04 2023-07-04 中科三清科技有限公司 Meteorological satellite remote sensing cloud picture detection analysis processing method
CN116778354B (en) * 2023-08-08 2023-11-21 南京信息工程大学 Deep learning-based visible light synthetic cloud image marine strong convection cloud cluster identification method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023177A (en) * 2016-05-14 2016-10-12 吉林大学 Thunderstorm cloud cluster identification method and system for meteorological satellite cloud picture
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
WO2018214195A1 (en) * 2017-05-25 2018-11-29 中国矿业大学 Remote sensing imaging bridge detection method based on convolutional neural network
CN109738970A (en) * 2018-12-07 2019-05-10 国网江苏省电力有限公司电力科学研究院 The method, apparatus and storage medium for realizing Lightning Warning are excavated based on lightning data


Also Published As

Publication number Publication date
CN111666903A (en) 2020-09-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant