CN113643312A - Cloud layer segmentation method based on true color satellite cloud picture and image processing - Google Patents
- Publication number
- CN113643312A (application CN202111184390.9A)
- Authority
- CN
- China
- Prior art keywords
- color
- value
- feature
- satellite cloud
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/12 — Image analysis; segmentation; edge-based segmentation
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T5/90 — Dynamic range modification of images or parts thereof
- G06T7/13 — Edge detection
- G06T7/90 — Determination of colour characteristics
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20172 — Image enhancement details
- G06T2207/20192 — Edge enhancement; edge preservation
Abstract
The invention relates to a cloud layer segmentation method based on true color satellite cloud pictures and image processing, comprising the following steps: acquiring a color-equalized satellite cloud picture and computing gray average values to obtain a cloud layer color feature value; calculating an edge feature value of the satellite cloud picture and fusing the two features to obtain a color enhancement feature value; graying the satellite cloud picture and establishing a gray level co-occurrence matrix to obtain texture feature values; fusing the texture features to obtain a texture fusion feature; fusing the color enhancement feature and the texture fusion feature to obtain multi-feature fusion feature values, from which a multi-feature fusion feature matrix is built; and inputting the multi-feature fusion feature matrix together with the true color satellite cloud picture into a neural network for training to obtain the segmented satellite cloud picture. By fusing and analyzing color, edge contour, and texture features of the cloud picture, the method better reflects the properties of each cloud region and segments thick clouds, thin clouds, and cloud edges well.
Description
Technical Field
The invention relates to the field of image processing, in particular to a cloud layer segmentation method based on satellite cloud pictures and image processing.
Background
A satellite cloud picture is an image of the earth observed from above by a meteorological satellite, reflecting cloud cover and surface characteristics. Satellite cloud pictures can be used to estimate weather conditions and their development trends, provide a basis for weather analysis and forecasting, and compensate for the lack of conventional observation data in regions without weather stations.
In practice, cloud identification on satellite cloud pictures is mostly carried out by subjective, empirical, or simple analytical methods, and future changes of the cloud picture can only be predicted by simple linear extrapolation. Such means cannot keep pace with the information era and restrict the improvement of weather forecasting, so research on methods for cloud classification and cloud cluster movement prediction is very important. Since cloud layer segmentation is the prerequisite for such research, segmenting the cloud layer of a satellite cloud picture has clear practical value.
Common cloud layer segmentation techniques include the threshold method, the spatial correlation method, and the bispectral method. These techniques can segment a cloud picture under certain conditions, but each has drawbacks. The threshold method segments the image according to a preset threshold, but in bad weather or other adverse conditions the threshold is set inaccurately, which degrades the result. The spatial correlation method mainly segments thick cloud layers, but detects the contours of thin or edge cloud layers poorly. The bispectral method detects cloud regions through their spectra, but because cloud cover and cloud shapes vary, the reflected sunlight signal is also uncertain.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a cloud layer segmentation method based on true color satellite cloud pictures and image processing. The invention adopts the following technical scheme:
a cloud layer segmentation method based on true color satellite cloud pictures and image processing comprises the following steps:
the method comprises the steps that a true color satellite cloud picture is subjected to histogram equalization processing to obtain a color equalization satellite cloud picture, the color average value of the color equalization satellite cloud picture is obtained, and a color characteristic value is obtained through the color average value of the color equalization satellite cloud picture; acquiring a color equalization satellite cloud picture edge characteristic value, and fusing a color characteristic and the edge characteristic to obtain a color enhancement characteristic value;
graying a satellite cloud picture with balanced colors, and obtaining texture characteristic values by establishing a gray level co-occurrence matrix; fusing the texture feature values to obtain texture fusion features;
fusing the color enhancement features and the texture fusion features to obtain multi-feature fusion feature values; establishing a multi-feature fusion feature matrix through the multi-feature fusion feature values; and inputting the multi-feature fusion feature matrix and the true color satellite cloud picture into a double-coding full convolution neural network to obtain a segmented satellite cloud picture.
The expression for obtaining the multi-feature fusion feature value is:

F = w1·F_ce + w2·F_tex

wherein F is the multi-feature fusion feature value, F_ce is the color enhancement feature value, F_tex is the texture fusion feature value, and w1, w2 are weights;

wherein the color enhancement feature value F_ce is obtained by superposing the edge feature onto the color feature through a mapping relation:

F_ce = F_c + k·F_e

wherein F_c is the color feature value, F_e is the edge feature value, and k is a mapping coefficient;

wherein the texture fusion feature value F_tex is obtained as follows: graying the color-equalized satellite cloud picture, establishing a gray level co-occurrence matrix, obtaining gray level feature values from it, assigning values over the satellite cloud picture through a sliding window to obtain a texture feature matrix composed of texture feature values, and fusing the several texture features:

F_tex = a·E + b·C + c·H

wherein E is the energy value, C is the contrast value, H is the entropy value, and a, b, c are mapping coefficients.
Further, the edge feature value F_e is obtained as follows: taking the central pixel of a sliding window as the reference, difference operations are performed against the color values of its 8 neighboring pixels, the results are averaged, and the average is assigned to the center of the sliding window:

F_e = (1/8) · Σ_{i=1..8} ( |R_0 − R_i| + |G_0 − G_i| + |B_0 − B_i| )

wherein R_0, G_0, B_0 are the three channel values of the window center point, and R_i, G_i, B_i are the three channel values of the i-th pixel in the 8-neighborhood of the window center point;
the color characteristic valueThe calculation method of (2) is as follows: the color average value is obtained by performing mean operation and superposition on three channels of the color equalization satellite cloud picture R, G, B, and the color characteristic value is obtained through the color average value:
wherein,in order to be a color characteristic value,for color equalizing the satellite cloud R channel gray values,is the gray value of the channel of the cloud graph G,is the gray value of the B channel of the cloud image,is the mean value of the gray scales of the R channel,is the average value of the gray scales of the G channel,is the average value of the gray scales of the B channel,are coefficients.
Further, the texture feature matrix comprises an energy feature matrix, a contrast feature matrix and an entropy feature value matrix;
the method for acquiring the texture characteristic value comprises the following steps:
wherein,in order to be the value of the energy,a gray level co-occurrence matrix is represented,representing the coordinates of the pixels in the gray level co-occurrence matrix,is the dimension of the gray level co-occurrence matrix;
wherein,in order to be a value of the contrast ratio,a gray level co-occurrence matrix is represented,representing the coordinates of the pixels in the gray level co-occurrence matrix,is a dimension of the gray level co-occurrence matrix,representing the absolute value of the pixel coordinate difference;
wherein,in the form of an entropy value, the value of the entropy,a gray level co-occurrence matrix is represented,representing the coordinates of the pixels in the gray level co-occurrence matrix,is the dimension of the gray level co-occurrence matrix.
Further, the method for establishing the gray level co-occurrence matrix comprises the following steps:
graying the color equalization satellite cloud picture to obtain a gray level image, quantizing the gray level of the obtained gray level image, and establishing a gray level co-occurrence matrix by taking the N-dimensional matrix as a sliding window;
further, the double-coding full convolution neural network comprises a data set module and a training module;
the data set module comprises a true color satellite cloud picture and an artificial annotation image corresponding to the true color satellite cloud picture, and the true color satellite cloud picture and the artificial annotation image are divided into a training set and a test set according to a proportion; the training set input adopts a training set true color satellite cloud picture and a corresponding multi-feature fusion feature matrix, and outputs an artificial labeling image corresponding to the training set true color satellite cloud picture; inputting a test set into a true color satellite cloud picture of the test machine and a corresponding multi-feature fusion feature matrix;
the training model comprises double encoders and a decoder, wherein the input of one encoder structure is a training set true color satellite cloud picture, and the output is a feature image subjected to convolution-pooling; the input of the other encoder structure is a multi-feature fusion feature matrix, the output is a multi-feature fusion feature image, and the image features are extracted through the encoder;
and performing parallel feature fusion on the feature images output by the double-encoder structure to serve as the input of the decoder structure, outputting the artificial annotation images corresponding to the true color satellite cloud images input by the encoder structure, and establishing a mapping relation between the feature images and the training set output images.
After a series of training and testing of the neural network, inputting a true color satellite cloud picture and its corresponding multi-feature fusion feature matrix yields the segmented cloud layer image.
Compared with the prior art, the invention has the beneficial effects that:
(1) In this application, the cloud layer color feature of the satellite cloud picture is extracted via the Euclidean distance in RGB space from each pixel to the average cloud color; compared with a simple threshold method, this helps the neural network learn the color difference between cloud and non-cloud regions.
(2) Based on the method, the edge characteristics of the satellite cloud picture are extracted, the edge region and the cloud layer color region are overlapped, the effect of enhancing the cloud layer edge can be achieved, and better learning of the edge region information of the cloud layer is facilitated during subsequent neural network coding.
(3) In this application, a multi-feature fusion feature is built for semantic segmentation. Compared with the prior art, the color feature reflects thick cloud regions well, the texture feature reflects thin cloud regions well, and the edge feature reflects cloud edge regions well; by extracting features from the multi-feature fusion matrix with a neural network, the characteristics of the entire cloud region can be learned, making segmentation more stable and accurate than the spatial correlation and bispectral methods.
Drawings
FIG. 1 is a schematic diagram of a cloud layer segmentation method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of a true color satellite cloud in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a partitioned satellite cloud in accordance with an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Example 1
As shown in fig. 1, the embodiment provides a cloud layer segmentation method based on true color satellite cloud images and image processing, including:
the method comprises the steps that a true color satellite cloud picture is subjected to histogram equalization processing to obtain a color equalization satellite cloud picture, the color average value of the color equalization satellite cloud picture is obtained, and a color characteristic value is obtained through the color average value of the color equalization satellite cloud picture; acquiring a color equalization satellite cloud picture edge characteristic value, and fusing a color characteristic and the edge characteristic to obtain a color enhancement characteristic value;
in one embodiment, the true color satellite cloud picture is acquired by shooting through an FY4A satellite, and the color equalized satellite cloud picture is obtained after histogram equalization processing. The histogram equalization processing effect on the true color satellite cloud picture is as follows: on one hand, the contrast ratio of the cloud layer area and other areas in the true color satellite cloud picture can be enhanced, and on the other hand, the image color can be more uniform, so that the subsequent gray processing can be conveniently carried out.
In other embodiments, the method for acquiring the true color satellite cloud picture can also be acquired by the national weather service or the existing satellite map software, and the cloud picture acquisition mode is not limited and meets the acquisition requirement.
The acquired color-equalized satellite cloud picture is then manually annotated: pixels in cloud regions are marked "1" and pixels in non-cloud regions "0", manually distinguishing the two regions, and the resulting picture is stored as the manually annotated image.
The manually annotated image is multiplied with each of the R, G, B channels of the color-equalized satellite cloud picture to obtain the cloud region image; a mean operation over each channel of the cloud region gives that channel's gray average, and the superposition of the three channel averages is the color average value of the color-equalized satellite cloud picture.
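Using the 0/1 annotation as a mask, the per-channel gray averages can be computed as below. A sketch only; the array names are illustrative.

```python
import numpy as np

def cloud_channel_means(img, mask):
    """Mean gray value of each R, G, B channel over the annotated cloud region.

    img  : H x W x 3 color-equalized cloud picture
    mask : H x W manual annotation, 1 = cloud pixel, 0 = non-cloud
    """
    region = img * mask[..., None]        # product of annotation and each channel
    n = mask.sum()                        # number of cloud pixels
    return region.reshape(-1, 3).sum(axis=0) / max(n, 1)  # (mu_R, mu_G, mu_B)
```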
The color feature value is calculated from the color average value:

F_c = σ − √( (R − μ_R)² + (G − μ_G)² + (B − μ_B)² )

wherein F_c is the color feature value, R, G, B are the gray values of the R, G and B channels of the color-equalized satellite cloud picture, μ_R, μ_G, μ_B are the gray averages of the R, G and B channels, and σ is a coefficient; in this embodiment, σ takes 255.
The color feature value reflects how close a pixel's color is to the cloud color: the larger the color feature value, the closer the pixel's color is to the cloud color, i.e. the more likely the pixel lies in a cloud region.
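The exact color-feature formula appears only in a figure not reproduced in this text; a reading consistent with the description (σ minus the Euclidean RGB distance to the mean cloud color, σ = 255 per the embodiment, so larger values mean "more cloud-like") can be sketched as:

```python
import numpy as np

def color_feature(img, mu, sigma=255.0):
    """Per-pixel color feature: sigma minus the Euclidean distance from the
    pixel's (R, G, B) to the mean cloud color mu = (mu_R, mu_G, mu_B)."""
    d = np.sqrt(((img.astype(np.float64) - np.asarray(mu)) ** 2).sum(axis=-1))
    return sigma - d
```

A pixel exactly at the mean cloud color scores 255; scores fall off with distance in RGB space.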
The edge color feature of the image is extracted with a sliding window: a 3 × 3 window slides over the image with step length 1, difference operations are performed between the color values of the center pixel and those of its 8 adjacent pixels, the results are averaged, and the average is assigned to the center of the sliding window; this value is the edge feature value.
The edge feature value is calculated as:

F_e = (1/8) · Σ_{i=1..8} ( |R_0 − R_i| + |G_0 − G_i| + |B_0 − B_i| )

wherein F_e is the edge feature value, R_0, G_0, B_0 are the three channel values of the window center point, and R_i, G_i, B_i are the three channel values of the i-th pixel in the 8-neighborhood of the window center point.
This value reflects the edge characteristics of the color-equalized satellite cloud picture: the larger the difference value, the more it reflects the edge contour of the cloud layer, that is, the boundary between cloud and ocean, land, etc., and the more likely the area is a cloud edge region.
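The 3 × 3 sliding-window edge feature just described can be sketched directly; for simplicity this sketch leaves the one-pixel image border at zero rather than padding.

```python
import numpy as np

def edge_feature(img):
    """3 x 3 sliding window, stride 1: average, over the 8 neighbors, of the
    summed absolute per-channel differences to the center pixel."""
    img = img.astype(np.float64)
    h, w, _ = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y, x]
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    acc += np.abs(center - img[y + dy, x + dx]).sum()
            out[y, x] = acc / 8.0
    return out
```

Flat regions score zero; the score rises exactly where the window straddles a color boundary, which is why this map highlights cloud contours.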
The color enhancement feature value is calculated as:

F_ce = F_c + k·F_e

wherein F_ce is the color enhancement feature value, F_c is the color feature value, F_e is the edge feature value, and k is a mapping coefficient; in this embodiment, k takes 20.
Because the cloud amount in the edge regions of a satellite cloud picture is small, color features alone cannot fully reflect the cloud layer there, so the edge color feature is introduced. The edge color feature correlates positively with the color feature's judgment of cloud regions, and superposing the two through a mapping relation enhances the cloud edge regions. In this embodiment a linear superposition mapping is designed, and cloud regions are judged by the magnitude of the color enhancement feature value, so the final result is more reliable.
As shown in fig. 1, a grayed color-equalized satellite cloud image is obtained, and a gray level co-occurrence matrix is established to obtain a texture characteristic value; and fusing the texture characteristic values to obtain texture fusion characteristic values.
In one embodiment, the color-equalized satellite cloud picture is grayed to obtain a grayscale image with 256 gray levels. Because so many levels greatly increase the amount of computation, this embodiment further quantizes the image: the gray value is divided by 32 and rounded down, changing gray levels 0-255 into levels 0-7. After quantization, a gray level co-occurrence matrix is established using an N-dimensional matrix as the sliding window; in this embodiment N is 7 and the sliding distance is 1. The gray feature values computed from the gray level co-occurrence matrix are assigned to the center of the corresponding sliding window, and traversing the whole image with the window yields a texture feature matrix composed of texture feature values. The texture feature matrices generated in this embodiment comprise an energy feature value matrix, a contrast feature value matrix, and an entropy feature value matrix.
In other embodiments, the texture features of the image are not fixed: the regular gray-value patterns produced by repeated arrangements of ground objects constitute texture, the gray image need not be quantized when other texture features are used, and the quantization standard is not unique; the corresponding values can be adjusted to the actually acquired satellite cloud pictures.
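The quantization and co-occurrence counting can be sketched as below. Two assumptions are flagged: graying is done by a plain channel average (the patent does not give graying weights), and only one pixel offset (horizontal neighbor at distance 1) is shown, whereas practical GLCMs often accumulate several directions.

```python
import numpy as np

def to_quantized_gray(img):
    """Gray the equalized cloud picture (channel average — an assumption),
    then quantize 0-255 to levels 0-7 by dividing by 32 and rounding down."""
    gray = img.astype(np.float64).mean(axis=-1)
    return (gray // 32).astype(np.int64)

def glcm(patch, levels=8, offset=(0, 1)):
    """Gray level co-occurrence matrix of a quantized patch for one offset
    (default: horizontally adjacent pixels, distance 1), normalized to sum to 1."""
    dy, dx = offset
    h, w = patch.shape
    p = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            p[patch[y, x], patch[y + dy, x + dx]] += 1  # count co-occurring pair
    total = p.sum()
    return p / total if total else p
```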
The texture fusion feature value is calculated as:

F_tex = a·E + b·C + c·H

wherein F_tex is the texture fusion feature value, E is the energy value, C is the contrast value, H is the entropy value, and a, b, c are mapping coefficients whose values are fixed in this embodiment.
the method for obtaining the texture features comprises the following steps:
E = Σ_i Σ_j P(i, j)²

C = Σ_i Σ_j |i − j|² · P(i, j)

H = − Σ_i Σ_j P(i, j) · ln P(i, j)

wherein E is the energy value, C is the contrast value, H is the entropy value, P denotes the gray level co-occurrence matrix, (i, j) are pixel coordinates in the gray level co-occurrence matrix, the sums run over its dimension, and |i − j| is the absolute value of the pixel coordinate difference.
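The three statistics, and their weighted fusion, can be computed from a normalized co-occurrence matrix P as follows. The embodiment's mapping-coefficient values are not reproduced in this text, so a, b, c below are placeholder defaults, not the patent's values.

```python
import numpy as np

def glcm_features(p):
    """Energy, contrast and entropy of a normalized gray level
    co-occurrence matrix p."""
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    contrast = (np.abs(i - j) ** 2 * p).sum()
    nz = p[p > 0]                       # skip zero entries: 0 * log 0 -> 0
    entropy = -(nz * np.log(nz)).sum()
    return energy, contrast, entropy

def texture_fusion(energy, contrast, entropy, a=1.0, b=1.0, c=1.0):
    """F_tex = a*E + b*C + c*H with placeholder coefficients."""
    return a * energy + b * contrast + c * entropy
```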
Fusing the texture feature values makes thin cloud regions easier to identify, but mountainous land areas may also exhibit complex texture, so the subsequent fusion with the color enhancement feature distinguishes thin cloud regions more reliably.
As shown in fig. 1, the color enhancement eigenvalue and the texture fusion eigenvalue are fused to obtain a multi-feature fusion eigenvalue, and a multi-feature fusion eigenvalue matrix is established.
In one embodiment, the color enhancement feature value and the texture fusion feature value are fused as:

F = w1·F_ce + w2·F_tex

wherein F is the multi-feature fusion feature value, F_ce is the color enhancement feature value of the true color satellite cloud picture, F_tex is the texture fusion feature value, and w1, w2 are weights fixed in this embodiment.
the color enhancement features can better reflect the features of the thick cloud layer region, the texture fusion features can better reflect the features of the thin cloud layer region, and the obtained multi-feature fusion feature values are clearer and more complete in judgment of the whole cloud layer by fusing the color enhancement features and the texture fusion features.
As shown in fig. 1, a data set is constructed to train a neural network, and a segmented cloud layer image is output.
First, a data set is constructed from the acquired true color satellite cloud pictures in a certain proportion. In this embodiment, seven of every ten pictures form the training set and three the test set, i.e. the pictures are divided into training and test sets in a 7:3 ratio.
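The 7:3 split can be sketched as follows; shuffling before splitting is an assumption, since the patent only specifies the proportion.

```python
import numpy as np

def split_dataset(pairs, train_ratio=0.7, seed=0):
    """Shuffle (cloud picture, label) pairs and split them 7:3
    into a training set and a test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    cut = int(round(train_ratio * len(pairs)))
    return [pairs[i] for i in idx[:cut]], [pairs[i] for i in idx[cut:]]
```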
For training, data are fed into the neural network through the dual-encoder structure: the inputs are a true color satellite cloud picture and the multi-feature fusion feature matrix obtained in the steps above, and the target output is the manually annotated satellite segmentation cloud picture. Through repeated training, the network learns how to convert an input true color satellite cloud picture into a segmented satellite cloud picture; training uses only images from the training set.
For testing, data are likewise fed through the dual-encoder structure: the inputs are a true color satellite cloud picture and its corresponding multi-feature fusion feature matrix, and the output of the trained network is checked for consistency with the manually annotated satellite segmentation cloud picture. Repeated testing verifies that the network's output is accurate and efficient; testing uses only images from the test set.
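The dual-encoder data flow can be illustrated with a toy, untrained NumPy sketch: each encoder downsamples its input, the two feature maps are concatenated channel-wise (the "parallel feature fusion"), and a decoder upsamples back to a binary mask. This shows only the shape of the pipeline; the patent's actual network uses learned convolution-pooling encoders and a trained decoder, none of which is reproduced here.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling on an H x W x C tensor (H, W even)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbor 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def dual_encoder_forward(cloud_img, fusion_matrix):
    """Toy forward pass mirroring the dual-encoder layout."""
    e1 = avg_pool2(cloud_img.astype(np.float64))                  # encoder 1: RGB picture
    e2 = avg_pool2(fusion_matrix[..., None].astype(np.float64))   # encoder 2: feature matrix
    fused = np.concatenate([e1, e2], axis=-1)                     # parallel feature fusion
    logits = fused.mean(axis=-1)          # stand-in for the learned decoder convolutions
    mask = (upsample2(logits[..., None])[..., 0] > logits.mean()).astype(np.uint8)
    return mask
```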
Further, fig. 2 shows a true color satellite cloud picture collected by the satellite, in which the drawn lines frame the territory of China and the distribution of its provinces, respectively.
As shown in fig. 3, once the neural network has been trained repeatedly and its output over multiple tests is stable, inputting a true color satellite cloud picture and its corresponding multi-feature fusion feature matrix yields the correspondingly segmented satellite cloud picture: a binary image in which white pixels ('1') mark the manually labeled cloud layer regions and black pixels ('0') mark the non-cloud regions. This completes the segmentation of the true color satellite cloud picture efficiently and accurately.
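The consistency check between the network's binary output and the manually labeled mask can be quantified as pixel accuracy. A minimal numpy sketch, using made-up 1/0 masks in the convention of fig. 3 (1 = cloud, 0 = non-cloud):

```python
import numpy as np

# Made-up binary masks; 1 marks cloud pixels, 0 marks non-cloud pixels.
predicted = np.array([[1, 1, 0],
                      [0, 1, 0]])
labeled   = np.array([[1, 0, 0],
                      [0, 1, 0]])

# Pixel accuracy: fraction of pixels where the prediction matches the label.
accuracy = float((predicted == labeled).mean())
```

Here 5 of the 6 pixels agree, so the accuracy is 5/6; in practice the same comparison would run over every test-set image.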
In another embodiment, the data set may be constructed from the collected true color satellite cloud pictures in other proportions, for example six training and four test images, or eight training and two test images, out of every ten. The ratio is not unique, but because the neural network requires extensive training to establish the mapping relation between the features of the input and output images and thereby complete the segmentation well, the training set is generally larger than the test set; reserving part of the data for testing keeps the network's output stable and accurate, achieving the goal of segmenting the satellite cloud pictures.
The above embodiments merely illustrate the present invention and should not be construed as limiting its scope; all designs identical or similar to the present invention fall within the scope of protection of the present invention.
Claims (6)
1. A cloud layer segmentation method based on true color satellite cloud pictures and image processing is characterized by comprising the following steps:
subjecting a true color satellite cloud picture to histogram equalization to obtain a color-equalized satellite cloud picture; obtaining the color average value of the color-equalized satellite cloud picture and deriving a color feature value from it; obtaining an edge feature value of the color-equalized satellite cloud picture, and fusing the color feature and the edge feature to obtain a color enhancement feature value;
graying the color-equalized satellite cloud picture and obtaining texture feature values by establishing a gray level co-occurrence matrix; fusing the texture feature values to obtain texture fusion features;
fusing the color enhancement features and the texture fusion features to obtain multi-feature fusion feature values; establishing a multi-feature fusion feature matrix from the multi-feature fusion feature values; and inputting the multi-feature fusion feature matrix and the true color satellite cloud picture into a dual-encoder fully convolutional neural network to obtain the segmented satellite cloud picture.
2. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 1, wherein the multi-feature fusion feature value is obtained by the following expression:

T = α·F + β·W

wherein T is the multi-feature fusion feature value, F is the color enhancement feature value, W is the texture fusion feature value, and α, β are weights;
wherein the color enhancement feature value F is obtained by superposing the edge feature onto the color feature through a mapping relation; the expression of the color enhancement feature value F is:

F = C + k·E

wherein F is the color enhancement feature value, C is the color feature value, E is the edge feature value, and k is the mapping coefficient;
wherein the texture fusion feature value W is obtained as follows: graying the color-equalized satellite cloud picture, establishing a gray level co-occurrence matrix, obtaining gray level feature values from the gray level co-occurrence matrix, and assigning values in the satellite cloud picture through a sliding window to obtain texture feature matrices composed of texture feature values; the texture fusion feature value is obtained by fusing the several texture features, and its expression is:

W = w1·Asm + w2·Con + w3·Ent

wherein Asm, Con and Ent are respectively the energy, contrast and entropy feature values, and w1, w2, w3 are weights.
3. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 2, wherein:
the edge feature valueThe acquisition method comprises the following steps: taking the central pixel of the sliding window as a reference, performing difference operation and averaging on the color values of the adjacent 8 pixels, and then obtaining the average valueAnd then assigning a value to the center of the sliding window to obtain an edge characteristic value:
wherein,in order to be the value of the edge feature,、、 three channel values are respectively taken as the center points of the windows,、、 three channel values of the ith pixel point in the window central point 8 field are respectively set;
the color characteristic valueThe calculation method of (2) is as follows: the color average value is obtained by performing mean operation and superposition on three channels of the color equalization satellite cloud picture R, G, B, and the color characteristic value is obtained through the color average value:
wherein,in order to be a color characteristic value,for color equalizing the satellite cloud R channel gray values,is the gray value of the channel of the cloud graph G,is the gray value of the B channel of the cloud image,is the mean value of the gray scales of the R channel,is the average value of the gray scales of the G channel,is the average value of the gray scales of the B channel,are coefficients.
4. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 2, wherein: the texture feature matrices comprise an energy feature matrix, a contrast feature matrix and an entropy feature matrix;

the texture feature values are obtained as follows:

Asm = Σ_{i=1..N} Σ_{j=1..N} p(i, j)²

wherein Asm is the energy value, p denotes the gray level co-occurrence matrix, (i, j) denotes the pixel coordinates in the gray level co-occurrence matrix, and N is the dimension of the gray level co-occurrence matrix;

Con = Σ_{i=1..N} Σ_{j=1..N} |i − j|² · p(i, j)

wherein Con is the contrast value, p denotes the gray level co-occurrence matrix, (i, j) denotes the pixel coordinates in the gray level co-occurrence matrix, N is the dimension of the gray level co-occurrence matrix, and |i − j| denotes the absolute value of the pixel coordinate difference;

Ent = − Σ_{i=1..N} Σ_{j=1..N} p(i, j) · log p(i, j)

wherein Ent is the entropy value.
5. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 4, wherein: the method for establishing the gray level co-occurrence matrix comprises the following steps:
graying the color-equalized satellite cloud picture to obtain a gray image, quantizing the gray levels of the obtained gray image, and establishing the gray level co-occurrence matrix using an N-dimensional matrix as the sliding window.
6. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 1, wherein: the dual-encoder fully convolutional neural network comprises a data set module and a training module;
the data set module comprises true color satellite cloud pictures and the manually annotated images corresponding to them, divided proportionally into a training set and a test set; the training-set input adopts a training-set true color satellite cloud picture and its corresponding multi-feature fusion feature matrix, and its output is the manually annotated image corresponding to that cloud picture; the test-set input adopts a test-set true color satellite cloud picture and its corresponding multi-feature fusion feature matrix;
the training module comprises two encoders and one decoder: one encoder takes the training-set true color satellite cloud picture as input and outputs a feature image after convolution and pooling; the other encoder takes the multi-feature fusion feature matrix as input and outputs a multi-feature fusion feature image, the encoders thereby extracting the image features;
the feature images output by the two encoders undergo parallel feature fusion and serve as the input of the decoder, whose output is the manually annotated image corresponding to the true color satellite cloud picture input to the encoders, thereby establishing the mapping relation between the training-set feature images and their corresponding manually annotated images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111184390.9A CN113643312B (en) | 2021-10-12 | 2021-10-12 | Cloud layer segmentation method based on true color satellite cloud picture and image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113643312A true CN113643312A (en) | 2021-11-12 |
CN113643312B CN113643312B (en) | 2022-02-08 |
Family
ID=78426501
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111184390.9A Active CN113643312B (en) | 2021-10-12 | 2021-10-12 | Cloud layer segmentation method based on true color satellite cloud picture and image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113643312B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115100514A (en) * | 2022-05-11 | 2022-09-23 | 南京林业大学 | Cloud tracking method based on FPGA |
CN117332929A (en) * | 2023-11-28 | 2024-01-02 | 珠江水利委员会珠江水利科学研究院 | Intelligent flood prevention method and system for hydraulic engineering |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103208001A (en) * | 2013-02-06 | 2013-07-17 | 华南师范大学 | Remote sensing image processing method combined with shape self-adaption neighborhood and texture feature extraction |
CN110516629A (en) * | 2019-08-30 | 2019-11-29 | 河海大学常州校区 | A kind of nutritious obesity and classification method based on more Cloud Layer Characters |
CN112488050A (en) * | 2020-12-16 | 2021-03-12 | 安徽大学 | Color and texture combined aerial image scene classification method and system |
CN112749621A (en) * | 2020-11-25 | 2021-05-04 | 厦门理工学院 | Remote sensing image cloud layer detection method based on deep convolutional neural network |
CN113239830A (en) * | 2021-05-20 | 2021-08-10 | 北京航空航天大学 | Remote sensing image cloud detection method based on full-scale feature fusion |
CN113284153A (en) * | 2021-05-14 | 2021-08-20 | 惠州中国科学院遥感与数字地球研究所空间信息技术研究院 | Satellite cloud layer image processing method and device, computer equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
SAIF MOHAMMED A IBRAHIM et al.: "Satellite Image Classification using Multi Features Based Descriptors", International Research Journal of Advanced Engineering and Science * |
KANG Chaomeng: "Cloud Detection in Domestic High-Resolution Optical Remote Sensing Images Based on Neural Networks", China Masters' Theses Full-Text Database, Engineering Science and Technology II * |
Also Published As
Publication number | Publication date |
---|---|
CN113643312B (en) | 2022-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110378196B (en) | Road visual detection method combining laser point cloud data | |
CN108961235B (en) | Defective insulator identification method based on YOLOv3 network and particle filter algorithm | |
CN109118479B (en) | Capsule network-based insulator defect identification and positioning device and method | |
CN110263717B (en) | Method for determining land utilization category of street view image | |
CN101901343B (en) | Remote sensing image road extracting method based on stereo constraint | |
CN113643312B (en) | Cloud layer segmentation method based on true color satellite cloud picture and image processing | |
CN108921120B (en) | Cigarette identification method suitable for wide retail scene | |
CN104077577A (en) | Trademark detection method based on convolutional neural network | |
CN109840483B (en) | Landslide crack detection and identification method and device | |
CN112950780B (en) | Intelligent network map generation method and system based on remote sensing image | |
CN111767874B (en) | Pavement disease detection method based on deep learning | |
CN111639587A (en) | Hyperspectral image classification method based on multi-scale spectrum space convolution neural network | |
CN114494179A (en) | Mobile phone back damage point detection method and system based on image recognition | |
CN112270317A (en) | Traditional digital water meter reading identification method based on deep learning and frame difference method | |
CN112001293A (en) | Remote sensing image ground object classification method combining multi-scale information and coding and decoding network | |
CN111738931B (en) | Shadow removal algorithm for aerial image of photovoltaic array unmanned aerial vehicle | |
CN115880586A (en) | Satellite remote sensing image cloud and snow detection method based on mixed feature network | |
CN116824347A (en) | Road crack detection method based on deep learning | |
CN113256563A (en) | Method and system for detecting surface defects of fine product tank based on space attention mechanism | |
CN103065296A (en) | High-resolution remote sensing image residential area extraction method based on edge feature | |
CN115690605A (en) | Multispectral remote sensing image cloud detection method based on space-spectrum combination | |
CN111046783A (en) | Slope geological disaster boundary extraction method for improving watershed algorithm | |
CN113642663B (en) | Satellite remote sensing image water body extraction method | |
CN112686105B (en) | Fog concentration grade identification method based on video image multi-feature fusion | |
CN113077461A (en) | Steel surface quality detection method based on semi-supervised deep clustering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||