CN113643312A - Cloud layer segmentation method based on true color satellite cloud picture and image processing - Google Patents

Cloud layer segmentation method based on true color satellite cloud picture and image processing

Info

Publication number
CN113643312A
Authority
CN
China
Prior art keywords
color
value
feature
satellite cloud
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111184390.9A
Other languages
Chinese (zh)
Other versions
CN113643312B (en)
Inventor
Kang Ran (康然)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Weipei Communication Technology Development Co ltd
Original Assignee
Jiangsu Weipei Communication Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Weipei Communication Technology Development Co ltd filed Critical Jiangsu Weipei Communication Technology Development Co ltd
Priority to CN202111184390.9A priority Critical patent/CN113643312B/en
Publication of CN113643312A publication Critical patent/CN113643312A/en
Application granted granted Critical
Publication of CN113643312B publication Critical patent/CN113643312B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a cloud layer segmentation method based on true color satellite cloud pictures and image processing, which comprises the following steps: acquiring a color equalization satellite cloud picture, and acquiring a gray average value to obtain a cloud layer color characteristic value; calculating an edge characteristic value of the satellite cloud picture, and fusing the two characteristics to obtain a color enhancement characteristic value; graying the satellite cloud picture, and establishing a gray level co-occurrence matrix to obtain a texture characteristic value; fusing the texture features to obtain texture fusion features; fusing the color enhancement features and the texture fusion features to obtain multi-feature fusion feature values; establishing a multi-feature fusion feature matrix through the multi-feature fusion feature values; and inputting the multi-feature fusion feature matrix and the true color satellite cloud picture into a neural network for training to obtain the segmented satellite cloud picture. By the method, the characteristics of color, edge contour, texture and the like in the cloud picture are fused and analyzed, the properties of each region of the cloud layer are better reflected, and the method has a good segmentation effect on thick cloud layers, thin cloud layers and cloud layer edges.

Description

Cloud layer segmentation method based on true color satellite cloud picture and image processing
Technical Field
The invention relates to the field of image processing, in particular to a cloud layer segmentation method based on satellite cloud pictures and image processing.
Background
The satellite cloud picture is an image of the earth surface observed from top to bottom by a meteorological satellite and is used for reflecting cloud cover and earth surface characteristics. The satellite cloud picture can be used for estimating weather and weather development trend, providing basis for weather analysis and weather forecast, and making up the defects of conventional detection data in some regions lacking weather observation stations.
In application, cloud identification on satellite cloud pictures is still mostly carried out by subjective, empirical or simple analytical methods, and future changes of the cloud picture can only be estimated by simple linear extrapolation. Such means cannot keep pace with the development of the information era and restrict the improvement of weather forecasting, so research on methods for cloud classification and cloud cluster movement prediction is very important. Cloud layer segmentation is the premise of such technical research, so segmenting the cloud layer of the satellite cloud picture has practical value.
The current common cloud layer segmentation techniques include the threshold method, the spatial correlation method, the bispectral method and the like. These techniques can complete the segmentation of a cloud picture under certain conditions, but each has corresponding drawbacks: the threshold method completes image segmentation according to a preset threshold, but in bad weather or other conditions the threshold is set inaccurately and the segmentation result suffers; the spatial correlation method mainly segments thick cloud layers, but its detection of thin cloud layers or cloud layer edge contours is deficient; and the bispectral method detects cloud layer regions through spectra, but because of factors such as differing cloud cover and cloud shapes, the reflection of sunlight is also uncertain.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a cloud layer segmentation method based on true color satellite cloud pictures and image processing. The invention adopts the following technical scheme:
a cloud layer segmentation method based on true color satellite cloud pictures and image processing comprises the following steps:
the method comprises the steps that a true color satellite cloud picture is subjected to histogram equalization processing to obtain a color equalization satellite cloud picture, the color average value of the color equalization satellite cloud picture is obtained, and a color characteristic value is obtained through the color average value of the color equalization satellite cloud picture; acquiring a color equalization satellite cloud picture edge characteristic value, and fusing a color characteristic and the edge characteristic to obtain a color enhancement characteristic value;
graying a satellite cloud picture with balanced colors, and obtaining texture characteristic values by establishing a gray level co-occurrence matrix; fusing the texture feature values to obtain texture fusion features;
fusing the color enhancement features and the texture fusion features to obtain multi-feature fusion feature values; establishing a multi-feature fusion feature matrix through the multi-feature fusion feature values; and inputting the multi-feature fusion feature matrix and the true color satellite cloud picture into a double-coding full convolution neural network to obtain a segmented satellite cloud picture.
The expression for obtaining the multi-feature fusion feature value is:

$$F = \alpha \, C_e + \beta \, T_f$$

wherein $F$ is the multi-feature fusion feature value, $C_e$ is the color enhancement feature value, $T_f$ is the texture fusion feature value, and $\alpha$, $\beta$ are weights.

The color enhancement feature value $C_e$ is obtained by superposing the edge feature onto the color feature through a mapping relation. Its expression is:

$$C_e = C + \lambda \, E$$

wherein $C_e$ is the color enhancement feature value, $C$ is the color feature value, $E$ is the edge feature value, and $\lambda$ is a mapping coefficient.

The texture fusion feature value $T_f$ is obtained as follows: the color-equalized satellite cloud picture is grayed, a gray level co-occurrence matrix is established, texture feature values are obtained from the gray level co-occurrence matrix and assigned in the satellite cloud picture through a sliding window to obtain a texture feature matrix composed of texture feature values, and the texture fusion feature value is obtained by fusing the several texture features. Its expression is:

$$T_f = a \cdot ASM + b \cdot CON + c \cdot ENT$$

wherein $T_f$ is the texture fusion feature value, $ASM$ is the energy value, $CON$ is the contrast value, $ENT$ is the entropy value, and $a$, $b$, $c$ are mapping coefficients.
Further, the edge feature value $E$ is obtained as follows: taking the central pixel of a sliding window as reference, the color values of the 8 neighboring pixels are each differenced against those of the central pixel, the results are averaged, and the average is assigned to the center of the sliding window, giving the edge feature value:

$$E = \frac{1}{8}\sum_{i=1}^{8}\left(\lvert R_0 - R_i\rvert + \lvert G_0 - G_i\rvert + \lvert B_0 - B_i\rvert\right)$$

wherein $E$ is the edge feature value, $R_0$, $G_0$, $B_0$ are the three channel values of the window center point, and $R_i$, $G_i$, $B_i$ are the three channel values of the $i$-th pixel in the 8-neighborhood of the window center point.

The color feature value $C$ is calculated as follows: the color average value is obtained by averaging and superposing the R, G and B channels of the color-equalized satellite cloud picture, and the color feature value is obtained from the color average value:

$$C = k_1 - \frac{\sqrt{(R-\bar{R})^2 + (G-\bar{G})^2 + (B-\bar{B})^2}}{k_2}$$

wherein $C$ is the color feature value, $R$, $G$, $B$ are the R, G and B channel gray values of the color-equalized satellite cloud picture, $\bar{R}$, $\bar{G}$, $\bar{B}$ are the gray mean values of the R, G and B channels, and $k_1$, $k_2$ are coefficients.
Further, the texture feature matrix comprises an energy feature matrix, a contrast feature matrix and an entropy feature matrix;

the texture feature values are obtained as follows:

Energy feature value:

$$ASM = \sum_{i=1}^{N}\sum_{j=1}^{N} P(i,j)^{2}$$

wherein $ASM$ is the energy value, $P$ denotes the gray level co-occurrence matrix, $(i, j)$ denotes the pixel coordinates in the gray level co-occurrence matrix, and $N$ is the dimension of the gray level co-occurrence matrix;

Contrast feature value:

$$CON = \sum_{i=1}^{N}\sum_{j=1}^{N} \lvert i-j\rvert^{2}\, P(i,j)$$

wherein $CON$ is the contrast value, $P$ denotes the gray level co-occurrence matrix, $(i, j)$ denotes the pixel coordinates in the gray level co-occurrence matrix, $N$ is the dimension of the gray level co-occurrence matrix, and $\lvert i-j\rvert$ is the absolute value of the pixel coordinate difference;

Entropy feature value:

$$ENT = -\sum_{i=1}^{N}\sum_{j=1}^{N} P(i,j)\,\ln P(i,j)$$

wherein $ENT$ is the entropy value, $P$ denotes the gray level co-occurrence matrix, $(i, j)$ denotes the pixel coordinates in the gray level co-occurrence matrix, and $N$ is the dimension of the gray level co-occurrence matrix.
Further, the method for establishing the gray level co-occurrence matrix comprises the following steps:
graying the color equalization satellite cloud picture to obtain a gray level image, quantizing the gray level of the obtained gray level image, and establishing a gray level co-occurrence matrix by taking the N-dimensional matrix as a sliding window;
further, the double-coding full convolution neural network comprises a data set module and a training module;
the data set module comprises a true color satellite cloud picture and an artificial annotation image corresponding to the true color satellite cloud picture, and the true color satellite cloud picture and the artificial annotation image are divided into a training set and a test set according to a proportion; the training set input adopts a training set true color satellite cloud picture and a corresponding multi-feature fusion feature matrix, and outputs an artificial labeling image corresponding to the training set true color satellite cloud picture; inputting a test set into a true color satellite cloud picture of the test machine and a corresponding multi-feature fusion feature matrix;
the training model comprises double encoders and a decoder, wherein the input of one encoder structure is a training set true color satellite cloud picture, and the output is a feature image subjected to convolution-pooling; the input of the other encoder structure is a multi-feature fusion feature matrix, the output is a multi-feature fusion feature image, and the image features are extracted through the encoder;
and performing parallel feature fusion on the feature images output by the double-encoder structure to serve as the input of the decoder structure, outputting the artificial annotation images corresponding to the true color satellite cloud images input by the encoder structure, and establishing a mapping relation between the feature images and the training set output images.
After a series of training and testing of the neural network, inputting a true color satellite cloud picture and its corresponding multi-feature fusion feature matrix yields the segmented cloud layer image.
Compared with the prior art, the invention has the beneficial effects that:
(1) based on the method, cloud layer color characteristics of the satellite cloud picture are extracted through Euclidean distances from real-color satellite cloud picture pixel points to the RGB color space of the average color of the cloud layer, and compared with a simple threshold method, the method is favorable for a neural network to learn the color difference between a cloud layer region and a non-cloud layer region.
(2) Based on the method, the edge characteristics of the satellite cloud picture are extracted, the edge region and the cloud layer color region are overlapped, the effect of enhancing the cloud layer edge can be achieved, and better learning of the edge region information of the cloud layer is facilitated during subsequent neural network coding.
(3) Based on the present application, a multi-feature fusion feature is built for semantic segmentation. Compared with the prior art, the beneficial effect lies in that the color feature reflects thick cloud layer regions well, the texture feature reflects thin cloud layer regions well, and the edge feature reflects the cloud layer edge regions well; by extracting features from the multi-feature fusion matrix through the neural network, the characteristics of the whole cloud layer region can be learned, and compared with the spatial correlation method and the bispectral method the cloud layer segmentation is more stable and accurate.
Drawings
FIG. 1 is a schematic diagram of a cloud layer segmentation method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of a true color satellite cloud in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a partitioned satellite cloud in accordance with an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Example 1
As shown in fig. 1, the embodiment provides a cloud layer segmentation method based on true color satellite cloud images and image processing, including:
the method comprises the steps that a true color satellite cloud picture is subjected to histogram equalization processing to obtain a color equalization satellite cloud picture, the color average value of the color equalization satellite cloud picture is obtained, and a color characteristic value is obtained through the color average value of the color equalization satellite cloud picture; acquiring a color equalization satellite cloud picture edge characteristic value, and fusing a color characteristic and the edge characteristic to obtain a color enhancement characteristic value;
in one embodiment, the true color satellite cloud picture is acquired by shooting through an FY4A satellite, and the color equalized satellite cloud picture is obtained after histogram equalization processing. The histogram equalization processing effect on the true color satellite cloud picture is as follows: on one hand, the contrast ratio of the cloud layer area and other areas in the true color satellite cloud picture can be enhanced, and on the other hand, the image color can be more uniform, so that the subsequent gray processing can be conveniently carried out.
In other embodiments, the true color satellite cloud picture can also be acquired from the national meteorological service or from existing satellite map software; the acquisition mode of the cloud picture is not limited as long as it meets the acquisition requirement.
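As a concrete illustration of the equalization step, the following sketch equalizes each channel of a true color cloud image with OpenCV. The file names and the per-channel equalization strategy are assumptions of this illustration, not part of the patent text:

```python
import cv2

# Read the true color satellite cloud image (OpenCV loads channels in BGR order).
img = cv2.imread("cloud.png")

# Equalize the histogram of each channel independently, then merge back.
# This is one simple way to realize the "color equalization" described above.
b, g, r = cv2.split(img)
equalized = cv2.merge([cv2.equalizeHist(b),
                       cv2.equalizeHist(g),
                       cv2.equalizeHist(r)])

cv2.imwrite("cloud_equalized.png", equalized)
```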
And manually marking the acquired color equalization satellite cloud picture, marking the pixel value of the cloud layer region in the color equalization satellite cloud picture as '1', marking the pixel value of the non-cloud layer region as '0', manually distinguishing the cloud layer region from the non-cloud layer region through marking, and storing the picture as a manually marked image at the moment.
And performing product operation on the artificial annotation image and R, G, B three channels of the color equalization satellite cloud picture to obtain a cloud layer area image, performing mean operation on three channels of the cloud layer area to obtain a gray average value of each channel, wherein the superposition of the gray average values of the three channels is the color average value of the color equalization satellite cloud picture.
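A minimal sketch of this mean computation, assuming the manual annotation is available as a binary array `mask` (1 = cloud, 0 = non-cloud) with the same height and width as the equalized image; the function and variable names are illustrative:

```python
import numpy as np

def cloud_channel_means(equalized: np.ndarray, mask: np.ndarray):
    """Return the mean value of each color channel over the annotated cloud region.

    equalized: (H, W, 3) color-equalized cloud image.
    mask:      (H, W) binary manual annotation, 1 for cloud pixels.
    """
    cloud_pixels = equalized[mask == 1]          # shape: (num_cloud_pixels, 3)
    ch1_mean, ch2_mean, ch3_mean = cloud_pixels.mean(axis=0)
    return ch1_mean, ch2_mean, ch3_mean
```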
Calculating the color feature value from the color mean values:

$$C = k_1 - \frac{\sqrt{(R-\bar{R})^2 + (G-\bar{G})^2 + (B-\bar{B})^2}}{k_2}$$

wherein $C$ is the color feature value, $R$, $G$, $B$ are the R, G and B channel gray values of the color-equalized satellite cloud picture, $\bar{R}$, $\bar{G}$, $\bar{B}$ are the gray mean values of the R, G and B channels, and $k_1$, $k_2$ are coefficients; in the present embodiment, $k_2$ is taken as 255.
The color feature value reflects the proximity of the color of the pixel to the color of the cloud layer, and the larger the color feature value is, the larger the proximity of the color of the pixel to the color of the cloud layer is, i.e. the more likely the region in which the pixel is located is the cloud layer region.
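The following sketch computes a per-pixel color feature map from the Euclidean distance to the cloud mean color, in line with the formula above. The function name and the default value of `k1` are assumptions for illustration; `k2` follows the embodiment:

```python
import numpy as np

def color_feature_map(equalized: np.ndarray,
                      cloud_mean: tuple,
                      k1: float = 1.0,
                      k2: float = 255.0) -> np.ndarray:
    """Color feature value: larger when a pixel's color is closer (in Euclidean
    RGB distance) to the mean cloud color obtained from the annotated region."""
    diff = equalized.astype(np.float64) - np.asarray(cloud_mean, dtype=np.float64)
    dist = np.sqrt((diff ** 2).sum(axis=-1))     # per-pixel distance to cloud mean
    return k1 - dist / k2
```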
The edge color feature of the image is extracted based on a sliding-window method: a 3 x 3 window is slid over the image with a step length of 1, the color values of the 8 neighboring pixels are each differenced against the color values of the window-center pixel, the results are averaged, and the average is assigned to the center of the sliding window; this value is the edge feature value.

The edge feature value is calculated by the following formula:

$$E = \frac{1}{8}\sum_{i=1}^{8}\left(\lvert R_0 - R_i\rvert + \lvert G_0 - G_i\rvert + \lvert B_0 - B_i\rvert\right)$$

wherein $E$ is the edge feature value, $R_0$, $G_0$, $B_0$ are the three channel values of the window center point, and $R_i$, $G_i$, $B_i$ are the three channel values of the $i$-th pixel in the 8-neighborhood of the window center point.

This value reflects the edge characteristics of the color-equalized satellite cloud picture: the larger the difference feature value, the more strongly it reflects the edge contour of the cloud layer, i.e. the boundary between the cloud layer and the ocean, land and so on, and thus the more likely the region is a cloud layer edge region.
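A sketch of the 3 x 3 sliding-window edge feature described above. Leaving the border pixels at zero is a choice the patent does not specify, and the function name is illustrative:

```python
import numpy as np

def edge_feature_map(equalized: np.ndarray) -> np.ndarray:
    """Average absolute difference between the window-center pixel and its
    8 neighbours, summed over the three color channels."""
    img = equalized.astype(np.float64)
    h, w, _ = img.shape
    edge = np.zeros((h, w), dtype=np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y, x]
            window = img[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3)
            # Per-neighbour sum of channel differences; the center contributes zero.
            diffs = np.abs(window - center).sum(axis=1)
            edge[y, x] = diffs.sum() / 8.0       # average over the 8 neighbours
    return edge
```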
The color feature value and the edge feature value are fused as follows:

$$C_e = C + \lambda \, E$$

wherein $C_e$ is the color enhancement feature value, $C$ is the color feature value, $E$ is the edge feature value, and $\lambda$ is a mapping coefficient; in this embodiment, $\lambda$ is taken as 20.

Because the cloud amount in the cloud layer edge region of the satellite cloud picture is small, the color feature alone cannot fully reflect the cloud layer characteristics, so the edge color feature is introduced. The edge color feature is positively correlated with the color feature's judgment of the cloud layer region, and superposing the two through a certain mapping relation enhances the cloud layer edge region. In this embodiment the mapping relation $\lambda E$ is superposed linearly onto the color feature, and the cloud layer region is judged according to the magnitude of the color enhancement feature value, so that the final result is more reliable.
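Combining the two maps is then a single linear superposition; the value 20 for the mapping coefficient comes from this embodiment, while the function name is illustrative:

```python
def color_enhancement_map(color_feat, edge_feat, lam: float = 20.0):
    """Color enhancement feature: color feature plus the edge feature
    scaled by the mapping coefficient (20 in this embodiment)."""
    return color_feat + lam * edge_feat
```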
As shown in fig. 1, the color-equalized satellite cloud picture is grayed, a gray level co-occurrence matrix is established to obtain texture feature values, and the texture feature values are fused to obtain the texture fusion feature value.
In an embodiment, the color-equalized satellite cloud picture is grayed to obtain a grayscale image with 256 gray levels. Because so many gray levels would greatly increase the amount of computation, this embodiment further quantizes the image as follows: each gray value is divided by 32 and rounded down, i.e. the 0-255 gray levels become levels 0-7. After quantization, a gray level co-occurrence matrix is established using an N-dimensional matrix as the sliding window; in this embodiment N is 7 and the sliding distance is 1. The gray feature values are calculated from the gray level co-occurrence matrix and assigned to the center position of the corresponding sliding window, and the whole image is traversed by the sliding window to obtain a texture feature matrix composed of texture feature values. The texture feature matrix generated in this embodiment includes an energy feature value matrix, a contrast feature value matrix and an entropy feature value matrix.
In other embodiments the handling of texture features need not be fixed: the gray values produced by the repeated arrangement of ground objects on the image are distributed regularly in the image, and texture features capture this regularity, so the grayscale image does not necessarily have to be quantized when other texture features are obtained, and the quantization standard is not unique; in another embodiment, the corresponding values can be adjusted according to the actually acquired satellite cloud pictures.
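A sketch of the quantization and of building a gray level co-occurrence matrix for one local window. Counting horizontally adjacent pixel pairs at distance 1 is an assumption, since the patent does not fix the pairing direction; the function names are illustrative:

```python
import numpy as np

def quantize(gray: np.ndarray, levels: int = 8) -> np.ndarray:
    """Quantize 0-255 gray values to 0..levels-1 (here: divide by 32, round down)."""
    return (gray // (256 // levels)).astype(np.int64)

def glcm(window: np.ndarray, levels: int = 8) -> np.ndarray:
    """Gray level co-occurrence matrix of a quantized window, counting
    horizontally adjacent pixel pairs at distance 1, normalized to sum to 1."""
    P = np.zeros((levels, levels), dtype=np.float64)
    for row in window:
        for a, b in zip(row[:-1], row[1:]):
            P[a, b] += 1.0
    total = P.sum()
    return P / total if total > 0 else P
```

Traversing the whole image with such windows and writing the resulting statistics back to each window center yields the texture feature matrices described above.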
The texture fusion feature value is calculated as follows:

$$T_f = a \cdot ASM + b \cdot CON + c \cdot ENT$$

wherein $T_f$ is the texture fusion feature value, $ASM$ is the energy value, $CON$ is the contrast value, $ENT$ is the entropy value, and $a$, $b$, $c$ are mapping coefficients, which take fixed values in this embodiment.
the method for obtaining the texture features comprises the following steps:
energy characteristic value:
Figure 712860DEST_PATH_IMAGE034
wherein,
Figure 682270DEST_PATH_IMAGE013
in order to be the value of the energy,
Figure 515282DEST_PATH_IMAGE035
a gray level co-occurrence matrix is represented,
Figure 80573DEST_PATH_IMAGE036
representing the coordinates of the pixels in the gray level co-occurrence matrix,
Figure 853674DEST_PATH_IMAGE005
is the dimension of the gray level co-occurrence matrix;
contrast characteristic value:
Figure 999353DEST_PATH_IMAGE052
wherein,
Figure 79622DEST_PATH_IMAGE038
in order to be a value of the contrast ratio,
Figure 988989DEST_PATH_IMAGE035
a gray level co-occurrence matrix is represented,
Figure 343933DEST_PATH_IMAGE036
representing the coordinates of the pixels in the gray level co-occurrence matrix,
Figure 481970DEST_PATH_IMAGE005
is a dimension of the gray level co-occurrence matrix,
Figure 878634DEST_PATH_IMAGE039
representing the absolute value of the pixel coordinate difference;
entropy characteristic value:
Figure 88084DEST_PATH_IMAGE041
wherein,
Figure 713417DEST_PATH_IMAGE016
in the form of an entropy value, the value of the entropy,
Figure 964587DEST_PATH_IMAGE035
a gray level co-occurrence matrix is represented,
Figure 661334DEST_PATH_IMAGE036
representing the coordinates of the pixels in the gray level co-occurrence matrix,
Figure 141174DEST_PATH_IMAGE005
is the dimension of the gray level co-occurrence matrix.
Fusing the texture feature values makes it possible to judge thin cloud layer regions effectively, but mountainous areas within land regions may also have complex texture, so the subsequent fusion with the color enhancement feature distinguishes the thin cloud layer regions better.
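The three texture statistics and their fusion follow directly from the formulas above; the fusion coefficients default to 1.0 here as placeholders, since the embodiment's values are not recoverable from the published text:

```python
import numpy as np

def texture_fusion(P: np.ndarray, a: float = 1.0, b: float = 1.0, c: float = 1.0) -> float:
    """Energy, contrast and entropy of a normalized GLCM P, fused linearly."""
    eps = 1e-12                                   # avoid log(0)
    i, j = np.indices(P.shape)
    energy = (P ** 2).sum()
    contrast = ((np.abs(i - j) ** 2) * P).sum()
    entropy = -(P * np.log(P + eps)).sum()
    return a * energy + b * contrast + c * entropy
```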
As shown in fig. 1, the color enhancement feature value and the texture fusion feature value are fused to obtain the multi-feature fusion feature value, and the multi-feature fusion feature matrix is established.

In one embodiment, the color enhancement feature value and the texture fusion feature value are fused as follows:

$$F = \alpha \, C_e + \beta \, T_f$$

wherein $F$ is the multi-feature fusion feature value, $C_e$ is the color enhancement feature value of the true color satellite cloud picture, $T_f$ is the texture fusion feature value, and $\alpha$, $\beta$ are weights, which take fixed values in the present embodiment.
the color enhancement features can better reflect the features of the thick cloud layer region, the texture fusion features can better reflect the features of the thin cloud layer region, and the obtained multi-feature fusion feature values are clearer and more complete in judgment of the whole cloud layer by fusing the color enhancement features and the texture fusion features.
As shown in fig. 1, a data set is constructed to train a neural network, and a segmented cloud layer image is output.
Firstly, a data set is constructed from the acquired true color satellite cloud pictures according to a certain proportion. In this embodiment, out of every ten true color satellite cloud pictures, seven are used as the training set and three as the test set, i.e. the pictures are divided into a training set and a test set in the ratio 7:3.
For training, data are input into the neural network through the double-encoder structure: the inputs are the true color satellite cloud picture and the multi-feature fusion feature matrix obtained in the preceding steps, and the output is set to the corresponding manually annotated segmented satellite cloud picture. By training repeatedly, the neural network learns how to convert an input true color satellite cloud picture into a segmented satellite cloud picture; all training uses images from the training set.
For testing, data are likewise input through the double-encoder structure: the inputs are a true color satellite cloud picture and its corresponding multi-feature fusion feature matrix, and it is checked whether the image output by the trained neural network is consistent with the manually annotated segmented satellite cloud picture. Repeated testing makes the image output by the neural network accurate and efficient; all testing uses images from the test set.
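A minimal PyTorch sketch of the double-encoder idea. The layer sizes, depths, concatenation-based fusion and class names are assumptions for illustration; the patent does not disclose the exact architecture:

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.MaxPool2d(2))

class DualEncoderFCN(nn.Module):
    """One encoder for the RGB cloud image, one for the fused feature map;
    their feature maps are fused in parallel and decoded to a binary cloud mask."""
    def __init__(self, feat_channels: int = 1):
        super().__init__()
        self.enc_img = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.enc_feat = nn.Sequential(conv_block(feat_channels, 16), conv_block(16, 32))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid())    # per-pixel cloud probability

    def forward(self, image: torch.Tensor, feature_matrix: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.enc_img(image), self.enc_feat(feature_matrix)], dim=1)
        return self.decoder(fused)
```

Training such a network against the manually annotated masks (e.g. with a binary cross-entropy loss) would realize the learning step described above.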
Further, fig. 2 shows a true color satellite cloud picture collected by the satellite, in which the drawn lines outline the territorial extent of China and the distribution of its provinces.
As shown in fig. 3, after the neural network has been trained repeatedly and its output over many tests is relatively stable, inputting a true color satellite cloud picture and its corresponding multi-feature fusion feature matrix makes the neural network output the correspondingly segmented satellite cloud picture: a binary image in which the white area corresponds to the manually annotated cloud layer region with pixel value '1' and the black area to the non-cloud region with pixel value '0'. The segmentation of the true color satellite cloud picture is thus completed efficiently and accurately.
In other embodiments, the data set constructed from the collected true color satellite cloud pictures may use other proportions, for example six training images and four test images, or eight training images and two test images, out of every ten; the proportion is not unique. However, the neural network needs a large amount of training to establish the mapping relation between the features of the input images and the output images and thus complete the image segmentation well, so in general the training set should be larger than the test set, and using part of the data for testing makes the output of the neural network more stable and accurate, achieving the purpose of segmenting the satellite cloud picture.
The above embodiments are merely illustrative of the present invention, and should not be construed as limiting the scope of the present invention, and all designs identical or similar to the present invention are within the scope of the present invention.

Claims (6)

1. A cloud layer segmentation method based on true color satellite cloud pictures and image processing is characterized by comprising the following steps:
the method comprises the steps that a true color satellite cloud picture is subjected to histogram equalization processing to obtain a color equalization satellite cloud picture, the color average value of the color equalization satellite cloud picture is obtained, and a color characteristic value is obtained through the color average value of the color equalization satellite cloud picture; acquiring a color equalization satellite cloud picture edge characteristic value, and fusing a color characteristic and the edge characteristic to obtain a color enhancement characteristic value;
graying a satellite cloud picture with balanced colors, and obtaining texture characteristic values by establishing a gray level co-occurrence matrix; fusing the texture feature values to obtain texture fusion features;
fusing the color enhancement features and the texture fusion features to obtain multi-feature fusion feature values; establishing a multi-feature fusion feature matrix through the multi-feature fusion feature values; and inputting the multi-feature fusion feature matrix and the true color satellite cloud picture into a double-coding full convolution neural network to obtain a segmented satellite cloud picture.
2. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 1, wherein: the expression for obtaining the multi-feature fusion characteristic value is as follows:
$$F = \alpha \, C_e + \beta \, T_f$$

wherein $F$ is the multi-feature fusion feature value, $C_e$ is the color enhancement feature value, $T_f$ is the texture fusion feature value, and $\alpha$, $\beta$ are weights;

wherein the color enhancement feature value $C_e$ is obtained by superposing the edge feature onto the color feature through a mapping relation, and its expression is:

$$C_e = C + \lambda \, E$$

wherein $C_e$ is the color enhancement feature value, $C$ is the color feature value, $E$ is the edge feature value, and $\lambda$ is a mapping coefficient;

wherein the texture fusion feature value $T_f$ is obtained as follows: the color-equalized satellite cloud picture is grayed, a gray level co-occurrence matrix is established, texture feature values are obtained from the gray level co-occurrence matrix and assigned in the satellite cloud picture through a sliding window to obtain a texture feature matrix composed of texture feature values, and the texture fusion feature value is obtained by fusing a plurality of texture features; its expression is:

$$T_f = a \cdot ASM + b \cdot CON + c \cdot ENT$$

wherein $T_f$ is the texture fusion feature value, $ASM$ is the energy value, $CON$ is the contrast value, $ENT$ is the entropy value, and $a$, $b$, $c$ are mapping coefficients.
3. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 2, wherein:
the edge feature value
Figure 639429DEST_PATH_IMAGE018
The acquisition method comprises the following steps: taking the central pixel of the sliding window as a reference, performing difference operation and averaging on the color values of the adjacent 8 pixels, and then obtaining the average valueAnd then assigning a value to the center of the sliding window to obtain an edge characteristic value:
Figure DEST_PATH_IMAGE019
wherein,
Figure 527806DEST_PATH_IMAGE018
in order to be the value of the edge feature,
Figure 46829DEST_PATH_IMAGE020
Figure 460493DEST_PATH_IMAGE021
Figure 37416DEST_PATH_IMAGE022
three channel values are respectively taken as the center points of the windows,
Figure 537667DEST_PATH_IMAGE023
Figure 742438DEST_PATH_IMAGE024
Figure 652625DEST_PATH_IMAGE025
three channel values of the ith pixel point in the window central point 8 field are respectively set;
the color characteristic value
Figure 349634DEST_PATH_IMAGE008
The calculation method of (2) is as follows: the color average value is obtained by performing mean operation and superposition on three channels of the color equalization satellite cloud picture R, G, B, and the color characteristic value is obtained through the color average value:
Figure 207869DEST_PATH_IMAGE026
wherein,
Figure 170326DEST_PATH_IMAGE008
in order to be a color characteristic value,
Figure 995062DEST_PATH_IMAGE027
for color equalizing the satellite cloud R channel gray values,
Figure 677902DEST_PATH_IMAGE028
is the gray value of the channel of the cloud graph G,
Figure 674994DEST_PATH_IMAGE016
is the gray value of the B channel of the cloud image,
Figure 507821DEST_PATH_IMAGE029
is the mean value of the gray scales of the R channel,
Figure 537405DEST_PATH_IMAGE030
is the average value of the gray scales of the G channel,
Figure 703944DEST_PATH_IMAGE031
is the average value of the gray scales of the B channel,
Figure 188332DEST_PATH_IMAGE032
are coefficients.
4. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 2, wherein: the texture feature matrix comprises an energy feature matrix, a contrast feature matrix and an entropy feature value matrix;
the method for acquiring the texture characteristic value comprises the following steps:
Energy feature value:

$$ASM = \sum_{i=1}^{N}\sum_{j=1}^{N} P(i,j)^{2}$$

wherein $ASM$ is the energy value, $P$ denotes the gray level co-occurrence matrix, $(i, j)$ denotes the pixel coordinates in the gray level co-occurrence matrix, and $N$ is the dimension of the gray level co-occurrence matrix;

Contrast feature value:

$$CON = \sum_{i=1}^{N}\sum_{j=1}^{N} \lvert i-j\rvert^{2}\, P(i,j)$$

wherein $CON$ is the contrast value, $P$ denotes the gray level co-occurrence matrix, $(i, j)$ denotes the pixel coordinates in the gray level co-occurrence matrix, $N$ is the dimension of the gray level co-occurrence matrix, and $\lvert i-j\rvert$ is the absolute value of the pixel coordinate difference;

Entropy feature value:

$$ENT = -\sum_{i=1}^{N}\sum_{j=1}^{N} P(i,j)\,\ln P(i,j)$$

wherein $ENT$ is the entropy value, $P$ denotes the gray level co-occurrence matrix, $(i, j)$ denotes the pixel coordinates in the gray level co-occurrence matrix, and $N$ is the dimension of the gray level co-occurrence matrix.
5. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 4, wherein: the method for establishing the gray level co-occurrence matrix comprises the following steps:
graying the color equalization satellite cloud picture to obtain a gray level image, quantizing the gray level of the obtained gray level image, and establishing a gray level co-occurrence matrix by taking the N-dimensional matrix as a sliding window.
6. The cloud layer segmentation method based on true color satellite cloud pictures and image processing as claimed in claim 1, wherein: the double-coding full convolution neural network comprises a data set module and a training module;
the data set module comprises a true color satellite cloud picture and an artificial annotation image corresponding to the true color satellite cloud picture, and the true color satellite cloud picture and the artificial annotation image are divided into a training set and a test set according to a proportion; the training set input adopts a training set true color satellite cloud picture and a corresponding multi-feature fusion feature matrix, and outputs an artificial labeling image corresponding to the training set true color satellite cloud picture; inputting a test set into a true color satellite cloud picture of the test machine and a corresponding multi-feature fusion feature matrix;
the training model comprises double encoders and a decoder, wherein the input of one encoder structure is a training set true color satellite cloud picture, and the output is a feature image subjected to convolution-pooling; the input of the other encoder structure is a multi-feature fusion feature matrix, the output is a multi-feature fusion feature image, and the image features are extracted through the encoder;
and performing parallel feature fusion on the feature images output by the double-encoder structure to serve as the input of the decoder structure, outputting the artificial labeling images corresponding to the true color satellite cloud images input by the encoder structure, and establishing a mapping relation between the feature images and the artificial labeling images corresponding to the true color satellite cloud images output by the training set.
CN202111184390.9A 2021-10-12 2021-10-12 Cloud layer segmentation method based on true color satellite cloud picture and image processing Active CN113643312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111184390.9A CN113643312B (en) 2021-10-12 2021-10-12 Cloud layer segmentation method based on true color satellite cloud picture and image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111184390.9A CN113643312B (en) 2021-10-12 2021-10-12 Cloud layer segmentation method based on true color satellite cloud picture and image processing

Publications (2)

Publication Number Publication Date
CN113643312A true CN113643312A (en) 2021-11-12
CN113643312B CN113643312B (en) 2022-02-08

Family

ID=78426501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111184390.9A Active CN113643312B (en) 2021-10-12 2021-10-12 Cloud layer segmentation method based on true color satellite cloud picture and image processing

Country Status (1)

Country Link
CN (1) CN113643312B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100514A (en) * 2022-05-11 2022-09-23 南京林业大学 Cloud tracking method based on FPGA
CN117332929A (en) * 2023-11-28 2024-01-02 珠江水利委员会珠江水利科学研究院 Intelligent flood prevention method and system for hydraulic engineering

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208001A (en) * 2013-02-06 2013-07-17 华南师范大学 Remote sensing image processing method combined with shape self-adaption neighborhood and texture feature extraction
CN110516629A (en) * 2019-08-30 2019-11-29 河海大学常州校区 A kind of nutritious obesity and classification method based on more Cloud Layer Characters
CN112488050A (en) * 2020-12-16 2021-03-12 安徽大学 Color and texture combined aerial image scene classification method and system
CN112749621A (en) * 2020-11-25 2021-05-04 厦门理工学院 Remote sensing image cloud layer detection method based on deep convolutional neural network
CN113239830A (en) * 2021-05-20 2021-08-10 北京航空航天大学 Remote sensing image cloud detection method based on full-scale feature fusion
CN113284153A (en) * 2021-05-14 2021-08-20 惠州中国科学院遥感与数字地球研究所空间信息技术研究院 Satellite cloud layer image processing method and device, computer equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208001A (en) * 2013-02-06 2013-07-17 华南师范大学 Remote sensing image processing method combined with shape self-adaption neighborhood and texture feature extraction
CN110516629A (en) * 2019-08-30 2019-11-29 河海大学常州校区 A kind of nutritious obesity and classification method based on more Cloud Layer Characters
CN112749621A (en) * 2020-11-25 2021-05-04 厦门理工学院 Remote sensing image cloud layer detection method based on deep convolutional neural network
CN112488050A (en) * 2020-12-16 2021-03-12 安徽大学 Color and texture combined aerial image scene classification method and system
CN113284153A (en) * 2021-05-14 2021-08-20 惠州中国科学院遥感与数字地球研究所空间信息技术研究院 Satellite cloud layer image processing method and device, computer equipment and storage medium
CN113239830A (en) * 2021-05-20 2021-08-10 北京航空航天大学 Remote sensing image cloud detection method based on full-scale feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SAIF MOHAMMED A IBRAHIM 等: "Satellite Image Classification using Multi Features Based Descriptors", 《INTERNATIONAL RESEARCH JOURNAL OF ADVANCED ENGINEERING AND SCIENCE》 *
康超萌: "基于神经网络的国产高分光学遥感图像云检测", 《中国优秀博硕士学位论文全文数据库(硕士)工程科技Ⅱ辑》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100514A (en) * 2022-05-11 2022-09-23 南京林业大学 Cloud tracking method based on FPGA
CN117332929A (en) * 2023-11-28 2024-01-02 珠江水利委员会珠江水利科学研究院 Intelligent flood prevention method and system for hydraulic engineering
CN117332929B (en) * 2023-11-28 2024-03-08 珠江水利委员会珠江水利科学研究院 Intelligent flood prevention method and system for hydraulic engineering

Also Published As

Publication number Publication date
CN113643312B (en) 2022-02-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant