CN107610114A - Optical satellite remote sensing image cloud snow mist detection method based on SVMs - Google Patents

Publication number: CN107610114A (application CN201710834224.6A; granted as CN107610114B)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: image, cloud, snow, fog, area
Legal status: Granted; Active (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis)
Inventors: 易尧华, 袁媛, 余长慧, 刘炯杰
Original assignee: Wuhan University (WHU); current assignee: Hunan Hejing Cultural Media Co., Ltd. (the listed assignees may be inaccurate)
Application filed by Wuhan University (WHU); priority to CN201710834224.6A

Classifications (Landscapes)

  • Image Processing (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses a satellite remote sensing image cloud, snow and fog detection method based on a support vector machine, comprising the following steps. First, a large number of sample images of different types of ground objects and of cloud, snow and fog are collected as a training set, and the gray-level and texture features of each image are extracted to form a feature set; a support vector machine is trained on the feature sets of all samples to obtain cloud, snow and fog image classifiers. Next, the class of each region of the image to be detected is determined with the trained classifiers, followed by a morphological closing operation and overlap-region correction, yielding the target-area types in the remote sensing image. Finally, new training samples are selected to build a second set of classifiers, the satellite remote sensing image is detected a second time, and the two detection results are compared to determine the final cloud, snow and fog determination for the image. Test results show that the method achieves high detection accuracy.

Description

Optical satellite remote sensing image cloud, snow and fog detection method based on a support vector machine
Technical Field
The invention belongs to the field of satellite remote sensing image quality detection, and particularly relates to a satellite remote sensing image cloud, snow and fog detection method based on a support vector machine.
Background
In optical satellite remote sensing images, the remote sensing information is often degraded by cloud, fog and snow: cloud and snow cover the surface information of the imaged area, while fog, haze and the like obscure much of the feature information in the image. It is therefore necessary to detect the cloud, snow and fog areas in remote sensing images and to reject image data in which the coverage of invalid information is too large, so as to improve the utilization rate of optical satellite remote sensing imagery.
Current remote sensing image cloud, snow and fog detection methods mainly focus on detecting cloud or fog, or on detecting and distinguishing cloud from fog and cloud from snow, chiefly by threshold methods and feature extraction methods. Cloud detection methods either set spectral thresholds on the reflectivity in different bands to decide whether a pixel is cloud, or extract image features and classify them. Fog detection methods mostly study typical cases, extracting features from remote sensing data to monitor fog. Cloud and snow detection methods exploit the fact that cloud and snow look similar in the visible band but differ strongly in the short-wave infrared band: snow is identified by constructing a cloud-snow contrast-enhancement factor, or cloud and snow are separated by computing the fractal dimension of the texture of a panchromatic image. Combined cloud, snow and fog detection is usually a superposition of the above methods.
A literature survey shows that existing cloud, snow and fog detection methods have the following problems. First, it is difficult to detect cloud, snow and fog simultaneously with conventional methods: each method is tied to a particular detection type, and a single method can hardly meet all detection requirements; threshold methods have low reliability, since the results depend on the spatial and temporal type of the scene, making them hard to generalize; feature extraction methods select insufficient image feature information, so their accuracy is limited. Second, existing cloud, snow and fog detection methods have low detection efficiency and high algorithmic complexity, making rapid detection and identification on large volumes of data difficult; they also place certain requirements on the remote sensing data source and thus have poor universality.
Disclosure of Invention
The invention aims to improve the timeliness of remote sensing image quality inspection and the utilization rate of remote sensing images, so that the method can be applied in quality inspection systems for domestic satellite image products such as Ziyuan-1 (ZY-1), Ziyuan-3 (ZY-3), Tianhui-1 (TH-1) and Gaofen-1 (GF-1).
In order to achieve this purpose, the invention provides a satellite remote sensing image cloud, snow and fog detection method based on a support vector machine; the technical scheme comprises the following steps:
step 1, collecting a large amount of cloud, snow, fog and ground object sample image data;
step 2, extracting gray features and texture features of various sample images to form feature vectors;
step 3, training the feature vectors of the sample images by using a support vector machine to respectively obtain a cloud image classifier, a snow image classifier and a fog image classifier which are formed by decision functions;
step 4, performing down-sampling processing on an original image of the satellite remote sensing image to be detected to obtain a thumbnail, performing image segmentation on the thumbnail to obtain sub-images, and calculating a feature vector consisting of gray features and texture features of all the sub-images;
step 5, classifying the sub-images of the remote sensing image of the satellite to be detected, comprising the following sub-steps,
step 5.1, respectively inputting the feature vectors extracted in the step 4 into the cloud, snow and fog image classifiers obtained in the step 3 for prediction classification;
step 5.2, dividing all the sub-images into a cloud area, a fog area, a snow area and a ground object area according to the types of the target areas;
step 5.3, generating three binary images — cloud vs. ground objects, fog vs. ground objects and snow vs. ground objects — in which the ground-object area takes the value zero in every image and the cloud, snow and fog areas take distinct non-zero values;
step 6, performing morphological 'closing' operation on the classification result obtained in the step 5;
and 7, comparing three binary image values at the same position to obtain detection results of cloud, snow and fog in the remote sensing image of the satellite to be detected.
Further, the implementation manner of the step 7 is as follows,
step 7.1, comparing the values of the three binary images at the same position: if all three values are the same, the position is judged a ground-object area; if exactly two values are the same (both zero), the position is judged to belong to the class represented by the third, non-zero value; if all three values differ, the position is an overlap of cloud, snow and fog — positions where one value is zero are recorded as overlap areas, and positions with no zero value as triple overlap areas;
step 7.2, repeating step 7.1 until all values of the three binary images have been compared, giving discrimination results for cloud, snow and fog areas, ground-object areas and overlap areas, then correcting the overlap areas: first, if an overlap area is contained inside another area, it is replaced by that surrounding area; second, the class of each overlap area is judged — if it is externally connected to an area of a determined class, it is assigned the class of the overlap excluding that determined class, and otherwise it is confirmed after the adjoining overlap areas have been judged; for a triple overlap area, if it is externally connected to a determined class area, it is reduced to the overlap excluding that class, and if it is externally connected to a (double) overlap area, its class is judged to be the class not contained in that overlap; finally, overlap areas that are externally connected only to different overlap areas are judged into the classes excluding their common class, yielding the discrimination result;
step 7.3, performing a morphological closing operation on the discrimination result to obtain the cloud, snow and fog detection results of the satellite remote sensing image to be detected.
Further, the method comprises a step 8: re-selecting an appropriate number of cloud/ground-object, fog/ground-object and snow/ground-object samples as training samples, repeating steps 2-7 to perform a second detection on the satellite remote sensing image, and comparing the result of the second detection with that of the first: if the two results at a position are the same, the class of that position is the class given by either result; if they differ, the class of that position is set to ground object. This yields the final detection result.
Further, the step 2 is realized as follows,
step 2.1, calculating gray level characteristics of the sample image, including a gray level mean value, a gray level variance, a first order difference and a histogram information entropy of the sample image;
wherein the calculation formula of the gray mean is

μ = (1/S) · Σ_{i=1..M} Σ_{j=1..N} f(i,j)

wherein f(i,j) is the gray value at (i,j), S = M × N, M is the width of the sample image, and N is the height of the sample image;

the gray variance is calculated by the formula

σ² = (1/S) · Σ_{i=1..M} Σ_{j=1..N} (f(i,j) − μ)²

the first-order difference is calculated by the formula (taken here as the mean absolute difference of adjacent pixels)

g = (1/S) · Σ_{i,j} |f(i,j) − f(i,j+1)|

the histogram information entropy is calculated by the formula

E = − Σ_{i=0..M} h[g](i) · log₂ h[g](i)

wherein h[g] is the histogram of the sample image, h[g](i) is the percentage of the pixels of the whole sample image at gray level i, and M is the maximum gray level;
step 2.2, calculating the texture features of the sample image, including the gradient standard deviation, the mixed entropy, the inverse difference moment and the texture fractal dimension;
wherein the gradient standard deviation is derived from the gray-gradient co-occurrence matrix

G(i,j; d,θ) = #{ (x1,y1),(x2,y2) | f(x1,y1) = i, f(x2,y2) = j, |(x1,y1) − (x2,y2)| = d, ∠((x1,y1),(x2,y2)) = θ }

where d represents the distance between two pixels, θ represents the direction angle between pixels, f(x1,y1) and f(x2,y2) represent the gray values at (x1,y1) and (x2,y2), ∠ represents the angle between the pixel pair and the horizontal, # represents the number of pixel pairs in the set satisfying the stated conditions, and Σ#LxLy represents the total number of pixel pairs in the given positional relationship; Lg represents the maximum of the gray level and L the maximum of the gradient;

the mixed entropy is calculated by the formula

E = − Σ_i Σ_j H(i,j) · log H(i,j)

the inverse difference moment is calculated by the formula

Q = Σ_i Σ_j H(i,j) / (1 + (i − j)²)

where H(i,j) is the normalized co-occurrence matrix;
the fractal Brownian random field method is used for solving the texture fractal dimension of the sample image, and the fractal dimension D of the image has the expression,
D=n+1-H
wherein n refers to the spatial dimension of the sample image, and H is a self-similarity parameter;
step 2.3, combining the gray features and the texture features into an 8-dimensional feature vector.
Further, the implementation manner of the step 3 is as follows,
step 3.1, selecting part of the cloud samples and ground-object samples as training samples, and using their feature vectors as the training set T = {(x_1, y_1), …, (x_N, y_N)} of the image classifier, i = 1…N, y_i ∈ ψ = {−1, 1}, wherein 1 represents the positive class (cloud area) and −1 the negative class (ground-object area), x_i ∈ R^n is a feature vector, and N is the number of samples;
step 3.2, constructing the classification hyperplane with a support vector machine of the C-SVC model and computing the Gaussian kernel function

K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))

wherein x_i and x_j are the feature vectors of samples i and j, ‖x_i − x_j‖² is the squared Euclidean distance, σ is the variance, i = 1…N, j = 1…N, and N is the number of samples;
step 3.3, solving the Lagrange multiplier vector of the optimal classification hyperplane of the feature space by a convex quadratic programming method (the standard C-SVC dual problem):

min over α of (1/2) · Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) − Σ_i α_i, subject to Σ_i α_i y_i = 0 and 0 ≤ α_i ≤ C, i = 1, 2, …, N

wherein α = (α_1, …, α_N)^T is the Lagrange multiplier vector, α_i and α_j denote the i-th and j-th Lagrange multipliers, x_i and x_j the feature vectors of the i-th and j-th samples, y_i and y_j the classes of the i-th and j-th samples, C is a penalty parameter, and N is the number of samples; the optimal solution of the Lagrange multiplier vector is

α* = (α_1*, α_2*, …, α_i*, …, α_N*)^T

wherein α_i* represents the optimal solution for the i-th Lagrange multiplier;
step 3.4, solving the intercept of the optimal classification hyperplane of the feature space; the calculation formula (for any index j with 0 < α_j* < C) is

b* = y_j − Σ_{i=1..N} α_i* y_i K(x_i, x_j)

wherein α_i* is the optimal solution for the i-th Lagrange multiplier, y_i is the class of the i-th sample, and N is the number of samples;
step 3.5, substituting the obtained Gaussian kernel function, the Lagrange optimal solution and the hyperplane intercept into the decision function

f(x) = sign( Σ_{i=1..N} α_i* y_i K(x_i, x) + b* );
step 3.6, taking the remaining cloud samples and ground-object samples as test samples, testing and optimizing the decision function, and obtaining the corresponding cloud image classifier;
and 3.7, repeating the steps 3.1-3.6 to respectively obtain a snow image classifier and a fog image classifier.
Further, in step 4, if the remote sensing image to be detected is a panchromatic image, down-sampling is applied directly; if it is a multispectral image, the RGB three bands are used for down-sampling.
Further, step 6 is implemented by selecting a square structuring element of size 3 × 3, performing a dilation operation on each of the three binary images, and then performing an erosion operation on the resulting images with the same structuring element.
Compared with the prior art, the invention has the following advantages: one training supports many detections — the image classifier is obtained by training on a large number of images and is simply reused at detection time, and since the support vector machine has low time complexity in the prediction and classification stage, the area types can be detected quickly. Tests show that the method is applicable to panchromatic images and n-channel multispectral images; applied to cloud detection on several domestic satellite remote sensing images — ZY-1 02, ZY-3, TH-1 and GF-1 — it reaches accuracies of 94.8%, 96.4%, 93.2% and 95.2%, respectively.
Drawings
Fig. 1 is a flow chart of an implementation of the embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and practice of the present invention for those of ordinary skill in the art, the present invention will be described in further detail with reference to the accompanying drawings and examples, which are provided for illustration and explanation and are not intended to limit the scope of the present invention.
Referring to fig. 1, taking the panchromatic image data of the ZY-1 02C and ZY-3 satellites and the multispectral remote sensing image data of TH-1 (Tianhui-1) as examples, the implementation steps of the method are as follows:
step 1, sample acquisition
The original sample image is down-sampled into a 1024 × 1024 pixel 8-bit bmp thumbnail, and the thumbnail is segmented. If the remote sensing image is panchromatic, down-sampling is applied directly; if it is multispectral, the RGB three bands are used for down-sampling. The panchromatic thumbnail is divided into 32 × 32 sample blocks and the multispectral thumbnail into 16 × 16 blocks; 1500 ground-object samples, 1000 cloud samples, 1000 snow samples and 1000 fog samples are selected as training sample data.
Step 2, feature extraction. Generally speaking, the average brightness of a cloud area in panchromatic and multispectral images is much higher than that of a fog area, and the brightness of a fog area is higher than that of a ground-object area; cloud and snow areas have similar spectral characteristics. At the same time, cloud, snow, fog and ground objects differ markedly in gray-level distribution, gray-level variation and so on, so the gray features of the image can distinguish the target-area images from ground-object images to a certain extent.
Extracting the gray characteristic and the texture characteristic vector value of the sample image to form an 8-dimensional characteristic vector, wherein the method comprises the following specific implementation steps of:
step 2.1, calculating the gray level characteristics of the sample image
Step 2.1.1, the gray mean is calculated with the following formula:

μ = (1/S) · Σ_{i=1..M} Σ_{j=1..N} f(i,j)

where f(i,j) is the grayscale value at (i,j), S = M × N, M is the width of the sample image, and N is the height of the sample image.

Step 2.1.2, the gray variance of the sample image:

σ² = (1/S) · Σ_{i=1..M} Σ_{j=1..N} (f(i,j) − μ)²

Step 2.1.3, the first-order difference of the sample image (taken here as the mean absolute difference of adjacent pixels):

g = (1/S) · Σ_{i,j} |f(i,j) − f(i,j+1)|

Step 2.1.4, the histogram information entropy of the sample image:

E = − Σ_{i=0..M} h[g](i) · log₂ h[g](i)

where h[g] is the histogram of the sample image, h[g](i) is the percentage of pixels of the entire image at gray level i, and M is the maximum gray level.
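The four gray-level statistics of step 2.1 can be sketched as follows. This is an illustrative sketch, not the patent's code: the function name `gray_features` is mine, and the first-order difference is computed as the mean absolute difference of horizontally adjacent pixels, which is only one plausible reading of the (unreproduced) formula.

```python
import numpy as np

def gray_features(block):
    """Gray mean, variance, first-order difference and histogram entropy
    of an 8-bit sample block (step 2.1, illustrative reading)."""
    f = block.astype(np.float64)
    mean = f.mean()                      # (1/S) * sum f(i,j)
    var = f.var()                        # (1/S) * sum (f(i,j) - mean)^2
    # first-order difference: mean |f(i,j) - f(i,j+1)| over adjacent pixels
    diff = np.abs(np.diff(f, axis=1)).mean()
    # histogram information entropy over 256 gray levels
    hist = np.bincount(block.ravel(), minlength=256) / f.size
    p = hist[hist > 0]
    entropy = -(p * np.log2(p)).sum()
    return mean, var, diff, entropy
```

A flat block yields zero variance, difference and entropy, while a two-level checkerboard yields entropy 1 bit, matching the intuition that cloud/fog blocks (smooth) score low and textured ground objects score high.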
Step 2.2, calculating the texture features of the sample image. From the perspective of human visual characteristics, the texture of cloud, snow and fog in satellite remote sensing images is usually single and simpler than that of ground objects; moreover, the edges of cloud, snow and fog areas are blurred and smooth, while ground-object edges are generally sharp with large gradients. The texture and gradient information of the satellite remote sensing image can therefore be used to detect and separate the cloud, snow and fog areas from ground objects. In addition, the texture features of cloud, snow and fog images clearly differ from one another. Cloud texture is random, variable and hard to detect, appearing disordered and irregular, with thick, blurred edge texture; fog texture is relatively uniform and smooth, with regular edge forms; snow texture, influenced by the underlying ground, has better directionality and large gradient variation. The different textures of cloud, snow and fog are distinguished through the combined gray-level and gradient information of the image.
Step 2.2.1, the gray-gradient co-occurrence matrix G(i,j; d,θ) of the sample image is calculated with the following formula:

G(i,j; d,θ) = #{ (x1,y1),(x2,y2) | f(x1,y1) = i, f(x2,y2) = j, |(x1,y1) − (x2,y2)| = d, ∠((x1,y1),(x2,y2)) = θ }

where d represents the distance between two pixels, θ represents the direction angle between pixels, (x,y) represents the coordinates of a pixel, f(x,y) represents the gray value at that point, ∠ represents the angle between the pixel pair and the horizontal, and # represents the number of pixel pairs in the set satisfying the stated conditions. For example, if the values of two pixels are 1 and 2 respectively, their distance is 1 and θ = 0° (the horizontal direction), then the number of pixel pairs meeting these conditions is counted.
Step 2.2.2, the co-occurrence matrix G(i,j; d,θ) is normalized to H(i,j; d,θ) with the formula:

H(i,j; d,θ) = G(i,j; d,θ) / Σ#LxLy

where Σ#LxLy represents the total number of pixel pairs in the given positional relationship (i.e. with the same distance d and angle θ).
Step 2.2.3, the gradient standard deviation of the sample image is calculated. First the gradient mean:

T = Σ_{i=1..Lg} Σ_{j=1..L} j · H(i,j)

where Lg represents the maximum of the gray level and L the maximum of the gradient. Substituting the gradient mean T into the following formula gives the gradient standard deviation:

σ_T = sqrt( Σ_{i=1..Lg} Σ_{j=1..L} (j − T)² · H(i,j) )
step 2.2.4, calculating the mixed entropy of the sample image, wherein the calculation formula is as follows:
Step 2.2.5, the local stationarity of the sample image is extracted via the inverse difference moment, calculated with the formula:

Q = Σ_i Σ_j H(i,j) / (1 + (i − j)²)
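Steps 2.2.1-2.2.5 can be sketched as follows for the single case d = 1, θ = 0° (horizontal pairs). This is a simplified illustration — the function names are mine, the matrix is built over raw gray values rather than quantized gray/gradient levels, and log base 2 is an assumption:

```python
import numpy as np

def cooccurrence(block, levels=256):
    """Normalized co-occurrence matrix H for horizontally adjacent pixel
    pairs (d=1, theta=0), as in steps 2.2.1-2.2.2 (simplified)."""
    a, b = block[:, :-1].ravel(), block[:, 1:].ravel()
    G = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(G, (a, b), 1)          # count each (i, j) pair
    return G / G.sum()               # normalize by the total pair count

def mixed_entropy(H):
    """E = -sum H(i,j) * log2 H(i,j) over non-zero entries (step 2.2.4)."""
    p = H[H > 0]
    return -(p * np.log2(p)).sum()

def inverse_difference_moment(H):
    """Q = sum H(i,j) / (1 + (i-j)^2) -- local stationarity (step 2.2.5)."""
    i, j = np.indices(H.shape)
    return (H / (1.0 + (i - j) ** 2)).sum()
```

For a uniform block all pair mass sits on the diagonal, so the mixed entropy is 0 and the inverse difference moment is 1 — the "smooth texture" extreme that cloud and fog blocks approach.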
Step 2.2.6, the texture fractal dimension of the sample image is obtained with the fractional Brownian random field method: a constant H (0 < H < 1) is sought such that the distribution function

F(t) = P( [f(x + Δx) − f(x)] / ‖Δx‖^H < t )

is independent of x and Δx, where F(t) is the distribution function of the normalized increments, H is the self-similarity parameter, f(x) is a real-valued random function of x, and n is the spatial dimension of the sample image; the fractal dimension D of the image is then:

D = n + 1 − H
step 3, training the image classifier
The feature vectors of the sample images are trained with a support vector machine to obtain a cloud image classifier, a snow image classifier and a fog image classifier, each formed by a decision function; the specific implementation comprises the following sub-steps:
step 3.1, taking 80% of the cloud sample and the surface feature sample as training samples, and taking the feature vectors of the training samples as a training set T = { (x) of the training image classifier 1 ,y 1 )...(x i ,y i ) I =1 \ 8230n. Wherein, y i E ψ = { -1,1} wherein 1 represents a positive class, i.e., a cloud region class, -1 represents a negative class, i.e., a ground feature region class, x i ∈R n ,x i Is a feature vector, and N is the number of samples.
Step 3.2, a support vector machine of the C-SVC model is used to construct the classification hyperplane, computing the Gaussian kernel function:

K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))

where x_i and x_j are the feature vectors of samples i and j, ‖x_i − x_j‖² is the squared Euclidean distance, and σ is the variance, whose value is chosen appropriately according to the experimental results. Since it is difficult to classify the feature vectors completely correctly, the purpose of σ can be understood as setting a fault-tolerance range within which errors are ignored. i = 1…N, j = 1…N, and N is the number of samples.
Step 3.3, the Lagrange multiplier vector of the optimal classification hyperplane of the feature space is solved by a convex quadratic programming method (the standard C-SVC dual problem):

min over α of (1/2) · Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) − Σ_i α_i, subject to Σ_i α_i y_i = 0 and 0 ≤ α_i ≤ C, i = 1, 2, …, N

where α = (α_1, …, α_N)^T is the Lagrange multiplier vector, α_i and α_j are the i-th and j-th Lagrange multipliers, x_i and x_j the feature vectors of the i-th and j-th samples, y_i and y_j the classes of the i-th and j-th samples, C is a penalty parameter, and N is the number of samples; the optimal solution of the Lagrange multiplier vector is:

α* = (α_1*, α_2*, …, α_i*, …, α_N*)^T

where α_i* represents the optimal solution for the i-th Lagrange multiplier.
Step 3.4, the intercept of the optimal classification hyperplane of the feature space is solved; the calculation formula (for any index j with 0 < α_j* < C) is:

b* = y_j − Σ_{i=1..N} α_i* y_i K(x_i, x_j)

where α_i* is the optimal solution for the i-th Lagrange multiplier, y_i is the class (positive or negative) of the i-th sample, and N is the number of samples.
Step 3.5, the obtained Gaussian kernel function, the Lagrange optimal solution and the hyperplane intercept are substituted into the decision function:

f(x) = sign( Σ_{i=1..N} α_i* y_i K(x_i, x) + b* )
and 3.6, taking 20% of the cloud samples and the ground feature samples as a test sample set, testing the decision function, optimizing the decision function, and obtaining the corresponding cloud image classifier.
And 3.7, repeating the steps 3.1-3.6 to respectively obtain a snow image classifier and a fog image classifier.
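The kernel and decision function of steps 3.2-3.5 can be sketched in a few lines. This is a minimal illustration with names of my own (`gaussian_kernel`, `decision`); the multipliers and intercept are supplied directly rather than obtained from the quadratic program, and σ = 1 is an assumed value:

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma=1.0):
    """K(xi, xj) = exp(-||xi - xj||^2 / (2*sigma^2))  (step 3.2)."""
    d2 = np.sum((xi - xj) ** 2)          # squared Euclidean distance
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decision(x, support, y, alpha, b, sigma=1.0):
    """f(x) = sign(sum_i alpha_i* y_i K(x_i, x) + b*)  (step 3.5)."""
    s = sum(a * yi * gaussian_kernel(xi, x, sigma)
            for xi, yi, a in zip(support, y, alpha))
    return 1 if s + b >= 0 else -1
```

With two support vectors of opposite class, a query point near the positive vector is labeled +1 (cloud) and one near the negative vector −1 (ground object), which is exactly how the trained classifiers of steps 5.1-5.2 assign sub-images.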
Step 4, extracting the characteristics of the image to be detected
The original image to be detected is down-sampled into a 1024 × 1024 pixel 8-bit bmp thumbnail (directly for a panchromatic image, or via the RGB three bands for a multispectral image). The thumbnail is then segmented into 1024 sub-images of 32 × 32 pixels, and the feature vector of every sub-image — gray features and texture features — is extracted as described in step 2.
Step 5, classifying the images to be detected
Step 5.1, the feature vectors extracted in step 4 are input into the corresponding cloud, snow and fog image classifiers obtained in step 3 for prediction, and each feature vector is classified by the decision function.
Step 5.2, step 5.1 is repeated until all sub-images are classified; according to the class of the target area, all sub-images are divided into cloud areas, fog areas, snow areas and non-cloud/snow/fog areas (i.e. ground-object areas).
Step 5.3, the classification result is converted into three binary images — cloud vs. ground objects, fog vs. ground objects and snow vs. ground objects — in which the ground-object areas share the value zero and the cloud, snow and fog areas take distinct non-zero values.
Step 6, morphological closing operation
A square structuring element of size 3 × 3 is selected; a dilation operation is applied to each of the three binary images, and an erosion operation is then applied to the results with the same structuring element, so that the cloud, snow and fog areas are connected into wholes and noise areas at the edges are eliminated.
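The closing operation of step 6 (dilation then erosion with a 3 × 3 square) can be written out with plain NumPy so each step is explicit. A hedged sketch — production code would typically use a morphology routine from an image library instead; the helper names are mine:

```python
import numpy as np

def _dilate(img):
    """Binary dilation with a 3x3 square structuring element."""
    p = np.pad(img, 1)                       # pad with 0 so borders shrink-safe
    return np.max([p[r:r + img.shape[0], c:c + img.shape[1]]
                   for r in range(3) for c in range(3)], axis=0)

def _erode(img):
    """Binary erosion with a 3x3 square structuring element."""
    p = np.pad(img, 1, constant_values=1)    # pad with 1 so borders are kept
    return np.min([p[r:r + img.shape[0], c:c + img.shape[1]]
                   for r in range(3) for c in range(3)], axis=0)

def close(img):
    """Morphological closing: dilation followed by erosion (step 6)."""
    return _erode(_dilate(img))
```

Closing fills one-pixel gaps inside a detected area — e.g. a row `1 1 0 1 1` becomes `1 1 1 1 1` — which is what connects fragmented cloud/snow/fog sub-image labels into contiguous regions.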
Step 7, correction of the overlap region
Step 7.1, the values of the three binary images are compared at each position: if the three values at the same position are the same, the position is judged a ground-object area; if exactly two values are the same (both zero), the position belongs to the class represented by the third, non-zero value; if all three values differ, the sub-image contains an overlap of cloud, snow and fog — positions where one value is zero are recorded as overlap areas, and positions with no zero value as triple overlap areas.
Step 7.2, step 7.1 is repeated until all values of the three binary images have been compared, giving discrimination results for cloud, snow and fog areas, ground-object areas and overlap areas; the overlap areas are then corrected. First, if an overlap area is contained inside another area (an overlap area or a determined class area), it is replaced by that surrounding area. Second, the class of each overlap area is judged: if it is externally connected to an area of a determined class, it is assigned the class of the overlap excluding that determined class; otherwise it is confirmed after the adjoining overlap areas have been judged. For a triple overlap area, if it is externally connected to a determined class area, it is reduced to the overlap excluding that class; if it is externally connected to a (double) overlap area, its class is judged to be the class not contained in that overlap. Finally, overlap areas that are externally connected only to different overlap areas are judged into the classes excluding their common class, yielding the discrimination result.
For example: if a cloud-snow overlap area is surrounded by a determined fog area, it is re-labeled a fog area; if it is surrounded by a cloud-fog overlap area, it is re-labeled a cloud-fog overlap area; if a cloud-snow overlap area is externally connected to a determined cloud area, it is judged a snow area; if a cloud-snow-fog (triple) overlap area is externally connected to a determined cloud area, it is judged a snow-fog overlap area; if a cloud-snow-fog area is externally connected to a cloud-fog overlap area, it is judged a determined snow area; and if a cloud-snow overlap area and a cloud-fog overlap area are externally connected to each other, they are judged a snow area and a fog area, respectively.
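The per-position comparison of step 7.1 can be sketched as a small function. The value conventions are assumptions (ground = 0 in every map; the cloud, snow and fog maps use the distinct non-zero values 1, 2 and 3), and the function name is mine:

```python
def classify(c, s, f):
    """Judge one position from its values in the cloud, snow and fog
    binary maps (step 7.1, illustrative value convention)."""
    nonzero = [v for v in (c, s, f) if v != 0]
    if len(nonzero) == 0:
        return "ground"          # all three maps agree on zero
    if len(nonzero) == 1:        # two zeros -> class of the third value
        return {1: "cloud", 2: "snow", 3: "fog"}[nonzero[0]]
    if len(nonzero) == 2:
        return "overlap"         # one zero present -> double overlap
    return "triple overlap"      # no zero -> all three detectors fired
```

The "overlap" and "triple overlap" labels produced here are the ones subsequently corrected by the containment and external-connection rules of step 7.2.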
And 7.3, performing the morphological 'closing' operation of step 6 on the discrimination result to obtain the final cloud, snow and fog detection result.
Step 8, secondary detection
Select 500 new cloud/ground-object samples, fog/ground-object samples and snow/ground-object samples to build a new support vector machine classifier, choosing highlighted (high-brightness) samples from the ground objects. Perform a 'secondary detection' on the image to be detected and compare its result with that of the first detection: if the two results at the same position are the same, the category of that position is the category given by either result; if they differ, the position is judged as a ground object, yielding the final detection result. For example, if the second detection gives cloud or snow while the first detection gives fog, the area is judged as a ground object; an area is judged as cloud only if both the first and second detections give cloud.
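The agreement vote described above amounts to a per-pixel comparison of two label maps. A minimal numpy sketch, assuming an integer encoding (0 = ground object, with distinct non-zero codes for cloud, snow and fog — the codes themselves are an illustrative assumption):

```python
import numpy as np

# Per-pixel vote between the first and second detection: where the two label
# maps agree the common label is kept; everywhere else the pixel falls back
# to the ground-object label (0). Encoding 1=cloud, 2=snow, 3=fog is assumed.

def merge_detections(first, second, ground=0):
    first = np.asarray(first)
    second = np.asarray(second)
    return np.where(first == second, first, ground)

first_pass = np.array([[1, 3], [2, 0]])   # cloud, fog / snow, ground
second_pass = np.array([[1, 1], [2, 3]])  # cloud, cloud / snow, fog
print(merge_detections(first_pass, second_pass))
```

In this toy example the disagreeing pixels (fog vs. cloud, ground vs. fog) become ground objects, matching the rule in the text.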
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (7)

1. A method for detecting cloud, snow and fog in an optical satellite remote sensing image based on a support vector machine, characterized by comprising the following steps:
step 1, collecting a large amount of cloud, snow, fog and ground object sample image data;
step 2, extracting gray features and texture features of various sample images to form feature vectors;
step 3, training the feature vectors of the sample images by using a support vector machine to respectively obtain a cloud image classifier, a snow image classifier and a fog image classifier which are formed by decision functions;
step 4, performing down-sampling processing on an original image of the satellite remote sensing image to be detected to obtain a thumbnail, performing image segmentation on the thumbnail to obtain sub-images, and calculating a feature vector consisting of gray features and texture features of all the sub-images;
step 5, classifying the sub-images of the remote sensing image of the satellite to be detected, comprising the following sub-steps,
step 5.1, respectively inputting the feature vectors extracted in the step 4 into the cloud, snow and fog image classifiers obtained in the step 3 for prediction classification;
step 5.2, dividing all the sub-images into a cloud area, a fog area, a snow area and a ground object area according to the types of the target areas;
step 5.3, dividing the cloud and ground feature area, the fog and ground feature area and the snow and ground feature area into three binary images, wherein the ground feature area in each image takes the same zero value, and the cloud, snow and fog areas take different image values;
step 6, performing morphological 'closing' operation on the classification result obtained in the step 5;
and 7, comparing three binary image values at the same position to obtain detection results of cloud, snow and fog in the remote sensing image of the satellite to be detected.
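The segmentation of the thumbnail into sub-images in step 4 can be sketched as a simple non-overlapping tiling. The 32 × 32 block size below is an assumption for illustration; the claim does not fix the sub-image size:

```python
import numpy as np

# Sketch of the step-4 segmentation: cut the down-sampled thumbnail into
# non-overlapping square sub-images; ragged right/bottom edges are dropped.
# The block size (32) is illustrative, not specified by the claim.

def split_blocks(thumb, block=32):
    h, w = thumb.shape[:2]
    h, w = h - h % block, w - w % block        # drop ragged edges
    thumb = thumb[:h, :w]
    return (thumb.reshape(h // block, block, w // block, block)
                 .swapaxes(1, 2)
                 .reshape(-1, block, block))

tiles = split_blocks(np.zeros((100, 70)), block=32)
print(tiles.shape)   # one row per sub-image
```

Each returned tile would then be fed to the gray/texture feature extraction of step 2.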
2. The method for detecting the cloud and snow fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 1, wherein: the implementation of said step 7 is as follows,
step 7.1, comparing the three binary image values at the same position: if the image values of the three images at the same position are all the same, the position is judged as a ground object area; if two of the three values are the same, the position is judged as the category area represented by the third image value; if the image values of the three images at the same position are all different, the position is judged as an overlapping area of cloud, snow and fog, a point whose three values include a zero being recorded as a (double) overlapping area and a point with no zero value being recorded as a triple overlapping area;
7.2, repeating step 7.1 and comparing all image values of the three binary images to obtain the discrimination results for the cloud, snow and fog areas, the ground object areas and the overlapping areas, and then correcting the overlapping areas: first, judging whether an overlapping area is contained within another area and, if so, reassigning it to the category of the containing area; second, judging the category of each overlapping area: if it is externally connected to an area of a determined category, assigning it the category remaining after that determined category is removed from its candidate set, and otherwise deferring the decision until the other overlapping areas have been judged; if it is externally connected to another overlapping area, assigning it the category not shared with that neighboring overlapping area; finally, for overlapping areas externally connected only to different overlapping areas, assigning each the category remaining after the common category is removed, thereby obtaining the discrimination result;
and 7.3, performing the morphological 'closing' operation on the discrimination result to obtain the detection results of cloud, snow and fog in the satellite remote sensing image to be detected.
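The per-pixel rule of step 7.1 can be sketched with numpy, assuming the three binary images use 0 for ground objects and distinct non-zero values for cloud, snow and fog (the output codes for double/triple overlap below are illustrative assumptions):

```python
import numpy as np

# Sketch of step 7.1: counting how many of the three binary maps are non-zero
# at a pixel reproduces the comparison rule -- 0 hits: ground object; 1 hit:
# the single class that fired; 2 hits: double overlap (one value is zero);
# 3 hits: triple overlap (no zero value). Label codes are illustrative.

GROUND, OVERLAP2, OVERLAP3 = 0, 9, 10

def compare_masks(cloud, snow, fog):
    cloud, snow, fog = (np.asarray(m) for m in (cloud, snow, fog))
    hits = (cloud != 0).astype(int) + (snow != 0).astype(int) + (fog != 0).astype(int)
    single = cloud + snow + fog            # valid where exactly one mask fired
    out = np.full(cloud.shape, GROUND, dtype=int)
    out[hits == 1] = single[hits == 1]
    out[hits == 2] = OVERLAP2
    out[hits == 3] = OVERLAP3
    return out
```

With cloud = 1, snow = 2, fog = 3, a pixel where all three classifiers fire becomes a triple-overlap point, and a pixel where none fire stays a ground object.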
3. The method for detecting the cloud, snow and fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 1 or 2, characterized by further comprising a step 8: selecting an appropriate number of cloud/ground-object samples, fog/ground-object samples and snow/ground-object samples as training samples, repeating steps 2-7 to perform a secondary detection on the satellite remote sensing image to be detected, and comparing the result of the secondary detection with the first detection result; if the two detection results at the same position are the same, the category of that position is the category given by either result; if they differ, the position is judged as a ground object, thereby obtaining the final detection result.
4. The method for detecting the cloud and the fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 3, wherein: the implementation of said step 2 is as follows,
step 2.1, calculating gray level characteristics of the sample image, including a gray level mean value, a gray level variance, a first order difference and a histogram information entropy of the sample image;
wherein the calculation formula of the gray level mean value is μ = (1/S) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j),
wherein f(i,j) is the gray value at (i,j), S = M × N, M is the width of the sample image, and N is the height of the sample image;
the gray-scale variance is calculated by the formula σ² = (1/S) Σ_{i=1}^{M} Σ_{j=1}^{N} (f(i,j) − μ)², wherein μ is the gray level mean value and S = M × N,
the first order difference is calculated by the formula,
the calculation formula of the histogram information entropy is E = −Σ_{i=0}^{M} h[g](i) · log₂ h[g](i),
wherein h[g] is the histogram of the sample image, h[g](i) is the percentage of the pixels at gray level i in the whole sample image, and M is the maximum gray level;
step 2.2, calculating texture features of the sample image, including gradient standard deviation, mixed entropy, inverse difference moment and texture fractional dimension of the sample image;
wherein the standard deviation of the gradient is calculated by the formula,
G(i,j;d,θ) = #{((x_1,y_1),(x_2,y_2)) | f(x_1,y_1) = i, f(x_2,y_2) = j, |(x_1,y_1) − (x_2,y_2)| = d, ∠((x_1,y_1),(x_2,y_2)) = θ}, wherein d represents the distance between two pixels, θ represents the direction angle between pixels, f(x_1,y_1) and f(x_2,y_2) respectively represent the gray values at (x_1,y_1) and (x_2,y_2), ∠ represents the angle between the pixel pair and the horizontal direction, # represents the number of pixel pairs in the set satisfying the stated conditions, L_x × L_y represents the total number of pixel pairs in the specified positional relationship; L_g represents the maximum of the gray level, and L represents the maximum of the gradient;
the calculation formula of the mixed entropy is that,
the calculation formula of the inverse difference is that,
the fractional Brownian random field method is used to obtain the texture fractal dimension of the sample image, the expression of the fractal dimension D of the image being,
D=n+1-H
wherein n refers to the spatial dimension of the sample image, and H is a self-similarity parameter;
and 2.3, forming 8-dimensional feature vectors by the gray features and the texture features.
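The four gray features of step 2.1 can be sketched in numpy. Note one assumption: the patent's first-order-difference formula did not survive extraction, so it is taken here as the mean absolute difference of horizontally adjacent pixels; the entropy uses base-2 logarithms:

```python
import numpy as np

def gray_features(img, levels=256):
    """Gray mean, variance, first-order difference and histogram entropy.

    The first-order difference is computed as the mean absolute difference of
    horizontally adjacent pixels -- an assumed reading, since the original
    formula image is not reproduced in the text.
    """
    img = np.asarray(img, dtype=float)
    mean = img.mean()                       # (1/S) * sum of f(i, j)
    var = img.var()                         # (1/S) * sum of (f(i, j) - mean)^2
    fod = np.abs(np.diff(img, axis=1)).mean() if img.shape[1] > 1 else 0.0
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                   # h[g](i): fraction of pixels per level
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()       # histogram information entropy
    return mean, var, fod, entropy
```

A constant image yields zero variance and zero entropy; an image split evenly between two gray levels yields an entropy of exactly one bit.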
5. The method for detecting the cloud and the fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 4, wherein:
the implementation of said step 3 is as follows,
step 3.1, selecting part of the cloud samples and ground object samples as training samples, and taking the feature vectors of the training samples as the training set T = {(x_1, y_1), ..., (x_N, y_N)} of the image classifier, wherein y_i ∈ ψ = {−1, 1}, i = 1...N, 1 represents the positive class (the cloud region class), −1 represents the negative class (the ground object region class), x_i ∈ R^n is a feature vector, and N is the number of samples;
step 3.2, constructing a classification hyperplane by adopting a support vector machine of the C-SVC model, and calculating the Gaussian kernel function K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)),
wherein x_i and x_j respectively refer to the feature vectors of samples i and j, ‖x_i − x_j‖² is the square of their Euclidean distance, σ is the variance, i = 1...N, j = 1...N, and N is the number of samples;
step 3.3, solving the Lagrange multiplier vector of the optimal classification hyperplane of the feature space by a convex quadratic programming method, namely minimizing
(1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j K(x_i, x_j) − Σ_{i=1}^{N} α_i, subject to Σ_{i=1}^{N} α_i y_i = 0 and 0 ≤ α_i ≤ C, i = 1, 2, ..., N,
wherein α = (α_1, ..., α_N) is the Lagrange multiplier vector, α_i and α_j denote the i-th and j-th Lagrange multipliers, x_i and x_j denote the feature vectors of the i-th and j-th samples, y_i and y_j respectively denote the classes of the i-th and j-th samples, C is a penalty parameter, and N is the number of samples; the optimal solution of the Lagrange multiplier vector is obtained as
α* = (α_1*, α_2*, ..., α_N*)^T,
wherein α_i* represents the optimal solution of the i-th Lagrange multiplier;
step 3.4, solving the intercept of the optimal classification hyperplane of the feature space, the calculation formula being b* = y_j − Σ_{i=1}^{N} α_i* y_i K(x_i, x_j) for any j with 0 < α_j* < C,
wherein α_i* is the optimal solution of the i-th Lagrange multiplier, y_i is the class of the i-th sample, and N is the number of samples;
step 3.5, substituting the obtained Gaussian kernel function, Lagrange optimal solution and hyperplane intercept into the decision function f(x) = sign( Σ_{i=1}^{N} α_i* y_i K(x_i, x) + b* ),
step 3.6, taking the rest cloud samples and ground object samples as test samples, testing the decision function, optimizing the decision function, and simultaneously obtaining corresponding cloud image classifiers;
and 3.7, repeating the steps 3.1-3.6 to respectively obtain a snow image classifier and a fog image classifier.
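The Gaussian kernel of step 3.2 and the decision function of step 3.5 can be sketched directly in numpy. The support vectors, multipliers and intercept below are toy values for illustration; in the method they come from solving the quadratic program of steps 3.3-3.4:

```python
import numpy as np

# Sketch of the C-SVC pieces: the RBF kernel K(x_i, x_j) and the decision
# function f(x) = sign(sum_i alpha_i* y_i K(x_i, x) + b*). The multiplier
# values and intercept here are hand-set, not trained.

def rbf_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def decision(x, support_x, y, alpha, b, sigma=1.0):
    s = sum(a_i * y_i * rbf_kernel(x_i, x, sigma)
            for x_i, y_i, a_i in zip(support_x, y, alpha))
    return 1 if s + b >= 0 else -1   # +1 = cloud class, -1 = ground object

# Two toy support vectors, one per class.
sx = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
labels = [1, -1]
alphas = [1.0, 1.0]
print(decision(np.array([0.5, 0.5]), sx, labels, alphas, b=0.0))
```

A test point near the positive support vector is classified as cloud; one near the negative support vector as ground object. Repeating this with snow and fog training sets yields the other two binary classifiers of step 3.7.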
6. The method for detecting the cloud, snow and fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 5, wherein: in the step 4, if the remote sensing image to be detected is a panchromatic image, down-sampling is applied directly; if it is a multispectral image, down-sampling is performed using the RGB three bands.
7. The method for detecting the cloud, snow and fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 6, wherein: the implementation of step 6 is to select a square structuring element of size 3 × 3, perform a dilation operation on each of the three binary images, and then perform an erosion operation on the resulting images with the same structuring element.
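The closing of step 6 (dilation with a 3 × 3 square element, then erosion with the same element) can be sketched with plain numpy maximum/minimum filters; the edge padding chosen here is an implementation assumption:

```python
import numpy as np

# Sketch of the step-6 "closing" on a 0/1 binary image: dilate with a 3x3
# square structuring element, then erode with the same element. Edge handling
# (constant padding) is an assumption, not specified by the claim.

def _filter(img, reduce_fn, pad_value):
    padded = np.pad(img, 1, constant_values=pad_value)
    stacked = np.stack([padded[di:di + img.shape[0], dj:dj + img.shape[1]]
                        for di in range(3) for dj in range(3)])
    return reduce_fn(stacked, axis=0)

def close_binary(img):
    dilated = _filter(np.asarray(img), np.max, pad_value=0)  # dilation
    return _filter(dilated, np.min, pad_value=1)             # erosion

# A one-pixel hole inside a solid block is filled by the closing.
block = np.ones((5, 5), dtype=int)
block[2, 2] = 0
print(close_binary(block))
```

Closing removes small holes and gaps in the detected cloud/snow/fog masks without eroding the solid regions, which is why the method applies it before the region comparison of step 7.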
CN201710834224.6A 2017-09-15 2017-09-15 optical satellite remote sensing image cloud and snow fog detection method based on support vector machine Active CN107610114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710834224.6A CN107610114B (en) 2017-09-15 2017-09-15 optical satellite remote sensing image cloud and snow fog detection method based on support vector machine

Publications (2)

Publication Number Publication Date
CN107610114A (en) 2018-01-19
CN107610114B (en) 2019-12-10

Family

ID=61060362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710834224.6A Active CN107610114B (en) 2017-09-15 2017-09-15 optical satellite remote sensing image cloud and snow fog detection method based on support vector machine

Country Status (1)

Country Link
CN (1) CN107610114B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093243A (en) * 2013-01-24 2013-05-08 哈尔滨工业大学 High resolution panchromatic remote sensing image cloud discriminating method
CN104077592A (en) * 2013-03-27 2014-10-01 上海市城市建设设计研究总院 Automatic extraction method for high-resolution remote-sensing image navigation mark
CN104484670A (en) * 2014-10-24 2015-04-01 西安电子科技大学 Remote sensing image cloud detection method based on pseudo color and support vector machine
CN104680151A (en) * 2015-03-12 2015-06-03 武汉大学 High-resolution panchromatic remote-sensing image change detection method considering snow covering effect
CN104966295A (en) * 2015-06-16 2015-10-07 武汉大学 Ship extraction method based on wire frame model
CN105260729A (en) * 2015-11-20 2016-01-20 武汉大学 Satellite remote sensing image cloud amount calculation method on the basis of random forest
CN105426903A (en) * 2015-10-27 2016-03-23 航天恒星科技有限公司 Cloud determination method and system for remote sensing satellite images
WO2017099951A1 (en) * 2015-12-07 2017-06-15 The Climate Corporation Cloud detection on remote sensing imagery

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232302A (en) * 2018-03-06 2019-09-13 香港理工大学深圳研究院 A kind of change detecting method of integrated gray value, spatial information and classification knowledge
CN108629297A (en) * 2018-04-19 2018-10-09 北京理工大学 A kind of remote sensing images cloud detection method of optic based on spatial domain natural scene statistics
CN109740639A (en) * 2018-12-15 2019-05-10 中国科学院深圳先进技术研究院 Fengyun satellite remote-sensing image cloud detection method, system and electronic equipment
CN109740639B (en) * 2018-12-15 2021-02-19 中国科学院深圳先进技术研究院 Fengyun satellite remote sensing image cloud detection method and system and electronic equipment
CN109934291A (en) * 2019-03-13 2019-06-25 北京林业大学 Construction method, forest land tree species classification method and the system of forest land tree species classifier
CN110175638A (en) * 2019-05-13 2019-08-27 北京中科锐景科技有限公司 A kind of fugitive dust source monitoring method
CN110175638B (en) * 2019-05-13 2021-04-30 北京中科锐景科技有限公司 Raise dust source monitoring method
CN110705619A (en) * 2019-09-25 2020-01-17 南方电网科学研究院有限责任公司 Fog concentration grade judging method and device
CN110599488A (en) * 2019-09-27 2019-12-20 广西师范大学 Cloud detection method based on Sentinel-2 aerosol wave band
CN110599488B (en) * 2019-09-27 2022-04-29 广西师范大学 Cloud detection method based on Sentinel-2 aerosol wave band
CN111047570A (en) * 2019-12-10 2020-04-21 西安中科星图空间数据技术有限公司 Automatic cloud detection method based on texture analysis method
CN110930399A (en) * 2019-12-10 2020-03-27 南京医科大学 TKA preoperative clinical staging intelligent evaluation method based on support vector machine
CN111047570B (en) * 2019-12-10 2023-06-27 中科星图空间技术有限公司 Automatic cloud detection method based on texture analysis method
CN111291818A (en) * 2020-02-18 2020-06-16 浙江工业大学 Non-uniform class sample equalization method for cloud mask
CN111429435A (en) * 2020-03-27 2020-07-17 王程 Rapid and accurate cloud content detection method for remote sensing digital image
CN111709458B (en) * 2020-05-25 2021-04-13 中国自然资源航空物探遥感中心 Automatic quality inspection method for Gaofen-5 (GF-5) images
CN111709458A (en) * 2020-05-25 2020-09-25 中国自然资源航空物探遥感中心 Automatic quality inspection method for Gaofen-5 (GF-5) images
CN112668613A (en) * 2020-12-07 2021-04-16 中国西安卫星测控中心 Satellite infrared imaging effect prediction method based on weather forecast and machine learning
CN113191179A (en) * 2020-12-21 2021-07-30 广州蓝图地理信息技术有限公司 Remote sensing image classification method based on gray level co-occurrence matrix and BP neural network
CN112668441A (en) * 2020-12-24 2021-04-16 中国电子科技集团公司第二十八研究所 Satellite remote sensing image airplane target identification method combined with priori knowledge
CN112668441B (en) * 2020-12-24 2022-09-23 中国电子科技集团公司第二十八研究所 Satellite remote sensing image airplane target identification method combined with priori knowledge
CN113420717A (en) * 2021-07-16 2021-09-21 西藏民族大学 Three-dimensional monitoring method, device and equipment for ice and snow changes and readable storage medium

Also Published As

Publication number Publication date
CN107610114B (en) 2019-12-10

Similar Documents

Publication Publication Date Title
CN107610114B (en) optical satellite remote sensing image cloud and snow fog detection method based on support vector machine
CN102426649B (en) Simple steel seal digital automatic identification method with high accuracy rate
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
CN108319973B (en) Detection method for citrus fruits on tree
CN107316031A (en) The image characteristic extracting method recognized again for pedestrian
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN104794502A (en) Image processing and mode recognition technology-based rice blast spore microscopic image recognition method
CN109086687A (en) The traffic sign recognition method of HOG-MBLBP fusion feature based on PCA dimensionality reduction
CN104680127A (en) Gesture identification method and gesture identification system
CN102214298A (en) Method for detecting and identifying airport target by using remote sensing image based on selective visual attention mechanism
CN106557740B (en) The recognition methods of oil depot target in a kind of remote sensing images
CN109670515A (en) Method and system for detecting building change in unmanned aerial vehicle image
CN102682305A (en) Automatic screening system and automatic screening method using thin-prep cytology test
CN104408482A (en) Detecting method for high-resolution SAR (Synthetic Aperture Radar) image object
CN109948625A (en) Definition of text images appraisal procedure and system, computer readable storage medium
Yang et al. Real-time traffic sign detection via color probability model and integral channel features
CN106529461A (en) Vehicle model identifying algorithm based on integral characteristic channel and SVM training device
Wu et al. Strong shadow removal via patch-based shadow edge detection
CN110175556B (en) Remote sensing image cloud detection method based on Sobel operator
CN113221881B (en) Multi-level smart phone screen defect detection method
Cheng et al. Image segmentation technology and its application in digital image processing
CN111259756A (en) Pedestrian re-identification method based on local high-frequency features and mixed metric learning
CN108073940B (en) Method for detecting 3D target example object in unstructured environment
CN105787475A (en) Traffic sign detection and identification method under complex environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210721

Address after: 517000 floors 1-4, plant incubator (Shenhe Jindi Chuang Valley), building e2-1, east of Xingye Avenue and north of Gaoxin fifth road, Heyuan high tech Development Zone, Guangdong Province

Patentee after: Jingtong space technology (Heyuan) Co.,Ltd.

Address before: 430072 Hubei Province, Wuhan city Wuchang District of Wuhan University Luojiashan

Patentee before: WUHAN University

TR01 Transfer of patent right

Effective date of registration: 20240319

Address after: Room 501, Building 17, Plot 2, Phase II, the Pearl River River Huacheng, No. 99, Fuyuan West Road, Liuyanghe Street, Kaifu District, Changsha, Hunan 410000

Patentee after: Hunan Hejing Cultural Media Co.,Ltd.

Country or region after: China

Address before: 517000 floors 1-4, plant incubator (Shenhe Jindi Chuang Valley), building e2-1, east of Xingye Avenue and north of Gaoxin fifth road, Heyuan high tech Development Zone, Guangdong Province

Patentee before: Jingtong space technology (Heyuan) Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right