CN107610114B - optical satellite remote sensing image cloud and snow fog detection method based on support vector machine - Google Patents


Info

Publication number
CN107610114B
CN107610114B (application CN201710834224.6A)
Authority
CN
China
Prior art keywords
image
cloud
fog
snow
remote sensing
Prior art date
Legal status
Active
Application number
CN201710834224.6A
Other languages
Chinese (zh)
Other versions
CN107610114A (en)
Inventor
易尧华
袁媛
余长慧
刘炯杰
Current Assignee
Hunan Hejing Cultural Media Co ltd
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University (WHU)
Priority to CN201710834224.6A
Publication of CN107610114A
Application granted
Publication of CN107610114B
Status: Active


Abstract

The invention discloses a satellite remote sensing image cloud, snow and fog detection method based on a support vector machine, comprising the following steps. First, a large number of sample images of different types of ground objects, clouds, snow and fog are collected as a training set, and the gray-level and texture features of the images are extracted to form feature sets; the feature sets of all samples are then learned with a support vector machine to obtain cloud, snow and fog image classifiers. Second, the obtained classifiers determine the category of the image to be detected; a morphological closing operation and overlap-region correction are applied, and the type of each target area in the remote sensing image is judged. Finally, training samples are reselected to obtain new image classifiers, a secondary detection is performed on the satellite remote sensing image to be detected, the secondary detection is compared with the primary detection, and the final cloud, snow and fog judgment for the remote sensing image to be detected is determined. Experimental results show that the method achieves high detection accuracy.

Description

Optical satellite remote sensing image cloud, snow and fog detection method based on support vector machine
Technical Field
The invention belongs to the field of satellite remote sensing image quality detection, and particularly relates to a satellite remote sensing image cloud, snow and fog detection method based on a support vector machine.
Background
In optical satellite remote sensing images, remote sensing information is often degraded by cloud, fog and snow: cloud and snow cover obscures the surface information of the imaged area, while fog, haze and the like mask much of the characteristic information in the image. It is therefore necessary to detect the cloud, snow and fog areas in remote sensing images and to reject image data whose coverage by invalid information is too large, so as to improve the utilization rate of optical satellite remote sensing imagery.
Current remote sensing image cloud, snow and fog detection methods mainly focus on detecting cloud or fog alone, or on distinguishing cloud from fog and cloud from snow, chiefly by threshold methods and feature extraction methods. Cloud detection methods either set spectral thresholds on the reflectance in different bands to decide whether a pixel is cloud, or extract image features and classify accordingly. Fog detection methods mainly study typical cases, extracting features from remote sensing data to monitor fog. Cloud and snow detection methods exploit the fact that cloud and snow have similar characteristics in the visible bands but differ strongly in the short-wave infrared, identifying snow by constructing a cloud-snow contrast-enhancement factor, or computing a fractal dimension of the texture features of a panchromatic image. Combined cloud, snow and fog detection is usually a superposition of the above methods.
A review of the existing literature shows the following problems. First, existing methods have difficulty detecting cloud, snow and fog simultaneously: each detection method is tied to its target type, and a single method can hardly adapt to diverse detection requirements; threshold methods have low reliability, since their results depend on spatio-temporal conditions, making them hard to generalize to broader detection tasks; and the image feature information selected by feature extraction methods is insufficient, so detection accuracy is not high enough. Second, existing cloud, snow and fog detection methods are inefficient: their algorithmic complexity is high, large data volumes are difficult to detect and identify rapidly, certain requirements are imposed on the remote sensing data source, and universality is poor.
Disclosure of Invention
The invention aims to enhance the timeliness of remote sensing image quality inspection and improve the utilization rate of remote sensing images, so that the method can be applied in quality inspection systems for domestic satellite image products such as Ziyuan-1 (ZY-1), Ziyuan-3 (ZY-3), Tianhui-1 (TH-1) and Gaofen-1 (GF-1).
In order to achieve this purpose, the invention provides a satellite remote sensing image cloud, snow and fog detection method based on a support vector machine; the technical scheme comprises the following steps:
Step 1, collecting a large amount of cloud, snow, fog and ground object sample image data;
Step 2, extracting gray features and texture features of various sample images to form feature vectors;
Step 3, training the feature vectors of the sample images by using a support vector machine to respectively obtain a cloud image classifier, a snow image classifier and a fog image classifier which are formed by decision functions;
Step 4, performing down-sampling processing on an original image of the satellite remote sensing image to be detected to obtain a thumbnail, performing image segmentation on the thumbnail to obtain sub-images, and calculating a feature vector consisting of gray features and texture features for every sub-image;
Step 5, classifying the sub-images of the remote sensing image of the satellite to be detected, comprising the following sub-steps,
Step 5.1, respectively inputting the feature vectors extracted in the step 4 into the cloud, snow and fog image classifiers obtained in the step 3 for prediction classification;
Step 5.2, dividing all the sub-images into a cloud area, a fog area, a snow area and a ground object area according to the types of the target areas;
Step 5.3, dividing the images into three binary images according to the cloud and ground object areas, the fog and ground object areas and the snow and ground object areas, wherein the ground object areas in each image take the same zero value, and the cloud, snow and fog areas take different image values;
Step 6, performing morphological 'closing' operation on the classification result obtained in the step 5;
Step 7, comparing the three binary image values at the same position to obtain the detection results of cloud, snow and fog in the satellite remote sensing image to be detected.
Further, the implementation manner of step 7 is as follows,
Step 7.1, comparing the three binary image values at the same position: if the values of the three images at the position are the same, the position is judged to be a ground object area; if two of the values are the same, the position is judged to be the category represented by the third image value; if all three values differ, the position is judged to be an overlap area of cloud, snow and fog, wherein points containing a zero value are recorded as overlap areas and points containing no zero value are recorded as triple overlap areas;
Step 7.2, repeating step 7.1 to compare all values of the three binary images, obtaining discrimination results for the cloud, snow and fog areas, the ground object areas and the overlap areas, and then correcting the overlap areas. First, whether an overlap area is contained within another area is judged; if so, the overlap area is replaced by that area. Second, the category of each overlap area is judged: if the overlap area adjoins a determined category area, it is judged to be the overlap category excluding that adjoining category; otherwise it is confirmed after the other overlap areas have been judged. For a triple overlap area, if it adjoins a determined category area it is judged to be the overlap area excluding that adjoining category, and if it adjoins an overlap area its category is judged to be the category in which it differs from the adjoining overlap area. Finally, overlap areas adjoining only different overlap areas are judged: each is assigned the categories excluding their common category, yielding the final judgment result.
Step 7.3, performing a morphological "closing" operation on the judgment result to obtain the detection results of cloud, snow and fog in the satellite remote sensing image to be detected.
Further, the method comprises a step 8: reselecting a suitable number of cloud and ground object samples, fog and ground object samples, and snow and ground object samples as training samples, repeating steps 2-7 to perform a secondary detection on the satellite remote sensing image to be detected, and comparing the secondary detection result with the primary detection result; if the two detection results at a position are the same, the category of the position is that common result, and if they differ, the category of the position is ground object, thereby finally obtaining the detection result.
Further, the step 2 is realized as follows,
Step 2.1, calculating gray level characteristics of the sample image, including a gray level mean value, a gray level variance, a first order difference and a histogram information entropy of the sample image;
Wherein the calculation formula of the gray level mean value is E = (1/S) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i, j),
wherein f(i, j) is the gray value at (i, j), S = M × N, M is the width of the sample image, and N is the height of the sample image;
The gray-scale variance is calculated by the formula σ² = (1/S) Σ_{i,j} (f(i, j) − E)²;
The first order difference is the mean absolute gray-level difference between adjacent pixels;
The calculation formula of the histogram information entropy is H_e = −Σ_{i=0}^{M} h[g](i) log₂ h[g](i),
wherein h[g] is the histogram of the sample image, h[g](i) is the percentage of pixels at gray level i in the whole sample image, and M is the maximum gray level;
Step 2.2, calculating the texture features of the sample image, including the gradient standard deviation, the mixed entropy, the inverse difference moment and the texture fractal dimension of the sample image;
Wherein the gradient statistics are derived from the co-occurrence matrix
G(i, j; d, θ) = #{((x1, y1), (x2, y2)) | f(x1, y1) = i, f(x2, y2) = j, |(x1, y1) − (x2, y2)| = d, ∠((x1, y1), (x2, y2)) = θ}, where d represents the distance between two pixels, θ represents the direction angle between the pixels, f(x1, y1) and f(x2, y2) represent the gray values at (x1, y1) and (x2, y2) respectively, ∠ represents the angle with respect to the horizontal position, # represents the number of pixel pairs satisfying the conditions, Σ# = Lx × Ly represents the total number of pixel pairs in the given positional relationship, and Lg represents the maximum gray level; the gradient standard deviation is the standard deviation of the gradient distribution obtained from the normalized matrix H(i, j; d, θ) = G(i, j; d, θ)/Σ#;
The formula for calculating the mixed entropy is H_m = −Σ_i Σ_j H(i, j) log₂ H(i, j);
The calculation formula of the inverse difference moment is Σ_i Σ_j H(i, j)/(1 + (i − j)²);
The fractal Brownian random field method is used for solving the texture fractal dimension of the sample image, and the expression of the fractal dimension D of the image is,
D=n+1-H
wherein n refers to the spatial dimension of the sample image, and H is a self-similarity parameter;
Step 2.3, forming an 8-dimensional feature vector from the gray features and the texture features.
Further, the implementation manner of the step 3 is as follows,
Step 3.1, selecting a part of the cloud samples and ground object samples as training samples, and using their feature vectors as a training set T = {(x1, y1), …, (xN, yN)}, i = 1…N, yi ∈ ψ = {−1, +1}, wherein +1 represents the positive class (the cloud region class), −1 represents the negative class (the ground object region class), xi ∈ R^n is a feature vector, and N is the number of samples;
Step 3.2, constructing a classification hyperplane by adopting a support vector machine of the C-SVC model, and calculating the Gaussian kernel function K(xi, xj) = exp(−‖xi − xj‖²/(2σ²)),
wherein xi and xj are the feature vectors of samples i and j respectively, ‖xi − xj‖² is the squared Euclidean distance, σ is the variance, i = 1…N, j = 1…N, and N is the number of samples;
Step 3.3, solving the Lagrange multiplier vector of the optimal classification hyperplane of the feature space by convex quadratic programming, min_α (1/2) Σ_i Σ_j αi αj yi yj K(xi, xj) − Σ_i αi, subject to Σ_i αi yi = 0,
wherein 0 ≤ αi ≤ C, i = 1, 2 … N; α is the Lagrange multiplier vector, αi and αj respectively represent the ith and jth Lagrange multipliers, xi and xj respectively represent the feature vectors of the ith and jth samples, yi and yj are the categories of the ith and jth samples, C is the penalty parameter, and N is the number of samples; the optimal solution of the Lagrange multiplier vector is obtained as:
α* = (α1*, α2*, …, αi*, …, αN*)ᵀ
where αi* represents the optimal solution for the ith Lagrange multiplier;
Step 3.4, solving the intercept of the optimal classification hyperplane of the feature space, with the calculation formula b* = yj − Σ_{i=1}^{N} αi* yi K(xi, xj), taken for a sample j with 0 < αj* < C,
wherein αi* is the optimal solution of the ith Lagrange multiplier, yi is the category of the ith sample, and N is the number of samples;
Step 3.5, substituting the obtained Gaussian kernel function, the Lagrange optimal solution and the hyperplane intercept into the decision function f(x) = sign(Σ_{i=1}^{N} αi* yi K(xi, x) + b*);
Step 3.6, taking the remaining cloud samples and ground object samples as test samples, testing and optimizing the decision function, and obtaining the corresponding cloud image classifier;
Step 3.7, repeating steps 3.1-3.6 to obtain a snow image classifier and a fog image classifier respectively.
Further, in step 4, if the remote sensing image to be detected is a panchromatic image, down-sampling is applied directly; if it is a multispectral image, down-sampling is performed using the RGB three bands.
Further, the step 6 is implemented by selecting a structuring element of square shape and 3 × 3 size, performing a dilation operation on the three binary images, and then performing an erosion operation on the resulting images with the same structuring element.
Compared with the prior art, the invention has the following advantages. The method supports multiple detections after a single training: the image classifiers are obtained by training on a large number of images and only need to be reused at detection time; the time complexity of the support vector machine algorithm in the prediction stage is low, so region types can be detected quickly. Tests show that the method is applicable to panchromatic images and n-channel multispectral images; cloud detection performed with the method on several domestic satellite remote sensing images, including Ziyuan-1 02, Ziyuan-3, Tianhui-1 and Gaofen-1, reached accuracies of 94.8%, 96.4%, 93.2% and 95.2% respectively.
Drawings
Fig. 1 is a flow chart of an implementation of the embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and practice of the present invention by those of ordinary skill in the art, the present invention will be described in further detail below with reference to the accompanying drawings and examples, which are provided for illustration and explanation and are not intended to limit the scope of the present invention.
Referring to fig. 1, the invention takes the panchromatic image data of the Ziyuan-1 02C and Ziyuan-3 satellites and the multispectral remote sensing image data of Tianhui-1 as an example; the implementation steps are as follows:
step 1, sample acquisition
The original sample image is down-sampled into a 1024 × 1024 pixel 8-bit bmp thumbnail, and the thumbnail is segmented. If the remote sensing image is panchromatic, down-sampling is applied directly; if it is multispectral, down-sampling is performed using the RGB three bands. The panchromatic image is divided into 32 × 32 sample blocks and the multispectral image into 16 × 16 image blocks, and 1500 ground object samples, 1000 cloud samples, 1000 snow samples and 1000 fog samples are selected as training sample data.
Step 2, feature extraction. Generally, the average brightness of a cloud area in panchromatic and multispectral images is much higher than that of a fog area, and the brightness of a fog area is in turn greater than that of a ground object area; cloud and snow areas have similar spectral characteristics. Meanwhile, cloud, snow, fog and ground objects differ markedly in gray-level distribution, gray-level variation and the like, so the gray features of the image can distinguish target area images from ground object images to a certain extent.
The gray feature and texture feature values of the sample image are extracted to form an 8-dimensional feature vector; the specific implementation steps are as follows:
Step 2.1, calculating the gray level characteristics of the sample image
Step 2.1.1, solving the gray mean value, calculated by the formula E = (1/S) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i, j),
where f(i, j) is the gray value at (i, j), S = M × N, M is the width of the sample image, and N is the height of the sample image.
Step 2.1.2, calculating the gray variance of the sample image: σ² = (1/S) Σ_{i,j} (f(i, j) − E)².
Step 2.1.3, calculating the first order difference of the sample image, the mean absolute gray-level difference between adjacent pixels.
Step 2.1.4, calculating the histogram information entropy of the sample image: H_e = −Σ_{i=0}^{M} h[g](i) log₂ h[g](i),
where h[g] is the histogram of the sample image, h[g](i) is the percentage of pixels at gray level i in the entire image, and M is the maximum gray level.
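The four gray features of step 2.1 can be sketched in Python as follows; `gray_features` is an illustrative name, and the first-order difference is taken here as the mean absolute difference of horizontal and vertical neighbours, which is one common reading of the formula rather than the patent's exact definition:

```python
import numpy as np

def gray_features(img, levels=256):
    """Compute the four gray-level features of Step 2.1:
    mean, variance, first-order difference, histogram entropy.
    `img` is a 2-D uint8 array (one sub-image block)."""
    f = img.astype(np.float64)
    mean = f.mean()                      # Step 2.1.1: gray mean
    var = f.var()                        # Step 2.1.2: gray variance
    # Step 2.1.3: first-order difference, averaged over horizontal
    # and vertical neighbours (an assumption of this sketch)
    diff = 0.5 * (np.abs(np.diff(f, axis=0)).mean()
                  + np.abs(np.diff(f, axis=1)).mean())
    # Step 2.1.4: histogram information entropy
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins (0·log 0 = 0)
    entropy = -np.sum(p * np.log2(p))
    return mean, var, diff, entropy
```

For a perfectly uniform block all four features are zero except the mean, which matches the intuition that cloud, snow and fog blocks are smoother than ground objects.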
Step 2.2, calculating the texture features of the sample image. From the perspective of human visual characteristics, the texture of cloud, snow and fog in satellite remote sensing images is often single and simpler than that of ground objects; in addition, the edges of cloud, snow and fog areas are blurred and smooth, while ground object edges are generally sharp with large gradients. Therefore, the texture and gradient information of the satellite remote sensing image can be used to separate cloud, snow and fog areas from ground object information. Moreover, the texture features of cloud, snow and fog images differ clearly from one another. Cloud texture is random, variable and hard to detect, appearing disordered and irregular, with thick, blurred edge texture; fog texture is relatively uniform and smooth, with regular edge forms; snow is influenced by the underlying terrain texture, so it shows better directionality and large gradient change. The different texture characteristics of cloud, snow and fog are distinguished through the combined information of image gray level and image gradient.
Step 2.2.1, calculating the co-occurrence matrix G(i, j; d, θ) of the sample image, using the following formula:
G(i, j; d, θ) = #{((x1, y1), (x2, y2)) | f(x1, y1) = i, f(x2, y2) = j, |(x1, y1) − (x2, y2)| = d, ∠((x1, y1), (x2, y2)) = θ}, where d denotes the distance between two pixels, θ denotes the direction angle between pixels, (x, y) denotes the coordinates of a pixel point, f(x, y) denotes the gray value at that point, ∠ denotes the angle with respect to the horizontal position, and # denotes the number of pixel pairs in the set satisfying the constraints. For example, if two pixel values are 1 and 2 respectively, the distance between them is 1 and θ is 0° (the horizontal direction), then the number of pixel pairs satisfying these conditions is counted over the entire image.
Step 2.2.2, normalizing the co-occurrence matrix G(i, j; d, θ) into H(i, j; d, θ), with the calculation formula H(i, j; d, θ) = G(i, j; d, θ) / Σ#,
where Σ# = Lx × Ly represents the total number of pixel pairs in the particular positional relationship (i.e., with the same distance d and included angle θ).
Step 2.2.3, calculating the gradient standard deviation of the sample image, first calculating the gradient mean T = Σ_{i} Σ_{j} j · H(i, j),
where Lg denotes the maximum gray level and L denotes the maximum gradient value.
Substituting the gradient mean T into the formula sqrt(Σ_{i} Σ_{j} (j − T)² H(i, j)) yields the gradient standard deviation.
Step 2.2.4, calculating the mixed entropy of the sample image, with the calculation formula H_m = −Σ_i Σ_j H(i, j) log₂ H(i, j).
Step 2.2.5, extracting the local stationarity characteristics of the sample image by calculating the inverse difference moment, with the calculation formula Σ_i Σ_j H(i, j)/(1 + (i − j)²).
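A minimal sketch of steps 2.2.1-2.2.5 follows. `cooccurrence_features` is an illustrative name; the quantization to 16 levels, the choice θ = 0° with d = 1, and the use of `np.gradient` for the gradient statistic are assumptions of this sketch, not the patent's exact parameters:

```python
import numpy as np

def cooccurrence_features(img, levels=16, d=1):
    """Build the co-occurrence matrix G(i, j; d, θ) for θ = 0°
    (horizontal pixel pairs at distance d), normalize it to H, and
    derive the mixed entropy, inverse difference moment and a
    gradient standard deviation."""
    f = img.astype(np.float64)
    q = np.minimum((f / 256.0 * levels).astype(int), levels - 1)
    left, right = q[:, :-d].ravel(), q[:, d:].ravel()
    G = np.zeros((levels, levels))
    for i, j in zip(left, right):          # Step 2.2.1: count pixel pairs
        G[i, j] += 1
    H = G / G.sum()                        # Step 2.2.2: normalization
    nz = H[H > 0]
    mixed_entropy = -np.sum(nz * np.log2(nz))      # Step 2.2.4
    ii, jj = np.indices(H.shape)
    inv_diff = np.sum(H / (1.0 + (ii - jj) ** 2))  # Step 2.2.5
    gy, gx = np.gradient(f)                # Step 2.2.3: gradient std
    grad_std = np.sqrt(gx ** 2 + gy ** 2).std()
    return grad_std, mixed_entropy, inv_diff
```

On a uniform block the matrix collapses to a single cell, so the mixed entropy and gradient standard deviation vanish and the inverse difference moment reaches its maximum of 1, consistent with the smooth texture the text attributes to fog areas.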
Step 2.2.6, solving the texture fractal dimension of the sample image by the fractional Brownian random field method;
solving for the constant H (0 < H < 1) such that F(t) = Pr[(f(x + Δx) − f(x)) / ‖Δx‖^H < t] satisfies:
F(t) is a distribution function independent of x and Δx, H is the self-similarity parameter, f(x) is a real random function of x, and n is the spatial dimension of the sample image; the fractal dimension D of the image is expressed as:
D=n+1-H
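The fractional-Brownian estimate of step 2.2.6 can be sketched by fitting the scaling law E|f(x + Δx) − f(x)| ∝ |Δx|^H on horizontal shifts; `fractal_dimension` is an illustrative name and this discrete estimator is a common approximation, not the patent's exact procedure:

```python
import numpy as np

def fractal_dimension(img, max_scale=8):
    """Estimate the texture fractal dimension D = n + 1 - H of
    Step 2.2.6 for a 2-D image (n = 2)."""
    f = img.astype(np.float64)
    scales = np.arange(1, max_scale + 1)
    # Mean absolute increment at each horizontal shift (epsilon
    # guards against log(0) on perfectly flat images)
    incr = [np.abs(f[:, s:] - f[:, :-s]).mean() + 1e-12 for s in scales]
    # Slope of log-increment vs log-scale is the Hurst exponent H
    H_hat, _ = np.polyfit(np.log(scales), np.log(incr), 1)
    H_hat = min(max(H_hat, 0.0), 1.0)   # clamp to [0, 1] as 0 < H < 1
    return 2 + 1 - H_hat                # D = n + 1 - H with n = 2
```

A smooth linear ramp scales with H = 1 and therefore yields D = 2, the dimension of a non-fractal surface; rougher ground-object textures give smaller H and larger D.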
Step 3, training the image classifiers
Training a feature vector of a sample image by using a method of a support vector machine to respectively obtain a cloud image classifier, a snow image classifier and a fog image classifier which are formed by decision functions, wherein the specific implementation comprises the following substeps:
Step 3.1, using 80% of the cloud samples and ground object samples as training samples, and using their feature vectors as the training set T = {(x1, y1), …, (xN, yN)} of the image classifier, i = 1…N, yi ∈ ψ = {−1, +1}, wherein +1 represents the positive class (the cloud region class), −1 represents the negative class (the ground object region class), xi ∈ R^n is a feature vector, and N is the number of samples.
Step 3.2, constructing a classification hyperplane by adopting a support vector machine of the C-SVC model, and calculating the Gaussian kernel function K(xi, xj) = exp(−‖xi − xj‖²/(2σ²)),
where xi and xj are the feature vectors of samples i and j respectively, ‖xi − xj‖² is the squared Euclidean distance, and σ is the variance, whose value is usually chosen appropriately according to the experimental results.
Step 3.3, solving the Lagrange multiplier vector of the optimal classification hyperplane of the feature space by convex quadratic programming: min_α (1/2) Σ_i Σ_j αi αj yi yj K(xi, xj) − Σ_i αi, subject to Σ_i αi yi = 0,
where 0 ≤ αi ≤ C, i = 1, 2 … N; α is the Lagrange multiplier vector, αi and αj are the ith and jth Lagrange multipliers, xi and xj respectively represent the feature vectors of the ith and jth samples, yi and yj are the categories of the ith and jth samples, C is the penalty parameter, and N is the number of samples; the optimal solution of the Lagrange multiplier vector is:
α* = (α1*, α2*, …, αi*, …, αN*)ᵀ
where αi* represents the optimal solution for the ith Lagrange multiplier.
Step 3.4, solving the intercept of the optimal classification hyperplane of the feature space, with the calculation formula b* = yj − Σ_{i=1}^{N} αi* yi K(xi, xj), taken for a sample j with 0 < αj* < C,
where αi* is the optimal solution of the ith Lagrange multiplier, yi is the class (positive or negative) of the ith sample, and N is the number of samples.
Step 3.5, substituting the obtained Gaussian kernel function, the Lagrange optimal solution and the hyperplane intercept into the decision function f(x) = sign(Σ_{i=1}^{N} αi* yi K(xi, x) + b*).
Step 3.6, taking the remaining 20% of the cloud samples and ground object samples as the test sample set, testing and optimizing the decision function, and obtaining the corresponding cloud image classifier.
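The Gaussian kernel of step 3.2 and the decision function of step 3.5 can be sketched as follows. `gaussian_kernel` and `decision` are illustrative names, and the multipliers and intercept are supplied directly here rather than solved by quadratic programming as in steps 3.3-3.4:

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma=1.0):
    """Gaussian (RBF) kernel of Step 3.2: exp(-||xi - xj||^2 / (2σ^2))."""
    d2 = np.sum((np.asarray(xi, float) - np.asarray(xj, float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decision(x, support_x, support_y, alpha, b, sigma=1.0):
    """Step 3.5 decision function: sign(Σ αi* yi K(xi, x) + b*).
    `alpha` and `b` stand in for a solved dual problem."""
    s = sum(a * y * gaussian_kernel(xs, x, sigma)
            for a, y, xs in zip(alpha, support_y, support_x))
    return 1 if s + b >= 0 else -1
```

With one positive and one negative support vector, the decision flips sign between the two points, which is the binary cloud-vs-ground-object split the classifier performs on each sub-image.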
Step 3.7, repeating steps 3.1-3.6 to obtain a snow image classifier and a fog image classifier respectively.
Step 4, extracting the characteristics of the image to be detected
The original image to be detected is down-sampled into a 1024 × 1024 pixel 8-bit bmp thumbnail (directly if the remote sensing image is panchromatic, or using the RGB three bands if it is multispectral). The thumbnail is then segmented into 1024 sub-images of 32 × 32 pixels, and the feature vectors of all sub-images, comprising the gray feature vector and the texture feature vector, are extracted as in step 2.
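The tiling of the thumbnail into fixed-size sub-images can be sketched as follows; `split_blocks` is an illustrative name, and dropping any ragged border is an assumption (a 1024 × 1024 thumbnail divides evenly into 32 × 32 blocks, so nothing is dropped in the patent's case):

```python
import numpy as np

def split_blocks(thumb, block=32):
    """Split a down-sampled thumbnail into square sub-images, as in
    Step 4 (32×32 blocks for a 1024×1024 panchromatic thumbnail).
    Returns an array of shape (n_blocks, block, block)."""
    h, w = thumb.shape
    h, w = h - h % block, w - w % block        # drop any ragged border
    t = thumb[:h, :w]
    return (t.reshape(h // block, block, w // block, block)
             .swapaxes(1, 2)
             .reshape(-1, block, block))
```

A 1024 × 1024 thumbnail yields exactly the 1024 sub-images of 32 × 32 pixels mentioned in the text, each of which is then fed to the feature extraction of step 2.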
Step 5, classifying the images to be detected
Step 5.1, inputting the feature vectors extracted in step 4 into the corresponding cloud, snow and fog image classifiers obtained in step 3 for prediction classification; the feature vectors are classified by the decision functions.
Step 5.2, repeating step 5.1 until all sub-images are classified, and dividing all sub-images into cloud areas, fog areas, snow areas and non-cloud-snow-fog areas (i.e. ground object areas) according to the target area types.
Step 5.3, forming three binary images from the cloud and ground object areas, the fog and ground object areas, and the snow and ground object areas, wherein the ground object areas share the same zero value across the images and the cloud, snow and fog areas take different image values.
Step 6, morphological close operation
A structuring element of square shape and 3 × 3 size is selected; a dilation operation is performed on each of the three binary images, and an erosion operation is then performed on the results with the same structuring element, so that cloud, snow and fog areas are connected into wholes and noise areas at the edges are eliminated.
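The closing of step 6 (dilation followed by erosion with a 3 × 3 square) can be sketched directly in numpy; `close3x3` is an illustrative name, and the zero padding at the borders is an assumption of this sketch (a library such as scipy.ndimage's `binary_closing` would serve the same purpose):

```python
import numpy as np

def close3x3(mask):
    """Morphological 'close' of Step 6 with a 3×3 square structuring
    element: dilation followed by erosion."""
    def dilate(m):
        p = np.pad(m, 1)
        # A pixel is set if any pixel in its 3×3 neighbourhood is set
        return np.max([p[i:i + m.shape[0], j:j + m.shape[1]]
                       for i in range(3) for j in range(3)], axis=0)
    def erode(m):
        p = np.pad(m, 1)
        # A pixel is kept only if its whole 3×3 neighbourhood is set
        return np.min([p[i:i + m.shape[0], j:j + m.shape[1]]
                       for i in range(3) for j in range(3)], axis=0)
    return erode(dilate(mask))
```

Closing fills one-pixel gaps inside a detected region, which is how fragmented cloud, snow or fog blocks are connected into a whole.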
Step 7, correction of the overlap region
Step 7.1, comparing the three binary image values at the same position: if the values of the three images at the position are the same, the position is judged to be a ground object area; if two of the values are the same, the position belongs to the category represented by the third image value; if all three values differ, the sub-image contains an overlap area of cloud, snow and fog, wherein points containing a zero value are recorded as overlap areas and points containing no zero value as triple overlap areas.
Step 7.2, repeating step 7.1 to compare all values of the three binary images, obtaining discrimination results for the cloud, snow and fog areas, the ground object areas and the overlap areas, and then correcting the overlap areas. First, whether an overlap area is contained within another area (an overlap area or a determined category area) is judged; if so, the overlap area is replaced by that area. Second, the category of each overlap area is judged: if the overlap area adjoins a determined category area, it is judged to be the overlap category excluding that adjoining category; otherwise it is confirmed after the other overlap areas have been judged. For a triple overlap area, if it adjoins a determined category area it is judged to be the overlap area excluding that category, and if it adjoins an overlap area its category is judged to be the category in which it differs from that adjoining overlap area. Finally, overlap areas adjoining only different overlap areas are judged, each being assigned the categories excluding their common category, so that the final judgment result is obtained.
For example, if the periphery of a cloud-snow overlap area is a determined fog area, the cloud-snow area is judged to be a fog area; if the cloud-snow area is surrounded by a cloud-fog overlap area, it is judged to be a cloud-fog area; if the cloud-snow area adjoins a determined cloud area, the area is judged to be a snow area; if a cloud-snow-fog (triple) overlap area adjoins a determined cloud area, the area is judged to be a snow-fog overlap area; if the cloud-snow-fog area adjoins a cloud-fog overlap area, the area is judged to be a determined snow area; and if a cloud-snow area and a cloud-fog area adjoin each other, they are judged to be a snow area and a fog area respectively.
Step 7.3, performing the morphological "closing" operation of step 6 on the judgment result to obtain the final cloud, snow and fog detection results.
Step 8, secondary detection
and newly selecting 500 cloud and ground object samples, 500 fog and ground object samples and 500 snow and ground object samples to train new support vector machine classifiers, where highlighted (high-reflectance) samples are chosen from the ground objects. A 'secondary detection' is then performed on the image to be detected, and its result is compared with the result of the first detection: if the two detection results at the same position are the same, the category of that position is the category given by either result; if they differ, the position is judged to be a ground object. This yields the final detection result. For example, if the second detection gives cloud or snow at a position while the first detection gives fog, the position is judged to be a ground object; a position is judged to be cloud only if both detections give cloud.
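The fusion rule of the two passes can be sketched per pixel as follows; the label codes (0 = ground object, 1 = cloud, 2 = snow, 3 = fog) are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

# Hedged sketch of the two-pass fusion rule: where both detections agree the
# label is kept, where they disagree the pixel is demoted to "ground object".

GROUND = 0  # assumed label code for ground objects

def fuse_detections(first, second):
    first = np.asarray(first)
    second = np.asarray(second)
    # Keep the label only where the two passes agree.
    return np.where(first == second, first, GROUND)

first_pass  = np.array([1, 3, 2, 0])   # cloud, fog, snow, ground
second_pass = np.array([1, 1, 2, 3])   # cloud, cloud, snow, fog
print(fuse_detections(first_pass, second_pass))  # [1 0 2 0]
```

Only the first pixel (cloud in both passes) and the third pixel (snow in both passes) survive; every disagreement falls back to ground object, exactly as in the fog-versus-cloud example above.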
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments, or alternatives may be employed, by those skilled in the art without departing from the spirit of the invention or the scope defined in the appended claims.

Claims (6)

1. The method for detecting cloud, snow and fog of an optical satellite remote sensing image based on a support vector machine, characterized by comprising the following steps:
Step 1, collecting a large amount of cloud, snow, fog and ground object sample image data;
step 2, extracting gray features and texture features of various sample images to form feature vectors;
step 3, training the feature vectors of the sample images by using a support vector machine to respectively obtain a cloud image classifier, a snow image classifier and a fog image classifier which are formed by decision functions;
step 4, performing down-sampling processing on an original image of the satellite remote sensing image to be detected to obtain a thumbnail, performing image segmentation on the thumbnail to obtain sub-images, and calculating a feature vector consisting of gray features and texture features of all the sub-images;
Step 5, classifying the sub-images of the remote sensing image of the satellite to be detected, comprising the following sub-steps,
step 5.1, respectively inputting the feature vectors extracted in the step 4 into the cloud, snow and fog image classifiers obtained in the step 3 for prediction classification;
step 5.2, dividing all the sub-images into a cloud area, a fog area, a snow area and a ground object area according to the types of the target areas;
step 5.3, dividing the images into three binary images according to the cloud and ground object areas, the fog and ground object areas and the snow and ground object areas, wherein the ground object areas in each image take the same zero value, and the cloud, snow and fog areas take different image values;
step 6, performing morphological 'closing' operation on the classification result obtained in the step 5;
step 7, comparing three binary image values at the same position to obtain detection results of cloud, snow and fog in the remote sensing image of the satellite to be detected;
Step 7.1, comparing the three binary image values at the same position: if the three values are the same (all zero), the position is judged to be a ground object area; if two of the values are the same (zero) and one differs, the position is judged to belong to the category represented by the third, nonzero image value; if the three values are all different, the position is judged to be an overlapping area of cloud, snow and/or fog, wherein a position whose values include a zero is recorded as a double overlapping area and a position with no zero value is recorded as a triple overlapping area;
7.2, repeating step 7.1 until all image values of the three binary images have been compared, obtaining the discrimination results for the cloud, snow and fog areas, the ground object areas and the overlapping areas, and correcting the overlapping areas: firstly, judging whether an overlapping area is enclosed by another area, and if so, relabeling it as that enclosing area; secondly, judging the category of each double overlapping area, wherein if it adjoins a determined category area it is assigned the category of the overlap minus the adjoining category, and otherwise the decision is deferred until the other overlapping areas have been resolved; for a triple overlapping area, if it adjoins a determined category area it is reduced to the double overlapping area without that category, and if it adjoins a double overlapping area it is assigned the category of the triple overlap minus the categories of that double overlap; finally, judging the case where overlapping areas adjoin only different overlapping areas, assigning each area its own categories minus the category they share, and finally obtaining the discrimination result;
and 7.3, performing a morphological 'closing' operation on the discrimination result to obtain the detection results of cloud, snow and fog in the satellite remote sensing image to be detected.
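Step 7.1 of the claim can be sketched per pixel by counting how many of the three binary maps fire. The image values here (cloud map 1, snow map 2, fog map 3, ground 0, and placeholder codes 4/5 for double/triple overlaps) are assumptions for illustration, since the claim only requires the three maps to use distinct nonzero values.

```python
import numpy as np

# Sketch of step 7.1: 0 nonzero maps -> ground object; exactly 1 -> that
# map's category; 2 -> double overlapping area; 3 -> triple overlapping area.

def classify_pixelwise(cloud, snow, fog):
    maps = np.stack([np.asarray(cloud), np.asarray(snow), np.asarray(fog)])
    nonzero = np.count_nonzero(maps, axis=0)
    label = maps.sum(axis=0)          # equals 1, 2 or 3 where one map fires
    label[nonzero == 0] = 0           # ground object area
    label[nonzero == 2] = 4           # double overlapping area (placeholder)
    label[nonzero == 3] = 5           # triple overlapping area (placeholder)
    return label

cloud = np.array([0, 1, 1, 1])
snow  = np.array([0, 0, 2, 2])
fog   = np.array([0, 0, 0, 3])
print(classify_pixelwise(cloud, snow, fog))   # [0 1 4 5]
```

The four pixels illustrate the four cases in order: ground object, determined cloud, cloud-snow double overlap, and cloud-snow-fog triple overlap.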
2. The method for detecting cloud, snow and fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 1, characterized by further comprising: step 8, selecting a proper amount of cloud and ground object samples, fog and ground object samples, and snow and ground object samples as new training samples, repeating steps 2-7 to perform a secondary detection on the satellite remote sensing image to be detected, and comparing the result of the secondary detection with the first detection result, wherein if the two detection results at the same position are the same, the category of the position is determined as the category given by either detection result, and if the two detection results at the same position are different, the position is determined to be a ground object, thereby finally obtaining the detection result.
3. The method for detecting cloud, snow and fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 2, wherein the implementation of said step 2 is as follows:
step 2.1, calculating gray level characteristics of the sample image, including a gray level mean value, a gray level variance, a first order difference and a histogram information entropy of the sample image;
wherein the calculation formula of the gray level mean value is
μ = (1/S) Σ_{i=1..M} Σ_{j=1..N} f(i, j)
wherein f(i, j) is the gray value at (i, j), S = M × N, M is the width of the sample image, and N is the height of the sample image;
The gray-scale variance is calculated by the formula
σ² = (1/S) Σ_{i=1..M} Σ_{j=1..N} (f(i, j) − μ)²
wherein μ is the gray level mean value;
The first order difference is calculated as the mean absolute difference between adjacent pixels,
FD = (1/(2S)) Σ_{i} Σ_{j} ( |f(i, j) − f(i+1, j)| + |f(i, j) − f(i, j+1)| )
the calculation formula of the information entropy of the histogram is
H = − Σ_{i=0..M} h[g](i) · log₂ h[g](i)
wherein h[g] is the histogram of the sample image, h[g](i) is the percentage of the pixels at gray level i in the whole sample image, and M is the maximum gray level;
step 2.2, calculating texture features of the sample image, including the gradient standard deviation, mixed entropy, inverse difference moment and texture fractal dimension of the sample image;
Wherein the standard deviation of the gradient is calculated by the formula
σ_grad = ( (1/S) Σ_{i=1..M} Σ_{j=1..N} (g(i, j) − ḡ)² )^(1/2)
wherein g(i, j) is the gradient magnitude at (i, j) and ḡ is the mean gradient; the co-occurrence matrix used by the mixed entropy and the inverse difference moment is defined as
G(i, j; d, θ) = #{ (x₁, y₁), (x₂, y₂) | f(x₁, y₁) = i, f(x₂, y₂) = j, |(x₁, y₁) − (x₂, y₂)| = d, ∠((x₁, y₁), (x₂, y₂)) = θ }
wherein d represents the distance between two pixels, θ represents the direction angle between the pixels, f(x₁, y₁) and f(x₂, y₂) represent the gray values at (x₁, y₁) and (x₂, y₂) respectively, ∠ represents the angle between a pixel pair and the horizontal direction, # represents the number of pixel pairs satisfying the conditions in the set, Σ# represents the total number of pixel pairs under the given positional relationship, L_g represents the maximum gray level, and L represents the maximum gradient value;
the formula for calculating the mixed entropy is
H_mix = − Σ_i Σ_j p(i, j) · log₂ p(i, j)
wherein p(i, j) = G(i, j; d, θ) / Σ# is the normalized co-occurrence frequency;
The calculation formula of the inverse difference moment is
IDM = Σ_i Σ_j p(i, j) / (1 + (i − j)²)
the fractal Brownian random field method is used for solving the texture fractal dimension of the sample image, and the expression of the fractal dimension D of the image is,
D=n+1-H
wherein n refers to the spatial dimension of the sample image, and H is a self-similarity parameter;
And 2.3, forming an 8-dimensional feature vector from the four gray features and the four texture features.
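The four gray-level features of step 2.1 can be sketched with NumPy as below. The first-order difference is taken here as the mean absolute difference between horizontally and vertically adjacent pixels, which is an assumption where the patent's original formula image is not reproduced.

```python
import numpy as np

# Sketch of step 2.1: gray mean, gray variance, first-order difference and
# histogram information entropy of a sample image.

def gray_features(img, levels=256):
    img = np.asarray(img, dtype=np.float64)
    mean = img.mean()
    var = img.var()
    # Assumed first-order difference: mean absolute difference between
    # vertically and horizontally adjacent pixels.
    diff = 0.5 * (np.abs(np.diff(img, axis=0)).mean()
                  + np.abs(np.diff(img, axis=1)).mean())
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    entropy = -(p * np.log2(p)).sum()
    return mean, var, diff, entropy

img = np.array([[0, 0, 255, 255],
                [0, 0, 255, 255]])
mean, var, diff, entropy = gray_features(img)
print(round(mean, 1), round(entropy, 1))   # 127.5 1.0
```

For this half-dark, half-bright patch the mean is 127.5 and the entropy is exactly 1 bit, since only two gray levels occur with equal frequency.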
4. The method for detecting cloud, snow and fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 3, wherein the implementation of said step 3 is as follows:
Step 3.1, selecting a part of the cloud samples and ground object samples as training samples, and using the feature vectors of the training samples as a training set T = {(x_1, y_1), ..., (x_N, y_N)}, i = 1 … N, y_i ∈ ψ = {−1, 1}, wherein 1 represents the positive class, which represents the cloud region class, −1 represents the negative class, which represents the ground object region class, x_i ∈ R^n is the feature vector, and N is the number of samples;
Step 3.2, constructing a classification hyperplane by adopting a support vector machine of the C-SVC model, and calculating the Gaussian kernel function
K(x_i, x_j) = exp( −||x_i − x_j||² / (2σ²) )
wherein x_i and x_j respectively refer to the feature vectors of samples i and j, ||x_i − x_j||² is the square of the Euclidean distance, σ is the variance (kernel width) parameter, i = 1 … N, j = 1 … N, and N is the number of samples;
step 3.3, solving the Lagrange multiplier vector of the optimal classification hyperplane of the feature space by a convex quadratic programming method,
min_α (1/2) Σ_{i=1..N} Σ_{j=1..N} α_i α_j y_i y_j K(x_i, x_j) − Σ_{i=1..N} α_i
s.t. Σ_{i=1..N} α_i y_i = 0, 0 ≤ α_i ≤ C, i = 1, 2 … N
wherein α = (α_1, α_2, ... α_N)^T is the Lagrange multiplier vector, α_i and α_j respectively represent the ith and jth Lagrange multipliers, x_i and x_j respectively represent the feature vectors of the ith and jth samples, y_i and y_j respectively represent the categories of the ith and jth samples, C is the penalty parameter, and N is the number of samples; the optimal solution of the Lagrange multiplier vector is thus obtained as:
α* = (α_1*, α_2*, ... α_N*)^T
wherein α_i* represents the optimal solution for the ith Lagrange multiplier;
step 3.4, solving the intercept of the optimal classification hyperplane of the feature space, wherein the calculation formula is
b* = y_j − Σ_{i=1..N} α_i* y_i K(x_i, x_j), for any j satisfying 0 < α_j* < C
wherein α_i* is the optimal solution of the ith Lagrange multiplier, y_i is the category of the ith sample, and N is the number of samples;
Step 3.5, substituting the obtained Gaussian kernel function, the optimal Lagrange multipliers and the hyperplane intercept into the decision function
f(x) = sign( Σ_{i=1..N} α_i* y_i K(x_i, x) + b* );
step 3.6, taking the remaining cloud samples and ground object samples as test samples, testing and optimizing the decision function, and thereby obtaining the corresponding cloud image classifier;
And 3.7, repeating the steps 3.1-3.6 to respectively obtain a snow image classifier and a fog image classifier.
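Steps 3.2 and 3.5 can be sketched as the kernel evaluation and the decision function below. The support vectors, multipliers α* and intercept b* here are illustrative values only; the convex quadratic-programming step that produces them (step 3.3) is omitted.

```python
import numpy as np

# Sketch of the Gaussian kernel K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 s^2))
# and the decision function f(x) = sign(sum_i alpha_i* y_i K(x_i, x) + b*).

def gaussian_kernel(a, b, sigma=1.0):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def decision(x, support_x, support_y, alpha, b, sigma=1.0):
    s = sum(a * y * gaussian_kernel(sx, x, sigma)
            for a, y, sx in zip(alpha, support_y, support_x))
    return 1 if s + b >= 0 else -1     # +1: cloud region, -1: ground object

# Two illustrative support vectors, one per class, with equal weights:
support_x = [[0.0, 0.0], [4.0, 4.0]]
support_y = [1, -1]                    # +1 cloud, -1 ground object
alpha = [1.0, 1.0]                     # assumed multipliers, not QP output
b = 0.0
print(decision([0.5, 0.5], support_x, support_y, alpha, b))   # 1
print(decision([3.5, 3.5], support_x, support_y, alpha, b))   # -1
```

A query near the cloud support vector is classified as cloud (+1), one near the ground-object support vector as ground object (−1); a full implementation would obtain α* and b* from the dual problem of step 3.3.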
5. The method for detecting cloud, snow and fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 4, wherein in said step 4, if the remote sensing image to be detected is a panchromatic image, the down-sampling processing is applied directly, and if it is a multispectral image, the down-sampling is performed on the RGB three bands.
6. The method for detecting cloud, snow and fog of the optical satellite remote sensing image based on the support vector machine as claimed in claim 5, wherein in said step 6, structural elements with a square shape and a size of 3 × 3 are selected, a dilation operation is performed on each of the three binary images respectively, and an erosion operation is then performed on the resulting images with the same structural elements.
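The closing operation of claim 6 (dilation then erosion with the same 3 × 3 square structuring element) can be sketched directly with NumPy; a library such as scipy.ndimage offers the same operation as `binary_closing`, but the direct version below makes the two stages explicit.

```python
import numpy as np

# Sketch of the claim-6 closing: dilate with a 3x3 square structuring
# element, then erode with the same element. Closing fills small holes
# and gaps in the binary cloud/snow/fog masks.

def _neighborhood_reduce(img, reduce_fn, pad_value):
    # Apply reduce_fn over each pixel's 3x3 neighborhood via 9 shifted views.
    padded = np.pad(img, 1, constant_values=pad_value)
    stacked = np.stack([padded[di:di + img.shape[0], dj:dj + img.shape[1]]
                        for di in range(3) for dj in range(3)])
    return reduce_fn(stacked, axis=0)

def close3x3(img):
    img = np.asarray(img).astype(bool)
    dilated = _neighborhood_reduce(img, np.max, pad_value=False)  # dilation
    return _neighborhood_reduce(dilated, np.min, pad_value=True).astype(np.uint8)  # erosion

img = np.array([[1, 1, 1, 1],
                [1, 0, 0, 1],      # small hole inside a detected blob
                [1, 1, 1, 1]])
print(close3x3(img))               # the two-pixel hole is filled
```

The padding values (False for dilation, True for erosion) keep the image border from artificially shrinking the mask, a common boundary-handling choice rather than one prescribed by the patent.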
CN201710834224.6A 2017-09-15 2017-09-15 optical satellite remote sensing image cloud and snow fog detection method based on support vector machine Active CN107610114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710834224.6A CN107610114B (en) 2017-09-15 2017-09-15 optical satellite remote sensing image cloud and snow fog detection method based on support vector machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710834224.6A CN107610114B (en) 2017-09-15 2017-09-15 optical satellite remote sensing image cloud and snow fog detection method based on support vector machine

Publications (2)

Publication Number Publication Date
CN107610114A CN107610114A (en) 2018-01-19
CN107610114B 2019-12-10

Family

ID=61060362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710834224.6A Active CN107610114B (en) 2017-09-15 2017-09-15 optical satellite remote sensing image cloud and snow fog detection method based on support vector machine

Country Status (1)

Country Link
CN (1) CN107610114B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232302B (en) * 2018-03-06 2020-08-25 香港理工大学深圳研究院 Method for detecting change of integrated gray value, spatial information and category knowledge
CN108629297A (en) * 2018-04-19 2018-10-09 北京理工大学 A kind of remote sensing images cloud detection method of optic based on spatial domain natural scene statistics
CN109740639B (en) * 2018-12-15 2021-02-19 中国科学院深圳先进技术研究院 Wind cloud satellite remote sensing image cloud detection method and system and electronic equipment
CN109934291B (en) * 2019-03-13 2020-10-09 北京林业大学 Construction method of forest land tree species classifier, forest land tree species classification method and system
CN110175638B (en) * 2019-05-13 2021-04-30 北京中科锐景科技有限公司 Raise dust source monitoring method
CN110705619B (en) * 2019-09-25 2023-06-06 南方电网科学研究院有限责任公司 Mist concentration grade discriminating method and device
CN110599488B (en) * 2019-09-27 2022-04-29 广西师范大学 Cloud detection method based on Sentinel-2 aerosol wave band
CN110930399A (en) * 2019-12-10 2020-03-27 南京医科大学 TKA preoperative clinical staging intelligent evaluation method based on support vector machine
CN111047570B (en) * 2019-12-10 2023-06-27 中科星图空间技术有限公司 Automatic cloud detection method based on texture analysis method
CN111291818B (en) * 2020-02-18 2022-03-18 浙江工业大学 Non-uniform class sample equalization method for cloud mask
CN111429435A (en) * 2020-03-27 2020-07-17 王程 Rapid and accurate cloud content detection method for remote sensing digital image
CN111709458B (en) * 2020-05-25 2021-04-13 中国自然资源航空物探遥感中心 Automatic quality inspection method for top-resolution five-number images
CN112668613A (en) * 2020-12-07 2021-04-16 中国西安卫星测控中心 Satellite infrared imaging effect prediction method based on weather forecast and machine learning
CN113191179A (en) * 2020-12-21 2021-07-30 广州蓝图地理信息技术有限公司 Remote sensing image classification method based on gray level co-occurrence matrix and BP neural network
CN112668441B (en) * 2020-12-24 2022-09-23 中国电子科技集团公司第二十八研究所 Satellite remote sensing image airplane target identification method combined with priori knowledge
CN113420717A (en) * 2021-07-16 2021-09-21 西藏民族大学 Three-dimensional monitoring method, device and equipment for ice and snow changes and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093243A (en) * 2013-01-24 2013-05-08 哈尔滨工业大学 High resolution panchromatic remote sensing image cloud discriminating method
CN104077592A (en) * 2013-03-27 2014-10-01 上海市城市建设设计研究总院 Automatic extraction method for high-resolution remote-sensing image navigation mark
CN104484670A (en) * 2014-10-24 2015-04-01 西安电子科技大学 Remote sensing image cloud detection method based on pseudo color and support vector machine
CN104680151A (en) * 2015-03-12 2015-06-03 武汉大学 High-resolution panchromatic remote-sensing image change detection method considering snow covering effect
CN104966295A (en) * 2015-06-16 2015-10-07 武汉大学 Ship extraction method based on wire frame model
CN105260729A (en) * 2015-11-20 2016-01-20 武汉大学 Satellite remote sensing image cloud amount calculation method on the basis of random forest
CN105426903A (en) * 2015-10-27 2016-03-23 航天恒星科技有限公司 Cloud determination method and system for remote sensing satellite images
WO2017099951A1 (en) * 2015-12-07 2017-06-15 The Climate Corporation Cloud detection on remote sensing imagery

Also Published As

Publication number Publication date
CN107610114A (en) 2018-01-19

Similar Documents

Publication Publication Date Title
CN107610114B (en) optical satellite remote sensing image cloud and snow fog detection method based on support vector machine
CN102426649B (en) Simple steel seal digital automatic identification method with high accuracy rate
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
Gao et al. Automatic change detection in synthetic aperture radar images based on PCANet
EP3455782B1 (en) System and method for detecting plant diseases
CN108319973B (en) Detection method for citrus fruits on tree
CN107085708B (en) High-resolution remote sensing image change detection method based on multi-scale segmentation and fusion
CN104217196B (en) A kind of remote sensing image circle oil tank automatic testing method
CN110543837A (en) visible light airport airplane detection method based on potential target point
CN109086687A (en) The traffic sign recognition method of HOG-MBLBP fusion feature based on PCA dimensionality reduction
CN107392968B (en) The image significance detection method of Fusion of Color comparison diagram and Color-spatial distribution figure
CN104680127A (en) Gesture identification method and gesture identification system
CN102722891A (en) Method for detecting image significance
Tsai et al. Road sign detection using eigen colour
CN103218831A (en) Video moving target classification and identification method based on outline constraint
CN104217221A (en) Method for detecting calligraphy and paintings based on textural features
CN103886589A (en) Goal-oriented automatic high-precision edge extraction method
CN104408482A (en) Detecting method for high-resolution SAR (Synthetic Aperture Radar) image object
CN103295013A (en) Pared area based single-image shadow detection method
CN108073940B (en) Method for detecting 3D target example object in unstructured environment
Wu et al. Strong shadow removal via patch-based shadow edge detection
CN111259756A (en) Pedestrian re-identification method based on local high-frequency features and mixed metric learning
CN105354547A (en) Pedestrian detection method in combination of texture and color features
Nguyen et al. Using contextual information to classify nuclei in histology images
CN114373079A (en) Rapid and accurate ground penetrating radar target detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210721

Address after: 517000 floors 1-4, plant incubator (Shenhe Jindi Chuang Valley), building e2-1, east of Xingye Avenue and north of Gaoxin fifth road, Heyuan high tech Development Zone, Guangdong Province

Patentee after: Jingtong space technology (Heyuan) Co.,Ltd.

Address before: 430072 Hubei Province, Wuhan city Wuchang District of Wuhan University Luojiashan

Patentee before: WUHAN University

TR01 Transfer of patent right

Effective date of registration: 20240319

Address after: Room 501, Building 17, Plot 2, Phase II, Pearl River Huacheng, No. 99, Fuyuan West Road, Liuyanghe Street, Kaifu District, Changsha, Hunan 410000

Patentee after: Hunan Hejing Cultural Media Co.,Ltd.

Country or region after: China

Address before: 517000 floors 1-4, plant incubator (Shenhe Jindi Chuang Valley), building e2-1, east of Xingye Avenue and north of Gaoxin fifth road, Heyuan high tech Development Zone, Guangdong Province

Patentee before: Jingtong space technology (Heyuan) Co.,Ltd.

Country or region before: China
