CN107610114A - Optical satellite remote sensing image cloud, snow and fog detection method based on support vector machine
- Publication number
- CN107610114A (application CN201710834224.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- cloud
- fog
- snow
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a support vector machine based method for detecting cloud, snow and fog in satellite remote sensing images, comprising the following steps. First, a large number of sample images of different types of ground objects and of cloud, snow and fog are collected as a training set, and the grayscale and texture features of the images are extracted to form feature sets; the feature sets of all samples are learned with a support vector machine to obtain cloud, snow and fog image classifiers. Next, the obtained cloud, snow and fog image classifiers are used to determine the category of the image to be tested, and a morphological closing operation and an overlap-region correction are applied to decide the type of each target region in the remote sensing image. Finally, new training samples are selected to obtain new image classifiers, a second detection is performed on the satellite remote sensing image to be tested and compared with the first detection, and the final cloud, snow and fog decision for the image is obtained. Experimental results show that the method achieves high detection accuracy.
Description
Technical Field
The invention belongs to the field of satellite remote sensing image quality inspection, and specifically relates to a support vector machine based method for detecting cloud, snow and fog in satellite remote sensing images.
Background Art
In optical satellite remote sensing images, remote sensing information is often affected by cloud, fog and snow. Cloud and snow cover the surface information of the imaged area, while fog and haze obscure many characteristic features in the image. It is therefore necessary to detect cloud, snow and fog regions in remote sensing images and to discard image data in which invalid information covers too large an area, so as to improve the utilization rate of optical satellite remote sensing imagery.
Current cloud, snow and fog detection methods for remote sensing images mainly address the detection of cloud or fog alone, or the joint identification of cloud and fog or of cloud and snow. They mainly include threshold methods and feature extraction methods. Cloud detection methods typically set spectral thresholds on the reflectance of different bands to decide whether a pixel is cloud, or extract image features and detect cloud by feature classification. Fog detection methods mainly study typical cases, extracting features from remote sensing data to monitor heavy fog. Cloud and snow detection methods exploit the fact that cloud and snow have similar signatures in the visible bands but differ markedly in the short-wave infrared, identifying snow by constructing a cloud-snow contrast enhancement factor, or by computing the fractal dimension of texture features of panchromatic images. Methods that handle cloud, snow and fog together are usually a superposition of the above approaches.
A search of the existing literature shows that current cloud, snow and fog detection methods have the following problems. First, existing methods can hardly detect cloud, snow and fog at the same time: each method depends on the detection target, and a single method can hardly satisfy multiple detection requirements; existing threshold methods are not reliable, because the results are affected by the spatio-temporal type of the scene and are difficult to generalize; feature extraction methods select insufficient image feature information, so the detection accuracy is not high enough. Second, detection efficiency is low and algorithm complexity is high, which makes rapid detection and identification of large data volumes difficult; such methods also impose requirements on the remote sensing data source and thus have poor universality.
Summary of the Invention
The purpose of the invention is to enhance the timeliness of remote sensing image quality inspection and to improve the utilization rate of remote sensing images, so that the method can be applied to the product quality inspection systems of domestic satellites such as Ziyuan-1, Ziyuan-3, Tianhui-1 and Gaofen-1.
To achieve this purpose, the present invention proposes a support vector machine based method for detecting cloud, snow and fog in satellite remote sensing images. The technical solution comprises the following steps:
Step 1, collect a large number of sample image data of cloud, snow, fog and ground objects;
Step 2, extract the grayscale features and texture features of each type of sample image to form feature vectors;
Step 3, train on the feature vectors of the sample images with a support vector machine to obtain a cloud image classifier, a snow image classifier and a fog image classifier, each formed by a decision function;
Step 4, down-sample the original satellite remote sensing image to be tested to obtain a thumbnail, segment the thumbnail into sub-images, and compute for every sub-image the feature vector composed of grayscale and texture features;
Step 5, classify the sub-images of the satellite remote sensing image to be tested, comprising the following sub-steps:
Step 5.1, feed the feature vectors extracted in step 4 into the cloud, snow and fog image classifiers obtained in step 3 for predictive classification;
Step 5.2, divide all sub-images into cloud regions, fog regions, snow regions and ground-object regions according to the type of the target region;
Step 5.3, build three binary images, one each for cloud versus ground objects, fog versus ground objects, and snow versus ground objects, where the ground-object regions in every image take the same zero value and the cloud, snow and fog regions take different image values;
Step 6, apply a morphological closing operation to the classification results obtained in step 5;
Step 7, compare the values of the three binary images at the same position to obtain the cloud, snow and fog detection results for the satellite remote sensing image to be tested.
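A minimal end-to-end sketch of steps 4-7 is given below, assuming the three binary classifiers of step 3 are already trained; the classifiers and the tile feature vectors here are random stand-ins, not the actual training pipeline.

```python
import numpy as np
from scipy.ndimage import binary_closing
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def toy_classifier():
    # stand-in for a classifier trained on real cloud/fog/snow vs. ground-object samples
    X = rng.normal(size=(200, 8))
    y = np.where(rng.random(200) > 0.5, 1, -1)
    return SVC(C=1.0, kernel="rbf", gamma="scale").fit(X, y)

classifiers = {"cloud": (toy_classifier(), 1),
               "fog":   (toy_classifier(), 2),
               "snow":  (toy_classifier(), 3)}

tiles = rng.normal(size=(32 * 32, 8))            # step 4: one 8-D feature vector per 32x32 tile

maps = {}
for name, (clf, value) in classifiers.items():
    pred = clf.predict(tiles).reshape(32, 32)                      # step 5: +1 target, -1 ground
    closed = binary_closing(pred == 1, structure=np.ones((3, 3)))  # step 6: 3x3 closing
    maps[name] = np.where(closed, value, 0)

# Step 7.1: compare the three binary maps position by position
stack = np.stack([maps["cloud"], maps["fog"], maps["snow"]])
fired = (stack > 0).sum(axis=0)
label = np.zeros((32, 32), dtype=int)                 # 0 = ground objects
label[fired == 1] = stack.max(axis=0)[fired == 1]     # exactly one detector fired
# positions with fired >= 2 are overlap regions, resolved by the rules of step 7.2
```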
Further, step 7 is implemented as follows.
Step 7.1, compare the values of the three binary images at the same position. If the three images have the same value at that position, the position is judged to be a ground-object region; if two of the three values are identical, the position is judged to belong to the category represented by the third image value; if the three values are all different, the position is judged to be an overlap region of cloud, snow and fog, where points containing a zero value are recorded as double-overlap regions and points containing no zero value are recorded as triple-overlap regions.
Step 7.2, repeat step 7.1 and compare all values of the three binary images to obtain the discrimination results for the cloud, snow, fog and ground-object regions as well as the overlap regions, and then correct the overlap regions. First, judge whether an overlap region is contained in another region; if so, replace the overlap region with that region. Next, judge the category of each double-overlap region: if it adjoins a region of a determined category, the double-overlap region is assigned the category that remains after removing the adjoining category from the overlap; otherwise its category is confirmed after the triple-overlap regions have been decided. For a triple-overlap region, if it adjoins a region of a determined category, it is judged to be the double-overlap region obtained by removing the adjoining category; if it adjoins a double-overlap region, it is assigned the category of the triple overlap that is not shared with the adjoining double-overlap region. Finally, for double-overlap regions that adjoin only different double-overlap regions, each is judged to be the category that remains after removing their common category. The decision result is thereby obtained.
Step 7.3, apply a morphological closing operation to the decision result to obtain the cloud, snow and fog detection results for the satellite remote sensing image to be tested.
Further, the method also comprises step 8: reselect an appropriate number of cloud and ground-object, fog and ground-object, and snow and ground-object samples as training samples, and repeat steps 2-7 to perform a second detection on the satellite remote sensing image to be tested. Compare the result of the second detection with the first detection result: if the two results at the same position are the same, the category of that position is the category given by either detection; if the two results differ, the position is judged to be a ground object. The final detection result is thereby obtained.
Further, step 2 is implemented as follows.
Step 2.1, compute the grayscale features of the sample image, including the grayscale mean, grayscale variance, first-order difference and histogram information entropy;
the grayscale mean is computed as
mean = (1/S)·Σi Σj f(i,j)
where f(i,j) is the gray value at (i,j), S = M×N, M is the width of the sample image and N is its height;
the grayscale variance is computed as
var = (1/S)·Σi Σj (f(i,j) − mean)²
the first-order difference is computed from the absolute gray-value differences between neighbouring pixels;
the histogram information entropy is computed as
E = −Σi h[g](i)·log h[g](i)
where h[g] is the histogram of the sample image, h[g](i) is the percentage of pixels at gray level i in the whole sample image, and M is the maximum gray level;
Step 2.2, compute the texture features of the sample image, including the gradient standard deviation, mixed entropy, inverse difference moment and texture fractal dimension;
the gradient standard deviation is derived from the co-occurrence matrix
G(i,j;d,θ) = #{((x1,y1),(x2,y2)) | f(x1,y1)=i, f(x2,y2)=j, |(x1,y1)−(x2,y2)|=d, ∠((x1,y1),(x2,y2))=θ}
where d is the distance between two pixels, θ is the direction angle between them, f(x1,y1) and f(x2,y2) are the gray values at (x1,y1) and (x2,y2), ∠ is the angle between the pixel pair and the horizontal, # is the number of pixel pairs satisfying the constraints in the set, Σ#LxLy is the total number of pixel pairs under the given positional relation, Lg is the maximum gray level and L is the maximum gradient value;
the mixed entropy is computed over the normalised co-occurrence matrix H(i,j;d,θ) as
Emix = −Σi Σj H(i,j;d,θ)·log H(i,j;d,θ)
the inverse difference moment is computed as
IDM = Σi Σj H(i,j;d,θ) / (1 + (i−j)²)
the texture fractal dimension of the sample image is obtained with the fractional Brownian random field method, and the fractal dimension D of the image is expressed as
D = n + 1 − H
where n is the spatial dimension of the sample image and H is the self-similarity parameter;
Step 2.3, combine the above grayscale and texture features into an 8-dimensional feature vector.
Further, step 3 is implemented as follows.
Step 3.1, select part of the cloud samples and ground-object samples as training samples, and take the feature vectors of the training samples as the training set T = {(x1,y1), ..., (xi,yi)}, i = 1…N, yi ∈ ψ = {-1, 1}, where 1 denotes the positive class, i.e. the cloud-region category, and -1 denotes the negative class, i.e. the ground-object category; xi ∈ R^n is the feature vector and N is the number of samples;
Step 3.2, construct the classification hyperplane with a support vector machine of the C-SVC model and compute the Gaussian kernel function
K(xi, xj) = exp(−‖xi − xj‖² / (2σ²))
where xi and xj are the feature vectors of samples i and j, ‖xi − xj‖² is the squared Euclidean distance, σ is the variance, i = 1…N, j = 1…N, and N is the number of samples;
Step 3.3, solve the Lagrange multiplier vector of the optimal classification hyperplane in the feature space by convex quadratic programming,
min over α of (1/2)·Σi Σj αi αj yi yj K(xi, xj) − Σi αi, subject to Σi αi yi = 0,
where 0 ≤ αi ≤ C, i = 1, 2…N, α is the Lagrange multiplier vector, αi and αj are its i-th and j-th Lagrange multipliers, xi and xj are the feature vectors of the i-th and j-th samples, yi and yj are the categories of the i-th and j-th samples, C is the penalty parameter and N is the number of samples; the optimal solution of the Lagrange multiplier vector is
α* = (α1*, α2*, ..., αi*, ..., αN*)^T
where αi* is the optimal solution of the i-th Lagrange multiplier;
Step 3.4, solve the intercept of the optimal classification hyperplane in the feature space as
b* = yj − Σi αi* yi K(xi, xj), for a sample j with 0 < αj* < C,
where αi* is the optimal solution of the i-th Lagrange multiplier, yi is the category of the i-th sample and N is the number of samples;
Step 3.5, substitute the obtained Gaussian kernel function, optimal Lagrange multipliers and hyperplane intercept into the decision function
f(x) = sign( Σi αi* yi K(xi, x) + b* );
Step 3.6, use the remaining cloud samples and ground-object samples as test samples, test and optimize the decision function, and obtain the corresponding cloud image classifier;
Step 3.7, repeat steps 3.1-3.6 to obtain the snow image classifier and the fog image classifier respectively.
Further, in step 4, if the remote sensing image to be tested is a panchromatic image, it is down-sampled directly; if it is a multispectral image, the three RGB bands are used for down-sampling.
Further, step 6 is implemented by selecting a square structuring element of size 3×3, applying a dilation operation to each of the three binary images, and then applying an erosion operation to the resulting images with the same structuring element.
Compared with the prior art, the advantages of the present invention are as follows: the method can be trained once and then used for many detections, since the image classifiers obtained from a large amount of training imagery only need to be reused at detection time; the support vector machine algorithm has low time complexity in the prediction stage, so region types can be detected quickly. Tests show that the method is applicable both to panchromatic images and to n-channel multispectral images; cloud detection with the method on remote sensing images of several domestic satellites, namely Ziyuan-1 02C, Ziyuan-3, Tianhui-1 and Gaofen-1, reaches accuracies of 94.8%, 96.4%, 93.2% and 95.2% respectively.
Brief Description of the Drawings
Figure 1 is a flow chart of an embodiment of the present invention.
Detailed Description of the Embodiments
To help those of ordinary skill in the art understand and implement the present invention, the invention is described in further detail below with reference to the accompanying drawing and embodiments. The examples described here only serve to illustrate and explain the invention and do not limit its scope of protection.
Referring to Figure 1, the invention is illustrated with panchromatic image data of the Ziyuan-1 02C and Ziyuan-3 satellites and multispectral remote sensing image data of Tianhui-1. The implementation steps are as follows:
Step 1, sample acquisition
Down-sample the original sample images to 1024×1024-pixel 8-bit bmp thumbnails and segment the thumbnails. If the remote sensing image is a panchromatic image, it is down-sampled directly; if it is a multispectral image, the three RGB bands are used for down-sampling. The panchromatic images are cut into 32×32 sample blocks and the multispectral images into 16×16 blocks; 1500 ground-object samples, 1000 cloud samples, 1000 snow samples and 1000 fog samples are selected as training sample data.
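A minimal sketch of this preprocessing is shown below, assuming OpenCV is available; the file name "scene.tif" is a placeholder, and the first three bands of a multispectral scene are used as an RGB stand-in.

```python
import cv2
import numpy as np

def make_thumbnail(path, size=1024):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    if img is None:
        raise FileNotFoundError(path)
    if img.ndim == 3:
        img = img[:, :, :3]                       # multispectral: first three bands as RGB stand-in
    thumb = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
    return cv2.normalize(thumb, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def tile(thumb, block):
    # block = 32 for panchromatic samples, 16 for multispectral samples
    return [thumb[r:r + block, c:c + block]
            for r in range(0, thumb.shape[0], block)
            for c in range(0, thumb.shape[1], block)]

thumb = make_thumbnail("scene.tif")               # placeholder file name
blocks = tile(thumb, 32)                          # 1024 blocks of 32x32 pixels
```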
Step 2, feature extraction
Clouds, snow, fog and ground objects differ in many respects in their grayscale information. In general, cloud regions in panchromatic and multispectral images have a much higher average brightness than fog regions, the brightness of fog regions is higher than that of ground-object regions, and cloud and snow regions have similar spectral characteristics; at the same time, clouds, snow, fog and ground objects also differ clearly in gray-level distribution and gray-level variation. The grayscale features of an image can therefore separate target-region imagery from ground-object imagery to a certain extent.
The grayscale and texture feature values of the sample images are extracted to form an 8-dimensional feature vector, as follows:
Step 2.1, compute the grayscale features of the sample image
Step 2.1.1, compute the grayscale mean:
mean = (1/S)·Σi Σj f(i,j)
where f(i,j) is the gray value at (i,j), S = M×N, M is the width of the sample image and N is its height.
Step 2.1.2, compute the grayscale variance of the sample image:
var = (1/S)·Σi Σj (f(i,j) − mean)²
Step 2.1.3, compute the first-order difference of the sample image from the absolute gray-value differences between neighbouring pixels.
Step 2.1.4, compute the histogram information entropy of the sample image:
E = −Σi h[g](i)·log h[g](i)
where h[g] is the histogram of the sample image, h[g](i) is the percentage of pixels at gray level i in the whole image, and M is the maximum gray level.
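A sketch of these four grayscale statistics for a single uint8 tile is given below; since the patent's first-order-difference formula is not reproduced above, the mean absolute difference between horizontal and vertical neighbours is used here as one common stand-in.

```python
import numpy as np

def gray_features(tile):
    # tile: 2-D uint8 array (one 32x32 or 16x16 block)
    f = tile.astype(np.float64)
    mean = f.mean()
    var = f.var()
    # first-order difference: mean absolute difference of horizontal and vertical neighbours
    diff = 0.5 * (np.abs(np.diff(f, axis=0)).mean() + np.abs(np.diff(f, axis=1)).mean())
    hist = np.bincount(tile.ravel(), minlength=256) / tile.size
    p = hist[hist > 0]
    entropy = -(p * np.log2(p)).sum()                      # histogram information entropy
    return np.array([mean, var, diff, entropy])

print(gray_features(np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)))
```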
Step 2.2, compute the texture features of the sample image. From the viewpoint of human visual perception, the texture of clouds, snow and fog in satellite remote sensing images is usually more uniform and simpler than that of ground objects; in addition, the edges of cloud, snow and fog regions are relatively blurred and rounded, whereas the edges of ground objects are usually sharp with large gradients. The texture and gradient information of satellite remote sensing images can therefore be used to detect and separate cloud, snow and fog regions from ground-object information. The texture features of cloud, snow and fog images also differ clearly from one another. Cloud texture is random, changeable and hard to detect, appears disorderly and irregular, and has coarse, blurred edges; fog texture is relatively uniform and smooth with regular edge shapes; snow is influenced by the underlying ground texture, has better directionality and large gradient variation. By combining image grayscale and image gradient information, the different texture characteristics of cloud, snow and fog can be distinguished.
Step 2.2.1, compute the gray-gradient co-occurrence matrix G(i,j;d,θ) of the sample image with the following formula:
G(i,j;d,θ) = #{((x1,y1),(x2,y2)) | f(x1,y1)=i, f(x2,y2)=j, |(x1,y1)−(x2,y2)|=d, ∠((x1,y1),(x2,y2))=θ}
where d is the distance between two pixels, θ is the direction angle between them, (x,y) are the coordinates of a pixel, f(x,y) is the gray value at that point, ∠ is the angle between the pixel pair and the horizontal, and # is the number of pixel pairs satisfying the constraints in the set. For example, for two points with gray values 1 and 2, a distance of 1 and θ = 0° (the horizontal direction), the whole image is traversed and the number of pixel pairs meeting these conditions is counted.
Step 2.2.2, normalise the co-occurrence matrix G(i,j;d,θ) to H(i,j;d,θ) by dividing by the total number of pixel pairs:
H(i,j;d,θ) = G(i,j;d,θ) / Σ#LxLy
where Σ#LxLy is the sum of the numbers of all pixel pairs under the given positional relation (i.e. the same distance d and angle θ).
Step 2.2.3, compute the gradient standard deviation of the sample image: first compute the gradient mean T over the normalised co-occurrence matrix, where Lg denotes the maximum gray level and L the maximum gradient value, and then substitute the gradient mean T into the standard-deviation formula to obtain the gradient standard deviation.
Step 2.2.4, compute the mixed entropy of the sample image:
Emix = −Σi Σj H(i,j;d,θ)·log H(i,j;d,θ)
Step 2.2.5, extract the local homogeneity feature of the sample image by computing the inverse difference moment:
IDM = Σi Σj H(i,j;d,θ) / (1 + (i−j)²)
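A sketch of these texture statistics for one uint8 tile is given below; the normalised co-occurrence matrix is approximated with scikit-image's GLCM, whose "homogeneity" property is the inverse difference moment, while the gradient standard deviation is taken from a Sobel gradient image as a simple stand-in for the statistic defined above.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import graycomatrix, graycoprops

def texture_features(tile, d=1, theta=0.0):
    # tile: 2-D uint8 array; d and theta select the co-occurrence distance and direction
    glcm = graycomatrix(tile, distances=[d], angles=[theta],
                        levels=256, symmetric=True, normed=True)
    H = glcm[:, :, 0, 0]
    ent = -(H[H > 0] * np.log2(H[H > 0])).sum()        # mixed entropy of the normalised matrix
    idm = graycoprops(glcm, "homogeneity")[0, 0]       # inverse difference moment
    grad = np.hypot(ndimage.sobel(tile.astype(float), axis=1),
                    ndimage.sobel(tile.astype(float), axis=0))
    return np.array([grad.std(), ent, idm])            # gradient standard deviation, entropy, IDM

print(texture_features(np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)))
```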
Step 2.2.6, use the fractional Brownian random field method to solve for the texture fractal dimension of the sample image.
Solve for the constant H (0 < H < 1) such that the distribution of the normalised intensity increments is a distribution function F(t) that is independent of x and Δx, where H is the self-similarity parameter, f(x) is a real random function of x, and n is the spatial dimension of the sample image. The fractal dimension D of the image is then expressed as:
D = n + 1 − H
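One common way to estimate H, and hence D, is to regress the logarithm of the mean absolute intensity increment against the logarithm of the lag; the patent's exact estimation procedure is not reproduced, so the sketch below is only an assumed implementation of the fractional-Brownian model.

```python
import numpy as np

def fractal_dimension(tile, scales=(1, 2, 4, 8)):
    # Estimate H from the slope of log E|f(x+dx) - f(x)| versus log |dx|, then D = n + 1 - H (n = 2)
    f = tile.astype(np.float64)
    log_inc, log_lag = [], []
    for s in scales:
        dh = np.abs(f[:, s:] - f[:, :-s]).mean()       # mean horizontal increment at lag s
        dv = np.abs(f[s:, :] - f[:-s, :]).mean()       # mean vertical increment at lag s
        log_inc.append(np.log(0.5 * (dh + dv) + 1e-12))
        log_lag.append(np.log(s))
    H = np.polyfit(log_lag, log_inc, 1)[0]             # self-similarity parameter
    return 2 + 1 - H                                   # D = n + 1 - H for an image surface (n = 2)

print(fractal_dimension(np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)))
```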
Step 3, image classifier training
The feature vectors of the sample images are trained with the support vector machine method to obtain a cloud image classifier, a snow image classifier and a fog image classifier, each formed by a decision function. The specific implementation comprises the following sub-steps:
Step 3.1, take 80% of the cloud samples and ground-object samples as training samples, and take the feature vectors of the training samples as the training set T = {(x1,y1), ..., (xi,yi)}, i = 1…N, where yi ∈ ψ = {-1, 1}, 1 denotes the positive class, i.e. the cloud-region category, -1 denotes the negative class, i.e. the ground-object category, xi ∈ R^n is the feature vector and N is the number of samples.
Step 3.2, construct the classification hyperplane with a support vector machine of the C-SVC model and compute the Gaussian kernel function:
K(xi, xj) = exp(−‖xi − xj‖² / (2σ²))
where xi and xj are the feature vectors of samples i and j, ‖xi − xj‖² is the squared Euclidean distance and σ is the variance; the value of σ is generally chosen according to experimental results. Since it is difficult to classify all feature vectors perfectly, σ can be understood as setting an error-tolerance range within which errors are ignored; i = 1…N, j = 1…N, and N is the number of samples.
Step 3.3, solve the Lagrange multiplier vector of the optimal classification hyperplane in the feature space by convex quadratic programming:
min over α of (1/2)·Σi Σj αi αj yi yj K(xi, xj) − Σi αi, subject to Σi αi yi = 0,
where 0 ≤ αi ≤ C, i = 1, 2…N, α is the Lagrange multiplier vector, αi and αj are its i-th and j-th Lagrange multipliers, xi and xj are the feature vectors of the i-th and j-th samples, yi and yj are the categories of the i-th and j-th samples, C is the penalty parameter and N is the number of samples; the optimal solution of the Lagrange multiplier vector is
α* = (α1*, α2*, ..., αi*, ..., αN*)^T
where αi* is the optimal solution of the i-th Lagrange multiplier.
Step 3.4, solve the intercept of the optimal classification hyperplane in the feature space:
b* = yj − Σi αi* yi K(xi, xj), for a sample j with 0 < αj* < C,
where αi* is the optimal solution of the i-th Lagrange multiplier, yi is the category (+1 or -1) of the i-th sample, and N is the number of samples.
Step 3.5, substitute the Gaussian kernel function, the optimal Lagrange multipliers and the hyperplane intercept obtained above into the decision function:
f(x) = sign( Σi αi* yi K(xi, x) + b* )
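As a check on steps 3.2-3.5, the sketch below evaluates this decision function by hand from a fitted scikit-learn SVC, using its stored support vectors, dual coefficients (the products αi*·yi) and intercept; the data are random stand-ins.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

clf = SVC(C=10.0, kernel="rbf", gamma=0.1).fit(X, y)

def decide(x):
    sv = clf.support_vectors_                    # the x_i with nonzero multipliers
    coef = clf.dual_coef_[0]                     # a_i* y_i for each support vector
    k = np.exp(-clf.gamma * ((sv - x) ** 2).sum(axis=1))   # Gaussian kernel values K(x_i, x)
    return np.sign(coef @ k + clf.intercept_[0])

x_new = rng.normal(size=8)
assert decide(x_new) == clf.predict(x_new[None, :])[0]     # matches the library prediction
```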
Step 3.6, take the remaining 20% of the cloud samples and ground-object samples as the test sample set, test the decision function, optimize it, and obtain the corresponding cloud image classifier.
Step 3.7, repeat steps 3.1-3.6 to obtain the snow image classifier and the fog image classifier respectively.
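A sketch of this training procedure with scikit-learn's C-SVC (RBF kernel) is given below; the samples are random stand-ins, and the grid of C and kernel-width values is only an assumed search range, not the parameters used in the patent.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                       # stand-in: 1000 cloud + 1000 ground-object tiles
y = np.r_[np.ones(1000), -np.ones(1000)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.1]},
                    cv=5)
grid.fit(X_tr, y_tr)
cloud_clf = grid.best_estimator_
print("held-out accuracy:", cloud_clf.score(X_te, y_te))
# Repeating the same procedure with snow and fog samples yields the other two classifiers.
```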
Step 4, feature extraction for the image to be tested
Down-sample the original image to be tested to a 1024×1024-pixel 8-bit bmp thumbnail; if the remote sensing image is a panchromatic image, it is down-sampled directly, and if it is a multispectral image, the three RGB bands are used for down-sampling. Segment the thumbnail into its 1024 sub-images of 32×32 pixels and extract the feature vectors of all sub-images, including the grayscale and texture feature values; the extraction is carried out as in step 2.
Step 5, classification of the image to be tested
Step 5.1, feed the feature vectors extracted in step 4 into the corresponding cloud, snow and fog image classifiers obtained in step 3 for predictive classification; the feature vectors are classified by the decision functions.
Step 5.2, repeat the prediction of step 5.1 until all sub-images have been classified, and divide all sub-images according to the category of the target region into cloud regions, fog regions, snow regions and non-cloud/snow/fog regions (i.e. ground-object regions).
Step 5.3, build three binary images, one each for cloud versus ground objects, fog versus ground objects, and snow versus ground objects, where the ground-object regions of all images take the same zero value while the cloud, snow and fog regions take different image values.
Step 6, morphological closing operation
Select a square structuring element of size 3×3, apply a dilation operation to each of the three binary images, and then apply an erosion operation to the resulting images with the same structuring element, so that the cloud, snow and fog regions are joined together and the noise regions at the edges are finally removed.
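A sketch of this closing operation (dilation followed by erosion with a 3×3 square structuring element) using OpenCV is shown below on a random stand-in class map.

```python
import cv2
import numpy as np

mask = (np.random.default_rng(0).random((32, 32)) > 0.6).astype(np.uint8)   # stand-in class map
kernel = np.ones((3, 3), np.uint8)                                          # 3x3 square structuring element
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)                    # dilation followed by erosion
# equivalent form: cv2.erode(cv2.dilate(mask, kernel), kernel)
```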
Step 7, overlap-region correction
Step 7.1, compare the values of the three binary images at the same position: if the three images have the same value at that position, the position is judged to be a ground-object region; if two of the values are identical, the position belongs to the category represented by the third image value; if the values are all different, the sub-image contains an overlap region of cloud, snow and fog, where points containing a zero value are recorded as double-overlap regions and points containing no zero value are recorded as triple-overlap regions.
Step 7.2, repeat step 7.1 and compare all values of the three binary images to obtain the discrimination results for the cloud, snow, fog and ground-object regions as well as the overlap regions, and then correct the overlap regions. First, judge whether an overlap region is contained in another region (including overlap regions and regions of a determined category); if so, replace the overlap region with that region. Next, judge the category of each double-overlap region: if it adjoins a region of a determined category, the double-overlap region is assigned the category that remains after removing the adjoining category; otherwise its category is confirmed after the triple-overlap regions have been decided. For a triple-overlap region, if it adjoins a region of a determined category, it becomes the double-overlap region obtained by removing the adjoining category; if it adjoins a double-overlap region, it is assigned the category of the triple overlap that is not shared with the adjoining double-overlap region. Finally, for double-overlap regions that adjoin only different double-overlap regions, each is judged to be the category that remains after removing their common category, and the decision result is obtained. For example, if the surroundings of a cloud-snow overlap region are a determined fog region, the cloud-snow region is judged to be a fog region; if a cloud-snow region is surrounded by a cloud-fog region, it is judged to be a cloud-fog region; if a cloud-snow region adjoins a determined cloud region, it is judged to be a snow region; if a cloud-snow-fog region adjoins a determined cloud region, it is judged to be a snow-fog region; if a cloud-snow-fog region adjoins a cloud-fog region, it is judged to be a determined snow region; and if a cloud-snow region adjoins a cloud-fog region, the cloud-snow region and the cloud-fog region are judged to be a snow region and a fog region respectively.
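The sketch below implements only the first of these rules for double-overlap regions (an overlap of two classes that touches a determined region of one of those classes keeps the other class); the containment, triple-overlap and overlap-to-overlap cases are left out, so it is a simplification rather than the full correction procedure.

```python
import numpy as np
from scipy import ndimage

def resolve_double_overlaps(maps):
    # maps: {"cloud": bool array, "fog": bool array, "snow": bool array}, True where that detector fired
    names = list(maps)
    stack = np.stack([maps[k] for k in names])
    overlap2 = stack.sum(axis=0) == 2                        # flagged by exactly two detectors
    labels, n = ndimage.label(overlap2)
    resolved = {k: v.copy() for k, v in maps.items()}
    for region in range(1, n + 1):
        inside = labels == region
        pair = [k for k in names if maps[k][inside].all()]   # the two classes covering this region
        if len(pair) != 2:
            continue                                         # mixed region: left to the full rule set
        ring = ndimage.binary_dilation(inside) & ~inside     # pixels adjacent to the region
        single = stack[:, ring].sum(axis=0) == 1             # adjacent pixels of a determined class
        for k in pair:
            if (maps[k][ring] & single).any():               # a determined k-region touches the overlap
                keep = pair[1] if k == pair[0] else pair[0]  # so the overlap keeps the other class
                for name in names:
                    resolved[name][inside] = (name == keep)
                break
    return resolved
```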
Step 7.3, apply a morphological closing operation to the decision result, as described in step 6, to obtain the final cloud, snow and fog detection results.
Step 8, second detection
Reselect 500 cloud and ground-object, 500 fog and ground-object and 500 snow and ground-object samples to build new support vector machine classifiers, choosing bright samples for the ground objects. Perform a second detection on the image to be tested and compare its result with the first detection result: if the two results at the same position are the same, the category of that position is the category given by either detection; if the two results differ, the position is judged to be a ground object, and the final detection result is obtained. For example, if the second detection gives cloud or snow while the first detection gives fog, the region is judged to be a ground object; only when both the first and the second detection give cloud is the region confirmed as cloud.
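A sketch of this fusion rule on two integer label maps is shown below, with 0 standing for ground objects and 1/2/3 for cloud, fog and snow.

```python
import numpy as np

def fuse(first, second):
    # keep a class only where the two detections agree; any disagreement falls back to ground (0)
    return np.where(first == second, first, 0)

first = np.array([[1, 1, 0], [2, 3, 3]])
second = np.array([[1, 2, 0], [2, 3, 1]])
print(fuse(first, second))    # [[1 0 0] [2 3 0]]
```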
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art to which the invention belongs may make various modifications or additions to the described embodiments or substitute them in similar ways without departing from the spirit of the invention or exceeding the scope defined by the appended claims.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710834224.6A CN107610114B (en) | 2017-09-15 | 2017-09-15 | Detection method of cloud, snow and fog in optical satellite remote sensing images based on support vector machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107610114A | 2018-01-19 |
CN107610114B CN107610114B (en) | 2019-12-10 |
Family
ID=61060362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710834224.6A Active CN107610114B (en) | 2017-09-15 | 2017-09-15 | Detection method of cloud, snow and fog in optical satellite remote sensing images based on support vector machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107610114B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093243A (en) * | 2013-01-24 | 2013-05-08 | 哈尔滨工业大学 | High resolution panchromatic remote sensing image cloud discriminating method |
CN104077592A (en) * | 2013-03-27 | 2014-10-01 | 上海市城市建设设计研究总院 | Automatic extraction method for high-resolution remote-sensing image navigation mark |
CN104484670A (en) * | 2014-10-24 | 2015-04-01 | 西安电子科技大学 | Remote sensing image cloud detection method based on pseudo color and support vector machine |
CN104680151A (en) * | 2015-03-12 | 2015-06-03 | 武汉大学 | High-resolution panchromatic remote-sensing image change detection method considering snow covering effect |
CN104966295A (en) * | 2015-06-16 | 2015-10-07 | 武汉大学 | Ship extraction method based on wire frame model |
CN105426903A (en) * | 2015-10-27 | 2016-03-23 | 航天恒星科技有限公司 | Cloud determination method and system for remote sensing satellite images |
CN105260729A (en) * | 2015-11-20 | 2016-01-20 | 武汉大学 | Satellite remote sensing image cloud amount calculation method on the basis of random forest |
WO2017099951A1 (en) * | 2015-12-07 | 2017-06-15 | The Climate Corporation | Cloud detection on remote sensing imagery |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110232302A (en) * | 2018-03-06 | 2019-09-13 | 香港理工大学深圳研究院 | A kind of change detecting method of integrated gray value, spatial information and classification knowledge |
CN108629297A (en) * | 2018-04-19 | 2018-10-09 | 北京理工大学 | A kind of remote sensing images cloud detection method of optic based on spatial domain natural scene statistics |
CN109740639A (en) * | 2018-12-15 | 2019-05-10 | 中国科学院深圳先进技术研究院 | A method, system and electronic device for detecting cloud in remote sensing image of Fengyun satellite |
CN109740639B (en) * | 2018-12-15 | 2021-02-19 | 中国科学院深圳先进技术研究院 | A method, system and electronic device for detecting cloud in remote sensing image of Fengyun satellite |
CN109934291A (en) * | 2019-03-13 | 2019-06-25 | 北京林业大学 | Construction method of forest tree species classifier, forest tree species classification method and system |
CN110175638A (en) * | 2019-05-13 | 2019-08-27 | 北京中科锐景科技有限公司 | A kind of fugitive dust source monitoring method |
CN110175638B (en) * | 2019-05-13 | 2021-04-30 | 北京中科锐景科技有限公司 | Raise dust source monitoring method |
CN110705619A (en) * | 2019-09-25 | 2020-01-17 | 南方电网科学研究院有限责任公司 | Fog concentration grade judging method and device |
CN110599488A (en) * | 2019-09-27 | 2019-12-20 | 广西师范大学 | Cloud detection method based on Sentinel-2 aerosol wave band |
CN110599488B (en) * | 2019-09-27 | 2022-04-29 | 广西师范大学 | Cloud detection method based on Sentinel-2 aerosol wave band |
CN111047570A (en) * | 2019-12-10 | 2020-04-21 | 西安中科星图空间数据技术有限公司 | Automatic cloud detection method based on texture analysis method |
CN110930399A (en) * | 2019-12-10 | 2020-03-27 | 南京医科大学 | TKA preoperative clinical staging intelligent evaluation method based on support vector machine |
CN111047570B (en) * | 2019-12-10 | 2023-06-27 | 中科星图空间技术有限公司 | Automatic cloud detection method based on texture analysis method |
CN111291818A (en) * | 2020-02-18 | 2020-06-16 | 浙江工业大学 | A Sample Equalization Method for Cloud Mask-Oriented Non-Uniform Classes |
CN111429435A (en) * | 2020-03-27 | 2020-07-17 | 王程 | Rapid and accurate cloud content detection method for remote sensing digital image |
CN111709458B (en) * | 2020-05-25 | 2021-04-13 | 中国自然资源航空物探遥感中心 | An automatic quality inspection method for Gaofen-5 images |
CN111709458A (en) * | 2020-05-25 | 2020-09-25 | 中国自然资源航空物探遥感中心 | An automatic quality inspection method for Gaofen-5 images |
CN112668613A (en) * | 2020-12-07 | 2021-04-16 | 中国西安卫星测控中心 | Satellite infrared imaging effect prediction method based on weather forecast and machine learning |
CN113191179A (en) * | 2020-12-21 | 2021-07-30 | 广州蓝图地理信息技术有限公司 | Remote sensing image classification method based on gray level co-occurrence matrix and BP neural network |
CN112668441A (en) * | 2020-12-24 | 2021-04-16 | 中国电子科技集团公司第二十八研究所 | Satellite remote sensing image airplane target identification method combined with priori knowledge |
CN112668441B (en) * | 2020-12-24 | 2022-09-23 | 中国电子科技集团公司第二十八研究所 | Satellite remote sensing image airplane target identification method combined with priori knowledge |
CN113420717A (en) * | 2021-07-16 | 2021-09-21 | 西藏民族大学 | Three-dimensional monitoring method, device and equipment for ice and snow changes and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107610114B (en) | 2019-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107610114B (en) | Detection method of cloud, snow and fog in optical satellite remote sensing images based on support vector machine | |
US11037291B2 (en) | System and method for detecting plant diseases | |
CN110678901B (en) | Information processing apparatus, information processing method, and computer-readable storage medium | |
CN106651872B (en) | Pavement crack identification method and system based on Prewitt operator | |
CN102426649B (en) | Simple steel seal digital automatic identification method with high accuracy rate | |
CN108765465B (en) | An Unsupervised SAR Image Change Detection Method | |
CN104217196B (en) | A kind of remote sensing image circle oil tank automatic testing method | |
CN103198315B (en) | Based on the Character Segmentation of License Plate of character outline and template matches | |
CN109948625A (en) | Definition of text images appraisal procedure and system, computer readable storage medium | |
CN105469046B (en) | Based on the cascade vehicle model recognizing method of PCA and SURF features | |
CN103903018A (en) | Method and system for positioning license plate in complex scene | |
CN102682305A (en) | Automatic screening system and automatic screening method using thin-prep cytology test | |
CN110309781A (en) | Remote sensing recognition method for house damage based on multi-scale spectral texture adaptive fusion | |
CN105205480A (en) | Complex scene human eye locating method and system | |
CN106780486A (en) | A method for image extraction of steel plate surface defects | |
CN113221881B (en) | A multi-level smartphone screen defect detection method | |
CN111353371A (en) | Shoreline extraction method based on spaceborne SAR images | |
CN111667475A (en) | Machine vision-based Chinese date grading detection method | |
CN104657980A (en) | Improved multi-channel image partitioning algorithm based on Meanshift | |
CN106296670A (en) | A kind of Edge detection of infrared image based on Retinex watershed Canny operator | |
CN108647593A (en) | Unmanned plane road surface breakage classification and Detection method based on image procossing and SVM | |
CN110751619A (en) | A kind of insulator defect detection method | |
CN106557740A (en) | The recognition methods of oil depot target in a kind of remote sensing images | |
CN106204596B (en) | A method for cloud detection in panchromatic remote sensing images based on Gaussian fitting function and fuzzy mixture estimation | |
CN114926635B (en) | Target segmentation method in multi-focus image combined with deep learning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20210721 Address after: 517000 floors 1-4, plant incubator (Shenhe Jindi Chuang Valley), building e2-1, east of Xingye Avenue and north of Gaoxin fifth road, Heyuan high tech Development Zone, Guangdong Province Patentee after: Jingtong space technology (Heyuan) Co.,Ltd. Address before: 430072 Hubei Province, Wuhan city Wuchang District of Wuhan University Luojiashan Patentee before: WUHAN University |
|
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240319 Address after: Room 501, Building 17, Plot 2, Phase II, the Pearl River River Huacheng, No. 99, Fuyuan West Road, Liuyanghe Street, Kaifu District, Changsha, Hunan 410000 Patentee after: Hunan Hejing Cultural Media Co.,Ltd. Country or region after: China Address before: 517000 floors 1-4, plant incubator (Shenhe Jindi Chuang Valley), building e2-1, east of Xingye Avenue and north of Gaoxin fifth road, Heyuan high tech Development Zone, Guangdong Province Patentee before: Jingtong space technology (Heyuan) Co.,Ltd. Country or region before: China |
|
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20241014 Address after: No. 9, Building 2, Yonghe Garden, Yongfu Road, Yuancheng District, Heyuan City, Guangdong Province, 517000 Patentee after: Liu Bihua Country or region after: China Patentee after: Guo Guangming Address before: Room 501, Building 17, Plot 2, Phase II, the Pearl River River Huacheng, No. 99, Fuyuan West Road, Liuyanghe Street, Kaifu District, Changsha, Hunan 410000 Patentee before: Hunan Hejing Cultural Media Co.,Ltd. Country or region before: China |