CN110276746B - Robust remote sensing image change detection method - Google Patents
- Publication number: CN110276746B (application CN201910449743.XA)
- Authority: CN (China)
- Prior art keywords: image, matrix, feature, value, class
- Legal status: Active (the status is an assumption and is not a legal conclusion)
Classifications
- G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06F18/211: Selection of the most significant subset of features
- G06F18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
- G06F18/2321: Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06T7/0002: Inspection of images, e.g. flaw detection
- G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]
- G06T2207/10032: Satellite or aerial image; Remote sensing
- G06V10/58: Extraction of image or video features relating to hyperspectral data
Abstract
Description
Technical Field
The invention belongs to the technical field of image change detection, and in particular relates to a robust remote sensing image change detection method.
Background Art
Remote sensing means perception at a distance. The term first appeared in a United States military research programme in 1960 and was formally adopted in 1962 at the first Symposium on Remote Sensing of Environment, organised by the University of Michigan and other institutions. Remote sensing technology is now applied in a growing number of fields, such as dynamic monitoring of forests and vegetation, post-disaster analysis and assessment of natural disasters, land-use change analysis, farmland monitoring, real-time monitoring of urban change, analysis of crop growth, and dynamic surveillance of strategic military targets such as airports and roads, greatly promoting economic and social development.
Remote sensing image change detection belongs to the field of remote sensing image processing; it analyses and processes remote sensing images of the same location acquired at different times to obtain change information. Scholars at home and abroad have studied the problem extensively and proposed a wide variety of change detection methods. Among them, feature-based methods detect changes using the spectral features, texture features, and spatial structure of the images. Because they exploit the rich feature information of the image, feature-based methods offer good robustness to noise and to illumination and radiation interference.
Remote sensing images are affected by illumination, atmospheric conditions, and other factors, and the extracted image features inherit these effects. Moreover, under noise interference, traditional clustering algorithms are disturbed by sparse data and classify inaccurately. Selecting representative features and classifying them accurately is therefore the key to remote sensing image change detection.
Summary of the Invention
Purpose of the invention: In view of the above problems, the present invention proposes a robust remote sensing image change detection method. It is a feature-based method: image features are selected and screened with the Relief algorithm, the dimension is then reduced with the PCA algorithm, and finally the NR-Kmeans algorithm removes the influence of noise-disturbed points on the detection result before the result is output. By excluding noise-disturbed points, the detection is more accurate.
Technical solution: To achieve the purpose of the present invention, the adopted technical solution is a robust remote sensing image change detection method comprising the following steps:
(1) Acquire remote sensing images of the target area at times T1 and T2; the images meet the resolution requirement and contain clear texture feature information;
(2) Preprocess the image data with the ENVI software to remove acquisition errors. Use the eCognition software platform to apply multi-scale segmentation to the preprocessed T1 and T2 images, dividing each image into N image blocks, i.e. N objects per image. Extract the spectral features and texture features of the T1 and T2 images, construct feature vectors, and obtain the change vector set Md = [m1, m2, ..., mN], where mi is the change vector of the i-th object. Arranging the elements of Md according to the layout of the N objects in the T1 (or T2) image yields a difference map;
(3) Using the Relief feature selection algorithm, obtain the weight of every feature of each object in the feature subspace Ml of the change vector set Md; select the e change features with the largest weights, where e is smaller than the total number of features per object; then take these e features for each of the N objects and, combining them with the change vector set Md, form the matrix X;
(4) Apply the PCA algorithm to the matrix X for feature dimension reduction, obtaining the transformed change feature matrix Y = [y1, y2, ..., yN];
(5) Classify the elements yi of the change feature matrix Y with the NR-Kmeans algorithm, removing from Y the sparse data disturbed by noise to obtain the dense point set Y′;
(6) Divide the dense point set Y′ into an unchanged class and a changed class. Set the grey value of the objects in the difference map corresponding to the unchanged class to 255 and of those corresponding to the changed class to 0, and output the change detection result map, in which the changed regions of the image are marked.
Further, in step (2), the image data preprocessing with ENVI, the multi-scale segmentation with eCognition into N objects per image, and the construction of the change vector set from the spectral and texture features of the T1 and T2 images proceed as follows:
(2.1) Use the ENVI software to apply geometric correction, image registration, and atmospheric correction to the T1 and T2 images, completing the image data preprocessing;
(2.2) Use the eCognition software platform to apply multi-scale segmentation to the preprocessed T1 and T2 images, dividing each image into N image blocks, i.e. N objects per image;
(2.3) Compute the Mean-Std features of the preprocessed images as their spectral features, and extract their texture features with the grey-level co-occurrence matrix;
(2.4) From the spectral and texture features extracted from the two preprocessed images, construct feature vectors and obtain the change vector set Md.
Further, the Mean-Std features of step (2.3), used as the spectral features of the image, are obtained as follows:
For each object of the image, the mean Mean and the standard deviation Std of the spectral features are computed as
Mean = (1/A)·Σ ci, summed over i = 1, ..., A
Std = sqrt( (1/A)·Σ (ci - Mean)², summed over i = 1, ..., A )
where Mean is the mean grey level of the pixels in the object, Std is the standard deviation of those grey levels, A is the number of pixels in the object, and ci is the grey level of the i-th pixel of the object.
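Below is a minimal numpy sketch of this computation, assuming each object is identified by an integer label image produced by the multi-scale segmentation; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def mean_std_features(gray, labels, obj_id):
    """Mean and population standard deviation of the grey levels in one object.

    gray   : 2-D array of pixel grey levels (one band)
    labels : 2-D integer array of the same shape, segment id per pixel
    obj_id : id of one object produced by the multi-scale segmentation
    """
    c = gray[labels == obj_id].astype(np.float64)    # the A pixels of the object
    mean = c.sum() / c.size                          # Mean = (1/A) * sum(c_i)
    std = np.sqrt(((c - mean) ** 2).sum() / c.size)  # Std = sqrt((1/A) * sum((c_i - Mean)^2))
    return mean, std
```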
Further, the extraction of the texture features of the image with the grey-level co-occurrence matrix in step (2.3) proceeds as follows:
(2.3.1) Pair any pixel (τ1, τ2) of the image with another pixel (τ1+Δ1, τ2+Δ2), where Δ1 and Δ2 are integers. Let the grey value of pixel (τ1, τ2) be f1 and that of (τ1+Δ1, τ2+Δ2) be f2, so the grey value of the pair is (f1, f2). If the maximum grey level of the image is L, there are L*L possible combinations of f1 and f2;
(2.3.2) Count how many times each distinct value (f1, f2) occurs in the image and arrange the counts into a matrix; for example, the element at matrix position (L, L) is the number of occurrences of pairs whose two grey values both equal L;
(2.3.3) Normalise the count of each distinct (f1, f2) by the total number of occurrences to obtain the probability of occurrence g(f1, f2); the normalised square matrix is called the grey-level co-occurrence matrix;
(2.3.4) The grey-level co-occurrence matrix provides 14 statistics usable as texture features; the present invention selects 6 of them: contrast, entropy, energy, homogeneity, dissimilarity, and correlation. Contrast reflects the sharpness of the image and the depth of its texture grooves; entropy expresses the non-uniformity or complexity of the texture in the image; energy reflects the uniformity of the grey-level distribution and the coarseness of the texture; homogeneity reflects how homogeneous the texture is, measuring the amount of local variation; correlation reflects the consistency of the texture. Since these 6 statistics characterise the image from different aspects, they are chosen as the texture features;
(2.3.5) Let g(i, j) be the matrix element at position (i, j) of the grey-level co-occurrence matrix and L the maximum grey level of the image. The texture features are then expressed as
Con = Σi Σj (i - j)² g(i, j)
Ent = -Σi Σj g(i, j) ln g(i, j)
Ene = Σi Σj g(i, j)²
Hom = Σi Σj g(i, j) / (1 + (i - j)²)
Dis = Σi Σj |i - j| g(i, j)
Cor = Σi Σj (i - μi)(j - μj) g(i, j) / (σi σj)
where the sums run over i, j = 1, ..., L, μi and μj are the means and σi and σj the standard deviations of the row and column marginals of g, and Con denotes contrast, Ent entropy, Ene energy, Hom homogeneity, Dis dissimilarity, and Cor correlation;
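These statistics can be computed with plain numpy. The sketch below assumes the band has already been quantised to integer grey values 0..levels-1 and uses a non-negative displacement (Δ1, Δ2) = (dx, dy); it implements the standard GLCM definitions given above, with illustrative names:

```python
import numpy as np

def glcm_features(q, levels=16, dx=1, dy=0):
    """Six GLCM statistics of a quantised image q (integer values in 0..levels-1)
    for a non-negative pixel displacement (dx, dy)."""
    q = np.asarray(q, dtype=np.intp)
    f1 = q[:q.shape[0] - dy, :q.shape[1] - dx]   # grey value of the first pixel of each pair
    f2 = q[dy:, dx:]                             # grey value of its displaced partner
    g = np.zeros((levels, levels))
    np.add.at(g, (f1.ravel(), f2.ravel()), 1)    # count occurrences of each (f1, f2) pair
    g /= g.sum()                                 # normalise counts to probabilities g(f1, f2)
    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * g).sum(), (j * g).sum()    # marginal means
    s_i = np.sqrt((((i - mu_i) ** 2) * g).sum()) # marginal standard deviations
    s_j = np.sqrt((((j - mu_j) ** 2) * g).sum())
    return {
        "Con": ((i - j) ** 2 * g).sum(),               # contrast
        "Ent": -(g[g > 0] * np.log(g[g > 0])).sum(),   # entropy
        "Ene": (g ** 2).sum(),                         # energy
        "Hom": (g / (1.0 + (i - j) ** 2)).sum(),       # homogeneity
        "Dis": (np.abs(i - j) * g).sum(),              # dissimilarity
        "Cor": ((i - mu_i) * (j - mu_j) * g).sum() / (s_i * s_j),  # correlation
    }
```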
Further, the change vector set Md of step (2.4), obtained from the spectral and texture features of the two preprocessed images, is constructed as follows:
(2.4.1) From the spectral and texture features of the T1 and T2 images obtained in step (2.3), construct a feature vector for each image and compute the difference of the corresponding feature vectors of the two images. The difference Md(i, j) of the j-th feature of the i-th object between the two images is
Md(i, j) = Md,1(i, j) - Md,2(i, j)
where Md,1(i, j) is the j-th feature of the i-th object of the T1 image and Md,2(i, j) is the j-th feature of the i-th object of the T2 image; i ≤ N, with N the total number of objects in the image; j ≤ Sfeature, with Sfeature the total number of features per object; Sfeature = Sband × 8, where Sband is the total number of bands of the object (each band contributes the 2 spectral features Mean and Std plus the 6 texture features);
From the differences Md(i, j), the change vector mi corresponding to the i-th object is
mi = (Md(i, 1), Md(i, 2), ..., Md(i, Sfeature))
(2.4.2) Repeat step (2.4.1) for every pair of corresponding objects of the two images, obtaining the change vector of each object; together they form the change vector set
Md = [m1, m2, ..., mN]
Arranging the elements of Md according to the layout of the N objects in the T1 and T2 images yields a difference map.
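As a sketch, if the per-object features of the two dates are stacked into N x Sfeature arrays F1 and F2 (illustrative names, assuming the rows are ordered identically for both dates), the change vector set and one simple magnitude rendering of the difference map are:

```python
import numpy as np

def change_vectors(F1, F2):
    """Change vector set Md: row i is mi = F1[i] - F2[i].

    F1, F2 : (N, S_feature) arrays, row i = feature vector of object i
             at times T1 and T2 respectively.
    """
    return F1 - F2                       # Md(i, j) = Md,1(i, j) - Md,2(i, j)

def difference_map(Md, labels):
    """One way to visualise the difference map: the magnitude of each object's
    change vector painted onto the segment layout (labels holds ids 0..N-1)."""
    return np.linalg.norm(Md, axis=1)[labels]
```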
Further, in step (3), the Relief feature selection algorithm yields the weight of every feature of each object in the feature subspace Ml of the change vector set Md; the e change features with the largest weights are selected (e smaller than the total number of features per object), and these e features are taken for each of the N objects and combined with the change vector set Md to form the matrix X, as follows:
(3.1) The change vector set Md is also called the feature space. Extract a subspace Ml = [m1, m2, m3, ..., mn] of the feature space Md = [m1, m2, m3, ..., mN] as the training set, where n < N. By manual interpretation, divide the feature subspace Ml into two sets Ml1 and Ml2, where Ml1 contains l1 elements, Ml2 contains l2 elements, l1 + l2 = n, Ml1 ⊂ Ml, and Ml2 ⊂ Ml;
(3.2) Compute the Euclidean distance between the change vectors of every pair of elements of Ml and assemble them into the n x n distance matrix d, whose entry d(i, j) = ||mi - mj|| is the Euclidean distance between the change vectors of elements i and j of Ml;
(3.3) In the feature subspace Ml, for the change vector mi of object i, i = 1, 2, 3, ..., n: if mi ∈ Ml1, select the s vectors of Ml1 with the smallest Euclidean distance to mi and the s vectors of Ml2 with the largest Euclidean distance to mi; if mi ∈ Ml2, select the s vectors of Ml2 with the smallest Euclidean distance to mi and the s vectors of Ml1 with the largest Euclidean distance to mi. That is, form for mi the neighbour matrix Qi = [q1, q2, ..., qs, ..., q2s], where [q1, q2, ..., qs] are the s vectors with the smallest Euclidean distance, [qs+1, qs+2, ..., q2s] are the s vectors with the largest Euclidean distance, and s = min(l1, l2);
(3.4) From the guessed-wrong (near-miss) and guessed-right (near-hit) neighbour statistics, compute the weight of each feature of object i as
W(fj) = diffrj - diffwj
where W(fj) is the weight of feature fj, diffrj is the guessed-wrong neighbour statistic of fj, and diffwj is the guessed-right neighbour statistic of fj, computed as
diffwj = Σb |mi(j) - Qi(b, j)|, summed over b = 1, 2, ..., s
diffrj = Σa |mi(j) - Qi(a, j)|, summed over a = s+1, s+2, ..., 2s
where s = min(l1, l2), i denotes object i, j the j-th feature, Qi(b, j) for b = 1, ..., s refers to the same-class neighbours [q1, q2, ..., qs], and Qi(a, j) for a = s+1, ..., 2s refers to the other-class vectors [qs+1, qs+2, ..., q2s];
(3.5) From the feature weights of each object in the feature subspace Ml computed in step (3.4), select the e change features with the largest weights, take these e features for each of the N objects, and combine them with the original change vector set Md to obtain the matrix X:
The matrix X has rows Xi = [xi1 xi2 ... xie], i = 1, 2, ..., N, where the matrix element xij is the j-th of the e selected change features of object i, and e is smaller than the total number of features per object.
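A sketch of the weight computation for a two-class training set, following the variant described above (s nearest neighbours of the own class, s farthest vectors of the other class); the absolute-difference form of diffr and diffw is an assumption in the spirit of standard Relief, and the names are illustrative:

```python
import numpy as np

def relief_weights(M, cls, s):
    """Relief-style feature weights.

    M   : (n, S_feature) array, the training subspace Ml of change vectors
    cls : length-n array of 0/1 labels from manual interpretation
    s   : number of neighbours per side, s = min(l1, l2)
    """
    n, S = M.shape
    d = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=2)   # n x n distance matrix
    W = np.zeros(S)
    for i in range(n):
        same = np.where(cls == cls[i])[0]
        same = same[same != i]                        # own class, excluding mi itself
        other = np.where(cls != cls[i])[0]
        hits = same[np.argsort(d[i, same])[:s]]       # s smallest distances, own class
        misses = other[np.argsort(d[i, other])[-s:]]  # s largest distances, other class
        diffw = np.abs(M[i] - M[hits]).sum(axis=0)    # guessed-right statistic (assumed form)
        diffr = np.abs(M[i] - M[misses]).sum(axis=0)  # guessed-wrong statistic (assumed form)
        W += diffr - diffw                            # W(fj) = diffrj - diffwj, accumulated
    return W
```

The e columns with the largest weights can then be kept, e.g. `X = Md_full[:, np.argsort(W)[-e:]]` with Md_full the N x Sfeature change vector matrix.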
Further, in step (4), the PCA algorithm reduces the feature dimension of the matrix X and yields the transformed change feature matrix Y, as follows:
(4.1) From the matrix X obtained in step (3), compute the mean of each row, μi = E(Xi) = (1/e)·Σj xij, for i = 1, 2, ..., N and j = 1, 2, ..., e, where E(Xi) denotes the mathematical expectation;
(4.2) From the means μi, compute the covariance matrix C of the matrix X, with entries C(i, r) = (1/e)·Σj (xij - μi)(xrj - μr), where i = 1, 2, ..., N, r = 1, 2, ..., N, and j = 1, 2, ..., e. From this formula, the covariance matrix C is real and symmetric;
(4.3) Compute the eigenvalues λ1, λ2, ..., λN of the covariance matrix C and the corresponding eigenvectors v1, v2, ..., vN, i.e. C vi = λi vi, i = 1, 2, ..., N;
(4.4) Sort the eigenvalues λ1, λ2, ..., λN in descending order to obtain λ′1, λ′2, ..., λ′N with corresponding eigenvectors v′1, v′2, ..., v′N, and collect the sorted eigenvectors into the projection matrix P;
(4.5) Compute the projection of the matrix X onto the new eigenvectors v′1, v′2, ..., v′N, obtaining the dimension-reduced change feature matrix
Y = X*P = [y1, y2, ..., yN]
where y1, y2, ..., yN are the changed features of the objects.
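A compact sketch of the dimension reduction. The patent diagonalises the covariance of X and projects onto the eigenvectors sorted by decreasing eigenvalue; the version below is the standard formulation with objects as rows and the e selected features as columns, and the retained dimension k is a free choice (an assumption, not fixed by the patent):

```python
import numpy as np

def pca_reduce(X, k=1):
    """Project the N x e matrix X onto its k leading principal axes."""
    Xc = X - X.mean(axis=0)              # centre each feature column
    C = Xc.T @ Xc / X.shape[0]           # e x e covariance matrix (real symmetric)
    lam, V = np.linalg.eigh(C)           # eigh returns eigenvalues in ascending order
    P = V[:, np.argsort(lam)[::-1][:k]]  # eigenvectors sorted by descending eigenvalue
    return Xc @ P                        # Y = X * P, one row of changed features per object
```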
Further, in step (5), the NR-Kmeans algorithm classifies the elements yi of the change feature matrix Y and removes from Y the sparse data disturbed by noise, yielding the dense point set Y′, as follows:
(5.1) In the matrix Y = [y1, y2, ..., yN], for each element point yα ∈ Y and a given neighbourhood radius δ, find the element points yf that satisfy d(yα, yf) ≤ δ. Qb(yα) denotes the resulting set of nearest-neighbour points of yα, including yα itself; here d(yα, yf) is the distance between the element points yα and yf, b is the number of nearest neighbours of yα within the neighbourhood, and f = 1, 2, ..., b;
(5.2) Compute the density function value of the element point yα: the density function value of yα is the sum of the influences exerted on it by all nearest-neighbour element points within the neighbourhood radius δ, each contribution being a function of the distance d(yα, yf) between the element points yα and yf, f = 1, 2, ..., b;
(5.3) Repeat steps (5.1)-(5.2) until the density function values of all element points of the matrix Y have been computed;
(5.4) When the density function value of yβ is greater than or equal to the average density value of all element points of the matrix Y, treat the element point yβ as usable data and put it into the dense point set Y′; that is, yβ satisfies
DF(yβ) ≥ (1/N)·Σα DF(yα)
where DF(yβ) is the density function value of yβ, DF(yα) is the density function value of the data point yα, and the sum runs over all N element points of Y;
(5.5) Step (5.4) yields all the usable data of the matrix Y; these constitute the dense point set Y′, the set of element points left after removing the sparse data disturbed by noise.
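A sketch of the density screening. The patent does not spell out the influence function here, so the Gaussian kernel below is an assumption; everything else follows steps (5.1)-(5.5):

```python
import numpy as np

def density_filter(Y, delta):
    """Keep the points whose density function value reaches the average over Y.

    Y     : (N, k) array of changed features, one row per object
    delta : neighbourhood radius
    Returns the dense point set Y', the density values of the kept points,
    and the boolean mask of kept rows.
    """
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)  # pairwise distances
    infl = np.exp(-(d ** 2) / (2 * delta ** 2))  # assumed Gaussian influence function
    infl[d > delta] = 0.0                        # only neighbours within radius delta count
    DF = infl.sum(axis=1)                        # density function value DF(y_alpha)
    keep = DF >= DF.mean()                       # DF(y_beta) >= average density over Y
    return Y[keep], DF[keep], keep
```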
Further, in step (6), the dense point set Y′ is divided into an unchanged class and a changed class, the grey value of the objects in the difference map corresponding to the unchanged class is set to 255 and that of the objects corresponding to the changed class to 0, and the change detection result map is output according to the grey values of the objects, as follows:
(6.1) In the dense point set Y′ = [y′1, y′2, ..., y′η], where η is the number of element points of Y′, η ≤ N, and y′η is the feature of an object after the noise-disturbed data have been removed: select the element point with the largest density value as the first initial cluster centre C1; compute the differences between the feature values of the other element points of Y′ and the feature value of C1, and select the element point whose feature value differs most from that of C1, i.e. the point with the largest absolute difference, as the second initial cluster centre C2. The feature values of C1 and C2 are c1 and c2, and the cluster centres C1 and C2 correspond to the classes D1 and D2;
(6.2) Compute the differences between each element point y′θ of the dense point set Y′ and the feature values of the two cluster centres:
dθ1 = |y′θ - c1|
dθ2 = |y′θ - c2|
where dθ1 is the absolute value of the feature difference between the element point y′θ and the cluster centre C1, dθ2 is the absolute value of the feature difference between y′θ and C2, and θ = 1, 2, ..., η. Assign each element point of Y′ to the class of the cluster centre with the smaller absolute difference;
(6.3) Compute the averages c′1 and c′2 of the feature values of the element points within the classes D1 and D2:
c′1 = (1/|D1|)·Σ y′, summed over y′ ∈ D1
c′2 = (1/|D2|)·Σ y′, summed over y′ ∈ D2
where |D1| and |D2| are the numbers of element points in the classes D1 and D2, and y′ runs over the feature values of the element points within each class. If c′1 ≠ c1, update the feature value c1 of the cluster centre C1 of class D1 to c′1; if c′2 ≠ c2, update the feature value c2 of the cluster centre C2 of class D2 to c′2;
(6.4) Repeat steps (6.2)-(6.3) until the feature values of both cluster centres no longer change;
(6.5) Of the two classes D1 and D2, the one with the smaller average is the unchanged class and the one with the larger average is the changed class. Set the grey value of the objects in the difference map corresponding to the unchanged class to 255 and of those corresponding to the changed class to 0;
(6.6) Output the detection result map according to the grey values of the objects; the changed regions of the image are marked in the map.
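A sketch of the two-class clustering of steps (6.1)-(6.4), assuming the changed features have been reduced to one scalar per object (k = 1 above) and that the density values from the screening step are available; names are illustrative:

```python
import numpy as np

def two_class_kmeans(yp, DF, max_iter=100):
    """Two-class k-means on the dense point set with the patent's initialisation.

    yp : length-eta array of scalar changed features (the dense point set Y')
    DF : density function values of the same points
    Returns (labels, (c1, c2)); labels == 0 means the class of centre c1.
    """
    c1 = yp[np.argmax(DF)]                 # first centre: the densest point
    c2 = yp[np.argmax(np.abs(yp - c1))]    # second centre: farthest feature value from c1
    for _ in range(max_iter):              # 1-D k-means converges quickly
        lab = (np.abs(yp - c2) < np.abs(yp - c1)).astype(int)  # nearer centre wins
        n1, n2 = yp[lab == 0].mean(), yp[lab == 1].mean()      # class averages c'1, c'2
        if n1 == c1 and n2 == c2:          # centres unchanged: stop iterating
            break
        c1, c2 = n1, n2
    return lab, (c1, c2)
```

The class with the smaller centre is the unchanged class (grey value 255 in the result map), the other the changed class (grey value 0).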
Beneficial effects: Compared with the prior art, the technical solution of the present invention has the following beneficial technical effects:
(1) To reduce the interference of noise on the data, a density distribution function is used to exclude low-density points, removing noise-disturbed points.
(2) Combined with the farthest-distance principle, the initial cluster centres are selected within the high-density data set, avoiding the drawback of the original Kmeans algorithm of choosing initial cluster centres at random.
Brief Description of the Drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the flow chart of the NR-Kmeans algorithm.
Detailed Description of the Embodiments
The technical solution of the present invention is further described below with reference to the accompanying drawings and an embodiment.
The robust remote sensing image change detection method of the present invention, as shown in Fig. 1, comprises the following steps:
(1) Acquire remote sensing images of the target area at times T1 and T2; the images meet the resolution requirement and contain clear texture feature information. In this embodiment, images of a district of Hefei are selected, covering the extent of farmland crops;
(2)采用ENVI软件实现图像数据预处理,消除拍摄误差;采用eCogntion软件平台对T1和T2时刻的预处理后的图像进行多尺度分割,将两幅图像分别划分成N个图像块,即分别得到N个对象;分别获取T1和T2时刻的图像的光谱特征和纹理特征,构建特征向量,得到变化矢量集合Md,Md=[m1,m2,...mN],其中,mi表示第i个对象所对应的变化矢量;将Md中元素按照T1或T2时刻图像中N个对象的排列方式排列,构成一幅差值图;(2) Using ENVI software to realize image data preprocessing to eliminate shooting errors; using the eCogntion software platform to perform multi-scale segmentation on the preprocessed images at T 1 and T 2 , and divide the two images into N image blocks respectively, That is, N objects are obtained respectively; the spectral features and texture features of the images at times T 1 and T 2 are obtained respectively, and feature vectors are constructed to obtain a change vector set M d , where M d =[m 1 ,m 2 ,...m N ], wherein, m i represents the change vector corresponding to the ith object; the elements in M d are arranged according to the arrangement of N objects in the image at time T 1 or T 2 to form a difference map;
(3)根据Relief特征选择算法,获得变化矢量集合Md的特征子空间Ml中每个对象的特征的权重,选取e个权重最大的变化特征,e取值小于对象包含的特征总数,进而分别在N个对象中选取所述e个变化特征,结合变化矢量集合Md,得到矩阵X;(3) According to the Relief feature selection algorithm, obtain the weight of the feature of each object in the feature subspace M l of the change vector set M d , select e change features with the largest weight, and the value of e is less than the total number of features contained in the object, and then The e change features are selected from the N objects respectively, and the matrix X is obtained in combination with the change vector set M d ;
(4)利用PCA算法对矩阵X进行特征降维,得到变换之后的变化特征矩阵Y,Y=[y1,y2,...,yN];(4) Use the PCA algorithm to reduce the feature dimension of the matrix X, and obtain the changed feature matrix Y after the transformation, Y=[y 1 , y 2 ,...,y N ];
(5)使用NR-Kmeans算法对变化特征矩阵Y的元素yi进行分类,从变化特征矩阵Y中剔除被噪声干扰的稀疏数据,得到密集点集合Y';(5) Use the NR-Kmeans algorithm to classify the elements yi of the variable feature matrix Y, remove the sparse data disturbed by noise from the variable feature matrix Y, and obtain a dense point set Y';
(6)将密集点集合Y'分为未变化的类和变化的类,未变化的类元素对应的差值图中的对象的灰度值转化为255,变化的类中元素对应的差值图中的对象的灰度值转化为0,根据对象的灰度值输出变化检测结果图,图中标出了图像变化的区域。(6) Divide the dense point set Y' into the unchanged class and the changed class. The gray value of the object in the difference map corresponding to the unchanged class element is converted to 255, and the difference value corresponding to the element in the changed class The gray value of the object in the figure is converted to 0, and the change detection result map is output according to the gray value of the object, and the area of the image change is marked in the figure.
步骤(2)中,采用ENVI软件实现图像数据预处理,消除拍摄误差;采用eCogntion软件平台对T1和T2时刻的预处理后的图像进行多尺度分割,将两幅图像分别划分成N个图像块,即分别得到N个对象;分别获取T1和T2时刻的图像的光谱特征和纹理特征,构建特征向量,得到变化矢量集合,步骤如下:In step (2), ENVI software is used to realize image data preprocessing to eliminate shooting errors; eCogntion software platform is used to perform multi - scale segmentation on the preprocessed images at time T1 and T2, and the two images are divided into N pieces respectively. Image blocks, that is, N objects are obtained respectively; the spectral features and texture features of the images at times T 1 and T 2 are obtained respectively, and feature vectors are constructed to obtain a set of change vectors. The steps are as follows:
(2.1)采用ENVI软件分别对T1和T2时刻的图像进行几何校正、图像配准、大气校正操作,实现图像数据预处理;两幅图像大小均为512*512;(2.1) Using ENVI software to perform geometric correction, image registration, and atmospheric correction operations on the images at T 1 and T 2 , respectively, to realize image data preprocessing; the size of both images is 512*512;
(2.2)采用eCogntion软件平台对T1和T2时刻的预处理后的图像进行多尺度分割,将两幅图像分别划分成N个图像块,即分别得到N个对象;(2.2) Use the eCogntion software platform to perform multi-scale segmentation on the preprocessed images at T 1 and T 2 , and divide the two images into N image blocks respectively, that is, to obtain N objects respectively;
(2.3)求取预处理后的图像的Mean-Std特征,作为图像的光谱特征;采用灰度共生矩阵提取预处理后的图像的纹理特征;(2.3) Obtain the Mean-Std feature of the preprocessed image as the spectral feature of the image; use the gray level co-occurrence matrix to extract the texture feature of the preprocessed image;
(2.4)根据预处理后的两幅图像提取的光谱特征和纹理特征,构建特征向量,获取变化矢量集合Md。(2.4) According to the spectral features and texture features extracted from the two preprocessed images, a feature vector is constructed, and a change vector set M d is obtained.
步骤(2.3)所述求取图像的Mean-Std特征,作为图像的光谱特征;方法如下:The Mean-Std feature of the image is obtained as described in step (2.3) as the spectral feature of the image; the method is as follows:
对图像的每个对象,按照下列公式求取光谱特征中的均值Mean和标准差Std:For each object of the image, the mean and standard deviation Std in the spectral features are obtained according to the following formulas:
式中,Mean为对象中像素点灰度的均值,std为对象中像素点灰度的标准差,A表示对象中像素点数,ci代表对象中第i个像素点的灰度大小。In the formula, Mean is the mean of the grayscale of the pixels in the object, std is the standard deviation of the grayscale of the pixels in the object, A represents the number of pixels in the object, and c i represents the grayscale of the ith pixel in the object.
步骤(2.3)所述采用灰度共生矩阵提取图像的纹理特征;方法如下:In step (2.3), the texture feature of the image is extracted by using the grayscale co-occurrence matrix; the method is as follows:
(2.3.1)将图像中任意一像素点(τ1,τ2)及另一像素点(τ1+Δ1,τ2+Δ2)构成点对,其中Δ1,Δ2为整数;设像素点(τ1,τ2)的灰度值为f1,(τ1+Δ1,τ2+Δ2)的灰度值为f2,则该点对的灰度值为(f1,f2),设图像的最大灰度级为L,则f1与f2的组合共有L*L组;本实施例设L=16,则f1与f2的组合共有16*16组;(2.3.1) Any pixel point (τ 1 , τ 2 ) and another pixel point (τ 1 +Δ 1 , τ 2 +Δ 2 ) in the image form a point pair, where Δ 1 , Δ 2 are integers; Assuming that the gray value of the pixel point (τ 1 , τ 2 ) is f1, the gray value of (τ 1 +Δ 1 , τ 2 +Δ 2 ) is f2, then the gray value of the pair of points is (f1, f2 ), set the maximum gray level of the image to be L, then the combination of f1 and f2 has a total of L*L groups; in this embodiment, if L=16, then the combination of f1 and f2 has a total of 16*16 groups;
(2.3.2)统计图像中每一组不同的(f1,f2)值出现的次数,然后排列成一个矩阵,其中,位于矩阵位置(L,L)上的矩阵元素值就是两个灰度值均为L的点对出现的次数;(2.3.2) Count the number of occurrences of each group of different (f1, f2) values in the image, and then arrange them into a matrix, where the matrix element values located at the matrix position (L, L) are the two grayscale values. are the number of occurrences of point pairs of L;
(2.3.3)根据(f1,f2)出现的总次数将每一组不同的(f1,f2)值出现的次数归一化为出现的概率g(f1,f2),归一化之后的的方阵称为灰度共生矩阵;(2.3.3) According to the total number of occurrences of (f1, f2), the number of occurrences of each group of different (f1, f2) values is normalized to the probability of occurrence g(f1, f2), and the normalized The square matrix is called the gray level co-occurrence matrix;
(2.3.4)灰度共生矩阵提供了14种统计量作为纹理特征,本发明选取6种统计量作为纹理特征,所述6种统计量为对比度、熵、能量、均等性、不相似性、相关性;对比度反映了图像的清晰度和纹理沟纹深浅的程度;熵表示了图像中纹理的非均匀程度或复杂程度;能量反映了图像灰度分布均匀程度和纹理粗细度;均等性反映图像纹理的同质性,度量图像纹理局部变化的多少;相关性反应了图像纹理的一致性;这6种统计量都从各个方面表示了图像的特性,故选择这6种统计量作为纹理特征;(2.3.4) The grayscale co-occurrence matrix provides 14 kinds of statistics as texture features, the present invention selects 6 kinds of statistics as texture features, and the 6 kinds of statistics are contrast, entropy, energy, equality, dissimilarity, Correlation; contrast reflects the clarity of the image and the depth of texture grooves; entropy expresses the degree of non-uniformity or complexity of the texture in the image; energy reflects the uniformity of the grayscale distribution of the image and the thickness of the texture; equality reflects the image The homogeneity of the texture measures the local variation of the image texture; the correlation reflects the consistency of the image texture; these 6 kinds of statistics represent the characteristics of the image from various aspects, so these 6 kinds of statistics are selected as texture features;
(2.3.5)设灰度共生矩阵的位置(i,j)上的矩阵元素值为g(i,j),L为图像的最大灰度级,则所述纹理特征表示如下:(2.3.5) Assuming that the value of the matrix element at the position (i, j) of the gray level co-occurrence matrix is g(i, j), and L is the maximum gray level of the image, the texture feature is expressed as follows:
式中,Con表示对比度,Ent表示熵,Ene表示能量,Hom表示均等性,Dis表示不相似性,Cor表示相关性; In the formula, Con represents contrast, Ent represents entropy, Ene represents energy, Hom represents equality, Dis represents dissimilarity, and Cor represents correlation;
步骤(2.4)所述根据预处理后的两幅图像提取的光谱特征和纹理特征,获取变化矢量集合Md;方法如下:Described in step (2.4), according to the spectral features and texture features extracted from the two preprocessed images, obtain the change vector set M d ; the method is as follows:
(2.4.1)根据步骤(2.3)得到的T1和T2时刻图像的光谱特征和纹理特征,分别构建特征向量,对两幅图像的相对应的特征向量计算差值;两幅图像相对应的第i个对象的第j个特征向量的差值Md(i,j),表示如下:(2.4.1) According to the spectral features and texture features of the images at time T 1 and T 2 obtained in step (2.3), construct feature vectors respectively, and calculate the difference between the corresponding feature vectors of the two images; the two images correspond to The difference M d (i, j) of the j-th eigenvector of the i-th object is expressed as follows:
Md(i,j)=Md,1(i,j)-Md,2(i,j)M d (i, j) = M d, 1 (i, j) - M d, 2 (i, j)
式中,Md,1(i,j)为T1时刻图像第i个对象的第j个特征向量,Md,2(i,j)为T2时刻图像第i个对象的第j个特征向量;i≤N,N为图像中对象的总数;j≤Sfeature,Sfeature为对象包含的特征总数;Sfeature=Sband×8,Sband为对象总波段数;本实施例中图像有4个波段,则Sfeature=32;In the formula, M d, 1 (i, j) is the j-th feature vector of the i-th object in the image at time T 1 , and M d, 2 (i, j) is the j-th object of the i-th object in the image at time T 2 feature vector; i≤N, N is the total number of objects in the image; j≤S feature , S feature is the total number of features included in the object; S feature =S band × 8, S band is the total number of bands of the object; in this embodiment, the image There are 4 bands, then S feature = 32;
根据差值Md(i,j),得到第i个对象所对应的变化矢量mi,表示如下:According to the difference value M d (i, j), the change vector m i corresponding to the ith object is obtained, which is expressed as follows:
mi=(Md(i,1),Md(i,2),...,Md(i,32))m i = (M d (i, 1), M d (i, 2), ..., M d (i, 32))
(2.4.2)重复步骤(2.4.1),计算两幅图像中相对应的每一个对象的特征向量的差值,得到每个对象所对应的变化矢量,构成变化矢量集合Md,表示如下:(2.4.2) Repeat step (2.4.1), calculate the difference between the feature vectors of each object corresponding to the two images, obtain the change vector corresponding to each object, and form the change vector set M d , which is expressed as follows :
Md=[m1,m2,...mN]M d =[m 1 , m 2 , . . . m N ]
将Md中元素按照T1和T2时刻图像中N个对象的排列方式排列,构成一幅差值图。Arrange the elements in M d according to the arrangement of N objects in the image at time T1 and T2 to form a difference map.
步骤(3)所述根据Relief特征选择算法,获得变化矢量集合Md的特征子空间Ml中每个对象的特征的权重,选取e个权重最大的变化特征,e取值小于对象包含的特征总数,进而分别在N个对象中选取所述e个变化特征,结合变化矢量集合Md,得到矩阵X;方法如下:According to the Relief feature selection algorithm described in step (3), the weight of the feature of each object in the feature subspace M l of the change vector set M d is obtained, and the e change features with the largest weight are selected, and the value of e is less than the feature contained in the object. the total number, and then select the e change features from the N objects respectively, and combine the change vector set M d to obtain the matrix X; the method is as follows:
(3.1)变化矢量集合Md也称为特征空间,提取特征空间Md=[m1,m2,m3,...mN]的子空间Ml=[m1,m2,m3,...mn]作为训练集,其中n<N;使用人工解译将特征子空间Ml分为Ml1与Ml2两个数集,其中Ml1包含l1个元素,Ml2包含l2个元素,l1+l2=n,Ml1∈Ml,Ml2∈Ml;(3.1) The change vector set M d is also called the feature space, extract the subspace M l =[m 1 ,m 2 ,m of the feature space M d =[m 1 ,m 2 ,m 3 ,... m N ] 3 , ... _ _ _ _ l2 elements, l1+l2=n, M l1 ∈ M l , M l2 ∈ M l ;
(3.2)计算Ml中所有元素两两之间变化矢量的欧式距离,组成距离矩阵,即(3.2) Calculate the Euclidean distance of the change vector between all elements in M l to form a distance matrix, that is
其中,d表示距离矩阵,表示Ml中元素两两之间变化矢量的欧式距离;where d represents the distance matrix, represents the Euclidean distance of the change vector between the elements in M l ;
(3.3)在特征子空间Ml中,对于对象i的变化矢量mi,i=1,2,3,...,n,若mi∈Ml1,选取Ml1内与其欧式距离最小的s个矢量,并选取Ml2内与其欧氏距离最大的s个矢量;若mi∈Ml2,选取Ml2内与其欧式距离最小的s个矢量,并选取Ml1内与其欧氏距离最大的s个矢量;即对mi求取近邻矩阵Qi=[q1,q2,...,qs,...,q2s],其中,[q1,q2,...,qs]为所述欧式距离最小的s个矢量,[qs+1,qs+2,...,q2s]为所述欧氏距离最大的s个矢量,s=min(l1,l2);(3.3) In the feature subspace M l , for the change vector m i of the object i, i=1, 2, 3, ..., n, if m i ∈ M l1 , select the one with the smallest Euclidean distance in M l1 s vectors, and select the s vectors with the largest Euclidean distance in M l2 ; if m i ∈ M l2 , select the s vectors with the smallest Euclidean distance in M l2 , and select the largest Euclidean distance in M l1 s vectors; that is, the nearest neighbor matrix Q i = [q 1 , q 2 ,..., q s ,..., q 2s ] for mi , where [q 1 , q 2 ,..., q s ] are the s vectors with the smallest Euclidean distance, [q s+1 , q s+2 , ..., q 2s ] are the s vectors with the largest Euclidean distance, s=min(l1, l2);
(3.4)根据猜错近邻相关统计量和猜中近邻相关统计量,计算对象i的每个特征对应的权重,即(3.4) Calculate the weight corresponding to each feature of object i according to the related statistics of the wrongly guessed neighbors and the related statistics of the correctly guessed neighbors, namely
式中,是特征fj对应的权重,diffrj是特征fj的猜错近邻相关统计量,diffwj是特征fj的猜中近邻相关统计量;diffrj和diffwj的计算表达式如下:In the formula, is the weight corresponding to the feature f j , diffr j is the related statistic of the wrong-guessed neighbor of the feature f j , diffw j is the related statistic of the correct-guessed neighbor of the feature f j ; the calculation expressions of diffr j and diffw j are as follows:
式中,s=min(l1,l2),i表示对象i,j表示第j个特征,当a=1,2,3,...,s时,Qi(a,j)表示[q1,q2,...,qs],当a=s+1,s+2,...,2s时,Qi(a,j)表示[qs+1,qs+2,...,q2s];当b=1,2,3,...,s时,Qi(b,j)表示[q1,q2,...,qs],当b=s+1,s+2,...,2s时,Qi(b,j)表示[qs+1,qs+2,...,q2s];In the formula, s=min(l1, l2), i represents the object i, j represents the jth feature, when a=1, 2, 3,..., s, Q i (a, j) represents [q 1 , q 2 , . ..., q 2s ]; when b = 1, 2, 3, ..., s, Q i (b, j) represents [q 1 , q 2 , ..., q s ], when b = When s+1, s+2, ..., 2s, Q i (b, j) represents [q s+1 , q s+2 , ..., q 2s ];
(3.5)根据步骤(3.4)计算得到的特征子空间Ml中每个对象的特征的权重,选取e个权重最大的变化特征,进而分别在N个对象中选取所述e个变化特征,结合原变化矢量集合Md,得到矩阵X,表示如下:(3.5) According to the weight of the feature of each object in the feature subspace M l calculated in step (3.4), select e change features with the largest weight, and then select the e change features from the N objects respectively, and combine From the original change vector set M d , the matrix X is obtained, which is expressed as follows:
式中,Xi=[xi1 xi2 ... xie],i=1,2,...,N;矩阵元素xij表示对象i的e个变化特征中的第j个特征,e取值小于对象包含的特征总数。In the formula, X i =[x i1 x i2 ... x ie ], i=1, 2, ..., N; the matrix element x ij represents the j-th feature in the e changing features of the object i, e The value is less than the total number of features contained in the object.
步骤(4)所述利用PCA算法对矩阵X进行特征降维,得到变换之后的变化特征矩阵Y;方法如下:The described step (4) utilizes PCA algorithm to carry out feature dimension reduction to matrix X, and obtains the change feature matrix Y after the transformation; The method is as follows:
(4.1)根据步骤(3)得到的矩阵X,计算矩阵每一行的均值μi,i=1,2,...,N:(4.1) According to the matrix X obtained in step (3), calculate the mean value μ i of each row of the matrix, i=1, 2,...,N:
式中,j=1,2,...,e,E(Xi)表示数学期望;In the formula, j=1, 2,...,e, E(X i ) represents mathematical expectation;
(4.2)根据均值μi,计算矩阵X的协方差矩阵C,即:(4.2) Calculate the covariance matrix C of the matrix X according to the mean μ i , namely:
式中,i=1,2,...,N,r=1,2,...,N,j=1,2,...,e;由上述公式可知,协方差矩阵C为实对称矩阵;In the formula, i=1,2,...,N,r=1,2,...,N,j=1,2,...,e; it can be seen from the above formula that the covariance matrix C is a real Symmetric matrix;
(4.3)计算协方差矩阵C的特征值λ1,λ2,...,λN,对应的特征向量为v1,v2,...,vN,即λivi=Cvi,i=1,2,…,N;(4.3) Calculate the eigenvalues λ 1 , λ 2 , ..., λ N of the covariance matrix C, and the corresponding eigenvectors are v 1 , v 2 , ..., v N , that is, λ i v i =Cv i , i=1, 2, ..., N;
(4.4)将特征值λ1,λ2,...,λN进行降序排序,得到λ′1,λ′2,...,λ′N,对应的特征向量为v′1,v′2,...,v′N,取排序后的特征值构成一个对角矩阵P;(4.4) Sort the eigenvalues λ 1 , λ 2 , ..., λ N in descending order to obtain λ' 1 , λ' 2 , ..., λ' N , and the corresponding eigenvectors are v' 1 , v' 2 ,...,v′ N , take the sorted eigenvalues to form a diagonal matrix P;
(4.5)计算矩阵X在新的特征向量v′1,v′2,...,v′N上的投影,得到降维后的变化特征矩阵Y,即:(4.5) Calculate the projection of the matrix X on the new eigenvectors v′ 1 , v′ 2 , ..., v′ N , and obtain the changed feature matrix Y after dimension reduction, namely:
Y=X*P=[y1,y2,...,yN]Y=X*P=[y 1 , y 2 , ..., y N ]
其中,y1,y2,...,yN表示对象的变化特征。Among them, y 1 , y 2 , ..., y N represent the changing characteristics of the object.
所述步骤(5)中使用NR-Kmeans算法对变化特征矩阵Y的元素yi进行分类,从变化特征矩阵Y中剔除被噪声干扰的稀疏数据,得到密集点集合Y′;步骤如下:In the step (5), the NR-Kmeans algorithm is used to classify the elements yi of the variable feature matrix Y, and the sparse data disturbed by noise is removed from the variable feature matrix Y to obtain a dense point set Y'; the steps are as follows:
(5.1)在矩阵Y=[y1,y2,...,yN]中,对于每一个元素点yα,yα∈Y,给定邻域半径δ,得到符合的元素点Qb(yα)表示包含元素点yα在内的yα的最近邻元素点集合;其中,表示元素点yα、的距离,b为yα在邻域范围内最近邻元素点个数,f=1,2,...,b;(5.1) In the matrix Y=[y 1 , y 2 , ..., y N ], for each element point y α , y α ∈ Y, given the neighborhood radius δ, get the element point of Q b (y α ) represents the nearest neighbor element point set of y α including element point y α ; where, represents the element point y α , distance, b is the number of nearest neighbor element points of y α in the neighborhood, f=1, 2, ..., b;
(5.2)计算元素点yα的密度函数值;元素点yx的密度函数值表示邻域半径δ范围内所有最近邻元素点对其影响的函数之和;yα的密度函数值计算公式如下:(5.2) Calculate the density function value of the element point y α; the density function value of the element point yx represents the sum of the functions of all the nearest neighbor element points in the range of the neighborhood radius δ; the calculation formula of the density function value of y α is as follows:
式中,为元素点yα、的距离,f=1,2,...,b;In the formula, is the element point y α , distance, f = 1, 2, ..., b;
(5.3)重复步骤(5.1)~(5.2),直到矩阵Y中所有元素点的密度函数值计算完成;(5.3) Repeat steps (5.1) to (5.2) until the calculation of the density function values of all element points in the matrix Y is completed;
(5.4)当yβ的密度函数值大于等于矩阵Y中所有元素点平均密度值时,则将元素点yβ视为可用数据放入密集点集合Y′中,即yβ符合:(5.4) When the density function value of y β is greater than or equal to the average density value of all element points in the matrix Y, the element point y β is regarded as available data and put into the dense point set Y′, that is, y β conforms to:
式中,DF(yβ)表示yβ的密度函数值,DF(yα)为数据点yα的密度函数值;In the formula, DF(y β ) represents the density function value of y β , and DF(y α ) is the density function value of the data point y α ;
(5.5)根据步骤(5.4)得到矩阵Y中所有的可用数据,所述可用数据构成密集点集合Y′,Y′为剔除了被噪声干扰的稀疏数据后的元素点集合。(5.5) Obtain all available data in the matrix Y according to step (5.4), the available data constitute a dense point set Y', and Y' is an element point set after excluding sparse data interfered by noise.
步骤(6)所述将密集点集合Y′分为未变化的类和变化的类,未变化的类元素对应的差值图中的对象的灰度值转化为255,变化的类中元素对应的差值图中的对象的灰度值转化为0,根据对象的灰度值输出变化检测结果图;方法如下:In step (6), the dense point set Y' is divided into unchanged classes and changed classes. The gray values of the objects in the difference map corresponding to the unchanged class elements are converted to 255, and the elements in the changed classes correspond to The gray value of the object in the difference map is converted to 0, and the change detection result map is output according to the gray value of the object; the method is as follows:
(6.1)从密集点集合Y′=[y′1,y′2,...,y′η],η为Y′中元素点个数,η≤N,y′η表示剔除了被噪声干扰的数据后的对象的特征;选取密度值最大的元素点,作为第一个初始聚类中心C1,计算Y′中其他元素点的特征值与聚类中心C1的特征值的差值,选取与C1的特征值相差最大的元素点,即差值绝对值最大的点作为第二个初始聚类中心C2;C1,C2相应的特征值为c1,c2,聚类中心C1和C2对应类D1和D2;(6.1) From the dense point set Y′=[y′ 1 , y′ 2 , ..., y′ η ], η is the number of element points in Y′, η≤N, y′ η means that the noise is eliminated The characteristics of the object behind the disturbed data; select the element point with the largest density value as the first initial cluster center C 1 , and calculate the difference between the eigenvalues of other element points in Y′ and the eigenvalues of the cluster center C 1 , select the element point with the largest difference from the eigenvalue of C 1 , that is, the point with the largest absolute value of the difference as the second initial cluster center C 2 ; Class centers C 1 and C 2 correspond to classes D 1 and D 2 ;
(6.2)计算密集点集合Y′中的元素点y′θ与两个聚类中心的特征值的差值,即(6.2) Calculate the difference between the element point y′ θ in the dense point set Y′ and the eigenvalues of the two cluster centers, namely
dθ1=|y′θ-c1|d θ1 = |y′ θ -c 1 |
dθ2=|y′θ-c2|d θ2 = |y′ θ -c 2 |
dθ1表示元素点y′θ与聚类中心C1的差值的绝对值,dθ2表示元素点y′θ与聚类中心C2的差值的绝对值,θ=1,2,...,η;将密集点集合Y′中的元素点分到差值绝对值较小的聚类中心所对应的类;d θ1 represents the absolute value of the difference between the element point y' θ and the cluster center C 1 , d θ2 represents the absolute value of the difference between the element point y' θ and the cluster center C 2 , θ=1, 2, .. ., η; divide the elements in the dense point set Y' into the class corresponding to the cluster center with the smaller absolute difference value;
(6.3)计算类D1,D2内相应的元素点特征值的平均值c′1,c′2,即(6.3) Calculate the average value c′ 1 , c′ 2 of the corresponding element point eigenvalues in the classes D 1 , D 2 , namely
式中,c′1,c′2分别为类D1,D2相应的元素点特征值的平均值;|D1|表示类D1中元素点的个数,|D2|表示类D2中元素点的个数,y′对应类内元素点特征值;如果c′1≠c1,将类D1的聚类中心C1的特征值c1更新为c′1;如果c′2≠c2,将类D2的聚类中心C2的特征值c2更新为c′2;In the formula, c′ 1 , c′ 2 are the average values of the corresponding element point eigenvalues of class D 1 and D 2 respectively; |D 1 | represents the number of element points in class D 1 , and |D 2 | represents class D The number of element points in 2 , y′ corresponds to the eigenvalues of the element points in the class; if c′ 1 ≠c 1 , update the eigenvalue c 1 of the cluster center C 1 of class D 1 to c′ 1 ; if c′ 1 ≠c 1 , 2 ≠c 2 , update the eigenvalue c 2 of the cluster center C 2 of class D 2 to c′ 2 ;
(6.4)重复步骤(6.2)~(6.3),直至两个聚类中心的特征值都不再发生改变;(6.4) Repeat steps (6.2) to (6.3) until the eigenvalues of the two cluster centers no longer change;
(6.5)将两个类D1,D2分为未变化的类和变化的类,其中,平均值较小的为未变化的类,平均值较大的为变化的类;未变化的类元素对应的差值图中的对象的灰度值转化为255,变化的类中元素对应的差值图中的对象的灰度值转化为0;(6.5) Divide the two classes D 1 , D 2 into the unchanged class and the changed class, wherein the class with a smaller average value is the unchanged class, and the class with a larger average value is the changed class; the unchanged class The gray value of the object in the difference map corresponding to the element is converted to 255, and the gray value of the object in the difference map corresponding to the element in the changed class is converted to 0;
(6.6) Output the detection result map according to the gray values of the objects; the changed regions of the image are marked in the map.
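The mapping of steps (6.5)–(6.6) to gray values might look like the following; `object_map`, a 2-D array giving each pixel the index of the object it belongs to, is a hypothetical helper not named in the patent.

```python
def render_change_map(labels, c1, c2, object_map):
    """Steps (6.5)-(6.6): white (255) for the unchanged class,
    black (0) for the changed class, painted per object."""
    unchanged = 0 if c1 < c2 else 1                  # smaller class mean = unchanged
    gray = np.where(labels == unchanged, 255, 0).astype(np.uint8)
    return gray[object_map]                          # per-pixel change detection map
```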
By improving the traditional k-means clustering algorithm in this way, the invention achieves higher detection precision and accuracy.
The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art can make several further improvements and refinements without departing from the principle of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910449743.XA CN110276746B (en) | 2019-05-28 | 2019-05-28 | Robust remote sensing image change detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110276746A CN110276746A (en) | 2019-09-24 |
CN110276746B true CN110276746B (en) | 2022-08-19 |
Family
ID=67960074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910449743.XA Active CN110276746B (en) | 2019-05-28 | 2019-05-28 | Robust remote sensing image change detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110276746B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111091054B (en) * | 2019-11-13 | 2020-11-10 | 广东国地规划科技股份有限公司 | Method, system and device for monitoring land type change and storage medium |
CN111931744B (en) * | 2020-10-09 | 2021-01-05 | 航天宏图信息技术股份有限公司 | Method and device for detecting change of remote sensing image |
CN112329565A (en) * | 2020-10-26 | 2021-02-05 | 兰州交通大学 | Road construction supervision method and system based on high-resolution remote sensing image |
CN114066876B (en) * | 2021-11-25 | 2022-07-08 | 北京建筑大学 | A construction waste change detection method based on classification results and CVA-SGD method |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102629380A (en) * | 2012-03-03 | 2012-08-08 | 西安电子科技大学 | Remote sensing image change detection method based on multi-group filtering and dimension reduction |
CN103456020A (en) * | 2013-09-08 | 2013-12-18 | 西安电子科技大学 | Remote sensing image change detection method based on treelet feature fusion |
CN107423771A (en) * | 2017-08-04 | 2017-12-01 | 河海大学 | A kind of two phase method for detecting change of remote sensing image |
CN109409389A (en) * | 2017-08-16 | 2019-03-01 | 香港理工大学深圳研究院 | A kind of object-oriented change detecting method merging multiple features |
Also Published As
Publication number | Publication date |
---|---|
CN110276746A (en) | 2019-09-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||