CN110276746B - Robust remote sensing image change detection method - Google Patents

Robust remote sensing image change detection method

Info

Publication number
CN110276746B
CN110276746B (application CN201910449743.XA)
Authority
CN
China
Prior art keywords
image
matrix
feature
value
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910449743.XA
Other languages
Chinese (zh)
Other versions
CN110276746A (en)
Inventor
王鑫
张香梁
段林
宁晨
马贞立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201910449743.XA priority Critical patent/CN110276746B/en
Publication of CN110276746A publication Critical patent/CN110276746A/en
Application granted granted Critical
Publication of CN110276746B publication Critical patent/CN110276746B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/58 Extraction of image or video features relating to hyperspectral data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robust remote sensing image change detection method comprising the following steps: acquiring images of two time phases of a target area; preprocessing the two remote sensing images to obtain image features; selecting and screening the image features with the Relief algorithm; reducing the feature dimension with the PCA algorithm to obtain the final preferred change feature vector; and classifying with the NR-Kmeans method to obtain the change detection result. The invention improves the traditional K-means clustering algorithm and achieves higher detection precision and accuracy.

Description

A Robust Remote Sensing Image Change Detection Method

Technical Field

The invention belongs to the technical field of image change detection, and in particular relates to a robust remote sensing image change detection method.

Background Art

Remote sensing is perception at a distance. The term first appeared in a military research program in the United States in 1960 and was formally adopted at the first academic symposium on environmental remote sensing, held by the University of Michigan and other institutions in 1962. Today remote sensing technology is applied in more and more fields, such as monitoring dynamic changes of forests or vegetation, post-disaster analysis and assessment of natural disasters, analysis of land-use change, monitoring of farmland, real-time monitoring of urban change, analysis of crop growth, and dynamic surveillance of strategic military targets such as airports and roads, greatly promoting economic and social development.

Remote sensing image change detection belongs to the field of remote sensing image processing; it analyzes remote sensing images of the same location acquired at different times to obtain change information. Scholars at home and abroad have studied the remote sensing image change detection problem extensively and proposed a variety of change detection methods. Among them, feature-based change detection uses the spectral features, texture features, and spatial structure of the image to detect changes. Because it exploits the rich feature information of the image, feature-based change detection offers good robustness to noise and to illumination and radiation interference.

Remote sensing images are affected by illumination, the atmospheric environment, and other factors, and the extracted image features are affected accordingly. Moreover, because of noise interference, traditional clustering algorithms are disturbed by sparse data and classify insufficiently accurately. Selecting representative features and classifying them accurately is therefore the key to remote sensing image change detection.

Summary of the Invention

Purpose of the invention: In view of the above problems, the present invention proposes a robust remote sensing image change detection method. The method is a feature-based remote sensing image change detection method: image features are selected and screened with the Relief algorithm, the PCA algorithm is then used to reduce the dimension, and finally the NR-Kmeans algorithm eliminates the influence of noise-disturbed points on the detection result before the result is output. The invention excludes noise interference points, so the detection is more accurate.

Technical solution: To achieve the purpose of the present invention, the technical solution adopted in the present invention is a robust remote sensing image change detection method comprising the following steps:

(1) Acquire remote sensing images of the target area at times T1 and T2; the images meet the resolution requirement and contain clear texture feature information;

(2) Use ENVI software to preprocess the image data and eliminate acquisition errors; use the eCognition software platform to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks, i.e. obtaining N objects per image; extract the spectral features and texture features of the T1 and T2 images, construct feature vectors, and obtain the change vector set Md = [m1, m2, ..., mN], where mi denotes the change vector corresponding to the i-th object; arrange the elements of Md according to the arrangement of the N objects in the T1 (or T2) image to form a difference map;

(3) Using the Relief feature selection algorithm, obtain the weight of each feature of each object in the feature subspace Ml of the change vector set Md, and select the e change features with the largest weights, where e is smaller than the total number of features per object; then select these e change features for each of the N objects and, combining them with the change vector set Md, obtain the matrix X;

(4) Use the PCA algorithm to reduce the feature dimension of the matrix X, obtaining the transformed change feature matrix Y = [y1, y2, ..., yN];

(5) Use the NR-Kmeans algorithm to classify the elements yi of the change feature matrix Y; remove the noise-disturbed sparse data from Y to obtain the dense point set Y′;

(6) Divide the dense point set Y′ into an unchanged class and a changed class; in the difference map, set the gray value of the objects corresponding to unchanged-class elements to 255 and the gray value of the objects corresponding to changed-class elements to 0; output the change detection result map according to the object gray values, with the changed regions of the image marked.

Further, in step (2), ENVI software is used to preprocess the image data and eliminate acquisition errors; the eCognition software platform is used to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks, i.e. obtaining N objects per image; the spectral features and texture features of the T1 and T2 images are extracted and feature vectors are constructed to obtain the change vector set. The steps are as follows:

(2.1) Use ENVI software to perform geometric correction, image registration, and atmospheric correction on the T1 and T2 images to complete the image data preprocessing;

(2.2) Use the eCognition software platform to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks, i.e. obtaining N objects per image;

(2.3) Compute the Mean-Std features of the preprocessed images as their spectral features; extract the texture features of the preprocessed images with the gray-level co-occurrence matrix;

(2.4) From the spectral and texture features extracted from the two preprocessed images, construct feature vectors and obtain the change vector set Md.

Further, in step (2.3) the Mean-Std features of the image are computed as the spectral features of the image, as follows:

For each object of the image, the mean Mean and standard deviation Std of the spectral features are computed by the following formulas:

Mean = (1/A) · Σ_{i=1..A} ci

Std = sqrt( (1/A) · Σ_{i=1..A} (ci − Mean)² )

where Mean is the mean gray level of the pixels in the object, Std is the standard deviation of the pixel gray levels in the object, A is the number of pixels in the object, and ci is the gray level of the i-th pixel in the object.
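As a concrete illustration, the following minimal Python sketch evaluates these two formulas; the function name and the assumption that the object's pixel gray levels have been gathered into a one-dimensional array are illustrative, not taken from the patent:

    import numpy as np

    def mean_std_features(pixels):
        """Mean-Std spectral features of one segmented object.

        pixels: 1-D array of the gray levels c_i of the A pixels of the
        object (one band); this layout is an illustrative assumption.
        """
        mean = pixels.mean()                          # Mean = (1/A) * sum(c_i)
        std = np.sqrt(((pixels - mean) ** 2).mean())  # population standard deviation
        return mean, std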

Further, in step (2.3) the gray-level co-occurrence matrix is used to extract the texture features of the image, as follows:

(2.3.1) Any pixel (τ1, τ2) in the image and another pixel (τ1+Δ1, τ2+Δ2) form a point pair, where Δ1 and Δ2 are integers. Let the gray value of pixel (τ1, τ2) be f1 and the gray value of (τ1+Δ1, τ2+Δ2) be f2; the gray value of the pair is then (f1, f2). If the maximum gray level of the image is L, there are L*L possible combinations of f1 and f2;

(2.3.2) Count how many times each distinct value (f1, f2) occurs in the image and arrange the counts into a matrix; for example, the matrix element at position (L, L) is the number of point pairs whose two gray values are both L;

(2.3.3) Normalize the count of each distinct (f1, f2) value by the total number of occurrences to obtain the occurrence probability g(f1, f2); the normalized square matrix is called the gray-level co-occurrence matrix;

(2.3.4) The gray-level co-occurrence matrix provides 14 statistics usable as texture features; the invention selects 6 of them: contrast, entropy, energy, homogeneity, dissimilarity, and correlation. Contrast reflects the sharpness of the image and the depth of its texture grooves; entropy expresses the non-uniformity or complexity of the texture; energy reflects the uniformity of the gray-level distribution and the coarseness of the texture; homogeneity reflects the homogeneity of the texture and measures how much it varies locally; correlation reflects the consistency of the texture. These 6 statistics characterize the image from different aspects, so they are chosen as the texture features;

(2.3.5) Let g(i, j) be the matrix element at position (i, j) of the gray-level co-occurrence matrix and L the maximum gray level of the image; the texture features are then expressed as follows:

Con = Σ_i Σ_j (i − j)² · g(i, j)

Ent = − Σ_i Σ_j g(i, j) · log g(i, j)

Ene = Σ_i Σ_j g(i, j)²

Hom = Σ_i Σ_j g(i, j) / (1 + (i − j)²)

Dis = Σ_i Σ_j |i − j| · g(i, j)

Cor = Σ_i Σ_j (i − μ1)(j − μ2) · g(i, j) / (σ1 · σ2)

where Con denotes contrast, Ent entropy, Ene energy, Hom homogeneity, Dis dissimilarity, and Cor correlation; all sums run over i, j = 1, 2, ..., L; μ1 = Σ_i Σ_j i · g(i, j) and μ2 = Σ_i Σ_j j · g(i, j) are the row and column means of the co-occurrence matrix, and σ1² = Σ_i Σ_j (i − μ1)² · g(i, j) and σ2² = Σ_i Σ_j (j − μ2)² · g(i, j) are the corresponding variances.
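The following Python sketch mirrors these definitions with plain NumPy: it accumulates the co-occurrence counts for one offset (Δ1, Δ2), normalizes them to g, and evaluates the six statistics. The interface, the quantization of the patch to `levels` gray levels, and the zero-guard on the correlation are illustrative assumptions:

    import numpy as np

    def glcm_features(img, offset=(0, 1), levels=16):
        """Six GLCM texture statistics of one gray-level patch.

        img: 2-D integer array already quantized to 0..levels-1;
        offset = (d_row, d_col) is the pair displacement (Delta1, Delta2).
        """
        g = np.zeros((levels, levels), dtype=float)
        dr, dc = offset
        h, w = img.shape
        for r in range(max(0, -dr), min(h, h - dr)):       # accumulate pair counts
            for c in range(max(0, -dc), min(w, w - dc)):
                g[img[r, c], img[r + dr, c + dc]] += 1
        g /= g.sum()                                       # normalize to probabilities g(f1, f2)

        i, j = np.indices(g.shape)
        mu1, mu2 = (i * g).sum(), (j * g).sum()            # row/column means of the GLCM
        s1 = np.sqrt((((i - mu1) ** 2) * g).sum())
        s2 = np.sqrt((((j - mu2) ** 2) * g).sum())
        con = (((i - j) ** 2) * g).sum()                   # contrast
        ent = -(g[g > 0] * np.log(g[g > 0])).sum()         # entropy
        ene = (g ** 2).sum()                               # energy
        hom = (g / (1 + (i - j) ** 2)).sum()               # homogeneity
        dis = (np.abs(i - j) * g).sum()                    # dissimilarity
        cor = 0.0 if s1 * s2 == 0 else ((i - mu1) * (j - mu2) * g).sum() / (s1 * s2)  # correlation
        return con, ent, ene, hom, dis, cor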

Further, in step (2.4) the change vector set Md is obtained from the spectral and texture features extracted from the two preprocessed images, as follows:

(2.4.1) From the spectral and texture features of the T1 and T2 images obtained in step (2.3), construct feature vectors and compute the difference between corresponding feature vectors of the two images; the difference Md(i, j) of the j-th feature of the i-th object between the two images is expressed as follows:

Md(i, j) = Md,1(i, j) − Md,2(i, j)

where Md,1(i, j) is the j-th feature of the i-th object of the image at time T1 and Md,2(i, j) is the j-th feature of the i-th object of the image at time T2; i ≤ N, where N is the total number of objects in the image; j ≤ Sfeature, where Sfeature is the total number of features per object; Sfeature = Sband × 8, where Sband is the total number of bands of the object (the eight features per band are the two spectral features Mean and Std plus the six texture statistics);

From the differences Md(i, j), the change vector mi corresponding to the i-th object is obtained, expressed as follows:

mi = (Md(i, 1), Md(i, 2), ..., Md(i, Sfeature))

(2.4.2) Repeat step (2.4.1) to compute the feature-vector differences of every corresponding object in the two images, obtaining the change vector of each object; these form the change vector set Md, expressed as follows:

Md = [m1, m2, ..., mN]

The elements of Md are arranged according to the arrangement of the N objects in the T1 and T2 images to form a difference map.
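Assembled per object, steps (2.4.1)-(2.4.2) amount to subtracting two feature matrices. A minimal sketch, assuming each date's features have already been stacked into an N × Sfeature array (a hypothetical layout):

    import numpy as np

    def change_vector_set(feat_t1, feat_t2):
        """Change vector set Md.

        feat_t1, feat_t2: arrays of shape (N, S_feature); row i holds the
        Mean-Std and GLCM features of object i at times T1 and T2.
        Row i of the result is the change vector m_i.
        """
        return np.asarray(feat_t1, dtype=float) - np.asarray(feat_t2, dtype=float)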

Further, in step (3), according to the Relief feature selection algorithm, the weight of each feature of each object in the feature subspace Ml of the change vector set Md is obtained, and the e change features with the largest weights are selected, with e smaller than the total number of features per object; these e change features are then selected for each of the N objects and combined with the change vector set Md to obtain the matrix X. The method is as follows:

(3.1) The change vector set Md is also called the feature space. Extract a subspace Ml = [m1, m2, m3, ..., mn] of the feature space Md = [m1, m2, m3, ..., mN] as the training set, where n < N. Using manual interpretation, divide the feature subspace Ml into two sets Ml1 and Ml2, where Ml1 contains l1 elements and Ml2 contains l2 elements, l1 + l2 = n, Ml1 ⊂ Ml, Ml2 ⊂ Ml;

(3.2) Compute the Euclidean distances between the change vectors of all pairs of elements of Ml and assemble them into a distance matrix, i.e.

d = [ d(mi, mj) ]_{n×n}

where d denotes the distance matrix and d(mi, mj) denotes the Euclidean distance between the change vectors mi and mj of Ml;

(3.3) In the feature subspace Ml, for the change vector mi of object i, i = 1, 2, 3, ..., n: if mi ∈ Ml1, select the s vectors of Ml1 with the smallest Euclidean distance to mi and the s vectors of Ml2 with the largest Euclidean distance to mi; if mi ∈ Ml2, select the s vectors of Ml2 with the smallest Euclidean distance to mi and the s vectors of Ml1 with the largest Euclidean distance to mi. That is, form for mi the neighbor matrix Qi = [q1, q2, ..., qs, ..., q2s], where [q1, q2, ..., qs] are the s vectors with the smallest Euclidean distance, [qs+1, qs+2, ..., q2s] are the s vectors with the largest Euclidean distance, and s = min(l1, l2);

(3.4) From the guessed-wrong neighbor statistic and the guessed-right neighbor statistic, compute the weight corresponding to each feature of object i, i.e.

W(fj) = diffrj − diffwj

where W(fj) is the weight corresponding to feature fj, diffrj is the guessed-wrong neighbor statistic of feature fj, and diffwj is the guessed-right neighbor statistic of feature fj; diffrj and diffwj are computed as follows:

diffrj = (1/s) · Σ_{a=s+1..2s} | Md(i, j) − Qi(a, j) |

diffwj = (1/s) · Σ_{b=1..s} | Md(i, j) − Qi(b, j) |

where s = min(l1, l2), i denotes object i, and j the j-th feature; Qi(a, j) denotes the j-th feature of the a-th vector of the neighbor matrix Qi, so indices 1, 2, ..., s address the nearest vectors [q1, q2, ..., qs] and indices s+1, s+2, ..., 2s address the farthest vectors [qs+1, qs+2, ..., q2s], and likewise for Qi(b, j);

(3.5) According to the weights of the features of each object in the feature subspace Ml computed in step (3.4), select the e change features with the largest weights, then select these e change features for each of the N objects and, combining them with the original change vector set Md, obtain the matrix X, expressed as follows:

X = [X1, X2, ..., XN]ᵀ, an N × e matrix whose i-th row is Xi

where Xi = [xi1 xi2 ... xie], i = 1, 2, ..., N; the matrix element xij denotes the j-th of the e change features of object i, and e is smaller than the total number of features per object.
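A compact Python sketch of this Relief variant on the labeled training subspace is given below. The interface, the exclusion of a vector from its own nearest-neighbor set, and the summation of the per-object weights into one weight vector are assumptions made for illustration:

    import numpy as np

    def relief_weights(M, labels, s):
        """Feature weights on the training subspace Ml.

        M: (n, S_feature) change vectors of the training subspace;
        labels: 0/1 class of each vector from the manual interpretation;
        s = min(l1, l2). Per the text, the s nearest same-class vectors
        and the s *farthest* other-class vectors are used for each object.
        """
        M = np.asarray(M, dtype=float)
        labels = np.asarray(labels)
        W = np.zeros(M.shape[1])
        dist = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=2)  # pairwise Euclidean
        for i in range(M.shape[0]):
            same = np.where(labels == labels[i])[0]
            same = same[same != i]
            other = np.where(labels != labels[i])[0]
            hits = same[np.argsort(dist[i, same])[:s]]        # smallest distance, same class
            misses = other[np.argsort(dist[i, other])[-s:]]   # largest distance, other class
            diffw = np.abs(M[i] - M[hits]).mean(axis=0)       # guessed-right statistic
            diffr = np.abs(M[i] - M[misses]).mean(axis=0)     # guessed-wrong statistic
            W += diffr - diffw                                # accumulate per-feature weights
        return W  # keep the e largest entries to build the matrix X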

Further, in step (4) the PCA algorithm is used to reduce the feature dimension of the matrix X, obtaining the transformed change feature matrix Y. The method is as follows:

(4.1) From the matrix X obtained in step (3), compute the mean μi of each row, i = 1, 2, ..., N:

μi = E(Xi) = (1/e) · Σ_{j=1..e} xij

where j = 1, 2, ..., e and E(Xi) denotes the mathematical expectation;

(4.2) From the means μi, compute the covariance matrix C of the matrix X, i.e.:

C = [ Cov(Xi, Xr) ]_{N×N}

Cov(Xi, Xr) = (1/e) · Σ_{j=1..e} (xij − μi)(xrj − μr)

where i = 1, 2, ..., N, r = 1, 2, ..., N, j = 1, 2, ..., e; it follows from the above formula that the covariance matrix C is a real symmetric matrix;

(4.3) Compute the eigenvalues λ1, λ2, ..., λN of the covariance matrix C, with corresponding eigenvectors v1, v2, ..., vN, i.e. λivi = Cvi, i = 1, 2, ..., N;

(4.4) Sort the eigenvalues λ1, λ2, ..., λN in descending order, obtaining λ′1, λ′2, ..., λ′N with corresponding eigenvectors v′1, v′2, ..., v′N, and take the sorted eigenvalues to form a diagonal matrix P;

P = diag(λ′1, λ′2, ..., λ′N)

(4.5) Compute the projection of the matrix X onto the new eigenvectors v′1, v′2, ..., v′N, obtaining the dimension-reduced change feature matrix Y, i.e.:

Y = X * P = [y1, y2, ..., yN]

where y1, y2, ..., yN denote the change features of the objects.
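The eigen-decomposition step can be sketched as follows. Note the patent's own notation multiplies X by the diagonal eigenvalue matrix P; for a dimension-consistent sketch, the usual samples-as-rows PCA convention is assumed here, so each object i receives a k-dimensional reduced feature yi:

    import numpy as np

    def pca_reduce(X, k=1):
        """PCA dimension reduction of the Relief-selected matrix X.

        X: (N, e) matrix, one row per object. A standard samples-as-rows
        PCA is assumed (an illustrative reading of the patent's step 4);
        the k leading components are kept per object.
        """
        X = np.asarray(X, dtype=float)
        Xc = X - X.mean(axis=0)                 # center each feature column
        C = Xc.T @ Xc / X.shape[0]              # e x e covariance matrix (real symmetric)
        lam, V = np.linalg.eigh(C)              # eigenvalues in ascending order
        V = V[:, np.argsort(lam)[::-1]]         # eigenvectors sorted by descending eigenvalue
        return Xc @ V[:, :k]                    # Y: (N, k) projected change features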

Further, in step (5) the NR-Kmeans algorithm is used to classify the elements yi of the change feature matrix Y, and the noise-disturbed sparse data are removed from Y to obtain the dense point set Y′. The steps are as follows:

(5.1) In the matrix Y = [y1, y2, ..., yN], for each element point yα ∈ Y and a given neighborhood radius δ, find the element points yf satisfying d(yα, yf) ≤ δ; Qb(yα) denotes the set of nearest-neighbor element points of yα, including yα itself, where d(yα, yf) denotes the distance between element points yα and yf, b is the number of nearest-neighbor element points of yα within the neighborhood, and f = 1, 2, ..., b;

(5.2) Compute the density function value of element point yα; the density function value of yα is the sum of the influence functions exerted on it by all nearest-neighbor element points within the neighborhood radius δ. The density function value of yα is computed as:

DF(yα) = Σ_{f=1..b} K( d(yα, yf) )

where d(yα, yf) is the distance between element points yα and yf, K(·) is the influence of a neighbor at that distance (a kernel that decreases with distance), and f = 1, 2, ..., b;

(5.3) Repeat steps (5.1)-(5.2) until the density function values of all element points of the matrix Y have been computed;

(5.4) When the density function value of yβ is greater than or equal to the average density value of all element points of the matrix Y, element point yβ is regarded as usable data and placed into the dense point set Y′; that is, yβ satisfies:

DF(yβ) ≥ (1/N) · Σ_{α=1..N} DF(yα)

where DF(yβ) denotes the density function value of yβ and DF(yα) denotes the density function value of element point yα;

(5.5) From step (5.4), obtain all usable data of the matrix Y; the usable data constitute the dense point set Y′, the set of element points remaining after the noise-disturbed sparse data are removed.
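A sketch of this denoising stage for one-dimensional change features follows. The text does not spell out the influence kernel, so a Gaussian kernel exp(−d²/(2δ²)) is assumed here; points outside the radius δ contribute nothing. The densities are returned alongside Y′ because the next stage uses them:

    import numpy as np

    def dense_points(Y, delta):
        """NR-Kmeans denoising: keep points whose density function value
        is at least the average density over all points.

        Y: 1-D change features y_i; delta: neighborhood radius.
        The Gaussian influence kernel is an assumption, not from the text.
        """
        Y = np.asarray(Y, dtype=float).reshape(-1, 1)
        d = np.abs(Y - Y.T)                                   # pairwise distances
        infl = np.where(d <= delta, np.exp(-d ** 2 / (2 * delta ** 2)), 0.0)
        DF = infl.sum(axis=1)                                 # density of each point
        keep = DF >= DF.mean()                                # dense-point criterion
        return Y[keep].ravel(), DF[keep]                      # dense set Y' and its densities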

Further, in step (6) the dense point set Y′ is divided into an unchanged class and a changed class; the gray value of the objects in the difference map corresponding to unchanged-class elements is set to 255 and that of the objects corresponding to changed-class elements to 0, and the change detection result map is output according to the object gray values. The method is as follows:

(6.1) In the dense point set Y′ = [y′1, y′2, ..., y′η], η is the number of element points of Y′, η ≤ N, and y′η denotes an object feature remaining after the noise-disturbed data are removed. Select the element point with the largest density value as the first initial cluster center C1; compute the differences between the feature values of the other element points of Y′ and the feature value of C1, and select the element point whose feature value differs most from that of C1, i.e. the point with the largest absolute difference, as the second initial cluster center C2. The feature values of C1 and C2 are c1 and c2, and the cluster centers C1 and C2 correspond to classes D1 and D2;

(6.2) Compute the differences between each element point y′θ of the dense point set Y′ and the feature values of the two cluster centers, i.e.

dθ1 = |y′θ − c1|

dθ2 = |y′θ − c2|

where dθ1 denotes the absolute value of the feature difference between element point y′θ and cluster center C1, dθ2 the absolute value of the feature difference between y′θ and cluster center C2, and θ = 1, 2, ..., η; assign each element point of the dense point set Y′ to the class of the cluster center with the smaller absolute difference;

(6.3) Compute the averages c′1 and c′2 of the feature values of the element points of classes D1 and D2, i.e.

c′1 = (1/|D1|) · Σ_{y′∈D1} y′

c′2 = (1/|D2|) · Σ_{y′∈D2} y′

where c′1 and c′2 are the average feature values of the element points of classes D1 and D2 respectively, |D1| denotes the number of element points of class D1, |D2| the number of element points of class D2, and y′ ranges over the feature values of the element points within the class; if c′1 ≠ c1, update the feature value c1 of the cluster center C1 of class D1 to c′1; if c′2 ≠ c2, update the feature value c2 of the cluster center C2 of class D2 to c′2;

(6.4) Repeat steps (6.2)-(6.3) until the feature values of both cluster centers no longer change;

(6.5) Assign the two classes D1 and D2 to an unchanged class and a changed class: the class with the smaller average is the unchanged class and the class with the larger average is the changed class. The gray value of the objects in the difference map corresponding to unchanged-class elements is set to 255, and the gray value of the objects corresponding to changed-class elements is set to 0;

(6.6) Output the detection result map according to the object gray values, with the changed regions of the image marked.
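The clustering stage reduces to a two-center one-dimensional K-means with deterministic initialization. A sketch under the assumption that the reduced features are scalars (as in Y = [y1, ..., yN]), reusing the densities from the denoising stage:

    import numpy as np

    def nr_kmeans_two_class(Yp, DF):
        """Two-class clustering of the dense point set Y' (sketch).

        Yp: 1-D dense change features; DF: density value of each point.
        The first center is the densest point, the second the point whose
        feature value differs most from it (farthest-distance principle).
        Returns a boolean mask: True where the point belongs to the
        changed class (the class with the larger mean).
        """
        c1 = Yp[np.argmax(DF)]                            # densest point -> C1
        c2 = Yp[np.argmax(np.abs(Yp - c1))]               # farthest point from C1 -> C2
        while True:
            in_d1 = np.abs(Yp - c1) <= np.abs(Yp - c2)    # assign to nearer center
            n1, n2 = Yp[in_d1].mean(), Yp[~in_d1].mean()  # class means c'_1, c'_2
            if n1 == c1 and n2 == c2:
                break                                     # centers stable -> converged
            c1, c2 = n1, n2
        return in_d1 if c1 > c2 else ~in_d1               # larger-mean class = changed

Objects flagged changed would then be rendered with gray value 0 in the result map and the rest with 255, as in step (6.5).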

Beneficial effects: Compared with the prior art, the technical solution of the present invention has the following beneficial technical effects:

(1) To reduce the interference of noise on the data, a density distribution function is used to exclude low-density points, removing noise interference points.

(2) Combined with the farthest-distance principle, the initial cluster centers are selected within the high-density data set, avoiding the drawback of the original K-means algorithm of selecting initial cluster centers at random.

Brief Description of the Drawings

Fig. 1 is the flow chart of the present invention;

Fig. 2 is the flow chart of the NR-Kmeans algorithm.

Detailed Description of the Embodiments

The technical solution of the present invention is further described below with reference to the accompanying drawings and an embodiment.

The robust remote sensing image change detection method of the present invention, as shown in Fig. 1, comprises the following steps:

(1) Acquire remote sensing images of the target area at times T1 and T2; the images meet the resolution requirement and contain clear texture feature information. An image of an area of Hefei City is selected, covering farmland crop areas;

(2) Use ENVI software to preprocess the image data and eliminate acquisition errors; use the eCognition software platform to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks, i.e. obtaining N objects per image; extract the spectral features and texture features of the T1 and T2 images, construct feature vectors, and obtain the change vector set Md = [m1, m2, ..., mN], where mi denotes the change vector corresponding to the i-th object; arrange the elements of Md according to the arrangement of the N objects in the T1 (or T2) image to form a difference map;

(3) Using the Relief feature selection algorithm, obtain the weight of each feature of each object in the feature subspace Ml of the change vector set Md, and select the e change features with the largest weights, where e is smaller than the total number of features per object; then select these e change features for each of the N objects and, combining them with the change vector set Md, obtain the matrix X;

(4) Use the PCA algorithm to reduce the feature dimension of the matrix X, obtaining the transformed change feature matrix Y = [y1, y2, ..., yN];

(5) Use the NR-Kmeans algorithm to classify the elements yi of the change feature matrix Y; remove the noise-disturbed sparse data from Y to obtain the dense point set Y′;

(6) Divide the dense point set Y′ into an unchanged class and a changed class; in the difference map, set the gray value of the objects corresponding to unchanged-class elements to 255 and the gray value of the objects corresponding to changed-class elements to 0; output the change detection result map according to the object gray values, with the changed regions of the image marked.

In step (2), ENVI software is used to preprocess the image data and eliminate acquisition errors; the eCognition software platform is used to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks, i.e. obtaining N objects per image; the spectral features and texture features of the T1 and T2 images are extracted and feature vectors are constructed to obtain the change vector set. The steps are as follows:

(2.1) Use ENVI software to perform geometric correction, image registration, and atmospheric correction on the T1 and T2 images to complete the image data preprocessing; both images are 512*512 in size;

(2.2) Use the eCognition software platform to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks, i.e. obtaining N objects per image;

(2.3) Compute the Mean-Std features of the preprocessed images as their spectral features; extract the texture features of the preprocessed images with the gray-level co-occurrence matrix;

(2.4) From the spectral and texture features extracted from the two preprocessed images, construct feature vectors and obtain the change vector set Md.

In step (2.3), the Mean-Std features of the image are computed as the spectral features of the image, as follows:

For each object of the image, the mean Mean and standard deviation Std of the spectral features are computed by the following formulas:

Mean = (1/A) · Σ_{i=1..A} ci

Std = sqrt( (1/A) · Σ_{i=1..A} (ci − Mean)² )

where Mean is the mean gray level of the pixels in the object, Std is the standard deviation of the pixel gray levels in the object, A is the number of pixels in the object, and ci is the gray level of the i-th pixel in the object.

In step (2.3), the gray-level co-occurrence matrix is used to extract the texture features of the image, as follows:

(2.3.1) Any pixel (τ1, τ2) in the image and another pixel (τ1+Δ1, τ2+Δ2) form a point pair, where Δ1 and Δ2 are integers. Let the gray value of pixel (τ1, τ2) be f1 and the gray value of (τ1+Δ1, τ2+Δ2) be f2; the gray value of the pair is then (f1, f2). If the maximum gray level of the image is L, there are L*L possible combinations of f1 and f2; in this embodiment L = 16, so there are 16*16 combinations of f1 and f2;

(2.3.2) Count how many times each distinct value (f1, f2) occurs in the image and arrange the counts into a matrix; for example, the matrix element at position (L, L) is the number of point pairs whose two gray values are both L;

(2.3.3) Normalize the count of each distinct (f1, f2) value by the total number of occurrences to obtain the occurrence probability g(f1, f2); the normalized square matrix is called the gray-level co-occurrence matrix;

(2.3.4) The gray-level co-occurrence matrix provides 14 statistics usable as texture features; the invention selects 6 of them: contrast, entropy, energy, homogeneity, dissimilarity, and correlation. Contrast reflects the sharpness of the image and the depth of its texture grooves; entropy expresses the non-uniformity or complexity of the texture; energy reflects the uniformity of the gray-level distribution and the coarseness of the texture; homogeneity reflects the homogeneity of the texture and measures how much it varies locally; correlation reflects the consistency of the texture. These 6 statistics characterize the image from different aspects, so they are chosen as the texture features;

(2.3.5) Let g(i, j) be the matrix element at position (i, j) of the gray-level co-occurrence matrix and L the maximum gray level of the image; the texture features are then expressed as follows:

Con = Σ_i Σ_j (i − j)² · g(i, j)

Ent = − Σ_i Σ_j g(i, j) · log g(i, j)

Ene = Σ_i Σ_j g(i, j)²

Hom = Σ_i Σ_j g(i, j) / (1 + (i − j)²)

Dis = Σ_i Σ_j |i − j| · g(i, j)

Cor = Σ_i Σ_j (i − μ1)(j − μ2) · g(i, j) / (σ1 · σ2)

where Con denotes contrast, Ent entropy, Ene energy, Hom homogeneity, Dis dissimilarity, and Cor correlation; all sums run over i, j = 1, 2, ..., L; μ1 = Σ_i Σ_j i · g(i, j) and μ2 = Σ_i Σ_j j · g(i, j) are the row and column means of the co-occurrence matrix, and σ1² = Σ_i Σ_j (i − μ1)² · g(i, j) and σ2² = Σ_i Σ_j (j − μ2)² · g(i, j) are the corresponding variances.

In step (2.4), the change vector set Md is obtained from the spectral and texture features extracted from the two preprocessed images, as follows:

(2.4.1) From the spectral and texture features of the T1 and T2 images obtained in step (2.3), construct feature vectors and compute the difference between corresponding feature vectors of the two images; the difference Md(i, j) of the j-th feature of the i-th object between the two images is expressed as follows:

Md(i, j) = Md,1(i, j) − Md,2(i, j)

where Md,1(i, j) is the j-th feature of the i-th object of the image at time T1 and Md,2(i, j) is the j-th feature of the i-th object of the image at time T2; i ≤ N, where N is the total number of objects in the image; j ≤ Sfeature, where Sfeature is the total number of features per object; Sfeature = Sband × 8, where Sband is the total number of bands of the object; in this embodiment the image has 4 bands, so Sfeature = 32;

From the differences Md(i, j), the change vector mi corresponding to the i-th object is obtained, expressed as follows:

mi = (Md(i, 1), Md(i, 2), ..., Md(i, 32))

(2.4.2) Repeat step (2.4.1) to compute the feature-vector differences of every corresponding object in the two images, obtaining the change vector of each object; these form the change vector set Md, expressed as follows:

Md = [m1, m2, ..., mN]

The elements of Md are arranged according to the arrangement of the N objects in the T1 and T2 images to form a difference map.

In step (3), according to the Relief feature selection algorithm, the weight of each feature of each object in the feature subspace Ml of the change vector set Md is obtained, and the e change features with the largest weights are selected, with e smaller than the total number of features per object; these e change features are then selected for each of the N objects and combined with the change vector set Md to obtain the matrix X. The method is as follows:

(3.1) The change vector set Md is also called the feature space. Extract a subspace Ml = [m1, m2, m3, ..., mn] of the feature space Md = [m1, m2, m3, ..., mN] as the training set, where n < N. Using manual interpretation, divide the feature subspace Ml into two sets Ml1 and Ml2, where Ml1 contains l1 elements and Ml2 contains l2 elements, l1 + l2 = n, Ml1 ⊂ Ml, Ml2 ⊂ Ml;

(3.2) Compute the Euclidean distances between the change vectors of all pairs of elements of Ml and assemble them into a distance matrix, i.e.

d = [ d(mi, mj) ]_{n×n}

where d denotes the distance matrix and d(mi, mj) denotes the Euclidean distance between the change vectors mi and mj of Ml;

(3.3) In the feature subspace Ml, for the change vector mi of object i, i = 1, 2, 3, ..., n: if mi ∈ Ml1, select the s vectors of Ml1 with the smallest Euclidean distance to mi and the s vectors of Ml2 with the largest Euclidean distance to mi; if mi ∈ Ml2, select the s vectors of Ml2 with the smallest Euclidean distance to mi and the s vectors of Ml1 with the largest Euclidean distance to mi. That is, form for mi the neighbor matrix Qi = [q1, q2, ..., qs, ..., q2s], where [q1, q2, ..., qs] are the s vectors with the smallest Euclidean distance, [qs+1, qs+2, ..., q2s] are the s vectors with the largest Euclidean distance, and s = min(l1, l2);

(3.4) From the guessed-wrong neighbor statistic and the guessed-right neighbor statistic, compute the weight corresponding to each feature of object i, i.e.

W(fj) = diffrj − diffwj

where W(fj) is the weight corresponding to feature fj, diffrj is the guessed-wrong neighbor statistic of feature fj, and diffwj is the guessed-right neighbor statistic of feature fj; diffrj and diffwj are computed as follows:

diffrj = (1/s) · Σ_{a=s+1..2s} | Md(i, j) − Qi(a, j) |

diffwj = (1/s) · Σ_{b=1..s} | Md(i, j) − Qi(b, j) |

where s = min(l1, l2), i denotes object i, and j the j-th feature; Qi(a, j) denotes the j-th feature of the a-th vector of the neighbor matrix Qi, so indices 1, 2, ..., s address the nearest vectors [q1, q2, ..., qs] and indices s+1, s+2, ..., 2s address the farthest vectors [qs+1, qs+2, ..., q2s], and likewise for Qi(b, j);

(3.5) According to the weights of the features of each object in the feature subspace Ml computed in step (3.4), select the e change features with the largest weights, then select these e change features for each of the N objects and, combining them with the original change vector set Md, obtain the matrix X, expressed as follows:

X = [X1, X2, ..., XN]ᵀ, an N × e matrix whose i-th row is Xi

where Xi = [xi1 xi2 ... xie], i = 1, 2, ..., N; the matrix element xij denotes the j-th of the e change features of object i, and e is smaller than the total number of features per object.

In step (4), the PCA algorithm is used to reduce the feature dimension of the matrix X, obtaining the transformed change feature matrix Y. The method is as follows:

(4.1) From the matrix X obtained in step (3), compute the mean μi of each row, i = 1, 2, ..., N:

μi = E(Xi) = (1/e) · Σ_{j=1..e} xij

where j = 1, 2, ..., e and E(Xi) denotes the mathematical expectation;

(4.2) From the means μi, compute the covariance matrix C of the matrix X, i.e.:

C = [ Cov(Xi, Xr) ]_{N×N}

Cov(Xi, Xr) = (1/e) · Σ_{j=1..e} (xij − μi)(xrj − μr)

where i = 1, 2, ..., N, r = 1, 2, ..., N, j = 1, 2, ..., e; it follows from the above formula that the covariance matrix C is a real symmetric matrix;

(4.3) Compute the eigenvalues λ1, λ2, ..., λN of the covariance matrix C, with corresponding eigenvectors v1, v2, ..., vN, i.e. λivi = Cvi, i = 1, 2, ..., N;

(4.4) Sort the eigenvalues λ1, λ2, ..., λN in descending order, obtaining λ′1, λ′2, ..., λ′N with corresponding eigenvectors v′1, v′2, ..., v′N, and take the sorted eigenvalues to form a diagonal matrix P;

P = diag(λ′1, λ′2, ..., λ′N)

(4.5) Compute the projection of the matrix X onto the new eigenvectors v′1, v′2, ..., v′N, obtaining the dimension-reduced change feature matrix Y, i.e.:

Y = X * P = [y1, y2, ..., yN]

where y1, y2, ..., yN denote the change features of the objects.

In step (5), the NR-Kmeans algorithm is used to classify the elements yi of the change feature matrix Y, and the noise-disturbed sparse data are removed from Y to obtain the dense point set Y′. The steps are as follows:

(5.1) In the matrix Y = [y1, y2, ..., yN], for each element point yα ∈ Y and a given neighborhood radius δ, find the element points yf satisfying d(yα, yf) ≤ δ; Qb(yα) denotes the set of nearest-neighbor element points of yα, including yα itself, where d(yα, yf) denotes the distance between element points yα and yf, b is the number of nearest-neighbor element points of yα within the neighborhood, and f = 1, 2, ..., b;

(5.2) Compute the density function value of element point yα; the density function value of yα is the sum of the influence functions exerted on it by all nearest-neighbor element points within the neighborhood radius δ. The density function value of yα is computed as:

DF(yα) = Σ_{f=1..b} K( d(yα, yf) )

where d(yα, yf) is the distance between element points yα and yf, K(·) is the influence of a neighbor at that distance (a kernel that decreases with distance), and f = 1, 2, ..., b;

(5.3) Repeat steps (5.1)-(5.2) until the density function values of all element points of the matrix Y have been computed;

(5.4) When the density function value of yβ is greater than or equal to the average density value of all element points of the matrix Y, element point yβ is regarded as usable data and placed into the dense point set Y′; that is, yβ satisfies:

DF(yβ) ≥ (1/N) · Σ_{α=1..N} DF(yα)

where DF(yβ) denotes the density function value of yβ and DF(yα) denotes the density function value of element point yα;

(5.5) From step (5.4), obtain all usable data of the matrix Y; the usable data constitute the dense point set Y′, the set of element points remaining after the noise-disturbed sparse data are removed.

In step (6), the dense point set Y′ is divided into an unchanged class and a changed class. The gray values of the objects in the difference map corresponding to the elements of the unchanged class are set to 255, and those corresponding to the elements of the changed class are set to 0; the change detection result map is output according to the objects' gray values. The method is as follows:

(6.1) In the dense point set Y′ = [y′1, y′2, ..., y′η], η is the number of element points in Y′, η ≤ N, and y′η represents the feature of an object after the noise-corrupted data have been removed. Select the element point with the largest density value as the first initial cluster center C1; calculate the differences between the feature values of the other element points in Y′ and the feature value of C1, and select the element point whose feature value differs most from that of C1, i.e. the point with the largest absolute difference, as the second initial cluster center C2. The feature values of C1 and C2 are c1 and c2, and the cluster centers C1 and C2 correspond to the classes D1 and D2;

(6.2) Calculate the differences between each element point y′θ in the dense point set Y′ and the feature values of the two cluster centers, namely

dθ1 = |y′θ − c1|

dθ2 = |y′θ − c2|

where dθ1 denotes the absolute value of the difference between the element point y′θ and the cluster center C1, dθ2 denotes the absolute value of the difference between y′θ and the cluster center C2, and θ = 1, 2, ..., η; each element point of the dense point set Y′ is assigned to the class of the cluster center with the smaller absolute difference;

(6.3) Calculate the averages c′1 and c′2 of the feature values of the element points in the classes D1 and D2, namely

c′1 = (1/|D1|) Σy′∈D1 y′

c′2 = (1/|D2|) Σy′∈D2 y′

where c′1 and c′2 are the averages of the feature values of the element points in the classes D1 and D2, respectively; |D1| and |D2| denote the numbers of element points in D1 and D2, and y′ ranges over the feature values of the element points within each class. If c′1 ≠ c1, update the feature value c1 of the cluster center C1 of class D1 to c′1; if c′2 ≠ c2, update the feature value c2 of the cluster center C2 of class D2 to c′2;

(6.4) Repeat steps (6.2)–(6.3) until the feature values of both cluster centers no longer change;

(6.5) Divide the two classes D1 and D2 into an unchanged class and a changed class: the class with the smaller average is the unchanged class, and the class with the larger average is the changed class. The gray values of the objects in the difference map corresponding to the elements of the unchanged class are set to 255, and those corresponding to the elements of the changed class are set to 0;

(6.6) Output the detection result map according to the objects' gray values; the map marks the regions where the image has changed.
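Putting steps (6.1)–(6.6) together, the following is a minimal sketch of the two-center one-dimensional k-means with the density-based initialization described above. DF is assumed to hold the density values of the points of Y′ (e.g. recomputed with density_values), and mapping the resulting gray values back onto the objects of the difference map is left to the caller.

    import numpy as np

    def nr_kmeans_two_class(Yp: np.ndarray, DF: np.ndarray):
        """Steps (6.1)-(6.5): two-center 1-D k-means on the dense point set Y'.

        Yp : dense point set Y' (1-D array); DF : its density function values.
        Returns (labels, grays): labels[i] in {0, 1}; grays[i] is 255 for the
        unchanged class and 0 for the changed class. Assumes both classes
        stay non-empty during the iteration.
        """
        c1 = Yp[np.argmax(DF)]                        # (6.1) densest point -> C1
        c2 = Yp[np.argmax(np.abs(Yp - c1))]           # farthest point from C1 -> C2
        while True:
            labels = (np.abs(Yp - c2) < np.abs(Yp - c1)).astype(int)   # (6.2) assign
            n1 = Yp[labels == 0].mean()               # (6.3) class means
            n2 = Yp[labels == 1].mean()
            if n1 == c1 and n2 == c2:                 # (6.4) centers stable -> stop
                break
            c1, c2 = n1, n2
        changed = int(c2 > c1)                        # (6.5) larger mean = changed class
        grays = np.where(labels == changed, 0, 255)   # changed -> 0, unchanged -> 255
        return labels, grays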

The invention improves on the traditional k-means clustering algorithm and achieves higher detection precision and accuracy.

The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principle of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A robust remote sensing image change detection method, characterized by comprising the following steps:
(1) acquiring remote sensing images of the target area at times T1 and T2 that meet the resolution requirement and contain clear texture feature information;
(2) using ENVI software to preprocess the image data and eliminate shooting errors; using the eCognition software platform to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks to obtain N objects in each; obtaining the spectral features and texture features of the T1 and T2 images, constructing feature vectors, and obtaining the change vector set Md, Md = [m1, m2, ..., mN], where mN denotes the change vector corresponding to the N-th object; arranging the elements of Md according to the arrangement of the N objects in the T1 or T2 image to form a difference map;
(3) obtaining the feature subspace Ml of the change vector set Md according to the Relief feature selection algorithm; selecting the e change features with the largest weights according to the weights of the features of each object, the value of e being less than the total number of features contained in an object; then selecting these e change features from each of the N objects and combining them with the change vector set Md to obtain a matrix X;
(4) performing feature dimension reduction on the matrix X using the PCA algorithm to obtain the transformed change feature matrix Y, Y = [y1, y2, ..., yN];
(5) using the NR-Kmeans algorithm to classify the elements yi of the change feature matrix Y and removing the noise-corrupted sparse data from Y to obtain the dense point set Y′; the steps are as follows:
(5.1) in the matrix Y = [y1, y2, ..., yN], for each element point yα, yα ∈ Y, given the neighborhood radius δ, find the element points yf, f = 1, 2, ..., b, satisfying dist(yα, yf) ≤ δ, which form the set Qb(yα) = {y1, y2, ..., yb}; Qb(yα) denotes the set of nearest-neighbor element points of yα, including yα itself; dist(yα, yf) denotes the distance between yα and yf, and b is the number of nearest-neighbor element points of yα within the neighborhood;
(5.2) calculating the density function value of the element point yα, which represents the sum of the influence functions of all its nearest-neighbor element points within the neighborhood radius δ:
[Equation image: DF(yα), a sum over f = 1, 2, ..., b of a function of dist(yα, yf)]
(5.3) repeating steps (5.1)–(5.2) until the density function values of all element points in the matrix Y have been calculated;
(5.4) when the density function value of yβ is greater than or equal to the average density value over all element points in the matrix Y, regarding the element point yβ as usable data and placing it in the dense point set Y′, i.e. yβ satisfies:
DF(yβ) ≥ (1/N) Σα=1,...,N DF(yα)
where DF(yβ) denotes the density function value of yβ and DF(yα) is the density function value of the data point yα;
(5.5) obtaining all usable data in the matrix Y according to step (5.4), the usable data constituting the dense point set Y′, the set of element points remaining after removal of the noise-corrupted sparse data;
(6) dividing the dense point set Y′ into an unchanged class and a changed class, setting the gray values of the objects in the difference map corresponding to the elements of the unchanged class to 255 and those corresponding to the elements of the changed class to 0, outputting the change detection result map according to the objects' gray values, and marking the changed image regions in the map.
2. The robust remote sensing image change detection method according to claim 1, characterized in that in step (2), ENVI software is used to preprocess the image data and eliminate shooting errors; the eCognition software platform is used to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks to obtain N objects in each; the spectral features and texture features of the T1 and T2 images are obtained, feature vectors are constructed, and the change vector set is obtained; the steps are as follows:
(2.1) using ENVI software to perform geometric correction, image registration and atmospheric correction on the T1 and T2 images respectively, realizing the image data preprocessing;
(2.2) using the eCognition software platform to perform multi-scale segmentation on the preprocessed T1 and T2 images, dividing each of the two images into N image blocks to obtain N objects in each;
(2.3) obtaining the Mean-Std features of the preprocessed images as their spectral features, and extracting the texture features of the preprocessed images using the gray level co-occurrence matrix;
(2.4) constructing feature vectors from the spectral and texture features extracted from the two preprocessed images, and obtaining the change vector set Md.
3. The robust remote sensing image change detection method according to claim 2, characterized in that step (2.3) obtains the Mean-Std features of the image as its spectral features; the method is as follows:
for each object of the image, the mean Mean and the standard deviation Std of the spectral features are calculated according to the following formulas:
Mean = (1/A) Σp=1,...,A cp
Std = sqrt( (1/A) Σp=1,...,A (cp − Mean)² )
where Mean is the mean gray level of the pixels in the object, Std is the standard deviation of the gray levels of the pixels in the object, A denotes the number of pixels in the object, and cp denotes the gray level of the p-th pixel in the object.
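A minimal sketch of the Mean-Std feature of claim 3, assuming the object is supplied as a flat array of its A pixel gray levels:

    import numpy as np

    def mean_std_features(pixels: np.ndarray) -> tuple[float, float]:
        """Mean and standard deviation of the pixel gray levels c_p of one object."""
        mean = pixels.mean()                           # Mean = (1/A) * sum(c_p)
        std = np.sqrt(((pixels - mean) ** 2).mean())   # Std over the A pixels
        return float(mean), float(std)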
4. The robust remote sensing image change detection method according to claim 2, characterized in that step (2.3) extracts the texture features of the image using the gray level co-occurrence matrix; the method is as follows:
(2.3.1) any pixel (τ1, τ2) in the image and another pixel (τ1+Δ1, τ2+Δ2) form a point pair, where Δ1 and Δ2 are integers; let the gray value of the pixel (τ1, τ2) be f1 and that of (τ1+Δ1, τ2+Δ2) be f2, so that the gray value of the point pair is (f1, f2); if the maximum gray level of the image is L, there are L×L possible combinations of (f1, f2);
(2.3.2) counting the number of occurrences of each distinct (f1, f2) pair in the image and arranging the counts into a square matrix, in which the element at position (f1, f2) is the number of occurrences of point pairs with gray values (f1, f2);
(2.3.3) normalizing the count of each distinct (f1, f2) pair by the total number of occurrences to obtain the occurrence probability g(f1, f2); the normalized square matrix is called the gray level co-occurrence matrix;
(2.3.4) selecting 6 statistics provided by the gray level co-occurrence matrix as texture features, namely contrast, entropy, energy, homogeneity, dissimilarity and correlation;
(2.3.5) if the element of the gray level co-occurrence matrix at position (z, t) is g(z, t) and L is the maximum gray level of the image, the texture features are expressed as follows, with the sums running over z, t = 1, 2, ..., L:
Con = Σz,t (z − t)² g(z, t)
Ent = −Σz,t g(z, t) log g(z, t)
Ene = Σz,t g(z, t)²
Hom = Σz,t g(z, t) / (1 + (z − t)²)
Dis = Σz,t |z − t| g(z, t)
Cor = Σz,t (z − μ1)(t − μ2) g(z, t) / (σ1σ2)
where Con denotes contrast, Ent denotes entropy, Ene denotes energy, Hom denotes homogeneity, Dis denotes dissimilarity, and Cor denotes correlation; here
μ1 = Σz z Σt g(z, t), μ2 = Σt t Σz g(z, t),
σ1 = sqrt(Σz (z − μ1)² Σt g(z, t)), σ2 = sqrt(Σt (t − μ2)² Σz g(z, t)).
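Below is a numpy sketch of steps (2.3.1)–(2.3.5) for a single offset (Δ1, Δ2), assuming the input image has already been quantized to integer gray levels 0..L−1; the six statistics follow the standard gray level co-occurrence definitions reconstructed above.

    import numpy as np

    def glcm_features(img: np.ndarray, d1: int, d2: int, L: int) -> dict:
        """Gray level co-occurrence matrix and its six texture statistics.

        img : 2-D integer array of gray levels in 0..L-1;
        (d1, d2) : the pixel-pair offset (delta_1, delta_2).
        """
        # (2.3.1)-(2.3.2) count co-occurring gray value pairs (f1, f2)
        f1 = img[max(0, -d1):img.shape[0] - max(0, d1),
                 max(0, -d2):img.shape[1] - max(0, d2)]
        f2 = img[max(0, d1):img.shape[0] + min(0, d1),
                 max(0, d2):img.shape[1] + min(0, d2)]
        counts = np.zeros((L, L))
        np.add.at(counts, (f1.ravel(), f2.ravel()), 1)
        g = counts / counts.sum()                      # (2.3.3) normalize to g(z, t)
        z, t = np.indices((L, L))                      # (2.3.4)-(2.3.5) six statistics
        mu1, mu2 = (z * g).sum(), (t * g).sum()
        s1 = np.sqrt(((z - mu1) ** 2 * g).sum())
        s2 = np.sqrt(((t - mu2) ** 2 * g).sum())
        nz = g > 0                                     # avoid log(0) in the entropy
        return {
            "Con": ((z - t) ** 2 * g).sum(),
            "Ent": -(g[nz] * np.log(g[nz])).sum(),
            "Ene": (g ** 2).sum(),
            "Hom": (g / (1 + (z - t) ** 2)).sum(),
            "Dis": (np.abs(z - t) * g).sum(),
            "Cor": (((z - mu1) * (t - mu2) * g).sum()) / (s1 * s2),
        }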
5. The robust remote sensing image change detection method according to claim 2, characterized in that step (2.4) obtains the change vector set Md from the spectral and texture features extracted from the two preprocessed images; the method is as follows:
(2.4.1) constructing feature vectors from the spectral and texture features of the T1 and T2 images obtained in step (2.3), and calculating the differences between the corresponding feature vectors of the two images; the difference Md(i, j) of the j-th feature of the i-th object between the two images is:
Md(i, j) = Md,1(i, j) − Md,2(i, j)
where Md,1(i, j) is the j-th feature of the i-th object of the T1 image and Md,2(i, j) is the j-th feature of the i-th object of the T2 image; i ≤ N, where N is the total number of objects in the image; j ≤ Sfeature, where Sfeature is the total number of features contained in an object, Sfeature = Sband × 8, and Sband is the total number of bands of the object;
from the differences Md(i, j), the change vector mi corresponding to the i-th object is obtained, expressed as:
mi = (Md(i, 1), Md(i, 2), ..., Md(i, Sfeature))
(2.4.2) repeating step (2.4.1) to calculate the feature vector differences of each pair of corresponding objects in the two images, obtaining the change vector corresponding to each object and forming the change vector set Md, expressed as:
Md = [m1, m2, ..., mN]
the elements of Md are arranged according to the arrangement of the N objects in the T1 and T2 images to form the difference map.
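With the per-object features of the two dates stacked into matrices, the change vector set of claim 5 is a plain elementwise difference; F1 and F2 below are hypothetical names for those N×Sfeature feature matrices:

    import numpy as np

    def change_vector_set(F1: np.ndarray, F2: np.ndarray) -> np.ndarray:
        """(2.4.1)-(2.4.2): M_d(i, j) = M_d1(i, j) - M_d2(i, j).

        F1, F2 : N x S_feature feature matrices of the two dates,
        row i holding the feature vector of object i. Row i of the
        result is the change vector m_i; the stacked rows form M_d.
        """
        return F1 - F2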
6. The robust remote sensing image change detection method according to any one of claims 1 to 5, characterized in that step (3) obtains the feature subspace Ml of the change vector set Md according to the Relief feature selection algorithm, selects the e change features with the largest weights according to the weights of the features of each object, where the value of e is less than the total number of features contained in an object, then selects these e change features from each of the N objects and combines them with the change vector set Md to obtain the matrix X; the method is as follows:
(3.1) the change vector set Md is also called the feature space; extract a subspace Ml = [m1, m2, m3, ..., mn] of the feature space Md = [m1, m2, m3, ..., mN] as the training set, where n < N; the feature subspace Ml is divided into two sets Ml1 and Ml2, where Ml1 contains l1 elements, Ml2 contains l2 elements, l1 + l2 = n, Ml1 ⊆ Ml, Ml2 ⊆ Ml;
(3.2) calculate the Euclidean distances between the change vectors of every two elements of Ml, forming a distance matrix, namely:
d = [dij]n×n
where d denotes the distance matrix and dij denotes the Euclidean distance between the change vectors mi and mj of Ml;
(3.3) in the feature subspace Ml, for the change vector mi of object i, i = 1, 2, 3, ..., n: if mi ∈ Ml1, select the s vectors of Ml1 with the smallest Euclidean distance to mi and the s vectors of Ml2 with the largest Euclidean distance to mi; if mi ∈ Ml2, select the s vectors of Ml2 with the smallest Euclidean distance to mi and the s vectors of Ml1 with the largest Euclidean distance to mi; that is, for mi, form the neighbor matrix Qi = [q1, q2, ..., qs, ..., q2s], where [q1, q2, ..., qs] are the s vectors with the smallest Euclidean distance, [qs+1, qs+2, ..., q2s] are the s vectors with the largest Euclidean distance, and s = min(l1, l2);
(3.4) calculate the weight corresponding to each feature of object i from the near-hit and near-miss correlation statistics, namely
wfj = diffrj − diffwj
where wfj is the weight corresponding to feature fj, diffrj is the near-miss correlation statistic of feature fj, and diffwj is the near-hit correlation statistic of feature fj; the expressions for diffrj and diffwj are as follows:
[Equation images: diffrj and diffwj, computed from the entries Qi(a, j) and Qi(b, j) of the neighbor matrix]
where s = min(l1, l2), i denotes object i, and j denotes the j-th feature; for a = 1, 2, 3, ..., s, Qi(a, j) ranges over [q1, q2, ..., qs], and for a = s+1, s+2, ..., 2s, Qi(a, j) ranges over [qs+1, qs+2, ..., q2s]; likewise, for b = 1, 2, 3, ..., s, Qi(b, j) ranges over [q1, q2, ..., qs], and for b = s+1, s+2, ..., 2s, Qi(b, j) ranges over [qs+1, qs+2, ..., q2s];
(3.5) from the weights of the features of each object in the feature subspace Ml calculated in step (3.4), select the e change features with the largest weights, then select these e change features from each of the N objects and combine them with the original change vector set Md to obtain the matrix X, expressed as the N×e matrix whose i-th row is Xi:
X = [X1, X2, ..., XN]ᵀ
where Xi = [xi1 xi2 ... xie], i = 1, 2, ..., N; the matrix element xij denotes the j-th of the e change features of object i, and the value of e is less than the total number of features contained in an object.
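A sketch of the neighbor selection and weighting of steps (3.2)–(3.4). The exact expressions for diffrj and diffwj exist only as images in the source; the sketch assumes they are mean absolute per-feature differences over the s farthest other-set neighbors and the s nearest same-set neighbors, respectively, consistent with the stated weight wfj = diffrj − diffwj. Treat it as an illustration under those assumptions, not the patent's exact statistic.

    import numpy as np

    def relief_weights(Ml1: np.ndarray, Ml2: np.ndarray) -> np.ndarray:
        """Steps (3.2)-(3.4): feature weights over the training subsets M_l1, M_l2.

        Ml1, Ml2 : (l1 x S) and (l2 x S) arrays of change vectors.
        Returns the weight w_fj of every feature, averaged over the objects.
        diffr/diffw are assumed mean absolute differences (see lead-in).
        """
        Ml = np.vstack([Ml1, Ml2])
        n, S = Ml.shape
        own = [Ml1 if i < len(Ml1) else Ml2 for i in range(n)]    # same-set of m_i
        other = [Ml2 if i < len(Ml1) else Ml1 for i in range(n)]  # other set
        s = min(len(Ml1), len(Ml2))
        w = np.zeros(S)
        for i in range(n):
            m = Ml[i]
            d_own = np.linalg.norm(own[i] - m, axis=1)            # (3.2) distances
            d_oth = np.linalg.norm(other[i] - m, axis=1)
            hits = own[i][np.argsort(d_own)[:s]]                  # (3.3) s nearest, same set
            misses = other[i][np.argsort(d_oth)[-s:]]             # s farthest, other set
            diffw = np.abs(hits - m).mean(axis=0)                 # assumed near-hit statistic
            diffr = np.abs(misses - m).mean(axis=0)               # assumed near-miss statistic
            w += diffr - diffw                                    # (3.4) w_fj contribution
        return w / n

    # Step (3.5): keep the e features with the largest weights, e.g.
    #   top_e = np.argsort(relief_weights(Ml1, Ml2))[::-1][:e]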
7. The robust remote sensing image change detection method according to any one of claims 1 to 5, characterized in that step (4) performs feature dimension reduction on the matrix X using the PCA algorithm to obtain the transformed change feature matrix Y; the method is as follows:
(4.1) from the matrix X obtained in step (3), calculate the mean μi of each row of the matrix, i = 1, 2, ..., N:
μi = E(Xi) = (1/e) Σj=1,...,e xij
where j = 1, 2, ..., e and E(Xi) denotes the mathematical expectation;
(4.2) from the means μi, calculate the covariance matrix C of the matrix X, namely:
C = [cov(Xi, Xr)]N×N
cov(Xi, Xr) = E[(Xi − μi)(Xr − μr)] = (1/e) Σj=1,...,e (xij − μi)(xrj − μr)
where i = 1, 2, ..., N, r = 1, 2, ..., N, j = 1, 2, ..., e;
(4.3) calculate the eigenvalues λ1, λ2, ..., λN of the covariance matrix C and the corresponding eigenvectors v1, v2, ..., vN, i.e. λivi = Cvi, i = 1, 2, ..., N;
(4.4) sort the eigenvalues λ1, λ2, ..., λN in descending order to obtain λ′1, λ′2, ..., λ′N, with corresponding eigenvectors v′1, v′2, ..., v′N; take the sorted eigenvalues to form a diagonal matrix P;
P = diag(λ′1, λ′2, ..., λ′N)
(4.5) calculate the projection of the matrix X onto the new eigenvectors v′1, v′2, ..., v′N to obtain the dimension-reduced change feature matrix Y, namely:
Y = X*P = [y1, y2, ..., yN]
where y1, y2, ..., yN represent the change features of the objects.
8. The robust remote sensing image change detection method according to any one of claims 1 to 5, characterized in that step (6) divides the dense point set Y′ into an unchanged class and a changed class, sets the gray values of the objects in the difference map corresponding to the elements of the unchanged class to 255 and those corresponding to the elements of the changed class to 0, and outputs the change detection result map according to the objects' gray values; the method is as follows:
(6.1) in the dense point set Y′ = [y′1, y′2, ..., y′η], η is the number of element points in Y′, η ≤ N, and y′η represents the feature of an object after the noise-corrupted data have been removed; select the element point with the largest density value as the first initial cluster center C1; calculate the differences between the feature values of the other element points in Y′ and the feature value of C1, and select the element point whose feature value differs most from that of C1, i.e. the point with the largest absolute difference, as the second initial cluster center C2; the feature values of C1 and C2 are c1 and c2, and the cluster centers C1 and C2 correspond to the classes D1 and D2;
(6.2) calculate the differences between each element point y′θ of the dense point set Y′ and the feature values of the two cluster centers, namely
dθ1 = |y′θ − c1|
dθ2 = |y′θ − c2|
where dθ1 denotes the absolute value of the difference between the element point y′θ and the cluster center C1, dθ2 denotes the absolute value of the difference between y′θ and the cluster center C2, and θ = 1, 2, ..., η; assign each element point of the dense point set Y′ to the class of the cluster center with the smaller absolute difference;
(6.3) calculate the averages c′1 and c′2 of the feature values of the element points in the classes D1 and D2, namely
c′1 = (1/|D1|) Σy′∈D1 y′
c′2 = (1/|D2|) Σy′∈D2 y′
where c′1 and c′2 are the averages of the feature values of the element points in the classes D1 and D2, respectively; |D1| and |D2| denote the numbers of element points in D1 and D2, and y′ ranges over the feature values of the element points within each class; if c′1 ≠ c1, update the feature value c1 of the cluster center C1 of class D1 to c′1; if c′2 ≠ c2, update the feature value c2 of the cluster center C2 of class D2 to c′2;
(6.4) repeat steps (6.2)–(6.3) until the feature values of both cluster centers no longer change;
(6.5) divide the two classes D1 and D2 into an unchanged class and a changed class, where the class with the smaller average is the unchanged class and the class with the larger average is the changed class; set the gray values of the objects in the difference map corresponding to the elements of the unchanged class to 255 and those corresponding to the elements of the changed class to 0;
(6.6) output the detection result map according to the objects' gray values, in which the changed image regions are marked.