CN110276746B - Robust remote sensing image change detection method - Google Patents


Info

Publication number
CN110276746B
CN110276746B (application CN201910449743.XA)
Authority
CN
China
Legal status: Active
Application number
CN201910449743.XA
Other languages
Chinese (zh)
Other versions
CN110276746A (en
Inventor
王鑫
张香梁
段林
宁晨
马贞立
Current Assignee
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201910449743.XA
Publication of CN110276746A
Application granted
Publication of CN110276746B

Classifications

    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F18/211: Selection of the most significant subset of features
    • G06F18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F18/2321: Non-hierarchical clustering using statistics or function optimisation, e.g. modelling of probability density functions
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]
    • G06T2207/10032: Satellite or aerial image; Remote sensing
    • G06V10/58: Extraction of image or video features relating to hyperspectral data


Abstract

The invention discloses a robust remote sensing image change detection method comprising the following steps: acquire remote sensing images of a target area at two time phases; preprocess the two images and extract image features; select and screen the image features with the Relief algorithm; reduce the feature dimensionality with the PCA algorithm to obtain the final preferred change feature vectors; and classify with the NR-Kmeans method to obtain the change detection result. The invention improves the traditional k-means clustering algorithm and achieves higher detection precision and accuracy.

Description

Robust remote sensing image change detection method
Technical Field
The invention belongs to the technical field of image change detection, and particularly relates to a robust remote sensing image change detection method.
Background
Remote sensing is, literally, sensing at a distance. The term was first introduced in a U.S. scientific research project in 1960 and was formally adopted at the first academic symposium on environmental remote sensing, held in 1962 by institutions including the University of Michigan. Today remote sensing technology is applied in ever more fields: monitoring dynamic changes of forests and vegetation, post-disaster analysis and assessment of natural disasters, land-use change analysis, farmland monitoring, real-time monitoring of urban change, crop growth analysis, and dynamic monitoring of military strategic targets such as airports and roads, greatly promoting economic and social development.
Remote sensing image change detection belongs to the field of remote sensing image processing: remote sensing images of the same place taken at different times are analyzed and processed to obtain change information. Scholars at home and abroad have studied the remote sensing image change detection problem extensively and proposed a variety of change detection methods. Feature-based change detection methods detect changes in the remote sensing image using its spectral, texture, and spatial structure features. Because they exploit the rich feature information of the image, feature-based methods offer better robustness to noise and better resistance to illumination and radiation interference.
Because a remote sensing image is affected by illumination, the atmospheric environment, and other factors, the features extracted from it are affected as well. Moreover, noise interference produces sparse data that makes the classification results of traditional clustering algorithms insufficiently accurate. Selecting representative features and classifying them accurately are therefore key to remote sensing image change detection.
Disclosure of Invention
The purpose of the invention: to address the above problems, the invention provides a robust remote sensing image change detection method. It is a feature-based remote sensing image change detection method: image features are selected and screened with the Relief algorithm, their dimensionality is reduced with the PCA algorithm, and the NR-Kmeans algorithm then eliminates the influence of noise interference points on the detection result before the result is output. By eliminating noise interference points, the invention detects changes more accurately.
The technical scheme: to realize the purpose of the invention, the adopted technical scheme is as follows. A robust remote sensing image change detection method comprises the following steps:
(1) acquiring remote sensing images of the target area at times T1 and T2, where the images meet the resolution requirement and contain clear texture feature information;
(2) preprocessing the image data with ENVI software to eliminate shooting errors; performing multi-scale segmentation of the preprocessed T1 and T2 images with the eCognition software platform, dividing each image into N image blocks to obtain N objects per image; obtaining the spectral and texture features of the T1 and T2 images, constructing feature vectors, and obtaining the change vector set M_d = [m_1, m_2, ..., m_N], where m_i is the change vector corresponding to the i-th object; arranging the elements of M_d according to the arrangement of the N objects in the T1 (or T2) image to form a difference image;
(3) obtaining a feature subspace M_l of the change vector set M_d according to the Relief feature selection algorithm; selecting the e change features with the largest weights according to the feature weights of each object, where e is smaller than the total number of features per object; then selecting those e change features from each of the N objects and combining them with the change vector set M_d to obtain a matrix X;
(4) reducing the feature dimensionality of the matrix X with the PCA algorithm to obtain the transformed change feature matrix Y = [y_1, y_2, ..., y_N];
(5) classifying the elements y_i of the change feature matrix Y with the NR-Kmeans algorithm and removing the noise-disturbed sparse data from Y to obtain a dense point set Y';
(6) dividing the dense point set Y' into an unchanged class and a changed class, setting the gray value of the objects in the difference image corresponding to unchanged-class elements to 255 and that of objects corresponding to changed-class elements to 0, and outputting the change detection result image according to the object gray values, with the changed areas marked in the image.
Further, in step (2), the preprocessing, multi-scale segmentation, and feature extraction are carried out as follows:
(2.1) separately applying geometric correction, image registration, and atmospheric correction to the T1 and T2 images with ENVI software to realize the image data preprocessing;
(2.2) performing multi-scale segmentation of the preprocessed T1 and T2 images with the eCognition software platform, dividing each image into N image blocks to obtain N objects per image;
(2.3) computing the Mean-Std features of the preprocessed images as their spectral features; extracting the texture features of the preprocessed images with the gray-level co-occurrence matrix;
(2.4) constructing feature vectors from the spectral and texture features extracted from the two preprocessed images and obtaining the change vector set M_d.
Further, in step (2.3) the Mean-Std features of the image are computed as its spectral features, as follows:
For each object of the image, the mean Mean and standard deviation Std of the spectral features are calculated according to the following formulas:

Mean = (1/A) Σ_{i=1}^{A} c_i

Std = sqrt( (1/A) Σ_{i=1}^{A} (c_i - Mean)² )

where Mean is the mean gray level of the pixels in the object, Std is the standard deviation of those gray levels, A is the number of pixels in the object, and c_i is the gray level of the i-th pixel in the object.
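As a quick numerical sketch, the Mean-Std feature of one object can be computed in a few lines of numpy (the function name and the example pixel values are illustrative, not from the patent):

```python
import numpy as np

def mean_std_features(pixels):
    """Mean-Std spectral feature of one segmented object.

    `pixels` holds the gray levels c_i of the A pixels inside the
    object; Mean = (1/A) * sum(c_i), and Std is the population
    standard deviation, matching the formulas above.
    """
    c = np.asarray(pixels, dtype=float)
    mean = c.mean()
    std = np.sqrt(((c - mean) ** 2).mean())
    return mean, std
```

`np.std` with its default `ddof=0` gives the same Std; the explicit form mirrors the formula.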
Further, in step (2.3) the texture features of the image are extracted with the gray-level co-occurrence matrix, as follows:
(2.3.1) any pixel (τ1, τ2) in the image and another pixel (τ1+Δ1, τ2+Δ2) form a point pair, where Δ1, Δ2 are integers; let the gray value of pixel (τ1, τ2) be f1 and that of (τ1+Δ1, τ2+Δ2) be f2, so the gray value of the point pair is (f1, f2); if the maximum gray level of the image is L, the combinations of f1 and f2 number L × L in total;
(2.3.2) count the number of occurrences of each distinct (f1, f2) pair of values in the image and arrange the counts into a matrix, so that the element at matrix position (f1, f2) is the number of occurrences of the point pair with those gray values;
(2.3.3) normalize the count of each distinct (f1, f2) pair by the total number of point pairs to obtain the occurrence probability g(f1, f2); the normalized square matrix is called the gray-level co-occurrence matrix;
(2.3.4) the gray-level co-occurrence matrix provides 14 statistics usable as texture features; 6 of them are selected here: contrast, entropy, energy, homogeneity, dissimilarity, and correlation. Contrast reflects the clarity of the image and the depth of the texture grooves; entropy represents the non-uniformity or complexity of the texture in the image; energy reflects the uniformity of the gray-level distribution and the coarseness of the texture; homogeneity reflects the homogeneity of the image texture and measures its local variation; dissimilarity measures local gray-level differences; correlation reflects the consistency of the image texture. These 6 statistics characterize the image from different aspects, so they are selected as the texture features;
(2.3.5) let g(i, j) be the element at position (i, j) of the gray-level co-occurrence matrix and L the maximum gray level of the image; the texture features are expressed as follows:

Con = Σ_{i=1}^{L} Σ_{j=1}^{L} (i - j)² g(i, j)

Ent = - Σ_{i=1}^{L} Σ_{j=1}^{L} g(i, j) log g(i, j)

Ene = Σ_{i=1}^{L} Σ_{j=1}^{L} g(i, j)²

Hom = Σ_{i=1}^{L} Σ_{j=1}^{L} g(i, j) / (1 + (i - j)²)

Dis = Σ_{i=1}^{L} Σ_{j=1}^{L} |i - j| g(i, j)

Cor = Σ_{i=1}^{L} Σ_{j=1}^{L} (i - μ_i)(j - μ_j) g(i, j) / (σ_i σ_j)

where Con is the contrast, Ent the entropy, Ene the energy, Hom the homogeneity, Dis the dissimilarity, and Cor the correlation, and the marginal means and standard deviations are

μ_i = Σ_{i=1}^{L} Σ_{j=1}^{L} i · g(i, j),  μ_j = Σ_{i=1}^{L} Σ_{j=1}^{L} j · g(i, j)

σ_i² = Σ_{i=1}^{L} Σ_{j=1}^{L} (i - μ_i)² g(i, j),  σ_j² = Σ_{i=1}^{L} Σ_{j=1}^{L} (j - μ_j)² g(i, j)
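A minimal numpy sketch of steps (2.3.1)-(2.3.5): build the co-occurrence counts for one offset (Δ1, Δ2), normalize them to g(f1, f2), and evaluate the six statistics (the function names and the toy image are illustrative):

```python
import numpy as np

def glcm(img, d1, d2, levels):
    """Gray-level co-occurrence matrix for offset (d1, d2), normalized
    to the joint probabilities g(f1, f2); `img` holds integer gray
    levels in [0, levels)."""
    g = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + d1, c + d2
            if 0 <= r2 < rows and 0 <= c2 < cols:
                g[img[r, c], img[r2, c2]] += 1
    return g / g.sum()

def texture_features(g):
    """The six statistics selected in step (2.3.4); assumes the image
    is not constant (otherwise the correlation denominator is zero)."""
    L = g.shape[0]
    i, j = np.indices((L, L))
    mu_i, mu_j = (i * g).sum(), (j * g).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * g).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * g).sum())
    nz = g > 0                                  # avoid log(0) in the entropy
    return {
        "Con": ((i - j) ** 2 * g).sum(),
        "Ent": -(g[nz] * np.log(g[nz])).sum(),
        "Ene": (g ** 2).sum(),
        "Hom": (g / (1.0 + (i - j) ** 2)).sum(),
        "Dis": (np.abs(i - j) * g).sum(),
        "Cor": ((i - mu_i) * (j - mu_j) * g).sum() / (s_i * s_j),
    }
```

For production use, `skimage.feature.graycomatrix` computes the same counts far faster; the explicit loops here only mirror the definition.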
Further, in step (2.4) the change vector set M_d is obtained from the spectral and texture features extracted from the two preprocessed images, as follows:
(2.4.1) from the spectral and texture features of the T1 and T2 images obtained in step (2.3), construct the feature vectors and calculate the difference between the corresponding feature vectors of the two images; the difference M_d(i, j) of the j-th feature of the i-th object between the two images is:

M_d(i, j) = M_d,1(i, j) - M_d,2(i, j)

where M_d,1(i, j) is the j-th feature of the i-th object of the T1 image and M_d,2(i, j) is the j-th feature of the i-th object of the T2 image; i ≤ N, where N is the total number of objects in the image; j ≤ S_feature, where S_feature is the total number of features per object; S_feature = S_band × 8, where S_band is the total number of bands of the object.
From the differences M_d(i, j), the change vector m_i corresponding to the i-th object is:

m_i = (M_d(i, 1), M_d(i, 2), ..., M_d(i, S_feature))

(2.4.2) repeat step (2.4.1), calculating the feature-vector difference for each pair of corresponding objects in the two images to obtain the change vector of every object; together they form the change vector set

M_d = [m_1, m_2, ..., m_N]

The elements of M_d are arranged according to the arrangement of the N objects in the T1 and T2 images to form the difference image.
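Once the per-object features of both dates are stacked into arrays, step (2.4) reduces to an element-wise difference; a small sketch (the array names are illustrative):

```python
import numpy as np

def change_vector_set(F1, F2):
    """M_d from step (2.4): F1[i, j] and F2[i, j] hold the j-th
    feature of object i in the T1 and T2 images (shape (N, S_feature));
    row i of the result is the change vector m_i."""
    return np.asarray(F1, dtype=float) - np.asarray(F2, dtype=float)
```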
Further, in step (3) the feature subspace M_l of the change vector set M_d is obtained according to the Relief feature selection algorithm; the e change features with the largest weights are selected according to the feature weights of each object, where e is smaller than the total number of features per object; those e change features are then selected from each of the N objects and combined with the change vector set M_d to obtain a matrix X. The steps are as follows:
(3.1) the change vector set M_d is also called the feature space; extract from the feature space M_d = [m_1, m_2, m_3, ..., m_N] a subspace M_l = [m_1, m_2, m_3, ..., m_n] as a training set, where n < N; using human interpretation, divide the feature subspace M_l into two sets M_l1 and M_l2, where M_l1 contains l1 elements, M_l2 contains l2 elements, l1 + l2 = n, M_l1 ⊂ M_l, and M_l2 ⊂ M_l;
(3.2) calculate the Euclidean distance between the change vectors of every two elements of M_l to form a distance matrix d, whose entry d(m_i, m_j) is the Euclidean distance between the change vectors m_i and m_j;
(3.3) in the feature subspace M_l, for the change vector m_i of object i, i = 1, 2, 3, ..., n: if m_i ∈ M_l1, select the s vectors of M_l1 with the smallest Euclidean distance to m_i and the s vectors of M_l2 with the largest Euclidean distance to m_i; if m_i ∈ M_l2, select the s vectors of M_l2 with the smallest Euclidean distance to m_i and the s vectors of M_l1 with the largest Euclidean distance to m_i; that is, obtain for m_i a neighbor matrix Q_i = [q_1, q_2, ..., q_s, ..., q_2s], where [q_1, q_2, ..., q_s] are the s vectors with the smallest Euclidean distance, [q_{s+1}, q_{s+2}, ..., q_2s] are the s vectors with the largest Euclidean distance, and s = min(l1, l2);
(3.4) calculate the weight of each feature of object i from the near-hit statistic and the near-miss statistic:

w_j = diffw_j - diffr_j

where w_j is the weight of feature f_j, diffr_j is the near-hit statistic of feature f_j, and diffw_j is the near-miss statistic of feature f_j; diffr_j and diffw_j are calculated as follows:

diffr_j = Σ_{a=1}^{s} |m_i(j) - Q_i(a, j)|

diffw_j = Σ_{b=s+1}^{2s} |m_i(j) - Q_i(b, j)|

where s = min(l1, l2), i denotes object i, j denotes the j-th feature, and m_i(j) is the j-th feature of the change vector m_i; when a = 1, 2, 3, ..., s, Q_i(a, j) ranges over the j-th features of [q_1, q_2, ..., q_s], and when a = s+1, s+2, ..., 2s it ranges over those of [q_{s+1}, q_{s+2}, ..., q_2s]; likewise for the index b;
(3.5) from the weights of the features of each object calculated in step (3.4) in the feature subspace M_l, select the e change features with the largest weights; then select those e change features from each of the N objects and combine them with the original change vector set M_d to obtain the matrix X, represented as follows:

X = [X_1; X_2; ...; X_N], with rows X_i = [x_i1  x_i2  ...  x_ie], i = 1, 2, ..., N

where the matrix element x_ij is the j-th of the e change features of object i and e is smaller than the total number of features per object.
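The Relief-style weighting of step (3) can be sketched as follows, under the reconstruction that for each training vector the s closest same-class vectors are the near hits and the s farthest other-class vectors the near misses (function and variable names are illustrative):

```python
import numpy as np

def relief_weights(M, labels, s):
    """Accumulate, per feature j, (near-miss difference) minus
    (near-hit difference) over all training vectors.  `M` is (n, S),
    `labels` holds the two classes from the human interpretation of
    M_l; assumes s is at most the smaller class size minus 1."""
    M = np.asarray(M, dtype=float)
    labels = np.asarray(labels)
    n, S = M.shape
    w = np.zeros(S)
    # pairwise Euclidean distances between change vectors, step (3.2)
    dist = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=2)
    for i in range(n):
        same = np.where(labels == labels[i])[0]
        same = same[same != i]
        other = np.where(labels != labels[i])[0]
        hits = same[np.argsort(dist[i, same])[:s]]        # s closest same-class
        misses = other[np.argsort(dist[i, other])[-s:]]   # s farthest other-class
        w += (np.abs(M[misses] - M[i]).sum(axis=0)
              - np.abs(M[hits] - M[i]).sum(axis=0))
    return w
```

Features that separate the two classes accumulate large positive weights; noise features hover near zero, so keeping the e largest weights matches step (3.5).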
Further, in step (4) the PCA algorithm performs feature dimensionality reduction on the matrix X to obtain the transformed change feature matrix Y, as follows:
(4.1) from the matrix X obtained in step (3), calculate the mean of each row:

μ_i = E(X_i) = (1/e) Σ_{j=1}^{e} x_ij,  i = 1, 2, ..., N

where j = 1, 2, ..., e and E(X_i) denotes the mathematical expectation;
(4.2) from the means μ_i, calculate the covariance matrix C of the matrix X, whose entries are

C(i, r) = (1/e) Σ_{j=1}^{e} (x_ij - μ_i)(x_rj - μ_r)

where i = 1, 2, ..., N, r = 1, 2, ..., N, j = 1, 2, ..., e; from this formula, the covariance matrix C is a real symmetric matrix;
(4.3) calculate the eigenvalues λ_1, λ_2, ..., λ_N of the covariance matrix C and the corresponding eigenvectors v_1, v_2, ..., v_N, i.e. C v_i = λ_i v_i, i = 1, 2, ..., N;
(4.4) sort the eigenvalues λ_1, λ_2, ..., λ_N in descending order to obtain λ'_1, λ'_2, ..., λ'_N with corresponding eigenvectors v'_1, v'_2, ..., v'_N, and form the transformation matrix P = [v'_1, v'_2, ..., v'_N] from the sorted eigenvectors;
(4.5) project the matrix X onto the new eigenvectors v'_1, v'_2, ..., v'_N to obtain the dimensionality-reduced change feature matrix:

Y = X P = [y_1, y_2, ..., y_N]

where y_1, y_2, ..., y_N represent the change features of the objects.
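A generic numpy sketch of step (4): center the rows, form the covariance across rows, eigendecompose it, sort the eigenpairs by descending eigenvalue, and project (this follows standard PCA conventions rather than the patent's exact matrix shapes):

```python
import numpy as np

def pca_transform(X):
    """PCA along the lines of step (4); rows of the result are the
    projections onto the sorted eigenvectors."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=1, keepdims=True)   # row means, step (4.1)
    Xc = X - mu
    C = (Xc @ Xc.T) / X.shape[1]         # real symmetric covariance, step (4.2)
    vals, vecs = np.linalg.eigh(C)       # eigenpairs of C, step (4.3)
    order = np.argsort(vals)[::-1]       # descending eigenvalues, step (4.4)
    P = vecs[:, order]
    return P.T @ Xc                      # projection, step (4.5)
```

Because the projection is orthogonal, the total variance is preserved while the leading rows concentrate it, which is what makes truncating to the first few components a lossless-as-possible reduction.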
Further, in step (5) the NR-Kmeans algorithm classifies the elements y_i of the change feature matrix Y and removes the noise-disturbed sparse data from Y to obtain a dense point set Y', as follows:
(5.1) in the matrix Y = [y_1, y_2, ..., y_N], for each element point y_α, y_α ∈ Y, given the neighborhood radius δ, find the element points y_α^(f) satisfying ||y_α^(f) - y_α|| ≤ δ; they form the set Q_b(y_α) = {y_α^(1), y_α^(2), ..., y_α^(b)} of nearest-neighbor element points of y_α, where y_α^(f) denotes a nearest-neighbor element point of y_α, b is the number of nearest-neighbor element points within the neighborhood of y_α, and f = 1, 2, ..., b;
(5.2) calculate the density function value of the element point y_α; the density function value of y_α is the sum, over all nearest-neighbor element points within the neighborhood radius δ, of a kernel function of their distance to y_α:

DF(y_α) = Σ_{f=1}^{b} φ(||y_α - y_α^(f)||)

where y_α^(f), f = 1, 2, ..., b, are the nearest-neighbor element points of y_α and φ is a kernel function of the distance;
(5.3) repeat steps (5.1)-(5.2) until the density function values of all element points in the matrix Y have been calculated;
(5.4) when the density function value of y_β is greater than or equal to the average density value of all element points in the matrix Y, place the element point y_β into the dense point set Y' as available data; that is, y_β meets the condition

DF(y_β) ≥ (1/N) Σ_{α=1}^{N} DF(y_α)

where DF(y_β) is the density function value of y_β and DF(y_α) is the density function value of the data point y_α;
(5.5) all the available data in the matrix Y obtained by step (5.4) form the dense point set Y'; Y' is the set of element points from which the noise-disturbed sparse data has been removed.
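The density-based filtering can be sketched as follows, with a Gaussian kernel assumed for the per-neighbor function (the patent only requires a kernel of the distance); the function also returns the density values, since step (6) reuses them to seed the clustering:

```python
import numpy as np

def dense_points(Y, delta):
    """Noise-removal step (5): keep the points whose density function
    value is at least the average density.  `Y` is (N, dim)."""
    Y = np.asarray(Y, dtype=float)
    dist = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    inside = (dist <= delta) & (dist > 0)   # neighbors within radius delta
    # DF(y_alpha): Gaussian kernel summed over the neighbors (assumed kernel)
    df = np.where(inside, np.exp(-dist ** 2 / (2 * delta ** 2)), 0.0).sum(axis=1)
    return Y[df >= df.mean()], df
```

An isolated outlier has no neighbors within δ, so its density is zero and it falls below the average-density threshold.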
Further, in step (6) the dense point set Y' is divided into an unchanged class and a changed class, the gray value of the objects in the difference image corresponding to unchanged-class elements is set to 255 and that of objects corresponding to changed-class elements to 0, and the change detection result image is output according to the object gray values, as follows:
(6.1) from the dense point set Y' = [y'_1, y'_2, ..., y'_η], where η is the number of element points in Y', η ≤ N, and y'_η represents a feature of an object from which the noise-disturbed data has been removed, select the element point with the largest density value as the first initial cluster center C_1; calculate the differences between the feature values of the other element points of Y' and the feature value of the cluster center C_1, and select the element point whose feature value differs most from that of C_1, i.e. the point with the largest absolute difference, as the second initial cluster center C_2; the feature values corresponding to C_1 and C_2 are c_1 and c_2, and the cluster centers C_1 and C_2 correspond to the classes D_1 and D_2;
(6.2) calculate the differences between the feature value of each element point y'_θ of the dense point set Y' and the feature values of the two cluster centers:

d_θ1 = |y'_θ - c_1|
d_θ2 = |y'_θ - c_2|

where d_θ1 is the absolute feature difference between the element point y'_θ and the cluster center C_1, d_θ2 is that between y'_θ and the cluster center C_2, and θ = 1, 2, ..., η; each element point of Y' is assigned to the class of the cluster center with the smaller absolute difference;
(6.3) calculate the averages c'_1 and c'_2 of the feature values of the element points of the classes D_1 and D_2:

c'_1 = (1/|D_1|) Σ_{y' ∈ D_1} y'
c'_2 = (1/|D_2|) Σ_{y' ∈ D_2} y'

where c'_1 and c'_2 are the averages of the feature values of the element points of the classes D_1 and D_2, |D_1| is the number of element points in class D_1, |D_2| is the number in class D_2, and y' is the feature value of an element point of the class; if c'_1 ≠ c_1, the feature value c_1 of the cluster center C_1 of class D_1 is updated to c'_1; if c'_2 ≠ c_2, the feature value c_2 of the cluster center C_2 of class D_2 is updated to c'_2;
(6.4) repeat steps (6.2)-(6.3) until the feature values of the two cluster centers no longer change;
(6.5) the two classes D_1 and D_2 are divided into the unchanged class and the changed class: the class with the smaller average is the unchanged class and the class with the larger average is the changed class; the gray value of the objects in the difference image corresponding to unchanged-class elements is set to 255 and that of objects corresponding to changed-class elements to 0;
(6.6) the detection result image is output according to the object gray values, with the changed areas marked in the image.
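Putting steps (6.1)-(6.4) together for one-dimensional change features: the densest point of Y' seeds the first center, the farthest feature value seeds the second, and ordinary two-class k-means updates then run to convergence (a sketch; the density values come from step (5), and names are illustrative):

```python
import numpy as np

def two_class_kmeans(values, densities):
    """Two-class clustering of step (6) on scalar change features."""
    values = np.asarray(values, dtype=float)
    c1 = values[np.argmax(densities)]             # highest-density seed, step (6.1)
    c2 = values[np.argmax(np.abs(values - c1))]   # farthest feature value
    while True:
        near1 = np.abs(values - c1) <= np.abs(values - c2)  # assignment, step (6.2)
        n1 = values[near1].mean()                           # class means, step (6.3)
        n2 = values[~near1].mean()
        if n1 == c1 and n2 == c2:                           # converged, step (6.4)
            break
        c1, c2 = n1, n2
    # smaller-mean class is "unchanged" (gray 255), larger is "changed" (gray 0)
    unchanged = near1 if c1 <= c2 else ~near1
    return unchanged, (min(c1, c2), max(c1, c2))
```

Seeding from the densest point and its farthest counterpart is what replaces the random initialization of plain k-means, as noted in the beneficial effects.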
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
(1) to reduce the interference of noise on the data, density-sparse points are eliminated with a density distribution function, removing the noise interference points;
(2) the initial cluster centers are selected from the high-density data set in combination with the farthest-distance principle, overcoming the drawback of the original k-means algorithm of selecting the initial cluster centers at random.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of the NR-Kmeans algorithm.
Detailed Description
The technical scheme of the invention is further explained below with reference to the drawings and an embodiment.
The robust remote sensing image change detection method of the invention, as shown in FIG. 1, comprises the following steps:
(1) acquiring remote sensing images of the target area at times T1 and T2, where the images meet the resolution requirement and contain clear texture feature information; here, images of an area of Hefei City are selected, covering the area's farmland crops;
(2) preprocessing the image data with ENVI software to eliminate shooting errors; performing multi-scale segmentation of the preprocessed T1 and T2 images with the eCognition software platform, dividing each image into N image blocks to obtain N objects per image; obtaining the spectral and texture features of the T1 and T2 images, constructing feature vectors, and obtaining the change vector set M_d = [m_1, m_2, ..., m_N], where m_i is the change vector corresponding to the i-th object; arranging the elements of M_d according to the arrangement of the N objects in the T1 (or T2) image to form a difference image;
(3) obtaining a feature subspace M_l of the change vector set M_d according to the Relief feature selection algorithm; selecting the e change features with the largest weights according to the feature weights of each object, where e is smaller than the total number of features per object; then selecting those e change features from each of the N objects and combining them with the change vector set M_d to obtain a matrix X;
(4) reducing the feature dimensionality of the matrix X with the PCA algorithm to obtain the transformed change feature matrix Y = [y_1, y_2, ..., y_N];
(5) classifying the elements y_i of the change feature matrix Y with the NR-Kmeans algorithm and removing the noise-disturbed sparse data from Y to obtain a dense point set Y';
(6) dividing the dense point set Y' into an unchanged class and a changed class, setting the gray value of the objects in the difference image corresponding to unchanged-class elements to 255 and that of objects corresponding to changed-class elements to 0, and outputting the change detection result image according to the object gray values, with the changed areas marked in the image.
In step (2), the image data are preprocessed with ENVI software to eliminate acquisition errors; multi-scale segmentation is performed on the preprocessed T1- and T2-time images with the eCognition software platform, dividing each of the two images into N image blocks to obtain N objects per image; the spectral features and texture features of the T1- and T2-time images are extracted separately and feature vectors are constructed to obtain the change vector set; the steps are as follows:
(2.1) using ENVI software to perform geometric correction, image registration and atmospheric correction on the T1- and T2-time images respectively, completing the image data preprocessing; both images are 512 × 512 pixels;
(2.2) using the eCognition software platform to perform multi-scale segmentation on the preprocessed T1- and T2-time images, dividing each of the two images into N image blocks to obtain N objects per image;
(2.3) computing the Mean-Std features of the preprocessed images as their spectral features, and extracting the texture features of the preprocessed images with the grey-level co-occurrence matrix;
(2.4) constructing feature vectors from the spectral and texture features extracted from the two preprocessed images to obtain the change vector set M_d.
In step (2.3), the Mean-Std features of the image are computed as its spectral features; the steps are as follows:
For each object of the image, the mean Mean and standard deviation Std of the spectral features are computed as

Mean = (1/A) · Σ_{p=1..A} c_p

Std = sqrt( (1/A) · Σ_{p=1..A} (c_p − Mean)^2 )

where Mean is the mean grey level of the pixels in the object, Std is the standard deviation of the pixel grey levels in the object, A is the number of pixels in the object, and c_p is the grey level of the p-th pixel in the object.
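As a minimal illustrative sketch (our own NumPy code, not part of the patent; the function name and the flat single-band pixel array are assumptions), the Mean-Std features of one segmented object can be computed as:

```python
import numpy as np

def mean_std_features(object_pixels):
    """Mean-Std spectral features of one segmented object.

    object_pixels: 1-D array of the A pixel grey levels c_p of the object
    (in practice computed per object and per spectral band).
    """
    c = np.asarray(object_pixels, dtype=float)
    A = c.size
    mean = c.sum() / A                          # Mean = (1/A) * sum(c_p)
    std = np.sqrt(((c - mean) ** 2).sum() / A)  # Std = sqrt((1/A) * sum((c_p - Mean)^2))
    return mean, std
```

Note the population form (divide by A, matching the formula above) rather than the sample form (A − 1).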
In step (2.3), the grey-level co-occurrence matrix is used to extract the texture features of the image; the steps are as follows:
(2.3.1) any pixel (τ1, τ2) in the image and another pixel (τ1 + Δ1, τ2 + Δ2) form a point pair, where Δ1 and Δ2 are integers; if the grey value of pixel (τ1, τ2) is f1 and that of (τ1 + Δ1, τ2 + Δ2) is f2, the grey-value pair of the point pair is (f1, f2); with L the maximum grey level of the image, the combinations of f1 and f2 form L × L groups; in this embodiment L = 16, so there are 16 × 16 combinations of f1 and f2;
(2.3.2) counting the number of occurrences of each distinct (f1, f2) pair in the image and arranging the counts into a square matrix, the element at position (f1, f2) being the number of point pairs with that pair of grey values;
(2.3.3) normalizing the count of each distinct (f1, f2) pair by the total number of point pairs into the occurrence probability g(f1, f2); the normalized square matrix is called the grey-level co-occurrence matrix;
(2.3.4) the grey-level co-occurrence matrix provides 14 statistics usable as texture features; six of them are selected here: contrast, entropy, energy, homogeneity, dissimilarity and correlation. Contrast reflects the sharpness of the image and the depth of the texture grooves; entropy expresses the non-uniformity or complexity of the texture; energy reflects the uniformity of the grey-level distribution and the coarseness of the texture; homogeneity reflects the homogeneity of the texture and measures its local variation; dissimilarity, like contrast, measures local grey-level variation, but grows linearly with the grey-level difference; correlation reflects the consistency of the texture. Because these six statistics characterize the image from complementary aspects, they are chosen as the texture features;
(2.3.5) with g(i, j) the matrix element at position (i, j) of the grey-level co-occurrence matrix and L the maximum grey level of the image, the texture features are computed as follows:

Con = Σ_i Σ_j (i − j)^2 · g(i, j)

Ent = −Σ_i Σ_j g(i, j) · log g(i, j)

Ene = Σ_i Σ_j g(i, j)^2

Hom = Σ_i Σ_j g(i, j) / (1 + (i − j)^2)

Dis = Σ_i Σ_j |i − j| · g(i, j)

Cor = Σ_i Σ_j (i − μ_x)(j − μ_y) · g(i, j) / (σ_x σ_y)

where Con denotes contrast, Ent entropy, Ene energy, Hom homogeneity, Dis dissimilarity and Cor correlation, the sums run over the L grey levels, and

μ_x = Σ_i Σ_j i · g(i, j),  μ_y = Σ_i Σ_j j · g(i, j)

σ_x = sqrt( Σ_i Σ_j (i − μ_x)^2 · g(i, j) ),  σ_y = sqrt( Σ_i Σ_j (j − μ_y)^2 · g(i, j) )
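The grey-level co-occurrence matrix and the six statistics of step (2.3.5) can be sketched in NumPy as follows (an illustrative implementation under the textbook definitions; the function name, the default displacement and the pre-quantised input are our assumptions, not the patent's):

```python
import numpy as np

def glcm_features(img, d=(0, 1), levels=16):
    """Grey-level co-occurrence matrix and six texture statistics.

    img: 2-D array of grey levels already quantised to `levels` grades.
    d = (delta1, delta2): displacement between the two pixels of a point pair.
    """
    img = np.asarray(img)
    h, w = img.shape
    counts = np.zeros((levels, levels), dtype=float)
    for z in range(h - d[0]):                     # count each (f1, f2) point pair
        for t in range(w - d[1]):
            counts[img[z, t], img[z + d[0], t + d[1]]] += 1
    g = counts / counts.sum()                     # normalised co-occurrence probabilities

    i, j = np.indices((levels, levels))
    con = np.sum((i - j) ** 2 * g)                          # contrast
    ent = -np.sum(g[g > 0] * np.log(g[g > 0]))              # entropy
    ene = np.sum(g ** 2)                                    # energy
    hom = np.sum(g / (1.0 + (i - j) ** 2))                  # homogeneity
    dis = np.sum(np.abs(i - j) * g)                         # dissimilarity
    mu_x, mu_y = np.sum(i * g), np.sum(j * g)
    sx = np.sqrt(np.sum((i - mu_x) ** 2 * g))
    sy = np.sqrt(np.sum((j - mu_y) ** 2 * g))
    cor = np.sum((i - mu_x) * (j - mu_y) * g) / (sx * sy)   # correlation
    return dict(Con=con, Ent=ent, Ene=ene, Hom=hom, Dis=dis, Cor=cor)
```

A symmetric or multi-direction GLCM (averaging several displacements d) is a common refinement.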
In step (2.4), the change vector set M_d is obtained from the spectral and texture features extracted from the two preprocessed images; the steps are as follows:
(2.4.1) from the T1- and T2-time spectral and texture features obtained in step (2.3), construct the feature vector of each image and compute the difference of the corresponding feature vectors of the two images; the difference M_d(i, j) of the j-th feature of the i-th object of the two images is:

M_d(i, j) = M_d,1(i, j) − M_d,2(i, j)

where M_d,1(i, j) is the j-th feature of the i-th object of the T1-time image and M_d,2(i, j) is the j-th feature of the i-th object of the T2-time image; i ≤ N, with N the total number of objects in the image; j ≤ S_feature, with S_feature the total number of features contained in an object; S_feature = S_band × 8, where S_band is the number of bands of the object; in this embodiment the image has 4 bands, so S_feature = 32;
From the differences M_d(i, j), the change vector m_i of the i-th object is obtained as:

m_i = (M_d(i, 1), M_d(i, 2), ..., M_d(i, 32))
(2.4.2) repeat step (2.4.1) to compute the feature-vector difference of every corresponding object pair of the two images, obtaining the change vector of each object; these form the change vector set:

M_d = [m_1, m_2, ... m_N]

The elements of M_d are arranged according to the layout of the N objects in the T1- and T2-time images to form the difference image.
In step (3), a feature subspace M_l of the change vector set M_d is extracted with the Relief feature selection algorithm; from the weight of each feature of the objects, the e change features with the largest weights are selected, e being smaller than the total number of features contained in an object; those e change features are then taken from each of the N objects and combined with the change vector set M_d to obtain a matrix X; the steps are as follows:
(3.1) the change vector set M_d is also called the feature space; extract a subspace M_l = [m_1, m_2, m_3, ... m_n] of the feature space M_d = [m_1, m_2, m_3, ... m_N] as a training set, where n < N; by human interpretation, divide the feature subspace M_l into two sets M_l1 and M_l2, where M_l1 contains l1 elements, M_l2 contains l2 elements, l1 + l2 = n, M_l1 ⊂ M_l, M_l2 ⊂ M_l;
(3.2) compute the Euclidean distance between the change vectors of every pair of elements of M_l, forming the distance matrix:

d = (d_ij), i, j = 1, 2, ..., n

where d denotes the n × n distance matrix and d_ij = ||m_i − m_j|| is the Euclidean distance between the change vectors of the elements m_i and m_j of M_l;
(3.3) in the feature subspace M_l, for the change vector m_i of object i, i = 1, 2, 3, ..., n: if m_i ∈ M_l1, select from M_l1 the s vectors with the smallest Euclidean distance to m_i and from M_l2 the s vectors with the largest Euclidean distance to m_i; if m_i ∈ M_l2, select from M_l2 the s vectors with the smallest Euclidean distance and from M_l1 the s vectors with the largest Euclidean distance; that is, build for m_i the neighbour matrix Q_i = [q_1, q_2, ..., q_s, ..., q_2s], where [q_1, q_2, ..., q_s] are the s vectors with the smallest Euclidean distance and [q_{s+1}, q_{s+2}, ..., q_{2s}] the s vectors with the largest Euclidean distance, with s = min(l1, l2);
(3.4) compute the weight of each feature of object i from the guess-right (near-hit) and guess-wrong (near-miss) neighbour statistics:

w_fj = diffr_j − diffw_j

where w_fj is the weight of feature f_j, diffr_j is the guess-wrong neighbour statistic of f_j, and diffw_j is the guess-right neighbour statistic of f_j; diffr_j and diffw_j are computed as:

diffr_j = (1/s) · Σ_{a=s+1..2s} |m_i(j) − Q_i(a, j)|

diffw_j = (1/s) · Σ_{b=1..s} |m_i(j) − Q_i(b, j)|

where s = min(l1, l2), i denotes object i, j denotes the j-th feature, and m_i(j) is the j-th feature of m_i; for a = 1, 2, 3, ..., s, Q_i(a, j) refers to [q_1, q_2, ..., q_s], and for a = s + 1, s + 2, ..., 2s, Q_i(a, j) refers to [q_{s+1}, q_{s+2}, ..., q_{2s}]; Q_i(b, j) is read in the same way;
(3.5) from the weights of each feature of every object of the feature subspace M_l computed in step (3.4), select the e change features with the largest weights, then take those e change features from each of the N objects and combine them with the original change vector set M_d to obtain the matrix X, an N × e matrix whose i-th row is:

X_i = [x_i1 x_i2 ... x_ie], i = 1, 2, ..., N

where the matrix element x_ij denotes the j-th of the e change features of object i, and the value of e is smaller than the total number of features contained in an object.
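The Relief-style weighting of steps (3.1)–(3.5) can be sketched as follows (our reading of this variant: the s nearest same-class vectors act as near-hits and the s farthest other-class vectors as near-misses; the function name and data layout are assumptions):

```python
import numpy as np

def relief_weights(M, labels, s):
    """Per-feature weights for the Relief-style selection of step (3).

    M: n x S array of change vectors of the labelled subspace M_l;
    labels: 0/1 class of each row (the human-interpreted split M_l1 / M_l2);
    s: number of near-hit / near-miss neighbours per vector.
    Returns w with w[j] = mean over objects of diffr_j - diffw_j.
    """
    M = np.asarray(M, dtype=float)
    labels = np.asarray(labels)
    n, S = M.shape
    w = np.zeros(S)
    for i in range(n):
        d = np.linalg.norm(M - M[i], axis=1)
        same = np.where((labels == labels[i]) & (np.arange(n) != i))[0]
        other = np.where(labels != labels[i])[0]
        hits = same[np.argsort(d[same])[:s]]        # s nearest same-class vectors
        misses = other[np.argsort(d[other])[-s:]]   # s farthest other-class vectors
        diffw = np.abs(M[hits] - M[i]).mean(axis=0)   # guess-right statistic
        diffr = np.abs(M[misses] - M[i]).mean(axis=0) # guess-wrong statistic
        w += diffr - diffw
    return w / n
```

Features whose weights rank among the top e are then kept to form the matrix X.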
In step (4), feature dimension reduction is performed on the matrix X with the PCA algorithm to obtain the transformed change feature matrix Y; the steps are as follows:
(4.1) from the matrix X obtained in step (3), compute the mean of each row:

μ_i = E(X_i) = (1/e) · Σ_{j=1..e} x_ij, i = 1, 2, ..., N

where j = 1, 2, ..., e and E(X_i) denotes the mathematical expectation;
(4.2) from the means μ_i, compute the covariance matrix C of the matrix X:

C = [cov(X_i, X_r)], i, r = 1, 2, ..., N

cov(X_i, X_r) = (1/e) · Σ_{j=1..e} (x_ij − μ_i)(x_rj − μ_r)

where i = 1, 2, ..., N, r = 1, 2, ..., N, j = 1, 2, ..., e; since cov(X_i, X_r) = cov(X_r, X_i), the covariance matrix C is a real symmetric matrix;
(4.3) compute the eigenvalues λ_1, λ_2, ..., λ_N of the covariance matrix C and the corresponding eigenvectors v_1, v_2, ..., v_N, i.e. λ_i v_i = C v_i, i = 1, 2, ..., N;
(4.4) sort the eigenvalues λ_1, λ_2, ..., λ_N in descending order to obtain λ'_1, λ'_2, ..., λ'_N, with corresponding eigenvectors v'_1, v'_2, ..., v'_N, and form the diagonal matrix P from the sorted eigenvalues:

P = diag(λ'_1, λ'_2, ..., λ'_N)
(4.5) compute the projection of the matrix X on the new eigenvectors v'_1, v'_2, ..., v'_N to obtain the reduced change feature matrix:

Y = X · P = [y_1, y_2, ..., y_N]

where y_1, y_2, ..., y_N represent the change features of the objects.
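Step (4) can be sketched with a conventional PCA (note: this sketch projects onto the sorted eigenvectors, the usual formulation, rather than multiplying by the eigenvalue matrix P; the function name and the column-wise covariance are our assumptions):

```python
import numpy as np

def pca_reduce(X, k):
    """Conventional PCA sketch for the dimension reduction of step (4).

    X: N x e matrix from step (3) (rows = objects, columns = selected features).
    Returns the N x k matrix of projections onto the k leading principal
    directions.
    """
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)          # centre each feature column
    C = np.cov(Xc, rowvar=False)     # e x e covariance matrix
    lam, V = np.linalg.eigh(C)       # eigh returns eigenvalues in ascending order
    order = np.argsort(lam)[::-1]    # sort descending, as in step (4.4)
    return Xc @ V[:, order[:k]]
```

The variance of each output column equals the corresponding sorted eigenvalue, so the first column carries the most change information.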
In step (5), the elements y_i of the change feature matrix Y are classified with the NR-Kmeans algorithm, and the noise-disturbed sparse data are removed from the change feature matrix Y to obtain the dense point set Y'; the steps are as follows:
(5.1) in the matrix Y = [y_1, y_2, ..., y_N], for each element point y_α, y_α ∈ Y, given the neighbourhood radius δ, find the element points y_f^(α) satisfying

||y_f^(α) − y_α|| ≤ δ

which form the set Q_b(y_α) = {y_1^(α), y_2^(α), ..., y_b^(α)}; Q_b(y_α) denotes the set of nearest-neighbour element points of y_α within its neighbourhood, y_f^(α) denotes the f-th nearest-neighbour element point of y_α, b is the number of nearest-neighbour element points within the neighbourhood of y_α, and f = 1, 2, ..., b;
(5.2) compute the density function value of the element point y_α; the density function value of y_α is the sum, over all nearest-neighbour element points within the neighbourhood radius δ, of a function of those points:

DF(y_α) = Σ_{f=1..b} φ(y_α, y_f^(α))

where φ(y_α, y_f^(α)) is the contribution of the nearest-neighbour element point y_f^(α) of y_α, f = 1, 2, ..., b;
(5.3) repeating the steps (5.1) - (5.2) until the density function values of all the element points in the matrix Y are calculated;
(5.4) when the density function value of y_β is greater than or equal to the average density value of all element points in the matrix Y, the element point y_β is regarded as usable data and put into the dense point set Y'; that is, y_β satisfies:

DF(y_β) ≥ (1/N) · Σ_{α=1..N} DF(y_α)

where DF(y_β) denotes the density function value of y_β and DF(y_α) that of the data point y_α;
(5.5) following step (5.4), collect all usable data of the matrix Y; the usable data form the dense point set Y', the set of element points with the noise-disturbed sparse data removed.
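The noise-removal steps (5.1)–(5.5) can be sketched as follows (a simple neighbour count within δ is used as the density function, since the patent's exact kernel survives only as an equation image; the function name is an assumption):

```python
import numpy as np

def dense_points(Y, delta):
    """Noise removal of step (5): keep points with at least average density.

    Y: N x d array of change features; delta: neighbourhood radius.
    The density of a point is taken here as the number of neighbours
    within delta (one simple choice of the "sum of functions of the
    nearest-neighbour element points").
    """
    Y = np.asarray(Y, dtype=float)
    # pairwise Euclidean distances between all element points
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    density = (D <= delta).sum(axis=1) - 1   # neighbours within delta, self excluded
    keep = density >= density.mean()         # DF(y_beta) >= average density
    return Y[keep]
```

Isolated, noise-disturbed points fall below the average density and are dropped before clustering.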
In step (6), the dense point set Y' is divided into an unchanged class and a changed class, the grey value of the difference-image objects corresponding to unchanged-class elements is set to 255 and that of the objects corresponding to changed-class elements to 0, and the change detection result map is output from the object grey values; the steps are as follows:
(6.1) in the dense point set Y' = [y'_1, y'_2, ..., y'_η], where η ≤ N is the number of element points in Y' and y'_η represents an object feature with the noise-disturbed data removed: select the element point with the largest density value as the first initial cluster centre C_1; compute the differences between the feature values of the other element points of Y' and that of the cluster centre C_1, and select the element point whose feature value differs most from C_1's, i.e. the point with the largest absolute difference, as the second initial cluster centre C_2; the feature values of C_1 and C_2 are c_1 and c_2, and the classes corresponding to the cluster centres C_1 and C_2 are D_1 and D_2;
(6.2) compute the differences between the feature value of each element point y'_θ of the dense point set Y' and the feature values of the two cluster centres:

d_θ1 = |y'_θ − c_1|

d_θ2 = |y'_θ − c_2|

where d_θ1 is the absolute difference between y'_θ and the cluster centre C_1, d_θ2 is the absolute difference between y'_θ and the cluster centre C_2, and θ = 1, 2, ..., η; each element point of Y' is assigned to the class of the cluster centre with the smaller absolute difference;
(6.3) compute the means c'_1 and c'_2 of the feature values of the element points of the classes D_1 and D_2:

c'_1 = (1/|D_1|) · Σ_{y'∈D_1} y'

c'_2 = (1/|D_2|) · Σ_{y'∈D_2} y'

where c'_1 and c'_2 are the mean feature values of the element points of D_1 and D_2, |D_1| and |D_2| are the numbers of element points in D_1 and D_2, and y' is the feature value of an element point of the class; if c'_1 ≠ c_1, update the feature value c_1 of the cluster centre C_1 of class D_1 to c'_1; if c'_2 ≠ c_2, update the feature value c_2 of the cluster centre C_2 of class D_2 to c'_2;
(6.4) repeat steps (6.2)–(6.3) until the feature values of the two cluster centres no longer change;
(6.5) the two classes D_1 and D_2 are divided into the unchanged class and the changed class: the class with the smaller mean is the unchanged class and the class with the larger mean is the changed class; set the grey value of the difference-image objects corresponding to unchanged-class elements to 255 and that of the objects corresponding to changed-class elements to 0;
(6.6) output the detection result map from the object grey values; the changed regions are marked in the map.
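The two-centre clustering of steps (6.1)–(6.6) can be sketched for scalar change features as follows (the first centre here is a stand-in for the maximum-density point, and the function name is an assumption):

```python
import numpy as np

def two_means_labels(values):
    """Step (6) sketch: split the dense points into unchanged / changed.

    values: 1-D array of the (scalar) change features y' of the dense set.
    Initialisation follows the text loosely: one centre, then the value with
    the largest absolute difference from it; assignment and mean updates are
    iterated until the centres stop moving. Returns grey 255 for the
    smaller-mean (unchanged) class and 0 for the larger-mean (changed) class.
    """
    v = np.asarray(values, dtype=float)
    c1 = v[0]                           # stand-in for the maximum-density point
    c2 = v[np.argmax(np.abs(v - c1))]   # value with the largest |difference|
    while True:
        to_c1 = np.abs(v - c1) <= np.abs(v - c2)   # assign to nearer centre
        n1, n2 = c1, c2
        if to_c1.any():
            n1 = v[to_c1].mean()
        if (~to_c1).any():
            n2 = v[~to_c1].mean()
        if n1 == c1 and n2 == c2:        # centres unchanged: converged
            break
        c1, c2 = n1, n2
    unchanged_is_c1 = c1 <= c2           # smaller mean is the unchanged class
    return np.where(to_c1 == unchanged_is_c1, 255, 0)
```

The returned grey values are then painted back onto the corresponding objects of the difference image to produce the result map.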
The invention improves the traditional k-means clustering algorithm and achieves higher detection precision and accuracy.
The above description covers only preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and these are also intended to fall within the scope of the invention.

Claims (8)

1. A robust remote sensing image change detection method, characterized by comprising the following steps:
(1) acquiring remote sensing images of the target area at times T1 and T2, the images meeting the resolution requirement and containing clear texture feature information;
(2) preprocessing the image data with ENVI software to eliminate acquisition errors; using the eCognition software platform to perform multi-scale segmentation on the preprocessed T1- and T2-time images, dividing each of the two images into N image blocks to obtain N objects per image; separately extracting the spectral features and texture features of the T1- and T2-time images, constructing feature vectors, and obtaining the change vector set M_d = [m_1, m_2, ... m_N], where m_N is the change vector corresponding to the N-th object; arranging the elements of M_d according to the layout of the N objects in the T1- or T2-time image to form a difference image;
(3) extracting a feature subspace M_l of the change vector set M_d with the Relief feature selection algorithm; from the weight of each feature of the objects, selecting the e change features with the largest weights, the value of e being smaller than the total number of features contained in an object; then taking those e change features from each of the N objects and combining them with the change vector set M_d to obtain a matrix X;
(4) performing feature dimension reduction on the matrix X with the PCA algorithm to obtain the transformed change feature matrix Y = [y_1, y_2, ..., y_N];
(5) classifying the elements y_i of the change feature matrix Y with the NR-Kmeans algorithm, and removing the noise-disturbed sparse data from the change feature matrix Y to obtain the dense point set Y'; the steps being as follows:
(5.1) in the matrix Y = [y_1, y_2, ..., y_N], for each element point y_α, y_α ∈ Y, given the neighbourhood radius δ, finding the element points y_f^(α) satisfying

||y_f^(α) − y_α|| ≤ δ

which form the set Q_b(y_α) = {y_1^(α), y_2^(α), ..., y_b^(α)}; Q_b(y_α) denotes the set of nearest-neighbour element points of y_α within its neighbourhood, y_f^(α) denotes the f-th nearest-neighbour element point of y_α, b is the number of nearest-neighbour element points within the neighbourhood of y_α, and f = 1, 2, ..., b;
(5.2) computing the density function value of the element point y_α; the density function value of y_α is the sum, over all nearest-neighbour element points within the neighbourhood radius δ, of a function of those points:

DF(y_α) = Σ_{f=1..b} φ(y_α, y_f^(α))

where φ(y_α, y_f^(α)) is the contribution of the nearest-neighbour element point y_f^(α) of y_α, f = 1, 2, ..., b;
(5.3) repeating the steps (5.1) - (5.2) until the density function values of all element points in the matrix Y are calculated;
(5.4) when the density function value of y_β is greater than or equal to the average density value of all element points in the matrix Y, regarding the element point y_β as usable data and putting it into the dense point set Y'; that is, y_β satisfies:

DF(y_β) ≥ (1/N) · Σ_{α=1..N} DF(y_α)

where DF(y_β) denotes the density function value of y_β and DF(y_α) that of the data point y_α;
(5.5) collecting, following step (5.4), all usable data of the matrix Y; the usable data form the dense point set Y', the set of element points with the noise-disturbed sparse data removed;
(6) dividing the dense point set Y' into an unchanged class and a changed class, setting the grey value of the difference-image objects corresponding to unchanged-class elements to 255 and that of the objects corresponding to changed-class elements to 0, outputting the change detection result map from the object grey values, and marking the changed regions in the map.
2. The robust remote sensing image change detection method according to claim 1, characterized in that: in step (2), the image data are preprocessed with ENVI software to eliminate acquisition errors; multi-scale segmentation is performed on the preprocessed T1- and T2-time images with the eCognition software platform, dividing each of the two images into N image blocks to obtain N objects per image; the spectral features and texture features of the T1- and T2-time images are extracted separately and feature vectors are constructed to obtain the change vector set; the steps are as follows:
(2.1) using ENVI software to perform geometric correction, image registration and atmospheric correction on the T1- and T2-time images respectively, completing the image data preprocessing;
(2.2) using the eCognition software platform to perform multi-scale segmentation on the preprocessed T1- and T2-time images, dividing each of the two images into N image blocks to obtain N objects per image;
(2.3) computing the Mean-Std features of the preprocessed images as their spectral features, and extracting the texture features of the preprocessed images with the grey-level co-occurrence matrix;
(2.4) constructing feature vectors from the spectral and texture features extracted from the two preprocessed images to obtain the change vector set M_d.
3. The robust remote sensing image change detection method according to claim 2, characterized in that: in step (2.3), the Mean-Std features of the image are computed as its spectral features; the steps are as follows:
for each object of the image, the mean Mean and standard deviation Std of the spectral features are computed as

Mean = (1/A) · Σ_{p=1..A} c_p

Std = sqrt( (1/A) · Σ_{p=1..A} (c_p − Mean)^2 )

where Mean is the mean grey level of the pixels in the object, Std is the standard deviation of the pixel grey levels in the object, A is the number of pixels in the object, and c_p is the grey level of the p-th pixel in the object.
4. The robust remote sensing image change detection method according to claim 2, characterized in that: in step (2.3), the grey-level co-occurrence matrix is used to extract the texture features of the image; the steps are as follows:
(2.3.1) any pixel (τ1, τ2) in the image and another pixel (τ1 + Δ1, τ2 + Δ2) form a point pair, where Δ1 and Δ2 are integers; if the grey value of pixel (τ1, τ2) is f1 and that of (τ1 + Δ1, τ2 + Δ2) is f2, the grey-value pair of the point pair is (f1, f2); with L the maximum grey level of the image, the combinations of f1 and f2 form L × L groups;
(2.3.2) counting the number of occurrences of each distinct (f1, f2) pair in the image and arranging the counts into a square matrix, the element at position (f1, f2) being the number of point pairs with that pair of grey values;
(2.3.3) normalizing the count of each distinct (f1, f2) pair by the total number of point pairs into the occurrence probability g(f1, f2); the normalized square matrix is called the grey-level co-occurrence matrix;
(2.3.4) selecting six of the statistics provided by the grey-level co-occurrence matrix as texture features: contrast, entropy, energy, homogeneity, dissimilarity and correlation;
(2.3.5) with g(z, t) the matrix element at position (z, t) of the grey-level co-occurrence matrix and L the maximum grey level of the image, the texture features are computed as follows:

Con = Σ_z Σ_t (z − t)^2 · g(z, t)

Ent = −Σ_z Σ_t g(z, t) · log g(z, t)

Ene = Σ_z Σ_t g(z, t)^2

Hom = Σ_z Σ_t g(z, t) / (1 + (z − t)^2)

Dis = Σ_z Σ_t |z − t| · g(z, t)

Cor = Σ_z Σ_t (z − μ_x)(t − μ_y) · g(z, t) / (σ_x σ_y)

where Con denotes contrast, Ent entropy, Ene energy, Hom homogeneity, Dis dissimilarity and Cor correlation, the sums run over the L grey levels, and

μ_x = Σ_z Σ_t z · g(z, t),  μ_y = Σ_z Σ_t t · g(z, t)

σ_x = sqrt( Σ_z Σ_t (z − μ_x)^2 · g(z, t) ),  σ_y = sqrt( Σ_z Σ_t (t − μ_y)^2 · g(z, t) ).
5. The robust remote sensing image change detection method according to claim 2, characterized in that: in step (2.4), the change vector set M_d is obtained from the spectral and texture features extracted from the two preprocessed images; the steps are as follows:
(2.4.1) from the T1- and T2-time spectral and texture features obtained in step (2.3), constructing the feature vector of each image and computing the difference of the corresponding feature vectors of the two images; the difference M_d(i, j) of the j-th feature of the i-th object of the two images is:

M_d(i, j) = M_d,1(i, j) − M_d,2(i, j)

where M_d,1(i, j) is the j-th feature of the i-th object of the T1-time image and M_d,2(i, j) is the j-th feature of the i-th object of the T2-time image; i ≤ N, with N the total number of objects in the image; j ≤ S_feature, with S_feature the total number of features contained in an object; S_feature = S_band × 8, where S_band is the number of bands of the object;
from the differences M_d(i, j), the change vector m_i of the i-th object is obtained as:

m_i = (M_d(i, 1), M_d(i, 2), ..., M_d(i, S_feature))

(2.4.2) repeating step (2.4.1) to compute the feature-vector difference of every corresponding object pair of the two images, obtaining the change vector of each object; these form the change vector set:

M_d = [m_1, m_2, ... m_N]

the elements of M_d are arranged according to the layout of the N objects in the T1- and T2-time images to form the difference image.
6. The robust remote sensing image change detection method according to any one of claims 1 to 5, characterized in that: in step (3), a feature subspace M_l of the change vector set M_d is extracted with the Relief feature selection algorithm; from the weight of each feature of the objects, the e change features with the largest weights are selected, the value of e being smaller than the total number of features contained in an object; those e change features are then taken from each of the N objects and combined with the change vector set M_d to obtain a matrix X; the steps are as follows:
(3.1) the change vector set M_d is also called the feature space; extracting a subspace M_l = [m_1, m_2, m_3, ... m_n] of the feature space M_d = [m_1, m_2, m_3, ... m_N] as a training set, where n < N; dividing the feature subspace M_l into two sets M_l1 and M_l2, where M_l1 contains l1 elements, M_l2 contains l2 elements, l1 + l2 = n, M_l1 ⊂ M_l, M_l2 ⊂ M_l;
(3.2) computing the Euclidean distance between the change vectors of every pair of elements of M_l to form the distance matrix:

d = (d_ij), i, j = 1, 2, ..., n

where d denotes the n × n distance matrix and d_ij = ||m_i − m_j|| is the Euclidean distance between the change vectors of the elements m_i and m_j of M_l;
(3.3) in the feature subspace M_l, for the change vector m_i of object i, i = 1, 2, 3, ..., n: if m_i ∈ M_l1, selecting from M_l1 the s vectors with the smallest Euclidean distance to m_i and from M_l2 the s vectors with the largest Euclidean distance to m_i; if m_i ∈ M_l2, selecting from M_l2 the s vectors with the smallest Euclidean distance and from M_l1 the s vectors with the largest Euclidean distance; that is, building for m_i the neighbour matrix Q_i = [q_1, q_2, ..., q_s, ..., q_2s], where [q_1, q_2, ..., q_s] are the s vectors with the smallest Euclidean distance and [q_{s+1}, q_{s+2}, ..., q_{2s}] the s vectors with the largest Euclidean distance, with s = min(l1, l2);
(3.4) computing the weight of each feature of object i from the guess-right (near-hit) and guess-wrong (near-miss) neighbour statistics:

w_fj = diffr_j − diffw_j

where w_fj is the weight of feature f_j, diffr_j is the guess-wrong neighbour statistic of f_j, and diffw_j is the guess-right neighbour statistic of f_j; diffr_j and diffw_j are computed as:

diffr_j = (1/s) · Σ_{a=s+1..2s} |m_i(j) − Q_i(a, j)|

diffw_j = (1/s) · Σ_{b=1..s} |m_i(j) − Q_i(b, j)|

where s = min(l1, l2), i denotes object i, j denotes the j-th feature, and m_i(j) is the j-th feature of m_i; for a = 1, 2, 3, ..., s, Q_i(a, j) refers to [q_1, q_2, ..., q_s], and for a = s + 1, s + 2, ..., 2s, Q_i(a, j) refers to [q_{s+1}, q_{s+2}, ..., q_{2s}]; Q_i(b, j) is read in the same way;
(3.5) from the weights of each feature of every object of the feature subspace M_l computed in step (3.4), selecting the e change features with the largest weights, then taking those e change features from each of the N objects and combining them with the original change vector set M_d to obtain the matrix X, an N × e matrix whose i-th row is X_i = [x_i1 x_i2 ... x_ie], i = 1, 2, ..., N; the matrix element x_ij denotes the j-th of the e change features of object i, and the value of e is smaller than the total number of features contained in an object.
7. The robust remote sensing image change detection method according to any one of claims 1 to 5, characterized in that: performing feature dimension reduction on the matrix X by using a PCA algorithm to obtain a transformed change feature matrix Y; the method comprises the following steps:
(4.1) calculating the mean value mu of each row of the matrix according to the matrix X obtained in the step (3) i ,i=1,2,...,N:
Figure FDA0003698617410000054
Wherein j is 1,2, E (X) i ) Represents a mathematical expectation;
(4.2) according to the mean value μ i The covariance matrix C of the matrix X is calculated, i.e.:
Figure FDA0003698617410000055
Figure FDA0003698617410000056
wherein, i 1,2, is, N, r 1,2, is, N, j 1,2, is, e;
(4.3) calculating an eigenvalue λ of the covariance matrix C 12 ,...,λ N Corresponding feature vector is v 12 ,...,ν N I.e. λ i v i =Cv i ,i=1,2,…,N;
(4.4) Sort the eigenvalues λ_1, λ_2, …, λ_N in descending order to obtain λ'_1, λ'_2, …, λ'_N, with corresponding eigenvectors ν'_1, ν'_2, …, ν'_N; take the eigenvectors corresponding to the sorted eigenvalues to form the projection matrix P;
P = [ν'_1  ν'_2  …  ν'_N]
(4.5) Project the matrix X onto the new eigenvectors ν'_1, ν'_2, …, ν'_N to obtain the reduced-dimension change feature matrix Y, namely:
Y = X * P = [y_1, y_2, …, y_N]
where y_1, y_2, …, y_N represent the change features of the objects.
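Steps (4.1)–(4.5) can be sketched with NumPy. This follows the conventional PCA reading of the projection step, in which the sorted eigenvectors of C form the projection matrix; `pca_reduce` is a hypothetical name, not the patent's implementation.

```python
import numpy as np

def pca_reduce(X, k=None):
    """PCA over the rows of the N x e matrix X, following steps (4.1)-(4.5).

    Returns the transformed change feature matrix Y; pass k to keep only
    the k leading components.
    """
    mu = X.mean(axis=1, keepdims=True)     # (4.1) mean of each row
    Xc = X - mu
    C = (Xc @ Xc.T) / X.shape[1]           # (4.2) N x N covariance matrix
    lam, V = np.linalg.eigh(C)             # (4.3) eigenvalues (ascending) / vectors
    order = np.argsort(lam)[::-1]          # (4.4) re-sort in descending order
    P = V[:, order]                        # projection matrix of sorted eigenvectors
    Y = P.T @ Xc                           # (4.5) project centered data onto P
    return Y if k is None else Y[:k]
```

With this reading, the first rows of Y carry the directions of largest variance, so truncating Y discards the least-informative change features.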
8. The robust remote sensing image change detection method according to any one of claims 1 to 5, characterized in that: the dense point set Y' is divided into an unchanged class and a changed class; the gray value of the objects in the difference map corresponding to the unchanged-class elements is set to 255, the gray value of the objects corresponding to the changed-class elements is set to 0, and a change detection result map is output according to the object gray values; the method comprises the following steps:
(6.1) In the dense point set Y' = [y'_1, y'_2, …, y'_η], where η is the number of element points in Y', η ≤ N, and y'_η represents the feature of an object after the data disturbed by noise has been removed, select the element point with the largest density value as the first initial cluster center C_1; calculate the difference between the feature values of the other element points in Y' and the feature value of C_1, and select the element point whose feature-value difference from C_1 is largest, i.e. the point with the largest absolute difference, as the second initial cluster center C_2; the feature values corresponding to C_1 and C_2 are c_1 and c_2, and the classes corresponding to cluster centers C_1 and C_2 are D_1 and D_2;
(6.2) Calculate the difference between each element point y'_θ in the dense point set Y' and the feature values of the two cluster centers, i.e.
d_θ1 = |y'_θ − c_1|

d_θ2 = |y'_θ − c_2|
where d_θ1 denotes the absolute value of the feature difference between element point y'_θ and cluster center C_1, d_θ2 denotes the absolute value of the feature difference between y'_θ and cluster center C_2, and θ = 1, 2, …, η; assign each element point in the dense point set Y' to the class of the cluster center with the smaller absolute difference;
(6.3) Calculate the mean values c'_1, c'_2 of the feature values of the element points in classes D_1, D_2, i.e.
c'_1 = (1/|D_1|) Σ_{y' ∈ D_1} y'

c'_2 = (1/|D_2|) Σ_{y' ∈ D_2} y'
where c'_1, c'_2 are the mean values of the feature values of the element points in classes D_1, D_2, respectively; |D_1| denotes the number of element points in class D_1, |D_2| denotes the number of element points in class D_2, and y' runs over the feature values of the element points in the corresponding class; if c'_1 ≠ c_1, the feature value c_1 of the cluster center C_1 of class D_1 is updated to c'_1; if c'_2 ≠ c_2, the feature value c_2 of the cluster center C_2 of class D_2 is updated to c'_2;
(6.4) Repeat steps (6.2)–(6.3) until the feature values of the two cluster centers no longer change;
(6.5) Of the two classes D_1 and D_2, the class with the smaller mean value is the unchanged class and the class with the larger mean value is the changed class; the gray value of the objects in the difference map corresponding to the unchanged-class elements is set to 255, and the gray value of the objects corresponding to the changed-class elements is set to 0;
(6.6) Output the detection result map according to the object gray values; the map marks the changed regions of the image.
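Steps (6.1)–(6.5) amount to a two-center k-means on the one-dimensional dense point set. A minimal sketch, assuming the density values of step (5) are available as an array; `two_means_change_map` is a hypothetical name.

```python
import numpy as np

def two_means_change_map(y, density, max_iter=100):
    """Cluster the 1-D dense point set y into unchanged/changed classes.

    y       : (eta,) feature values of the dense point set Y'
    density : (eta,) density value of each element point, from step (5)
    Returns per-point gray values: 255 for the unchanged class, 0 for changed.
    """
    c1 = y[np.argmax(density)]            # (6.1) densest point -> first center
    c2 = y[np.argmax(np.abs(y - c1))]     # farthest feature value -> second center
    for _ in range(max_iter):
        assign = np.abs(y - c1) <= np.abs(y - c2)   # (6.2) nearer center wins
        n1, n2 = c1, c2
        if assign.any():
            n1 = y[assign].mean()         # (6.3) recompute class means
        if (~assign).any():
            n2 = y[~assign].mean()
        if n1 == c1 and n2 == c2:         # (6.4) centers stable -> stop
            break
        c1, c2 = n1, n2
    # (6.5) smaller mean = unchanged (gray 255), larger mean = changed (gray 0)
    unchanged = assign if c1 <= c2 else ~assign
    return np.where(unchanged, 255, 0)
```

Painting each object in the difference map with its returned gray value then yields the detection result map of step (6.6).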
CN201910449743.XA 2019-05-28 2019-05-28 Robust remote sensing image change detection method Active CN110276746B (en)


Publications (2)

Publication Number Publication Date
CN110276746A CN110276746A (en) 2019-09-24
CN110276746B true CN110276746B (en) 2022-08-19


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091054B (en) * 2019-11-13 2020-11-10 广东国地规划科技股份有限公司 Method, system and device for monitoring land type change and storage medium
CN111931744B (en) * 2020-10-09 2021-01-05 航天宏图信息技术股份有限公司 Method and device for detecting change of remote sensing image
CN112329565A (en) * 2020-10-26 2021-02-05 兰州交通大学 Road construction supervision method and system based on high-resolution remote sensing image
CN114066876B (en) * 2021-11-25 2022-07-08 北京建筑大学 Construction waste change detection method based on classification result and CVA-SGD method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629380A (en) * 2012-03-03 2012-08-08 西安电子科技大学 Remote sensing image change detection method based on multi-group filtering and dimension reduction
CN103456020A (en) * 2013-09-08 2013-12-18 西安电子科技大学 Remote sensing image change detection method based on treelet feature fusion
CN107423771A (en) * 2017-08-04 2017-12-01 河海大学 A kind of two phase method for detecting change of remote sensing image
CN109409389A (en) * 2017-08-16 2019-03-01 香港理工大学深圳研究院 A kind of object-oriented change detecting method merging multiple features




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant