CN115690086A - Object-based high-resolution remote sensing image change detection method and system - Google Patents

Info

Publication number: CN115690086A
Application number: CN202211439434.2A
Authority: CN (China)
Legal status: Pending
Inventors: Xing Minfeng (行敏锋), Yang Yuanyuan (杨圆圆), Li Junqian (李俊谦)
Applicant/Assignee: Yangtze River Delta Research Institute of UESTC, Huzhou
Original language: Chinese (zh)
Prior art keywords: image, remote sensing, change detection, pixel, value
Landscapes: Image Analysis (AREA)

Abstract
Abstract

The invention belongs to the technical field of image change detection and discloses an object-based high-resolution remote sensing image change detection method and system. The method comprises: performing radiometric correction and geometric correction on the two phases of high-resolution remote sensing data; sequentially applying peer group filtering, color quantization, J-value computation, and spatial segmentation to the corrected data; quantitatively evaluating the similarity of objects with two similarity measures, SSIM and PSNR; performing change detection with a multi-scale fusion strategy; and filtering suspected false detection areas from the result using the fractal dimension landscape pattern index to obtain the final change detection result. This object-oriented change detection (OOCD) method is not easily affected by noise, detects changes from image texture features, and constructs a change detection framework that locates specific changed regions quickly and accurately.

Description

Object-based high-resolution remote sensing image change detection method and system
Technical Field
The invention belongs to the technical field of image change detection and particularly relates to an object-based high-resolution remote sensing image change detection method and system.
Background
Direct comparison is the most straightforward approach to change detection. For example, prior art 1 combines vegetation index differencing (VID) with image regression to study the effect of expanded mining on reduced vegetation coverage in Swedish mountainous areas. Change vector analysis (CVA) is the most widely applied method in multi-temporal optical remote sensing change detection; prior art 2 uses CVA to generate a two-dimensional difference image from multiband images before computing a weighted optimal threshold.
As the spatial resolution of remote sensing images improves, ground-object detail becomes richer and spectral heterogeneity increases. Difference-image construction based on pixel-by-pixel comparison therefore faces great challenges, and direct comparison methods struggle to capture image changes effectively in high-spatial-resolution change detection.
Clustering is an effective way to perform unsupervised binary classification of difference images for change detection. For example, prior art 3 proposes an unsupervised change detection method based on reconstructed independent component analysis and artificial-bee-colony-tuned K-means (ABC-K-means) clustering. Prior art 4 analyzes the difference image obtained by CVA with fuzzy C-means (FCM) and Gustafson-Kessel clustering (GKC) to obtain the change result. Prior art 5 fuses a mean-scale image and a log-scale image into a difference image and then applies fuzzy local information C-means (FLICM) to the difference image for change detection. Clustering algorithms, however, have drawbacks: the objective function easily falls into local minima, convergence is slow, and the methods are sensitive to initial values and noise, all of which degrade change detection accuracy.
With the development of saliency detection, the technique has been widely applied to image segmentation, target recognition, image retrieval, and related fields, and is now also used for change detection. For example, to handle the complex statistical distribution of change features, prior art 6 proposes a semi-supervised building change detection method combining saliency detection with the morphological building index (MBI). Prior art 7 first obtains a fused difference image and a feature difference image, then derives saliency maps of both difference images with a co-saliency strategy, and finally clusters and fuses the saliency maps with FLICM to obtain the final change detection result. These methods rely on saliency detection over the difference image to find changed regions, so the quality of the difference image directly determines the accuracy of subsequent change detection. Because they ignore the spatial features of the images, the difference images produced from high-resolution remote sensing imagery by the IR, UID, and CVA methods are of poor quality.
The pixel-based comparison methods described above require identical spatial resolution, so images of different resolutions are typically resampled before spectral change detection. When the spectral patterns differ, simple change detection (e.g., image differencing with thresholding) tends to be unreliable, or the images cannot be overlaid at all. Prior art 8 detects sub-pixel proportional changes in land cover from images of different spatial resolutions: it unmixes the coarse-resolution image, spatially degrades the fine-resolution image, and generates bi-temporal class-area-proportion images; comparing the bi-temporal coarse-resolution proportion images reveals the change in class proportion within each coarse-resolution pixel. However, since spectral unmixing can only determine the land-cover area proportions of coarse-resolution pixels and provides no class label information at the sub-pixel scale, only coarse-resolution pixel changes are detected, not fine-resolution ones. Prior art 9 proposes a high-resolution triplet network (HRTNet) framework for change detection in high-resolution remote sensing images. A dynamic inception module strengthens multi-scale feature representation, making the overall model more sensitive to changed regions of different sizes; exploiting the temporal features contained in the difference image keeps the model focused on surface change and strengthens its robustness against false changes. The framework can identify change information of interest and filter out interference from irrelevant changes.
Object-oriented change detection (OOCD) algorithms fall into three classes: class-object-based, image-object-based, and multi-temporal-object-based methods. Class-object-based methods depend heavily on the accuracy of image classification, which in turn depends on the quality and quantity of training samples. By setting a threshold, image-object-based approaches can compare images of different dates directly; however, independently segmenting multi-temporal images yields objects of different sizes and shapes, making corresponding objects difficult to find.
Prior art 10 constructs image objects from merged multi-temporal segmentation results and uses the normalized difference vegetation index (NDVI), mean, and standard deviation as change indices, combining statistical outlier mining with change detection. The method assumes the sample data follow a Gaussian distribution and searches for changed objects with an iterative chi-square trimming procedure. However, it is mainly used for forest change detection with NDVI as the only change index, achieving an overall accuracy of 90% and an overall kappa coefficient of 0.8. Prior art 11 improves on it by introducing the Mahalanobis distance as the similarity measure and abandoning feature differences as the index of reflectance change.
Prior art 12 uses a chi-square iterative-trimming change detection method to find changed objects hidden in the sample data; because the data do not strictly follow a Gaussian distribution, the detection results are unsatisfactory. To remedy the instability of the Mahalanobis distance and its sensitivity to small variables, prior art 13 proposes a change detection method based on the law of cosines and box plots. Prior art 14 overcomes the salt-and-pepper effect of pixel-based classification methods by taking objects as the smallest classification units. Prior art 15 proposes an image-object-based framework for finding the optimal segmentation for a given feature type. Prior art 16 discloses a change detection method combining image analysis and image segmentation techniques based on target/neighborhood correlation. Prior art 17 proposes unsupervised methods based on comparing features computed over homogeneous regions.
From the above analysis, the problems and defects of the prior art are as follows: single-scale detection carries large uncertainty and deviates considerably from the real change, so the detection results are unreliable and the methods do not transfer to high-resolution remote sensing imagery; meanwhile, the prior art is strongly affected by noise and cannot effectively remove the Gaussian and impulse noise present in remote sensing images.
Disclosure of Invention
To address the problems in the prior art, the invention provides an object-based high-resolution remote sensing image change detection method and system.
The invention is realized as follows. An object-based high-resolution remote sensing image change detection method comprises:
first, performing radiometric correction and geometric correction on the high-resolution remote sensing data, then sequentially applying peer group filtering, color quantization, J-value computation, and spatial segmentation to the corrected data;
second, quantitatively evaluating the similarity of objects with two similarity measures, SSIM and PSNR, performing change detection with a multi-scale fusion strategy, and filtering suspected false detection areas from the result using the fractal dimension landscape pattern index to obtain the change detection result.
Further, the object-based high-resolution remote sensing image change detection method comprises the following steps:
Step one, perform radiometric correction and geometric correction on the high-resolution remote sensing data; smooth and denoise the two corrected remote sensing images with peer group filtering;
Step two, merge clusters of similar colors using modified fast K-means clustering to obtain a color-quantized image;
Step three, perform JSEG image segmentation of the quantized image at different scales to obtain multi-scale ground objects; quantitatively evaluate the similarity of objects with the two similarity measures SSIM and PSNR;
Step four, perform change detection with a multi-scale fusion strategy; filter suspected false detection areas using the fractal dimension landscape pattern index to obtain the change detection result.
Further, smoothing and denoising the two corrected remote sensing images by peer group filtering comprises:
(1) Compute the first-order difference f_i(n) of d_i(n):
f_i(n) = d_{i+1}(n) - d_i(n);
where d_i(n) denotes the (sorted) Euclidean distance from each pixel in the window to the central pixel x_0(n).
(2) Using the condition f_i(n) ≤ α, test the M points before and after x_i(n) to decide whether each point is impulse noise, where M = w/2 is half the window size; α is set to a large value for heavily corrupted images and a small value for lightly corrupted ones.
(3) If f_i(n) fails the test, the end points x_j(n) with j ≤ i or j > i are treated as impulse noise and removed; the remaining d_j(n) are used to estimate the true peer group.
(4) Replace pixel x_0(n) with a weighted average of the peer group members:
x̂_0(n) = Σ_i w_i x_i(n) / Σ_i w_i;
where w_i denotes a standard Gaussian filter weight and the sum runs over the peer group members.
(5) If x_0(n) is itself impulse noise and has no peers, the true peer group at that location is estimated from the other peer groups in the window.
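As an illustration of steps (1)-(4), a minimal grayscale peer group filter can be sketched as follows (the window size, α, and the Gaussian σ are illustrative parameters chosen for the sketch, and the impulse-centre handling of step (5) is omitted for brevity):

```python
import numpy as np

def peer_group_filter(img, w=3, alpha=30.0, sigma=1.0):
    """Minimal grayscale peer group filter sketch.

    For each pixel, window members are sorted by distance to the centre;
    the first large jump in the sorted-distance differences f_i(n) marks
    the end of the peer group, and the centre is replaced by a
    Gaussian-weighted average of the peer group members.
    """
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for r in range(H):
        for c in range(W):
            win = padded[r:r + w, c:c + w].ravel()
            centre = padded[r + pad, c + pad]
            order = np.argsort(np.abs(win - centre))
            vals = win[order]                  # window values, nearest first
            d = np.abs(vals - centre)          # sorted distances d_i(n)
            f = np.diff(d)                     # first differences f_i(n)
            jumps = np.where(f > alpha)[0]     # points violating f_i(n) <= alpha
            m = jumps[0] + 1 if jumps.size else d.size
            wts = np.exp(-(d[:m] ** 2) / (2 * sigma ** 2))  # Gaussian weights w_i
            out[r, c] = np.sum(wts * vals[:m]) / np.sum(wts)
    return out
```

An impulse pixel that appears only as a neighbour is excluded from the peer group and leaves the smoothed value untouched, which is why this filter preserves edges better than a plain mean filter.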
Further, merging clusters of similar colors with modified fast K-means clustering to obtain the color-quantized image comprises:
1) Obtain the maximum distance T(n) of each peer group:
T(n) = d_{m(n)-1}(n);
where d_i(n) (i = 0, …, m(n)-1) denotes the distance from the i-th peer group member to the central pixel, and m(n) is the size of the peer group, i.e., the number of its members.
2) The maximum peer group distance T(n) characterizes the smoothness of the local area centred on pixel x(n); the weight v(n) of each pixel x(n) is then calculated as:
v(n) = exp(-T(n));
3) The initial number of clusters K for color quantization is determined by:
K = β T_avg;
where T_avg is the average of T(n) and β is a parameter.
4) Set an update rule for cluster centres using pixel-weighted centroids; the centroid c_i of color class C_i is calculated as:
c_i = Σ_{x(n)∈C_i} v(n) x(n) / Σ_{x(n)∈C_i} v(n);
5) Determine the initial cluster centres with weighted K-means, compute the weighted distortion measure, and split clusters until the initial cluster number K is reached:
D_i = Σ v(n) ||x(n) - c_i||², x(n) ∈ C_i;
6) Merge similar clusters so that the minimum distance between any two centroids satisfies a preset threshold, then assign each pixel to the nearest cluster centroid to obtain the color-quantized image.
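The weighted centroid update of step 4) and the cluster merging of step 6) can be sketched as follows (a simplified illustration operating on a flat list of pixel colors; the distortion-guided splitting of step 5) is omitted, and the function and parameter names such as merge_thresh are illustrative):

```python
import numpy as np

def weighted_kmeans_quantize(pixels, weights, k, merge_thresh, n_iter=20, seed=0):
    """Weighted K-means colour quantization sketch.

    pixels  : (N, 3) float array of colours
    weights : (N,) smoothness weights v(n) = exp(-T(n))
    Centroids are updated as weight-weighted means; centroids closer than
    merge_thresh are then merged, mimicking the final merge step.
    """
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest centroid
        d = np.linalg.norm(pixels[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for i in range(len(centres)):
            m = labels == i
            if m.any():
                # weighted centroid: sum v(n)x(n) / sum v(n) over the class
                centres[i] = (weights[m, None] * pixels[m]).sum(0) / weights[m].sum()
    # merge centroids whose mutual distance falls below the threshold
    keep = []
    for c in centres:
        if all(np.linalg.norm(c - kc) >= merge_thresh for kc in keep):
            keep.append(c)
    centres = np.array(keep)
    d = np.linalg.norm(pixels[:, None] - centres[None], axis=2)
    return centres, d.argmin(axis=1)
```

With uniform weights this reduces to ordinary K-means; larger weights in smooth areas pull centroids toward them, leaving more clusters for textured regions as the method intends.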
Further, performing JSEG image segmentation of the quantized image at different scales to obtain multi-scale ground objects comprises:
(1) Let Z be the set of all N points of the color-quantized image, partitioned into C parts by color class: Z = {Z_i, 1 ≤ i ≤ C}. Compute the centre m_i of each point set Z_i:
m_i = (1/N_i) Σ_{z∈Z_i} z;
where z = (x, y), z ∈ Z, is the coordinate of a point in the image and N_i is the number of points in Z_i.
(2) Compute the J value of the region:
J = (S_T - S_W) / S_W;
where, with m = (1/N) Σ_{z∈Z} z the mean of all points:
S_T = Σ_{z∈Z} ||z - m||²;
S_W = Σ_{i=1}^{C} Σ_{z∈Z_i} ||z - m_i||²;
(3) Divide the domain U of the color-quantized image into k regions U_i (1 ≤ i ≤ k), each containing M_i points. Compute the J value J_i of each region U_i and the average J value of U:
J_avg = (1/N) Σ_{i=1}^{k} M_i J_i;
(4) Construct a J-image whose pixel values are the J values computed over a small window centred on each pixel; each such pixel value serves as the local J value.
(5) Compute the mean μ_J and standard deviation σ_J of the J-image, and the threshold variable:
T_J = μ_J + a σ_J;
where a takes several different preset values.
(6) Screen initial seeds: take pixels whose local J value is below T_J as candidate seed points, and connect them with 4-connectivity to obtain candidate seed regions. Compute different thresholds with different values of a, yielding different seed regions and counts, and keep the result with the most seeds. A candidate seed region is accepted as a seed if its size exceeds the corresponding minimum size.
(7) Fill the holes in the seeds. Average the local J values over the remaining unassigned part of the region and connect pixels below the average to form growing regions; a growing region adjacent to one and only one seed is assigned to that seed.
(8) Compute the local J values of the remaining pixels at the next smaller scale, repeat step (7), and localize the boundaries. At the smallest scale the remaining pixels are grown one by one: the pixel with the smallest J value is assigned to the neighbouring seed, and the buffer storing unclassified pixels at the seed boundaries is updated until all pixels are classified.
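The J-value criterion of step (2) can be sketched as follows (a minimal illustration over a single labelled region; integer class labels stand in for the quantized colors):

```python
import numpy as np

def j_value(class_map):
    """J value of a labelled region (JSEG criterion sketch).

    class_map : 2-D integer array of colour-class labels.
    J = (S_T - S_W) / S_W, where S_T is the total spatial variance of
    pixel positions and S_W the variance within each colour class.
    High J => classes occupy separate areas; low J => classes are mixed.
    """
    ys, xs = np.indices(class_map.shape)
    z = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    m = z.mean(axis=0)                      # global centre of all points
    s_t = ((z - m) ** 2).sum()              # total variance S_T
    s_w = 0.0
    for c in np.unique(class_map):
        zc = z[class_map.ravel() == c]      # points of one colour class
        s_w += ((zc - zc.mean(axis=0)) ** 2).sum()
    return (s_t - s_w) / s_w
```

A checkerboard (classes uniformly interleaved) gives J = 0, matching the J = 0 class-map of FIG. 4, while two solid halves give a strictly larger J, which is what makes the J-image peak at region boundaries.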
Further, performing change detection with the multi-scale fusion strategy and filtering suspected false detection areas from the result with the fractal dimension landscape pattern index comprises:
1) Compute the multi-scale similarity S_j of each object R_j by fusing its similarity values (SSIM or PSNR) across the segmentation scales;
2) Compare the multi-scale similarity S_j of each object R_j with the change detection similarity threshold T_S: if S_j ≥ T_S, R_j is judged unchanged and labelled 0; otherwise R_j is judged changed and labelled 1;
3) Repeat steps 1)-2) until all objects in the segmentation result have been processed;
4) Compute the fractal dimension of every detected object in the change region:
FD = 2 ln(C/4) / ln(S);
where C is the perimeter of the object and S its area;
5) Compare the computed fractal dimension index of each object in the change region against T_idx: if it exceeds this preset threshold, the object is judged not to be a building and is excluded from the change detection result. Traverse all objects in the change region to obtain the final change detection result.
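Steps 4)-5) can be sketched as follows (a minimal illustration; the object dictionaries and the threshold name t_idx are assumptions of the sketch, and FD = 2 ln(C/4)/ln(S) follows the landscape-pattern definition above):

```python
import math

def fractal_dimension(perimeter, area):
    """Landscape-pattern fractal dimension FD = 2*ln(C/4)/ln(S).

    Compact shapes (e.g. a square, where C = 4*sqrt(S)) give FD = 1;
    increasingly convoluted boundaries push FD toward 2.
    """
    return 2.0 * math.log(perimeter / 4.0) / math.log(area)

def filter_changes(objects, t_idx):
    """Keep change objects whose fractal dimension stays at or below t_idx,
    discarding irregular regions as suspected false detections."""
    return [o for o in objects
            if fractal_dimension(o["perimeter"], o["area"]) <= t_idx]
```

Because newly built structures tend to have regular, compact outlines, thresholding FD removes ragged false-change regions while keeping building-shaped ones.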
Another object of the present invention is to provide an object-based high-resolution remote sensing image change detection system implementing the method, the system comprising:
the data processing module, for performing radiometric correction and geometric correction on the high-resolution remote sensing data;
the object extraction module, for sequentially applying peer group filtering, color quantization, J-value computation, and spatial segmentation to the corrected high-resolution remote sensing data to obtain a plurality of objects;
the object analysis module, for quantitatively evaluating the similarity of the objects with the two similarity measures SSIM and PSNR to obtain a detection result;
and the multi-scale fusion and fractal dimension filtering module, for performing change detection with a multi-scale fusion strategy and filtering suspected false detection areas in the detection result with the fractal dimension landscape pattern index to obtain the change detection result.
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the object based high resolution remote sensing image change detection method.
Another object of the present invention is to provide a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the object-based high resolution remote sensing image change detecting method.
Another object of the present invention is to provide an information data processing terminal for implementing the object-based high resolution remote sensing image change detection system.
In combination with the technical solutions above and the technical problems they solve, the claimed technical solution has the following advantages and positive effects:
the invention uses the peer-to-peer filtering method to smooth and denoise the remote sensing image, can effectively remove Gaussian and pulse noise and can carry out smoothing treatment under the condition of better retaining edge information.
The invention provides an improved K-means clustering method to replace common unsupervised clustering. A heuristic finds the initial cluster centroids, improving both the running time of the K-means algorithm and the quality of the final solution. Weights are introduced when selecting initial centroids and iteratively updating cluster centroids so that more clusters form in textured areas; clusters with similar centroids are then merged, yielding more accurately detected edges.
The segmentation algorithm, sensitive to edges yet insensitive to smooth regions and uniform pixel distributions, effectively extracts object regions and boundaries to achieve image segmentation. Segmenting the remote sensing image at several different scales yields objects of different scales.
Two image similarity measures, SSIM and PSNR, are computed, and the similarity of each object is evaluated at each scale to quantitatively analyze the degree of similarity between objects.
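The two measures can be sketched as follows (a minimal single-window illustration; library implementations of SSIM normally use a sliding Gaussian window, so this global variant only approximates them):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(a, b, peak=255.0):
    """Single-window (global) SSIM: bounded in (-1, 1], with 1 for identity."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # standard stabilizers
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

The boundedness of SSIM versus the unboundedness of PSNR is exactly the trade-off discussed below: a fixed threshold is easier to set on a bounded measure.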
The invention constructs a multi-scale fusion strategy that effectively combines the characteristics of image objects at different scales for change detection. The fractal dimension landscape pattern index measures the regularity of the shape of each change region and filters suspected false detection areas from the result.
Experiments on the WHU Building Dataset give good results: overall accuracy reaches 96.78% and the Kappa coefficient reaches 0.8402, demonstrating the effectiveness of the method.
The invention is effective and reliable for urban change detection in high-resolution remote sensing images. The JSEG algorithm not only extracts targets in the scene accurately but also enables change detection from the multiple features contained in the J-image sequence, and the fusion strategy then produces the final result. Experiments show that the method overcomes the uncertainty of single-scale detection and produces results closer to the real change.
The invention performs well even at a single scale, showing that research on object-oriented change detection algorithms can meet the demands of ever-advancing high-resolution remote sensing imagery.
The two similarity measures have complementary strengths. SSIM, though less effective on the dataset used in this experiment, is bounded, which may yield better results than PSNR in other settings. PSNR achieved higher precision with lower false-alarm and miss rates in the experiments, but its unbounded nature can make threshold selection difficult.
The landscape-pattern-index filtering operation improves the precision of the detection result and makes the change regions clearer.
Delimiting change regions through change detection provides valuable reference information for field work, reducing workload and saving resources.
This OOCD change detection method is not easily affected by noise, detects changes from image texture features, constructs a change detection framework, achieves good results on the WHU Building Dataset, and locates specific changed regions quickly and accurately.
Drawings
FIG. 1 is a schematic diagram of a method for detecting changes in an object-based high-resolution remote sensing image according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for detecting changes in an object-based high-resolution remote sensing image according to an embodiment of the present invention;
FIG. 3 is a class-map plot of J =1.720 as provided by an embodiment of the present invention;
FIG. 4 is a class-map of J =0 provided by an embodiment of the present invention;
FIG. 5 is a class-map plot of J =0.855 as provided by an embodiment of the present invention;
FIG. 6 is the optimal segmentation map for J+ = 0, J* = 0, Jo = 0, J_avg = 0 provided by an embodiment of the present invention;
FIG. 7 is the optimal segmentation map for J+ = 0, J{*,o} = 0.011, J_avg = 0.05 provided by an embodiment of the present invention;
FIG. 8 is a diagram illustrating a basic window for calculating local J values according to an embodiment of the present invention; only the "+" point in the figure is used to calculate the local J value;
FIG. 9 is a schematic diagram of down-sampling of a 2-level window provided by an embodiment of the present invention; only the "+" points in the figure are used to calculate the local J value;
FIG. 10 is a diagram of remote sensing of a region 2012 provided by an embodiment of the present invention;
FIG. 11 is a 2016 map of a remote sensing of a certain area provided by an embodiment of the present invention;
FIG. 12 is a J image at scale 3 of a multi-temporal image of a region 2012 provided by an embodiment of the present invention;
fig. 13 is a J image at scale 3 of a multi-temporal image of an area in 2016 provided by an embodiment of the present invention;
FIG. 14 is a schematic diagram of a segmentation result at scale 3 of a multi-temporal image of a region in 2012 provided by an embodiment of the present invention;
FIG. 15 is a schematic diagram of a segmentation result at scale 3 of a multi-temporal image of a region in 2016 provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1-2, the method for detecting changes in a high-resolution remote sensing image based on an object according to an embodiment of the present invention includes the following steps:
s101, performing radiation correction and geometric correction on the high-resolution remote sensing data; smoothing and denoising the two corrected remote sensing images through peer-to-peer filtering;
s102, merging clusters with similar colors by using the modified rapid K-means clustering to obtain a color quantized image;
s103, performing JSEG image segmentation on the quantized image on different scales to obtain a multi-scale ground object; respectively carrying out quantitative evaluation on the similarity of the objects by using two similarity measures of SSIM and PSNR;
s104, performing change detection by using a multi-scale fusion strategy; and filtering a suspected false detection area in the detection result by using the fractal dimension in the landscape pattern index to obtain a change detection result.
The following is a detailed description of specific embodiments of the present invention:
1. change detection
1.1 general procedure for Change detection
The traditional direct comparison method is widely used in practice for its simplicity and ease of implementation, and other methods borrow part of its ideas. First, the multi-temporal remote sensing images are preprocessed, generally including radiometric correction, geometric correction, and image registration. Then the difference image between the multi-temporal images is computed; it may be an absolute difference, one of various distances, a vegetation index difference, and so on. Finally, a threshold is usually set to detect changes in the difference image; the threshold may be one or more fixed values set from the researchers' experience, or a global or dynamic threshold obtained by an algorithm.
1.2 NDVI vegetation index differential-based change detection
The satellite image change detection method based on normalized difference vegetation index (NDVI) differencing is a simple and effective algebraic change detection technique that many scholars have used to study vegetation coverage changes in various regions. For example, Gandhi et al. studied change detection and other land vegetation characteristics of the Vellore region in India at different NDVI difference-image thresholds using TM images.
The change detection method based on NDVI differencing can be summarized in three steps: (1) compute the NDVI image from the registered multispectral reflectance images of the two phases; (2) compute the difference image of the two NDVI images; (3) compare the difference image with a set threshold: pixels below the threshold are regarded as unchanged, and the rest as changed.
The first step of the method is to compute the multispectral reflectance images for both phases and then compute the NDVI image. NDVI has been widely used to study the relationship between spectral variability and changes in vegetation growth rate, and it also helps to determine the growth of green vegetation, as well as detect changes in vegetation. Considering two different time phases t in the same area 1 And t 2 Remote sensing image X of 1 And X 2 The size is H multiplied by W pixels, the image of each time phase has B wave bands, and the image is subjected to preprocessing such as correction and registration. Let X i,b As an original image X i A single band image of H × W pixels in the b-th band (i =1, 2. NDVI image NDVI of phase i i Calculated from the following equation.
NDVI_i = (NIR_i − R_i) / (NIR_i + R_i)  (1-1)
where NIR_i is the near-infrared reflectance image of phase i (X_i,4 in TM images), and R_i is the visible red reflectance image of phase i (X_i,3 in TM images).
The second step of the method is to calculate the NDVI difference image, the calculation formula is as follows.
NDVI_D = |NDVI_1 − NDVI_2|  (1-2)
The third step of the method is to compare with a threshold value, thereby enabling change detection. The detection result is generally a binary image, and the calculation formula is as follows.
C = NDVI_D ≥ T_NDVI  (1-3)

where T_NDVI is the change detection threshold, which can be set by the researcher from experience or obtained by other adaptive methods. C is the change detection result image, computed by the logical expression: a value of 1 indicates a changed pixel and a value of 0 an unchanged pixel.
Threshold change detection based on the vegetation index difference is simple and effective, is well suited to detecting changes in forest and grassland, and can use different indices for different detection targets. However, the choice of threshold depends on the researchers' experience and statistics, so the method is poorly automated.
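As a minimal NumPy sketch of the three steps above (an illustration, not the patent's implementation; the threshold value passed in is an arbitrary placeholder):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """NDVI = (NIR - R) / (NIR + R), computed per pixel (equation 1-1)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

def ndvi_change_map(nir1, red1, nir2, red2, threshold):
    """Binary change map: 1 where |NDVI_1 - NDVI_2| >= threshold (eqs 1-2, 1-3)."""
    diff = np.abs(ndvi(nir1, red1) - ndvi(nir2, red2))
    return (diff >= threshold).astype(np.uint8)
```

For TM data the NIR and red inputs would be bands 4 and 3 of each phase.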
1.3 dynamic threshold detection method based on unsupervised change of remote sensing image
To distinguish changed from unchanged pixels, the most common solution is to binarize the difference image with a threshold algorithm that selects a global threshold. Although simple and effective, the selection of the optimal threshold should usually be tied to prior knowledge about the scene or to visual interpretation in order to be meaningful, yet most threshold decision methods do not take prior information into account. Furthermore, because the two clusters in the difference map commonly overlap, a global threshold is typically not suitable for all pixels and can sometimes produce poor results.
In the prior art, the FCM membership value of each pixel is used as prior information to guide a per-pixel threshold decision. The method is as follows: first, a difference image is generated using the Change Vector Analysis (CVA) technique. The difference image is then modeled by a Gaussian Mixture Model (GMM) with changed and unchanged pixel sets, and its statistical parameters are estimated with the Expectation Maximization (EM) algorithm; based on Bayesian decision theory, a global initial threshold T_G can then be determined. Finally, a dynamic threshold T_D is obtained by modifying the initial global threshold with the membership value U(y, x) of each pixel generated by the FCM algorithm.
The first step of the method is to generate the difference image. Consider two remote sensing images X_1 and X_2 of the same area at two different times t_1 and t_2, each of size H × W pixels with B bands, preprocessed by correction and registration. Let X_i,b be the single-band image of H × W pixels in the b-th band of the original image X_i (i = 1, 2; b = 1, …, B). The two-dimensional difference image X_D of the two-phase multiband images can be defined by:

X_D = sqrt( Σ_{b=1}^{B} (X_1,b − X_2,b)² )  (1-4)
The second step of the method is initial global threshold identification. The changed and unchanged pixel sets of the difference image are denoted C_c and C_n, respectively, and are first distinguished by a global threshold identified with the EM algorithm and Bayesian theory. Under simple assumptions, the statistical distribution of pixels in the difference image can be modeled by a Gaussian mixture of C_c and C_n, which can be expressed as:

p(x) = Σ_{i=1}^{M} α_i N_i(x; μ_i, σ_i²)  (1-5)

where N_i(x; μ_i, σ_i²) is a normal distribution with mean μ_i and variance σ_i²; α_i is the mixing weight of each distribution; and M is the number of Gaussian components, here M = 2.
On this basis, the parameters of the GMM are estimated by an iterative process using the EM algorithm. Then, each pixel in the difference image is assigned to the class with the highest posterior conditional probability according to the Bayesian minimum error rule. Finally, by solving the following quadratic equation, the initial global threshold can be obtained.
(σ_n² − σ_c²)T² + 2(μ_n σ_c² − μ_c σ_n²)T + μ_c² σ_n² − μ_n² σ_c² + 2σ_n² σ_c² ln(σ_c α_n / (σ_n α_c)) = 0  (1-6)

where (μ_n, σ_n², α_n) and (μ_c, σ_c², α_c) are the mean, variance, and mixing weight of the unchanged and changed pixel sets, respectively.
The third step of the method is to compute a dynamic threshold from the FCM memberships and perform change detection. The FCM algorithm is an iterative clustering method: pixels in the same cluster have high similarity, similarity between different clusters is low, and the cluster of each pixel is identified in a fuzzy manner through a membership matrix. Through iteration, FCM attempts to minimize an objective function J_m to find a fuzzy partition of a given data set:

J_m = Σ_{j=1}^{C} Σ_{i=1}^{N} u_ji^m d²(x_i, c_j)  (1-7)

where X = {x_1, x_2, …, x_N} is a data set in an M-dimensional vector space, C (2 ≤ C ≤ N) is the number of clusters, u_ji is the membership of x_i in the j-th cluster, m is the weighting exponent of each fuzzy membership, c_j is the center of cluster j, and d²(x_i, c_j) is a distance measure between x_i and the center c_j.
After obtaining the FCM membership matrix, the dynamic threshold of each pixel is modified as follows:
T_D(y, x) = T_G · ln(1 + u_n(y, x) / u_c(y, x))  (1-8)

where T_G is the optimal initial threshold obtained by the EM algorithm and Bayesian decision theory, and u_n(y, x) and u_c(y, x) are the membership values of the pixel at position (y, x) belonging to the unchanged and changed classes, respectively. If 0 < u_n(y, x)/u_c(y, x) < 1, the pixel has a greater probability of being identified as changed, and equation (1-8) lowers its threshold. If 1 < u_n(y, x)/u_c(y, x) < e − 1, the threshold of the pixel is still lowered, to account for the high false detection rate of the FCM algorithm. If u_n(y, x)/u_c(y, x) > e − 1, the pixel has a greater probability of being identified as unchanged, and the threshold is raised.
Finally, the difference image X_D is thresholded with the dynamic threshold T_D to perform change detection:

C = X_D ≥ T_D  (1-9)
c is a result image of change detection, and a value of 1 indicates a changed pixel and a value of 0 indicates an unchanged pixel, calculated by a logical expression.
Rather than merely identifying each pixel as either a changed pixel or an unchanged pixel based on its membership value, the method uses the membership value as a priori information to optimize the threshold decision for identification. The method is simple and effective in identifying meaningful changes, and the detection precision is higher than that of an EM algorithm, a Bayesian decision theory and FCM clustering.
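The per-pixel threshold modification can be sketched compactly, assuming the modification takes the form T_D = T_G · ln(1 + u_n/u_c); this form is an assumption chosen to match the behavior described above (threshold equal to T_G at the ratio e − 1, lowered below it, raised above it), not a formula confirmed by the text:

```python
import numpy as np

def dynamic_threshold(t_global, u_n, u_c, eps=1e-12):
    """Per-pixel dynamic threshold from FCM membership maps.

    Assumed form T_D = T_G * ln(1 + u_n/u_c): equal to T_G when
    u_n/u_c = e - 1, lower below that ratio, higher above it.
    """
    ratio = np.asarray(u_n, dtype=np.float64) / (np.asarray(u_c, dtype=np.float64) + eps)
    return t_global * np.log1p(ratio)

def detect_changes(x_d, t_global, u_n, u_c):
    """Binary change map: 1 where the difference image reaches T_D (equation 1-9)."""
    return (np.asarray(x_d) >= dynamic_threshold(t_global, u_n, u_c)).astype(np.uint8)
```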
1.4 unsupervised Change detection with principal component analysis coupled with K-means clustering
Unsupervised change detection combining Principal Component Analysis (PCA) with K-means clustering is an automatic change detection method that is computationally simple and efficient. The method consists of six steps: (1) generate a difference image; (2) partition the difference image into non-overlapping blocks; (3) create a feature vector space by applying PCA to the non-overlapping blocks; (4) project the overlapping block of data around each pixel onto the feature vector space, creating a feature vector over the entire difference image; (5) cluster the feature vector space into two clusters corresponding to the unchanged class ω_u and the changed class ω_c using the K-means algorithm; (6) assign each pixel to one of the clusters according to the minimum Euclidean distance between its feature vector and the mean feature vectors of the clusters, generating the change map.
The first step of the algorithm is to generate the difference image. Consider two remote sensing images X_1 and X_2 of the same area at two different times t_1 and t_2, both of size H × W pixels, preprocessed by correction and registration. Let X_d be the difference image; it may be defined differently depending on the type of input image. For optical images, X_d can be defined as the absolute difference of the luminance values of the two images, i.e.:

X_d = |X_2 − X_1|  (1-10)
the second step of the process is to subject X to d Division into sizesh x h non-overlapping blocks, where h ≧ 2. Order to
Figure BDA0003947628390000113
As coordinates (y, x) of the differential image block, where h is an odd number, sign "
Figure BDA0003947628390000114
"is rounding up. By using X d Lexicographically ordering (y, x) yields the vector formula:
Figure BDA0003947628390000111
A feature vector space is created from the vector set x_d(y, x) by the PCA method. For ease of mathematical description, x_d^p denotes the vector x_d(y, x), where p = 1, …, P, with P = ⌊H/h⌋ · ⌊W/h⌋, indexes the blocks; the symbol ⌊·⌋ denotes rounding down. The average vector of the set is defined as:

ψ = (1/P) Σ_{p=1}^{P} x_d^p  (1-12)
The difference between each vector and the average vector is:

δ_p = x_d^p − ψ  (1-13)
Principal component analysis finds the orthonormal vectors e_s and their associated scalars λ_s that best describe the distribution of the difference vector set δ_p. The vectors e_s and the corresponding scalars λ_s are the eigenvectors and eigenvalues, respectively, of the covariance matrix C:

C = (1/P) Σ_{p=1}^{P} δ_p δ_p^T  (1-14)

where the superscript "T" denotes the transpose of a vector. The matrix C has size h² × h², so it determines h² eigenvectors and eigenvalues. Assume the eigenvectors produced by C are sorted in descending order of their eigenvalues, i.e. λ_s ≥ λ_{s+1}.
The feature vector space over the whole difference image is formed by projecting, at each pixel at spatial location (i, j), the vector x_d(i, j) onto the eigenvector space:

v(i, j) = [v_1, v_2, …, v_S]^T  (1-15)

where

v_s = e_s^T (x_d(i, j) − ψ)  (1-16)

with 1 ≤ S ≤ h² and 1 ≤ s ≤ S. The parameter S determines the dimensionality of the feature vector v(i, j) at spatial location (i, j), i.e. the number of eigenvectors onto which x_d(i, j) is projected. The vector x_d(i, j) is obtained in the same manner as in equation (1-11), but from the overlapping h × h block around pixel (i, j).
The next stage of the method is to cluster the feature vector space into two clusters with the K-means algorithm. Let v_ωu and v_ωc be the mean feature vectors of the ω_u and ω_c clusters, respectively. To label the clusters produced by K-means, the mean difference-image value of the pixels in each cluster is computed. When a change occurs between the two images in a given region, the difference values there are expected to be higher than in unchanged regions; under this assumption, the cluster with the lower mean pixel value in the difference image is assigned to the ω_u class and the other to the ω_c class.
Using v_ωc and v_ωu, a binary change detection map CM = {cm(i, j) | 1 ≤ i ≤ H, 1 ≤ j ≤ W} is generated, where "1" indicates that the corresponding pixel location has changed (i.e., belongs to ω_c) and "0" that no change has occurred (i.e., belongs to ω_u). This process can be considered an unsupervised threshold:

cm(i, j) = 1 if d(v(i, j), v_ωc) < d(v(i, j), v_ωu), and 0 otherwise  (1-17)

where the symbol "d(·)" denotes the Euclidean distance.
The algorithm is simple in calculation, is suitable for real-time detection application, and has good performance in resisting Gaussian noise and speckle noise with the average value being zero.
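The six steps can be sketched end-to-end with plain NumPy (a simplified illustration with a crude 2-means initialization, not the algorithm's reference implementation; windows here are anchored rather than exactly centered for even h):

```python
import numpy as np

def block_vectors(xd, h):
    """Lexicographically ordered h*h non-overlapping blocks of the difference image."""
    H, W = xd.shape
    vecs = []
    for y in range(0, H - h + 1, h):
        for x in range(0, W - h + 1, h):
            vecs.append(xd[y:y + h, x:x + h].ravel())
    return np.array(vecs, dtype=np.float64)

def pca_basis(vecs, s):
    """Top-s eigenvectors of the covariance of the mean-removed block vectors."""
    psi = vecs.mean(axis=0)
    delta = vecs - psi
    cov = delta.T @ delta / len(vecs)
    w, v = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(w)[::-1]          # re-sort descending
    return psi, v[:, order[:s]]

def feature_space(xd, h, s):
    """Project the overlapping h*h block around every pixel onto the PCA basis."""
    psi, basis = pca_basis(block_vectors(xd, h), s)
    H, W = xd.shape
    pad = np.pad(xd, h // 2, mode='edge')
    feats = np.empty((H, W, s))
    for i in range(H):
        for j in range(W):
            blk = pad[i:i + h, j:j + h].ravel()
            feats[i, j] = basis.T @ (blk - psi)
    return feats

def kmeans2_change_map(xd, feats, iters=20):
    """2-means in feature space; the cluster with higher mean difference -> changed."""
    pts = feats.reshape(-1, feats.shape[-1])
    c = pts[[0, len(pts) - 1]].copy()    # crude initialization from two corners
    for _ in range(iters):
        lab = np.argmin(((pts[:, None] - c[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if np.any(lab == k):
                c[k] = pts[lab == k].mean(axis=0)
    lab = lab.reshape(xd.shape)
    means = [xd[lab == k].mean() if np.any(lab == k) else 0.0 for k in range(2)]
    wc = int(np.argmax(means))           # changed class: higher mean difference
    return (lab == wc).astype(np.uint8)
```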
The general flow of change detection and some theoretical methods of change detection are described in detail above. The methods comprise change detection based on NDVI vegetation index difference, dynamic threshold detection based on unsupervised change of remote sensing images and unsupervised change detection combining principal component analysis and K-means clustering. The generation of a difference image, a vegetation index difference image, and a two-dimensional difference image of a multiband image is described in detail. The empirical determination of the global threshold and the determination of the global threshold based on the EM algorithm and the Bayesian decision theory are described in detail. A dynamic threshold determination method combining FCM basic concept and global threshold with FCM is introduced. The generation of non-overlapping image blocks, the creation of a feature vector space and how to utilize K-means clustering for change detection are described in detail.
2. Filtering and color quantization
2.1 Peer group Filtering
Noise removal and image smoothing are important for many image processing applications; for example, Gaussian filtering and median filtering are often used as preprocessing steps in color image quantization, motion estimation, and image segmentation. For color images, a common method of removing impulse noise is Vector Median Filtering (VMF); other methods include Vector Directional Filtering (VDF) and Directional-Distance Filtering (DDF), the latter being a combination of the VMF and VDF methods. One disadvantage of these approaches is that they tend to modify pixels that are not corrupted by noise. In vector directional filtering, outliers are first detected with a Teager-like operator so that only noisy pixels are replaced, but the detection is performed on each color component individually, which may lead to errors in the final result. For mixed Gaussian and impulse noise, an adaptive nonlinear multivariate filtering method has been proposed for directional-distance filtering; however, because the average of the entire local window is used to estimate the original pixel value, it may blur edges and details.
To address these shortcomings, the prior art discloses peer group filtering (PGF), a nonlinear color image noise removal algorithm that does not blur edges.
Let x_0(n) denote the image pixel vector at location n. Take a window of size w × w centered on it, compute the Euclidean distance d_i(n) from every pixel in the window to x_0(n), and sort the pixels in ascending order of distance, denoting them x_i(n) (i = 0, …, k, with k = w² − 1):

d_i(n) = ||x_0(n) − x_i(n)||, i = 0, …, k  (2-1)

d_0(n) ≤ d_1(n) ≤ … ≤ d_k(n)  (2-2)
The peer group P(n) of the center pixel x_0(n) is defined as:

P(n) = {x_i(n), i = 0, …, m(n) − 1}  (2-3)
where m (n) is the size of P (n), i.e., the number of member pixels.
The peer group P(n) consists of x_0(n) itself and its neighboring pixels of similar color. Choosing an appropriate size m(n) for each peer group based on local statistics is crucial to the success of the algorithm. A simple method would be to set a threshold T(n) such that m(n) satisfies:
d_{m(n)−1}(n) ≤ T(n) and d_{m(n)}(n) ≥ T(n)  (2-4)
However, it is difficult to find a fixed optimal value of T (n) since the signal and noise statistics may vary from image to image and even within the same image.
If there are two color clusters in the window, the two classes can be separated by Fisher's linear discriminant, which maximizes the ratio of inter-class scatter to intra-class scatter. However, for more than two classes this approach cannot isolate the cluster containing the center pixel x_0(n), and in three-dimensional space its computational complexity is also high. To circumvent these problems, a simple approach is to apply the Fisher discriminant estimation only to the one-dimensional distances d_i(n). The criterion to maximize is:
J(i) = |a_1(i) − a_2(i)|² / (s_1²(i) + s_2²(i))  (2-5)

where:

a_1(i) = (1/i) Σ_{j=0}^{i−1} d_j(n)  (2-6)

s_1²(i) = Σ_{j=0}^{i−1} (d_j(n) − a_1(i))²  (2-7)

a_2(i) = (1/(k − i + 1)) Σ_{j=i}^{k} d_j(n)  (2-8)

s_2²(i) = Σ_{j=i}^{k} (d_j(n) − a_2(i))²  (2-9)
the algorithm calculates J (i) for each i and finds the cutoff position where J (i) is the maximum, i.e.:
m(n)=argmax(J(i)),i=0,…,k (2-10)
to eliminate the effect of impulse noise, the distance d is calculated before peer-to-peer group classification is performed i (n) ofFirst order difference f i (n):
f i (n)=d i+1 (n)-d i (n) (2-11)
For x i M points before and after (n) are subjected to the following test to check whether they belong to impulse noise:
f i (n)≤α (2-12)
where M = w/2, half the window size, α is set to a larger value for highly corrupted images and to a smaller value for slightly corrupted images, with α =12 being set in the experiment. If f is i (n) if the threshold is not satisfied, j is less than or equal to i or j>i end point x j (n) regarded as impulse noise and removed. The rest d j (n) is used to estimate true peer-to-peer groups.
After impulse noise removal and peer group classification, the pixel x_0(n) is replaced with a weighted average of its peer group members:

x̂_0(n) = Σ_{i=0}^{m(n)−1} w_i x_i(n) / Σ_{i=0}^{m(n)−1} w_i  (2-13)

where w_i is a standard Gaussian filter weight that depends on the position of x_i(n) relative to x_0(n). If x_0(n) is determined to be impulse noise and has no peer group, the true peer group at that location is estimated from the other peer pixels in the window.
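The peer-group selection for a single window can be sketched as follows (a simplified illustration: the average is unweighted instead of Gaussian-weighted, and the impulse pre-test of equation (2-12) is omitted):

```python
import numpy as np

def fisher_cut(d):
    """Index i maximizing J(i) = (a1 - a2)^2 / (s1^2 + s2^2) over the
    sorted distances d[0..k]; returned as the peer group size m(n)."""
    best_i, best_j = 1, -np.inf
    for i in range(1, len(d)):
        g1, g2 = d[:i], d[i:]
        a1, a2 = g1.mean(), g2.mean()
        s = ((g1 - a1) ** 2).sum() + ((g2 - a2) ** 2).sum()
        j = (a1 - a2) ** 2 / s if s > 0 else np.inf
        if j > best_j:
            best_i, best_j = i, j
    return best_i

def peer_group_filter_window(window):
    """Replace the center pixel of a w x w color window by the average of
    its peer group (unweighted here for simplicity; the method described
    above uses Gaussian weights and an impulse pre-test)."""
    w = window.shape[0]
    pixels = window.reshape(-1, window.shape[-1]).astype(np.float64)
    center = window[w // 2, w // 2].astype(np.float64)
    dist = np.linalg.norm(pixels - center, axis=1)
    order = np.argsort(dist)
    m = fisher_cut(dist[order])
    return pixels[order[:m]].mean(axis=0)
```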
2.2 color quantization
In the previous step, the image was smoothed and denoised with peer group filtering, during which the maximum distance T(n) of each peer group was obtained:

T(n) = d_{m(n)−1}(n)  (2-14)

where d_i(n) (i = 0, …, m(n) − 1) is the distance from a peer group member pixel to the center pixel and m(n) is the size of the peer group, i.e., the number of its members. The value of the maximum distance T(n) of the peer group centered on pixel x(n) represents the smoothness of the local area, and the weight v(n) of each pixel x(n) is calculated by the following formula:
v(n)=exp(-T(n)) (2-15)
the pixels in the noisy region are weighted less than the pixels in the smooth region.
The average value T_avg of T(n) represents the smoothness of the whole image; in general, the higher T_avg, the less smooth the image and the more clusters are needed to quantize its colors. The initial number of clusters K in color quantization is determined by:

K = β T_avg  (2-16)
wherein, in the experiment, β is set to 2.
A weighted, modified K-means algorithm is used in color quantization. The update rule for the cluster centers is modified to a pixel-weighted centroid. The centroid c_i of color class C_i is calculated by:

c_i = Σ_{x(n)∈C_i} v(n) x(n) / Σ_{x(n)∈C_i} v(n)  (2-17)
Since the pixel vectors are weighted, the centroid moves to the point where the weight is higher.
The initial cluster centers are determined using K-means together with the weights. The weighted distortion measure, defined by the following equation, is used to decide which clusters to split until the initial number of clusters K is reached.
D_i = Σ v(n) ||x(n) − c_i||², x(n) ∈ C_i  (2-18)
Thus, less weighted points will be assigned fewer clusters. The number of color clusters in the detailed area is suppressed.
Because fewer clusters are assigned to points with lower weights in the initial cluster-center selection, the number of clusters in textured regions is suppressed. After color quantization, however, a large number of pixels with the same color may still be split across multiple clusters, since the goal of K-means clustering is to minimize global distortion. It is therefore necessary to merge similar clusters until the minimum distance between any two centroids satisfies a preset threshold (12 in the experiments). Each pixel is then assigned to the nearest cluster centroid to obtain the final quantized image.
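The weighting and centroid update described above can be sketched as (a minimal illustration of equations (2-15) to (2-17); the splitting and merging logic of the full quantizer is omitted):

```python
import numpy as np

def pixel_weights(t):
    """Per-pixel weights v(n) = exp(-T(n)) (equation 2-15)."""
    return np.exp(-np.asarray(t, dtype=np.float64))

def initial_cluster_count(t, beta=2.0):
    """Initial number of clusters K = beta * T_avg (equation 2-16), at least 1."""
    return max(1, int(round(beta * float(np.mean(t)))))

def weighted_centroid(pixels, weights):
    """Weighted centroid c_i = sum(v(n) x(n)) / sum(v(n)) (equation 2-17)."""
    pixels = np.asarray(pixels, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)[:, None]
    return (w * pixels).sum(axis=0) / w.sum()
```

Because the pixel vectors are weighted, the centroid moves toward the points with higher weights, as noted above.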
2.3 JSEG image segmentation
The goal of object-based change detection is to accurately extract the ground objects in the two time-phase remote sensing images. The JSEG segmentation algorithm is one of the most commonly used color image segmentation methods. JSEG is sensitive to edges but weak at perceiving the interior of objects with uniform pixel distribution, so it can extract object boundaries well and thereby achieve image segmentation. The main part of the JSEG algorithm is therefore used to extract the ground objects in the two time-phase remote sensing images in order to complete OOCD change detection.
2.3.1 J value and J image
In the previous step, the colors in the image were coarsely quantized without significantly degrading the color quality; the aim is to extract only a few representative colors that can distinguish neighboring regions in the image. Typically only 10 to 20 colors are needed in an image of a natural scene. After quantization, the colors are divided into classes, and an image with these class labels as pixel values is referred to here as a class map.
Assume the point set Z is the set of all N points in a class map and U is the universe of the class map. Let z = (x, y), z ∈ Z, be the coordinates of a point in the image, and m the coordinate center of the point set:

m = (1/N) Σ_{z∈Z} z  (2-19)
Divide Z into C parts according to color class, i.e. Z = {Z_i, 1 ≤ i ≤ C}; then m_i is the center of the point set Z_i:

m_i = (1/N_i) Σ_{z∈Z_i} z  (2-20)
The J value of a region is defined as:

J = (S_T − S_W) / S_W  (2-21)

where:

S_T = Σ_{z∈Z} ||z − m||²  (2-22)

S_W = Σ_{i=1}^{C} Σ_{z∈Z_i} ||z − m_i||²  (2-23)
as can be seen from the class-map plots of FIGS. 3, 4, and 5 and the corresponding J values, the J values characterize the uniformity of the distribution of color classes within a region. FIG. 3 shows that several classes are separately piled and distributed unevenly; FIG. 4 is a graph in which each class is uniformly distributed in region U; fig. 5 is a combination of the two.
If the region U is divided into k regions U_i (1 ≤ i ≤ k) with M_i points in each region, the J value J_i of each region U_i is calculated separately, and the average J value J_avg of the region U is defined as:

J_avg = (1/N) Σ_{i=1}^{k} M_i J_i  (2-24)
J_avg can be used as a criterion to compare candidate segmentations of an image. For a fixed number of regions, a better segmentation tends to have a lower J_avg; accordingly, the optimal segmentations of FIG. 3 and FIG. 5 are shown in FIG. 6 and FIG. 7, respectively.
Fig. 6 and 7 are schematic diagrams of optimal segmentation. (a) J_+ = 0, J_* = 0, J_o = 0, J_avg = 0; (b) J_+ = 0, J_{*,o} = 0.011, J_avg = 0.05.
Globally minimizing J_avg over the entire image is impractical, because there are millions of ways to segment an image. However, if the J value is applied to a local region of the class map, it indicates well whether that region lies in the interior of an object or near a region boundary. One can therefore construct an image whose pixel values correspond to J values computed over a small window centered on each pixel. These images are referred to as J-images, and the corresponding pixel values as local J values. The higher the local J value, the more likely the corresponding pixel is near a region boundary. The J-image is like a three-dimensional topographic map containing valleys and hills, which represent region interiors and region boundaries, respectively.
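The J value of equations (2-19) to (2-23) over a window of class labels can be computed directly (a minimal sketch):

```python
import numpy as np

def j_value(class_map):
    """J = (S_T - S_W) / S_W for a window of class labels (equation 2-21)."""
    h, w = class_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float64)
    labels = class_map.ravel()
    m = pts.mean(axis=0)                       # coordinate center of all points
    s_t = ((pts - m) ** 2).sum()               # total scatter S_T
    s_w = 0.0                                  # within-class scatter S_W
    for c in np.unique(labels):
        cpts = pts[labels == c]
        s_w += ((cpts - cpts.mean(axis=0)) ** 2).sum()
    return (s_t - s_w) / s_w if s_w > 0 else np.inf
```

A class map whose classes sit in separate halves yields a high J, while a uniformly mixed (checkerboard) map yields J near 0, matching the behavior described for FIGS. 3 and 4.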
Small size windows are useful for locating intensity/color edges, while large size windows are also useful for detecting texture boundaries. Typically, multiple scales are required to segment an image. In the embodiment of the present invention, the basic window at the smallest dimension is a 9 × 9 circular window without corners, as shown in fig. 8 and 9.
Fig. 8 and 9 are schematic diagrams of windows for calculating local J values. FIG. 8 is a basic window for calculating local J values; FIG. 9 is a schematic of down-sampling of a 2-level window, with only the "+" point used to calculate the local J value.
2.3.2 seed determination
The characteristics of the J-image allow segmentation of the image using region growing methods. The algorithm starts segmenting the image at a coarse initial scale. The same process is then repeated for the newly segmented region on the next finer scale. Region growing includes determining seed points and growing from these seed locations. Region growing is followed by a region merging operation resulting in the final segmented image.
A set of initial seed regions, corresponding to local minima of the local J values, is determined as the basis for region growing. Finding good initial seed regions is important in general; the following simple heuristic gave good results in the experiments:
(1) Calculate the mean and standard deviation of the local J values, denoted μ_J and σ_J.
(2) Set a threshold variable T_J:

T_J = μ_J + a σ_J  (2-25)

where a takes several different preset values. Pixels with local J values below T_J are taken as candidate seed points, and candidate seed regions are obtained by connecting the candidate seed points using 4-connectivity. Different thresholds yield different seed regions and different numbers of them; the threshold yielding the largest number is taken as the final result.
(3) A candidate seed region is accepted as a seed if its size exceeds the minimum size listed in Table 1 for the corresponding scale.
TABLE 1 Window sizes at different scales

Scale  Window (pixels)  Sampling (1/pixels)  Minimum seed size (pixels)
1      9 × 9            1/(1 × 1)            32
2      17 × 17          1/(2 × 2)            128
3      33 × 33          1/(4 × 4)            512
4      65 × 65          1/(8 × 8)            2048
2.3.3 seed growth
After the initial seeds are determined, new regions are grown from them. Growing the seeds pixel by pixel is slow, so a faster approach is used in the implementation:
(1) Filling the voids in the seed.
(2) The local J values of the remaining unassigned portions of the region are averaged and pixels below the average are connected to form a growing region. If a growth region is adjacent to one and only one seed, it is assigned to that seed.
(3) The local J values of the remaining pixels at the next smaller scale are calculated and step 2 is repeated to more accurately locate the boundary.
(4) The remaining pixels are grown one by one at the smallest scale. Unclassified pixels at the seed boundary are stored in a buffer. Each time, the pixel with the smallest J value is assigned to its neighboring seed and the buffer is updated until all pixels are classified.
2.4 results and analysis
The data used in the experiments cover an area where a magnitude-6.3 earthquake occurred in February 2011 and which was reconstructed over the following years. The data set consists of aerial images acquired in April 2012, including 12796 buildings within a 20.5 km² area (the 2016 data set contains 16077 buildings in the same region). Thirty Ground Control Points (GCPs) were manually selected on the ground for geographic correction, yielding an aerial data set with an accuracy of 1.6 pixels and a resolution of 0.2 meters.
An experiment was performed by selecting image 1 and image 2 with a block number of 032 from the data set, as shown in fig. 10 and 11. The image size is 512 × 512 pixels, which is an RGB 24-bit true color image.
The window size set of J values is set to 9 × 9, 17 × 17, 33 × 33, and 65 × 65 pixels, and the number of scales M =4. Fig. 12 and 13 are J images of 33 x 33 pixel windows for image 1 and image 2, respectively.
By image segmentation of images 1 and 2 at different scales, respectively, a sequence of segmented objects of image 1 and image 2 is obtained, which objects are to be used for change detection. Fig. 14 and 15 show the results of the segmentation of the two images at scale 3. By visually comparing fig. 10 and 11 with fig. 14 and 15, at the scale 3, the method can better segment the buildings, and the segmented edges substantially conform to the actual edges of the buildings.
3. Object change based detection
3.1 similarity measure
For the features of the J-image, each object in the multi-scale J-image needs to be analyzed and compared according to the segmentation result. It is therefore important to select a suitable similarity measure to describe the similarity of an object at different times. Common measures include various "distances," such as the Euclidean and Mahalanobis distances, histogram matching, covariance, and the like.
3.1.1 SSIM
Structural Similarity (SSIM) takes into account the mean, variance and covariance of vectors, and thus can represent the Similarity between vectors well. The SSIM between vector x and vector y is defined by:
S(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ  (3-1)
where:

l(x, y) = (2μ_x μ_y + C_1) / (μ_x² + μ_y² + C_1)  (3-2)

c(x, y) = (2σ_x σ_y + C_2) / (σ_x² + σ_y² + C_2)  (3-3)

s(x, y) = (σ_xy + C_3) / (σ_x σ_y + C_3)  (3-4)

In the formulas, μ_x, μ_y, σ_x, σ_y, σ_x², σ_y² are the means, standard deviations, and variances of x and y, respectively; σ_xy is the covariance between x and y; α, β, and γ are the weights of the three terms; and C_1, C_2, and C_3 are constants that prevent instability when the denominators are near zero.
Often α = β = γ = 1 and C_3 = C_2 / 2, so equation (3-1) can be simplified to:

S(x, y) = (2μ_x μ_y + C_1)(2σ_xy + C_2) / ((μ_x² + μ_y² + C_1)(σ_x² + σ_y² + C_2))  (3-5)
the larger S (x, y), the smaller the object variation between multi-temporal images, and the higher the similarity. Furthermore, SSIM has the following characteristics by its definition:
It is bounded: S(x, y) ∈ [0, 1]. It is symmetric: S(x, y) = S(y, x). It has a unique maximum: S(x, y) = 1 when x = y.
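The simplified form (3-5) can be sketched globally over two images (a single-window illustration; the constants C_1 = (0.01·255)² and C_2 = (0.03·255)² are common defaults assumed here, not values taken from the text):

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Simplified SSIM of equation (3-5), computed over whole images."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()        # covariance sigma_xy
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```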
3.1.2 PSNR
Peak Signal-to-Noise Ratio (PSNR) is an index that expresses the signal-to-noise ratio in decibels. It is commonly used to assess quality after reconstruction of an original image: the PSNR value indicates the quality of the reconstructed image, with higher values meaning higher quality and vice versa. PSNR is calculated by:
PSNR = 10 · log_10(MAX² / MSE)  (3-6)
where MAX represents the maximum pixel value of the image, and MAX is equal to 255 if the image is an 8-bit grayscale image; MSE represents the mean square error between the actual image and the predicted/degraded image, and in both images I and K, which are m × n in size, MSE is calculated by the following equation:
MSE = (1/(mn)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [I(i, j) − K(i, j)]²  (3-7)
the smaller the MSE, the more similar the two images are. And MSE is at the denominator position of equation (3-6), so the larger the value of PSNR, the more similar the two images. Compared to SSIM, PSNR does not meet the "bounded" feature, so this similarity measure is normalized before change detection.
3.2 Multi-Scale fusion and Change detection
Compared with single-scale detection, a multi-scale fusion strategy can effectively improve detection accuracy and produce more reliable results. The fusion strategy employs weighted data fusion, where w_i (i = 1, …, M) denotes the weight of the detection result at each scale. The multi-scale fusion change detection decision rule is as follows:
(1) First, for each object R_j, the multi-scale similarity S_j is calculated by:

S_j = Σ_{i=1}^{M} w_i S_j,i  (3-8)

where S_j,i is the similarity of object R_j at scale i.
(2) Then, given a similarity threshold T_S for change detection, if S_j ≥ T_S, R_j is regarded as unchanged and labeled 0.
(3) Otherwise, R_j is regarded as changed and labeled 1.
(4) And (4) repeating the steps (1) to (3) until all the objects in the segmentation result are completed.
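The decision rule above amounts to a weighted sum followed by thresholding (a minimal sketch; the per-scale similarities would come from SSIM or normalized PSNR):

```python
import numpy as np

def fuse_and_detect(similarities, weights, t_s):
    """Weighted multi-scale fusion (equation 3-8) and thresholding.

    similarities: (num_objects, num_scales) per-object similarity per scale.
    Returns the fused similarities and labels (1 = changed, 0 = unchanged).
    """
    s = np.asarray(similarities, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    fused = s @ w                                  # S_j = sum_i w_i * S_j,i
    return fused, (fused < t_s).astype(np.uint8)   # below threshold -> changed
```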
3.3 landscape pattern index filtering
The shapes of buildings are generally regular, mostly rectangles or combinations of rectangles. If regularly shaped change regions can be identified, irregularly shaped falsely detected non-building regions can be filtered out; the landscape pattern index is therefore introduced in the invention.
The landscape pattern is a spatial structural characteristic of a landscape. Landscape pattern indices reflect the structural composition and spatial configuration of the landscape, and commonly include the area, the perimeter-to-area ratio, the fractal dimension, and so on. The fractal dimension adopted by the invention is an index reflecting the shape complexity of a landscape object, calculated as:
idx = 2·ln(C/4) / ln(S)
where C is the perimeter of the object and S is the area of the object.
A fractal dimension close to 1 indicates a very simple shape, such as a square; larger values indicate more complex shapes. The fractal dimension can therefore be used to distinguish man-made structures on the ground, such as buildings and roads. A threshold T_idx is set for each detected change-region object; when the fractal dimension index of an object exceeds the threshold, the object is regarded as a non-building.
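Assuming the common landscape-ecology form of the fractal dimension, idx = 2·ln(C/4)/ln(S), the filtering step can be sketched as follows (function names and the default threshold are illustrative; a square of side 10 gives idx = 1 exactly, while a 1 × 100 strip gives idx ≈ 1.7):

```python
import math

def fractal_dimension(perimeter, area):
    """Landscape-ecology fractal dimension: ~1 for squares, larger for
    complex or elongated shapes. Assumes area > 1 (pixel counts)."""
    return 2.0 * math.log(perimeter / 4.0) / math.log(area)

def filter_non_buildings(regions, t_idx=1.1):
    """regions: list of (perimeter, area) pairs of detected change objects.
    Objects whose fractal dimension exceeds t_idx are regarded as
    non-buildings and removed; the rest are kept."""
    return [r for r in regions if fractal_dimension(*r) <= t_idx]
```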
In the fusion strategy, the weights are set as w_1 = 0.1, w_2 = 0.2, w_3 = 0.4 and w_4 = 0.3. Under the SSIM similarity measure, the change detection threshold is set to 0.5; under the PSNR measure, the normalized change detection threshold is set to 0.27.
Based on visual observation and comparison of fig. 10 and fig. 11, labeled analysis maps of the change detection results under the two measures are obtained. The analysis is as follows: (1) Change detection under the SSIM measure is incomplete for newly built buildings, while the PSNR measure detects this part almost completely. (2) For region B of the two images, the SSIM measure wrongly regards trees as a change region, whereas the PSNR measure makes no such misjudgment. (3) The SSIM measure wrongly regards a parking lot as a change region; the PSNR measure is sensitive to the shaded parts of the parking lot. (4) The SSIM measure misjudges open ground as a change region. (5) Change detection under the SSIM measure produces stripes in the image and cannot detect whole buildings. (6) Under the PSNR measure, shadow areas are detected as change regions.
It can thus be concluded that change detection with SSIM may fail to detect entire actual features and may misjudge unchanged regions as changed. Change detection with PSNR can detect unchanged and changed regions completely, but is susceptible to the influence of shadows.
On this basis, exploiting the characteristic that building shapes are relatively regular, the fractal dimension in the landscape pattern index is used to filter the PSNR change detection result; the result is recorded as PSNR + JG. The fractal dimension threshold T_idx is set to 1.1. Compared with the PSNR change detection, PSNR + JG removes irregular, elongated areas from the result and smooths it. The change regions detected by the PSNR + JG method therefore agree better with the actual situation.
The performance of the two measures and of the PSNR + JG method is quantitatively evaluated by calculating the overall accuracy, false alarm rate, miss rate, and Kappa coefficient; the results are shown in Table 2.
TABLE 2 Detection precision of multi-scale fusion change detection under the two measures and of PSNR + JG
(Table 2 is reproduced as an image in the original filing.)
Combined with the accuracy evaluation indices, in this experiment the SSIM measure is clearly unsuitable as a similarity measure for change detection. The change detection performance of the PSNR similarity measure is much better than that of the SSIM measure, and PSNR + JG change detection achieves the highest overall accuracy and Kappa coefficient with a lower false alarm rate.
In order to analyze the dependence of change detection on scale, the detection results and their accuracy metrics are further compared. Table 3 gives a comparison of the quantitative evaluation indices of the above detection results.
TABLE 3 Detection precision of single-scale change detection results of PSNR and PSNR + JG
(Table 3 is reproduced as an image in the original filing.)
Combined with the accuracy evaluation of Table 3, the analysis leads to the following conclusions: (1) For a given threshold T_S, the detected change region shrinks as the scale increases: scale 1 detects the most, and scale 4 detects hardly any change region. (2) Scale 3 performs best among all single-scale change detections. (3) Multi-scale fusion is optimal in terms of both visual effect and accuracy evaluation. (4) Adding the JG operation yields higher overall accuracy for scale 3 or for multi-scale fusion; however, the JG operation is effective only when the detected regions are complete and well delineated.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any modification, equivalent replacement, and improvement made by those skilled in the art within the technical scope of the present invention disclosed in the present invention should be covered within the scope of the present invention.

Claims (10)

1. A method for detecting the change of a high-resolution remote sensing image based on an object is characterized by comprising the following steps:
firstly, carrying out radiation correction and geometric correction on high-resolution remote sensing data; sequentially carrying out peer-to-peer filtering, color quantization, J value calculation and space segmentation processing on the corrected high-resolution remote sensing data;
secondly, carrying out quantitative evaluation on the similarity of the objects by using two similarity measures of SSIM and PSNR; and performing change detection by using a multi-scale fusion strategy, and filtering a suspected false detection area in the detection result by using the fractal dimension in the landscape pattern index to obtain a change detection result.
2. The object-based high resolution remote sensing image change detecting method according to claim 1, wherein the object-based high resolution remote sensing image change detecting method comprises the steps of:
step one, carrying out radiation correction and geometric correction on high-resolution remote sensing data; smoothing and denoising the two corrected remote sensing images through peer-to-peer filtering;
step two, merging clusters with similar colors by using the modified fast K-means clustering to obtain a color quantized image;
thirdly, JSEG image segmentation is carried out on the quantized image on different scales to obtain a multi-scale ground object; carrying out quantitative evaluation on the similarity of the objects by using two similarity measures of SSIM and PSNR;
step four, change detection is carried out by utilizing a multi-scale fusion strategy; and filtering the suspected false detection area by using the fractal dimension in the landscape pattern index to obtain a change detection result.
3. The method for detecting changes in object-based remote sensing images with high resolution as set forth in claim 2, wherein the smoothing and denoising of the two corrected remote sensing images by peer-to-peer filtering comprises:
(1) Calculate the first-order difference f_i(n) of d_i(n):
f_i(n) = d_{i+1}(n) − d_i(n);
wherein d_i(n) represents the Euclidean distance from each pixel in the window to the center pixel x_0(n);
(2) Test the M points before and after x_i(n) with the following formula to judge whether they belong to impulse noise:
f_i(n) ≤ α;
wherein M = w/2 is half the window size; α is set to a large value for heavily corrupted images and to a small value for slightly corrupted images;
(3) If f_i(n) does not satisfy the condition, the endpoint pixels x_j(n) with j ≤ i or j > i are treated as impulse noise and removed; the remaining d_j(n) are used to estimate the true peer group;
(4) Replace the pixel x_0(n) with a weighted average of the peer group members:
x_0(n) = Σ_i w_i · x_i(n) / Σ_i w_i
wherein w_i represents a standard Gaussian filtering weight;
(5) If x_0(n) belongs to impulse noise and has no peer group, the true peer group at that location is estimated from other peer pixels in the window.
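Steps (1)–(4) above can be illustrated for a one-dimensional grayscale window (a simplified sketch, not the claimed method; the α and Gaussian σ defaults, and the function name, are illustrative assumptions):

```python
import math

def peer_group_filter_window(window, center, alpha=30.0, sigma=1.0):
    """Simplified peer-group filtering of one window of scalar pixels.
    window: pixel values in the window (including the center pixel);
    center: the center pixel value. Returns the filtered center value."""
    # d_i: distances from every window pixel to the center, ascending
    order = sorted(window, key=lambda x: abs(x - center))
    d = [abs(x - center) for x in order]
    # f_i: first-order differences of the sorted distances
    f = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    # peer group: the prefix before the first large jump (> alpha),
    # which marks the start of impulse-noise outliers
    cut = len(order)
    for i, gap in enumerate(f):
        if gap > alpha:
            cut = i + 1
            break
    peers = order[:cut]
    # replace the center by a Gaussian-weighted average of peer members
    weights = [math.exp(-(x - center) ** 2 / (2 * sigma ** 2)) for x in peers]
    if sum(weights) == 0:       # degenerate case: keep the raw value
        return center
    return sum(w * x for w, x in zip(weights, peers)) / sum(weights)
```

For a window [10, 11, 12, 200] centered on 10, the outlier 200 is cut off by the distance jump and the result stays near the smooth values 10–12.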
4. The method of claim 2, wherein the color quantization using the modified fast K-means clustering and the merging of clusters with similar colors to obtain a color quantized image comprises:
1) Obtain the maximum distance T(n) of each peer group during peer group filtering:
T(n) = d_{m(n)−1}(n);
wherein d_i(n) (i = 0, …, m(n)−1) represents the distance from a peer group member pixel to the center pixel, and m(n) represents the size of the peer group, i.e., the number of peer group members;
2) The value of the maximum peer group distance T(n), centered on pixel x(n), reflects the smoothness of the local area; the weight v(n) of each pixel x(n) is calculated using the following formula:
v(n) = exp(−T(n));
3) The initial number of clusters K in the color quantization is determined using the following equation:
K = β·T_avg;
wherein T_avg represents the average value of T(n), and β represents a parameter;
4) The update rule of the cluster centers uses pixel-weighted centroids; the centroid c_i of color class C_i is calculated by:
c_i = Σ_{x(n)∈C_i} v(n) · x(n) / Σ_{x(n)∈C_i} v(n)
5) Determine the initial cluster centers with the weighted K-means, measure the weighted distortion with the following formula, and determine which cluster to split until the initial cluster number K is reached:
D_i = Σ v(n)·‖x(n) − c_i‖², x(n) ∈ C_i;
6) Merge similar clusters so that the minimum distance between two centroids satisfies a preset threshold, and assign each pixel to the nearest cluster centroid to obtain the color quantized image.
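The weighted centroid of step 4) and the weighted distortion of step 5) can be illustrated for a scalar (grayscale) feature (a simplified sketch; the function names are ours, not from the patent):

```python
def weighted_centroid(pixels, weights):
    """Weighted centroid c_i of one color class: pixels weighted by the
    smoothness weights v(n) from peer group filtering."""
    return sum(w * x for w, x in zip(pixels, weights)) / sum(weights)

def weighted_distortion(pixels, weights, centroid):
    """Weighted distortion D_i = sum of v(n) * ||x(n) - c_i||^2,
    written here for a scalar feature."""
    return sum(w * (x - centroid) ** 2 for x, w in zip(pixels, weights))
```

Pixels in smooth areas (larger v(n)) pull the centroid toward themselves; the cluster with the largest distortion would be the candidate for splitting.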
5. The object-based high resolution remote sensing image change detection method of claim 2, wherein the obtaining of the multi-scale surface feature object by performing JSEG image segmentation on the quantized image at different scales comprises:
(1) Take the set of all N points of the color-quantized image as point set Z and divide Z into C parts by color class, Z = {Z_i, 1 ≤ i ≤ C}; calculate the center m_i of each point set Z_i:
m_i = (1/N_i) · Σ_{z∈Z_i} z
where N_i is the number of points in Z_i;
wherein z = (x, y), z ∈ Z, are the coordinates of a point in the image;
(2) Determine the J value of the region:
J = (S_T − S_W)/S_W;
wherein m is the mean of all points in Z, and:
S_T = Σ_{z∈Z} ‖z − m‖²
S_W = Σ_{i=1}^{C} Σ_{z∈Z_i} ‖z − m_i‖²
(3) Divide the region U of the color-quantized image into k regions U_i (1 ≤ i ≤ k), each containing M_i points; calculate the J value J_i of each region U_i and the average J value J_avg over the region U:
J_avg = (1/N) · Σ_{i=1}^{k} M_i · J_i
(4) Construct an image whose pixel values are the J values calculated over a small window centered on each pixel, yielding the J-image; the corresponding pixel values serve as local J values;
(5) Calculate the mean and standard deviation of the J-image, denoted μ_J and σ_J, and calculate the threshold variable T_J:
T_J = μ_J + a·σ_J;
wherein a takes several different preset values;
(6) Screen initial seeds: take pixels with a local J value less than T_J as candidate seed points; connect the candidate seed points based on 4-connectivity to obtain candidate seed regions; compute different thresholds with different values of a to obtain different seed regions and counts, and take the result with the most seeds as final; if the size of a candidate seed region exceeds the corresponding minimum size, it is determined to be a seed;
(7) Fill the holes in the seeds; average the local J values of the remaining unassigned part of the region and connect the pixels below the average to form growing regions; if a growing region is adjacent to one and only one seed, assign it to that seed;
(8) Calculate the local J values of the remaining pixels at the next smaller scale, repeat step (7), and locate the boundaries; grow the remaining pixels one by one at the minimum scale: the pixel with the smallest J value is assigned to the adjacent seed, and the buffer storing the unclassified pixels at seed boundaries is updated until all pixels are classified.
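The J value of step (2) can be checked with a small self-contained sketch (a simplified illustration with hypothetical names; well-separated color classes give a large J, interleaved classes give a small one):

```python
def j_value(points_by_class):
    """J = (S_T - S_W) / S_W over 2-D points grouped by color class.
    points_by_class: list of classes, each a list of (x, y) coordinates.
    Assumes at least one class has within-class spread (S_W > 0)."""
    all_pts = [p for cls in points_by_class for p in cls]
    n = len(all_pts)
    # m: mean of all points
    mx = sum(p[0] for p in all_pts) / n
    my = sum(p[1] for p in all_pts) / n
    # S_T: total scatter about the global mean
    s_t = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in all_pts)
    # S_W: scatter about each class center m_i
    s_w = 0.0
    for cls in points_by_class:
        cx = sum(p[0] for p in cls) / len(cls)
        cy = sum(p[1] for p in cls) / len(cls)
        s_w += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in cls)
    return (s_t - s_w) / s_w
```

Two spatially separated classes yield J = 100 in the toy case below, while the same points with the classes interleaved yield J = 0, matching JSEG's use of J as a homogeneity measure.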
6. The method for detecting changes in object-based high resolution remote sensing images according to claim 2, wherein the changes are detected using a multi-scale fusion strategy; the step of filtering the suspected false detection area of the detection result by using the fractal dimension in the landscape pattern index to obtain the change detection result comprises the following steps:
1) Calculate the multi-scale similarity S_j of each object R_j using the following formula:
S_j = Σ_{i=1}^{M} w_i · S_{i,j};
2) Compare the multi-scale similarity S_j of each object R_j with the similarity decision threshold T_S for change detection: if S_j ≥ T_S, R_j is left unchanged and labeled 0; otherwise, R_j is judged to have changed and labeled 1;
3) Repeat steps 1) and 2) until all objects in the segmentation result have been processed;
4) Calculate the fractal dimension of all detected change-region objects using the following formula:
idx = 2·ln(C/4) / ln(S)
wherein C represents the circumference of the object; s represents the area of the object;
5) Compare the computed fractal dimension index of each change-region object with T_idx; if it exceeds the threshold, judge the object to be a non-building and exclude it from the change detection result; traverse all change-region objects to obtain the final change detection result.
7. An object-based high-resolution remote sensing image change detection system for implementing the object-based high-resolution remote sensing image change detection method according to any one of claims 1 to 6, wherein the object-based high-resolution remote sensing image change detection system comprises:
the data processing module is used for carrying out radiation correction and geometric correction on the high-resolution remote sensing data;
the object extraction module is used for sequentially carrying out peer-to-peer filtering, color quantization, J value calculation and space segmentation processing on the corrected high-resolution remote sensing data to obtain a plurality of objects;
the object analysis module is used for carrying out quantitative evaluation on the similarity of the objects by using two similarity measures of SSIM and PSNR respectively to obtain a detection result;
and the multi-scale fusion and fractal dimension filtering module is used for carrying out change detection by utilizing a multi-scale fusion strategy and filtering a suspected false detection area in the detection result by utilizing the fractal dimension in the landscape pattern index to obtain a change detection result.
8. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of the object-based high-resolution remote sensing image change detection method according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the object-based high-resolution remote sensing image change detection method according to any one of claims 1 to 6.
10. An information data processing terminal for implementing the object-based high resolution remote sensing image change detecting system according to claim 7.
CN202211439434.2A 2022-11-17 2022-11-17 Object-based high-resolution remote sensing image change detection method and system Pending CN115690086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211439434.2A CN115690086A (en) 2022-11-17 2022-11-17 Object-based high-resolution remote sensing image change detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211439434.2A CN115690086A (en) 2022-11-17 2022-11-17 Object-based high-resolution remote sensing image change detection method and system

Publications (1)

Publication Number Publication Date
CN115690086A true CN115690086A (en) 2023-02-03

Family

ID=85054357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211439434.2A Pending CN115690086A (en) 2022-11-17 2022-11-17 Object-based high-resolution remote sensing image change detection method and system

Country Status (1)

Country Link
CN (1) CN115690086A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091881A (en) * 2023-02-14 2023-05-09 安徽星太宇科技有限公司 Remote sensing information management system based on multisource data fusion
CN116341932A (en) * 2023-05-31 2023-06-27 自然资源部第二海洋研究所 Tidal flat change monitoring method and system based on double remote sensing time sequence indexes
CN116341932B (en) * 2023-05-31 2023-08-22 自然资源部第二海洋研究所 Tidal flat change monitoring method and system based on double remote sensing time sequence indexes
CN117292137A (en) * 2023-11-27 2023-12-26 广东泰一高新技术发展有限公司 Aerial remote sensing image optimization segmentation processing method
CN117292137B (en) * 2023-11-27 2024-04-19 广东泰一高新技术发展有限公司 Aerial remote sensing image optimization segmentation processing method

Similar Documents

Publication Publication Date Title
CN109522908B (en) Image significance detection method based on region label fusion
EP3455782B1 (en) System and method for detecting plant diseases
CN107330875B (en) Water body surrounding environment change detection method based on forward and reverse heterogeneity of remote sensing image
Zaitoun et al. Survey on image segmentation techniques
Park et al. Color image segmentation based on 3-D clustering: morphological approach
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN108537239B (en) Method for detecting image saliency target
CN115690086A (en) Object-based high-resolution remote sensing image change detection method and system
CN107067405B (en) Remote sensing image segmentation method based on scale optimization
Kim et al. Color–texture segmentation using unsupervised graph cuts
Hoonhout et al. An automated method for semantic classification of regions in coastal images
Wang et al. A novel multi-scale segmentation algorithm for high resolution remote sensing images based on wavelet transform and improved JSEG algorithm
CN108805057B (en) SAR image reservoir area detection method based on joint significance analysis
CN107992856B (en) High-resolution remote sensing building shadow detection method under urban scene
Byun et al. A multispectral image segmentation approach for object-based image classification of high resolution satellite imagery
Lone et al. Object detection in hyperspectral images
Sutha Object based classification of high resolution remote sensing image using HRSVM-CNN classifier
CN113850792A (en) Cell classification counting method and system based on computer vision
Wang et al. Color image segmentation using PDTDFB domain hidden Markov tree model
CN111882573A (en) Cultivated land plot extraction method and system based on high-resolution image data
CN108805186B (en) SAR image circular oil depot detection method based on multi-dimensional significant feature clustering
Luus et al. The effects of segmentation-based shadow removal on across-date settlement type classification of panchromatic QuickBird images
Yang et al. A method for tree image segmentation combined adaptive mean shifting with image abstraction
Abraham et al. Unsupervised building extraction from high resolution satellite images irrespective of rooftop structures
Vasquez et al. An iterative approach for obtaining multi-scale superpixels based on stochastic graph contraction operations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination