CN104680542A - Online learning based detection method for change of remote-sensing image - Google Patents

Online learning based detection method for change of remote-sensing image

Info

Publication number
CN104680542A
CN104680542A (application CN201510112839.9A; granted as CN104680542B)
Authority
CN
China
Prior art keywords
image
change detection
pixel
detection result
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510112839.9A
Other languages
Chinese (zh)
Other versions
CN104680542B (en)
Inventor
张建龙
翟建峰
李洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201510112839.9A priority Critical patent/CN104680542B/en
Publication of CN104680542A publication Critical patent/CN104680542A/en
Application granted granted Critical
Publication of CN104680542B publication Critical patent/CN104680542B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30181: Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an online learning based change detection method for remote-sensing images, aimed at the unstable results and poor precision of existing detection techniques. The method comprises the following steps: acquire two remote-sensing images; construct two difference images according to the type of the remote-sensing images; build a training sample library for the first difference image and partition the image into a set of image blocks treated as video frames; perform change detection on each frame image block with a cascade classifier driven by an online learning strategy; stitch the detection results of all frame image blocks into the detection result CM1 of the first difference image; process the second difference image in the same way to obtain the detection result CM2; map the detection results CM1 and CM2 to gray-level images and fuse them to obtain the fused difference image XF; cluster XF to obtain the final change detection result. The detection results obtained for different types of remote-sensing images are robust and accurate, and the method is applicable to urban planning.

Description

Remote sensing image change detection method based on online learning
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a change detection method for SAR remote sensing images and optical remote sensing images. The method can be used for land-cover use monitoring, urban development planning, natural disaster assessment, and map updating.
Background
Remote sensing image change detection aims to detect the changes between images of the same area acquired at different times, i.e. how the ground features in the area change over time. At present it is widely applied to the assessment of natural disasters such as earthquakes, floods, debris flows and forest fires, the updating of geospatial data in surveying and mapping, the investigation and handling of illegal buildings, and the planning and construction of post-disaster urban reconstruction.
The most common approach to remote sensing image change detection is based on difference image analysis. It mainly comprises three stages: 1) preprocessing the remote sensing images, including radiometric correction and geometric registration; 2) comparing the corrected remote sensing images to obtain a difference image; 3) analyzing the difference image and dividing it into a changed class and an unchanged class by thresholding, clustering or similar methods, which yields the final change detection result.
According to the classification method used in the difference image analysis stage, i.e. whether a training sample set is available, change detection methods can be divided into unsupervised and supervised methods. Unsupervised methods need no prior change sample information; the change detection result is obtained directly from the difference image by a clustering or segmentation algorithm. For these methods the selection of the segmentation threshold of the difference image is the key problem: it directly affects the overall accuracy of the detection result, and how to set the threshold to improve that accuracy has become one of the questions studied by many scholars. Compared with unsupervised methods, supervised classification can identify the changed parts more accurately and is more robust to different atmospheric and illumination conditions, but good results require a large number of training samples, i.e. a large amount of ground-truth change information; gathering such ground truth is difficult and time consuming.
Disclosure of Invention
The invention aims to provide a remote sensing image change detection method based on online learning that overcomes the low detection precision and poor robustness of unsupervised change detection methods and the need of supervised change detection methods for a large number of training samples.
The invention is a semi-supervised change detection method: starting from a small number of training samples, it continuously enlarges the sample library through online learning, improves the classification performance of the classifier, and analyzes the difference images to obtain the change detection result. The implementation steps are as follows:
(1) obtaining two remote sensing images X1 and X2 of size I×J that have undergone radiation correction and geometric registration, where I is the number of rows and J the number of columns of the remote sensing images;
(2) constructing two difference images XL and XD from the two remote sensing images X1 and X2;
(3) for the first difference image XL, constructing a sample library P0 based on 2×2 image blocks and a single-pixel sample library P1, and initializing the first stage of the cascade classifier, namely the threshold TH0 of the mean classifier, according to the sample library P0;
(4) partitioning the first difference image XL, from left to right and from top to bottom, into a set of N×N image blocks B = {Bi} treated as video frames, where N is an even number, i ∈ Z+, 1 ≤ i ≤ ⌈I/N⌉×⌈J/N⌉, "Z+" denotes the positive integers and "⌈ ⌉" denotes rounding up;
(5) initializing the index of the image block set B to i = 1 and starting change detection with the 1st frame image block B1;
(6) performing change detection on the i-th frame image block Bi of the image block set B with the cascade classifier, and optimizing and updating the cascade classifier;
(7) incrementing i by 1 and performing change detection on the next frame image block with the optimized and updated cascade classifier;
(8) repeating steps (6) to (7) until i = ⌈I/N⌉×⌈J/N⌉, completing the change detection of the frame image block set B and obtaining the corresponding change detection result set C = {Ci};
(9) stitching the obtained change detection result set C of the frame image block set B, from left to right and from top to bottom, into the final change detection result map CM1, completing the change detection of the first difference image XL;
(10) processing the other difference image XD of step (2) according to steps (3) to (9) used for the first difference image XL; the change detection result map of the second difference image XD is denoted CM2;
(11) mapping the change detection result maps CM1 and CM2 of the two difference images into gray-level images A1 and A2, and fusing A1 and A2 by a principal-component-analysis-like method to obtain the fused difference image XF;
(12) clustering the fused difference image XF with the Kmeans clustering algorithm to generate the final change detection result map XCD, completing the detection of the change information of the remote sensing images.
Compared with the prior art, the invention has the following advantages:
1. The method abandons the traditional difference-image analysis procedure that takes the whole difference image as the processing object and classifies it pixel by pixel according to a fixed criterion; instead, the whole difference image is divided into image blocks analogous to video frames, and change detection is then performed frame by frame by the cascade classifier with each frame image block as the processing object.
2. The invention draws on the good robustness and accurate classification of supervised methods while overcoming the need of traditional supervised change detection methods to construct a large number of training samples: prior knowledge is used to detect changes in each frame image block one by one through an online learning strategy, and the sample library is continuously updated, so the performance of the classifier keeps improving, accurate classification is achieved, and the accuracy of change detection is raised.
3. The invention adopts a spatial-domain image fusion strategy that determines the weight adaptively, which avoids excessive manual intervention: the optimal weight for linear weighted fusion is found automatically, so the fused difference image reflects the change of ground-feature radiation energy more comprehensively and truthfully, improving the accuracy of change detection.
Compared with transform-domain fusion methods such as wavelet-transform fusion, the fusion method of the invention is simple and computationally inexpensive.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a sub-flowchart of the present invention for change detection of a frame image block and optimized updating of a cascade classifier;
FIG. 3 is a sub-flowchart of the fusion of grayscale images A1 and A2 in the present invention;
FIG. 4 is two SAR remote sensing images and a standard reference map of Berne area in Switzerland used in simulation of the present invention;
FIG. 5 is two difference images constructed from two SAR remote sensing images in Berne area during simulation of the present invention;
FIG. 6 is a diagram illustrating the results of detecting changes in the two difference images of FIG. 5 according to the present invention;
FIG. 7 is the fused difference image XF obtained by mapping the two detection results of FIG. 6 to gray images and fusing them according to the present invention;
FIG. 8 is a simulation result diagram of the change detection performed on the two SAR remote sensing images in FIG. 4 according to the present invention;
FIG. 9 is two optical remote sensing images and a standard reference map of the Sardinia area used in the simulation of the present invention;
FIG. 10 is two difference images constructed from two optical remote sensing images of the Sardinia area when simulated in accordance with the present invention;
FIG. 11 is a diagram illustrating the results of the change detection performed on the two difference images of FIG. 10 according to the present invention;
FIG. 12 is the fused difference image XF obtained by mapping the two detection results of FIG. 11 to gray images and fusing them according to the present invention;
Fig. 13 is a simulation result diagram of the change detection of the two optical remote sensing images in fig. 9 according to the present invention.
Detailed Description
The technical solution and effects of the present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, the implementation steps of the invention are as follows:
step 1, two remote sensing images are obtained.
The two remote sensing images are images of the same area acquired at different times that have undergone radiation correction and geometric registration; they are denoted X1 and X2 and have size I×J, where I is the number of rows and J the number of columns of the remote sensing images.
Step 2, construct two difference images XL and XD from the remote sensing images X1 and X2.
Common methods for constructing the difference image include the difference method, the ratio method, the mean-ratio method and the logarithmic-ratio method. The difference method compares the two remote sensing images pixel by pixel and takes the absolute value of the difference of the gray values at corresponding positions; the ratio method takes the ratio of the gray values at corresponding positions of the two images; the mean-ratio method first computes the mean gray value of a local area around each corresponding pixel position in the two images and then takes the ratio; the logarithmic-ratio method applies a logarithm to the gray value of each pixel of the difference image obtained by the ratio method.
Exploiting the fact that SAR remote sensing images and optical remote sensing images contain different types of noise, this embodiment constructs the two difference images XL and XD according to the type of the remote sensing images X1 and X2 (an illustrative sketch of both constructions follows the formulas below):
if remote sensing image X1And X2All are SAR remote sensing images, then difference image XLAnd XDAnd respectively selecting a logarithmic ratio method and a mean ratio method for construction, wherein the corresponding construction expressions are respectively as follows:
XL=|log X2-log X1|
XD = 1 - min(μ1/μ2, μ2/μ1);
where μ1 and μ2 are the local-area gray-value means of X1 and X2, respectively;
If the remote sensing images X1 and X2 are both optical remote sensing images, the difference images XL and XD are constructed with the difference method and the logarithmic-ratio method respectively, giving a difference image and a log-ratio image; the corresponding construction expressions are:
XL=|X1-X2|
XD=|log(1+X2)-log(1+X1)|。
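A minimal illustrative sketch of the four constructions above, written with NumPy: the local means μ1 and μ2 of the mean-ratio image are approximated with a 3×3 uniform filter and a small epsilon guards the logarithms; both choices are assumptions, since the text only speaks of local-area means and does not fix a window size.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sar_difference_images(x1, x2, eps=1e-6, win=3):
    """Log-ratio image XL and mean-ratio image XD for two SAR images.
    The 3x3 local window (win=3) is an assumption."""
    x1 = x1.astype(float)
    x2 = x2.astype(float)
    xl = np.abs(np.log(x2 + eps) - np.log(x1 + eps))        # XL = |log X2 - log X1|
    mu1 = uniform_filter(x1, size=win) + eps                 # local mean of X1
    mu2 = uniform_filter(x2, size=win) + eps                 # local mean of X2
    xd = 1.0 - np.minimum(mu1 / mu2, mu2 / mu1)              # XD = 1 - min(mu1/mu2, mu2/mu1)
    return xl, xd

def optical_difference_images(x1, x2):
    """Difference image XL and log-ratio image XD for two optical images."""
    x1 = x1.astype(float)
    x2 = x2.astype(float)
    xl = np.abs(x1 - x2)                                     # XL = |X1 - X2|
    xd = np.abs(np.log(1.0 + x2) - np.log(1.0 + x1))         # XD = |log(1+X2) - log(1+X1)|
    return xl, xd
```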
step 3, aiming at the first difference image XLA sample bank P0 and a single-pixel sample bank P1 based on 2 x 2 image blocks are constructed and the first stage of the cascade classifier, i.e. the threshold TH0 of the mean classifier, is initialized according to the sample bank P0.
3a) In the first difference image XLArtificially marking a2 × 2 image block as a changed pixel block, marking as a positive sample in a sample bank P0, and selecting a3 × 3 image block with each pixel of the 2 × 2 image block as a center and changing the image block into a column vector as a positive sample of a single-pixel sample bank P1, that is, 1 positive sample in the sample bank P0 corresponds to 4 positive samples in the sample bank P1;
3b) repeating the step 3a)100 times to obtain 100 positive samples in the sample bank P0 and 400 positive samples in the single-pixel sample bank P1;
3c) in the first difference image XLArtificially marking a2 × 2 image block as an unchanged pixel block, marking as a negative sample in a sample bank P0, and selecting a3 × 3 image block with each pixel of the 2 × 2 image block as a center and changing the image block into a column vector as a negative sample of a single-pixel sample bank P1, i.e. 1 negative sample in the sample bank P0 corresponds to 4 negative samples in the sample bank P1;
3d) repeating the step 3c)100 times to obtain 100 negative samples in the sample library P0 and 400 negative samples in the single-pixel sample library P1;
3e) the first stage of the cascade classifier is initialized according to 100 positive samples and 100 negative samples in the sample bank P0, i.e., the threshold TH0 of the mean classifier is (positive sample mean + negative sample mean)/4.
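The following is a minimal sketch, assuming the manually marked 2×2 blocks of steps 3a) and 3c) are supplied as lists of top-left coordinates (the marking itself remains manual); the function and variable names are illustrative only.

```python
import numpy as np

def build_sample_libraries(xl, pos_blocks, neg_blocks):
    """pos_blocks / neg_blocks: lists of (row, col) top-left corners of the
    manually marked 2x2 changed / unchanged blocks in the difference image xl.
    Returns P0 (2x2 block samples with labels), P1 (3x3 single-pixel samples
    flattened to vectors with labels) and the initial threshold TH0."""
    pad = np.pad(xl, 1, mode='edge')       # so every pixel has a full 3x3 neighborhood
    p0, p1 = [], []
    for blocks, label in ((pos_blocks, 1), (neg_blocks, 0)):
        for (r, c) in blocks:
            p0.append((xl[r:r+2, c:c+2].copy(), label))       # one 2x2 sample in P0
            for dr in range(2):                                # 4 single-pixel samples in P1
                for dc in range(2):
                    patch = pad[r+dr:r+dr+3, c+dc:c+dc+3]      # 3x3 window centered on pixel
                    p1.append((patch.reshape(-1), label))
    pos_mean = np.mean([b.mean() for b, l in p0 if l == 1])
    neg_mean = np.mean([b.mean() for b, l in p0 if l == 0])
    th0 = (pos_mean + neg_mean) / 4.0      # threshold of the first-stage mean classifier
    return p0, p1, th0
```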
Step 4, partition the first difference image XL, from left to right and from top to bottom, into a set of image blocks B = {Bi} treated as video frames, where each image block has size N×N, N is an even number, i ∈ Z+ and 1 ≤ i ≤ ⌈I/N⌉×⌈J/N⌉; "Z+" denotes the positive integers and "⌈ ⌉" denotes rounding up. A tiling sketch is given below.
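A small tiling sketch of step 4, assuming that incomplete edge blocks are zero-padded to the full N×N size; the text only states that the image is split from left to right and top to bottom, so the padding is an assumption.

```python
import numpy as np

def split_into_frames(img, n):
    """Split an I x J image into ceil(I/n) * ceil(J/n) blocks of size n x n,
    ordered left to right, top to bottom; edge blocks are zero-padded."""
    i_dim, j_dim = img.shape
    rows = -(-i_dim // n)                      # ceil(I/n)
    cols = -(-j_dim // n)                      # ceil(J/n)
    padded = np.zeros((rows * n, cols * n), dtype=img.dtype)
    padded[:i_dim, :j_dim] = img
    return [padded[r*n:(r+1)*n, c*n:(c+1)*n]   # frame image blocks B_i
            for r in range(rows) for c in range(cols)]
```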
Step 5, initialize the index of the image block set B to i = 1 and perform change detection on the 1st frame image block B1.
Step 6, perform change detection on the i-th frame image block Bi of the image block set B with the cascade classifier and optimize and update the cascade classifier.
Referring to fig. 2, the specific implementation steps of this step are as follows:
6a) Remove most non-target pixel points, i.e. unchanged pixel points, with the first-stage mean classifier of the cascade classifier to complete the initial classification of the current frame:
6a1) Scan the i-th frame image block Bi with an overlapping 2×2 sliding window;
6a2) Compute the mean of the 2×2 pixel block inside the sliding window and compare it with the threshold TH0 of the first-stage mean classifier of the cascade classifier; if the mean is smaller than TH0, the pixel points currently inside the 2×2 window contain no target pixel point, otherwise they do;
6a3) For each pixel point, count the probability with which it is judged to contain a target pixel point over the windows covering it; if this probability is less than 0.5, declare the current pixel point a non-target, i.e. unchanged, pixel point, otherwise pass it on to step 6d). A sketch of this coarse classification is given below.
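A sketch of the coarse classification of step 6a), assuming every pixel is voted on by all overlapping 2×2 windows that cover it and the 0.5 criterion above decides whether it stays a candidate; the implementation details beyond that are illustrative.

```python
import numpy as np

def coarse_classify(block, th0):
    """Step 6a): overlapping 2x2 windows vote on every pixel.
    Returns a boolean map that is True where the pixel remains a
    candidate target (changed) pixel."""
    h, w = block.shape
    votes = np.zeros((h, w))       # windows saying 'contains target'
    total = np.zeros((h, w))       # windows covering the pixel
    for r in range(h - 1):
        for c in range(w - 1):
            win = block[r:r+2, c:c+2]
            hit = win.mean() >= th0            # mean-classifier decision
            votes[r:r+2, c:c+2] += hit
            total[r:r+2, c:c+2] += 1
    prob = votes / np.maximum(total, 1)
    return prob >= 0.5             # probability < 0.5 -> unchanged pixel
```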
6b) Classify the i-th frame image block Bi at the level of 2×2 pixel blocks with the first-stage mean classifier of the cascade classifier to construct a first mask MAP0:
6b1) Scan the i-th frame image block Bi with a non-overlapping 2×2 sliding window;
6b2) Compute the mean of the 2×2 pixel block inside the sliding window and compare it with the threshold TH0 of the first-stage mean classifier of the cascade classifier; if the mean is smaller than TH0, set the gray values of the pixel points inside the window to 0, otherwise set them to 1;
6b3) Repeat steps 6b1) to 6b2) to complete the scan of the i-th frame image block Bi and obtain the first mask MAP0; a sketch is given below.
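A sketch of step 6b); the block side N is even, so the non-overlapping 2×2 scan tiles the frame exactly.

```python
import numpy as np

def build_map0(block, th0):
    """Step 6b): non-overlapping 2x2 windows thresholded by the mean
    classifier; a window's pixels are set to 1 if its mean >= TH0, else 0."""
    h, w = block.shape
    map0 = np.zeros((h, w), dtype=np.uint8)
    for r in range(0, h, 2):
        for c in range(0, w, 2):
            if block[r:r+2, c:c+2].mean() >= th0:
                map0[r:r+2, c:c+2] = 1
    return map0
```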
6c) Find the isolated 2×2 pixel blocks in the first mask MAP0, map them back to the original i-th frame image block Bi, and add these pixel blocks to the sample library P0; update the threshold of the first-stage mean classifier of the cascade classifier to TH1 = (positive-sample mean + negative-sample mean)/4; map the isolated 2×2 pixel blocks to single-pixel samples and add them to the sample library P1, obtaining the updated sample library P1';
6d) Train the second-stage support vector machine (SVM) classifier of the cascade classifier with the updated sample library P1' and "finely classify" the remaining pixels to be classified that passed the mean classifier of step 6a), i.e. classify them one by one with the trained SVM classifier and further exclude non-target pixels, forming a second mask MAP1 (a training sketch is given below);
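A sketch of the second-stage SVM of step 6d) using scikit-learn, where each remaining candidate pixel is classified from its flattened 3×3 neighborhood; the RBF kernel and its parameters are assumptions, as the text only specifies a support vector machine.

```python
import numpy as np
from sklearn.svm import SVC

def fine_classify(block, candidates, p1_samples):
    """Step 6d): train the second-stage SVM on the single-pixel library P1
    (3x3 patches flattened to 9-dim vectors with 0/1 labels) and keep only
    the candidate pixels the SVM labels as changed, forming MAP1."""
    feats = np.array([f for f, _ in p1_samples])
    labels = np.array([l for _, l in p1_samples])
    svm = SVC(kernel='rbf', gamma='scale')     # kernel choice is an assumption
    svm.fit(feats, labels)
    pad = np.pad(block, 1, mode='edge')
    map1 = np.zeros_like(block, dtype=np.uint8)
    rows, cols = np.nonzero(candidates)        # pixels kept by the coarse stage
    for r, c in zip(rows, cols):
        patch = pad[r:r+3, c:c+3].reshape(1, -1)
        map1[r, c] = svm.predict(patch)[0]
    return map1
```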
6e) Based on the prior knowledge that the number of pixels of each connected pixel block in the second mask MAP1 should not be less than K, apply a region opening operation to the second mask MAP1, i.e. filter out connected pixel blocks with fewer than K = 5 pixels, obtaining the change detection result Ci of the i-th frame image block Bi; map the filtered-out connected pixel blocks back to the corresponding positions of the frame image block Bi, add those pixel points to the sample library P1', and update it again to obtain the twice-updated sample library P1''. A region-opening sketch is given below.
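A sketch of the region opening of step 6e) with scipy.ndimage; 8-connectivity is assumed, since the text does not specify the connectivity of the connected pixel blocks.

```python
import numpy as np
from scipy import ndimage

def region_opening(map1, k=5):
    """Step 6e): drop connected pixel blocks of MAP1 with fewer than k pixels.
    Returns the cleaned result C_i and the mask of removed pixels, which the
    method feeds back into the sample library."""
    structure = np.ones((3, 3), dtype=int)          # 8-connectivity (assumption)
    labeled, num = ndimage.label(map1, structure=structure)
    sizes = np.bincount(labeled.ravel())
    small = np.isin(labeled, np.nonzero(sizes < k)[0]) & (labeled > 0)
    cleaned = map1.copy()
    cleaned[small] = 0
    return cleaned, small
```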
Step 7, increment i by 1 and perform change detection on the current frame image block with the updated mean-classifier threshold TH1 and the twice-updated sample library P1'' obtained after completing the change detection of the previous frame image block.
Step 8, repeat steps 6 to 7, continuously updating the sample libraries and optimizing the classification performance of the cascade classifier, until i = ⌈I/N⌉×⌈J/N⌉ and the change detection of the frame image block set B is complete, obtaining the corresponding change detection result set C = {Ci}.
Step 9, stitch the change detection result set C of the frame image block set B obtained above, from left to right and from top to bottom, into the final change detection result map CM1, completing the change detection of the first difference image XL.
Step 10, process the other difference image XD of step 2 according to steps 3 to 9 used for the first difference image XL; the change detection result map of the second difference image XD is denoted CM2.
Step 11, map the change detection result maps CM1 and CM2 of the two difference images into gray-level images A1 and A2, and fuse A1 and A2 by a principal-component-analysis-like method to obtain the fused difference image XF.
Referring to fig. 3, the specific implementation of this step is as follows:
11a) Set the pixel values at the changed pixel positions of the detection result map CM1 of the first difference image XL to the gray values of the corresponding positions of the original difference image XL, and set the pixel values at the unchanged pixel positions to 0, obtaining the gray image Y1 mapped from the detection result map CM1 of the first difference image XL;
11b) Set the pixel values at the changed pixel positions of the detection result map CM2 of the second difference image XD to the gray values of the corresponding positions of the original difference image XD, and set the pixel values at the unchanged pixel positions to 0, obtaining the gray image Y2 mapped from the detection result map CM2 of the second difference image XD;
11c) Compute the mean values of the gray images Y1 and Y2; record the image with the smaller mean as the first mean gray image A1 and the image with the larger mean as the second mean gray image A2;
11d) Reshape the first mean gray image A1 and the second mean gray image A2 into column vectors, in a row-first or column-first manner, and compute their covariance matrix;
11e) Compute the eigenvalues of the covariance matrix and determine the eigenvector (x, y)^T corresponding to the first principal component, where "T" denotes the transpose operator;
11f) Compare the detection result map CM1 of the first difference image XL with the standard change detection reference map and compute the sum of the number of false-alarm pixels and the number of missed-detection pixels, denoted E1;
11g) Compare the detection result map CM2 of the second difference image XD with the standard change detection reference map and compute the sum of the number of false-alarm pixels and the number of missed-detection pixels, denoted E2;
11h) Determine the weight of the first mean gray image A1 from E1 and E2:
w = y/(x+y) if E1 - E2 ≤ 0,  and  w = x/(x+y) if E1 - E2 ≥ 0,
where x is the first element and y the second element of the eigenvector (x, y)^T of step 11e);
11i) Compute the fused difference image XF = w×A1 + (1-w)×A2. A fusion sketch is given below.
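A sketch of steps 11c) to 11i), assuming the mapped gray images Y1 and Y2 and the error counts E1 and E2 are already available; the absolute values of the eigenvector components are taken because the sign of an eigenvector is arbitrary, which is an implementation assumption.

```python
import numpy as np

def pca_like_fusion(y1, y2, e1, e2):
    """Steps 11c)-11i): order the mapped gray images by mean value, take the
    eigenvector (x, y)^T of the first principal component of their covariance
    matrix, choose the weight w from E1, E2 and return w*A1 + (1-w)*A2."""
    a1, a2 = (y1, y2) if y1.mean() <= y2.mean() else (y2, y1)   # smaller mean first
    data = np.vstack([a1.ravel(), a2.ravel()])                  # 2 x (I*J)
    cov = np.cov(data)                                          # 2 x 2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    x, y = vecs[:, np.argmax(vals)]                             # first principal component
    x, y = abs(x), abs(y)                                       # eigenvector sign is arbitrary
    w = y / (x + y) if (e1 - e2) <= 0 else x / (x + y)
    return w * a1 + (1.0 - w) * a2                              # fused difference image XF
```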
Step 12, cluster the fused difference image XF with the Kmeans clustering algorithm to generate the final change detection result map XCD, completing the detection of the change information of the remote sensing images; a clustering sketch is given below.
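A sketch of step 12 with scikit-learn's KMeans, clustering the fused difference image into two classes; taking the cluster with the larger center as the changed class is an assumption based on larger difference values indicating change.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_change_map(xf):
    """Step 12: two-class Kmeans on the fused difference image XF.
    Returns the binary change detection result map XCD (1 = changed)."""
    pixels = xf.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
    changed_label = int(np.argmax(km.cluster_centers_.ravel()))  # brighter cluster = changed
    return (km.labels_ == changed_label).astype(np.uint8).reshape(xf.shape)
```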
The effect of the present invention can be further illustrated by the following simulation results:
1. Experimental conditions
The experimental environment is as follows: Intel(R) Core(TM) i5-3470 CPU at 3.20 GHz, 8 GB memory, Windows 7 operating system, MATLAB R2013b software platform.
The first data set is the SAR remote sensing image data set of the Berne area in Switzerland shown in FIG. 4; the images are ERS-2 images of size 301×301, acquired in April 1999 (FIG. 4(a)) and May 1999 (FIG. 4(b)); FIG. 4(c) is the standard reference change map, which contains 1155 changed pixels and 89446 unchanged pixels.
The second data set consists of band-4 spectral images of the Landsat-5 TM satellite over the Sardinia area of Italy, of size 300×412, acquired in September 1995 (FIG. 9(a)) and July 1996 (FIG. 9(b)); FIG. 9(c) is the standard reference change map, which contains 7626 changed pixels and 115974 unchanged pixels.
2. Evaluation indices
Quantitative analysis of the change detection results of the experimental simulations can be performed when a standard change detection reference map is available; the main evaluation indices are as follows:
False detection number FP: the number of pixels that are unchanged in the reference map but detected as changed in the experimental result map, obtained by counting the pixels of the unchanged area in the result map and comparing them with the unchanged area of the reference map;
Missed detection number FN: the number of pixels that are changed in the reference map but detected as unchanged in the experimental result map, obtained by counting the pixels of the changed area in the result map and comparing them with the changed area of the reference map;
total number of errors detected OE: the sum of the missed detection number and the false detection number;
Correct classification probability PCC: PCC = (TP + TN)/N, where TP and TN are the numbers of pixels correctly detected as changed and as unchanged, respectively;
Kappa coefficient, measuring the consistency between the detection result map and the reference map: Kappa = (PCC - PRE)/(1 - PRE), where PRE = ((TP + FP)×Nc + (FN + TN)×Nu)/N^2, N denotes the total number of pixels, and Nc and Nu denote the actual numbers of changed and unchanged pixels, respectively. A sketch of these indices is given below.
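A sketch of the evaluation indices, assuming binary maps in which 1 marks a changed pixel for both the result and the reference.

```python
import numpy as np

def change_detection_metrics(result, reference):
    """FP, FN, OE, PCC and Kappa between a binary result map and a binary
    reference map (1 = changed, 0 = unchanged)."""
    result = result.astype(bool)
    reference = reference.astype(bool)
    n = result.size
    tp = np.sum(result & reference)        # changed, correctly detected
    tn = np.sum(~result & ~reference)      # unchanged, correctly detected
    fp = np.sum(result & ~reference)       # false detections
    fn = np.sum(~result & reference)       # missed detections
    oe = fp + fn                           # total error number
    pcc = (tp + tn) / n
    nc, nu = np.sum(reference), np.sum(~reference)
    pre = ((tp + fp) * nc + (fn + tn) * nu) / float(n) ** 2
    kappa = (pcc - pre) / (1.0 - pre)
    return {'FP': int(fp), 'FN': int(fn), 'OE': int(oe),
            'PCC': float(pcc), 'Kappa': float(kappa)}
```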
3. Experimental contents and results
For the SAR remote sensing images, the method of the invention is compared with the existing method of Gong et al. from the 2012 article "Change Detection in Synthetic Aperture Radar Images based on Image Fusion and Fuzzy Clustering"; this comparison method is denoted the Wavelet fusion_RFLICM method.
For the optical remote sensing images, the method of the invention and the change detection methods of two existing patents are used to perform change detection on the second data set. The two comparison methods are: the remote sensing image change detection method based on image fusion proposed in the Xidian University patent application "Remote sensing image change detection method based on image fusion" (application No. 201210414782.4, publication No. 102968790A), denoted the Fuse_FCM fusion method; and the optical remote sensing image change detection method based on image fusion proposed in the Xidian University patent application "Optical remote sensing image change detection based on image fusion" (application No. 201210234076.1, publication No. 102750705A), denoted the Fuse_FLICM fusion method.
Experiment 1: the method of the invention is compared with the Wavelet fusion_RFLICM method.
Change detection was performed on the first data set with the method of the invention; the intermediate experimental results and the final detection results are shown in FIGS. 5 to 8, where FIG. 5(a) is the first difference image constructed by the invention and FIG. 5(b) the second difference image constructed by the invention; FIG. 6(a) is the change detection result obtained from FIG. 5(a) with the method of the invention and FIG. 6(b) the change detection result obtained from FIG. 5(b); FIG. 7 is the fused difference image obtained with the principal-component-analysis-like method; and FIG. 8 is the final change detection result map obtained by the invention.
The comparison of the performance indices of the change detection results of the invention and the Wavelet fusion_RFLICM method is shown in Table 1.
TABLE 1 Comparison of performance indices of the change detection results of the invention and the Wavelet fusion_RFLICM method
Method FP FN OE PCC Kappa
Wavelet fusion_RFLICM 133 159 292 99.68% 0.871
The invention 120 154 274 99.70% 0.878
As can be seen from Table 1, the method of the invention outperforms the traditional unsupervised clustering method and shows good robustness; image fusion with the principal-component-analysis-like method further improves the accuracy of the change detection. Among the final change detection results, the result map of the invention has the smallest total error number OE, the highest accuracy PCC and the highest Kappa coefficient, i.e. the best performance.
Experiment 2: the method of the invention is compared with the existing Fuse_FCM fusion and Fuse_FLICM fusion methods.
Change detection was performed on the second data set with the method of the invention; the intermediate experimental results and the final detection results are shown in FIGS. 10 to 13, where FIG. 10(a) is the first difference image constructed by the invention and FIG. 10(b) the second difference image constructed by the invention; FIG. 11(a) is the change detection result obtained from FIG. 10(a) and FIG. 11(b) the change detection result obtained from FIG. 10(b); FIG. 12 is the fused difference image obtained with the principal-component-analysis-like method; and FIG. 13 is the final change detection result map of the invention.
The comparison of the performance indices of the detection results of the invention and the two existing change detection methods is shown in Table 2.
TABLE 2 Comparison of performance indices of the change detection results of the invention and the two patented methods
Method FP FN OE PCC Kappa
Fuse _ FCM fusion method 916 1187 2103 98.30% 0.8506
Fuse _ FLICM fusion method 1370 586 1956 98.42% 0.8696
The invention 907 943 1850 98.50% 0.8704
As can be seen from Table 2, compared with the traditional unsupervised clustering methods the method of the invention is more robust, its FP and FN are relatively balanced, and its performance is relatively stable; the principal-component-analysis-like image fusion further improves the accuracy of the change detection. Among the final detection results, the result map of the invention has the smallest total error number OE, the highest accuracy PCC and the highest Kappa coefficient, showing the best performance.
In conclusion, the method of the invention performs better in both subjective effect and objective indices; compared with the comparison methods it has the smallest total error number and improves the accuracy of the change detection result.

Claims (4)

1. A remote sensing image change detection method based on online learning, characterized in that it comprises the following steps:
(1) obtaining two remote sensing images X1 and X2 of size I×J that have undergone radiation correction and geometric registration, where I is the number of rows and J the number of columns of the remote sensing images;
(2) constructing two difference images XL and XD from the two remote sensing images X1 and X2;
(3) for the first difference image XL, constructing a sample library P0 based on 2×2 image blocks and a single-pixel sample library P1, and initializing the first stage of the cascade classifier, namely the threshold TH0 of the mean classifier, according to the sample library P0;
(4) partitioning the first difference image XL, from left to right and from top to bottom, into a set of N×N image blocks B = {Bi} treated as video frames, where N is an even number, i ∈ Z+, 1 ≤ i ≤ ⌈I/N⌉×⌈J/N⌉, "Z+" denotes the positive integers and "⌈ ⌉" denotes rounding up;
(5) initializing the index of the image block set B to i = 1 and starting change detection with the 1st frame image block B1;
(6) performing change detection on the i-th frame image block Bi of the image block set B with the cascade classifier, and optimizing and updating the cascade classifier;
(7) incrementing i by 1 and performing change detection on the next frame image block with the optimized and updated cascade classifier;
(8) repeating steps (6) to (7) until i = ⌈I/N⌉×⌈J/N⌉, completing the change detection of the frame image block set B and obtaining the corresponding change detection result set C = {Ci};
(9) stitching the obtained change detection result set C of the frame image block set B, from left to right and from top to bottom, into the final change detection result map CM1, completing the change detection of the first difference image XL;
(10) processing the other difference image XD of step (2) according to steps (3) to (9) used for the first difference image XL; the change detection result map of the second difference image XD is denoted CM2;
(11) mapping the change detection result maps CM1 and CM2 of the two difference images into gray-level images A1 and A2, and fusing A1 and A2 by a principal-component-analysis-like method to obtain the fused difference image XF;
(12) clustering the fused difference image XF with the Kmeans clustering algorithm to generate the final change detection result map XCD, completing the detection of the change information of the remote sensing images.
2. The method according to claim 1, wherein step (2) constructs the two difference images XL and XD from the two remote sensing images X1 and X2 according to the type of the remote sensing images:
if the remote sensing images X1 and X2 are both SAR remote sensing images, the construction formulas of the difference images XL and XD are respectively:
XL=|logX2-logX1|
XD = 1 - min(μ1/μ2, μ2/μ1);
where μ1 and μ2 are the local-area gray-value means of X1 and X2, respectively;
if the remote sensing images X1 and X2 are both optical remote sensing images, the construction formulas of the difference images XL and XD are respectively:
XL=|X1-X2|
XD=|log(1+X2)-log(1+X1)|。
3. The method according to claim 1, wherein step (6) performs change detection on the i-th frame image block Bi of the image block set B with the cascade classifier and optimizes and updates the cascade classifier through the following steps:
6a) scan the i-th frame image block Bi with the first-stage mean classifier of the cascade classifier using an overlapping 2×2 sliding window, "coarsely classify" the current frame, remove most non-target pixel points, i.e. unchanged pixel points, and complete the initial classification of the current frame;
6b) scan the i-th frame image block Bi with the first-stage mean classifier of the cascade classifier using a non-overlapping 2×2 sliding window and classify the pixels at the level of 2×2 pixel blocks to form a first mask MAP0;
6c) find the isolated 2×2 pixel blocks in the first mask MAP0, map them back to the original i-th frame image block Bi and add these pixel blocks to the sample library P0; update the threshold of the first-stage mean classifier of the cascade classifier to TH1 = (positive-sample mean + negative-sample mean)/4; and map each of these pixel blocks to single-pixel samples and add them to the sample library P1;
6d) train the second-stage support vector machine (SVM) classifier of the cascade classifier with the updated sample library P1, "finely classify" the remaining pixel points to be classified that passed the mean classifier of step 6a), and further exclude non-target pixel points to form a second mask MAP1;
6e) based on the prior knowledge that the number of pixels of each connected pixel block in the second mask MAP1 should not be less than K, apply a region opening operation to the second mask MAP1, filtering out connected pixel blocks with fewer than K pixels, to obtain the change detection result Ci of the i-th frame image block Bi; map the filtered-out connected pixel blocks back to the corresponding positions of the frame image block Bi, add those pixel points to the sample library P1 and update the sample library P1 again, where K = 5.
4. The method according to claim 1, wherein step (11) maps the change detection result maps CM1 and CM2 of the two difference images into gray-level images A1 and A2 and fuses A1 and A2 by a principal-component-analysis-like method to obtain the fused difference image XF through the following steps:
11a) set the pixel values at the changed pixel positions of the detection result map CM1 of the first difference image XL to the gray values of the corresponding positions of the original difference image XL, and set the pixel values at the unchanged pixel positions to 0, obtaining the gray image Y1 mapped from the detection result map CM1;
11b) set the pixel values at the changed pixel positions of the detection result map CM2 of the second difference image XD to the gray values of the corresponding positions of the original difference image XD, and set the pixel values at the unchanged pixel positions to 0, obtaining the gray image Y2 mapped from the detection result map CM2;
11c) compute the mean values of the gray images Y1 and Y2; record the image with the smaller mean as the first mean gray image A1 and the image with the larger mean as the second mean gray image A2;
11d) reshape the first mean gray image A1 and the second mean gray image A2 into column vectors, in a row-first or column-first manner, and compute their covariance matrix;
11e) compute the eigenvalues of the covariance matrix and determine the eigenvector (x, y)^T corresponding to the first principal component, where "T" denotes the transpose operator;
11f) compare the detection result map CM1 of the first difference image XL with the standard change detection reference map and compute the sum of the number of false-alarm pixels and the number of missed-detection pixels, denoted E1;
11g) compare the detection result map CM2 of the second difference image XD with the standard change detection reference map and compute the sum of the number of false-alarm pixels and the number of missed-detection pixels, denoted E2;
11h) determine the weight of the first mean gray image A1 from E1 and E2:
w = y/(x+y) if E1 - E2 ≤ 0,  and  w = x/(x+y) if E1 - E2 ≥ 0,
where x is the first element and y the second element of the eigenvector (x, y)^T of step 11e);
11i) compute the fused difference image XF = w×A1 + (1-w)×A2.
CN201510112839.9A 2015-03-15 2015-03-15 Remote sensing image variation detection method based on on-line study Expired - Fee Related CN104680542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510112839.9A CN104680542B (en) 2015-03-15 2015-03-15 Remote sensing image variation detection method based on on-line study


Publications (2)

Publication Number Publication Date
CN104680542A true CN104680542A (en) 2015-06-03
CN104680542B CN104680542B (en) 2017-10-24

Family

ID=53315535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510112839.9A Expired - Fee Related CN104680542B (en) 2015-03-15 2015-03-15 Remote sensing image variation detection method based on on-line study

Country Status (1)

Country Link
CN (1) CN104680542B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080240500A1 (en) * 2007-04-02 2008-10-02 Industrial Technology Research Institute Image processing methods
CN101894125A (en) * 2010-05-13 2010-11-24 复旦大学 Content-based video classification method
CN102789578A (en) * 2012-07-17 2012-11-21 北京市遥感信息研究所 Infrared remote sensing image change detection method based on multi-source target characteristic support
CN103500450A (en) * 2013-09-30 2014-01-08 河海大学 Multi-spectrum remote sensing image change detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAOGUO GONG et al.: "Change Detection in Synthetic Aperture Radar Images based on Image Fusion and Fuzzy Clustering", IEEE TRANSACTIONS ON IMAGE PROCESSING *
XIN FANGFANG et al.: "SAR Image Change Detection Based on a Wavelet-Domain Fisher Classifier", JOURNAL OF INFRARED AND MILLIMETER WAVES *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105225227B (en) * 2015-09-07 2018-03-30 中国测绘科学研究院 The method and system of remote sensing image change detection
CN105225227A (en) * 2015-09-07 2016-01-06 中国测绘科学研究院 The method and system that remote sensing image change detects
CN106056577A (en) * 2016-05-19 2016-10-26 西安电子科技大学 Hybrid cascaded SAR image change detection method based on MDS-SRM
CN106056577B (en) * 2016-05-19 2019-02-15 西安电子科技大学 SAR image change detection based on MDS-SRM Mixed cascading
CN106203521A (en) * 2016-07-15 2016-12-07 西安电子科技大学 Based on disparity map from the SAR image change detection of step study
CN106203521B (en) * 2016-07-15 2019-03-26 西安电子科技大学 The SAR image change detection learnt based on disparity map from step
CN107248172A (en) * 2016-09-27 2017-10-13 中国交通通信信息中心 A kind of remote sensing image variation detection method based on CVA and samples selection
CN107368781B (en) * 2017-06-09 2019-08-20 陕西师范大学 Synthetic Aperture Radar images change detecting method based on Subspace partition
CN107368781A (en) * 2017-06-09 2017-11-21 陕西师范大学 Synthetic Aperture Radar images change detecting method based on Subspace partition
CN108053409A (en) * 2017-12-11 2018-05-18 中南大学 Automatic construction method and system for remote sensing image segmentation reference library
CN108053409B (en) * 2017-12-11 2022-05-13 中南大学 Automatic construction method and system for remote sensing image segmentation reference library
CN108776772A (en) * 2018-05-02 2018-11-09 北京佳格天地科技有限公司 Across the time building variation detection modeling method of one kind and detection device, method and storage medium
CN108776772B (en) * 2018-05-02 2022-02-08 北京佳格天地科技有限公司 Cross-time building change detection modeling method, detection device, method and storage medium
CN109255388B (en) * 2018-09-28 2021-12-31 西北工业大学 Unsupervised heterogeneous remote sensing image change detection method
CN109255388A (en) * 2018-09-28 2019-01-22 西北工业大学 A kind of unsupervised heterogeneous method for detecting change of remote sensing image
CN111091054A (en) * 2019-11-13 2020-05-01 广东国地规划科技股份有限公司 Method, system and storage medium for monitoring land type change
CN111091054B (en) * 2019-11-13 2020-11-10 广东国地规划科技股份有限公司 Method, system and device for monitoring land type change and storage medium
CN111539296B (en) * 2020-04-17 2022-09-23 河海大学常州校区 Method and system for identifying illegal building based on remote sensing image change detection
CN111539296A (en) * 2020-04-17 2020-08-14 河海大学常州校区 Method and system for identifying illegal building based on remote sensing image change detection
CN113033510B (en) * 2021-05-21 2021-10-15 浙江大华技术股份有限公司 Training and detecting method, device and storage medium for image change detection model
CN113033510A (en) * 2021-05-21 2021-06-25 浙江大华技术股份有限公司 Training and detecting method, device and storage medium for image change detection model
CN113538536A (en) * 2021-07-21 2021-10-22 中国人民解放军国防科技大学 SAR image information-assisted remote sensing optical image dense cloud detection method and system
CN113538536B (en) * 2021-07-21 2022-06-07 中国人民解放军国防科技大学 SAR image information-assisted remote sensing optical image dense cloud detection method and system
CN113936217A (en) * 2021-10-25 2022-01-14 华中师范大学 Priori semantic knowledge guided high-resolution remote sensing image weakly supervised building change detection method
CN113936217B (en) * 2021-10-25 2024-04-30 华中师范大学 Priori semantic knowledge guided high-resolution remote sensing image weak supervision building change detection method
CN114973028A (en) * 2022-05-17 2022-08-30 中国电子科技集团公司第十研究所 Aerial video image real-time change detection method and system
CN114973028B (en) * 2022-05-17 2023-02-03 中国电子科技集团公司第十研究所 Aerial video image real-time change detection method and system
CN116402693A (en) * 2023-06-08 2023-07-07 青岛瑞源工程集团有限公司 Municipal engineering image processing method and device based on remote sensing technology
CN116402693B (en) * 2023-06-08 2023-08-15 青岛瑞源工程集团有限公司 Municipal engineering image processing method and device based on remote sensing technology

Also Published As

Publication number Publication date
CN104680542B (en) 2017-10-24

Similar Documents

Publication Publication Date Title
CN104680542B (en) Remote sensing image variation detection method based on on-line study
US10818000B2 (en) Iterative defect filtering process
CN110246112B (en) Laser scanning SLAM indoor three-dimensional point cloud quality evaluation method based on deep learning
CN110287932B (en) Road blocking information extraction method based on deep learning image semantic segmentation
CN103810699B (en) SAR (synthetic aperture radar) image change detection method based on non-supervision depth nerve network
CN108875600A (en) A kind of information of vehicles detection and tracking method, apparatus and computer storage medium based on YOLO
CN110348437B (en) Target detection method based on weak supervised learning and occlusion perception
CN109671071B (en) Underground pipeline defect positioning and grade judging method based on deep learning
CN111461212A (en) Compression method for point cloud target detection model
CN112084869A (en) Compact quadrilateral representation-based building target detection method
CN112819821B (en) Cell nucleus image detection method
CN109002792B (en) SAR image change detection method based on layered multi-model metric learning
CN108492298A (en) Based on the multispectral image change detecting method for generating confrontation network
CN104778717A (en) SAR image change detection method based on oriented difference chart
CN116469020A (en) Unmanned aerial vehicle image target detection method based on multiscale and Gaussian Wasserstein distance
CN110298410A (en) Weak target detection method and device in soft image based on deep learning
CN118097755A (en) Intelligent face identity recognition method based on YOLO network
CN115439654A (en) Method and system for finely dividing weakly supervised farmland plots under dynamic constraint
CN117611879A (en) Defect detection method, device, equipment and computer readable medium
CN114529552A (en) Remote sensing image building segmentation method based on geometric contour vertex prediction
CN108154199B (en) High-precision rapid single-class target detection method based on deep learning
CN113988222A (en) Forest fire detection and identification method based on fast-RCNN
CN112348750B (en) SAR image change detection method based on threshold fusion and neighborhood voting
CN113591608A (en) High-resolution remote sensing image impervious surface extraction method based on deep learning
Yan et al. The research of building earthquake damage object-oriented change detection based on ensemble classifier with remote sensing image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171024

CF01 Termination of patent right due to non-payment of annual fee