CN107256409B - High-resolution SAR image change detection method based on SAE and significance detection - Google Patents

High-resolution SAR image change detection method based on SAE and significance detection

Info

Publication number
CN107256409B
Authority
CN
China
Prior art keywords
layer
training data
data set
network
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710365433.0A
Other languages
Chinese (zh)
Other versions
CN107256409A (en)
Inventor
焦李成
屈嵘
孟繁荣
张丹
杨淑媛
侯彪
马文萍
刘芳
尚荣华
张向荣
唐旭
马晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710365433.0A priority Critical patent/CN107256409B/en
Publication of CN107256409A publication Critical patent/CN107256409A/en
Application granted granted Critical
Publication of CN107256409B publication Critical patent/CN107256409B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting


Abstract

A high-resolution SAR image change detection method based on SAE and saliency detection extracts image blocks of different sizes from two registered SAR images of the same region at different time phases, to serve as a first training data set and a second training data set; normalizes the two training data sets to [0,1]; constructs two three-layer stack self-coding networks, determines the feature number of each layer, and randomly initializes the weights and biases; feeds the two normalized training data sets into the two networks and trains them to obtain the weights and biases of each layer; feeds the two images into the trained networks to obtain the features of the two images; computes the difference of the two images in the feature domain, segments the difference map with a threshold determined by a threshold method, and obtains a salient region from each network; combines the two salient regions into a final salient region; and obtains the final change detection result by applying a clustering algorithm to the final salient region. The invention effectively improves detection precision.

Description

High-resolution SAR image change detection method based on SAE and significance detection
Technical Field
The invention belongs to the field of combination of deep learning and remote sensing image processing, and particularly relates to a high-resolution SAR image change detection method based on SAE and significance detection.
Background
Change detection based on remote sensing images is a technique for qualitatively or quantitatively analyzing multi-temporal remote sensing images of the same geographic area acquired at different times, in order to determine the change characteristics and change processes of the earth surface. Change detection can capture local texture changes and radiometric changes in remote sensing images; in resource and environment monitoring it can therefore track land utilization, land cover, forest and vegetation coverage, urban expansion, and the like; in agricultural surveys it can update geospatial data in time and thereby reveal the growth conditions of crops in a given geographical area; and it also plays an important role in natural disaster monitoring and assessment, the military field, and so on.
Classic SAR image change detection methods fall into three categories: (1) change detection based on simple algebraic operations, classically the image difference method, the image ratio method, and the log-ratio method; (2) change detection based on image transformation, classically principal component analysis, change vector analysis, and correlation analysis; (3) change detection based on image classification.
In recent years, research on SAR image change detection has attracted much attention, and many excellent teams at home and abroad have conducted extensive and detailed research on it. Building on the classical change detection algorithms, deep learning has been studied in the SAR image change detection field: a deep neural network trained with some labeled data analyzes the images in the feature domain to obtain the change detection result.
Most existing SAR change detection algorithms address medium-to-low-resolution, pixel-level, single-polarization SAR images. As SAR technology matures, image quality and resolution steadily improve, data acquisition grows more capable and accurate, and images become ever easier to obtain, so the prospects of SAR change detection are increasingly broad.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a high-resolution SAR image change detection method based on SAE and significance detection.
In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:
1) extracting image blocks with different sizes from the two registered SAR images in the same region and different time phases to respectively serve as a first training data set D1 and a second training data set D2;
2) normalizing the first training data set D1 and the second training data set D2 to be between [0,1] respectively to obtain a first normalized training data set N1 and a second normalized training data set N2;
3) respectively constructing a first three-layer stack self-coding network and a second three-layer stack self-coding network, determining the feature number of each layer of the two networks, randomly initializing weight and bias, respectively sending a first normalized training data set N1 and a second normalized training data set N2 into the first three-layer stack self-coding network and the second three-layer stack self-coding network, and training by adopting a layer-by-layer greedy training method to obtain the weight and the bias of each layer; respectively sending the two images into a trained network to obtain the characteristics of the two images;
4) obtaining the difference of the two images in the characteristic domain, determining a threshold segmentation difference graph for the difference by a threshold method, and respectively obtaining a first significant region and a second significant region with different sizes;
5) and combining the first significant region and the second significant region to obtain a final significant region, and performing a clustering algorithm on the extracted final significant region to obtain a final change detection result.
Step 1) taking image blocks with the size of 41 × 41 from the two SAR images as a first training data set D1 in a sliding window mode, and taking image blocks with the size of 51 × 51 from the two SAR images as a second training data set D2.
The step of constructing the first three-layer stack self-coding network comprises the following steps:
3-1a, setting a first layer characteristic number 3362, a second layer characteristic number 1681 and a third layer characteristic number 840;
3-1b, defining the input of each layer in the self-coding network as input, the reconstruction result as output, and defining the loss function as:
loss = \frac{1}{2N}\sum_{i=1}^{N}\left\| output_i - input_i \right\|^2
selecting samples from a first normalized training data set N1 and sending the samples into a first-layer network, and training the network through a minimum loss function to obtain first-layer characteristics; sending the obtained first layer characteristics as input data into a second layer network, and obtaining second layer characteristics in the same way; sending the obtained second layer characteristics as input data into a third layer network to obtain the deepest layer characteristics;
and 3-1c, sending the first normalized training data set N1 into the self-coding network trained in the step 3-1b to obtain the image characteristics.
Step 4) obtaining a first feature difference map S1 by subtracting the two image features F1 and F2 obtained in the step 3-1 c:
S1=|F1|-|F2|
and determining a threshold segmentation difference map on the first feature difference map S1 by a threshold method to obtain a first significant region with the image block size of 41 × 41 in the two SAR images.
The step of constructing the second three-layer stack self-coding network comprises the following steps:
3-2a, setting a first layer characteristic number 5202, a second layer characteristic number 2601 and a third layer characteristic number 1300;
3-2b, defining the input of each layer in the self-coding network as input, the reconstruction result as output, and defining the loss function as a formula:
loss = \frac{1}{2N}\sum_{i=1}^{N}\left\| output_i - input_i \right\|^2
selecting samples from a second normalized training data set N2 and sending the samples into a first-layer network, and training the network through a minimum loss function to obtain first-layer characteristics; sending the obtained first layer characteristics as input data into a second layer network, and obtaining second layer characteristics in the same way; sending the obtained second layer characteristics as input data into a third layer network to obtain the deepest layer characteristics;
and 3-2c, sending the second normalized training data set N2 to the self-coding network trained in the step 3-2b to obtain the image characteristics.
Step 4) obtaining a second feature difference map S2 by subtracting the two image features F3 and F4 obtained in the step 3-2 c:
S2=|F3|-|F4|
and determining a threshold segmentation difference map on the second feature difference map S2 by a threshold method to obtain a second significant region with the image block size of 51 × 51 in the two SAR images.
And 3) respectively selecting 20% of samples from the first normalized training data set N1 and the second normalized training data set N2 to be sent into the first three-layer stack self-coding network and the second three-layer stack self-coding network.
The normalization in the step 2) adopts a linear scaling method or a 0-mean value normalization method.
When a linear scaling method is adopted, a first training data set maximum value max (D1) of a first training data set D1 and a second training data set maximum value max (D2) of a second training data set D2 are firstly obtained; each element in the first training data set D1 and the second training data set D2 is then divided by the first training data set maximum value max (D1) and the second training data set maximum value max (D2), respectively, resulting in a first normalized training data set N1 and a second normalized training data set N2.
The step 5) takes the union of the first salient region and the second salient region as a final salient region R;
5a) initializing two clustering centers V1 and V2, and initializing a membership matrix randomly;
5b) calculating the distance from the ith sample to the jth cluster center:
d_{ij} = \left\| x_i - v_j \right\|
5c) updating the membership degree of each sample according to a membership degree formula:
\mu_j(x_i) = \frac{\left(1/\left\|x_i - v_j\right\|^2\right)^{1/(m-1)}}{\sum_{k=1}^{c}\left(1/\left\|x_i - v_k\right\|^2\right)^{1/(m-1)}}
wherein \mu_j(x_i) is the fuzzy membership of the ith sample x_i to the jth class, c is the number of clusters, and m > 1 is the fuzzification exponent;
5d) and updating the clustering center through a formula according to the membership degree of each sample:
v_j = \frac{\sum_{i=1}^{N}\left[\mu_j(x_i)\right]^m x_i}{\sum_{i=1}^{N}\left[\mu_j(x_i)\right]^m}
wherein v_j is the cluster center of the jth class and N is the number of samples;
5e) and calculating the sum of the squares of the errors of the samples and the class centers of the samples according to a formula:
J = \sum_{j=1}^{c}\sum_{i=1}^{N}\left[\mu_j(x_i)\right]^m \left\|x_i - v_j\right\|^2
when the cluster centers no longer change or the sum of squared errors no longer decreases, the optimal clustering result is reached and two classes are obtained; the clustering result is divided into a changed class and an unchanged class according to the feature difference (the class with the smaller difference being the unchanged class), giving the final change detection result.
Compared with the prior art, the invention has the following beneficial effects: since salient regions of an image differ from the other regions and are easily observed, the large visual contrast of local regions allows saliency detection to guide change detection. The method adopts image-block-based SAE saliency detection, obtains the salient region through the self-learning and self-training of the SAE, and obtains the final change detection result by applying a clustering method to the salient region, thereby effectively suppressing the influence of SAR speckle noise, increasing the size of image that can be processed, and improving detection precision. In addition, because the method applies clustering only within the salient region, the area to be examined is effectively reduced, and SAR images of larger size can be processed.
Drawings
FIG. 1 is an overall flow chart of the detection method of the present invention;
fig. 2(a) the first experimental simulation image, taken in April 2009, size 2000 × 2000;
fig. 2(b) the first experimental simulation image, taken in September 2009, size 2000 × 2000;
FIG. 3 is a graph of the test results of the method of the present invention for a first set of simulation experiments;
FIG. 4(a) is the change detection result of the first set of simulation experiments obtained by producing a difference map with the mean-ratio operator and analyzing it with the KI threshold method; FIG. 4(b) is the result obtained with the GKI threshold method; FIG. 4(c) is the result obtained with Kmeans clustering of the difference map;
fig. 5(a) the second experimental simulation image, taken in April 2009, size 2000 × 2000;
fig. 5(b) the second experimental simulation image, taken in September 2009, size 2000 × 2000;
FIG. 6 is a graph of the test results of the method of the present invention for a second set of simulation experiments;
FIG. 7(a) is the change detection result of the second set of simulation experiments obtained by producing a difference map with the mean-ratio operator and analyzing it with the KI threshold method; FIG. 7(b) is the result obtained with the GKI threshold method; FIG. 7(c) is the result obtained with Kmeans clustering of the difference map;
fig. 8(a) the image constructed from the third set of experimental raw data, taken in April 2009, size 2000 × 2000;
fig. 8(b) the image constructed from the third set of experimental raw data, taken in September 2009, size 2000 × 2000;
FIG. 8(c) is a reference graph of variation detection of the third set of experimental data;
FIG. 9 is a graph of the test results of the third set of simulation experiments performed by the method of the present invention;
FIG. 10(a) is the change detection result of the third set of simulation experiments obtained by producing a difference map with the mean-ratio operator and analyzing it with the KI threshold method; FIG. 10(b) is the result obtained with the GKI threshold method; FIG. 10(c) is the result obtained with Kmeans clustering of the difference map.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, the high resolution SAR image change detection method of the present invention is implemented as follows:
step 1, constructing a first training data set D1; and taking image blocks with the size of 41 × 41 from the two registered SAR images of the same region and different time phases in a sliding window manner to serve as a first training data set D1, wherein the first training data set D1 includes all data of the two images, the first half of which is data of the image 1, and the second half of which is data of the image 2.
And 2, normalizing the first training data set D1 to [0,1] to obtain a first normalized training data set N1.
Common normalization methods are: linear scaling method, 0-means normalization method.
The embodiment adopts a linear scaling method, namely, the maximum value max (D1) of the data set D1 is firstly obtained; each element in the first training data set D1 is then divided by the maximum value max (D1) to yield a first normalized training data set N1.
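The linear scaling of this embodiment is a one-line NumPy operation (a minimal sketch; it assumes the data-set maximum is positive, which holds for SAR amplitude data):

```python
import numpy as np

def normalize_max(dataset):
    """Linear scaling: divide every element by the data-set maximum,
    mapping non-negative values into [0, 1]."""
    return dataset / np.max(dataset)
```

The 0-mean alternative mentioned above would subtract the mean and divide by the standard deviation instead.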
Step 3, constructing a first three-layer stack self-coding network:
(3a) setting a first layer characteristic number 3362, a second layer characteristic number 1681 and a third layer characteristic number 840;
(3b) defining the input of each layer in the self-coding network as input, reconstructing the result as output, and defining the loss function as:
loss = \frac{1}{2N}\sum_{i=1}^{N}\left\| output_i - input_i \right\|^2   <1>
selecting a part of samples from a first normalized training data set N1 and sending the samples into a first-layer network, and training the network through a minimum loss function to obtain first-layer characteristics; sending the obtained first layer characteristics as input data into a second layer network, and obtaining second layer characteristics in the same way; and sending the second layer characteristics as input data to a third layer network to obtain the required deepest layer characteristics.
(3c) Sending all the first normalized training data set N1 into the self-encoding network trained in the step (3b) to respectively obtain the characteristics F1 and F2 of the two images;
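The layer-by-layer greedy training of steps (3a)-(3c) can be sketched as follows. This is a minimal NumPy illustration, not the patent's TensorFlow implementation: the sigmoid activation, learning rate, and epoch count are assumptions, and small layer sizes stand in for the 3362/1681/840 features of the real network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder_layer(X, n_hidden, lr=0.5, epochs=200):
    """Train one autoencoder layer by gradient descent on the squared
    reconstruction loss  L = 1/(2N) * sum_i ||output_i - input_i||^2."""
    n_samples, n_in = X.shape
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)        # encode
        out = sigmoid(H @ W2 + b2)      # reconstruct
        d_out = ((out - X) / n_samples) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * H * (1.0 - H)
        W2 -= lr * (H.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return W1, b1

def train_sae(X, layer_sizes):
    """Greedy layer-by-layer training: each layer's hidden code becomes
    the next layer's input; the last code is the deepest-layer feature."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder_layer(H, n_hidden)
        params.append((W, b))
        H = sigmoid(H @ W + b)
    return params, H
```

Feeding the full normalized data set through the three trained encoders, as in step (3c), yields the image features F1 and F2.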
step 4, obtaining a characteristic difference map S1 by subtracting the F1 and the F2 obtained in the step 3 according to a formula <2 >:
S1=|F1|-|F2| <2>
determining a proper threshold segmentation difference map on the feature difference map S1 by a threshold method to obtain a first salient region R1;
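The patent leaves the "threshold method" of step 4 unspecified; Otsu's between-class-variance criterion below is one common, illustrative choice for segmenting the feature difference map S = |F1| - |F2|:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's between-class-variance threshold - one concrete stand-in
    for the unspecified 'threshold method' in the patent."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = np.cumsum(p)                 # class-0 probability up to bin k
    mu = np.cumsum(p * centers)      # partial mean up to bin k
    best_t, best_var = centers[0], -1.0
    for k in range(nbins - 1):
        if w[k] <= 0.0 or w[k] >= 1.0:
            continue
        m0 = mu[k] / w[k]
        m1 = (mu[-1] - mu[k]) / (1.0 - w[k])
        var = w[k] * (1.0 - w[k]) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def salient_region(F1, F2):
    """S = |F1| - |F2| as in formula <2>; pixels above the threshold
    form the salient mask."""
    S = np.abs(F1) - np.abs(F2)
    return S > otsu_threshold(S.ravel())
```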
and 5, constructing a second training data set D2, taking image blocks with the size of 51 × 51 from the two registered SAR images in the same region and different time phases in a sliding window manner to serve as a second training data set D2, wherein the second training data set D2 comprises all data of the two images, the first half of the data is data of the image 1, and the second half of the data is data of the image 2.
And 6, normalizing the second training data set D2 to [0,1] to obtain a second normalized training data set N2.
Common normalization methods are: linear scaling method, 0-means normalization method.
In the embodiment, a linear scaling method is adopted, namely, the maximum value max (D2) of the second training data set D2 is firstly obtained; each element in the second training data set D2 is then divided by the maximum value max (D2) to yield a second normalized training data set N2.
And 7, constructing a second three-layer stack self-coding network:
(7a) setting a first layer characteristic number 5202, a second layer characteristic number 2601 and a third layer characteristic number 1300;
(7b) defining the input of each layer in the self-coding network as input, the reconstruction result as output, defining the loss function as a formula <1>, selecting partial samples from a second normalized training data set N2 and sending the partial samples into a first layer network, and training the network through the minimum loss function to obtain first layer characteristics; sending the obtained first layer characteristics as input data to a second layer network, and obtaining second layer characteristics in the same way; and sending the second layer characteristics as input data to a third layer network to obtain the required deepest layer characteristics.
(7c) Sending the second normalized training data set N2 into the self-encoding network trained in the step (7b) to respectively obtain the characteristics F3 and F4 of the two images;
step 8, obtaining a characteristic difference map S2 by subtracting the F3 and the F4 obtained in the step 7 according to formula <3>:
S2=|F3|-|F4| <3>
determining a proper threshold segmentation difference map on the feature difference map S2 by a threshold method to obtain a second significant region R2;
step 9, taking the union of the significance regions in R1 and R2 as a final significance region R, and clustering the feature difference graph by a fuzzy C-means clustering (FCM) method, wherein the specific clustering steps comprise:
(9a) initializing two clustering centers V1 and V2, and initializing a membership matrix randomly;
(9b) calculating the distance from the ith sample to the jth cluster center according to the formula <5>
d_{ij} = \left\| x_i - v_j \right\|   <5>
(9c) Updating the membership degree of each sample according to the membership degree formula <6>,
\mu_j(x_i) = \frac{\left(1/\left\|x_i - v_j\right\|^2\right)^{1/(m-1)}}{\sum_{k=1}^{c}\left(1/\left\|x_i - v_k\right\|^2\right)^{1/(m-1)}}   <6>
wherein \mu_j(x_i) is the fuzzy membership of the ith sample x_i to the jth class, c is the number of clusters, and m > 1 is the fuzzification exponent.
(9d) Updating the clustering center according to the membership degree of each sample through a formula <7 >:
v_j = \frac{\sum_{i=1}^{N}\left[\mu_j(x_i)\right]^m x_i}{\sum_{i=1}^{N}\left[\mu_j(x_i)\right]^m}   <7>
wherein v_j is the cluster center of the jth class and N is the number of samples.
(9e) Calculating the sum of the squares of the errors of each sample and the class center of the sample according to the formula <8 >:
J = \sum_{j=1}^{c}\sum_{i=1}^{N}\left[\mu_j(x_i)\right]^m \left\|x_i - v_j\right\|^2   <8>
When the cluster centers no longer change or the sum of squared errors no longer decreases, the optimal clustering result is reached and two classes are obtained; the class whose features differ less between the two time phases is taken as the unchanged class and the other as the changed class, giving the final change detection result.
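Steps (9a)-(9e) correspond to the standard fuzzy C-means loop. The sketch below assumes the usual fuzzification exponent m = 2 and a membership-change tolerance, neither of which is fixed by the patent:

```python
import numpy as np

def fcm(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-6, seed=0):
    """Fuzzy C-means following steps (9a)-(9e): random memberships,
    then alternating centre updates <7> and membership updates <6>
    until the memberships stop changing."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                 # (9a) random memberships
    for _ in range(max_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]      # (9d) cluster centres v_j
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)  # (9b) distances
        D = np.maximum(D, 1e-12)                      # guard against zero distance
        inv = D ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)  # (9c) membership update
        if np.abs(U_new - U).max() < tol:             # (9e) converged
            U = U_new
            break
        U = U_new
    return U.argmax(axis=1), V
```

In the method, only the pixels of the final salient region R are fed to this loop, which is what keeps the region to be examined small.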
The effect of the invention can be further illustrated by the following simulation experiment:
1. simulation parameters:
for the experimental simulations with reference maps, quantitative change detection analysis was performed:
a. calculating the number of missed detections:
counting the number of pixels in the changed area in the experimental result graph, comparing the number of pixels with the number of pixels in the changed area in the reference graph, and calling the number of pixels which are changed in the reference graph but are detected as unchanged in the experimental result graph as the number FN of missed detections;
b. calculating the number of false detections:
counting the number of pixels in an unchanged area in the experiment result graph, comparing the number of pixels with the number of pixels in an unchanged area in the reference graph, and calling the number of pixels which are not changed in the reference graph but are detected as changed in the experiment result graph as an error detection number FP;
c. calculating the number of change class positive detections:
counting the number of pixels in the changed area in the experiment result graph, comparing the number of pixels with the number of pixels in the changed area in the reference graph, and calling the number of pixels in the experiment result graph and the number of pixels in the reference graph which are changed as the number TP of change type positive detections;
d. calculating the number of unchanged positive detections:
counting the number of pixels in an unchanged area in the experiment result graph, comparing the number of pixels with the number of pixels in an unchanged area in the reference graph, and calling the number of pixels which are unchanged in both the experiment result graph and the reference graph as an unchanged type positive detection number TN;
e. probability of correct classification PCC: PCC = (TP + TN)/(TP + FP + TN + FN)
f. Kappa coefficient for measuring consistency of the detection result graph and the reference graph:
Kappa=(PCC-PRE)/(1-PRE)
wherein: PRE = ((TP + FP) × Nc + (FN + TN) × Nu)/N²; here N denotes the total number of pixels, and Nc and Nu denote the actual number of changed pixels and the actual number of unchanged pixels, respectively.
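The counts FN, FP, TP, TN and the PCC and Kappa indices of steps a-f can be computed directly from boolean change masks:

```python
import numpy as np

def change_metrics(result, reference):
    """PCC and Kappa from the confusion counts defined in steps a-f.
    Both arguments are boolean arrays with True marking changed pixels."""
    result = np.asarray(result, dtype=bool).ravel()
    reference = np.asarray(reference, dtype=bool).ravel()
    TP = int(np.sum(result & reference))      # changed in both
    TN = int(np.sum(~result & ~reference))    # unchanged in both
    FP = int(np.sum(result & ~reference))     # false detections
    FN = int(np.sum(~result & reference))     # missed detections
    N = TP + TN + FP + FN
    Nc, Nu = TP + FN, FP + TN                 # actual changed / unchanged pixels
    PCC = (TP + TN) / N
    PRE = ((TP + FP) * Nc + (FN + TN) * Nu) / N ** 2
    Kappa = (PCC - PRE) / (1.0 - PRE)
    return PCC, Kappa
```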
2. Simulation conditions are as follows:
the hardware platform is as follows: intel (r) xeon (r) CPU E5-2630, 2.40GHz 16, with 64G memory.
The software platform is as follows: tensorflow.
3. Simulation content and results:
experiments were performed using the method of the present invention under the above-described simulation conditions.
① The first group reflects the change images of the Zambezi River basin in the Namibia region: fig. 2(a) and fig. 2(b) were taken in April 2009 and September 2009, respectively, and measure 2000 × 2000.
FIG. 3 is the result of the first set of simulation experiments obtained by the high-resolution SAR change detection based on SAE and saliency detection; FIG. 4(a) is the change detection result obtained by producing a difference map with the mean-ratio operator and analyzing it with the KI threshold method; FIG. 4(b) is the result obtained with the GKI threshold method; FIG. 4(c) is the result obtained with Kmeans clustering of the difference map. The experimental results show that the change detection result of the method has fewer noise points, the details are kept relatively complete, and the change detection effect is better.
② The second group reflects the second change image of the Zambezi River basin in the Namibia region: fig. 5(a) and fig. 5(b) were taken in April 2009 and September 2009, respectively, and measure 2000 × 2000.
FIG. 6 is the result of the second set of simulation experiments obtained by the high-resolution SAR change detection based on SAE and saliency detection; FIG. 7(a) is the change detection result obtained by producing a difference map with the mean-ratio operator and analyzing it with the KI threshold method; FIG. 7(b) is the result obtained with the GKI threshold method; FIG. 7(c) is the result obtained with Kmeans clustering of the difference map. According to the experimental results, the change detection result of the method has fewer noise points, the detection of the changed area is more complete, and the change detection effect is better.
③ For the third set of experimental data, the background was cropped from a relatively unchanged area of the Namibia region, and ground objects of other kinds cropped from the same scene were placed into the background as the changed area. Because the changed area was inserted manually, correct class labels are available. The two time-phase images to be detected are shown in fig. 8(a) and fig. 8(b), and the change reference map is shown in fig. 8(c).
FIG. 9 is the result of the third set of simulation experiments obtained by the high-resolution SAR change detection based on SAE and saliency detection; FIG. 10(a) is the change detection result obtained by producing a difference map with the mean-ratio operator and analyzing it with the KI threshold method; FIG. 10(b) is the result obtained with the GKI threshold method; FIG. 10(c) is the result obtained with Kmeans clustering of the difference map. According to the experimental results, the detection in both the changed and unchanged areas is good, there is almost no noise interference, and the change detection result is basically consistent with the reference map. Table 1 shows the performance indices of the change detection maps obtained by the method of the present invention and by three different difference-map analysis methods.
TABLE 1 Comparison of the effects of three prior methods and the change detection of the present invention

Index   MR_KI   MR_GKI   MR_FCM   Salient_FCM
PCC     0.939   0.961    0.960    0.981
Kappa   0.744   0.813    0.782    0.972
In summary, the salient region is extracted through SAE-based saliency detection and then divided into two classes by the FCM clustering method, which yields the final change detection result.

Claims (7)

1. A high-resolution SAR image change detection method based on SAE and significance detection is characterized in that:
1) extracting image blocks from the two SAR images to be respectively used as a first training data set D1 and a second training data set D2, wherein the image blocks of the first training data set D1 and the second training data set D2 are different in size, and the two SAR images are registered images of the same region and different time phases;
2) normalizing the first training data set D1 and the second training data set D2 to be between [0,1] respectively to obtain a first normalized training data set N1 and a second normalized training data set N2;
3) respectively constructing a first three-layer stack self-coding network and a second three-layer stack self-coding network, determining the feature number of each layer of the two networks, randomly initializing weight and bias, respectively sending a first normalized training data set N1 and a second normalized training data set N2 into the first three-layer stack self-coding network and the second three-layer stack self-coding network, and training by adopting a layer-by-layer greedy training method to obtain the weight and the bias of each layer; sending the two images into a trained network to obtain the characteristics of the two images;
the step of constructing the first three-layer stack self-coding network comprises the following steps:
3-1a, setting a first layer characteristic number 3362, a second layer characteristic number 1681 and a third layer characteristic number 840;
3-1b, defining the input of each layer in the self-coding network as input, the reconstruction result as output, and defining the loss function as:
loss = ||output - input||^2
selecting samples from the first normalized training data set N1 and feeding them into the first-layer network, and training the network by minimizing the loss function to obtain the first-layer features; feeding the obtained first-layer features as input data into the second-layer network and obtaining the second-layer features in the same way; feeding the obtained second-layer features as input data into the third-layer network to obtain the deepest-layer features;
3-1c, sending the first normalized training data set N1 into the self-encoding network trained in the step 3-1b to obtain image characteristics;
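The greedy layer-by-layer training of steps 3-1a to 3-1c can be sketched as follows. This is a minimal NumPy sketch with sigmoid units and plain batch gradient descent; the learning rate, epoch count, and initialization scale are illustrative assumptions, not values from the patent (which specifies only the layer feature numbers, e.g. 3362/1681/840):

```python
import numpy as np

def train_autoencoder_layer(X, n_hidden, epochs=200, lr=0.1, seed=0):
    """Train one auto-encoder layer by minimizing ||output - input||^2.

    X: (n_samples, n_in) data in [0, 1]. Returns (W, b, H) where H is
    the hidden code that becomes the next layer's input.
    """
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)            # encode
        Y = sig(H @ W2 + b2)            # reconstruct
        dY = (Y - X) * Y * (1 - Y)      # grad of 0.5*||Y - X||^2 through sigmoid
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(0)
        W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)
    return W1, b1, sig(X @ W1 + b1)

def train_stacked_sae(X, layer_sizes):
    """Greedy layer-by-layer training as in step 3-1b: each trained
    layer's hidden code is fed as input data to the next layer."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b, H = train_autoencoder_layer(H, n_hidden)
        params.append((W, b))
    return params, H  # H holds the deepest-layer features
```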
the step of constructing the second three-layer stack self-coding network comprises the following steps:
3-2a, setting a first layer characteristic number 5202, a second layer characteristic number 2601 and a third layer characteristic number 1300;
3-2b, defining the input of each layer in the self-coding network as input, the reconstruction result as output, and defining the loss function as a formula:
loss = ||output - input||^2
selecting samples from the second normalized training data set N2 and feeding them into the first-layer network, and training the network by minimizing the loss function to obtain the first-layer features; feeding the obtained first-layer features as input data into the second-layer network and obtaining the second-layer features in the same way; feeding the obtained second-layer features as input data into the third-layer network to obtain the deepest-layer features;
3-2c, sending the second normalized training data set N2 into the self-coding network trained in the step 3-2b to obtain image characteristics;
4) computing the difference between the two images in the feature domain, segmenting the resulting difference map by a thresholding method, and obtaining a first salient region and a second salient region of different sizes respectively;
5) combining the first significant region and the second significant region to obtain a final significant region, and performing a clustering algorithm on the extracted final significant region to obtain a final change detection result;
step 5) specifically taking a union of the first salient region and the second salient region as a final salient region R;
5a) initializing two clustering centers V1 and V2, and initializing a membership matrix randomly;
5b) calculating the distance from the ith sample to the jth cluster center:

d_ij = ||x_i - v_j||
5c) updating the membership degree of each sample according to the membership formula:

μ_j(x_i) = (1/d_ij^2)^(1/(m-1)) / Σ_{k=1}^{2} (1/d_ik^2)^(1/(m-1))

wherein μ_j(x_i) is the fuzzy membership of the ith sample to the jth class and m is the fuzzy exponent;
5d) updating the cluster center of each class according to the membership degrees of the samples:

v_j = Σ_i μ_j(x_i)^m · x_i / Σ_i μ_j(x_i)^m

wherein v_j is the cluster center of the jth class;
5e) calculating the sum of squared errors between the samples and their class centers:

J = Σ_{j=1}^{2} Σ_i μ_j(x_i)^m ||x_i - v_j||^2
when the cluster centers no longer change or the sum of squared errors no longer decreases, the optimal clustering result is reached, yielding two classes; the clustering result is then divided into a changed class and an unchanged class according to the feature difference from the unchanged class, giving the final change detection result.
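Steps 5a)-5e) describe standard fuzzy C-means with two clusters. A minimal sketch; the fuzzy exponent m = 2, the convergence tolerance, and the iteration cap are assumptions, since the claim does not fix them:

```python
import numpy as np

def fcm_two_class(samples, m=2.0, max_iter=100, tol=1e-6, seed=0):
    """Fuzzy C-means with two cluster centers, following steps 5a)-5e).

    samples: (n, d) feature vectors from the salient region.
    Returns (hard labels via argmax of membership, cluster centers).
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(samples, float)
    U = rng.random((len(X), 2))
    U /= U.sum(axis=1, keepdims=True)        # 5a) random membership matrix
    centers = np.zeros((2, X.shape[1]))
    for _ in range(max_iter):
        Um = U ** m
        # 5d) centers from the current memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # 5b) squared distances to each center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        d2 = np.maximum(d2, 1e-12)           # guard against zero distance
        # 5c) membership update
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:    # 5e) stop when memberships settle
            U = U_new
            break
        U = U_new
    return U.argmax(axis=1), centers
```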
2. The SAE and significance detection-based high-resolution SAR image change detection method according to claim 1, characterized in that: step 1) taking image blocks with the size of 41 × 41 from the two SAR images as a first training data set D1 in a sliding window mode, and taking image blocks with the size of 51 × 51 from the two SAR images as a second training data set D2.
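The sliding-window sampling of claim 2 can be sketched as follows; the stride is not specified in the claim, so it is left as a parameter here:

```python
import numpy as np

def extract_patches(img, size, stride=1):
    """Slide a size x size window over a 2-D image and collect the
    patches as flattened row vectors (sizes 41 and 51 in claim 2)."""
    h, w = img.shape
    patches = []
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            patches.append(img[r:r + size, c:c + size].ravel())
    return np.asarray(patches)
```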
3. The method for detecting the change of the high-resolution SAR image based on the SAE and the significance detection as claimed in claim 1, wherein the step 4) is to obtain a first feature difference map S1 by subtracting the features F1 and F2 of the two images obtained in the step 3-1 c:
S1=|F1|-|F2|
and determining a threshold segmentation difference map on the first feature difference map S1 by a threshold method to obtain a first significant region with the image block size of 41 × 41 in the two SAR images.
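Claims 3 and 4 form a salient region by thresholding a feature-magnitude difference map. A minimal sketch, assuming a mean-plus-one-standard-deviation threshold on the absolute difference; the claims leave the specific thresholding method open:

```python
import numpy as np

def salient_region(F1, F2, threshold=None):
    """Difference map S = |F1| - |F2| as in claim 3, then a binary
    salient mask by thresholding |S|."""
    S = np.abs(F1) - np.abs(F2)
    A = np.abs(S)
    if threshold is None:
        threshold = A.mean() + A.std()  # assumed; not fixed by the claim
    return A > threshold
```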
4. The method for detecting changes in high-resolution SAR images based on SAE and saliency detection as claimed in claim 1, wherein step 4) is to obtain a second feature difference map S2 by subtracting the features F3 and F4 of the two images obtained in step 3-2 c:
S2=|F3|-|F4|
and determining a threshold segmentation difference map on the second feature difference map S2 by a threshold method to obtain a second significant region with the image block size of 51 × 51 in the two SAR images.
5. The SAE and significance detection-based high-resolution SAR image change detection method according to claim 1, characterized in that: and 3) respectively selecting 20% of samples from the first normalized training data set N1 and the second normalized training data set N2 to be sent into the first three-layer stack self-coding network and the second three-layer stack self-coding network.
6. The SAE and significance detection-based high-resolution SAR image change detection method according to claim 1, characterized in that: the normalization in the step 2) adopts a linear scaling method or a 0-mean value normalization method.
7. The SAE and significance detection-based high-resolution SAR image change detection method according to claim 6, characterized in that: when a linear scaling method is adopted, a first training data set maximum value max (D1) of a first training data set D1 and a second training data set maximum value max (D2) of a second training data set D2 are firstly obtained; dividing each element in the first training data set D1 by the maximum value max (D1) of the first training data set to obtain a first normalized training data set N1; dividing each element in the second training data set D2 by the second training data set maximum max (D2) yields a second normalized training data set N2.
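Claim 7's linear scaling can be sketched as follows; it assumes non-negative SAR amplitude data so that division by the data set maximum yields values in [0, 1]:

```python
import numpy as np

def linear_scale(D):
    """Divide every element of the training data set by its maximum,
    as in claim 7, producing a normalized data set in [0, 1]."""
    D = np.asarray(D, float)
    return D / D.max()
```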
CN201710365433.0A 2017-05-22 2017-05-22 High-resolution SAR image change detection method based on SAE and significance detection Active CN107256409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710365433.0A CN107256409B (en) 2017-05-22 2017-05-22 High-resolution SAR image change detection method based on SAE and significance detection


Publications (2)

Publication Number Publication Date
CN107256409A CN107256409A (en) 2017-10-17
CN107256409B true CN107256409B (en) 2021-01-01

Family

ID=60028180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710365433.0A Active CN107256409B (en) 2017-05-22 2017-05-22 High-resolution SAR image change detection method based on SAE and significance detection

Country Status (1)

Country Link
CN (1) CN107256409B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108447057B (en) * 2018-04-02 2021-11-30 西安电子科技大学 SAR image change detection method based on significance and depth convolution network
CN108492298B (en) * 2018-04-13 2021-11-09 西安电子科技大学 Multispectral image change detection method based on generation countermeasure network
CN108805057B (en) * 2018-05-29 2020-11-17 北京师范大学 SAR image reservoir area detection method based on joint significance analysis
CN109242889B (en) * 2018-08-27 2020-06-16 大连理工大学 SAR image change detection method based on context significance detection and SAE
CN113034471B (en) * 2021-03-25 2022-08-02 重庆大学 SAR image change detection method based on FINCH clustering

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020978A (en) * 2012-12-14 2013-04-03 西安电子科技大学 SAR (synthetic aperture radar) image change detection method combining multi-threshold segmentation with fuzzy clustering
CN105205807A (en) * 2015-08-19 2015-12-30 西安电子科技大学 Remote sensing image change detection method based on sparse automatic code machine

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101634705B (en) * 2009-08-19 2011-12-07 西安电子科技大学 Method for detecting target changes of SAR images based on direction information measure
EP2431764A1 (en) * 2010-09-17 2012-03-21 BAE Systems PLC Processing SAR imagery
CN102176014B (en) * 2011-01-19 2013-02-13 西安理工大学 Method for detecting urban region change based on multi-temporal SAR (synthetic aperture radar) images
CN103955926B (en) * 2014-04-22 2016-10-05 西南交通大学 Method for detecting change of remote sensing image based on Semi-NMF


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Stacked Convolutional Denoising Auto-Encoders for Feature Representation; Bo Du et al.; IEEE Transactions on Cybernetics; 2017-04-30; Vol. 47, No. 04; Sections 1-4 *
Object-Oriented Feature Fusion Based Change Detection Method for High-Resolution Remote Sensing Images; Wang Wenjie et al.; Application Research of Computers; 2009-08-31; Vol. 26, No. 08; Sections 1-4 *

Also Published As

Publication number Publication date
CN107256409A (en) 2017-10-17

Similar Documents

Publication Publication Date Title
CN107256409B (en) High-resolution SAR image change detection method based on SAE and significance detection
US10909409B2 (en) System and method for blind image quality assessment
CN103475898B (en) Non-reference image quality assessment method based on information entropy characters
Wan et al. Multi-sensor remote sensing image change detection based on sorted histograms
CN104200471B (en) SAR image change detection based on adaptive weight image co-registration
CN101950364A (en) Remote sensing image change detection method based on neighbourhood similarity and threshold segmentation
Gong et al. A coupling translation network for change detection in heterogeneous images
CN102360503B (en) SAR (Specific Absorption Rate) image change detection method based on space approach degree and pixel similarity
CN111008644B (en) Ecological change monitoring method based on local dynamic energy function FCN-CRF model
CN116012364B (en) SAR image change detection method and device
CN104680536B (en) The detection method changed to SAR image using improved non-local mean algorithm
CN108830856B (en) GA automatic segmentation method based on time series SD-OCT retina image
CN108171119B (en) SAR image change detection method based on residual error network
Fu et al. A statistical approach to detect edges in SAR images based on square successive difference of averages
CN112508963B (en) SAR image segmentation method based on fuzzy C-means clustering
Wang et al. The PAN and MS image fusion algorithm based on adaptive guided filtering and gradient information regulation
CN108986083B (en) SAR image change detection method based on threshold optimization
Wang et al. Modified statistically homogeneous pixels’ selection with multitemporal SAR images
Hao et al. A novel change detection approach for VHR remote sensing images by integrating multi-scale features
CN113837074B (en) Remote sensing image change detection method combining posterior probability and space neighborhood information
CN105205807A (en) Remote sensing image change detection method based on sparse automatic code machine
CN104392209B (en) A kind of image complexity evaluation method of target and background
CN117221816A (en) Multi-building floor positioning method based on Wavelet-CNN
Muthukannan et al. Color image segmentation using k-means clustering and optimal fuzzy C-means clustering
CN107358261B (en) High-resolution SAR image change detection method based on curvelet SAE

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant