CN109871875B - Building change detection method based on deep learning - Google Patents
- Publication number
- CN109871875B (application CN201910054336.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixels
- training
- building
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a building change detection method based on deep learning, belonging to the technical field of computer vision. First, buildings are segmented with the deep learning image segmentation algorithm U-Net to obtain binary segmentation images; these binary segmentation images are then merged to generate a merged binary image of the building areas; next, the merged image is used as a mask to remove noise ground objects from the input images, yielding noise-free images; finally, change detection is performed on the noise-free images with the unsupervised deep learning network PCANet, and the optimal change image is selected and output. Compared with existing building change detection methods, this method performs change detection with deep learning without requiring a large amount of labeled training data, and when noise ground objects such as trees, vehicles, and pedestrians interfere in the input images, change detection is performed only on the building areas. Test results show that, compared with the GDBM model method, the accuracy of the method is improved by 7% and the false alarm rate is reduced by 59.8%.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a building change detection method based on deep learning.
Background
Buildings are ground objects closely related to human life, densely and widely distributed. With the continuous development of remote sensing and aerial photography technology, it has become easier to acquire high-resolution satellite or aerial images of buildings. Building change detection analyzes images of the same buildings acquired at different times to obtain their change information. Over time, buildings mainly undergo three kinds of change: new construction, demolition, and reconstruction. Some changes are legal updates, reflecting the development of cities and society; others are violations, which mar the urban appearance and hinder social progress. Therefore, high-precision building change detection facilitates the extraction of illegal building areas, enables dynamic monitoring of buildings, and allows building distribution to be corrected in time; it also enables rapid and accurate building statistics, playing an important role in urban planning, geographic information updating, and land resource management.
According to the detection algorithm, building change detection methods can be roughly divided into two types: methods based on classical image processing and methods based on deep learning. Methods based on classical image processing mainly operate at the pixel, feature, or object level, require considerable manual involvement, and are more likely to detect pseudo-change regions. For example, pixel-level methods consider only the characteristics of individual pixels and lack the spatial information of neighboring pixels, so the detection result is sensitive to noise and lacks robustness. The effectiveness of feature-level methods depends on the quality of building feature extraction, and change detection with a single feature may cause many missed detections and false alarms. Object-level methods first segment the image, merging pixels with the same properties into objects, and then perform change detection, which remedies the shortcomings of pixel-level methods to some extent. However, traditional image segmentation methods struggle to extract similar objects optimally, which in turn degrades the building change detection result. In addition, some classical methods require constructing digital surface models or using geographic information systems, which is costly and computationally complex.
In recent years, deep learning has become one of the research hotspots in computer vision. With the continuous optimization of deep neural networks, and in particular the introduction of convolutional neural networks, feature extraction capability has improved substantially, driving rapid progress in image classification, recognition, and segmentation. A deep learning network can automatically learn deep change features under various conditions, overcoming the heavy manual involvement, poor robustness, and limited feature extraction capability of classical change detection methods. Deep-learning-based methods have therefore become an important direction in building change detection research. Typical existing methods exploit the strong object recognition capability of a region-based fast convolutional neural network by treating changed regions as targets and unchanged regions as background, or extract features with a Siamese convolutional network and judge changed regions by measuring the distance between feature vectors. A common trait of these methods is that they require a large amount of labeled image data to train the network; when labeled training samples are insufficient, the desired detection performance is hard to achieve. Another method detects building changes with a Gaussian-Bernoulli Deep Boltzmann Machine (GDBM) model: by setting an adaptive sampling interval, it extracts the most likely changed and unchanged samples from a preprocessed change intensity map, trains the model to extract change features, and finally generates a change detection map. This method reduces the amount of labeled training data required, but its detection accuracy is not high enough.
Analysis of current deep-learning-based change detection methods shows that, besides most methods requiring a large amount of labeled training data, they judge changes directly over the entire image to be detected. However, images used for building change detection generally contain not only buildings but also a large number of noise ground objects, i.e., ground objects other than buildings, such as trees, vehicles, and pedestrians, which inevitably interfere with the detection result to varying degrees. In other words, detecting over the full image increases the probability of detecting pseudo-change regions, raising the false alarm rate while lowering the accuracy.
Disclosure of Invention
The invention aims to perform change detection with an unsupervised deep learning network, such as the change detection method based on the unsupervised deep learning network PCANet (Feng Gao, Junyu Dong, Bo Li, and Qizhi Xu, "Automatic change detection in synthetic aperture radar images based on PCANet," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 12, pp. 1792-1796, 2016), thereby reducing the amount of labeled image data required; and, through image segmentation, to remove noise ground objects that do not belong to buildings to the greatest extent, thereby improving the accuracy and reducing the false alarm rate.
The technical scheme of the invention is as follows. Given two input images to be detected, I1(i,j) and I2(i,j) (i = 1,...,m; j = 1,...,n, where m×n is the image size): first, a deep learning image segmentation algorithm such as U-Net (Olaf Ronneberger, Philipp Fischer, and Thomas Brox, "U-Net: Convolutional networks for biomedical image segmentation," Medical Image Computing and Computer-Assisted Intervention, vol. 9351, pp. 234-241, 2015) is applied to I1(i,j) and I2(i,j) to segment the buildings, obtaining binary segmentation images Is1(i,j) and Is2(i,j); then the binary segmentation images Is1(i,j) and Is2(i,j) are merged to generate a building-area merged binary image IM(i,j); next, with the merged image IM(i,j) as a mask, noise ground objects are removed from the input images I1(i,j) and I2(i,j) to obtain noise-free images Ip1(i,j) and Ip2(i,j); finally, change detection is performed on the noise-free images with an unsupervised deep learning network such as PCANet, and the optimal change image is selected and output. The specific implementation steps (see FIG. 1) are as follows:
The first step: input two images to be detected, I1(i,j) and I2(i,j), whose single-channel bit depth is B.
The second step: building area segmentation. Using the trained deep learning image segmentation network U-Net (the network structure is shown in FIG. 2), perform building area segmentation on I1(i,j) and I2(i,j) respectively to obtain the binary segmentation images Is1(i,j) and Is2(i,j).
The specific steps of training U-Net (see FIG. 3) are as follows:
step 1: input training setWherein N isRRepresenting the total number of samples in the training set; a set of hyper-parameters required for network training is set, including the learning rate alpha (generally 10)-3~10-6) And the number of training rounds epochs (generally 10-20).
Step 2: start training the U-Net network.
Step 3: adjust the learning rate α (multiplying by 10^-1 each time) and the number of training epochs (adding 1 each time), and from the results obtained by training with the several groups of different hyper-parameters, select the network parameters with the best segmentation effect (the highest accuracy).
Step 4: output the trained U-Net network.
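As an illustration of steps 1 to 4 above, the following Python sketch enumerates the hyper-parameter grid described in the text (learning rate starting at 10^-3 and multiplied by 10^-1 each round, epochs from 10 to 20) and keeps the best setting. The `evaluate` callback standing in for an actual U-Net training run is hypothetical, since the patent does not specify a training framework.

```python
import itertools

def sweep_unet_hyperparams(evaluate):
    """Return the (alpha, epochs) pair with the highest accuracy.

    `evaluate(alpha, epochs)` is a hypothetical stand-in that would train
    U-Net with those hyper-parameters and return its segmentation accuracy.
    """
    alphas = [10.0 ** (-e) for e in range(3, 7)]   # 1e-3, 1e-4, 1e-5, 1e-6
    epoch_choices = range(10, 21)                  # 10, 11, ..., 20
    best = None
    for alpha, epochs in itertools.product(alphas, epoch_choices):
        acc = evaluate(alpha, epochs)              # accuracy for this setting
        if best is None or acc > best[0]:
            best = (acc, alpha, epochs)
    return best[1], best[2]
```

With a stub `evaluate` this just picks the grid point with the highest returned score; in practice each call would be a full training run.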
The third step: merging the building segmentation maps. Merge the binary segmentation images Is1(i,j) and Is2(i,j) using the following formula to generate the building-area merged binary image IM(i,j):
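The merging of the third step can be sketched as follows. Assuming the U-Net outputs are 0/1 binary masks (the exact encoding of formula (3) is not reproduced in this text), the merge is an element-wise logical OR, so a pixel belongs to the building area if it is a building in either year.

```python
import numpy as np

def merge_masks(s1, s2):
    """Merge two 0/1 building masks into one building-area mask (assumed OR)."""
    return np.logical_or(s1, s2).astype(np.uint8)
```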
The fourth step: removing noise ground objects. Using the building-area merged binary image IM(i,j), remove the noise ground objects from the two images to be detected, I1(i,j) and I2(i,j), to generate the noise-free images Ip1(i,j) and Ip2(i,j).
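The masking of the fourth step can be sketched as follows, again under the assumption of a 0/1 mask encoding (formulas (4) and (5) are not reproduced in this text): non-building pixels are simply zeroed out.

```python
import numpy as np

def apply_mask(image, mask):
    """Zero out non-building pixels of `image` using a 0/1 mask (assumed)."""
    return image * mask.astype(image.dtype)
```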
The fifth step: building change detection. Perform change detection on Ip1(i,j) and Ip2(i,j) with the unsupervised network PCANet. The PCANet-based change detection algorithm is implemented by the following steps (see FIG. 4):
Step 1: apply logarithmic-ratio processing to Ip1(i,j) and Ip2(i,j) to generate the roughly estimated change image ID(i,j):
where "|·|" denotes the absolute value operation.
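Formula (6) is not reproduced in this text; a common form of logarithmic-ratio processing, assumed in the sketch below, is the absolute log-ratio of the two noise-free images with +1 offsets to avoid log(0).

```python
import numpy as np

def log_ratio(p1, p2):
    """Assumed log-ratio change map: |log((p2 + 1) / (p1 + 1))|."""
    p1 = p1.astype(np.float64)
    p2 = p2.astype(np.float64)
    return np.abs(np.log((p2 + 1.0) / (p1 + 1.0)))
```

Identical inputs give a zero map; the larger the relative intensity change at a pixel, the larger its value in the result.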
Step 2: pixel pre-classification and labeling. For image IDExtracting Gabor characteristics from each pixel in (I, j), and matching with Fuzzy C-Means clustering (FCM) algorithm to carry out I pairDClustering pixels in (i, j) to obtain three types of labeled classes, wherein the change w is (1) in sequencec: the subscript c indicates a change, and the intra-class pixel (i, j) is labeled as l ij1 ═ 1; (2) unchanged wuc: the subscript uc indicates no change, and the pixel (i, j) within the class is labeled as lij-1; (3) to be determined wud: the subscript ud indicates that the pending, intra-class pixel (i, j) is labeled as lij0; the number of pixels included in the three categories is Nc、Nuc、NudAnd N isc+Nuc+Nud=m×n。
Step 3: calculating the number of training sample pixels. From the m×n pixels obtained in step 2 (pixel pre-classification and labeling), randomly select training sample pixels, screening positive sample pixels (changed pixels) and negative sample pixels (unchanged pixels) in a certain proportion; the label of a training sample pixel (i,j) is lij ∈ {-1, +1}. The total number of training sample pixels, Nsum, is calculated with the following formula:
Nsum = m × n × r (7)
where r denotes the proportion of randomly selected sample pixels. The number of positive sample pixels, Npos, is calculated with the following formula:
The number of negative sample pixels, Nneg, is then:
Nneg = Nsum − Npos (9)
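Formula (8) is not reproduced in this text; the sketch below assumes a positive/negative split proportional to the changed and unchanged cluster sizes, which reproduces the worked example of the embodiment (Nsum = 78643, Npos = 65600, Nneg = 13043 for r = 30%).

```python
def sample_counts(m, n, r, n_c, n_uc):
    """Sample budget: formula (7) as given, formula (8) assumed proportional."""
    n_sum = int(m * n * r)                     # formula (7), truncated
    n_pos = int(n_sum * n_c / (n_c + n_uc))    # assumed form of formula (8)
    n_neg = n_sum - n_pos                      # formula (9)
    return n_sum, n_pos, n_neg
```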
Step 4: generating the training set. The generation process of a training sample image is shown in FIG. 5. In each of the two input images Ip1(i,j) and Ip2(i,j), select an image block of size k×k (k is generally an odd number in 3 to 11) centered on the sample pixel (i,j); then connect the two image blocks vertically to form a sample image Py of size 2k×k, i.e.:
The label ly of the sample image Py is the label lij of the sample pixel (i,j), i.e., ly ∈ {-1, +1}. In total, Nsum sample images are generated, of which Npos are positive samples and Nneg are negative samples, forming the training set.
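The sample-image construction of step 4 can be sketched as follows: a k×k block from each noise-free image, stacked vertically into a 2k×k sample. Border pixels (within k//2 of the edge) are ignored here for simplicity.

```python
import numpy as np

def make_sample(p1, p2, i, j, k=7):
    """Cut k-by-k blocks centered at (i, j) and stack them into a 2k-by-k sample."""
    h = k // 2
    b1 = p1[i - h:i + h + 1, j - h:j + h + 1]
    b2 = p2[i - h:i + h + 1, j - h:j + h + 1]
    return np.vstack([b1, b2])                 # shape (2k, k)
```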
Step 5: extracting PCANet features and generating the feature vector set. Through the unsupervised deep learning network PCANet, extract the feature vector Fy of each sample image in the training set; together with the label ly, these form the feature vector set. First, set the group of parameters required for training PCANet, including the number of stages of PCA filter convolution (generally 2) and the number of filters Lt in each stage (generally 5 to 8), t = 1, 2.
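A minimal sketch of one PCA-filter-convolution stage is shown below: the L filters of a stage are taken as the top principal components of the mean-removed k×k patches of the input. The full PCANet additionally stacks a second stage, binary hashing, and block histograms, which are omitted in this sketch.

```python
import numpy as np

def pca_filters(image, k=7, L=8):
    """Learn L PCA filters of size k-by-k from the patches of one image."""
    h, w = image.shape
    patches = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            p = image[i:i + k, j:j + k].ravel()
            patches.append(p - p.mean())       # remove each patch's mean
    X = np.array(patches, dtype=np.float64)    # (num_patches, k*k)
    cov = X.T @ X / X.shape[0]
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues ascending
    top = vecs[:, ::-1][:, :L]                 # top-L principal directions
    return top.T.reshape(L, k, k)              # L filters of size k x k
```

Convolving the image with each learned filter yields the L feature maps of that stage; a second stage repeats the procedure on those maps.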
Step 6: training the classifier. Train a classifier with the feature vector set. The classifier is a linear-kernel Support Vector Machine (linear SVM). The only parameter to set for training the linear SVM is a coefficient in the classifier's objective function, called the error penalty factor, generally 1 to 10.
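As a dependency-free stand-in for the linear-kernel SVM of step 6, the sketch below fits a linear decision function by ridge-regularized least squares on the ±1 labels; `lam` is a hypothetical regularization knob playing a role loosely analogous to 1/C, and is not the patent's error penalty factor itself.

```python
import numpy as np

def fit_linear(features, labels, lam=1.0):
    """Fit w for sign(x.w + b) by ridge-regularized least squares (SVM stand-in)."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])   # append bias
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ labels)
    return w

def predict_linear(w, features):
    """Classify feature rows into +1 (changed) or -1 (unchanged)."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return np.where(X @ w >= 0.0, 1, -1)
```

On linearly separable data both this stand-in and a true linear SVM recover the same labeling, though their decision boundaries generally differ.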
Step 7: reclassifying the pixels. For each pixel of the undetermined class wud obtained in step 2 (pixel pre-classification and labeling), generate an image by the method shown in FIG. 5; then extract its features with PCANet; input the obtained feature vector into the trained linear SVM for classification; according to the SVM output, assign the pixel to the changed class wud_c or the unchanged class wud_uc. Finally, combine these with the wc and wuc pixels obtained in step 2, so that all pixels of the image are classified as changed or unchanged.
In the formula, "∪" denotes set union.
The sixth step: generating the change image Io(i,j), of size m×n. From the pixel sets obtained by classification, generate a binary change image with the following formula:
the seventh step: and selecting the optimal variation image. Fixing the number of PCA filtering convolution stages to be 2, and adjusting the number L of filters in the two PCA filtering convolution stages1、L2(5-8 are respectively taken, step 1 is taken), the image block size parameter k (odd number in 3-11) and the error penalty factor C (1-10 are taken, step 1) of the Liner SVM are taken, and the accuracy ACC and the omission factor P of each change image obtained in the sixth step are calculatedMDFalse alarm rate PFA(ii) a According to ACC, PMD、PFASelecting relatively optimal change image from high priority to low priorityAnd its corresponding parameters. The calculation formula of each evaluation index is as follows:
where NTC and NFC denote, respectively, the number of changed pixels correctly detected and the number of pixels detected as changed but actually unchanged, and NTUC and NFUC denote, respectively, the number of unchanged pixels correctly detected and the number of pixels detected as unchanged but actually changed.
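The three evaluation indices can be sketched from the four counts defined above. The exact denominators of formulas (14) to (16) are not reproduced in this text, so common definitions are assumed: miss rate over the actually changed pixels and false-alarm rate over the actually unchanged pixels.

```python
def change_metrics(n_tc, n_fc, n_tuc, n_fuc):
    """Assumed ACC / P_MD / P_FA from the four confusion counts."""
    total = n_tc + n_fc + n_tuc + n_fuc
    acc = (n_tc + n_tuc) / total           # correctly classified pixels
    p_md = n_fuc / (n_tc + n_fuc)          # missed among actually changed
    p_fa = n_fc / (n_fc + n_tuc)           # false alarms among actually unchanged
    return acc, p_md, p_fa
```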
Compared with existing building change detection methods, this method performs change detection with deep learning without requiring a large amount of labeled training data; when noise ground objects such as trees, vehicles, and pedestrians interfere in the input images, change detection is performed only on the building areas, improving accuracy and reducing the false alarm rate. For example, for building change detection on aerial images of a certain area from 2008 and 2013, an existing building change detection method using the GDBM model achieves an accuracy of 83.64% with a false alarm rate of 25.36% (Zhang Xinlong, Chen Xiuwan, Li Fei, et al., "Deep learning change detection method of high-resolution remote sensing images," Acta Geodaetica et Cartographica Sinica, 2017, 46(8): 999-). In comparison, this method achieves an accuracy of 89.50% with a false alarm rate of only 10.20%: the accuracy is improved by 7%, and the false alarm rate is reduced by 59.8%.
Drawings
Fig. 1 is a general block diagram of the present invention.
Fig. 2 is a structural diagram of the deep learning image segmentation network U-Net.
FIG. 3 is a training block diagram of U-Net.
Fig. 4 is a block diagram of a change detection method based on PCANet.
Fig. 5 is a diagram of a sample image generation process in the change detection method based on PCANet.
Fig. 6 shows the aerial images of the two years. (a) 2008 aerial image; (b) 2013 aerial image.
Fig. 7 shows the building segmentation results. (a) 2008 aerial image; (b) 2013 aerial image.
FIG. 8 is a merged view of two aerial image building segmentation maps.
Fig. 9 shows the results of noise ground object removal from the aerial images. (a) 2008 aerial image; (b) 2013 aerial image.
Fig. 10 is the roughly estimated change image.
Fig. 11 shows the building change detection result and the change ground truth. (a) result of this patent; (b) building change ground-truth map.
Detailed Description
The following describes a specific embodiment of the present invention in detail with reference to the technical solution and the attached drawings.
The two existing aerial images are of a certain area in 2008 and 2013; each is of size 512×512 with single-channel bit depth B = 8. Building change detection with the invention proceeds as follows:
The first step: input the 2008 and 2013 aerial images, denoted I1(i,j) and I2(i,j), as shown in FIGS. 6(a) and (b).
The second step: building area segmentation. Using the trained deep learning image segmentation network U-Net (the network structure is shown in FIG. 2), perform building area segmentation on I1(i,j) and I2(i,j) respectively; according to formulas (1) and (2) and the image bit depth, obtain the binary segmentation images Is1(i,j) and Is2(i,j), as shown in FIGS. 7(a) and (b). The specific steps of training U-Net (see FIG. 3) are as follows:
Step 1: input the training set, using a data set published and curated by the CSU-DP laboratory; the spatial resolution of the data set is 0.22 m, and it contains 1233 aerial images of size 512×512, so NR = 1233. Set the group of hyper-parameters required for training the network: the learning rate α = 10^-3 and the number of training epochs = 10.
Step 2: start training the U-Net network.
Step 3: adjust the learning rate α (multiplying by 10^-1 each time, i.e., 10^-4, 10^-5, 10^-6) and the number of training epochs (adding 1 each time, i.e., 11, 12, 13, 14, 15, 16, 17, 18, 19, 20), and among the results obtained with these groups of hyper-parameters, select the network parameters with the best segmentation effect (highest accuracy).
Step 4: output the trained U-Net network.
The third step: merging the building segmentation maps. Merge the binary segmentation images Is1(i,j) and Is2(i,j) using formula (3) to generate the building-area merged binary image IM(i,j), as shown in FIG. 8.
The fourth step: removing noise ground objects. Using formulas (4) and (5) with the building-area merged binary image IM(i,j), remove the noise ground objects from the two images to be detected, I1(i,j) and I2(i,j), to generate the noise-free images Ip1(i,j) and Ip2(i,j), as shown in FIGS. 9(a) and (b).
The fifth step: building change detection. Perform change detection on Ip1(i,j) and Ip2(i,j) with the unsupervised network PCANet. The PCANet-based change detection algorithm is implemented by the following steps (see FIG. 4):
Step 1: using formula (6), apply logarithmic-ratio processing to Ip1(i,j) and Ip2(i,j) to generate the roughly estimated change image ID(i,j), as shown in FIG. 10.
Step 2: pixel pre-classification and labeling. Extract Gabor features from each pixel of the image ID(i,j) and cluster the pixels of ID(i,j) with the FCM algorithm, obtaining three labeled classes, namely: (1) changed, wc, with pixels (i,j) in the class labeled lij = +1; (2) unchanged, wuc, with pixels labeled lij = -1; (3) undetermined, wud, with pixels labeled lij = 0. The numbers of pixels in the three classes are Nc = 204018, Nuc = 40563, and Nud = 17563, with Nc + Nuc + Nud = 512×512.
Step 3: calculating the number of training sample pixels. From all the pixels obtained in step 2 (pixel pre-classification and labeling), randomly select training sample pixels, screening positive sample pixels (changed pixels) and negative sample pixels (unchanged pixels) in the proportion r = 30%; the label of a training sample pixel (i,j) is lij ∈ {-1, +1}. With formula (7), the total number of training sample pixels is Nsum = 78643; with formulas (8) and (9), the numbers of positive and negative sample pixels are Npos = 65600 and Nneg = 13043, respectively.
Step 4: generating the training set. The generation process of a training sample image is shown in FIG. 5. In each of the two input images Ip1(i,j) and Ip2(i,j), select an image block of size 7×7 (k = 7) centered on the sample pixel (i,j); then, by formula (10), connect the two image blocks vertically to form a sample image Py of size 14×7. The label ly of the sample image Py is the label lij of the sample pixel (i,j), i.e., ly ∈ {-1, +1}. In total, 78643 sample images are generated, of which 65600 are positive samples and 13043 are negative samples, forming the training set.
Step 5: extracting PCANet features and generating the feature vector set. Through the unsupervised deep learning network PCANet, extract the feature vector Fy of each sample image in the training set; together with the label ly, these form the feature vector set. First, set the group of parameters required for training PCANet: the number of stages is 2, and the numbers of filters in the stages are L1 = 7 and L2 = 7.
Step 6: training the classifier. Train a classifier with the feature vector set. The classifier is a linear SVM, with the hyper-parameter error penalty factor set to C = 1.
Step 7: reclassifying the pixels. For each pixel of the undetermined class wud obtained in step 2 (pixel pre-classification and labeling), generate an image by the method shown in FIG. 5; then extract its features with PCANet; input the obtained feature vector into the trained linear SVM for classification; according to the SVM output, assign the pixel to the changed class wud_c or the unchanged class wud_uc. Finally, using formulas (11) and (12), combine these with the wc and wuc pixels obtained in step 2, classifying all pixels of the image as changed or unchanged.
The sixth step: generating the change image Io(i,j), of size 512×512. From the pixel sets obtained by classification, generate the binary change image Io(i,j) using formula (13).
The seventh step: selecting the optimal change image. Adjust the numbers of filters L1 and L2 in the two PCA filter convolution stages (taking 5, 6, and 8), the image block size parameter k (taking 3, 5, 9, and 11), and the error penalty factor C of the linear SVM (taking 2, 3, 4, 5, 6, 7, 8, 9, and 10); using formulas (14), (15), and (16), compute the accuracy ACC, missed detection rate PMD, and false alarm rate PFA of each change image obtained in the sixth step; and, prioritizing ACC, then PMD, then PFA, select the relatively optimal change image and its corresponding parameters: the numbers of filters in the two PCA filter convolution stages L1 = L2 = 8, the image block size parameter k = 9, and the error penalty factor C = 1 for the linear SVM.
Claims (1)
1. A building change detection method based on deep learning is characterized by comprising the following steps:
the first step is as follows: inputting two images to be detected, I1(i,j) and I2(i,j), whose single-channel bit depth is B;
the second step is as follows: segmenting the building areas; using the trained deep learning image segmentation network U-Net, performing building area segmentation on I1(i,j) and I2(i,j) respectively to obtain the binary segmentation images Is1(i,j) and Is2(i,j);
The U-Net training method comprises the following specific steps:
step 1: inputting the training set, wherein NR represents the total number of samples in the training set; setting a group of hyper-parameters required by network training, including the learning rate α, taken as 10^-3 to 10^-6, and the number of training epochs, taken as 10 to 20;
step 2: starting to train the U-Net network;
step 3: adjusting the learning rate α, multiplying by 10^-1 each time; adjusting the number of training epochs, adding 1 each time; and selecting the network parameters with the best segmentation effect, namely the highest accuracy, from the results obtained by training with the plurality of groups of different hyper-parameters;
and 4, step 4: outputting the trained U-Net network;
the third step: merging the building segmentation maps; merging the binary segmentation images Is1(i,j) and Is2(i,j) with the following formula to generate the building-area merged binary image IM(i,j);
The fourth step: removing noise and ground objects; merging binary images I using building regionsM(I, j) for two images I to be detected1(i,j)、I2(i, j) removing noise and feature to generate a noise-free image
The fifth step: building change detection; using unsupervised network PCANet pair Ip1(i,j)、Ip2(i, j) performing change detection; the change detection algorithm based on the PCANet comprises the following specific implementation steps:
step 1: applying logarithmic-ratio processing to Ip1(i,j) and Ip2(i,j) to generate the roughly estimated change image ID(i,j):
wherein "|·|" represents the absolute value operation;
step 2: pixel pre-classification and labeling; extracting Gabor features from each pixel of the image ID(i,j) and clustering the pixels of ID(i,j) with the fuzzy C-means clustering algorithm to obtain three labeled classes, namely: (1) changed, wc: the subscript c indicates changed, and pixels (i,j) in the class are labeled lij = +1; (2) unchanged, wuc: the subscript uc indicates unchanged, and pixels (i,j) in the class are labeled lij = -1; (3) undetermined, wud: the subscript ud indicates undetermined, and pixels (i,j) in the class are labeled lij = 0; the numbers of pixels in the three classes are Nc, Nuc, and Nud, and Nc + Nuc + Nud = m×n;
step 3: calculating the number of training sample pixels; from the m×n pixels obtained in step 2, pixel pre-classification and labeling, randomly selecting training sample pixels, screening positive sample pixels and negative sample pixels in a certain proportion, wherein the label of a training sample pixel (i,j) is lij ∈ {-1, +1}; the total number of training sample pixels, Nsum, is calculated with the following formula:
Nsum = m × n × r (7)
wherein r represents the proportion of randomly selected sample pixels; the number of positive sample pixels, Npos, is calculated with the following formula:
the number of negative sample pixels, Nneg, is then:
Nneg = Nsum − Npos (9)
step 4: generating the training set; in each of the two input images Ip1(i,j) and Ip2(i,j), selecting an image block of size k×k centered on the sample pixel (i,j), k being an odd number in 3 to 11; then connecting the two image blocks vertically to form a sample image Py of size 2k×k, namely:
the label ly of the sample image Py is the label lij of the sample pixel (i,j), namely ly ∈ {-1, +1}; Nsum sample images are generated in total, of which Npos are positive samples and Nneg are negative samples, forming the training set;
step 5: extracting PCANet features and generating the feature vector set; through the unsupervised deep learning network PCANet, extracting the feature vector Fy of each sample image in the training set, which together with the label ly forms the feature vector set; firstly, setting the group of parameters to be set for training PCANet, including the number of stages of PCA filter convolution, taken as 2, and the number of filters Lt in each stage, taken as 5 to 8, wherein t = 1, 2;
step 6: training the classifier; training a classifier with the feature vector set; the classifier is a linear-kernel support vector machine; the only parameter to be set for training the linear SVM is a coefficient in the classifier objective function, namely the error penalty factor, denoted C, taken as 1 to 10;
step 7: reclassifying the pixels; for each pixel of the undetermined class wud obtained in step 2, pixel pre-classification and labeling, generating an image by the method in step 4; then extracting its features with PCANet; inputting the obtained feature vector into the trained linear SVM for classification; assigning the pixel to the changed class wud_c or the unchanged class wud_uc according to the output of the linear SVM; finally, combining these with the wc and wuc pixels obtained in step 2, so that all pixels of the image are classified as changed or unchanged;
in the formula, "∪" represents set union;
the sixth step: generating the change image Io(i,j), of size m×n; from the pixel sets obtained by classification, generating a binary change image with the following formula:
the seventh step: selecting the optimal change image; fixing the number of PCA filter convolution stages at 2, and adjusting the numbers of filters L1 and L2 in the two stages, each taken as 5 to 8 with step 1; the image block size parameter k, taken as odd numbers in 3 to 11; and the error penalty factor C of the linear SVM, taken as 1 to 10 with step 1; calculating the accuracy ACC, missed detection rate PMD, and false alarm rate PFA of each change image obtained in the sixth step; selecting, with priority ACC, then PMD, then PFA, the relatively optimal change image and its corresponding parameters; the evaluation indices are calculated as follows:
wherein NTC and NFC respectively represent the number of changed pixels correctly detected and the number of pixels detected as changed but actually unchanged, and NTUC and NFUC respectively represent the number of unchanged pixels correctly detected and the number of pixels detected as unchanged but actually changed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910054336.9A CN109871875B (en) | 2019-01-21 | 2019-01-21 | Building change detection method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910054336.9A CN109871875B (en) | 2019-01-21 | 2019-01-21 | Building change detection method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109871875A CN109871875A (en) | 2019-06-11 |
CN109871875B true CN109871875B (en) | 2021-01-19 |
Family
ID=66917915
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910054336.9A Active CN109871875B (en) | 2019-01-21 | 2019-01-21 | Building change detection method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109871875B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263705B (en) * | 2019-06-19 | 2023-07-07 | 上海交通大学 | Two-stage high-resolution remote sensing image change detection system oriented to remote sensing technical field |
CN110969088B (en) * | 2019-11-01 | 2023-07-25 | 华东师范大学 | Remote sensing image change detection method based on significance detection and deep twin neural network |
CN112508853B (en) * | 2020-11-13 | 2022-03-25 | 电子科技大学 | Infrared thermal image defect detection and quantification method for extracting space-time characteristics |
CN112801929A (en) * | 2021-04-09 | 2021-05-14 | 宝略科技(浙江)有限公司 | Local background semantic information enhancement method for building change detection |
CN113362286B (en) * | 2021-05-24 | 2022-02-01 | 江苏星月测绘科技股份有限公司 | Natural resource element change detection method based on deep learning |
CN116051519B (en) * | 2023-02-02 | 2023-08-22 | 广东国地规划科技股份有限公司 | Method, device, equipment and storage medium for detecting double-time-phase image building change |
CN116452983B (en) * | 2023-06-12 | 2023-10-10 | 合肥工业大学 | Quick discovering method for land landform change based on unmanned aerial vehicle aerial image |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04311187A (en) * | 1991-04-10 | 1992-11-02 | Toshiba Corp | Method for extracting change area of image to be monitored |
CN102855487A (en) * | 2012-08-27 | 2013-01-02 | 南京大学 | Method for automatically extracting newly added construction land change image spot of high-resolution remote sensing image |
CN105069811A (en) * | 2015-09-08 | 2015-11-18 | 中国人民解放军重庆通信学院 | Multi-temporal remote sensing image change detection method |
CN105809673A (en) * | 2016-03-03 | 2016-07-27 | 上海大学 | SURF (Speeded-Up Robust Features) algorithm and maximal similarity region merging based video foreground segmentation method |
CN108765465A (en) * | 2018-05-31 | 2018-11-06 | 西安电子科技大学 | A kind of unsupervised SAR image change detection |
Non-Patent Citations (3)
Title |
---|
Automatic Change Detection in Synthetic Aperture Radar Images Based on PCANet; Feng Gao et al.; IEEE Geoscience and Remote Sensing Letters; Dec. 2016; Vol. 13, No. 12; full text * |
BUILDING CHANGE DETECTION BY COMBINING LiDAR DATA AND ORTHO IMAGE; Daifeng Peng et al.; The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Jul. 2016; Vol. XLI-B3; full text * |
Remote sensing image change detection method based on DBN and object fusion; Dou Fangzheng et al.; Computer Engineering; Apr. 2018; Vol. 44, No. 4; full text * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109871875B (en) | Building change detection method based on deep learning | |
Yin et al. | Hot region selection based on selective search and modified fuzzy C-means in remote sensing images | |
Wang et al. | Optimal segmentation of high-resolution remote sensing image by combining superpixels with the minimum spanning tree | |
CN107133569B (en) | Monitoring video multi-granularity labeling method based on generalized multi-label learning | |
Tao et al. | Scene context-driven vehicle detection in high-resolution aerial images | |
Zhan et al. | Unsupervised scale-driven change detection with deep spatial–spectral features for VHR images | |
CN109657610A (en) | A kind of land use change survey detection method of high-resolution multi-source Remote Sensing Images | |
CN109784392A (en) | A kind of high spectrum image semisupervised classification method based on comprehensive confidence | |
Peng et al. | Object-based change detection from satellite imagery by segmentation optimization and multi-features fusion | |
CN110738247A (en) | fine-grained image classification method based on selective sparse sampling | |
CN108427919B (en) | Unsupervised oil tank target detection method based on shape-guided saliency model | |
CN113988147B (en) | Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device | |
Fang et al. | Unsupervised Bayesian classification of a hyperspectral image based on the spectral mixture model and Markov random field | |
CN109635726B (en) | Landslide identification method based on combination of symmetric deep network and multi-scale pooling | |
CN111639697B (en) | Hyperspectral image classification method based on non-repeated sampling and prototype network | |
CN112990282A (en) | Method and device for classifying fine-grained small sample images | |
CN115456957B (en) | Method for detecting change of remote sensing image by full-scale feature aggregation | |
CN115829996A (en) | Unsupervised synthetic aperture radar image change detection method based on depth feature map | |
CN115830322A (en) | Building semantic segmentation label expansion method based on weak supervision network | |
Abujayyab et al. | Integrating object-based and pixel-based segmentation for building footprint extraction from satellite images | |
Jauhari et al. | Grouping Madura Tourism Objects with Comparison of Clustering Methods | |
Kanthi et al. | A 3D-Inception CNN for Hyperspectral Image Classification | |
Ankayarkanni et al. | Object based segmentation techniques for classification of satellite image | |
Jia et al. | Identifying dynamic changes with noisy labels in spatial-temporal data: A study on large-scale water monitoring application | |
Karmuhil et al. | An automatic road network extraction from satellite images using modified SOFM approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||