CN111553272B - High-resolution satellite optical remote sensing image building change detection method based on deep learning - Google Patents

High-resolution satellite optical remote sensing image building change detection method based on deep learning Download PDF

Info

Publication number
CN111553272B
CN111553272B (application CN202010347858.0A)
Authority
CN
China
Prior art keywords
network
building
result
classification
change detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010347858.0A
Other languages
Chinese (zh)
Other versions
CN111553272A (en)
Inventor
岳照溪
潘琛
郭功举
刘一宁
毛炜青
冯威丁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI SURVEYING AND MAPPING INSTITUTE
Original Assignee
SHANGHAI SURVEYING AND MAPPING INSTITUTE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI SURVEYING AND MAPPING INSTITUTE filed Critical SHANGHAI SURVEYING AND MAPPING INSTITUTE
Priority to CN202010347858.0A priority Critical patent/CN111553272B/en
Publication of CN111553272A publication Critical patent/CN111553272A/en
Application granted granted Critical
Publication of CN111553272B publication Critical patent/CN111553272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The deep-learning-based method for detecting building changes in high-resolution satellite optical remote sensing images comprises the following steps: step 1, delineating building vector ranges on multi-period remote sensing satellite images of the same region at the same resolution; step 2, delineating vector ranges of building change areas from the earlier and later remote sensing images; step 3, making samples; step 4, expanding the samples; step 5, building a building classification network 1; step 6, calling the parameters of classification network 1, building a change detection network 2 on that basis, and detecting changes between the two-period images; steps 7 and 8, training the classification network parameters and the remaining parameters; step 9, performing building classification detection and change detection with the trained network parameters, then applying morphological processing to the detection results and vectorizing the training results; and step 10, comparing and evaluating the change detection results, the building classification results and the manual calibration results, optimizing and supplementing the samples, and retraining. The technical scheme effectively detects changes and obtains the newly added and demolished change areas of buildings.

Description

High-resolution satellite optical remote sensing image building change detection method based on deep learning
Technical Field
The invention belongs to the technical field of computer vision and remote sensing, relates to a deep learning change detection method, and particularly relates to a high-resolution satellite optical remote sensing image building change detection method based on deep learning.
Background
As is well known in the art, the term "building change" refers to a building being newly constructed or demolished, which can be observed from satellite images taken at different times. "Building change detection" therefore means detecting such changes (additions and demolitions) from the satellite images commonly used in urban construction and management, in order to supervise urban land use.
Buildings, as the principal components of a city, have long been a key object of remote sensing image interpretation and are of great significance to urban planning and management; building change detection is important for data updating, monitoring of urban land-use implementation, detection of illegal construction, and similar tasks.
Deep learning has a wide range of applications and has already produced notable research results in remote sensing image segmentation and target recognition. As the mainstream deep learning framework for visual processing, the CNN is widely applied to image classification, and a series of general CNN architectures such as AlexNet, VGGNet, GoogLeNet and ResNet have gradually been developed on its basis.
High-resolution urban remote sensing images contain rich ground-object information of high complexity. The main automatic classification methods at present include Markov random fields, conditional random fields, SVMs and decision trees, but because these methods depend on manual selection of image features, they are difficult to generalize to large-area image classification.
A deep learning network effectively solves the feature selection problem: high-level semantic features are learned automatically by the deep network model, yielding more accurate semantic segmentation results.
Disclosure of Invention
The invention mainly addresses the lack of high-resolution satellite image change samples and the low degree of automation in existing change detection technology, and provides a deep-learning-based building change detection method for high-resolution satellite optical remote sensing images that can effectively detect changes and obtain the newly added and demolished change areas of buildings.
The technical scheme adopted by the invention is as follows:
a high-resolution satellite optical remote sensing image building change detection method based on deep learning comprises the following steps:
step 1: carrying out vector range sketching on the high-resolution remote sensing satellite images in the same region with the same resolution in multiple periods;
step 2: carrying out vector range delineation on of a building change area according to the front and back high-resolution remote sensing images;
and step 3: constructing a sample by using the vector range outlined in the step 1 and the step 2, respectively manufacturing a building classification sample and a change detection sample, and cutting all samples into uniform sizes;
and 4, step 4: rotating, mirror image turning, color and brightness adjustment and the like are carried out on the sample cut in the step 3 to expand the sample amount, the color and brightness adjustment is to convert the image from the RGB color space to the HSI color space, and the adjusted image is converted back to the RGB color space by adjusting H, S, I three components, so that the richness of the sample is realized;
and 5: building a deep learning building classification network 1 on the basis of ResNet 50;
step 6: calling parameters in the building classification network 1 in the step 5, and building a change detection network 2 on the basis of the parameters to realize change detection of the two-stage images;
step 6.1: the network is composed of three parts, including two classification networks 1 and a change detection network 2, wherein part of network parameters of the classification network 1 are obtained by training in step 5, and the parameters of the trained building classification network 1 are used for respectively carrying out feature extraction and deconvolution up-sampling on the images 1 and 2 in the front and back two stages, and because each layer effectively contains the features of the images under different scales in the deconvolution process, five hidden layers and a prediction layer are used as the input (feature map) of the change detection network 2;
step 6.2: the difference graph 1i is an absolute value of a difference value between the feature graph 1-i and the feature graph 2-i, the matrix is 50 × 2048, the difference graph 1(i +1) is obtained by deconvolution after normalization, the matrix is 50 × 256, then the difference value and the absolute value are also obtained by the feature graph 1- (i +1) and the feature graph 2- (i +1), the matrix difference graph 2i of 50 × 256 is obtained by convolution processing of 3 × 3, the matrix difference graph 2i of 50 × 512 is obtained by combination of the difference graph 1(i +1), and a final change detection result prediction result of 400 × 1 is obtained by analogy (i here takes a natural number from 1 to 5 and represents a feature graph obtained by the ith network layer), and the result and a change calibration sample are calculated to be softpymetros:
first, softmax calculation is performed:
S_j = e^{a_j} / Σ_{k=1}^{n} e^{a_k}    (1)
In formula (1), S_j is the output of softmax at the jth class and a_j is the value computed by the network for the jth class; k denotes the current class and n the total number of classes (0 < k ≤ n); the denominator Σ_{k=1}^{n} e^{a_k} sums the exponentials e^{a_k} over all classes, so that the outputs taken together describe the overall classification result.
S_j is then used as the input of formula (2), the cross-entropy calculation:
H_{y′}(y) = −Σ_i y′_i log(y_i)    (2)
In formula (2), y′_i is the ith value in the delineated sample and y_i is the softmax output of formula (1), namely S_j; the classification result is evaluated using formula (2);
and 7: putting the building sample into a classification network 1, and training parameters of the classification network;
and 8: freezing parameters of the classification network 1 part, putting a change sample into the change detection network 2, calling the parameters of the classification network 1, and training the rest parameters according to the structure of the change detection network 2;
and step 9: carrying out building classification detection and change detection in a certain area range by using the parameters of the trained network 1 and network 2, then carrying out morphological processing on the detection result, and carrying out vectorization on the training result;
step 9.1: putting the images in the full range of the detected area into a trained network, and because the images of 400 × 3 are used in the network training, the images in the full range of the detected area need to be tested in a block mode in the test and combined together;
step 9.2: performing opening operation and closing operation of morphological processing on the detected result, wherein the opening operation is to corrode and expand the result first and remove isolated dots, burrs and the like; the closed operation is to perform expansion-first and corrosion-second operation on the result to fill and level small holes, so that the result edge is more complete;
step 9.3: vectorizing the result processed in the step 9.2 to obtain a building vector boundary and a change detection vector boundary;
step 10: and comparing the change detection result, the building classification result and the manual calibration result, evaluating the result, optimizing the supplementary sample, and performing further training to form a benign cycle.
This technical scheme effectively detects changes, thereby obtaining the newly added and demolished change areas of buildings.
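The computations of formulas (1) and (2) can be sketched in plain Python. This is an illustrative single-pixel implementation, not code from the patent; the function names are ours:

```python
import math

def softmax(scores):
    """Formula (1): turn raw per-class network outputs a_j into
    probabilities S_j = e^{a_j} / sum_k e^{a_k}."""
    # Subtracting the max improves numerical stability without
    # changing the result.
    m = max(scores)
    exps = [math.exp(a - m) for a in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Formula (2): H_{y'}(y) = -sum_i y'_i * log(y_i), where y_true is the
    one-hot delineated sample and y_pred is the softmax output."""
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))

# Two classes (changed / unchanged) at one pixel:
probs = softmax([2.0, 0.5])              # network scores for the two classes
loss = cross_entropy([1.0, 0.0], probs)  # ground truth says "changed"
```

In the network these operations are applied per pixel over the 400 × 400 × 1 prediction and the loss is averaged over all pixels.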
Drawings
FIG. 1 is a general flow chart of example 1 of the present invention;
FIG. 2 is a schematic diagram of the design of a building classification network according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of a change detection network according to embodiment 1 of the present invention;
FIG. 4 is a schematic view of a building sample in accordance with example 1 of the present invention;
fig. 5 is a schematic diagram of a change detection sample in embodiment 1 of the present invention.
FIG. 6 is a schematic diagram of a change detection result in embodiment 1 of the present invention;
FIG. 7 is a schematic diagram of a building classification result in embodiment 1 of the present invention;
FIG. 8 is a comparison with conventional change detection results in embodiment 1 of the present invention.
Detailed Description
To facilitate understanding and implementation of the present invention by those of ordinary skill in the art, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive.
Example 1
Referring to fig. 1, the invention provides a building change detection method based on deep learning high-resolution satellite optical remote sensing images, which comprises the following steps:
step 1: carrying out vector range sketching on the high-resolution remote sensing satellite images in the same region with the same resolution in multiple periods;
step 2: carrying out vector range delineation on of a building change area according to the front and back high-resolution remote sensing images;
and step 3: carrying out sample construction by using the vector range outlined in the step 1 and the step 2, respectively manufacturing a building classification sample and a change detection sample, and cutting all samples into uniform sizes;
and 4, step 4: rotating, mirror image turning, color and brightness adjustment and the like are carried out on the sample cut in the step 3 to expand the sample amount, the color and brightness adjustment is to convert the image from the RGB color space to the HSI color space, and the adjusted image is converted back to the RGB color space by adjusting H, S, I three components, so that the richness of the sample is realized;
step 4.1: respectively rotating each image sample by 90 degrees, 180 degrees and 270 degrees, and then carrying out mirror image turning on the image samples in the transverse direction and the longitudinal direction;
step 4.2: the image is converted into an HIS color space, then H, S, I three components are adjusted respectively, the brightness is adjusted to be higher and lower by 10 percent, the saturation is adjusted to be higher and lower by 10 percent, and the hue is shifted to red, green and blue respectively, so that the richness of the sample is realized.
Step 5: improving the network on the basis of ResNet50 to construct deep learning building classification network 1;
Step 5.1: as shown in fig. 2, the input of the network is a 400 × 400 × 3 three-band RGB image and the output is a 400 × 400 × 1 single-band classification prediction; in the feature extraction stage the structure is basically consistent with ResNet50, and residual learning modules ensure effective feature extraction; this stage consists of a 7 × 7 convolutional layer with stride 2, 3 × 3 max pooling, and convolutional layers at 5 different depths;
Step 5.2: then, drawing on the structure of U-Net, deconvolution is performed and each deconvolution layer is fused with the corresponding layer of the down-sampling process; drawing on the network structure design of FPN, the part indicated by the thin dotted arrows in fig. 2 is added, i.e. the 5 matrices in the up-sampling process are directly resampled to the original image size of 400 × 400 × 1 and added to the final prediction result for the loss calculation.
Step 6: calling the parameters of building classification network 1 from step 5 and building change detection network 2 on that basis, to realize change detection between the two-period images;
step 6.1: as shown in fig. 3, the network is composed of three parts, including two classification networks 1 and a change detection network 2, wherein part of the network parameters of the classification network 1 are obtained by training in step 5, and the parameters of the trained building classification network 1 are used for feature extraction and deconvolution up-sampling of the images of the front and back two stages, image1 and image2, respectively, and because each layer effectively contains the features of images at different scales in the deconvolution process, five hidden layers and prediction layers are used as the input (feature map) of the change detection network 2;
step 6.2: as shown in fig. 3, taking the characteristic diagram 1-i as an example, it is actually a matrix of 50 x 2048 connected to its dotted line, then the feature map 1-i is subtracted from the feature map 2-i and the absolute value is taken, resulting in a difference map 1i, then, through deconvolution, a difference graph 1(i +1) of 50 × 256 is obtained, and a matrix of 50 × 512 is obtained by subtracting the feature graph 1- (i +1) from the feature graph 2- (i +1), the difference map 2i of 50 x 256 is obtained by convolution, the difference map 2i is directly merged with the difference map 1(i +1), a matrix is obtained, the matrix is deconvoluted to obtain a difference graph 2(i +1), a final prediction result of the change detection result of 400 × 1 is obtained by analogy, and the result and the change calibration sample are used for calculating the softmax cross entry value:
first, softmax calculation is performed:
S_j = e^{a_j} / Σ_{k=1}^{n} e^{a_k}    (1)
In formula (1), S_j is the output of softmax at the jth class and a_j is the value computed by the network for the jth class; k denotes the current class and n the total number of classes (0 < k ≤ n); the denominator Σ_{k=1}^{n} e^{a_k} sums the exponentials e^{a_k} over all classes, so that the outputs taken together describe the overall classification result.
S_j is then used as the input of formula (2), the cross-entropy calculation:
H_{y′}(y) = −Σ_i y′_i log(y_i)    (2)
In formula (2), y′_i is the ith value in the delineated sample and y_i is the softmax output of formula (1), namely S_j; the classification result can thus be evaluated using formula (2).
Step 7: putting the building samples into classification network 1 and training the classification network parameters;
Step 8: freezing the parameters of the classification network 1 part, putting the change samples into change detection network 2, calling the parameters of classification network 1, and training the remaining parameters according to the structure of change detection network 2;
Step 9: performing building classification detection and change detection over a certain area (for example, the whole range of a city) with the trained parameters of network 1 and network 2, then applying morphological processing to the detection results and vectorizing the training results;
step 9.1: putting the images of the whole market into a trained network, wherein the images of 400 x 3 are used in the network training, the images of the whole market are required to be tested in blocks in the test and are combined together, in order to solve the problem of joint connection at a joint, the adjacent blocks adopt 20-pixel-width connection transition, two classification tests are carried out within the 20-pixel width, and the classification result of each pixel is one of the two detections with higher probability so as to solve the joint problem;
step 9.2: performing opening operation and closing operation of morphological processing on the detected result, wherein the opening operation is to corrode and expand the result, so that isolated dots, burrs and the like can be removed; the closed operation is an operation of expanding the result and then corroding the result, so that small holes can be filled and the result edge is more complete;
step 9.3: vectorizing the result processed in the step 9.2 to obtain a building vector boundary and a change detection vector boundary;
step 10: and comparing the change detection result, the building classification result and the manual calibration result, evaluating the result, optimizing the supplementary sample, and performing further training to form a benign cycle.
Step 10.1: comparing the difference between the building automatic classification result and the manual calibration result, calibrating the error classification part as a building negative sample, adding the undetected part into the building classification sample as a supplement, and performing parameter optimization of the classification network 1;
step 10.2: the structure of the original classification network 1 is not changed, and the optimized samples are used for training and optimizing network parameters;
step 10.3: comparing the difference between the change detection result and the manual calibration result, calibrating the false detection part into a changed negative sample, adding the change missed detection part into the change detection sample as a supplement, and optimizing the parameters of the change detection network 2;
step 10.4: and (3) calling the optimized classification network 1 parameters when the structure of the original change detection network 2 is unchanged, and optimizing the network parameters of the change detection part on the basis.
Comparison of algorithm change detection results
The network of fig. 2, i.e. network 1, is the building classification network module, which extracts building features through a multi-layer network structure. The network of fig. 3 comprises two copies of network 1 and one network 2, where network 2 is the change detection network module; the network of fig. 3 can therefore feed the building features directly into the change detection network module.
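The feature-difference fusion that change detection network 2 performs in step 6.2 — taking |F1 − F2| at each scale, upsampling, and concatenating with the next scale's difference — can be sketched in numpy. The shapes here are small illustrative stand-ins for the patent's 50 × 50 × 2048 feature maps, nearest-neighbour upsampling stands in for learned deconvolution, and all names are ours:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling, standing in for deconvolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fuse_differences(feats1, feats2):
    """feats1/feats2: lists of per-scale feature maps (coarse to fine)
    from the two periods.  Take |F1 - F2| at each scale, upsample the
    running fusion, and concatenate it with the next scale's difference."""
    fused = np.abs(feats1[0] - feats2[0])          # coarsest difference map
    for f1, f2 in zip(feats1[1:], feats2[1:]):
        diff = np.abs(f1 - f2)
        fused = np.concatenate([upsample2x(fused), diff], axis=-1)
    return fused

rng = np.random.default_rng(1)
# Two scales per period: 4x4x8 (coarse) and 8x8x4 (fine).
p1 = [rng.standard_normal((4, 4, 8)), rng.standard_normal((8, 8, 4))]
p2 = [rng.standard_normal((4, 4, 8)), rng.standard_normal((8, 8, 4))]
out = fuse_differences(p1, p2)   # shape (8, 8, 12)
```

Because the differencing is done on learned building features rather than raw pixels, non-building changes contribute little to the fused maps — the property the comparison below illustrates.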
Traditional change detection network methods usually take the change information as samples and input it directly into a deep learning network for learning. However, owing to the complexity of remote sensing ground objects, when a traditional change detection network learns building change samples directly, the resulting model often produces many false detections, and the detected change areas are sometimes not even of the building type. Traditional change detection networks either put the change information directly into an existing U-Net, FCN, ResNet or the like, or simply combine the two images and feed them into the network.
The detection results are compared in FIG. 8:
the method of the present application performs detection with the network structures and algorithms shown in fig. 1, fig. 2 and fig. 3;
the traditional method puts the six bands of the two-phase images (each phase image has the three bands red, green and blue, so the two phases give six bands) directly into ResNet50 for detection. It can be seen that the traditional method misses detections compared with the algorithm of the present application.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (1)

1. A high-resolution satellite optical remote sensing image building change detection method based on deep learning is characterized by comprising the following steps:
step 1: carrying out vector range sketching on the high-resolution remote sensing satellite images in the same region with the same resolution in multiple periods;
and 2, step: carrying out vector range delineation on of a building change area according to the front and back high-resolution remote sensing images;
and step 3: carrying out sample construction by using the vector range outlined in the step 1 and the step 2, respectively manufacturing a building classification sample and a change detection sample, and cutting all samples into uniform sizes;
and 4, step 4: rotating, mirror image turning, color and brightness adjustment and the like are carried out on the sample cut in the step 3 to expand the sample amount, the color and brightness adjustment is to convert the image from the RGB color space to the HSI color space, and the adjusted image is converted back to the RGB color space by adjusting H, S, I three components, so that the richness of the sample is realized;
and 5: building a deep learning building classification network 1 on the basis of ResNet 50;
step 6: calling parameters in the building classification network 1 in the step 5, and building a change detection network 2 on the basis of the parameters to realize change detection of the two-stage images;
step 6.1: the network is composed of three parts, including two classification networks 1 and a change detection network 2, wherein part of network parameters of the classification network 1 are obtained by training in step 5, and the parameters of the trained building classification network 1 are used for respectively carrying out feature extraction and deconvolution up-sampling on the images 1 and 2 in the front and back two stages, and because each layer effectively contains the features of the images under different scales in the deconvolution process, five hidden layers and a prediction layer are used as the input of the change detection network 2;
step 6.2: the difference graph 1i is an absolute value of a difference value between the feature graph 1-i and the feature graph 2-i, the matrix is 50 × 2048, the difference graph 1(i +1) is obtained by deconvolution after normalization, the matrix is 50 × 256, then the difference value and the absolute value are also obtained by the feature graph 1- (i +1) and the feature graph 2- (i +1), the matrix difference graph 2i of 50 × 256 is obtained by convolution processing of 3 × 3, the matrix difference graph 2i of 50 × 512 is obtained by combination of the difference graph 1(i +1), the final change detection result prediction result of 400 × 1 is obtained by analogy, i is a natural number from 1 to 5, the feature graph obtained by the ith network layer is represented, and the result and the change calibration sample are calculated as softgels:
first, softmax calculation is performed:
S_j = e^{a_j} / Σ_{k=1}^{n} e^{a_k}    (1)
In formula (1), S_j is the output of softmax at the jth class and a_j is the value computed by the network for the jth class; k denotes the current class and n the total number of classes, 0 < k ≤ n; the denominator Σ_{k=1}^{n} e^{a_k} sums the exponentials e^{a_k} over all classes, so that the outputs taken together describe the overall classification result.
S_j is then used as the input of formula (2), the cross-entropy calculation:
H_{y′}(y) = −Σ_i y′_i log(y_i)    (2)
In formula (2), y′_i is the ith value in the delineated sample and y_i is the softmax output of formula (1), namely S_j; the classification result is evaluated using formula (2);
and 7: putting the building sample into a classification network 1, and training parameters of the classification network;
and 8: freezing parameters of the classification network 1 part, putting a change sample into the change detection network 2, calling the parameters of the classification network 1, and training the rest parameters according to the structure of the change detection network 2;
and step 9: carrying out building classification detection and change detection in a certain area range by using the parameters of the trained network 1 and network 2, then carrying out morphological processing on the detection result, and carrying out vectorization on the training result;
step 9.1: putting the images in the full range of the detected area into a trained network, and because the images of 400 × 3 are used in the network training, the images in the full range of the detected area need to be tested in a block mode in the test and combined together;
step 9.2: performing opening operation and closing operation of morphological processing on the detected result, wherein the opening operation is to corrode and expand the result first and remove isolated dots, burrs and the like; the closed operation is to perform expansion-first and corrosion-second operation on the result to fill and level small holes, so that the result edge is more complete;
step 9.3: vectorize the result of step 9.2 to obtain the building vector boundaries and the change detection vector boundaries;
step 10: compare the change detection result and the building classification result against the manually calibrated result, evaluate the results, supplement and optimize the samples, and perform further training to form a virtuous cycle.
CN202010347858.0A 2020-04-28 2020-04-28 High-resolution satellite optical remote sensing image building change detection method based on deep learning Active CN111553272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010347858.0A CN111553272B (en) 2020-04-28 2020-04-28 High-resolution satellite optical remote sensing image building change detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN111553272A CN111553272A (en) 2020-08-18
CN111553272B true CN111553272B (en) 2022-05-06

Family

ID=72005867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010347858.0A Active CN111553272B (en) 2020-04-28 2020-04-28 High-resolution satellite optical remote sensing image building change detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN111553272B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651338B (en) * 2020-12-26 2022-02-15 广东电网有限责任公司电力科学研究院 Method and device for distinguishing hidden danger of illegal construction of power transmission line

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700411A (en) * 2015-03-15 2015-06-10 西安电子科技大学 Sparse reconstruction-based dual-time phase remote-sensing image change detecting method
CN108446588A (en) * 2018-02-05 2018-08-24 中国测绘科学研究院 A kind of double phase remote sensing image variation detection methods and system
US20180293456A1 (en) * 2015-12-18 2018-10-11 Ventana Medical Systems, Inc. Systems and methods of unmixing images with varying acquisition properties
CN109446992A (en) * 2018-10-30 2019-03-08 苏州中科天启遥感科技有限公司 Remote sensing image building extracting method and system, storage medium, electronic equipment based on deep learning
CN109961105A (en) * 2019-04-08 2019-07-02 上海市测绘院 A kind of Classification of High Resolution Satellite Images method based on multitask deep learning
CN110136170A (en) * 2019-05-13 2019-08-16 武汉大学 A kind of remote sensing image building change detecting method based on convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
《A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images》;J. Liu, et al.;《IEEE Transactions on Neural Networks and Learning Systems》;20180331;vol. 29, no. 3;full text *
《Remote Sensing Image Cloud Detection Based on an Improved U-Net Network》;Zhang Yonghong et al.;《Bulletin of Surveying and Mapping》;20200331;full text *

Similar Documents

Publication Publication Date Title
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN115331087B (en) Remote sensing image change detection method and system fusing regional semantics and pixel characteristics
CN111914611B (en) Urban green space high-resolution remote sensing monitoring method and system
CN111126202A (en) Optical remote sensing image target detection method based on void feature pyramid network
CN111860351B (en) Remote sensing image fishpond extraction method based on line-row self-attention full convolution neural network
CN111291826B (en) Pixel-by-pixel classification method of multisource remote sensing image based on correlation fusion network
CN114943963A (en) Remote sensing image cloud and cloud shadow segmentation method based on double-branch fusion network
CN107680113A (en) The image partition method of multi-layer segmentation network based on Bayesian frame edge prior
CN110956207B (en) Method for detecting full-element change of optical remote sensing image
CN111161224A (en) Casting internal defect grading evaluation system and method based on deep learning
CN113312993B (en) Remote sensing data land cover classification method based on PSPNet
CN114972191A (en) Method and device for detecting farmland change
CN114494821A (en) Remote sensing image cloud detection method based on feature multi-scale perception and self-adaptive aggregation
CN113887493B (en) Black and odorous water body remote sensing image identification method based on ID3 algorithm
CN112418049A (en) Water body change detection method based on high-resolution remote sensing image
CN115272776B (en) Hyperspectral image classification method based on double-path convolution and double attention and storage medium
CN109961105A (en) A kind of Classification of High Resolution Satellite Images method based on multitask deep learning
CN111797920A (en) Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion
CN115965862A (en) SAR ship target detection method based on mask network fusion image characteristics
CN115661677A (en) Light-weight satellite image cloud detection method based on dark channel feature guidance
CN114120036A (en) Lightweight remote sensing image cloud detection method
CN111553272B (en) High-resolution satellite optical remote sensing image building change detection method based on deep learning
CN116524189A (en) High-resolution remote sensing image semantic segmentation method based on coding and decoding indexing edge characterization
CN115330703A (en) Remote sensing image cloud and cloud shadow detection method based on context information fusion
CN114926826A (en) Scene text detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant