CN113569815A - Method for detecting remote sensing image change based on image segmentation and twin neural network - Google Patents


Info

Publication number
CN113569815A
Authority
CN
China
Prior art keywords
image
change
remote sensing
convolution
label
Prior art date
Legal status
Granted
Application number
CN202111106196.9A
Other languages
Chinese (zh)
Other versions
CN113569815B
Inventor
萧毅鸿
朱必亮
赵亮
陈磊
Current Assignee
Shenzhen Tianshu Intelligent Co ltd
Original Assignee
Speed Space Time Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Speed Space Time Information Technology Co Ltd
Priority to CN202111106196.9A
Publication of CN113569815A
Application granted
Publication of CN113569815B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks


Abstract

The invention discloses a method for detecting remote sensing image changes based on image segmentation and a twin neural network, which comprises the following steps. S1: construct a training sample library from the collected images to be processed, land-cover vectors, and raster files, the library containing multi-temporal image data of the same area and label data of ground-feature changes. S2: train the change detection network with the multi-temporal image data and ground-feature change label data from the sample library constructed in step S1, learning the change characteristics of different ground features in high-resolution remote sensing images. S3: post-process the extracted change detection result, remove noise and speckle in the changed areas, and regularize building outlines to obtain the final change detection result. By constructing a new remote sensing image change detection model based on image segmentation and a twin neural network, the method detects changes in high-resolution remote sensing images with high accuracy and can be applied to change detection in complex scenes.

Description

Method for detecting remote sensing image change based on image segmentation and twin neural network
Technical Field
The invention relates to the field of remote sensing change detection, and in particular to a deep-learning-based method for detecting changes in high-resolution remote sensing images, which can be used for change detection on bi-temporal high-resolution remote sensing images from satellites and unmanned aerial vehicles.
Background
With the improvement in the spatial resolution and revisit frequency of satellite images, rapidly and accurately discovering surface change information with change detection technology has become a research hotspot in the remote sensing field. Remote sensing change detection uses multi-source remote sensing images and related geospatial data of the same surface area acquired in different periods, combined with the corresponding ground-feature characteristics and the remote sensing imaging mechanism, to determine and analyze changes in the position and extent of ground features as well as changes in their properties and state, using image and graphics processing theory and mathematical models. It separates out the change information of interest and filters out irrelevant changes that act as interference, and is widely applied in fields such as vegetation change monitoring, urban expansion, and illegal building detection.
Early methods mainly targeted medium- and low-resolution remote sensing images, taking the pixel as the basic unit of analysis and extracting change information by comparing spectral differences pixel by pixel. Such methods rely on the characteristics of individual pixels and tend to ignore the spatial and spectral context of each pixel, leading to "salt-and-pepper" noise and incomplete delineation of the changed areas. With the commercialization of high-resolution remote sensing images, object-oriented image analysis techniques were introduced into high-resolution image analysis, and the basic unit of change detection shifted from pixels to objects. Using the object as the most basic analysis unit integrates the spectral information of a pixel with the spatial information of its neighborhood, reducing the false alarm and missed detection rates in the difference map. However, because of the characteristics of high-resolution remote sensing images, low-level features such as texture and shape take very complex forms, and traditional change detection techniques usually rely on manual feature extraction guided by expert priors, which cannot extract effective features containing deep-level change information. Manual features often carry considerable redundant information and noise, which greatly affects change detection accuracy. In practice, the performance of a detection algorithm depends to a large extent on the setting of its hyper-parameters. Although traditional detection algorithms improve the efficiency of parameter search with strategies such as grid search and random search, they still waste a great deal of manpower and computing resources. The main limitation of manually setting algorithm parameters is that conventional optimization strategies struggle to reach a global optimum in the parameter space, which manifests as poor generalization performance.
Thanks to its strong image feature extraction capability, deep-learning-based change detection has become a hotspot of remote sensing image change detection research. Unlike traditional change detection algorithms, mainstream deep learning change detection methods at the present stage no longer take a single pixel or object as the basic unit of analysis. Instead, they adopt an image-comparison approach, treating change detection as a semantic segmentation task and directly converting the input into a change map through a fully convolutional neural network. This end-to-end detection simplifies the complexity of change detection, effectively improves the accuracy of the detection result, offers a clear advantage in detection speed, and facilitates rapid processing of large volumes of data.
To address problems in urban land resource change detection such as tedious work, heavy workload, and a low degree of automation, Wang et al. proposed FPN Res-UNet, a multi-scale network based on a residual structure and a feature pyramid network; fusing the residual structure and feature pyramid network into a UNet model enhances the model's detection performance for targets of different scales. Yuan et al. proposed a change detection algorithm that fuses UNet++ with an attention mechanism, combined with a multi-output fusion strategy for remote sensing image change detection, so that the detection result better preserves the smoothness and integrity of edges. To reduce pseudo-changes in the detection result, Ning et al. proposed a remote sensing image change detection method based on a twin residual neural network: superpixels of the multi-temporal multispectral images are segmented and merged, features are extracted from the segmented sub-blocks, a twin residual neural network then performs binary classification to obtain the similarity, and the final change detection difference map is obtained after OTSU threshold segmentation.
The above studies illustrate the three ways in which current deep learning detection methods combine the two temporal-phase images: (1) early fusion: image data from different time phases are stacked and fed into the network together; (2) twin (Siamese) neural network: the bi-temporal images are fed in turn into one feature extractor, and the output feature map pairs are then combined; (3) pseudo-twin network: the two temporal-phase images are fed into two different feature extractors. In early fusion, because the stacked images of different time phases enter the network together, difference detection starts from the first layer of the network, so features belonging to different time phases interfere with one another and the high-dimensional features of the original images are hard to preserve. The twin neural network approach receives the image data of the different time phases through two inputs and links the feature extraction and difference recognition functions of the original images within the same multi-layer network; although this preserves the high-dimensional features of the images, it greatly increases the risk of vanishing gradients, and the original image features extracted at the front end of the network are poorly representative.
Disclosure of Invention
The invention provides a method for detecting remote sensing image changes based on image segmentation and a twin neural network. It detects changes in high-resolution remote sensing images by constructing a new change detection model based on image segmentation and a twin neural network. The detection model is divided into an encoder and a decoder: a deep convolution module in the encoder extracts the high-dimensional features of each image, which are then fed into a multi-level fused twin neural network to generate a multi-scale contrastive feature difference map; the decoder is responsible for identifying the changed areas from the difference map, and finally a detection result of the same size as the original image is obtained by bilinear interpolation.
In order to solve the above technical problems, the invention adopts the following technical scheme. The method for detecting remote sensing image changes based on image segmentation and a twin neural network specifically comprises the following steps:
S1: construct a training sample library from the collected images to be processed, land-cover vectors and raster files, the training sample library containing multi-temporal image data of the same area and label data of ground-feature changes;
S2: train the change detection network Siam-Deep with the multi-temporal image data and ground-feature change label data from the training sample library constructed in step S1, learning the change characteristics of different ground features in high-resolution remote sensing images;
S3: post-process the extracted change detection result, remove noise and speckle in the changed areas, and regularize building outlines to obtain the final change detection result.
By adopting this technical scheme, a new remote sensing image change detection model, Siam-Deep, based on image segmentation and a twin neural network is constructed for change detection on high-resolution remote sensing images; training data and labels are required to train the model. The detection model is divided into an encoder and a decoder: a deep convolution module in the encoder extracts the high-dimensional features of each image, which are then fed into the multi-level fused twin neural network to generate a multi-scale contrastive feature difference map; the decoder is responsible for identifying the changed areas from the difference map, and finally a detection result of the same size as the original image is obtained by bilinear interpolation.
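For concreteness, the following is a minimal PyTorch-style sketch of the encoder/decoder flow just described: a weight-shared feature extractor applied to both temporal images, a simple contrastive difference feature, and bilinear interpolation back to the input size. The class name, channel widths and the absolute-difference fusion are illustrative assumptions introduced here, not details taken from the patent.

```python
# A minimal sketch of the overall flow, assuming a shared (twin) backbone and an
# absolute-difference fusion; all names and widths are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySiamChangeNet(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64, num_classes=2):
        super().__init__()
        # Shared (weight-tied) feature extractor applied to both temporal images.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(feat_ch, feat_ch, 1)          # fuse the difference features
        self.classifier = nn.Conv2d(feat_ch, num_classes, 3, padding=1)

    def forward(self, img_t1, img_t2):
        f1 = self.backbone(img_t1)            # features of the earlier image
        f2 = self.backbone(img_t2)            # features of the later image (shared weights)
        diff = self.fuse(torch.abs(f1 - f2))  # simple contrastive difference feature
        logits = self.classifier(diff)
        # Bilinear interpolation back to the original image size, as described above.
        return F.interpolate(logits, size=img_t1.shape[-2:], mode="bilinear", align_corners=False)

if __name__ == "__main__":
    net = TinySiamChangeNet()
    a, b = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
    print(net(a, b).shape)  # torch.Size([1, 2, 256, 256])
```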
As a preferred technical solution of the present invention, the training sample library in step S1 further includes real label data based on manual annotation and label data obtained by image difference analysis based on the results of a universal segmentation model.
As a preferred technical solution of the present invention, in step S1 the specific steps of constructing the training sample library are:
S11 registration and spatio-temporal matching of change-area images: spatio-temporally match the collected high-resolution image data according to the area covered by the existing change vectors and the raster information data, i.e. match data of the same longitude-latitude area from different periods. If an image to be processed is a framed (tiled) image, cut and splice the tiles to obtain a complete image.
S12 image resampling: compute statistics on the resolutions of the high-resolution images cropped during the spatio-temporal matching of step S11, and resample the other images using the most common resolution as the reference;
S13 rasterization of vector change labels: rasterize the collected vector change files and convert them into raster labels with the same resolution as the corresponding images, where label pixels take two values, changed area and unchanged area;
S14 model training sample preparation: repeatedly crop 256 × 256 label blocks at random positions of the raster change labels, count the number of changed pixels in each label block, and keep the label blocks whose ratio of changed pixels to total pixels is greater than 0.5; according to the positions of the kept label blocks, crop the corresponding image blocks from the matching high-resolution images of the different periods, name the label blocks and sample blocks, and store them in the training sample library. Changed areas are labeled 1 and unchanged areas 0.
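The following is a small sketch of the sample preparation in step S14, assuming the label and the two temporal images are already co-registered NumPy arrays of the same height and width, with changed pixels equal to 1 and unchanged pixels equal to 0; the function name and the number of random tries are illustrative, while the 256 × 256 patch size and the 0.5 threshold follow the text.

```python
# A sketch of step S14 under the stated assumptions; not the patent's actual code.
import numpy as np

def sample_patches(label, img_t1, img_t2, patch=256, keep_ratio=0.5, n_tries=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = label.shape
    samples = []
    for _ in range(n_tries):
        y = rng.integers(0, h - patch + 1)
        x = rng.integers(0, w - patch + 1)
        lab = label[y:y + patch, x:x + patch]
        # Keep only label blocks whose changed-pixel ratio exceeds the threshold.
        if lab.mean() > keep_ratio:          # mean of a 0/1 mask equals the changed-pixel ratio
            samples.append((img_t1[y:y + patch, x:x + patch],
                            img_t2[y:y + patch, x:x + patch],
                            lab))
    return samples
```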
As a preferred embodiment of the present invention, the change detection network Siam-Deep in step S2 comprises two parts, an encoder and a decoder, wherein the encoder consists of two deep convolution modules (DCNN) and a twin spatial pyramid module (Siam-ASPP), and the decoder consists of an up-sampling module, a feature connection module and convolution modules.
As a preferred technical solution of the present invention, the deep convolution module (DCNN) of the encoder consists of two groups of successively stacked atrous (dilated) convolutions and rectified linear units (ReLU); the twin spatial pyramid module (Siam-ASPP) consists of an atrous convolution and an atrous spatial pyramid pooling (ASPP) module. The 4 features of different scales produced by the twin spatial pyramid module are concatenated along the channel dimension and then fused by a 1 × 1 convolution to obtain a new 256-channel feature.
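A rough sketch of the deep convolution module as described, i.e. two groups of successively stacked atrous convolutions each followed by ReLU, is given below; the channel widths, the number of convolutions per group, and the class and function names are assumptions for illustration.

```python
# A sketch of the DCNN encoder module under stated assumptions; widths are illustrative.
import torch
import torch.nn as nn

def atrous_group(in_ch, out_ch, dilation=2, n_convs=2):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3,
                             padding=dilation, dilation=dilation),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class DCNN(nn.Module):
    def __init__(self, in_ch=3, mid_ch=64, out_ch=128):
        super().__init__()
        self.group1 = atrous_group(in_ch, mid_ch)
        self.group2 = atrous_group(mid_ch, out_ch)

    def forward(self, x):
        low = self.group1(x)     # low-level features, later reused by the decoder
        high = self.group2(low)  # higher-level features, fed to the Siam-ASPP module
        return low, high
```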
As a preferred technical solution of the present invention, the decoder comprises an up-sampling module, a feature connection module, and two convolution modules; the up-sampling module consists of two up-sampling layers; the feature connection module is a concatenation layer; and the two convolution modules are a 1 × 1 convolution layer and a 3 × 3 convolution layer, respectively. The decoder first reduces the dimensionality of the low-level features output by the deep convolution module with a 1 × 1 convolution, then upsamples the features obtained from the encoder by a factor of 4 with bilinear interpolation, concatenates them with the low-level encoder features of the corresponding size, further fuses the features with a 3 × 3 convolution, and finally obtains a segmentation prediction of the same size as the original image by bilinear interpolation.
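The decoder path just described can be sketched as follows; the intermediate channel counts (48 and 256) are assumptions borrowed from common DeepLab-style decoders, while the sequence of operations (1 × 1 reduction, bilinear upsampling, concatenation, 3 × 3 fusion, final upsampling) follows the text.

```python
# A sketch of the decoder flow under stated assumptions; channel counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    def __init__(self, low_ch=64, aspp_ch=256, num_classes=2):
        super().__init__()
        self.reduce = nn.Conv2d(low_ch, 48, kernel_size=1)            # 1x1 dimension reduction
        self.fuse = nn.Sequential(
            nn.Conv2d(48 + aspp_ch, 256, kernel_size=3, padding=1),   # 3x3 fusion convolution
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, low_feat, aspp_feat, out_size):
        low = self.reduce(low_feat)
        # Bilinear upsampling of the encoder output to the low-level feature size (about 4x).
        up = F.interpolate(aspp_feat, size=low.shape[-2:], mode="bilinear", align_corners=False)
        x = torch.cat([low, up], dim=1)                               # feature connection (concat)
        x = self.fuse(x)
        # Final bilinear interpolation back to the original image size.
        return F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
```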
As a preferred embodiment of the present invention, all max pooling layers in the atrous convolution part of the encoder are replaced by depthwise separable convolutions with stride = 2.
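A stride-2 depthwise separable convolution of this kind can be written as a per-channel (grouped) 3 × 3 convolution followed by a 1 × 1 pointwise convolution, as in the short sketch below; the layer widths and the trailing ReLU are assumptions.

```python
# A minimal sketch of a stride-2 depthwise separable convolution used in place of max pooling.
import torch.nn as nn

def strided_depthwise_separable(channels, out_channels):
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1, groups=channels),  # depthwise, stride 2
        nn.Conv2d(channels, out_channels, kernel_size=1),                                    # pointwise 1x1
        nn.ReLU(inplace=True),
    )
```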
As a preferred embodiment of the present invention, the data enhancement methods in step S15 include image rotation, flipping, random noise addition and brightness adjustment.
As a preferred embodiment of the present invention, the dilation rate of the atrous convolution is 2; the twin spatial pyramid module uses 3 dilation rates, namely 6, 12 and 18; all convolution kernels are 3 × 3 and all convolution strides are 1.
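The following sketch illustrates one plausible reading of the twin spatial pyramid module: four parallel 3 × 3 atrous convolutions with dilation rates 2, 6, 12 and 18 (the rate-2 branch standing in for the standalone atrous convolution mentioned above), concatenated along the channel dimension and fused by a 1 × 1 convolution into a 256-channel feature. Treating the branches exactly this way, and the branch channel width, are assumptions; the rates, 3 × 3 kernels and stride 1 follow the text. Being a twin module, it would be applied with shared weights to the features of both temporal images.

```python
# A sketch of a Siam-ASPP-style module under the stated assumptions; not the patent's code.
import torch
import torch.nn as nn

class SiamASPP(nn.Module):
    def __init__(self, in_ch=128, branch_ch=64, out_ch=256, rates=(2, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3, stride=1, padding=r, dilation=r),
                nn.ReLU(inplace=True))
            for r in rates
        ])
        self.project = nn.Conv2d(branch_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]         # 4 features with different receptive fields
        return self.project(torch.cat(feats, dim=1))  # channel concat + 1x1 fusion -> 256 channels
```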
As a preferred embodiment of the present invention, the post-processing in step S3 uses a low-pass filtering convolution kernel to remove noise and speckle from the detection result, and uses a polygon regularization method consisting of coarse and fine adjustment to transform the polygons in the detection result into structured outlines.
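For intuition only, the sketch below shows one simple way to realize this kind of post-processing: a mean (low-pass) filter with re-thresholding to suppress noise and speckle, followed by Douglas-Peucker contour simplification (cv2.approxPolyDP) standing in for the coarse/fine polygon regularization. The patent does not spell out its regularization procedure, so this is an illustrative substitute, not the patented method.

```python
# A sketch of low-pass filtering plus a simple polygon-simplification stand-in for
# outline regularization; parameters and the simplification method are assumptions.
import cv2
import numpy as np

def postprocess(change_mask, ksize=5, eps_ratio=0.01):
    # Low-pass filtering: mean-filter the 0/1 mask and re-threshold it.
    smoothed = cv2.blur(change_mask.astype(np.float32), (ksize, ksize))
    cleaned = (smoothed > 0.5).astype(np.uint8)

    # Regularize each change polygon by simplifying its outer contour.
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = np.zeros_like(cleaned)
    for c in contours:
        eps = eps_ratio * cv2.arcLength(c, True)
        poly = cv2.approxPolyDP(c, eps, True)
        cv2.fillPoly(out, [poly], 1)
    return out
```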
Compared with the prior art, the invention has the following beneficial effects: 1) a change detection model based on a twin neural network is provided for change detection of high-resolution remote sensing images; 2) even in complex change areas, the detection result remains smooth and close to the real ground-feature changes; 3) the trained network model can be used for change detection in a variety of complex scenes such as water bodies, buildings, forests and roads.
Drawings
FIG. 1 is a schematic diagram of the overall model structure in the method for detecting remote sensing image changes based on image segmentation and a twin neural network;
FIG. 2 is a schematic diagram of the twin spatial pyramid module (Siam-ASPP) in the method for detecting remote sensing image changes based on image segmentation and a twin neural network;
FIG. 3 shows sample change detection results of the method for detecting remote sensing image changes based on image segmentation and a twin neural network, where (a) is an aerial remote sensing image of a certain place, (b) is a later revisit aerial image of the same area as (a), and (c) is the ground-feature change analysis result for (a) and (b); (d) is an aerial remote sensing image of another place, (e) is a later revisit aerial image of the same area as (d), and (f) is the ground-feature change analysis result for (d) and (e).
Detailed Description
The technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments.
Embodiment: the method for detecting remote sensing image changes based on image segmentation and a twin neural network specifically comprises the following steps:
s1: as shown in fig. 1, in order to train a model, firstly, training data and labels, and constructing a training sample library according to an existing image to be processed, a ground surface coverage vector and a raster file, which are collected by a satellite and an unmanned aerial vehicle, wherein the training sample library comprises multi-time image data and label data of ground feature changes in the same area;
In step S1, the specific steps of constructing the training sample library are:
S11 registration and spatio-temporal matching of change-area images: spatio-temporally match the collected high-resolution image data according to the area covered by the existing change vectors and the raster information data, i.e. match data of the same longitude-latitude area from different periods; if an image to be processed is a framed (tiled) image, cut and splice the tiles to obtain a complete image;
S12 image resampling: compute statistics on the resolutions of the high-resolution images cropped during the spatio-temporal matching of step S11, and resample the other images using the most common resolution as the reference;
S13 rasterization of vector change labels: rasterize the collected vector change files and convert them into raster labels with the same resolution as the corresponding images, where label pixels take two values, changed area (1) and unchanged area (0);
S14 model training sample preparation: repeatedly crop 256 × 256 label blocks at random positions of the raster change labels, count the number of changed pixels in each label block, and keep the label blocks whose ratio of changed pixels to total pixels is greater than 0.5; meanwhile, according to the positions of the kept label blocks, crop the corresponding image blocks from the matching high-resolution images of the different periods, name the label blocks and sample blocks, and store them in the training sample library;
S15 data enhancement: apply data enhancement to the image blocks and corresponding label blocks in the sample library to generate the training sample library; the data enhancement methods in step S15 include image rotation and flipping, random noise addition, and brightness adjustment; the training sample library also includes real label data based on manual annotation and label data obtained by image difference analysis based on the results of a universal segmentation model;
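A small sketch of the data enhancement in step S15 is given below: rotation, flipping, random noise and brightness adjustment, with the geometric transforms applied identically to both temporal image blocks and the label block. The parameter ranges (noise standard deviation, brightness gain) and the function name are assumptions.

```python
# A sketch of step S15 under stated assumptions; ranges are illustrative, not specified
# by the patent.
import numpy as np

def augment(img_t1, img_t2, label, rng=None):
    rng = rng or np.random.default_rng()
    k = rng.integers(0, 4)                       # rotate by 0/90/180/270 degrees
    img_t1, img_t2, label = (np.rot90(a, k) for a in (img_t1, img_t2, label))
    if rng.random() < 0.5:                       # horizontal flip
        img_t1, img_t2, label = (np.flip(a, axis=1) for a in (img_t1, img_t2, label))
    gain = rng.uniform(0.8, 1.2)                 # brightness adjustment (images only)
    img_t1 = np.clip(img_t1 * gain + rng.normal(0, 5, img_t1.shape), 0, 255)  # add random noise
    img_t2 = np.clip(img_t2 * gain + rng.normal(0, 5, img_t2.shape), 0, 255)
    return img_t1, img_t2, label
```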
S2: train the change detection network Siam-Deep with the multi-temporal image data and ground-feature change label data from the training sample library constructed in step S1, learning the change characteristics of different ground features in high-resolution remote sensing images;
The change detection network Siam-Deep in step S2 comprises two parts, an encoder and a decoder. The encoder consists of two deep convolution modules (DCNN) and a twin spatial pyramid module (Siam-ASPP); the decoder consists of an up-sampling module, a feature connection module and convolution modules. The deep convolution module (DCNN) of the encoder consists of two groups of successively stacked atrous (dilated) convolutions and rectified linear units (ReLU). As shown in FIG. 2, the twin spatial pyramid module (Siam-ASPP) consists of an atrous convolution and an atrous spatial pyramid pooling (ASPP) module; the 4 features of different scales produced by the twin spatial pyramid module are concatenated along the channel dimension and then fused by a 1 × 1 convolution to obtain a new 256-channel feature. The decoder comprises an up-sampling module, a feature connection module (concat layer), and two convolution modules; the up-sampling module consists of two up-sampling layers; the feature connection module is a concatenation layer; and the two convolution modules are a 1 × 1 convolution layer and a 3 × 3 convolution layer, respectively. The dilation rate of the atrous convolution is 2; the twin spatial pyramid module uses 3 dilation rates, namely 6, 12 and 18; all convolution kernels are 3 × 3 and all convolution strides are 1;
The decoder first reduces the dimensionality of the low-level features output by the deep convolution module with a 1 × 1 convolution, then upsamples the features obtained from the encoder by a factor of 4 with bilinear interpolation, concatenates them with the low-level encoder features of the corresponding size, further fuses the features with a 3 × 3 convolution, and finally obtains a segmentation prediction of the same size as the original image by bilinear interpolation. All max pooling layers in the atrous convolution part of the encoder are replaced by depthwise separable convolutions with stride = 2;
S3: post-process the extracted change detection result, remove noise and speckle in the changed areas, and regularize the building outlines to obtain the final change detection result. The post-processing in step S3 uses a low-pass filtering convolution kernel to remove noise and speckle from the detection result, and uses a polygon regularization method consisting of coarse and fine adjustment to transform the polygons in the detection result into structured outlines. FIG. 3 shows sample results of applying the method of the present invention: (a) is an aerial remote sensing image of a certain place, (b) is a later revisit aerial image of the same area as (a), and (c) is the ground-feature change analysis result for (a) and (b); (d) is an aerial remote sensing image of another place, (e) is a later revisit aerial image of the same area as (d), and (f) is the ground-feature change analysis result for (d) and (e).
To verify the effectiveness of the method, it is compared with 4 existing deep learning methods; Table 1 compares the accuracy of the method of the present invention with that of the other deep-learning-based methods.
TABLE 1 Comparison of the method of the present invention with 4 other deep learning methods

Detection method       Precision   Recall   F1 score   Overall accuracy
FC-EF                  0.609       0.528    0.594      0.911
FC-Siam-diff           0.706       0.658    0.667      0.932
EF-UNet++              0.911       0.883    0.896      0.978
DASNet (ResNet50)      0.932       0.922    0.927      0.982
Siam-Deeplab (ours)    0.947       0.934    0.940      0.986
As can be seen from Table 1, the proposed method (Siam-Deeplab) outperforms the other 4 methods across the board, with a precision of 0.947, a recall of 0.934, an F1 score of 0.940, and an overall accuracy of 0.986.
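For reference, the four metrics reported in Table 1 are typically computed from the confusion matrix of predicted versus reference change pixels, as in the small sketch below; the function name is an assumption and no division-by-zero guard is included.

```python
# A sketch of standard change detection metrics: precision, recall, F1 and overall accuracy.
import numpy as np

def change_metrics(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)     # changed pixels correctly detected
    fp = np.sum(pred & ~ref)    # false alarms
    fn = np.sum(~pred & ref)    # missed changes
    tn = np.sum(~pred & ~ref)   # unchanged pixels correctly detected
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    overall_accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, overall_accuracy
```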
The above description covers only exemplary embodiments of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A method for detecting remote sensing image changes based on image segmentation and a twin neural network, characterized by comprising the following steps:
S1: constructing a training sample library from the collected images to be processed, land-cover vectors and raster files, the training sample library containing multi-temporal image data of the same area and label data of ground-feature changes;
S2: training the change detection network Siam-Deep with the multi-temporal image data and ground-feature change label data from the training sample library constructed in step S1, learning the change characteristics of different ground features in high-resolution remote sensing images;
S3: post-processing the extracted change detection result, removing noise and speckle in the changed areas, and regularizing building outlines to obtain the final change detection result.
2. The method for detecting remote sensing image changes based on image segmentation and a twin neural network as claimed in claim 1, wherein the training sample library in step S1 includes real label data based on manual annotation and label data obtained by image difference analysis based on the results of a universal segmentation model.
3. The method for detecting remote sensing image changes based on image segmentation and a twin neural network according to claim 1 or 2, wherein, in step S1, the specific steps of constructing the training sample library are:
S11 registration and spatio-temporal matching of change-area images: spatio-temporally matching the collected high-resolution image data according to the area covered by the existing change vectors and the raster information data, namely matching data of the same longitude-latitude area from different periods; if an image to be processed is a framed (tiled) image, cutting and splicing the tiles to obtain a complete image;
S12 image resampling: computing statistics on the resolutions of the high-resolution images cropped during the spatio-temporal matching of step S11, and resampling the other images using the most common resolution as the reference;
S13 rasterization of vector change labels: rasterizing the collected vector change files and converting them into raster labels with the same resolution as the corresponding images, wherein label pixels take two values, changed area and unchanged area;
S14 model training sample preparation: repeatedly cropping 256 × 256 label blocks at random positions of the raster change labels, counting the number of changed pixels in each label block, keeping the label blocks whose ratio of changed pixels to total pixels is greater than 0.5, meanwhile cropping the corresponding image blocks from the matching high-resolution images of the different periods according to the positions of the kept label blocks, naming the label blocks and sample blocks, and storing them in the training sample library;
S15 data enhancement: performing data enhancement on the image blocks and corresponding label blocks in the sample library to generate the training sample library.
4. The method for detecting remote sensing image changes based on image segmentation and a twin neural network as claimed in claim 3, wherein the change detection network Siam-Deep in step S2 comprises two parts, an encoder and a decoder, the encoder consisting of two deep convolution modules and a twin spatial pyramid module, and the decoder consisting of an up-sampling module, a feature connection module and a convolution module.
5. The method for detecting remote sensing image changes based on image segmentation and a twin neural network as claimed in claim 4, wherein the deep convolution module of the encoder consists of two groups of successively stacked atrous convolutions and rectified linear units, and the twin spatial pyramid module consists of a group of atrous convolutions and an atrous spatial pyramid pooling module.
6. The method for detecting the change of the remote sensing image based on the image segmentation and the twin neural network as claimed in claim 4, wherein the decoder comprises an up-sampling module, a feature connection module and two convolution modules; the up-sampling module consists of two up-sampling layers; the characteristic connection module is a connection layer; the two convolution modules are a 1 × 1 convolution layer and a 3 × 3 convolution layer, respectively.
7. The method for detecting remote sensing image changes based on image segmentation and a twin neural network as claimed in claim 5, wherein all max pooling layers in the atrous convolution part of the encoder are replaced by depthwise separable convolutions with stride = 2.
8. The method for detecting remote sensing image changes based on image segmentation and a twin neural network as claimed in claim 5, wherein the data enhancement methods in step S15 include image rotation, flipping, random noise addition and brightness adjustment.
9. The method for detecting remote sensing image changes based on image segmentation and a twin neural network as claimed in claim 5, wherein the dilation rate of the atrous convolution is 2, the twin spatial pyramid module uses 3 dilation rates, namely 6, 12 and 18, all convolution kernels are 3 × 3, and all convolution strides are 1.
10. The method for detecting remote sensing image changes based on image segmentation and a twin neural network as claimed in claim 3, wherein the post-processing in step S3 uses a low-pass filtering convolution kernel to remove noise and speckle from the detection result, and uses a polygon regularization method consisting of coarse and fine adjustment to transform the polygons in the detection result into structured outlines.
CN202111106196.9A 2021-09-22 2021-09-22 Method for detecting remote sensing image change based on image segmentation and twin neural network Active CN113569815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111106196.9A CN113569815B (en) 2021-09-22 2021-09-22 Method for detecting remote sensing image change based on image segmentation and twin neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111106196.9A CN113569815B (en) 2021-09-22 2021-09-22 Method for detecting remote sensing image change based on image segmentation and twin neural network

Publications (2)

Publication Number Publication Date
CN113569815A 2021-10-29
CN113569815B 2021-12-31

Family

ID=78173890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111106196.9A Active CN113569815B (en) 2021-09-22 2021-09-22 Method for detecting remote sensing image change based on image segmentation and twin neural network

Country Status (1)

Country Link
CN (1) CN113569815B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919206A (en) * 2019-02-25 2019-06-21 武汉大学 A kind of remote sensing image ground mulching classification method based on complete empty convolutional neural networks
CN111161218A (en) * 2019-12-10 2020-05-15 核工业北京地质研究院 High-resolution remote sensing image change detection method based on twin convolutional neural network
CN111640159A (en) * 2020-05-11 2020-09-08 武汉大学 Remote sensing image change detection method based on twin convolutional neural network
CN112668494A (en) * 2020-12-31 2021-04-16 西安电子科技大学 Small sample change detection method based on multi-scale feature extraction

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155200A (en) * 2021-11-09 2022-03-08 二十一世纪空间技术应用股份有限公司 Remote sensing image change detection method based on convolutional neural network
CN114155200B (en) * 2021-11-09 2022-08-26 二十一世纪空间技术应用股份有限公司 Remote sensing image change detection method based on convolutional neural network
CN114049335A (en) * 2021-11-18 2022-02-15 感知天下(北京)信息科技有限公司 Remote sensing image change detection method based on space-time attention
CN114049335B (en) * 2021-11-18 2022-06-14 感知天下(北京)信息科技有限公司 Remote sensing image change detection method based on space-time attention
CN115018754A (en) * 2022-01-20 2022-09-06 湖北理工学院 Novel performance of depth twin network improved deformation profile model
CN115018754B (en) * 2022-01-20 2023-08-18 湖北理工学院 Method for improving deformation contour model by depth twin network
CN115457390A (en) * 2022-09-13 2022-12-09 中国人民解放军国防科技大学 Remote sensing image change detection method and device, computer equipment and storage medium
CN115690591A (en) * 2023-01-05 2023-02-03 速度时空信息科技股份有限公司 Remote sensing image farmland non-agricultural change detection method based on deep learning

Also Published As

Publication number Publication date
CN113569815B (en) 2021-12-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20230414
Address after: Room 403C, Building 2 and 3, Building M-10, Maqueling Industrial Zone, Maling Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057
Patentee after: Speed spatiotemporal big data research (Shenzhen) Co.,Ltd.
Address before: 210042 8 Blocks 699-22 Xuanwu Avenue, Xuanwu District, Nanjing City, Jiangsu Province
Patentee before: SPEED TIME AND SPACE INFORMATION TECHNOLOGY Co.,Ltd.
CP01 Change in the name or title of a patent holder
Address after: Room 403C, Building 2 and 3, Building M-10, Maqueling Industrial Zone, Maling Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057
Patentee after: Shenzhen Tianshu Intelligent Co.,Ltd.
Address before: Room 403C, Building 2 and 3, Building M-10, Maqueling Industrial Zone, Maling Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057
Patentee before: Speed spatiotemporal big data research (Shenzhen) Co.,Ltd.