CN112613352A - Remote sensing image change detection method based on twin network - Google Patents

Remote sensing image change detection method based on twin network

Info

Publication number
CN112613352A
Authority
CN
China
Prior art keywords
convolution
remote sensing
twin
network
change detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011409630.6A
Other languages
Chinese (zh)
Inventor
石爱业
陆定一
王越
马浩洋
姚雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN202011409630.6A priority Critical patent/CN112613352A/en
Publication of CN112613352A publication Critical patent/CN112613352A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image change detection method based on a twin (Siamese) network. The method takes a twin convolutional network as its framework and combines it with an encoder-decoder structure: at the encoding end, two convolutional networks with identical structure and shared weights, i.e. the twin convolutional structure, separately extract the features of the two radiometrically corrected time-phase remote sensing images, so that both input images are mapped into the same feature space. After each convolution, the features of the two branches are summed and, via skip connections, fused with the decoder-side image features, so that more abstract features are extracted. Finally, a CRF is appended after the decoder output to obtain a better segmentation result. Experimental results show that, compared with other existing methods, the accuracy of the detection result is improved and the detected change regions are relatively complete.

Description

Remote sensing image change detection method based on twin network
Technical Field
The invention relates to a twin network-based remote sensing image change detection method, and belongs to the technical field of optical remote sensing.
Background
Change detection in remote sensing images can be applied to many fields and plays an important role. In disaster monitoring, change detection can identify natural disasters such as flash floods, earthquakes and landslides as early as possible, locate the disaster source and assess the scale of the disaster, which facilitates relief work and a better response; in urban growth tracking, change detection can be used to monitor ground vegetation and land use and to detect problems such as illegal land occupation.
From the standpoint of whether prior knowledge is required, remote sensing change detection methods can be divided into supervised and unsupervised approaches. Supervised methods require a labelled reference change map; unsupervised methods do not, and derive the changed and unchanged regions directly from two remote sensing images of the same area acquired at different times. Common unsupervised methods are thresholding and clustering.
The goal of thresholding is to find the optimal threshold that separates the unchanged class from the changed class, and to classify the difference image by comparing each pixel against that threshold. Beyond the classical Kittler-Illingworth (K&I) threshold method, Bazi et al. proposed an improved KI criterion based on a generalized Gaussian model of the distributions of the changed and unchanged classes. Moser and Serpico developed an automatic thresholding technique for detecting ground changes from SAR images: adopting an image-ratio operator for SAR change detection and accounting for the non-Gaussian distribution of SAR image amplitude, they generalized the Kittler-Illingworth minimum-error thresholding algorithm. Im et al. proposed a change detection method based on a moving threshold window (MTW); experiments showed that it outperforms conventional binary change detection methods on both single and multiple change-enhanced images of the study area. Clustering methods can be roughly divided into two categories: hard clustering and soft clustering. The most classical hard-clustering method is K-means, while classical soft-clustering methods include fuzzy C-means (FCM), FLICM and RFLICM. Furthermore, Ding et al. proposed a sparse hierarchical clustering method that generates discriminative change features by stacking spatio-temporal multi-scale centro-symmetric local binary pattern features and learning a tree-structured dictionary from a pseudo-training set and unlabeled data, so as to explore the multi-modal, hierarchical distribution of the change features.
Traditional change detection methods place strict demands on preprocessing, and the features they extract from the images are shallow, so their accuracy still needs improvement. In recent years deep learning has made remarkable progress in image processing and related fields, and researchers have brought deep learning algorithms to remote sensing change detection; using deep neural networks to detect changes in remote sensing images is currently a hot research topic at home and abroad, with the aim of improving change detection performance. Zhan et al. proposed an optical image change detection method based on a deep twin convolutional network trained with a weighted contrastive loss. Its novelty is that, whereas traditional change detection methods extract features manually, the twin network extracts features directly from the image pair, so the extracted features are more abstract and robust.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a remote sensing image change detection method based on a twin network, in which a CRF is appended after the decoder output to obtain a better segmentation effect, improve the accuracy of the detection result and yield relatively complete change regions.
To solve the above technical problem, the invention adopts the following technical scheme: a remote sensing image change detection method based on a twin network which, built on a twin convolutional network, executes the following steps:
Step 1, the encoding part of the twin convolutional network consists of two independent convolutional network branches with identical structure, used to extract the spectral and structural features of the t1-time and t2-time remote sensing images;
Step 2, in the decoding part of the twin convolutional network, the result after the fourth pooling is upsampled and added, via a skip connection, to the features of the two branches from the fourth convolution; after a transposed convolution, the upsampled result is likewise added, via a skip connection, to the features of the two branches from the third convolution; this operation is repeated four times in total to obtain the feature map;
Step 3, a LogSoftmax classifier is attached to predict the class probability of each pixel in the feature map, and the change detection result map is obtained by binarizing the predicted probability map;
Step 4, CRF post-processing is applied to the result to obtain the final output map.
As a preferred technical solution of the present invention, the encoding part in step 1 specifically includes the following steps:
2.1) in each convolutional network branch of the encoding part, four alternating convolution and pooling operations are performed to extract the corresponding features of the input remote sensing image; the numbers of channels of the feature maps of the first to fourth layers are 16, 32, 64 and 128 respectively, all convolution kernels are 3 × 3 with padding 1 and stride 1, and each convolution is by default followed by a ReLU activation;
2.2) the pooling operation is 2 × 2 max pooling with stride 2; convolution changes only the number of feature channels, not the spatial size, while each pooling leaves the channel count unchanged and halves the spatial size.
As a preferred technical solution of the present invention, the decoding in step 2 specifically includes the following steps:
3.1) up-sampling the result of the fourth pooling of the twin convolutional network;
3.2) adding the upsampled result, via a skip connection, to the features of the two branches from the fourth convolution; after a transposed convolution, likewise adding the upsampled result, via a skip connection, to the features of the two branches from the third convolution; and repeating this operation four times in total.
As a preferred embodiment of the present invention, the transpose convolution in step 2 includes the following steps:
the transposed convolution ConvTranspose2d is used for upsampling, with a kernel size of 3 × 3, padding of 1, stride of 2 and output_padding of 1.
As a preferred technical solution of the present invention, the LogSoftmax classifier in step 3 includes the following steps:
the loss function is the negative log-likelihood loss NLLLoss, formulated as
$$L = -\sum_{k} y_k \log \hat{y}_k$$
where $\hat{y}_k$ is the output of the Softmax function; applying the logarithm to it, i.e. the LogSoftmax operation, yields the log-vector $\log \hat{y}_k$ of the predicted probabilities. $y_k$ is the target label, taking the value 0 or 1: if a training sample belongs to the i-th class, then $y_i = 1$ and $y_j = 0$ for the remaining $j \neq i$.
Compared with the prior art, the twin-network-based remote sensing image change detection method of the invention has the following technical effects:
in the method of the invention, the features of the two radiometrically corrected time-phase remote sensing images are extracted separately at the encoding end by two convolutional networks with identical structure and shared weights, i.e. the twin convolutional structure. After each convolution, the features of the two branches are summed and, via skip connections, fused with the decoder-side image features, so that more abstract features are extracted. Finally, a CRF is appended after the decoder output to obtain a better segmentation result. Experiments show that, compared with other methods, the accuracy of the detection result is improved and the detected change regions are relatively complete.
Drawings
FIG. 1 shows a system architecture diagram of a twin network of the present invention;
FIG. 2 shows an example of image enhancement;
FIG. 3 is the SZADA/1 t1-time image;
FIG. 4 is the SZADA/1 t2-time image;
FIG. 5 is a reference diagram of change detection;
fig. 6 is a change detection result of the CVA algorithm;
FIG. 7 is a change detection result of the PCA algorithm;
FIG. 8 is the change detection result of the MAD algorithm;
fig. 9 is the result of change detection of the algorithm of the present invention.
Detailed Description
The invention aims to overcome the defects of the prior art and provides a remote sensing image change detection method based on a twin network: two weight-sharing convolutional branches extract the features of the two time-phase images at the encoding end; after each convolution, the features of the two branches are summed and, via skip connections, fused with the decoder-side image features, so that more abstract features are extracted; finally, a CRF is appended after the decoder output to obtain a better segmentation result. Experiments show that, compared with other methods, the accuracy of the detection result is improved and the detected change regions are relatively complete.
In order to achieve the above object, the present invention is implemented by the following technical solutions as shown in fig. 1.
Step 1, the encoding part consists of two independent convolutional network branches with identical structure, used to extract the spectral and structural features of the images at times t1 and t2 respectively.
Step 2, in the decoding part, the upsampled result is added, via a skip connection, to the features of the two branches from the fourth convolution; after a transposed convolution, it is likewise added, via a skip connection, to the features of the two branches from the third convolution; this operation is repeated four times in total.
Step 3, a LogSoftmax classifier is attached to predict the class probability of each pixel in the feature map, and the change detection result map is obtained by binarizing the predicted probability map.
Step 4, CRF post-processing is applied to the result to obtain the final output map.
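The patent does not specify which CRF variant or implementation is used in step 4. A minimal sketch of one common choice, a fully connected CRF applied to the network's softmax probabilities via the third-party pydensecrf package (the package, the function name crf_refine and the kernel parameters are illustrative assumptions, not taken from the patent):

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(prob, rgb_image, iters=5):
    """Refine a (2, H, W) class-probability map with a dense CRF.

    prob      : per-pixel probabilities for the unchanged/changed classes
    rgb_image : uint8 (H, W, 3) image driving the appearance (bilateral) kernel
    """
    n_classes, h, w = prob.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    # Unary potentials are the negative log of the network's probabilities.
    unary = np.ascontiguousarray(unary_from_softmax(prob), dtype=np.float32)
    d.setUnaryEnergy(unary)
    # Pairwise terms: a smoothness kernel and an appearance kernel (illustrative parameters).
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=60, srgb=10, rgbim=np.ascontiguousarray(rgb_image), compat=5)
    q = np.array(d.inference(iters)).reshape(n_classes, h, w)
    return q.argmax(axis=0).astype(np.uint8)   # refined 0/1 change map
```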
In step 1, the specific steps for the coding part are as follows:
2.1) In each of the two convolutional network branches, four alternating convolution and pooling operations are performed to extract deeper features of the input image. The numbers of channels of the feature maps of the first to fourth layers are 16, 32, 64 and 128 respectively; all convolution kernels are 3 × 3 with padding 1 and stride 1, and each convolution is by default followed by a ReLU activation.
2.2) The pooling operation is 2 × 2 max pooling with stride 2. Convolution changes only the number of feature channels, not the spatial size, while each pooling leaves the channel count unchanged and halves the spatial size.
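A minimal PyTorch sketch of one encoder branch under the parameters stated above (channel widths 16/32/64/128, 3 × 3 kernels with padding 1 and stride 1, ReLU, 2 × 2 max pooling with stride 2); the class and variable names are illustrative and not taken from the patent:

```python
import torch
import torch.nn as nn

class EncoderBranch(nn.Module):
    """One branch of the twin encoder: four conv + ReLU + 2x2 max-pool stages."""
    def __init__(self, in_channels=3, widths=(16, 32, 64, 128)):
        super().__init__()
        stages, prev = [], in_channels
        for w in widths:
            stages.append(nn.Sequential(
                nn.Conv2d(prev, w, kernel_size=3, padding=1, stride=1),
                nn.ReLU(inplace=True)))
            prev = w
        self.stages = nn.ModuleList(stages)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        skips = []                 # pre-pooling features, reused by the decoder
        for stage in self.stages:
            x = stage(x)           # convolution: changes channels, keeps spatial size
            skips.append(x)
            x = self.pool(x)       # pooling: keeps channels, halves spatial size
        return x, skips

# Weight sharing between the two branches is obtained by applying the same
# module instance to both time-phase images:
# encoder = EncoderBranch()
# feat1, skips1 = encoder(img_t1)
# feat2, skips2 = encoder(img_t2)
```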
For the decoding in step 2, the specific steps are as follows:
3.1) upsampling the result after the fourth pooling.
3.2) adding the upsampled result, via a skip connection, to the features of the two branches from the fourth convolution; after a transposed convolution, likewise adding the upsampled result, via a skip connection, to the features of the two branches from the third convolution; and repeating this operation four times in total.
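A sketch of the decoding path just described, assuming plain element-wise addition for the skip fusion and a 1 × 1 convolution producing the per-pixel class scores fed to the LogSoftmax classifier (these choices, like the class name Decoder, are assumptions for illustration; the channel widths follow step 2.1 above):

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Four upsampling stages; at each stage the transposed-convolution output
    is added to the corresponding pre-pooling features of both encoder branches."""
    def __init__(self, num_classes=2):
        super().__init__()
        # (in_channels, out_channels) of the four transposed convolutions;
        # out_channels matches the skip features they are added to.
        specs = [(128, 128), (128, 64), (64, 32), (32, 16)]
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(ci, co, kernel_size=3, padding=1,
                               stride=2, output_padding=1)
            for ci, co in specs)
        self.classifier = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, bottom, skips1, skips2):
        x = bottom                      # 128-channel feature after the fourth pooling
        # Skips are stored shallow-to-deep by the encoder; walk them deep-to-shallow.
        for up, s1, s2 in zip(self.ups, reversed(skips1), reversed(skips2)):
            x = up(x)                   # transposed convolution doubles the spatial size
            x = x + s1 + s2             # skip connection: add the features of both branches
        return self.classifier(x)       # per-pixel class scores (before LogSoftmax)
```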
The transposed convolution in step 2 is configured as follows:
the transposed convolution ConvTranspose2d is used for upsampling, with a kernel size of 3 × 3, padding of 1, stride of 2 and output_padding of 1.
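With these parameters a transposed convolution exactly doubles the spatial size, since H_out = (H_in - 1)·stride - 2·padding + (kernel - 1) + output_padding + 1 = 2·H_in. A quick check in PyTorch (the channel sizes here are only an example):

```python
import torch
import torch.nn as nn

up = nn.ConvTranspose2d(in_channels=128, out_channels=64,
                        kernel_size=3, padding=1, stride=2, output_padding=1)
x = torch.randn(1, 128, 7, 7)   # e.g. the 7x7 feature map after the fourth pooling of a 112x112 input
print(up(x).shape)              # torch.Size([1, 64, 14, 14]) -- spatial size doubled
```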
The LogSoftmax classifier in the step 3 comprises the following steps:
The loss function is the negative log-likelihood loss NLLLoss (Negative Log-Likelihood Loss). NLLLoss is commonly used for multi-class classification tasks; in addition, it can assign a weight to each class to alleviate class imbalance in the training set. The inputs to NLLLoss are a log-probability vector and a target label, so it is suited to networks whose last layer is LogSoftmax. The loss function is formulated as
$$L = -\sum_{k} y_k \log \hat{y}_k$$
where $\hat{y}_k$ is the output of the Softmax function; applying the logarithm to it, i.e. the LogSoftmax operation, yields the log-vector $\log \hat{y}_k$ of the predicted probabilities. $y_k$ is the target label, taking the value 0 or 1: if the output of a training sample is the i-th class, then $y_i = 1$ and $y_j = 0$ for the remaining $j \neq i$.
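The LogSoftmax/NLLLoss pairing described above maps directly onto PyTorch's nn.LogSoftmax and nn.NLLLoss. A minimal per-pixel sketch with illustrative tensor shapes (the 112 × 112 patch size follows the experiments below; the batch size is arbitrary):

```python
import torch
import torch.nn as nn

log_softmax = nn.LogSoftmax(dim=1)           # dim 1 is the class dimension
criterion = nn.NLLLoss()                     # per-class weights may be passed to handle imbalance

scores = torch.randn(4, 2, 112, 112)         # raw network output: (batch, classes, H, W)
target = torch.randint(0, 2, (4, 112, 112))  # per-pixel labels: 0 = unchanged, 1 = changed

log_prob = log_softmax(scores)               # logarithm of the predicted class probabilities
loss = criterion(log_prob, target)           # negative log-likelihood, averaged over all pixels

# At test time the change map is obtained by binarizing the probability map,
# i.e. taking the most probable class for every pixel:
change_map = log_prob.argmax(dim=1)          # (4, 112, 112) tensor of 0/1 values
```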
Training and testing were performed on a single Nvidia GeForce GTX 1080 Ti GPU with 8 GB of memory.
The data set used in the experiments of this embodiment is the 2008 SZTAKI AirChange Benchmark. It consists of three sub-data sets, named SZADA, TISZADOB and ARCHIVE, containing 7, 5 and 1 image pairs respectively, each at a resolution of 1.5 m/pixel and a size of 952 × 640 pixels.
Based on the network model proposed in this embodiment, the images of the SZADA and TISZADOB data sets are used for training and testing, and the two data sets are each trained and tested separately. Because the SZADA and TISZADOB data sets contain only 7 and 5 image pairs respectively, the data are too scarce; to prevent overfitting during training, the experiments in this chapter enlarge the training set by random flipping or rotation, including left-right flipping, up-down flipping and rotation by 90, 180 and 270 degrees, so as to achieve a better result, as shown in FIG. 2.
For SZADA, SZADA/1 is selected and a 748 × 448 rectangle in its upper-left corner is cropped as the test set; from the remaining SZADA/2-7, 112 × 112 patches are cropped over the whole original images (the sampled regions may overlap) and randomly flipped or rotated to form the training set, 19200 patches in total. Similarly to SZADA/1, SZADA/2 is selected and a 748 × 448 upper-left rectangle is cropped as a test set, while the remaining SZADA/1 and SZADA/3-7 are cropped into 112 × 112 patches and randomly flipped or rotated to form a training set of 19200 patches. For TISZADOB, TISZADOB/3 is selected and a 748 × 448 upper-left rectangle is cropped as the test set; the remaining TISZADOB/1-2 and TISZADOB/4-5 are cropped into 112 × 112 patches and randomly flipped or rotated to form the training set, 12800 patches in total.
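The random flipping and rotation described above can be sketched with NumPy as follows; the function name and the sampling probabilities are illustrative, since the patent only lists the transformations used:

```python
import random
import numpy as np

def random_flip_rotate(img_t1, img_t2, label):
    """Apply the same random flip and/or 90-degree rotation to both time-phase
    patches and to their change label so that they stay pixel-aligned."""
    if random.random() < 0.5:                       # left-right flip
        img_t1, img_t2, label = (np.fliplr(a) for a in (img_t1, img_t2, label))
    if random.random() < 0.5:                       # up-down flip
        img_t1, img_t2, label = (np.flipud(a) for a in (img_t1, img_t2, label))
    k = random.choice([0, 1, 2, 3])                 # rotation by 0, 90, 180 or 270 degrees
    img_t1, img_t2, label = (np.rot90(a, k) for a in (img_t1, img_t2, label))
    return img_t1.copy(), img_t2.copy(), label.copy()
```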
The change detection method of this example was compared with the following change detection methods:
(1) based on the change vector analysis method (CVA). The methods mentioned in the article "PCA-based land-use change detection and analysis using a multistory and multisensor satellite data [ J ]. International Journal of Remote Sensing,2008,29(16):4823-4838 ], by Deng J S et al. ]
(2) Based on Principal Component Analysis (PCA). The method mentioned by Allan et al in the articles "Multivariant Alteration Detection (MAD) and MAF Postprocessing in Multispectral, Bitemporal Image Data: New applications to Change Detection Studies [ J ]. Remote Sensing of Environment, 1998". ]
(3) Based on multivariate change detection Method (MAD). [ YIn W et al in the article "extension-Based volumetric New Network for Modeling Serial Pairs [ J ]. Computer Science, 2015.". ]
The performance of the proposed method is compared using the F1 score, overall accuracy (OA), precision (P) and recall (R).
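These four scores follow from the pixel-wise confusion matrix in the usual way; a small sketch (illustrative code, not taken from the patent, assuming both maps contain at least one changed and one unchanged pixel):

```python
import numpy as np

def change_detection_scores(pred, ref):
    """pred, ref: binary (H, W) arrays with 1 = changed and 0 = unchanged."""
    tp = np.sum((pred == 1) & (ref == 1))   # changed pixels correctly detected
    fp = np.sum((pred == 1) & (ref == 0))   # false alarms
    fn = np.sum((pred == 0) & (ref == 1))   # missed changes
    tn = np.sum((pred == 0) & (ref == 0))   # unchanged pixels correctly detected
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    oa = (tp + tn) / (tp + fp + fn + tn)
    return {"P": precision, "R": recall, "F1": f1, "OA": oa}
```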
The quantitative results of each algorithm on the SZADA/1 data set are shown in Table 2 below.
FIG. 3 is the SZADA/1 t1-time image and FIG. 4 the SZADA/1 t2-time image; FIG. 5 is the change detection reference map; FIGS. 6, 7 and 8 are the change detection results of the CVA, PCA and MAD algorithms respectively, and FIG. 9 is the change detection result of the proposed algorithm. Comparing the reference map of FIG. 5 with FIGS. 6 to 9, the proposed algorithm gives the best detection result in terms of visual effect.
Table 2 likewise shows that every index of the proposed algorithm is clearly better than those of the traditional change detection methods. The result maps show that the change regions detected by the proposed algorithm are relatively complete and smooth, whereas the detection results of the CVA, PCA and MAD methods are disturbed by noise, their detected (white) change regions are less distinct than those of the neural-network-based method, and they contain many false detections and missed detections.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (5)

1. A remote sensing image change detection method based on a twin network is characterized in that based on the twin convolutional network, the following steps are executed:
step 1, the encoding part of the twin convolutional network consists of two independent convolutional network branches with identical structure, used to extract the spectral and structural features of the t1-time and t2-time remote sensing images;
step 2, in the decoding part of the twin convolutional network, the result after the fourth pooling is upsampled and added, via a skip connection, to the features of the two branches from the fourth convolution; after a transposed convolution, the upsampled result is likewise added, via a skip connection, to the features of the two branches from the third convolution; this operation is repeated four times in total to obtain the feature map;
step 3, a LogSoftmax classifier is attached to predict the class probability of each pixel in the feature map, and the change detection result map is obtained by binarizing the predicted probability map;
and step 4, CRF post-processing is applied to the result to obtain the final output map.
2. The method for detecting the change of the remote sensing image based on the twin network as claimed in claim 1, wherein the encoding part in the step 1 comprises the following specific steps:
2.1) in each convolutional network branch of the encoding part, four alternating convolution and pooling operations are performed to extract the corresponding features of the input remote sensing image; the numbers of channels of the feature maps of the first to fourth layers are 16, 32, 64 and 128 respectively, all convolution kernels are 3 × 3 with padding 1 and stride 1, and each convolution is by default followed by a ReLU activation;
2.2) the pooling operation is 2 × 2 max pooling with stride 2; convolution changes only the number of feature channels, not the spatial size, while each pooling leaves the channel count unchanged and halves the spatial size.
3. The twin network-based remote sensing image change detection method according to claim 1, wherein the decoding in step 2 specifically comprises the following steps:
3.1) up-sampling the result of the fourth pooling of the twin convolutional network;
3.2) adding the upsampled result, via a skip connection, to the features of the two branches from the fourth convolution; after a transposed convolution, likewise adding the upsampled result, via a skip connection, to the features of the two branches from the third convolution; and repeating this operation four times in total.
4. The twin network-based remote sensing image change detection method according to claim 2, wherein the transpose convolution in step 2 comprises the following steps:
the transposed convolution ConvTranspose2d is used for upsampling, with a kernel size of 3 × 3, padding of 1, stride of 2 and output_padding of 1.
5. The twin network-based remote sensing image change detection method according to claim 2, wherein the LogSoftmax classifier comprises the following steps in step 3:
the loss function is the negative log-likelihood loss NLLLoss, formulated as
$$L = -\sum_{k} y_k \log \hat{y}_k$$
where $\hat{y}_k$ is the output of the Softmax function; applying the logarithm to it, i.e. the LogSoftmax operation, yields the log-vector $\log \hat{y}_k$ of the predicted probabilities, and $y_k$ is the target label, taking the value 0 or 1: if the output of a training sample is the i-th class, then $y_i = 1$ and $y_j = 0$ for the remaining $j \neq i$.
CN202011409630.6A 2020-12-04 2020-12-04 Remote sensing image change detection method based on twin network Withdrawn CN112613352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011409630.6A CN112613352A (en) 2020-12-04 2020-12-04 Remote sensing image change detection method based on twin network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011409630.6A CN112613352A (en) 2020-12-04 2020-12-04 Remote sensing image change detection method based on twin network

Publications (1)

Publication Number Publication Date
CN112613352A true CN112613352A (en) 2021-04-06

Family

ID=75228984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011409630.6A Withdrawn CN112613352A (en) 2020-12-04 2020-12-04 Remote sensing image change detection method based on twin network

Country Status (1)

Country Link
CN (1) CN112613352A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990112A (en) * 2021-04-20 2021-06-18 湖南大学 Edge-guided cyclic convolution neural network building change detection method and system
CN112990112B (en) * 2021-04-20 2021-07-27 湖南大学 Edge-guided cyclic convolution neural network building change detection method and system
CN113240023A (en) * 2021-05-19 2021-08-10 中国民航大学 Change detection method and device based on change image classification and feature difference value prior
CN113240023B (en) * 2021-05-19 2022-09-09 中国民航大学 Change detection method and device based on change image classification and feature difference value prior
CN113469074A (en) * 2021-07-06 2021-10-01 西安电子科技大学 Remote sensing image change detection method and system based on twin attention fusion network
CN113469074B (en) * 2021-07-06 2023-12-19 西安电子科技大学 Remote sensing image change detection method and system based on twin attention fusion network
CN114419464A (en) * 2022-03-29 2022-04-29 南湖实验室 Twin network change detection model based on deep learning
CN114419464B (en) * 2022-03-29 2022-07-26 南湖实验室 Construction method of twin network change detection model based on deep learning
CN114926746A (en) * 2022-05-25 2022-08-19 西北工业大学 SAR image change detection method based on multi-scale differential feature attention mechanism
CN114926746B (en) * 2022-05-25 2024-03-01 西北工业大学 SAR image change detection method based on multiscale differential feature attention mechanism

Similar Documents

Publication Publication Date Title
CN112613352A (en) Remote sensing image change detection method based on twin network
CN111723732B (en) Optical remote sensing image change detection method, storage medium and computing equipment
CN105574063B (en) The image search method of view-based access control model conspicuousness
Xiang et al. Hyperspectral anomaly detection by local joint subspace process and support vector machine
CN111415323B (en) Image detection method and device and neural network training method and device
CN105989597B (en) Hyperspectral image abnormal target detection method based on pixel selection process
CN111191735B (en) Convolutional neural network image classification method based on data difference and multi-scale features
CN109584284B (en) Hierarchical decision-making coastal wetland ground object sample extraction method
Patil et al. Enhanced radial basis function neural network for tomato plant disease leaf image segmentation
CN110991547A (en) Image significance detection method based on multi-feature optimal fusion
CN115601661A (en) Building change detection method for urban dynamic monitoring
CN115457051A (en) Liver CT image segmentation method based on global self-attention and multi-scale feature fusion
CN113569788A (en) Building semantic segmentation network model training method, system and application method
CN113487600A (en) Characteristic enhancement scale self-adaptive sensing ship detection method
CN112784777B (en) Unsupervised hyperspectral image change detection method based on countermeasure learning
He et al. Simplified texture spectrum for texture analysis
Ji et al. An automatic bad band pre-removal method for hyperspectral imagery
CN105046286A (en) Supervision multi-view feature selection method based on automatic generation of view and unit with l1 and l2 norm minimization
CN117115675A (en) Cross-time-phase light-weight spatial spectrum feature fusion hyperspectral change detection method, system, equipment and medium
CN109902690A (en) Image recognition technology
CN117115663A (en) Remote sensing image change detection system and method based on deep supervision network
LU500715B1 (en) Hyperspectral Image Classification Method Based on Discriminant Gabor Network
Tang et al. A recurrent curve matching classification method integrating within-object spectral variability and between-object spatial association
CN115496950A (en) Neighborhood information embedded semi-supervised discrimination dictionary pair learning image classification method
Sheikh et al. Noise tolerant classification of aerial images into manmade structures and natural-scene images based on statistical dispersion measures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210406)