CN112365525B - High-resolution remote sensing image change detection network, method and device - Google Patents

Authority: CN (China)
Prior art keywords: time phase, level, remote sensing image, feature
Legal status: Active
Application number: CN202011303619.1A
Language: Chinese (zh)
Other versions: CN112365525A
Inventors: 于莉楠, 王战举, 张哲, 任伟
Current assignee: Aerospace Science and Technology (Beijing) Space Information Application Co., Ltd.
Original assignee: Beijing Aerospace Titan Technology Co., Ltd. (application filed by Beijing Aerospace Titan Technology Co., Ltd.)
Priority: CN202011303619.1A
Publications: CN112365525A (application), CN112365525B (grant)

Classifications

    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F 18/253 — Fusion techniques of extracted features
    • G06V 20/13 — Satellite images
    • G06T 2207/10032 — Satellite or aerial image; remote sensing
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]

Abstract

The application relates to a high-resolution remote sensing image change detection network comprising a pre-temporal feature extraction branch and a post-temporal feature extraction branch arranged in parallel. The pre-temporal branch comprises a first convolution module and a pre-temporal feature fusion module; the fusion module obtains the low-level pre-temporal features output by the low-level convolution layers of the first convolution module and the high-level pre-temporal features output by the high-level convolution layers, and fuses them to obtain pre-temporal feature data. The post-temporal branch comprises a second convolution module and a post-temporal feature fusion module; the fusion module obtains the low-level post-temporal features output by the low-level convolution layers of the second convolution module and the high-level post-temporal features output by the high-level convolution layers, and fuses them to obtain post-temporal feature data. The accuracy of the detection result is effectively improved.

Description

High-resolution remote sensing image change detection network, method and device
Technical Field
The disclosure relates to the technical field of remote sensing image processing, and in particular to a high-resolution remote sensing image change detection network, method and device.
Background
Remote sensing image change detection is one of the key technologies in the remote sensing field and plays an increasingly important role in urban planning, disaster monitoring, land resource management, military reconnaissance and other areas. High-resolution remote sensing imagery has become an important data source for change detection because of its high definition and strong timeliness. When change detection is performed on high-resolution remote sensing images, illumination and shooting-angle differences often produce a large number of spurious changes; complex environments in particular (such as natural scenes with strong seasonal variation, or buildings strongly affected by the shooting angle) place higher demands on the stability and reliability of the change detection algorithm.
Among existing change detection methods, pixel-based methods generally find it difficult to determine a suitable segmentation threshold, while object-oriented methods are prone to falsely extracting change regions because the same object can show different spectra and different objects can show the same spectrum. Existing methods are therefore generally only suitable for ideal conditions in which the illumination and shooting angles of the pre-temporal and post-temporal images change little, interference factors are few, and change regions are obvious. In complex environments with larger illumination and viewing-angle changes and more surrounding interference, change regions are often falsely extracted, degrading the accuracy of the detection result.
Disclosure of Invention
In view of this, the disclosure provides a high-resolution remote sensing image change detection network that can effectively improve the accuracy of remote sensing image change detection results.
According to an aspect of the present disclosure, a high-resolution remote sensing image change detection network is provided for performing change detection on high-resolution remote sensing images of complex environments, comprising: a pre-temporal feature extraction branch and a post-temporal feature extraction branch arranged in parallel;
the pre-temporal feature extraction branch comprises a first convolution module and a pre-temporal feature fusion module; the first convolution module comprises R sequentially cascaded convolution layers;
the pre-temporal feature fusion module is configured to obtain the low-level pre-temporal features output by the low-level convolution layers of the first convolution module and the high-level pre-temporal features output by the high-level convolution layers, and to fuse them to obtain pre-temporal feature data;
the post-temporal feature extraction branch comprises a second convolution module and a post-temporal feature fusion module; the second convolution module comprises S sequentially cascaded convolution layers;
the post-temporal feature fusion module is configured to obtain the low-level post-temporal features output by the low-level convolution layers of the second convolution module and the high-level post-temporal features output by the high-level convolution layers, and to fuse them to obtain post-temporal feature data.
In one possible implementation, the value of R is the same as the value of S;
the first convolution module and the second convolution module both adopt the VGG16 convolution structure.
In one possible implementation, the low-level convolution layers of the first convolution module comprise its first r sequentially cascaded convolution layers, and its high-level convolution layers comprise the remaining R − r sequentially cascaded convolution layers;
the low-level convolution layers of the second convolution module comprise its first s sequentially cascaded convolution layers, and its high-level convolution layers comprise the remaining S − s sequentially cascaded convolution layers;
wherein r, R, s and S are positive integers, R is greater than r, and S is greater than s.
In one possible implementation, the pre-temporal feature fusion module comprises a pre-temporal low-level feature processing submodule and a pre-temporal high-level feature processing submodule;
the pre-temporal low-level feature processing submodule comprises a convolution layer and a spatial attention layer connected in sequence;
the pre-temporal high-level feature processing submodule comprises a channel attention layer, a convolution layer and an upsampling layer connected in sequence;
the network structure of the post-temporal feature fusion module is the same as that of the pre-temporal feature fusion module.
In one possible implementation, the network further comprises a change detection result generation module;
the change detection result generation module is connected to the pre-temporal feature fusion module and the post-temporal feature fusion module, and is configured to obtain the pre-temporal feature data and the post-temporal feature data and to derive a corresponding change detection result from the difference between them.
According to another aspect of the present application, a high-resolution remote sensing image change detection method is further provided for performing change detection on a remote sensing image based on any one of the foregoing high-resolution remote sensing image change detection networks, comprising:
acquiring a pre-temporal remote sensing image and a post-temporal remote sensing image;
inputting the pre-temporal and post-temporal remote sensing images into the trained remote sensing image change detection network;
performing, by the remote sensing image change detection network, feature extraction on the pre-temporal and post-temporal remote sensing images respectively to obtain pre-temporal feature data and post-temporal feature data;
obtaining a corresponding change detection result from the difference between the pre-temporal feature data and the post-temporal feature data;
wherein the feature extraction on the pre-temporal and post-temporal remote sensing images comprises:
the pre-temporal feature fusion module of the network obtaining the low-level pre-temporal features output by the low-level convolution layers of the first convolution module and the high-level pre-temporal features output by the high-level convolution layers, and fusing them to obtain the pre-temporal feature data; and
the post-temporal feature fusion module of the network obtaining the low-level post-temporal features output by the low-level convolution layers of the second convolution module and the high-level post-temporal features output by the high-level convolution layers, and fusing them to obtain the post-temporal feature data.
In one possible implementation, fusing the low-level and high-level pre-temporal features by the pre-temporal feature fusion module comprises:
convolving the low-level pre-temporal features to extract a first feature, and processing the first feature with a spatial attention mechanism to obtain a corresponding second feature;
processing the high-level pre-temporal features with a channel attention mechanism to obtain a third feature, and sequentially convolving and upsampling the third feature to obtain a fourth feature;
fusing the second feature and the fourth feature to obtain the pre-temporal feature data;
and fusing the low-level and high-level post-temporal features by the post-temporal feature fusion module comprises:
convolving the low-level post-temporal features to extract a fifth feature, and processing the fifth feature with a spatial attention mechanism to obtain a corresponding sixth feature;
processing the high-level post-temporal features with a channel attention mechanism to obtain a seventh feature, and sequentially convolving and upsampling the seventh feature to obtain an eighth feature;
fusing the sixth feature and the eighth feature to obtain the post-temporal feature data.
In one possible implementation, obtaining a corresponding change detection result from the difference between the pre-temporal and post-temporal feature data comprises:
computing the Euclidean distance between the pre-temporal feature data and the post-temporal feature data to generate a corresponding distance map;
and processing the distance map with threshold segmentation to obtain the change detection result.
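The two steps above — a per-pixel Euclidean distance map followed by threshold segmentation — can be sketched as follows. The array shapes, threshold value and helper name are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def change_mask(feat_pre, feat_post, threshold):
    """Pixel-wise Euclidean distance between two (C, H, W) feature maps,
    followed by threshold segmentation into a binary change mask."""
    dist_map = np.sqrt(((feat_pre - feat_post) ** 2).sum(axis=0))  # (H, W)
    return dist_map, (dist_map > threshold).astype(np.uint8)

# Toy example: two 4-channel 8x8 feature maps that differ in one corner.
pre = np.zeros((4, 8, 8))
post = np.zeros((4, 8, 8))
post[:, :2, :2] = 3.0  # simulated change region
dist, mask = change_mask(pre, post, threshold=1.0)
print(mask.sum())  # 4 changed pixels
```

In practice the threshold would be chosen on validation data (or by an automatic method such as Otsu's); the patent does not fix a value.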
In one possible implementation, the method further comprises training the remote sensing image change detection network;
wherein training the network comprises:
acquiring a training sample set comprising a plurality of groups of remote sensing image data, each group comprising two remote sensing images of the same region at different time phases;
sequentially inputting each group of the training sample set into a pre-built remote sensing image change detection network model and training the model to obtain the remote sensing image change detection network;
wherein the loss function used to train the model is:
L = Σ_{i,j} [ ω1 · (1 − y_{i,j}) · max(d_{i,j} − 0.1·m, 0)² + ω2 · y_{i,j} · max(m − d_{i,j}, 0)² ]
wherein ω1 is the weight of invariant (unchanged) feature pairs, ω2 is the weight of varying (changed) feature pairs, 0.1·m and m are the boundary values for determining invariant sample pairs and varying sample pairs respectively, y_{i,j} is the change label at position (i, j), and d_{i,j} is the Euclidean distance between the feature maps of the two images at position (i, j).
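Because the published formula appears only as an image in the original record, the sketch below implements a double-margin contrastive loss consistent with the stated term definitions (ω1 weighting unchanged pairs against the 0.1·m boundary, ω2 weighting changed pairs against the m boundary). The specific margin and weight values, and the function name, are assumptions:

```python
import numpy as np

def contrastive_change_loss(dist, label, m=2.0, w1=1.0, w2=1.0):
    """Double-margin contrastive loss over a distance map `dist` (H, W) and a
    binary change map `label` (H, W): unchanged pairs (label 0) are pulled
    below 0.1*m, changed pairs (label 1) are pushed beyond m.
    Margins/weights here are illustrative; the patent only fixes the
    0.1*m / m boundary relationship."""
    unchanged = (1 - label) * np.maximum(dist - 0.1 * m, 0.0) ** 2
    changed = label * np.maximum(m - dist, 0.0) ** 2
    return (w1 * unchanged + w2 * changed).mean()

d = np.array([[0.1, 3.0], [0.1, 3.0]])   # small dist, large dist
y = np.array([[0, 1], [0, 1]])           # unchanged, changed
print(contrastive_change_loss(d, y))     # well-separated pairs -> loss 0.0
```

The 0.1·m inner margin gives unchanged pairs a small tolerance band instead of forcing their distance to exactly zero, which helps against spurious (pseudo-change) differences.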
According to another aspect of the present application, a high-resolution remote sensing image change detection device is further provided, comprising an image acquisition module, an image input module and an image change detection module;
the image acquisition module is configured to acquire a pre-temporal remote sensing image and a post-temporal remote sensing image;
the image input module is configured to input the pre-temporal and post-temporal remote sensing images into the trained remote sensing image change detection network;
the image change detection module is configured to have the network perform feature extraction on the pre-temporal and post-temporal remote sensing images respectively to obtain pre-temporal feature data and post-temporal feature data;
the image change detection module is further configured to obtain a corresponding change detection result from the difference between the pre-temporal and post-temporal feature data;
wherein the feature extraction on the pre-temporal and post-temporal remote sensing images comprises:
the pre-temporal feature fusion module of the network obtaining the low-level pre-temporal features output by the low-level convolution layers of the first convolution module and the high-level pre-temporal features output by the high-level convolution layers, and fusing them to obtain the pre-temporal feature data; and
the post-temporal feature fusion module of the network obtaining the low-level post-temporal features output by the low-level convolution layers of the second convolution module and the high-level post-temporal features output by the high-level convolution layers, and fusing them to obtain the post-temporal feature data.
With the high-resolution remote sensing image change detection network constructed by the embodiments of the present application, after features are extracted from the pre-temporal and post-temporal remote sensing images respectively, the low-level and high-level pre-temporal features extracted from the pre-temporal image are fused into pre-temporal feature data, and the low-level and high-level post-temporal features extracted from the post-temporal image are fused into post-temporal feature data. This effectively enriches the image features, so that change detection based on the resulting pre-temporal and post-temporal feature data yields detection results of markedly higher accuracy.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a network configuration diagram of a high-resolution remote sensing image change detection network according to an embodiment of the present application;
Fig. 2 is a flowchart of a high-resolution remote sensing image change detection method according to an embodiment of the present application;
Fig. 3a shows a pre-temporal remote sensing image in a certain application scenario;
Fig. 3b shows the post-temporal remote sensing image of the scene shown in Fig. 3a;
Fig. 3c shows the result of manual change detection for the application scenario shown in Figs. 3a and 3b;
Fig. 3d shows the result of change detection on the application scenario shown in Figs. 3a and 3b using the high-resolution remote sensing image change detection method according to an embodiment of the present application;
Fig. 4a shows a pre-temporal remote sensing image in a certain application scenario;
Fig. 4b shows the post-temporal remote sensing image of the scene shown in Fig. 4a;
Fig. 4c shows the result of manual change detection for the application scenario shown in Figs. 4a and 4b;
Fig. 4d shows the result of change detection on the application scenario shown in Figs. 4a and 4b using the high-resolution remote sensing image change detection method according to an embodiment of the present application;
Fig. 5a shows a pre-temporal remote sensing image in a certain application scenario;
Fig. 5b shows the post-temporal remote sensing image of the scene shown in Fig. 5a;
Fig. 5c shows the result of manual change detection for the application scenario shown in Figs. 5a and 5b;
Fig. 5d shows the result of change detection on the application scenario shown in Figs. 5a and 5b using the high-resolution remote sensing image change detection method according to an embodiment of the present application;
Fig. 6 is a flowchart illustrating the process of detecting remote sensing image changes using a high-resolution remote sensing image change detection network according to an embodiment of the present application;
Fig. 7 is a block diagram of a high-resolution remote sensing image change detection apparatus according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
Fig. 1 is a network configuration diagram of a high-resolution remote sensing image change detection network according to an embodiment of the present application. It should be noted that the remote sensing image change detection network constructed by the application is mainly intended for change detection on high-resolution remote sensing images of complex environments. Referring to fig. 1, the network includes a pre-temporal feature extraction branch and a post-temporal feature extraction branch arranged in parallel.
The pre-temporal feature extraction branch comprises a first convolution module and a pre-temporal feature fusion module. The first convolution module comprises R sequentially cascaded convolution layers; it performs convolution operations on the input pre-temporal remote sensing image and extracts the corresponding feature data from it.
The pre-temporal feature fusion module obtains the low-level pre-temporal features output by the low-level convolution layers of the first convolution module and the high-level pre-temporal features output by the high-level convolution layers, and fuses them to obtain pre-temporal feature data.
Correspondingly, the post-temporal feature extraction branch comprises a second convolution module and a post-temporal feature fusion module. The second convolution module comprises S sequentially cascaded convolution layers; it performs convolution operations on the input post-temporal remote sensing image and extracts the corresponding feature data from it.
The post-temporal feature fusion module obtains the low-level post-temporal features output by the low-level convolution layers of the second convolution module and the high-level post-temporal features output by the high-level convolution layers, and fuses them to obtain post-temporal feature data.
Thus, with the high-resolution remote sensing image change detection network of the embodiments of the present application, after features are extracted from the pre-temporal and post-temporal remote sensing images respectively, the low-level and high-level pre-temporal features are fused into pre-temporal feature data and the low-level and high-level post-temporal features are fused into post-temporal feature data. This effectively enriches the image features, and change detection based on the resulting feature data is accordingly more accurate.
In one possible implementation, the first and second convolution modules of the constructed network have the same number of convolution layers; that is, the value of R is the same as the value of S.
Meanwhile, the low-level convolution layers of the first convolution module comprise its first r sequentially cascaded convolution layers, and its high-level convolution layers comprise the remaining R − r sequentially cascaded layers. The low-level convolution layers of the second convolution module comprise its first s sequentially cascaded convolution layers, and its high-level convolution layers comprise the remaining S − s sequentially cascaded layers. Here r, R, s and S are positive integers, R is greater than r, and S is greater than s.
Specifically, referring to fig. 1, both the first and the second convolution module may adopt the VGG16 convolution structure, i.e. the cascaded blocks Conv1-2, Conv2-2, Conv3-3, Conv4-3 and Conv5-3. In this embodiment, Conv1-2 and Conv2-2 serve as the low-level convolution layers, and their outputs are the low-level pre-temporal and low-level post-temporal features; Conv3-3, Conv4-3 and Conv5-3 serve as the high-level convolution layers, and their outputs are the high-level pre-temporal and high-level post-temporal features.
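The layer split described above can be captured in a small sketch. The block names follow the embodiment, while the channel widths are the standard VGG16 values and the helper `split_low_high` is purely illustrative:

```python
# Standard VGG16 convolution blocks: (name, number of conv layers, output channels).
VGG16_BLOCKS = [
    ("Conv1-2", 2, 64),
    ("Conv2-2", 2, 128),
    ("Conv3-3", 3, 256),
    ("Conv4-3", 3, 512),
    ("Conv5-3", 3, 512),
]

def split_low_high(blocks, n_low=2):
    """Split the cascaded blocks into low-level and high-level groups.
    Per the described embodiment, the first two blocks (Conv1-2, Conv2-2)
    are low-level; the remaining three are high-level."""
    return blocks[:n_low], blocks[n_low:]

low, high = split_low_high(VGG16_BLOCKS)
print([name for name, _, _ in low])   # ['Conv1-2', 'Conv2-2']
print([name for name, _, _ in high])  # ['Conv3-3', 'Conv4-3', 'Conv5-3']
```

With this split, r = 4 of the R = 13 VGG16 convolution layers are low-level, matching the R > r constraint above.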
After the input pre-temporal and post-temporal remote sensing images have been passed through the first and second convolution modules respectively, the pre-temporal feature fusion module fuses the extracted low-level and high-level pre-temporal features into the corresponding pre-temporal feature data, while the post-temporal feature fusion module fuses the extracted low-level and high-level post-temporal features into the corresponding post-temporal feature data.
In the constructed network, referring to fig. 1, the pre-temporal feature fusion module includes a pre-temporal low-level feature processing submodule and a pre-temporal high-level feature processing submodule. The low-level submodule comprises a convolution layer followed by a spatial attention layer; the high-level submodule comprises a channel attention layer, a convolution layer and an upsampling layer connected in sequence.
Thus, when the pre-temporal feature fusion module fuses the low-level and high-level pre-temporal features into pre-temporal feature data, the low-level submodule convolves the low-level pre-temporal features to extract a first feature and processes it with a spatial attention mechanism to obtain a corresponding second feature. Meanwhile, the high-level submodule processes the high-level pre-temporal features with a channel attention mechanism to obtain a third feature, then sequentially convolves and upsamples it to obtain a fourth feature. The second and fourth features are then fused to obtain the pre-temporal feature data.
The network structure of the post-temporal feature fusion module is the same as that of the pre-temporal feature fusion module, so the process by which it fuses the low-level and high-level post-temporal features into the post-temporal feature data is the same as, or similar to, the pre-temporal fusion described above and is not repeated here.
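A rough sketch of the fusion path described above, using simplified stand-ins for the spatial attention, channel attention and upsampling layers. Element-wise addition as the fusion operation and the omission of the convolution layers are assumptions about details the description leaves open:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Weight each spatial position by a sigmoid of its channel-mean.
    A simplified stand-in for the spatial attention layer."""
    weights = sigmoid(feat.mean(axis=0, keepdims=True))  # (1, H, W)
    return feat * weights

def channel_attention(feat):
    """Weight each channel by a sigmoid of its global average.
    A simplified stand-in for the channel attention layer."""
    weights = sigmoid(feat.mean(axis=(1, 2), keepdims=True))  # (C, 1, 1)
    return feat * weights

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fuse(low, high):
    """Fuse low-level (C, H, W) and high-level (C, H/2, W/2) features:
    spatial attention on the low branch, channel attention plus upsampling
    on the high branch, then element-wise addition."""
    return spatial_attention(low) + upsample2x(channel_attention(high))

low = np.ones((4, 8, 8))
high = np.ones((4, 4, 4))
fused = fuse(low, high)
print(fused.shape)  # (4, 8, 8)
```

The upsampling brings the coarser high-level features back to the low-level resolution so the two branches can be combined per pixel.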
Meanwhile, referring to fig. 1, the remote sensing image change detection network of an embodiment of the present application further includes a change detection result generation module. It is connected to the pre-temporal and post-temporal feature fusion modules, and is configured to obtain the pre-temporal and post-temporal feature data and to derive a corresponding change detection result from the difference between them.
Correspondingly, the application also provides a remote sensing image change detection method based on the remote sensing image change detection network. The method performs change detection on the remote sensing image data to be detected based on any one of the remote sensing image change detection networks described above.
Specifically, fig. 2 shows a flowchart of a remote sensing image change detection method according to an embodiment of the present application. Referring to fig. 2, the method includes: step S100, acquiring a pre-time-phase remote sensing image and a post-time-phase remote sensing image. Here, the acquired pre-time-phase remote sensing image and post-time-phase remote sensing image refer to remote sensing images of different time phases in the same region.
Then, through step S200, the pre-time-phase remote sensing image and the post-time-phase remote sensing image are input into the trained remote sensing image change detection network. And executing step S300, wherein the remote sensing image change detection network is used for respectively carrying out feature extraction on the front-time-phase remote sensing image and the rear-time-phase remote sensing image to obtain front-time-phase feature data and rear-time-phase feature data. Finally, through step S400, a corresponding change detection result is obtained based on the difference between the front time phase characteristic data and the rear time phase characteristic data.
According to the related description of the foregoing remote sensing image change detection network, it may be determined that, in the remote sensing image change detection method of the embodiment of the present application, the remote sensing image change detection network performs feature extraction on the front-time-phase remote sensing image and the rear-time-phase remote sensing image respectively, including:
A pre-time-phase feature fusion module in the remote sensing image change detection network acquires the low-level pre-time-phase features output by the low-level convolution layers in the first convolution module and the high-level pre-time-phase features output by the high-level convolution layers, and fuses the low-level and high-level pre-time-phase features to obtain the pre-time-phase feature data; and
And acquiring the low-level post-time phase characteristics output by the low-level convolution layer in the second convolution module and the high-level post-time phase characteristics output by the high-level convolution layer by a post-time phase characteristic fusion module in the remote sensing image change detection network, and fusing the low-level post-time phase characteristics and the high-level post-time phase characteristics to acquire post-time phase characteristic data.
Wherein, in one possible implementation, the fusing of the low-level pre-time-phase feature and the high-level pre-time-phase feature by the pre-time-phase feature fusion module includes: convolving the low-level pre-time-phase feature to extract a first feature, and processing the first feature using a spatial attention mechanism to obtain a corresponding second feature; meanwhile, processing the high-level pre-time-phase feature using a channel attention mechanism to obtain a third feature, and sequentially convolving and upsampling the third feature to obtain a fourth feature; and then fusing the second feature and the fourth feature to obtain the pre-time-phase feature data.
Correspondingly, the fusing of the low-level post-time-phase feature and the high-level post-time-phase feature by the post-time-phase feature fusion module includes: convolving the low-level post-time-phase feature to extract a fifth feature, and processing the fifth feature using a spatial attention mechanism to obtain a corresponding sixth feature; processing the high-level post-time-phase feature using a channel attention mechanism to obtain a seventh feature, and sequentially convolving and upsampling the seventh feature to obtain an eighth feature; and fusing the sixth feature and the eighth feature to obtain the post-time-phase feature data.
Further, after the front time phase characteristic data and the rear time phase characteristic data are respectively acquired in any one of the above modes, a corresponding change detection result can be obtained based on the difference between the acquired front time phase characteristic data and rear time phase characteristic data.
In one possible implementation manner, the corresponding change detection result is obtained based on the difference between the front time phase characteristic data and the rear time phase characteristic data, which can be achieved in the following manner.
That is, first, the difference between the pre-time-phase feature data and the post-time-phase feature data is calculated using the Euclidean distance, and a corresponding distance map is generated. Then, the distance map is processed by a threshold segmentation method to obtain the change detection result.
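The two steps above can be sketched in a few lines of NumPy — a hedged illustration in which the feature shapes, the threshold value, and the 255/0 encoding of the output follow the conventions used elsewhere in this document:

```python
import numpy as np

def change_map(feat_pre, feat_post, threshold):
    # feat_*: (H, W, C) feature maps from the two branches.
    # Pixel-wise Euclidean distance across channels -> distance map.
    dist = np.sqrt(((feat_pre - feat_post) ** 2).sum(axis=-1))
    # Threshold segmentation: changed regions -> 255, unchanged -> 0.
    return np.where(dist > threshold, 255, 0).astype(np.uint8)

a = np.zeros((4, 4, 3))
b = np.zeros((4, 4, 3))
b[0, 0] = 5.0                                     # one strongly changed pixel
result = change_map(a, b, threshold=1.0)
print(result[0, 0], result[1, 1])                 # 255 0
```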
Therefore, in the remote sensing image change detection method, the remote sensing image change detection network with the twin network structure extracts the corresponding features of the pre- and post-time-phase remote sensing images respectively, and during feature extraction, the extracted low-level and high-level features of each image are fused, making the image features richer. Meanwhile, in the remote sensing image change detection network, rich context information is obtained by introducing a channel attention mechanism, and some background details are effectively filtered out by introducing a spatial attention mechanism, making the image features more salient. This also effectively improves the detection result.
In addition, it should be noted that the remote sensing image change detection method according to the embodiment of the present application is performed based on the remote sensing image change detection network constructed above. Therefore, before the constructed network is applied to remote sensing image change detection, the constructed remote sensing image change detection network model must be trained so that it converges to an optimal network structure.
The training of the remote sensing image change detection network can be achieved in the following manner. That is, first, a training sample set is acquired. Here, it should be noted that the training sample set includes multiple sets of remote sensing image data, where each set of remote sensing image data includes two remote sensing images in the same region and different phases. And then, sequentially inputting each group of remote sensing image data in the training sample set into a pre-built remote sensing image change detection network model, and training the remote sensing image change detection network model to obtain a remote sensing image change detection network.
When the remote sensing image change detection network model is trained, the adopted loss function is as follows:
$$\mathrm{Loss}=\sum_{i,j}\left[\omega_1\,(1-y_{i,j})\max(d_{i,j}-0.1m,\,0)^2+\omega_2\,y_{i,j}\max(m-d_{i,j},\,0)^2\right]\quad(1)$$
wherein ω₁ is the weight characterizing invariant feature pairs, ω₂ is the weight characterizing variant feature pairs, 0.1·m and m are respectively the boundary values for deciding invariant sample pairs and variant sample pairs, and d_{i,j} is the Euclidean distance between the feature maps of the two images at position (i, j).
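A hedged NumPy sketch of such a weighted contrastive loss with the 0.1·m tolerance follows. The exact algebraic form of the patent's formula (1) is rendered only as an image in the source, so the standard margin-based contrastive-loss shape is assumed here; the weights ω₁ and ω₂ are passed in directly rather than derived from formula (2).

```python
import numpy as np

def weighted_contrastive_loss(d, y, m, w1, w2):
    # d: (H, W) Euclidean distances between the two feature maps.
    # y: (H, W) labels — 0 for unchanged pixel pairs, 1 for changed pairs.
    # Tolerance 0.1*m: unchanged pairs with d below it incur zero loss,
    # so weakly differing regions (seasonal color, view offsets) are ignored.
    unchanged = w1 * (1 - y) * np.maximum(d - 0.1 * m, 0.0) ** 2
    changed = w2 * y * np.maximum(m - d, 0.0) ** 2
    return (unchanged + changed).sum()

d = np.array([[0.05, 2.0]])
y = np.array([[0.0, 1.0]])
loss = weighted_contrastive_loss(d, y, m=1.0, w1=1.0, w2=1.0)
print(loss)   # 0.0: d=0.05 is inside the 0.1*m tolerance; d=2.0 exceeds the margin m
```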
In order to more clearly describe the training process of the constructed remote sensing image change detection network in the remote sensing image change detection method of the embodiment of the present application, the training process is described in more detail below.
Referring to fig. 6, first, a change detection training sample set is prepared, via step S001.
Two high-resolution remote sensing images of the same region and different time phases are taken as one group of data, and a plurality of groups of data are prepared. Each group of images is registered and corrected, then the change region of each group is vector-labeled, the vector labeling file is mapped into a binary image (the change region mapped to 255 and the non-change region to 0), and the pre- and post-time-phase images and the change region in the binary image are correspondingly cropped to form a sample set, which is further divided into a training set and a validation set at a ratio of 7:3.
For example, two high-resolution remote sensing images of the same region and different time phases are taken as one group of data and 100 groups of data are prepared. Each group is registered and corrected, the change area of each group is vector-labeled, the vector labeling file is mapped into a binary image (change area mapped to 255, non-change area to 0), and the pre- and post-time-phase images and the change area in the binary image are correspondingly cropped, forming a sample set of 2000 groups. This is divided into a training set and a verification set at a ratio of 7:3, i.e. the training set comprises 1400 groups of samples and the verification set comprises 600 groups, with an image size of 256x256 pixels in each group.
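The 7:3 split of the prepared sample groups can be sketched as follows — a simple shuffled split; the fixed seed is an illustrative assumption, not from the patent:

```python
import random

def split_samples(samples, train_ratio=0.7, seed=0):
    # Shuffle the sample groups, then split into training / validation sets.
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# 2000 cropped sample groups -> 1400 training, 600 validation, as in the example.
train, val = split_samples(range(2000))
print(len(train), len(val))   # 1400 600
```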
And S002, constructing a change detection network model.
The built network model is shown in fig. 1. It measures the degree of similarity of the pre- and post-time-phase images based on a twin (Siamese) network structure, i.e. the model comprises two parallel branches: one branch acquires the features of the pre-time-phase remote sensing image and the other acquires the features of the post-time-phase remote sensing image. The convolution layers of both branches adopt the VGG16 convolution structure, i.e. Conv1-2, Conv2-2, Conv3-3, Conv4-3 and Conv5-3. In order to further highlight the saliency of the image features, the invention fuses the low-level and high-level features extracted by the convolution layers. For one branch, the features output by Conv1-2 and Conv2-2 are taken as the low-level feature F_low, which is further convolved and passed through a spatial attention mechanism module to obtain f_1; the features output by Conv3-3, Conv4-3 and Conv5-3 are taken as the high-level feature F_high, which is processed by the channel attention module, then convolved and upsampled by a factor of 4 to obtain f_2; f_1 and f_2 are fused to obtain a feature map f_3 with salient features. The other branch is processed in the same way. The difference between the feature maps f_3 and f_3' of the two branches is calculated based on the Euclidean distance,
generating a distance map f_dist. The model uses an improved contrastive loss function during training, as shown in formula (1). Compared with the original contrastive loss function, the improved version adds a difference-measurement tolerance of 0.1·m and introduces weights ω_1 and ω_2 representing the weights of invariant feature pairs and variant feature pairs. 0.1·m and m serve as the boundary values for deciding invariant and variant sample pairs, and d_{i,j} is the Euclidean distance of the feature maps of the two images at position (i, j); if the pixel pair at (i, j) has not changed, y_{i,j} = 0, and if it has changed, y_{i,j} = 1. ω_1 and ω_2 are given by formula (2), in which P_U and P_C are respectively the frequencies of occurrence of unchanged and changed pixel pairs. Loss measures the difference between the change truth map and the model output map, and the network model is gradually optimized through continuous loop iteration. During testing, the distance map f_dist is segmented by a threshold to obtain a binary map representing the change condition.
For example, a network structure as shown in fig. 1 is constructed, in which the convolution layers of the two branches adopt the VGG16 convolution structure, i.e. Conv1-2, Conv2-2, Conv3-3, Conv4-3 and Conv5-3. When pre- and post-time-phase sample images of size 256x256 with 3 channels are input to the 2 inputs of the network, each branch obtains a low-level feature of dimension 256x256x192, which after a convolution operation yields a 256x256x64 feature vector and, after the spatial attention module, a feature vector f_1 of dimension 256x256x64. Each branch also obtains a high-level feature of 64x64x384 which, after the channel attention module, keeps the dimension 64x64x384 and, through a 1x1x64 convolution and 4x upsampling, yields a 256x256x64 feature vector f_2. f_1 and f_2 are fused into a 256x256x1 feature map, and model training is performed with the improved contrastive loss function.
Step S003, training a change detection network model;
that is, model training parameters such as the learning rate and step size are set, and the termination condition of model training is set, i.e., training stops when Loss is less than a set threshold Loss_threshold. During training, the training sample set and verification sample set prepared in advance are input into the network structure for model training. The pre- and post-time-phase images of each group of samples are respectively input into the two branches of the twin network structure, the corresponding binary image participates in model training as the labeled truth value of the change area, and each parameter of the model is gradually optimized through training until the termination condition is met, completing the training of the network model.
For example, the hyperparameters of the model, including learning rate and step size, are set; the prepared pre- and post-time-phase sample images of 256x256 pixels and 3 channels are input into the network structure pair by pair, and the binary image participates in model training as the labeled truth information. When the output Loss is smaller than the set threshold, training ends.
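The stopping criterion above can be sketched as a simple training loop. This is a hedged stand-in: `model_step` abstracts away one epoch of actual optimization, and the decaying loss sequence is fabricated purely for illustration.

```python
def train_until_converged(model_step, max_epochs, loss_threshold):
    # model_step() runs one epoch over the training pairs and returns the loss.
    # Stop early as soon as Loss < Loss_threshold; assumes max_epochs >= 1.
    for epoch in range(1, max_epochs + 1):
        loss = model_step()
        if loss < loss_threshold:
            return epoch, loss
    return max_epochs, loss

# Stand-in step whose loss decays on each call.
losses = iter([0.9, 0.5, 0.2, 0.05, 0.01])
epoch, final = train_until_converged(lambda: next(losses), 10, 0.1)
print(epoch, final)   # 4 0.05
```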
Step S004, testing a change detection network model;
firstly, test parameters are set, and the pre- and post-time-phase remote sensing images to be tested are registered and corrected. If the images to be tested are very large and the region of key interest is only a small part of the area, the region of interest can be delineated as vector patches in a vector file and applied as prior information during testing, i.e. change detection is performed only on the local regions corresponding to the vector patches. This effectively shields the interference caused by regions of non-key interest and improves processing efficiency. During processing, the images are input block by block, according to a preset block size, into the network model constructed by the invention; the change detection binary maps output for each block are stitched into one very large image, and the regions with pixel value 255 in the stitched image are the changed regions.
For example, taking two pre- and post-time-phase images of 10000x10000 pixels as test data, vector labeling software is used to delineate a rectangular vector element of 6000x6000 pixels within the area covered by the two images as the region of interest, and this vector file is used as prior information for testing. The model outputs a 10000x10000-pixel binary image, and the final change detection vector file is obtained through raster-to-vector conversion. Fig. 3a to 3d, fig. 4a to 4d, and fig. 5a to 5d each show partial comparisons of the detection result with the truth value.
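The block-by-block processing and stitching can be sketched as follows — a minimal NumPy tiling loop in which `detect_block` stands in for a forward pass of the trained network, and the vector-patch ROI masking described above is omitted:

```python
import numpy as np

def tiled_change_detect(pre, post, detect_block, block=256):
    # Process a very large image pair block by block, then stitch the
    # per-block binary change maps back into one full-size result.
    h, w = pre.shape[:2]
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r+block, c:c+block] = detect_block(
                pre[r:r+block, c:c+block], post[r:r+block, c:c+block])
    return out

# Stand-in detector: mark a block changed wherever the two images differ.
detector = lambda a, b: np.where(a != b, 255, 0).astype(np.uint8)
pre = np.zeros((512, 512), dtype=np.uint8)
post = pre.copy()
post[300, 300] = 9
result = tiled_change_detect(pre, post, detector)
print(result[300, 300], result[0, 0])   # 255 0
```

In practice the block size would match the network's input size (e.g. 256), and overlapping blocks with blending could be used to avoid seams at block boundaries.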
Therefore, in the remote sensing image change detection method of the embodiment of the application, when the constructed remote sensing image change detection network model is trained, a difference-measurement tolerance (namely, 0.1×m) is added to the loss function. This tolerance allows regions where the two images differ only weakly (such as vegetation areas whose color changes somewhat with the seasons, or offset areas around buildings caused by shooting viewing angles) to be identified as non-change areas, suppressing to a certain extent the pseudo changes caused by spectral differences and viewing-angle differences.
Correspondingly, based on any one of the remote sensing image change detection methods, the application also provides a remote sensing image change detection device. Because the working principle of the remote sensing image change detection device provided by the application is the same as or similar to that of the remote sensing image change detection method, repeated parts are not repeated.
Referring to fig. 7, a remote sensing image change detection apparatus 100 according to an embodiment of the present application includes: the device comprises an image acquisition module 110, an image input module 120 and an image change detection module 130. Wherein, the image acquisition module 110 is configured to acquire a pre-temporal remote sensing image and a post-temporal remote sensing image. The image input module 120 is configured to input the pre-temporal remote sensing image and the post-temporal remote sensing image into the trained remote sensing image change detection network. The image change detection module 130 is configured to perform feature extraction on the front-time-phase remote sensing image and the rear-time-phase remote sensing image by using a remote sensing image change detection network, so as to obtain front-time-phase feature data and rear-time-phase feature data. The image change detection module 130 is further configured to obtain a corresponding change detection result based on a difference between the front time phase feature data and the rear time phase feature data.
The method for extracting the characteristics of the front-time-phase remote sensing image and the rear-time-phase remote sensing image by the remote sensing image change detection network comprises the following steps:
a front time phase characteristic fusion module in the remote sensing image change detection network acquires low-level front time phase characteristics output by a low-level convolution layer in a first convolution module and high-level front time phase characteristics output by a high-level convolution layer, and fuses the low-level front time phase characteristics and the high-level front time phase characteristics to acquire front time phase characteristic data;
and acquiring the low-level post-time phase characteristics output by the low-level convolution layer in the second convolution module and the high-level post-time phase characteristics output by the high-level convolution layer by a post-time phase characteristic fusion module in the remote sensing image change detection network, and fusing the low-level post-time phase characteristics and the high-level post-time phase characteristics to acquire post-time phase characteristic data.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement of the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (8)

1. A high-resolution remote sensing image change detection network, characterized in that it is used for performing change detection on high-resolution remote sensing images of complex environments, and comprises: a pre-time-phase feature extraction branch and a post-time-phase feature extraction branch arranged in parallel;
the pre-time phase feature extraction branch comprises a first convolution module and a pre-time phase feature fusion module; the first convolution module comprises R convolution layers which are sequentially cascaded;
the front time phase feature fusion module is used for acquiring a low-level front time phase feature output by a low-level convolution layer in the first convolution module and a high-level front time phase feature output by a high-level convolution layer, and fusing the low-level front time phase feature and the high-level front time phase feature to acquire front time phase feature data;
the post-time phase feature extraction branch comprises a second convolution module and a post-time phase feature fusion module; the second convolution module comprises S layers of convolution layers which are sequentially cascaded;
the post-time phase feature fusion module is used for acquiring the low-level post-time phase features output by the low-level convolution layer in the second convolution module and the high-level post-time phase features output by the high-level convolution layer, and fusing the low-level post-time phase features and the high-level post-time phase features to obtain post-time phase feature data;
The front time phase feature fusion module comprises a front time phase low-level feature processing sub-module and a front time phase high-level feature processing sub-module;
the front time phase low-level characteristic processing submodule comprises a convolution layer and a spatial attention mechanism layer which are connected in sequence;
the front time phase advanced feature processing submodule comprises a channel attention mechanism layer, a convolution layer and an up-sampling layer which are connected in sequence;
the network structure of the back time phase feature fusion module is the same as that of the front time phase feature fusion module;
the device also comprises a change detection result generation module;
the change detection result generation module is connected with the front time phase characteristic fusion module and the rear time phase characteristic fusion module, and is used for acquiring the front time phase characteristic data and the rear time phase characteristic data, and acquiring corresponding change detection results based on the difference between the front time phase characteristic data and the rear time phase characteristic data.
2. The network of claim 1, wherein the value of R is the same as the value of S;
the first convolution module and the second convolution module adopt VGG16 convolution structures.
3. The network of claim 1, wherein the low-level convolution layers in the first convolution module comprise the first r sequentially cascaded convolution layers, and the high-level convolution layers in the first convolution module comprise the last R−r sequentially cascaded convolution layers;
the low-level convolution layers in the second convolution module comprise the first s sequentially cascaded convolution layers, and the high-level convolution layers in the second convolution module comprise the last S−s sequentially cascaded convolution layers;
wherein r, R, s and S are positive integers, R is greater than r, and S is greater than s.
4. A high resolution remote sensing image change detection method, characterized in that it performs change detection on a remote sensing image to be detected currently based on the high resolution remote sensing image change detection network according to any one of claims 1 to 3, and comprises:
acquiring a front time phase remote sensing image and a rear time phase remote sensing image;
inputting the pre-time-phase remote sensing image and the post-time-phase remote sensing image into the trained remote sensing image change detection network;
the remote sensing image change detection network respectively performs feature extraction on the front-time-phase remote sensing image and the rear-time-phase remote sensing image to obtain front-time-phase feature data and rear-time-phase feature data;
based on the difference between the front time phase characteristic data and the rear time phase characteristic data, a corresponding change detection result is obtained;
the feature extraction is performed on the pre-time-phase remote sensing image and the post-time-phase remote sensing image by the remote sensing image change detection network, and the method comprises the following steps:
A front time phase characteristic fusion module in the remote sensing image change detection network acquires a low-level front time phase characteristic output by a low-level convolution layer in a first convolution module and a high-level front time phase characteristic output by a high-level convolution layer, and fuses the low-level front time phase characteristic and the high-level front time phase characteristic to acquire front time phase characteristic data;
and acquiring a low-level post-time phase characteristic output by a low-level convolution layer in a second convolution module and a high-level post-time phase characteristic output by a high-level convolution layer by a post-time phase characteristic fusion module in the remote sensing image change detection network, and fusing the low-level post-time phase characteristic and the high-level post-time phase characteristic to obtain post-time phase characteristic data.
5. The method of claim 4, wherein the pre-phase feature fusion module, when fusing the low-level pre-phase features and the high-level pre-phase features, comprises:
convolving the low-level front time phase characteristic, extracting a first characteristic from the low-level front time phase characteristic, and processing the first characteristic by using a spatial attention mechanism to obtain a corresponding second characteristic;
Processing the high-level front time phase characteristic by using a channel attention mechanism to obtain a third characteristic, and sequentially convolving and upsampling the third characteristic to obtain a fourth characteristic;
fusing the second characteristic and the fourth characteristic to obtain the pre-time phase characteristic data;
the post-time phase feature fusion module fuses the low-level post-time phase feature and the high-level post-time phase feature, and comprises:
convolving the low-level post-time phase feature, extracting a fifth feature from the low-level post-time phase feature, and processing the fifth feature by using a spatial attention mechanism to obtain a corresponding sixth feature;
processing the high-level post-time phase feature by using a channel attention mechanism to obtain a seventh feature, and sequentially convolving and upsampling the seventh feature to obtain an eighth feature;
and fusing the sixth feature and the eighth feature to obtain the post-time phase feature data.
6. The method of claim 4, wherein obtaining a corresponding change detection result based on a difference between the pre-phase feature data and the post-phase feature data comprises:
Calculating the difference between the front time phase characteristic data and the rear time phase characteristic data by adopting a Euclidean distance calculation method, and generating a corresponding distance map;
and processing the distance map by adopting a threshold segmentation method to obtain the change detection result.
7. The method of any one of claims 4 to 6, further comprising the step of training the remote sensing image change detection network;
wherein, train remote sensing image change detection network, include:
acquiring a training sample set; the training sample set comprises a plurality of groups of remote sensing image data, wherein each group of remote sensing image data comprises two remote sensing images in the same region and different time phases;
sequentially inputting each group of remote sensing image data in the training sample set into a pre-built remote sensing image change detection network model, and training the remote sensing image change detection network model to obtain the remote sensing image change detection network;
when the remote sensing image change detection network model is trained, the adopted loss function is as follows:
$$\mathrm{Loss}=\sum_{i,j}\left[\omega_1\,(1-y_{i,j})\max(d_{i,j}-0.1m,\,0)^2+\omega_2\,y_{i,j}\max(m-d_{i,j},\,0)^2\right]$$
wherein ω₁ is the weight characterizing invariant feature pairs, ω₂ is the weight characterizing variant feature pairs, 0.1·m and m are respectively the boundary values for deciding invariant sample pairs and variant sample pairs, and d_{i,j} is the Euclidean distance between the feature maps of the two images at position (i, j).
8. The high-resolution remote sensing image change detection device is characterized by comprising an image acquisition module, an image input module and an image change detection module;
the image acquisition module is configured to acquire a pre-time-phase remote sensing image and a post-time-phase remote sensing image;
the image input module is configured to input the pre-time-phase remote sensing image and the post-time-phase remote sensing image into the trained remote sensing image change detection network;
the image change detection module is configured to respectively extract characteristics of the front-time-phase remote sensing image and the rear-time-phase remote sensing image by the remote sensing image change detection network to obtain front-time-phase characteristic data and rear-time-phase characteristic data;
the image change detection module is further configured to obtain a corresponding change detection result based on the difference between the front time phase characteristic data and the rear time phase characteristic data;
the feature extraction is performed on the pre-time-phase remote sensing image and the post-time-phase remote sensing image by the remote sensing image change detection network, and the method comprises the following steps:
a pre-time-phase feature fusion module in the remote sensing image change detection network acquires low-level pre-time-phase features output by a low-level convolution layer in a first convolution module and high-level pre-time-phase features output by a high-level convolution layer, and fuses the low-level pre-time-phase features and the high-level pre-time-phase features to obtain the pre-time-phase feature data;
a post-time-phase feature fusion module in the remote sensing image change detection network acquires low-level post-time-phase features output by a low-level convolution layer in a second convolution module and high-level post-time-phase features output by a high-level convolution layer, and fuses the low-level post-time-phase features and the high-level post-time-phase features to obtain the post-time-phase feature data;
the pre-time-phase feature fusion module comprises a pre-time-phase low-level feature processing sub-module and a pre-time-phase high-level feature processing sub-module;
the pre-time-phase low-level feature processing sub-module comprises a convolution layer and a spatial attention mechanism layer connected in sequence;
the pre-time-phase high-level feature processing sub-module comprises a channel attention mechanism layer, a convolution layer and an up-sampling layer connected in sequence;
the network structure of the post-time-phase feature fusion module is the same as that of the pre-time-phase feature fusion module.
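The two-branch fusion described in the claim (low-level features through spatial attention, high-level features through channel attention plus up-sampling, then fusion) can be sketched as below. This is a structural illustration only: the claim does not specify the attention formulas, so a CBAM-style spatial gate and an SE-style channel gate are assumed, the learned convolution layers are omitted, and nearest-neighbour up-sampling and channel concatenation stand in for the unspecified up-sampling and fusion operations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Gate each spatial position by pooled channel statistics (CBAM-style assumption)."""
    # feat: (C, H, W); combine channel-wise average and max pooling, then gate
    pooled = 0.5 * (feat.mean(axis=0) + feat.max(axis=0))  # (H, W)
    return feat * sigmoid(pooled)[None, :, :]

def channel_attention(feat):
    """Gate each channel by its global average response (SE-style assumption)."""
    weights = sigmoid(feat.mean(axis=(1, 2)))  # (C,)
    return feat * weights[:, None, None]

def upsample2x(feat):
    """Nearest-neighbour up-sampling by a factor of 2 in both spatial dims."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fuse(low, high):
    """Low-level branch: spatial attention; high-level branch: channel
    attention followed by 2x up-sampling; fusion by channel concatenation."""
    low_out = spatial_attention(low)                 # (C1, H, W)
    high_out = upsample2x(channel_attention(high))   # (C2, H, W)
    return np.concatenate([low_out, high_out], axis=0)

low = np.random.rand(8, 16, 16)   # low-level (e.g. pre-time-phase) features
high = np.random.rand(16, 8, 8)   # high-level features at half resolution
fused = fuse(low, high)           # (24, 16, 16)
```

The same structure is simply instantiated twice, once per time phase, matching the claim's statement that the post-time-phase fusion module shares the pre-time-phase module's architecture.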
CN202011303619.1A 2020-11-19 2020-11-19 High-resolution remote sensing image change detection network, method and device Active CN112365525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011303619.1A CN112365525B (en) 2020-11-19 2020-11-19 High-resolution remote sensing image change detection network, method and device

Publications (2)

Publication Number Publication Date
CN112365525A CN112365525A (en) 2021-02-12
CN112365525B true CN112365525B (en) 2023-05-16

Family

ID=74534018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011303619.1A Active CN112365525B (en) 2020-11-19 2020-11-19 High-resolution remote sensing image change detection network, method and device

Country Status (1)

Country Link
CN (1) CN112365525B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807198B (en) * 2021-08-24 2023-08-22 深圳市魔方卫星科技有限公司 Road network change detection method, model training method, device, equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263705A (en) * 2019-06-19 2019-09-20 上海交通大学 Two-phase high-resolution remote sensing image change detection method for the remote sensing technology field
CN110853026A (en) * 2019-11-16 2020-02-28 四创科技有限公司 Remote sensing image change detection method integrating deep learning and region segmentation
CN110889399A (en) * 2019-12-23 2020-03-17 北京航天泰坦科技股份有限公司 High-resolution remote sensing image weak and small target detection method based on deep learning
CN110991611A (en) * 2019-11-29 2020-04-10 北京市眼科研究所 Full convolution neural network based on image segmentation
CN111429355A (en) * 2020-03-30 2020-07-17 新疆大学 Image super-resolution reconstruction method based on generation countermeasure network
CN111651504A (en) * 2020-06-03 2020-09-11 湖南大学 Multi-element time sequence multilayer space-time dependence modeling method based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
High-resolution remote sensing image change detection based on cosegmentation; Yuan Min; Xiao Pengfeng; Feng Xuezhi; Zhang Xueliang; Hu Yongyue; Journal of Nanjing University (Natural Science), No. 05; full text *


Similar Documents

Publication Publication Date Title
CN110135366B (en) Shielded pedestrian re-identification method based on multi-scale generation countermeasure network
CN110188685B (en) Target counting method and system based on double-attention multi-scale cascade network
CN108764063B (en) Remote sensing image time-sensitive target identification system and method based on characteristic pyramid
CN111723732B (en) Optical remote sensing image change detection method, storage medium and computing equipment
CN111738110A (en) Remote sensing image vehicle target detection method based on multi-scale attention mechanism
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN112861729B (en) Real-time depth completion method based on pseudo-depth map guidance
CN109636721B (en) Video super-resolution method based on countermeasure learning and attention mechanism
CN106548169A (en) Fuzzy literal Enhancement Method and device based on deep neural network
CN104616247B (en) A kind of method for map splicing of being taken photo by plane based on super-pixel SIFT
CN112488978A (en) Multi-spectral image fusion imaging method and system based on fuzzy kernel estimation
CN111179196B (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN114926511A (en) High-resolution remote sensing image change detection method based on self-supervision learning
CN114494821B (en) Remote sensing image cloud detection method based on feature multi-scale perception and self-adaptive aggregation
CN113822314A (en) Image data processing method, apparatus, device and medium
CN116071650A (en) Remote sensing image building extraction method integrating convolutional neural network and transducer
CN114663509A (en) Self-supervision monocular vision odometer method guided by key point thermodynamic diagram
CN105488759A (en) Image super-resolution reconstruction method based on local regression model
CN113610905A (en) Deep learning remote sensing image registration method based on subimage matching and application
CN106355210A (en) Method for expressing infrared image features of insulators on basis of depth neuron response modes
CN112365525B (en) High-resolution remote sensing image change detection network, method and device
CN114724021A (en) Data identification method and device, storage medium and electronic device
Ran et al. RGAN: Rethinking generative adversarial networks for cloud removal
CN107392211A (en) The well-marked target detection method of the sparse cognition of view-based access control model
CN114119987A (en) Feature extraction and descriptor generation method and system based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100075 room 1107, Haitai building, 229 beisihuanzhong Road, Haidian District, Beijing

Patentee after: Aerospace Science and Technology (Beijing) Space Information Application Co.,Ltd.

Address before: 100075 room 1107, Haitai building, 229 beisihuanzhong Road, Haidian District, Beijing

Patentee before: BEIJING AEROSPACE TITAN TECHNOLOGY Co.,Ltd.