CN116543297A - Remote sensing change detection method based on period coupling - Google Patents


Info

Publication number
CN116543297A
Authority
CN
China
Prior art keywords
feature
attention
time
remote sensing
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310281105.8A
Other languages
Chinese (zh)
Inventor
郑建炜
全玥芊
王逸彬
吴彭江
郑航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202310281105.8A priority Critical patent/CN116543297A/en
Publication of CN116543297A publication Critical patent/CN116543297A/en
Pending legal-status Critical Current

Classifications

    • G06V20/10 Terrestrial scenes (Scenes; Scene-specific elements)
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Neural network learning methods
    • G06V10/40 Extraction of image or video features
    • G06V10/806 Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 Image or video recognition or understanding using neural networks


Abstract

The invention discloses a remote sensing change detection method based on time period coupling. Feature maps at three scales are first extracted from each of the bi-temporal remote sensing images by a feature extraction module, and the bi-temporal feature maps of the same scale are fed into a period coupling attention module, which couples them and outputs a difference feature map. The three difference feature maps of each time are then fed into a feature alignment module, which aligns them to aggregate multi-level information and outputs an aligned feature map. Finally, the aligned feature maps are fed into a detection head module, which applies upsampling and convolution to each and generates the final change detection result through an argmax operation. By exploiting an attention mechanism, the invention avoids excessive attention to irrelevant regions and strengthens attention to truly changed regions; through feature alignment and cross-scale fusion it adapts to the varied target sizes in remote sensing images, obtains richer context information, and yields more accurate detection results.

Description

Remote sensing change detection method based on period coupling
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a remote sensing change detection method based on time period coupling.
Background
The purpose of change detection is to monitor differences in the same geographic area across multi-temporal remote sensing images; it is widely applied to land cover mapping, urban expansion estimation, and disaster damage assessment. With the rapid development of precision sensors, large-scale high-resolution remote sensing images have become readily available, which in turn creates a need for intelligent change monitoring. However, differences in imaging conditions caused by different acquisition times, such as seasonal changes, illumination changes, and object renovation, can interfere with change detection.
To cope with irrelevant variations and complex objects, a great deal of research based on convolutional neural networks has been attempted. The main idea is to incorporate schemes that facilitate the modeling of context information, which is crucial for distinguishing changes from unchanged objects. First and foremost is the multi-level feature fusion strategy, which combines low-level features that aid target localization with high-level features that enrich semantic information. In addition, several schemes for enlarging the receptive field have been widely studied and adopted, including deepening the network, dilated convolution, and attention mechanisms. Notably, attention mechanisms have evolved into various forms: channel attention recalibrates channel-wise feature responses, while spatial attention highlights task-relevant spatial features. Multi-level feature maps with convolution-guided local features are critical to modeling spatio-temporal context information in a change detection task. However, convolution lacks the ability to model long-range dependencies, leaving much room for improvement. Furthermore, existing change detection methods do not address direct interaction between the multi-temporal images before difference features are extracted.
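As background illustration only (not part of the claimed method), the channel attention mentioned above, which recalibrates channel-wise feature responses, can be sketched in the squeeze-and-excitation style. The weight matrices w1 and w2 here are hypothetical stand-ins for learned layers:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Minimal squeeze-and-excitation-style channel attention: global
    average pooling, a two-layer bottleneck, sigmoid gating, and
    channel-wise recalibration of the (H, W, C) input."""
    squeeze = feat.mean(axis=(0, 1))                # (C,) global context
    hidden = np.maximum(squeeze @ w1, 0.0)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # per-channel weights in (0, 1)
    return feat * gate                              # recalibrated responses
```

Because each gate value lies in (0, 1), the output only rescales channels; it never amplifies them, which is the usual recalibration behaviour of this attention family.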
Disclosure of Invention
The object of the present application is to provide a remote sensing change detection method based on time period coupling that solves the above problems by introducing a lightweight period coupling attention module, which combines multi-temporal representations with a global view and guides attention to the actually changed regions.
In order to achieve the above purpose, the technical scheme of the application is as follows:
a remote sensing change detection method based on time period coupling, comprising:
constructing and training a detection model, wherein the detection model comprises a feature extraction module, a period coupling attention module, a feature alignment module and a detection head module;
acquiring the bi-temporal remote sensing images to be detected, and extracting feature maps of each image at three scales through the feature extraction module;
inputting the bi-temporal feature maps of the same scale among the three-scale feature maps of the bi-temporal remote sensing images into the period coupling attention module, coupling them, and outputting a difference feature map;
inputting the three difference feature maps of each time into the feature alignment module, aligning them to aggregate multi-level information, and outputting an aligned feature map;
and inputting the aligned feature maps into the detection head module, performing upsampling and convolution on each, and generating the final change detection result through an argmax operation.
Further, inputting the bi-temporal feature maps of the same scale among the three-scale feature maps of the bi-temporal remote sensing images into the period coupling attention module, coupling them, and outputting a difference feature map comprises:
compressing the channel number of each same-scale bi-temporal feature map to half of the original by one layer of 3×3 convolution, obtaining the once channel-compressed feature maps;
compressing each once channel-compressed feature map to a quarter of its channel number by one layer of 1×1 convolution, obtaining the twice channel-compressed feature maps as the Q features of the cross-period attention;
applying multistage average pooling to each once channel-compressed feature map and concatenating the pooled results in the channel dimension, obtaining the concatenated features;
applying a linear transformation to each concatenated feature and compressing its channel number to a quarter, taking the resulting features as the K features of the cross-period attention;
applying a further linear transformation to each concatenated feature with the channel number unchanged, taking the resulting features as the V features of the cross-period attention;
transposing the Q features of the cross-period attention and multiplying them with the corresponding K features, obtaining the corresponding first attention maps;
coupling the first attention maps to obtain a difference attention map;
and transposing the difference attention map, multiplying it with the corresponding V features of the cross-period attention, and adding the results to the once channel-compressed feature maps to obtain the difference feature maps.
Further, inputting the three difference feature maps of each time into the feature alignment module, aligning them to aggregate multi-level information, and outputting an aligned feature map comprises:
denoting the three difference feature maps of the same time i as D_i^t, t ∈ {1,2,3}, where a larger t corresponds to a smaller scale;
upsampling the difference feature map D_i^3 to the scale of the difference feature map D_i^2, refining the features by one layer of 3×3 convolution, and adding the result to D_i^2 to obtain the feature map R_i;
and upsampling the feature map R_i to the scale of the difference feature map D_i^1, refining the features by one layer of 3×3 convolution, and adding the result to D_i^1 to obtain the aligned feature map P_i.
Further, the loss function of the detection model is:
L_total = L(P, G) + L(P_1, G) + L(P_2, G)
wherein L denotes the cross entropy loss, G denotes the ground-truth label, P denotes the change detection result, and P_1, P_2 denote the aligned feature maps corresponding to the bi-temporal remote sensing images.
The remote sensing change detection method based on time period coupling adopts a lightweight period coupling attention module and a feature alignment module on top of a dual-branch structure. First, the period coupling attention module couples the same-level feature maps of the bi-temporal images extracted by a shared convolutional neural network (CNN) backbone, combines multi-temporal representations with a global view, and outputs difference maps. Second, the feature alignment module aligns the features of the difference maps, progressively calibrating low-level features rich in spatial information against high-level features rich in semantics, so as to aggregate multi-level information. Finally, the final prediction map is generated through an argmax operation. By exploiting the attention mechanism, the method avoids excessive attention to irrelevant regions and strengthens attention to truly changed regions; through feature alignment and cross-scale fusion it adapts to the varied target sizes in remote sensing images, obtains richer context information, and yields more accurate detection results.
Drawings
FIG. 1 is a flow chart of a remote sensing change detection method based on time period coupling according to the present application;
FIG. 2 is a schematic diagram of a detection model of the present application;
FIG. 3 is a schematic diagram of a period coupling attention module according to an embodiment of the present application;
fig. 4 is a schematic diagram of a feature alignment module according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a remote sensing change detection method based on time period coupling is provided, which makes full use of period coupling and feature calibration and alignment to realize accurate change detection on bi-temporal remote sensing images. The method comprises the following steps:
and S1, constructing and training a detection model, wherein the detection model comprises a feature extraction module, a period coupling attention module, a feature alignment module and a detection head module.
The detection model constructed by the method is shown in fig. 2, and comprises a feature extraction module, a period coupling attention module, a feature alignment module and a detection head module. The feature extraction module adopts a CNN backbone network, the detection head module adopts an Argmax detection head, and the period coupling attention module and the feature alignment module are described in detail in the following specific implementation steps.
The training of the network model is a relatively mature technology in the art, and will not be described in detail here.
And S2, acquiring the bi-temporal remote sensing images to be detected, and extracting feature maps of each image at three scales through the feature extraction module.
In order to extract both the low-level and the high-level features of the optical remote sensing images, the bi-temporal remote sensing images to be detected, i.e. remote sensing images acquired at two different times over the same area, are first input into the feature extraction module of the detection model. In this embodiment the feature extraction module adopts a CNN backbone, which extracts feature maps of each of the bi-temporal remote sensing images at three scales.
And S3, inputting the bi-temporal feature maps of the same scale among the three-scale feature maps into the period coupling attention module, coupling them, and outputting a difference feature map.
In order to better acquire the contextual information of the features, in this embodiment the three-scale feature maps extracted by the CNN backbone are input into the period coupling attention module; the period coupling attention mechanism avoids excessive attention to irrelevant objects and diverts attention to the truly changed regions.
In a particular embodiment, the period coupling attention module is shown in FIG. 3; it suppresses task-independent differences and highlights truly changed regions.
The period coupling attention module in this embodiment performs the following operations:
s31, compressing the number of channels of the feature into half by a layer of 3×3 convolution respectively to obtain a feature map after one channel compression.
In this embodiment, the dual-time remote sensing image obtains three size feature images respectively, that is, two feature images of each size are respectively expressed asWherein t.epsilon. {1,2,3}, represents each size, the feature size is +.>And 1 and 2 in the subscripts respectively represent feature maps corresponding to the remote sensing images among different remote sensing images.
Features to be characterizedThe channel number of the feature is compressed to be half of the original channel number through a layer of 3X 3 convolution, and the feature after one-time channel number compression is +.>Its scale is +.>
S32, compressing each once channel-compressed feature map to a quarter of its channel number by one layer of 1×1 convolution, obtaining the twice channel-compressed feature maps as the Q features of the cross-period attention.
This step compresses the channel number of the once channel-compressed feature F_i^t to a quarter by one layer of 1×1 convolution; the output is the cross-period attention feature Q_i^t.
S33, applying multistage average pooling to the once channel-compressed feature maps and concatenating the pooled results, obtaining the concatenated features.
This step applies multistage average pooling to the once channel-compressed feature F_i^t and concatenates the pooled results, giving the concatenated feature Z_i^t.
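As a non-limiting illustration, the multistage average pooling and concatenation of step S33 can be sketched in Python with NumPy. The bin sizes (1, 2, 4, 8) are an assumed hyperparameter, not specified in this application:

```python
import numpy as np

def multistage_avg_pool(feat, bin_sizes=(1, 2, 4, 8)):
    """Pool a (H, W, C) feature map to several grid sizes and concatenate
    the flattened pooled tokens into an (S, C) matrix, where
    S = sum(b * b for b in bin_sizes). bin_sizes is an assumption."""
    h, w, c = feat.shape
    tokens = []
    for b in bin_sizes:
        pooled = np.zeros((b, b, c))
        for y in range(b):
            for x in range(b):
                # average over the (y, x) cell of a b x b grid
                ys, ye = y * h // b, (y + 1) * h // b
                xs, xe = x * w // b, (x + 1) * w // b
                pooled[y, x] = feat[ys:ye, xs:xe].mean(axis=(0, 1))
        tokens.append(pooled.reshape(b * b, c))
    return np.concatenate(tokens, axis=0)   # e.g. S = 1 + 4 + 16 + 64 = 85
```

The pooled token matrix is what the subsequent linear transformations of steps S34 and S35 act on to produce the K and V features.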
S34, applying a linear transformation to each concatenated feature and compressing its channel number to a quarter, taking the resulting features as the K features of the cross-period attention.
This step applies a linear transformation to the concatenated feature Z_i^t, compressing its channel number to a quarter; the output is the cross-period attention feature K_i^t.
S35, applying a further linear transformation to each concatenated feature with the channel number unchanged, taking the resulting features as the V features of the cross-period attention.
This step applies a further linear transformation to the concatenated feature Z_i^t with the channel number unchanged; the output is the cross-period attention feature V_i^t.
S36, transposing the Q features of the cross-period attention and multiplying them with the corresponding K features, obtaining the corresponding first attention maps.
This step transposes the cross-period attention feature Q_i^t and multiplies it with K_i^t to obtain the attention map A_i^t:
A_i^t = (Q_i^t)^T ⊗ K_i^t
wherein ⊗ denotes matrix multiplication.
S37, coupling the first attention maps to obtain the difference attention map.
This step couples the attention maps A_1^t and A_2^t of the two times into the difference attention map M^t:
M^t = softmax(abs(A_1^t − A_2^t))
where abs(·) denotes the absolute value operation and softmax(·) denotes the softmax activation function.
S38, transposing the difference attention map, multiplying it with the corresponding V features of the cross-period attention, and adding the results to the once channel-compressed feature maps to obtain the difference feature maps.
This step transposes the difference attention map M^t, multiplies it with V_i^t, and then adds the once channel-compressed feature F_i^t, outputting the difference feature map D_i^t:
D_i^t = (M^t)^T ⊗ V_i^t ⊕ F_i^t
wherein ⊕ denotes element-wise addition.
In this example, the period coupling attention module highlights the change region of the remote sensing image, and has rich semantic information.
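The Q/K/V construction, attention coupling, and residual addition of steps S31 to S38 can be sketched as follows. Flattened feature matrices, random projection matrices standing in for the learned convolution and linear layers, and a fixed token subset standing in for the pooled features are all simplifying assumptions of this sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coupled_attention(f1, f2, s_tokens=10, seed=0):
    """Sketch of the period coupling attention at one scale.
    f1, f2: (N, C) flattened bi-temporal features (channels already halved).
    The first s_tokens rows stand in for the pooled K/V token set; the
    random matrices wq, wk stand in for the 1x1-conv / linear layers."""
    rng = np.random.default_rng(seed)
    n, c = f1.shape
    wq = rng.standard_normal((c, c // 4))   # Q projection: channels -> C/4
    wk = rng.standard_normal((c, c // 4))   # K projection: channels -> C/4
    attn, vals = [], []
    for f in (f1, f2):
        z = f[:s_tokens]                    # stand-in for pooled tokens (S, C)
        attn.append((f @ wq) @ (z @ wk).T)  # attention map A: (N, S)
        vals.append(z)                      # V keeps the channel count
    m = softmax(np.abs(attn[0] - attn[1]))  # coupled difference attention (N, S)
    d1 = m @ vals[0] + f1                   # difference feature map, time 1
    d2 = m @ vals[1] + f2                   # difference feature map, time 2
    return d1, d2
```

Note the coupling property: when the two inputs are identical, the attention maps cancel, the difference attention degenerates to a uniform map, and both outputs coincide, which is consistent with the module's aim of suppressing task-independent differences.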
In order to align and fuse the features of each layer and obtain richer texture information, in this embodiment, the features processed by the period coupling attention module are input into the feature alignment module, as shown in fig. 4, and the specific steps refer to step S4.
And S4, inputting the three difference feature maps of each time into the feature alignment module, aligning them to aggregate multi-level information, and outputting an aligned feature map.
In this embodiment, three difference feature maps are obtained for each time by the processing above, denoted D_i^t, i ∈ {1,2}, t ∈ {1,2,3}: the three difference feature maps corresponding to the remote sensing image of the first time are D_1^t, t ∈ {1,2,3}, and those corresponding to the remote sensing image of the second time are D_2^t, t ∈ {1,2,3}.
The three difference feature maps at the same time are input into a feature alignment module, namely, the three difference feature maps at the first time are input into one feature alignment module, and the three difference feature maps at the second time are input into the other feature alignment module. The feature alignment module can perform alignment fusion on the clues of each level, recalibrate the response of each level difference, and as shown in fig. 4, the feature alignment module performs the following operations:
s41, representing three difference characteristic graphs at the same time i ast∈{1,2,3}。
Difference feature map processed in step S3 of this embodimenti epsilon {1,2}, t epsilon {1,2,3}, size ofThree difference feature maps at the same time i +.>An input feature alignment module.
S42, upsampling the difference feature map D_i^3 to the scale of the difference feature map D_i^2, refining the features by one layer of 3×3 convolution, and adding the result to D_i^2 to obtain the feature map R_i.
This step upsamples the smallest-scale difference feature map D_i^3 so that it is scale-aligned with D_i^2, refines the features with one layer of 3×3 convolution, and adds the result to D_i^2 to obtain the feature map R_i, i ∈ {1,2}:
R_i = Conv_3×3(UP(D_i^3)) ⊕ D_i^2
wherein Conv_3×3(·) denotes a convolution with a 3×3 kernel, UP(·) denotes the upsampling operation, and ⊕ denotes element-wise addition.
S43, upsampling the feature map R_i to the scale of the difference feature map D_i^1, refining the features by one layer of 3×3 convolution, and adding the result to D_i^1 to obtain the aligned feature map P_i.
This step upsamples R_i so that it is scale-aligned with D_i^1, refines the features with one layer of 3×3 convolution, and adds the result to D_i^1 to obtain the aligned feature map P_i, i ∈ {1,2}:
P_i = Conv_3×3(UP(R_i)) ⊕ D_i^1
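The alignment of steps S42 and S43 can be sketched with nearest-neighbour upsampling, assuming each scale halves the resolution of the previous one. The refining 3×3 convolutions are omitted (treated as identity), which is a simplification of the module described above:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def align(d1, d2, d3):
    """Sketch of the feature alignment module for one time branch.
    d1, d2, d3: difference maps from the largest to the smallest scale,
    each half the resolution of the previous one."""
    r = upsample2x(d3) + d2        # align scale 3 to scale 2, then merge
    p = upsample2x(r) + d1         # align to scale 1 -> aligned map P_i
    return p
```

Each upsample-and-add step calibrates a semantically rich low-resolution map against the spatially richer map one level above it, progressively aggregating the three levels into one aligned map.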
and S5, inputting the alignment feature map into a detection head module, respectively performing up-sampling and convolution operations, and generating a final change detection result through the operation of obtaining the maximum value parameter.
This step upsamples the aligned feature maps P_1, P_2 output by step S4, applies a 3×3 convolution to each, and generates the final change prediction result P through an argmax operation:
P = Argmax(Conv_3×3(UP(P_1)), Conv_3×3(UP(P_2)))
wherein Argmax denotes the operation of taking the index of the maximum value.
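One plausible reading of the detection head, under the assumption that the two convolved maps are fused by summation before the channel-wise argmax (the exact fusion is not spelled out here) and that a 4x nearest-neighbour upsampling restores the input resolution, can be sketched as:

```python
import numpy as np

def detection_head(p1, p2):
    """Sketch of the argmax detection head. p1, p2: (H, W, 2) aligned maps
    whose two channels are treated as no-change / change scores. The 4x
    nearest-neighbour upsampling and the summation fusion are assumptions;
    the refining convolutions are omitted in this sketch."""
    up = lambda x: x.repeat(4, axis=0).repeat(4, axis=1)
    logits = up(p1) + up(p2)            # (4H, 4W, 2) fused class scores
    return logits.argmax(axis=-1)       # 0 = unchanged, 1 = changed
```

The returned integer mask is the final change detection result: each pixel carries the index of its maximum class score.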
This embodiment designs a period coupling attention module and a feature alignment module. The period coupling attention module mainly exploits an attention mechanism to avoid excessive attention to irrelevant regions and to strengthen attention to truly changed regions. The feature alignment module adapts to the varied target sizes in remote sensing images through feature alignment and cross-scale fusion, and obtains richer context information.
In another specific embodiment, when the detection model is trained, a loss is computed between the output feature maps and the ground-truth labels, and this loss supervises the training; model training is a mature technique in the field and is not repeated here. During training, the change detection result P and the aligned feature maps P_1, P_2 are each compared with the ground-truth label to compute a loss, and the total loss function L_total is calculated as:
L_total = L(P, G) + L(P_1, G) + L(P_2, G);
wherein L denotes the cross entropy loss, G denotes the ground-truth label, P denotes the change detection result, and P_1, P_2 denote the aligned feature maps corresponding to the bi-temporal remote sensing images.
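The deep-supervision total loss above can be sketched as follows, assuming the three inputs are already softmax-normalised probability maps of shape (H, W, 2):

```python
import numpy as np

def cross_entropy(prob, label, eps=1e-9):
    """Pixel-wise cross entropy between predicted change probabilities
    (H, W, 2) and an integer ground-truth mask (H, W)."""
    h, w = label.shape
    picked = prob[np.arange(h)[:, None], np.arange(w)[None, :], label]
    return -np.log(picked + eps).mean()

def total_loss(p, p1, p2, g):
    """L_total = L(P, G) + L(P1, G) + L(P2, G): the deep-supervision sum
    over the final prediction and the two aligned feature maps."""
    return cross_entropy(p, g) + cross_entropy(p1, g) + cross_entropy(p2, g)
```

Supervising the two aligned maps in addition to the final prediction gives gradient signal to both branches of the alignment module, not only to the fused output.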
The above examples merely represent a few embodiments of the present application; although described in detail, they are not to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, and these fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is determined by the appended claims.

Claims (4)

1. The remote sensing change detection method based on the time period coupling is characterized by comprising the following steps of:
constructing and training a detection model, wherein the detection model comprises a feature extraction module, a period coupling attention module, a feature alignment module and a detection head module;
acquiring the bi-temporal remote sensing images to be detected, and extracting feature maps of each image at three scales through the feature extraction module;
inputting the bi-temporal feature maps of the same scale among the three-scale feature maps of the bi-temporal remote sensing images into the period coupling attention module, coupling them, and outputting a difference feature map;
inputting the three difference feature maps of each time into the feature alignment module, aligning them to aggregate multi-level information, and outputting an aligned feature map;
and inputting the aligned feature maps into the detection head module, performing upsampling and convolution on each, and generating the final change detection result through an argmax operation.
2. The remote sensing change detection method based on period coupling according to claim 1, wherein inputting the same-sized dual-time feature maps, among the three scales of feature maps of the dual-time remote sensing images, into the period coupling attention module, coupling the feature maps, and outputting difference feature maps comprises the steps of:
compressing the number of channels of each of the two same-sized dual-time feature maps to half of the original number through one 3×3 convolution layer, obtaining feature maps after primary channel compression;
compressing the number of channels of each feature map after primary channel compression to one quarter through one 1×1 convolution layer, obtaining feature maps after secondary channel compression, which serve as the Q features of the cross-period attention;
performing multi-level average pooling on each feature map after primary channel compression and concatenating the pooled results along the channel dimension to obtain concatenated features;
applying a linear transformation to each concatenated feature to compress the number of channels to one quarter, and taking the resulting features as the K features of the cross-period attention;
applying a further linear transformation to each concatenated feature, with the number of channels unchanged, and taking the resulting features as the V features of the cross-period attention;
transposing each Q feature of the cross-period attention and multiplying it with the corresponding K feature to obtain the respective first attention maps;
coupling the first attention maps to obtain a difference attention map;
transposing the difference attention map, multiplying it with the corresponding V features of the cross-period attention, and adding the results to the feature maps after primary channel compression to obtain the difference feature maps.
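The cross-period attention steps of claim 2 can be sketched in minimal NumPy form. This is an illustrative sketch under stated assumptions, not the patent's implementation: the 3×3/1×1 convolutions and linear transformations are stood in for by plain matrix projections, the pooled tokens are concatenated along the token axis after flattening, the coupling of the two first attention maps is assumed to be an absolute difference, and all names (`pyramid_pool`, `cross_period_attention`, the bin sizes) are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pyramid_pool(x, bins=(1, 2, 4)):
    """Multi-level average pooling of (C, H, W) into a (C, N) token bank."""
    C, H, W = x.shape
    outs = []
    for b in bins:
        hs, ws = H // b, W // b
        pooled = x[:, :hs * b, :ws * b].reshape(C, b, hs, b, ws).mean(axis=(2, 4))
        outs.append(pooled.reshape(C, -1))
    return np.concatenate(outs, axis=1)  # N = sum(b*b for b in bins)

rng = np.random.default_rng(0)
C, H, W = 8, 8, 8
Wq = rng.standard_normal((C // 4, C)) * 0.1  # stand-in for the 1x1 conv (Q)
Wk = rng.standard_normal((C // 4, C)) * 0.1  # stand-in for the K linear map
Wv = rng.standard_normal((C, C)) * 0.1       # stand-in for the V linear map

def qkv(x):
    q = Wq @ x.reshape(C, -1)   # Q from the full-resolution features
    bank = pyramid_pool(x)      # K, V from the pooled token bank
    return q, Wk @ bank, Wv @ bank

def cross_period_attention(x1, x2):
    q1, k1, v1 = qkv(x1)
    q2, k2, v2 = qkv(x2)
    a1 = softmax(q1.T @ k1)            # first attention map, time 1
    a2 = softmax(q2.T @ k2)            # first attention map, time 2
    diff = np.abs(a1 - a2)             # coupled "difference attention map"
    d1 = (v1 @ diff.T).reshape(C, H, W) + x1  # attend, then residual add
    d2 = (v2 @ diff.T).reshape(C, H, W) + x2
    return d1, d2
```

Note that with identical inputs the difference attention map is all zeros, so each output reduces to its residual input — consistent with the intent that only changed regions are highlighted.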
3. The remote sensing change detection method based on period coupling according to claim 1, wherein inputting the three difference feature maps at either time into the feature alignment module, aligning the difference feature maps to achieve multi-level information aggregation, and outputting an alignment feature map comprises the steps of:
denoting the three difference feature maps at time i as D_i^t, t ∈ {1, 2, 3}, ordered from the finest scale (t = 1) to the coarsest (t = 3);
upsampling the difference feature map D_i^3 to align its scale with that of D_i^2, refining the features through one 3×3 convolution layer, and adding the result to D_i^2 to obtain the intermediate feature map M_i;
upsampling the feature map M_i to align its scale with that of D_i^1, refining the features through one 3×3 convolution layer, and adding the result to D_i^1 to obtain the alignment feature map P_i.
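The coarse-to-fine alignment of claim 3 can be sketched as follows. Assumptions in this sketch: nearest-neighbour upsampling stands in for the patent's upsampling operation, the 3×3 refinement convolution is replaced by an identity for brevity, a 2× scale ratio between adjacent levels is assumed, and the function names are illustrative:

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour upsampling: (C, H, W) -> (C, 2H, 2W)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def feature_align(d1, d2, d3):
    """d1 is the finest difference map, d3 the coarsest; returns P_i."""
    m = d2 + upsample2x(d3)   # scale-align d3 with d2, then add
    return d1 + upsample2x(m)  # scale-align the result with d1, then add
```

The output therefore has the spatial resolution of the finest difference map while aggregating information from all three levels.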
4. The remote sensing change detection method based on period coupling according to claim 1, wherein the loss function of the detection model is:
L_total = L(P, G) + L(P_1, G) + L(P_2, G)
wherein L denotes the cross-entropy loss, G denotes the ground-truth label, P denotes the change detection result, and P_1, P_2 denote the alignment feature maps corresponding to the dual-time remote sensing images.
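The total loss of claim 4 can be written out directly. In this hedged NumPy sketch, a per-pixel two-class cross-entropy stands in for L, and P, P_1, P_2 are treated as pixel-wise change logits (the mapping from the alignment feature maps to logits is assumed, not specified here):

```python
import numpy as np

def pixel_cross_entropy(logits, g):
    """logits: (2, H, W) change/no-change scores; g: (H, W) labels in {0, 1}."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    p = e / e.sum(axis=0, keepdims=True)           # per-pixel softmax
    h, w = g.shape
    rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
    return -np.log(p[g, rows, cols]).mean()        # pick prob of the true class

def total_loss(p, p1, p2, g):
    # L_total = L(P, G) + L(P1, G) + L(P2, G)
    return sum(pixel_cross_entropy(x, g) for x in (p, p1, p2))
```

The two auxiliary terms supervise the per-time alignment outputs with the same label map, which is a standard deep-supervision arrangement.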
CN202310281105.8A 2023-03-15 2023-03-15 Remote sensing change detection method based on period coupling Pending CN116543297A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310281105.8A CN116543297A (en) 2023-03-15 2023-03-15 Remote sensing change detection method based on period coupling

Publications (1)

Publication Number Publication Date
CN116543297A true CN116543297A (en) 2023-08-04

Family

ID=87454990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310281105.8A Pending CN116543297A (en) 2023-03-15 2023-03-15 Remote sensing change detection method based on period coupling

Country Status (1)

Country Link
CN (1) CN116543297A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117496362A (en) * 2024-01-02 2024-02-02 环天智慧科技股份有限公司 Land coverage change detection method based on self-adaptive convolution kernel and cascade detection head
CN117496362B (en) * 2024-01-02 2024-03-29 环天智慧科技股份有限公司 Land coverage change detection method based on self-adaptive convolution kernel and cascade detection head

Similar Documents

Publication Publication Date Title
CN111783590A (en) Multi-class small target detection method based on metric learning
CN110956581B (en) Image modality conversion method based on dual-channel generation-fusion network
CN114926746A (en) SAR image change detection method based on multi-scale differential feature attention mechanism
CN110245683B (en) Residual error relation network construction method for less-sample target identification and application
CN104486562B (en) Embedded infrared image superframe processing method based on the fixed time of integration
CN116543297A (en) Remote sensing change detection method based on period coupling
CN110598748A (en) Heterogeneous image change detection method and device based on convolutional neural network fusion
CN114820655A (en) Weak supervision building segmentation method taking reliable area as attention mechanism supervision
CN116342894B (en) GIS infrared feature recognition system and method based on improved YOLOv5
CN113012208A (en) Multi-view remote sensing image registration method and system
CN110111276A Hyperspectral remote sensing image target super-resolution method based on deep exploitation of spatial-spectral information
CN116071424A (en) Fruit space coordinate positioning method based on monocular vision
CN110503092B (en) Improved SSD monitoring video target detection method based on field adaptation
CN111881915A (en) Satellite video target intelligent detection method based on multiple prior information constraints
CN117274627A (en) Multi-temporal snow remote sensing image matching method and system based on image conversion
CN111696167A (en) Single image super-resolution reconstruction method guided by self-example learning
CN116469172A (en) Bone behavior recognition video frame extraction method and system under multiple time scales
CN116433528A (en) Image detail enhancement display method and system for target area detection
CN116580289A (en) Fine granularity image recognition method based on attention
CN115147727A (en) Method and system for extracting impervious surface of remote sensing image
CN112085779B (en) Wave parameter estimation method and device
CN115496788A (en) Deep completion method using airspace propagation post-processing module
Varma et al. HSIS-Net: Hyperspectral Image Segmentation Using Multi-view Active Learning Based FCSN.
CN114529455A (en) Task decoupling-based parameter image super-resolution method and system
CN113763471A (en) Visual-based bullet hole detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination