CN116052007B - Remote sensing image change detection method integrating time and space information - Google Patents

Remote sensing image change detection method integrating time and space information

Info

Publication number
CN116052007B
CN116052007B (application CN202310322581.XA)
Authority
CN
China
Prior art keywords
descriptors
semantic features
remote sensing
order semantic
space information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310322581.XA
Other languages
Chinese (zh)
Other versions
CN116052007A (en)
Inventor
孙启玉
刘玉峰
孙平
杨公平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Fengshi Information Technology Co ltd
Original Assignee
Shandong Fengshi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Fengshi Information Technology Co ltd filed Critical Shandong Fengshi Information Technology Co ltd
Priority to CN202310322581.XA priority Critical patent/CN116052007B/en
Publication of CN116052007A publication Critical patent/CN116052007A/en
Application granted granted Critical
Publication of CN116052007B publication Critical patent/CN116052007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/182Network patterns, e.g. roads or rivers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a remote sensing image change detection method integrating time and space information, and belongs to the technical field of remote sensing image change detection. Remote sensing images of the same region in different periods are input into the same feature extraction network to obtain original high-order semantic features for each period; descriptors of the high-order semantic features are generated; the information of the descriptors is fused; the fused descriptors are further fused with the original high-order semantic features; finally, the high-order semantic features of the different periods, now fused with time and space information, are input into a change detector for change detection. The invention reduces the amount of calculation, ensures the consistency of the feature expression of remote sensing ground objects across different areas and times, further improves the efficiency of change detection, and provides more favorable technical support for river and lake management.

Description

Remote sensing image change detection method integrating time and space information
Technical Field
The invention relates to a remote sensing image change detection method integrating time and space information, and belongs to the technical field of remote sensing image change detection.
Background
River and lake management is an important factor affecting social and economic development, and it is widely recognized in the industry that remote sensing image analysis can assist river and lake management. With the progress of earth observation technologies such as remote sensing satellites and unmanned aerial vehicles, and the development of deep learning, remote sensing image analysis based on deep learning has been widely applied in the field of river and lake management.
Change detection is a mainstream deep learning technique in the field of river and lake management; its aim is to assign binary labels, namely changed and unchanged, to remote sensing images captured at different times of the same area. The substantive meaning of change detection differs across scenarios: in an urban construction scenario, newly added buildings are identified as changed and all other changes as unchanged; in a deforestation scenario, felled trees are identified as changed and all other changes as unchanged.
With the development of deep learning, methods that extract features with deep convolutional networks and enhance them with attention mechanisms have been successfully applied to remote sensing image change detection and have shown good performance. However, because of the complex scenes in high-resolution remote sensing images and unexpected imaging conditions, the same object produces large spectral differences at different times and spatial positions, and this spectral difference seriously affects the accuracy of change detection. Meanwhile, attention-based feature enhancement is computationally inefficient and complex, and its complexity grows further as the image size increases, which hinders improvement of the efficiency of change detection. At present, long-distance temporal and spatial dependencies are still difficult to obtain in remote sensing image change detection, because many methods either apply an attention mechanism to each temporal image separately to enhance its features, or simply use the attention mechanism to weight features in the channel or spatial dimension; at the same time, methods that apply attention in both the temporal and spatial dimensions require high computational complexity and have low computational efficiency.
Disclosure of Invention
The invention aims to overcome the defects and provide a remote sensing image change detection method integrating time and space information, which fuses characteristic information of remote sensing ground objects in different periods through a self-attention mechanism, and then further enhances the characteristics of the remote sensing ground objects in different times and spaces by utilizing a cross-attention mechanism, thereby ensuring that the characteristic expression of the same ground object in different positions and different times is more consistent, further improving the change detection efficiency and providing more favorable technical support for river and lake management.
The technical scheme adopted by the invention is as follows:
a remote sensing image change detection method integrating time and space information comprises the following steps:
s1, extracting original high-order semantic features: respectively inputting the remote sensing images of different periods in the same region into the same feature extraction network to perform feature extraction to obtain original high-order semantic features of different periods;
s2, generating descriptors of high-order semantic features: processing original high-order semantic features in different periods in the same way, generating descriptors of the high-order semantic features in different periods in a calculation mode of a simulated attention mechanism, and reducing feature dimensions participating in calculation;
s3, information fusion of descriptors: combining descriptors of high-order semantic features in different periods, inputting the combined descriptors into a self-attention mechanism for feature enhancement to obtain descriptors fused with time and space information features;
s4, further information fusion of descriptors and original high-order semantic features: splitting the descriptors fused with the time and space information features, respectively restoring the descriptors into the dimensions of descriptors fused with the time and space information features, and inputting the original high-order semantic features in the same period and the split descriptors fused with the time and space information features into a cross attention mechanism to obtain the high-order semantic features fused with the time and space information;
s5, inputting the high-order semantic features fused with the time and space information in different periods into a change detector for change detection.
In the method, the feature extraction network in step S1 selects a residual network ResNet-50, and the ResNet-50 constructs a residual structure between input and output and is divided into four stages to respectively generate four-scale feature information.
The generation steps of the descriptors in step S2 are as follows:
1) Generate a set of spatial attention maps of dimension L×H×W by applying a learned 1×1 convolution to the original high-order semantic features, where L is smaller than C;
2) Flatten the generated L×H×W spatial attention maps in the spatial dimension, expanding them into L×HW; flatten the original high-order semantic features of dimension C×H×W in the spatial dimension, expanding them into C×HW, and then transpose them into HW×C;
3) Perform matrix multiplication between the flattened spatial attention maps (L×HW) and the flattened and transposed high-order semantic features (HW×C) to obtain the descriptors of the high-order semantic features, of dimension L×C.
The descriptors of the high-order semantic features of different periods described in step S3 are merged by unfolding the two L×C descriptors into one-dimensional features and splicing them together, obtaining a merged descriptor of length 2LC.
The self-attention mechanism described in step S3 and the cross-attention mechanism described in step S4 follow the following calculation formula:
Attention(Q, K, V) = softmax(QK^T / √d_k) · V
wherein Q, K and V are the matrices to be calculated, corresponding respectively to the query matrix (Query), the key matrix (Key) and the value matrix (Value); √d_k is the scaling factor, with d_k determined from the dimension of K; softmax is the softmax function. If Q, K and V are converted from the same variable, the calculation is the self-attention mechanism described in step S3; if Q is converted from a different variable than K and V, the calculation is the cross-attention mechanism described in step S4.
The change detector in step S5 takes as input the high-order semantic feature information of different periods fused with time and space information, and computes the absolute value of the difference of the two to obtain the change of the same area; the difference result is sent into a convolution layer that changes its number of channels to 2, and the convolution result is normalized to 0-1 with a softmax function, where channel one represents the probability of change and channel two the probability of no change; for each pixel point, the channel with the larger probability is taken as the change detection result, finally obtaining the change detection result of the whole image.
It is another object of the present invention to provide a storage device which is a computer readable storage device having stored thereon a computer program for implementing the steps of a remote sensing image change detection method of fusing temporal and spatial information as described above.
It is still another object of the present invention to provide a remote sensing image change detecting apparatus for fusing time and space information, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, which when executing the program implements a remote sensing image change detecting method for fusing time and space information as described above.
The beneficial effects of the invention are as follows:
The method fully mines the information contained in the paired images in change detection, effectively extracts the features of remote sensing ground objects, and efficiently models their spatial and temporal information. By generating descriptors corresponding to the high-order semantic features, the invention simply and efficiently avoids the heavy computation of the usual attention calculation while still ensuring the extraction and utilization of both kinds of information; moreover, the descriptor generation framework is general and can be efficiently migrated to other attention-mechanism calculations, reducing the overall computational load of the algorithm. By fusing remote sensing ground object feature information of different spaces and different times through the self-attention and cross-attention mechanisms and enhancing the original features, the invention ensures the consistency of the feature expression of remote sensing ground objects across different areas and times, further improves the efficiency of change detection, and provides more favorable technical support for river and lake management.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a network structure of a method model according to the present invention;
FIG. 3 is a schematic diagram of the self-attention mechanism of the present invention;
FIG. 4 is a schematic diagram of the cross-attention mechanism of the present invention.
Detailed Description
The present invention will be further described in detail below with reference to specific embodiments and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent.
Example 1: a remote sensing image change detection method integrating time and space information comprises the following steps:
s1, extracting original high-order semantic features:
A pair of labeled remote sensing images, image A and image B (as shown in FIG. 1), is obtained; each pair consists of remote sensing images captured in the same region at different periods. The residual network ResNet-50 is selected as the feature extractor; ResNet-50 constructs residual structures between input and output, making the training of deep networks possible. To ensure the consistency of feature extraction, the invention uses the same network to extract features from both images. Further, ResNet-50 is divided into four stages that generate feature information at four scales, so as to extract high-order semantic information of the images and fuse temporal and spatial information. As shown in FIG. 2, after ResNet-50 feature extraction, image A and image B yield original high-order semantic features of the same dimension, denoted C×H×W.
S2, generating descriptors of high-order semantic features:
To model the long-distance dependencies within the bi-temporal images in an efficient manner, and with computational efficiency in mind, the invention imitates the calculation mode of an attention mechanism to generate a descriptor (of dimension L×C) for each image's original high-order semantic features after they are obtained, thereby further reducing the feature dimensions participating in the calculation. The descriptor is generated as follows:
1) Generate a set of spatial attention maps of dimension L×H×W by applying a learned 1×1 convolution to the original high-order semantic features, where L is smaller than C;
2) Flatten the generated L×H×W spatial attention maps in the spatial dimension, expanding them into L×HW; flatten the original high-order semantic features of dimension C×H×W in the spatial dimension, expanding them into C×HW, and then transpose them into HW×C;
3) Perform matrix multiplication between the flattened spatial attention maps (L×HW) and the flattened and transposed high-order semantic features (HW×C) to obtain the descriptors of the high-order semantic features, of dimension L×C.
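The three steps above can be sketched in a few lines. The sizes C, H, W, L and all variable names are illustrative assumptions, not values from the patent; the point is the shape bookkeeping: the descriptor has L tokens instead of HW, so later attention over it is much cheaper.

```python
import torch
import torch.nn as nn

C, H, W, L = 512, 16, 16, 32                  # illustrative sizes, L < C
features = torch.randn(1, C, H, W)            # original high-order semantic features

attn_conv = nn.Conv2d(C, L, kernel_size=1)    # step 1: learned 1x1 conv -> L x H x W maps
attn = attn_conv(features).flatten(2)         # step 2: flatten to L x HW
feats = features.flatten(2).transpose(1, 2)   # step 2: C x HW, transposed to HW x C

descriptor = torch.bmm(attn, feats)           # step 3: (L x HW) @ (HW x C) = L x C
```

Each of the L rows of the descriptor is a weighted sum of the HW spatial feature vectors, the weights being one learned attention map.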
S3, information fusion of descriptors:
after descriptors of high-order semantic features of the remote sensing image in different periods are obtained respectively in the same way, the invention hopes to obtain global semantic relations based on time and space of the descriptors, so that richer semantic features are generated for the remote sensing image in each period.
At present, the common way for deep-learning-based algorithms to obtain long-distance dependencies is to introduce an attention mechanism. Generally, the attention mechanism follows the following calculation formula:
Attention(Q, K, V) = softmax(QK^T / √d_k) · V
wherein Q, K and V are the matrices to be calculated, corresponding respectively to the query matrix (Query), the key matrix (Key) and the value matrix (Value); √d_k is the scaling factor, with d_k determined from the dimension of K; softmax is the softmax function. In the calculation of the attention mechanism, the query matrix is multiplied by the key matrix: the element at each position of the query matrix is computed against all keys, determining the importance of the current element with respect to each key. After normalization by the softmax function the result is distributed between 0 and 1; this intermediate result is called the attention distribution. Multiplying the attention distribution by the value matrix implements the weighting of the value matrix. Through the whole attention calculation, the query and weighting operations extract more key information from the original features, which can be regarded as an information enhancement of the original features. If Q, K and V are converted from the same variable, this calculation is called the self-attention mechanism, as shown in FIG. 3, where W_Q, W_K and W_V are the mapping matrices that generate the three matrices from the original features; if Q is converted from a different variable than K and V, this calculation is called the cross-attention mechanism, as shown in FIG. 4, where W_Q is the mapping matrix generating Q from the original features, and W_K and W_V are the mapping matrices generating K and V from the descriptors. The two mechanisms differ only in the sources of the Q, K and V matrices; both serve to enhance the original features.
Utilizing this principle, the specific flow of the invention is shown in FIG. 2. First, the descriptors of the remote sensing images of different periods are merged: the two descriptors of dimension L×C are unfolded into one-dimensional features and spliced together, obtaining a merged descriptor of length 2LC. The self-attention mechanism is then used to enhance the merged descriptor, i.e., the merged descriptor serves as the input of the self-attention calculation. Because the input of the self-attention mechanism is the merged descriptors of different times, this step enhances the spatial information of the descriptors while also taking their temporal information into account, realizing the fusion of temporal and spatial information.
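The merge-and-self-attend step might look like the sketch below, under one plausible reading of the text: the two L×C descriptors are concatenated along the token axis into 2L tokens of width C, and Q, K, V are all projected from this merged descriptor. The projection layers and sizes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

L, C = 32, 512                                # illustrative sizes
desc_a = torch.randn(1, L, C)                 # descriptor, period T1
desc_b = torch.randn(1, L, C)                 # descriptor, period T2
merged = torch.cat([desc_a, desc_b], dim=1)   # 2L joint spatio-temporal tokens

w_q = torch.nn.Linear(C, C)                   # self-attention: Q, K, V all come
w_k = torch.nn.Linear(C, C)                   # from the SAME merged descriptor
w_v = torch.nn.Linear(C, C)

q, k, v = w_q(merged), w_k(merged), w_v(merged)
scores = q @ k.transpose(1, 2) / C ** 0.5     # scaled dot-product, 2L x 2L
enhanced = F.softmax(scores, dim=-1) @ v      # descriptor fused with time + space info
```

Because attention runs over only 2L tokens rather than 2·HW pixels, the quadratic cost of self-attention stays small.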
S4, further information fusion of descriptors and original high-order semantic features:
after obtaining the descriptors fused with the time and space information features, the invention splits the descriptors according to the merging positions and respectively restores the descriptors to the dimensions of the original descriptors, and the specific operation is shown in fig. 2.
At this point, a semantically rich descriptor has been obtained for the remote sensing image of each period; these descriptors are compact and simultaneously contain high-level temporal and spatial semantic information. The invention then projects the descriptors back onto the original high-order semantic features through a cross-attention mechanism, so that the original high-order semantic features at different times contain richer spatio-temporal information. As shown in FIG. 2, the original high-order semantic features and the descriptor fused with temporal and spatial information features are taken as the inputs of the cross-attention calculation, which follows the attention formula, as shown in FIG. 4.
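A sketch of this cross-attention, with assumed sizes and layer names: queries come from the flattened original features (HW tokens), while keys and values come from the split, restored descriptor (L tokens), matching the W_Q / W_K, W_V roles of FIG. 4.

```python
import torch
import torch.nn.functional as F

C, H, W, L = 512, 16, 16, 32                  # illustrative sizes
features = torch.randn(1, C, H, W)            # original high-order semantic features
descriptor = torch.randn(1, L, C)             # split descriptor for this period

tokens = features.flatten(2).transpose(1, 2)  # HW x C query tokens
w_q = torch.nn.Linear(C, C)
w_k = torch.nn.Linear(C, C)
w_v = torch.nn.Linear(C, C)

q = w_q(tokens)                               # Q from the original features
k, v = w_k(descriptor), w_v(descriptor)       # K, V from the descriptor
scores = q @ k.transpose(1, 2) / C ** 0.5     # attention matrix is only HW x L
out = F.softmax(scores, dim=-1) @ v           # HW x C enhanced tokens
fused = out.transpose(1, 2).reshape(1, C, H, W)  # restore to C x H x W
```

The HW×L attention matrix (rather than HW×HW) is where the claimed reduction of computational complexity comes from.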
Through the information fusion, the original high-order semantic features corresponding to each period are fused with additional space-time information, so that the feature expressions of the same ground feature at different positions are more consistent, the feature expressions of the same ground feature at different periods are more similar, and the consistency of the feature expressions of the remote sensing ground feature in different areas and at different times is ensured. And because of the existence of descriptors with extremely small dimensions, the complexity of attention calculation related by the invention is greatly reduced, and the efficiency of remote sensing ground object information fusion and feature enhancement is greatly improved.
S5, inputting the high-order semantic features fused with time and space information in different periods into a change detector for change detection:
The high-order semantic feature information corresponding to the remote sensing images of different periods and fused with temporal and spatial information is taken as input, and the absolute value of the difference of the two is computed to obtain the change of the same area. The difference result is sent into a convolution layer that changes its number of channels to 2; after the convolution result is normalized to 0-1 with a softmax function, the first channel represents the probability of change and the second channel the probability of no change. For each pixel point, the channel with the larger probability is taken as the change detection result, finally yielding the change detection result of the whole image.
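The change detector described above can be sketched as follows; the layers are untrained and the names illustrative, so the output here is only shape-correct, not a meaningful prediction.

```python
import torch
import torch.nn as nn

C, H, W = 512, 16, 16
feat_a = torch.randn(1, C, H, W)              # fused features, period T1
feat_b = torch.randn(1, C, H, W)              # fused features, period T2

diff = torch.abs(feat_a - feat_b)             # absolute difference = change signal
head = nn.Conv2d(C, 2, kernel_size=1)         # channel 0: changed, channel 1: unchanged
probs = torch.softmax(head(diff), dim=1)      # per-pixel probabilities in 0..1
change_map = probs.argmax(dim=1)              # keep the channel with larger probability
```

`change_map` is the binary H×W change-detection result for the whole image.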
Example 2:
a storage device which is a computer readable storage device having stored thereon a computer program for implementing the steps in the remote sensing image change detection method of fusing temporal and spatial information as described in the above embodiment 1.
A remote sensing image change detection apparatus that fuses time and space information, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the remote sensing image change detection method that fuses time and space information as described in embodiment 1 above when executing the program.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalents, and improvements made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. A remote sensing image change detection method integrating time and space information is characterized by comprising the following steps:
s1, extracting original high-order semantic features: respectively inputting the remote sensing images of different periods in the same region into the same feature extraction network to perform feature extraction to obtain original high-order semantic features of different periods;
s2, generating descriptors of high-order semantic features: processing original high-order semantic features in different periods in the same way, generating descriptors of the high-order semantic features in different periods in a calculation mode of a simulated attention mechanism, and reducing feature dimensions participating in calculation;
the generation steps of the descriptor are as follows:
1) generating a set of spatial attention maps of dimension L×H×W by applying a learned 1×1 convolution to the original high-order semantic features, where L is smaller than C;
2) flattening the generated L×H×W spatial attention maps in the spatial dimension, expanding them into L×HW; flattening the original high-order semantic features of dimension C×H×W in the spatial dimension, expanding them into C×HW, and then transposing them into HW×C;
3) performing matrix multiplication between the flattened spatial attention maps (L×HW) and the flattened and transposed high-order semantic features (HW×C) to obtain the descriptors of the high-order semantic features, of dimension L×C;
s3, information fusion of descriptors: combining descriptors of high-order semantic features in different periods, inputting the combined descriptors into a self-attention mechanism for feature enhancement to obtain descriptors fused with time and space information features;
the descriptors of the high-order semantic features of different periods are merged by unfolding the two descriptors of dimension L×C into one-dimensional features and splicing them together, obtaining a merged descriptor of length 2LC;
S4, further information fusion of descriptors and original high-order semantic features: splitting the descriptor fused with the time and space information features at its merge position, restoring each part to the dimensions of the original descriptors, and inputting the original high-order semantic features of the same period together with the corresponding split descriptor into a cross attention mechanism to obtain high-order semantic features fused with time and space information;
s5, inputting the high-order semantic features fused with the time and space information in different periods into a change detector for change detection.
2. The method for detecting changes in remote sensing images with fusion of time and space information according to claim 1, wherein the feature extraction network in step S1 selects a residual network ResNet-50.
3. The method for detecting a change in a remote sensing image by fusing time and space information according to claim 1, wherein the self-attention mechanism in step S3 and the cross-attention mechanism in step S4 follow the following calculation formula:
Attention(Q, K, V) = softmax(QK^T / √d_k) · V
wherein Q, K and V are the matrices to be calculated, corresponding respectively to the query matrix, the key matrix and the value matrix; √d_k is the scaling factor, with d_k determined from the dimension of K; softmax is the softmax function; if Q, K and V are converted from the same variable, the calculation is the self-attention mechanism described in step S3; if Q is converted from a different variable than K and V, the calculation is the cross-attention mechanism described in step S4.
4. The method for detecting the change of the remote sensing image by fusing time and space information according to claim 1, wherein the change detector in step S5 takes as input the high-order semantic feature information of different periods fused with time and space information, and computes the absolute value of the difference of the two to obtain the change of the same area; the difference result is sent into a convolution layer that changes its number of channels to 2, and the convolution result is normalized to 0-1 with a softmax function, where channel one represents the probability of change and channel two the probability of no change; for each pixel point, the channel with the larger probability is taken as the change detection result, finally obtaining the change detection result of the whole image.
5. A storage device, which is a computer readable storage device, wherein the computer readable storage device stores a computer program for implementing the steps of a remote sensing image change detection method for fusing time and space information according to any one of claims 1 to 4.
6. A remote sensing image change detection device for fusing time and space information, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements a remote sensing image change detection method for fusing time and space information according to any one of claims 1-4 when executing the program.
CN202310322581.XA 2023-03-30 2023-03-30 Remote sensing image change detection method integrating time and space information Active CN116052007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310322581.XA CN116052007B (en) 2023-03-30 2023-03-30 Remote sensing image change detection method integrating time and space information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310322581.XA CN116052007B (en) 2023-03-30 2023-03-30 Remote sensing image change detection method integrating time and space information

Publications (2)

Publication Number Publication Date
CN116052007A CN116052007A (en) 2023-05-02
CN116052007B true CN116052007B (en) 2023-08-11

Family

ID=86129807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310322581.XA Active CN116052007B (en) 2023-03-30 2023-03-30 Remote sensing image change detection method integrating time and space information

Country Status (1)

Country Link
CN (1) CN116052007B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949838A (en) * 2021-04-15 2021-06-11 陕西科技大学 Convolutional neural network based on four-branch attention mechanism and image segmentation method
CN113139969A (en) * 2021-05-17 2021-07-20 齐鲁工业大学 Attention mechanism-based weak supervision image semantic segmentation method and system
CA3121440A1 (en) * 2021-05-10 2021-11-16 Cheng Jun Chen Assembly body change detection method, device and medium based on attention mechanism
CN113706482A (en) * 2021-08-16 2021-11-26 武汉大学 High-resolution remote sensing image change detection method
WO2022073452A1 (en) * 2020-10-07 2022-04-14 武汉大学 Hyperspectral remote sensing image classification method based on self-attention context network
CN114372173A (en) * 2022-01-11 2022-04-19 中国人民公安大学 Natural language target tracking method based on Transformer architecture
WO2022227913A1 (en) * 2021-04-25 2022-11-03 浙江师范大学 Double-feature fusion semantic segmentation system and method based on internet of things perception
CN115331087A (en) * 2022-10-11 2022-11-11 水利部交通运输部国家能源局南京水利科学研究院 Remote sensing image change detection method and system fusing regional semantics and pixel characteristics
US11521379B1 (en) * 2021-09-16 2022-12-06 Nanjing University Of Information Sci. & Tech. Method for flood disaster monitoring and disaster analysis based on vision transformer
CN115457390A (en) * 2022-09-13 2022-12-09 中国人民解放军国防科技大学 Remote sensing image change detection method and device, computer equipment and storage medium
KR102479817B1 (en) * 2021-11-25 2022-12-21 인하대학교 산학협력단 Vision Transformer Apparatus for Small Dataset and Method of Operation
CN115690002A (en) * 2022-10-11 2023-02-03 河海大学 Remote sensing image change detection method and system based on Transformer and dense feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Enhanced feature descriptors based on attention mechanism; Chen Jia et al.; Computer Engineering; Vol. 47, No. 5; pp. 260-266 *

Also Published As

Publication number Publication date
CN116052007A (en) 2023-05-02

Similar Documents

Publication Publication Date Title
Chen et al. The face image super-resolution algorithm based on combined representation learning
CN113486190B (en) Multi-mode knowledge representation method integrating entity image information and entity category information
CN113554032B (en) Remote sensing image segmentation method based on multi-path parallel network of high perception
CN115345866B (en) Building extraction method in remote sensing image, electronic equipment and storage medium
CN112001931A (en) Image segmentation method, device, equipment and storage medium
Chen et al. ASF-Net: Adaptive screening feature network for building footprint extraction from remote-sensing images
CN116740527A (en) Remote sensing image change detection method combining U-shaped network and self-attention mechanism
CN115965789A (en) Scene perception attention-based remote sensing image semantic segmentation method
Wang et al. Global contextual guided residual attention network for salient object detection
CN117033609B (en) Text visual question-answering method, device, computer equipment and storage medium
CN113066089B (en) Real-time image semantic segmentation method based on attention guide mechanism
CN111967516B (en) Pixel-by-pixel classification method, storage medium and classification equipment
CN113096133A (en) Method for constructing semantic segmentation network based on attention mechanism
CN116052007B (en) Remote sensing image change detection method integrating time and space information
CN117197632A (en) Transformer-based electron microscope pollen image target detection method
Yu et al. MagConv: Mask-guided convolution for image inpainting
CN114529450B (en) Face image super-resolution method based on improved depth iteration cooperative network
Zhang et al. A multi-cue guidance network for depth completion
CN113177546A (en) Target detection method based on sparse attention module
Zeng et al. Swin-CasUNet: cascaded U-Net with Swin Transformer for masked face restoration
Li et al. Global information progressive aggregation network for lightweight salient object detection
An et al. Generating infrared image from visible image using Generative Adversarial Networks
Zhang et al. ESDINet: Efficient Shallow-Deep Interaction Network for Semantic Segmentation of High-Resolution Aerial Images
Li et al. Learning to capture dependencies between global features of different convolution layers
Zhu et al. A Remote Sensing Image Segmentation Method Based on Fusion Mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant