CN116012364B - SAR image change detection method and device
- Publication number: CN116012364B
- Application number: CN202310101309A
- Authority: CN (China)
- Prior art keywords: target, SAR image, time phase, sample, feature
- Legal status: Active
Abstract
The embodiment of the invention discloses a SAR image change detection method and device. The method comprises the following steps: acquiring target SAR images of two time phases; and extracting features of the images through a feature extraction network. The feature extraction network includes a first feature extraction module, used for extracting features of each image to obtain a first feature map, and a second feature extraction module comprising a channel attention module and a spatial attention module. The channel attention module processes the first feature map to determine a channel attention weight and weights the first feature map with it to obtain a second feature map; the spatial attention module processes the second feature map to determine a spatial attention weight and weights the second feature map with it to obtain a target feature map. Differential analysis is performed on the two target feature maps, and the target image area in which the later time phase has changed relative to the earlier time phase is determined from the resulting target difference map. With this method, the change detection accuracy for SAR images can be improved.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a SAR image change detection method and device.
Background
Remote sensing image change detection extracts surface change information by computing and analyzing the differences between images of the same geographic area acquired at different times, and is widely applied in fields such as agricultural survey, urban expansion, disaster prevention and early warning, and disaster assessment. Compared with ordinary optical remote sensing images, a synthetic aperture radar (Synthetic Aperture Radar, SAR) system uses active microwave remote sensing with pulse compression and the synthetic aperture principle to improve range and azimuth resolution; it can acquire image data with a short revisit period and wide coverage in all weather, day and night, avoids the difficulties in sample production and information extraction caused by the shadow interference to which optical images are prone, and has strong recognition capability for water and metal. The rich and stable information in SAR images of ever-improving resolution provides a reliable data basis for research on target recognition, classification and the like, and has great research value and development potential for change detection.
In recent years, SAR image change detection methods have been studied intensively in China and abroad, and a series of deep learning change detection algorithms have been proposed based on structures such as wavelet convolutional neural networks, deep cascade networks, pyramid pooling convolutional neural networks, and multi-scale capsule networks. These can extract complex features at different levels from large amounts of data, greatly improving the speed and accuracy of image processing. However, when handling the change detection task, such methods are easily disturbed by various factors in SAR images, which degrades the information extraction effect.
An attention mechanism can weight the feature map to highlight the change features of important classes of ground objects in the image, suppress uninteresting targets and complex background factors, enhance noise robustness, and optimize information extraction. Various attention-based change detection methods have already been proposed for optical images. For example, the deeply supervised image fusion network IFN fuses multi-level deep features of the original images with image difference features, improving the boundary integrity and semantic consistency of the change feature map; the spatio-temporal attention neural network STANet consists of pyramid spatio-temporal attention modules and uses a self-attention mechanism to compute weights between pixels at different times and positions to generate discriminative features, improving change detection performance by exploiting the relations between pixels across time.
However, deep learning change detection for SAR images based on attention mechanisms still has limitations in practical applications. First, existing change detection models require the analyzed images to have high resolution and a stable, continuous revisit period in order to deliver accurate results, yet the spatial and temporal resolution of most SAR images can rarely satisfy both requirements at once; a deep learning change detection model suited to SAR images and capable of accurate detection therefore needs to be designed. Second, because optical and SAR images differ considerably in imaging principle, image characteristics and semantic information, directly applying attention-based deep learning change detection models designed for optical images to SAR images causes many problems. Consequently, most current research on SAR image deep learning change detection remains at the theoretical level, or can only identify and analyze changes within a very small area, and its practical prediction performance, robustness and generalization are very limited.
Disclosure of Invention
It is an aim of embodiments of the present invention to address at least the above problems and/or disadvantages and to provide at least the advantages described below.
The embodiment of the invention provides a SAR image change detection method and device, which can extract fine change information of small-scale targets in SAR images in different time phases and improve the change detection precision of the SAR images.
In a first aspect, there is provided a SAR image change detection method, comprising:
acquiring target SAR images of two time phases;
extracting features of the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases;
the feature extraction network includes:
the first feature extraction module is used for carrying out feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase;
the second feature extraction module comprises a channel attention module and a spatial attention module; the channel attention module is used for processing the first feature map of each time phase based on an attention mechanism, determining the channel attention weight of the first feature map of each time phase, and weighting the first feature map of each time phase based on the channel attention weight to obtain the second feature map of each time phase; the spatial attention module is used for processing the second feature map of each time phase based on an attention mechanism, determining the spatial attention weight of the second feature map of each time phase, and weighting the second feature map of each time phase based on the spatial attention weight to obtain the target feature map of each time phase;
performing differential analysis on the target feature maps of the two time phases to generate a target difference map;
and determining a target image area in which the target SAR image of the latter time phase is changed relative to the target SAR image of the former time phase in the target SAR images of the two time phases according to the target difference map.
Optionally, the first feature extraction module includes a first convolution layer, a plurality of residual modules, and a second convolution layer connected in sequence;
the first convolution layer is used for performing dimension reduction processing on the target SAR image of each time phase;
each residual module is used for performing feature extraction on its own input features, the input features of the first residual module being the output features of the first convolution layer;
and the second convolution layer is used for performing dimension-raising processing on the output features of the last residual module to obtain a first feature map with the same dimensions as the target SAR image of each time phase.
Optionally, the channel attention module includes a first maximum pooling layer, a first average pooling layer, a third convolution layer, a first fusion layer, a first sigmoid activation function, and a first output layer, where the first maximum pooling layer and the first average pooling layer are connected in parallel to an input end of the third convolution layer, and the third convolution layer, the first fusion layer, the first sigmoid activation function, and the first output layer are sequentially connected;
the first maximum pooling layer and the first average pooling layer are respectively used for performing maximum pooling processing and average pooling processing of the spatial dimensions on the first feature map of each time phase;
the third convolution layer is used for respectively performing convolution processing on the output features of the first maximum pooling layer and the first average pooling layer to obtain two channel attention weight matrices;
the first fusion layer is used for adding the two channel attention weight matrices to obtain a total channel attention weight matrix;
the first sigmoid activation function is used for activating the total channel attention weight matrix to obtain the channel attention weight;
the first output layer is used for weighting the first feature map of each time phase based on the channel attention weight to obtain the second feature map of each time phase.
Optionally, the spatial attention module includes a second maximum pooling layer, a second average pooling layer, a second fusion layer, a fourth convolution layer, a second sigmoid activation function, and a second output layer, where the second maximum pooling layer and the second average pooling layer are connected in parallel to an input end of the second fusion layer, and the second fusion layer, the fourth convolution layer, and the second output layer are sequentially connected;
the second maximum pooling layer and the second average pooling layer are respectively used for performing maximum pooling processing and average pooling processing of the channel dimension on the second feature map of each time phase;
the second fusion layer is used for vector-splicing the output features of the second maximum pooling layer and the second average pooling layer;
the fourth convolution layer is used for performing convolution processing on the output features obtained by vector splicing to obtain a spatial attention weight matrix; the second sigmoid activation function is used for activating the spatial attention weight matrix to obtain the spatial attention weight;
and the second output layer is used for weighting the second feature map of each time phase based on the spatial attention weight to obtain the target feature map of each time phase.
Optionally, the performing differential analysis on the target feature maps of the two time phases to generate a target difference map includes:
calculating the target feature maps of the two time phases based on a difference operator and a logarithmic ratio operator respectively, to obtain a difference map based on the difference operator and a difference map based on the logarithmic ratio operator;
and fusing the difference map based on the difference operator with the difference map based on the logarithmic ratio operator to generate the target difference map.
Optionally, the fusing of the difference map based on the difference operator with the difference map based on the logarithmic ratio operator to generate the target difference map includes:
performing weighted fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map.
Optionally, the method further comprises:
acquiring a plurality of sample SAR image groups, wherein each sample SAR image group comprises two time-phase sample SAR images and a sample label, and the sample label is used for indicating an actual image area in which a sample SAR image of a later time phase in the two time-phase sample SAR images is changed relative to a sample SAR image of a previous time phase;
performing feature extraction on the sample SAR images of the two time phases in each sample SAR image group through a feature extraction network to be trained, to obtain sample feature maps of the two time phases;
performing differential analysis on the sample feature maps of the two time phases corresponding to each sample SAR image group, to generate a sample difference map corresponding to each sample SAR image group;
according to the sample difference map corresponding to each sample SAR image group, determining an image area in which the sample SAR image of the next time phase in each sample SAR image group changes relative to the sample SAR image of the previous time phase;
determining loss information according to the image area in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase, and the sample label;
and training the feature extraction network to be trained according to the loss information.
Optionally, the acquiring a plurality of sample SAR image sets includes:
acquiring original SAR images of a plurality of time phases;
performing data enhancement processing on the original SAR images of the multiple time phases to obtain sample SAR images of the multiple time phases;
and constructing a plurality of sample SAR image groups according to the sample SAR images of the plurality of phases.
Optionally, the determining the loss information according to the image area and the sample label of the sample SAR image of the last time phase in each sample SAR image group, where the sample SAR image of the last time phase changes relative to the sample SAR image of the previous time phase, includes:
respectively determining aggregate similarity loss information and cross entropy loss information according to an image area and a sample label of a sample SAR image of a later time phase in each sample SAR image group relative to a sample SAR image of a previous time phase;
and determining mixed loss information according to the aggregate similarity loss information and the cross entropy loss information.
In a second aspect, there is provided a SAR image change detection apparatus comprising:
the target image acquisition module is used for acquiring target SAR images of two time phases;
the target feature map extraction module is used for carrying out feature extraction on the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases;
the feature extraction network includes:
the first feature extraction module is used for carrying out feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase;
the second feature extraction module comprises a channel attention module and a spatial attention module; the channel attention module is used for processing the first feature map of each time phase based on an attention mechanism, determining the channel attention weight of the first feature map of each time phase, and weighting the first feature map of each time phase based on the channel attention weight to obtain the second feature map of each time phase; the spatial attention module is used for processing the second feature map of each time phase based on an attention mechanism, determining the spatial attention weight of the second feature map of each time phase, and weighting the second feature map of each time phase based on the spatial attention weight to obtain the target feature map of each time phase;
the target difference map generation module is used for performing difference analysis on the target feature maps of the two time phases to generate a target difference map;
and the target image area determining module is used for determining a target image area in which the target SAR image of the next time phase in the target SAR images of the two time phases is changed relative to the target SAR image of the previous time phase according to the target difference map.
In a third aspect, an electronic device is provided, comprising: the system comprises at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method.
In a fourth aspect, a storage medium is provided, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method.
The embodiment of the invention at least comprises the following beneficial effects:
the embodiment of the invention provides a SAR image change detection method and device. In the method, target SAR images of two time phases are first acquired; features are then extracted from them through a feature extraction network to obtain target feature maps of the two time phases. The feature extraction network includes a first feature extraction module, which performs feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase, and a second feature extraction module comprising a channel attention module and a spatial attention module. The channel attention module processes the first feature map of each time phase based on an attention mechanism, determines its channel attention weight, and weights the first feature map with that weight to obtain the second feature map of each time phase; the spatial attention module processes the second feature map of each time phase based on an attention mechanism, determines its spatial attention weight, and weights the second feature map with that weight to obtain the target feature map of each time phase. Difference analysis is performed on the target feature maps of the two time phases to generate a target difference map, and finally the target image area in which the target SAR image of the later time phase has changed relative to that of the earlier time phase is determined from the target difference map. With the method and device, the first feature extraction module extracts the basic feature information in the target SAR image, and the channel attention module and the spatial attention module in the second feature extraction module further extract deep, multidimensional and fine features, so that fine change information of small-scale targets in SAR images of different time phases is extracted and the change detection accuracy of SAR images is improved.
Additional advantages, objects, and features of embodiments of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of embodiments of the invention.
Drawings
Fig. 1 is a flowchart of a SAR image change detection method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a change detection model according to an embodiment of the present invention.
Fig. 3 is a flowchart of a training method of a feature extraction network according to an embodiment of the present invention.
Fig. 4 is a flowchart of a SAR image change detection method according to another embodiment of the present invention.
Fig. 5 is a flowchart of data set creation according to another embodiment of the present invention.
FIG. 6a is a target SAR image of a previous phase provided in accordance with another embodiment of the present invention; FIG. 6b is a target SAR image of a subsequent phase provided in accordance with another embodiment of the present invention; FIG. 6c is a label diagram provided by another embodiment of the present invention; fig. 6d is a diagram illustrating a SAR image variation detection result according to another embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a SAR image change detection apparatus according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the invention will be described in further detail below with reference to the drawings to enable those skilled in the art to practice the invention by reference to the description.
Fig. 1 is a flowchart of a SAR image change detection method according to an embodiment of the present invention, which is executed by a system with processing capability, a server device, or a SAR image change detection apparatus. As shown in fig. 1, the method includes steps 110 to 140.
In step 110, a target SAR image of two phases is acquired.
Here, the time interval between the two target SAR images may be an arbitrary time interval, for example, 24 hours or 1 week. It should be understood that the two target SAR images for change detection are SAR images for the same target region.
Step 120, extracting features of the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases. The feature extraction network includes: a first feature extraction module, used for performing feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase; and a second feature extraction module comprising a channel attention module and a spatial attention module. The channel attention module processes the first feature map of each time phase based on an attention mechanism, determines its channel attention weight, and weights the first feature map with that weight to obtain the second feature map of each time phase; the spatial attention module processes the second feature map of each time phase based on an attention mechanism, determines its spatial attention weight, and weights the second feature map with that weight to obtain the target feature map of each time phase.
In this step, the first feature extraction module first extracts the basic feature information from the target SAR image, i.e., the first feature map. Next, the channel attention module processes the first feature map based on the attention mechanism and calculates the channel attention weight. The channel attention weight represents the importance of each channel in the first feature map: channels containing critical information are given higher weight, while channels lacking critical information are given lower weight. Weighting the first feature map with the channel attention weight therefore extracts the critical information in the channel dimension and generates the second feature map, in which the extraction of non-critical channel information is suppressed. Further, the spatial attention module processes the second feature map based on the attention mechanism and calculates the spatial attention weight. The spatial attention weight represents the importance of each spatial location in the second feature map: locations containing critical information are given higher weight, while locations lacking critical information are given lower weight. Weighting the second feature map with the spatial attention weight therefore extracts the critical information in the spatial dimension and generates the target feature map, in which the extraction of non-critical spatial information is suppressed. Based on the channel attention module and the spatial attention module, deep, multidimensional and fine features of the target SAR image can be extracted, improving the feature representation capability of the second feature extraction module.
The target feature maps of the two time phases extracted by the feature extraction network contain the change features of key targets in the target SAR images. Performing differential analysis on the target feature maps of the two time phases to generate a target difference map then enables accurate change detection of key targets in the SAR images, such as ships and buildings, improving the change detection accuracy of SAR images.
Fig. 2 is a schematic diagram of a change detection model according to an embodiment of the present invention. As shown in fig. 2, the change detection model includes a feature extraction network and a difference generation and analysis module, wherein the feature extraction network includes a first feature extraction module and a second feature extraction module.
As shown in fig. 2, in some embodiments, the first feature extraction module includes a first convolution layer, a plurality of residual modules, and a second convolution layer connected in sequence; the first convolution layer is used for performing dimension reduction processing on the target SAR image of each time phase; each residual module is used for performing feature extraction on its own input features, the input features of the first residual module being the output features of the first convolution layer; and the second convolution layer is used for performing dimension-raising processing on the output features of the last residual module to obtain a first feature map with the same dimensions as the target SAR image of each time phase.
In a conventional residual network, the residual modules are typically followed by a global pooling layer and a fully connected layer, which introduce a large number of parameters and reduce computational efficiency. The embodiment of the invention therefore implements the first feature extraction module with an improved residual network. Specifically, the global pooling layer and the fully connected layer that follow the residual modules in the conventional structure are replaced by a convolution layer (namely, the second convolution layer), which performs dimension-raising processing on the output features of the last residual module and outputs a first feature map with the same dimensions as the target SAR image, forming an improved residual structure. In this way, the parameter count of the first feature extraction module is reduced while its feature extraction capability is preserved, and the overall training efficiency and stability of the feature extraction network are ensured.
Specifically, in the first feature extraction module, each residual module (ResBlock) performs feature extraction on its own input features, which come from the output features of the preceding module to which it is connected. Each residual module further comprises a plurality of identical residual structures, each consisting of several convolution layers. The first convolution layer performs dimension reduction on the target SAR image so that its output features meet the input-dimension requirement of the first residual module; the second convolution layer performs dimension raising on the output features of the last residual module, restoring them to the dimensions of the target SAR image. The embodiment of the invention does not limit the specific structure of the residual modules and residual structures.
In some examples, the first feature extraction module is implemented with an improved ResNet-34 residual network, which is what fig. 2 shows. The improved ResNet-34 network comprises the first convolution layer, 4 residual modules and the second convolution layer; the residual modules ResBlock1, ResBlock2, ResBlock3 and ResBlock4 contain 3, 4, 6 and 3 residual structures respectively (fig. 2 shows the residual structure in the first residual module as two 3×3 convolution layers). The second convolution layer follows the last residual module ResBlock4, and its output features constitute the first feature map.
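For illustration only, the module described above can be sketched in PyTorch roughly as follows. This is a hypothetical sketch, not the patent's implementation: keeping every stage at stride 1 with a constant channel width is an assumption made so that the first feature map retains the spatial size of the input, and all names and hyperparameters (`base`, kernel sizes) are illustrative.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual structure: two 3x3 convolutions with a shortcut connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class FirstFeatureExtractor(nn.Module):
    """Conv -> 4 residual stages (3/4/6/3 blocks) -> conv restoring input depth."""
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        self.conv_in = nn.Conv2d(in_ch, base, 3, 1, 1, bias=False)  # first convolution layer
        self.stages = nn.Sequential(
            *[nn.Sequential(*[BasicBlock(base, base) for _ in range(n)])
              for n in (3, 4, 6, 3)]  # ResBlock1..ResBlock4
        )
        self.conv_out = nn.Conv2d(base, in_ch, 1)  # second convolution layer

    def forward(self, x):
        return self.conv_out(self.stages(self.conv_in(x)))
```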
As shown in fig. 2, in some embodiments, the channel attention module includes a first max-pooling layer, a first average pooling layer, a third convolution layer, a first fusion layer, a first sigmoid activation function, and a first output layer, where the first max-pooling layer and the first average pooling layer are connected in parallel to an input end of the third convolution layer, and the third convolution layer, the first fusion layer, the first sigmoid activation function, and the first output layer are sequentially connected; the first maximum pooling layer and the first average pooling layer are respectively used for carrying out maximum pooling processing and average pooling processing of space dimension on the first feature map of each time phase; the third convolution layer is used for respectively carrying out convolution processing on the output characteristics of the first maximum pooling layer and the first average pooling layer so as to obtain two channel attention weight matrixes; the first fusion layer is used for adding the attention weight matrixes of the two channels to obtain a total channel attention weight matrix; the first sigmoid activation function is used for activating the total channel attention weight matrix to obtain channel attention weights; the first output layer is used for carrying out weighting processing on the first characteristic map of each time phase based on the channel attention weight to obtain a second characteristic map of each time phase.
Specifically, the input feature of the channel attention module is the output feature of the first feature extraction module, i.e., the first feature map. The processing procedure of the channel attention module to the first feature map is as follows:
First, the first maximum pooling layer and the first average pooling layer perform maximum pooling and average pooling of the spatial dimensions on the first feature map, yielding two output features. The two pooling operations compress the first feature map, improving the computational efficiency of the feature extraction model; moreover, using both compression modes exploits different information in the first feature map and improves the feature representation capability of the channel attention module. Assuming the first feature map has dimensions H×W×C, where H, W and C denote its height, width and number of channels, the spatial maximum pooling and average pooling each produce a 1×1×C output feature.
In the conventional channel attention module, the calculation of the channel attention weight matrix is generally performed by using a fully connected layer, however, the calculation efficiency of the feature extraction model is not ideal due to the large parameter quantity of the fully connected layer. Based on this, in the embodiment of the present invention, the output features of the first maximum pooling layer and the first average pooling layer are convolved by using the third convolution layer, so as to obtain two channel attention weight matrices, where each channel attention weight matrix is used to represent the importance of each channel in the corresponding output feature. Compared with a full-connection layer, the parameter quantity of the convolution layer is greatly reduced, so that the calculation efficiency of the model can be effectively improved, and meanwhile, the convolution layer in the channel attention module provided by the embodiment of the invention can meet the requirements of a feature extraction model, and accurate change detection of SAR images is guaranteed. In some examples, the third convolution layer is a one-dimensional convolution layer.
Then, the two channel attention weight matrices are added by using the first fusion layer to obtain a total channel attention weight matrix. Specifically, the addition of the two channel attention weighting matrices is the addition between the elements in the two matrices.
And then, performing activation processing on the total channel attention weight matrix by using a first sigmoid activation function to obtain the channel attention weight.
And finally, weighting the first feature map by using the first output layer to obtain a second feature map. Specifically, the second feature map may be obtained by multiplying the channel attention weight with the first feature map.
Accordingly, the calculation formula of the channel attention module for the processing procedure of the first feature map can be expressed as:
M_C(F) = σ(f(AvgPool(F)) + f(MaxPool(F)))

F′ = M_C(F) ⊗ F

wherein F represents the first feature map; AvgPool(F) and MaxPool(F) denote average pooling and maximum pooling of F over the spatial dimensions; σ is the sigmoid activation function; f denotes the convolution operation; M_C(F) represents the channel attention weight; ⊗ denotes element-wise multiplication; and F′ represents the second feature map.
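A hedged PyTorch sketch of this channel attention module follows. The patent replaces the fully connected layers with a convolution; modeling the third convolution layer as a single shared `nn.Conv1d` sliding across the channel axis, and the kernel size `k`, are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, k=3):
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)  # HxWxC -> 1x1xC
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # third convolution layer: a 1-D convolution applied to both pooled vectors
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def _weights(self, pooled):
        # (B, C, 1, 1) -> (B, 1, C) so the kernel slides across channels
        b, c, _, _ = pooled.shape
        return self.conv(pooled.view(b, c).unsqueeze(1))

    def forward(self, f):
        # add the two channel attention weight matrices, then activate
        w = self._weights(self.max_pool(f)) + self._weights(self.avg_pool(f))
        w = self.sigmoid(w).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1)
        return f * w  # F' = M_C(F) (x) F
```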
As shown in fig. 2, in some embodiments, the spatial attention module includes a second maximum pooling layer, a second average pooling layer, a second fusion layer, a fourth convolution layer, a second sigmoid activation function, and a second output layer, where the second maximum pooling layer and the second average pooling layer are connected in parallel to an input end of the second fusion layer, and the second fusion layer, the fourth convolution layer, and the second output layer are sequentially connected; the second maximum pooling layer and the second average pooling layer are respectively used for carrying out maximum pooling treatment and average pooling treatment on the channel dimension of the second feature map of each time phase; the second fusion layer is used for vector splicing of the output features of the second maximum pooling layer and the second average pooling layer; the fourth convolution layer is used for carrying out convolution processing on the output characteristics obtained by vector splicing so as to obtain a space attention weight matrix; the second sigmoid activation function is used for activating the spatial attention weight matrix to obtain spatial attention weight; and the second output layer is used for carrying out weighting processing on the second characteristic map of each time phase based on the spatial attention weight to obtain a target characteristic map of each time phase.
In particular, the input features of the spatial attention module are the output features of the channel attention module, i.e. the second feature map. The processing procedure of the spatial attention module to the second feature map is as follows:
and firstly, respectively carrying out maximum pooling processing and average pooling processing of channel dimensions on the second feature map by using a second maximum pooling layer and a second average pooling layer to respectively obtain two output features. The maximum pooling processing and the average pooling processing can realize the compression processing of the second feature map so as to improve the calculation efficiency of the feature extraction model; in addition, two compression modes are simultaneously used, so that the utilization of different information in the second characteristic diagram can be realized, and the characteristic representation capability of the spatial attention module is improved. Given that the dimension of the second feature map is h×w×c, wherein H, W, C represents the height, width, and number of channels of the second feature map, two h×w×1 output features can be obtained through the maximum pooling process and the average pooling process of the channel dimension, respectively.
Then, the second fusion layer concatenates the output features of the second maximum pooling layer and the second average pooling layer, fusing the two output features.
Next, the fourth convolution layer performs convolution on the concatenated output features to obtain a spatial attention weight matrix, which represents the importance of each spatial position in the concatenated features. The convolution layer in the spatial attention module provided by the embodiment of the invention meets the requirements of the feature extraction model and ensures accurate change detection of SAR images.
Then, a second sigmoid activation function is used for carrying out activation processing on the spatial attention weight matrix so as to obtain the spatial attention weight.
And finally, weighting the second feature map by using a second output layer to obtain a target feature map. Specifically, the target feature map may be obtained by multiplying the spatial attention weight by the second feature map. The target feature map is the output feature of the second feature extraction module.
Accordingly, the calculation formula of the processing procedure of the second feature map by the spatial attention module can be expressed as:
M_S(F′) = σ(f([AvgPool(F′); MaxPool(F′)]))

F″ = M_S(F′) ⊗ F′

wherein F′ represents the second feature map; AvgPool(F′) and MaxPool(F′) denote average pooling and maximum pooling of F′ over the channel dimension; [·; ·] denotes vector concatenation; σ is the sigmoid activation function; f denotes the convolution operation; M_S(F′) represents the spatial attention weight; ⊗ denotes element-wise multiplication; and F″ represents the target feature map.
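Correspondingly, a minimal PyTorch sketch of the spatial attention module; the 7×7 kernel for the fourth convolution layer is an assumption, since the patent does not state its size.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, k=7):
        super().__init__()
        # fourth convolution layer: maps the 2-channel concatenation to 1 channel
        self.conv = nn.Conv2d(2, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, f2):
        max_map, _ = f2.max(dim=1, keepdim=True)  # channel-wise max pooling, HxWx1
        avg_map = f2.mean(dim=1, keepdim=True)    # channel-wise average pooling
        w = self.sigmoid(self.conv(torch.cat([max_map, avg_map], dim=1)))
        return f2 * w  # F'' = M_S(F') (x) F'
```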
Step 130, performing difference analysis on the target feature maps of the two time phases to generate a target difference map.
As shown in fig. 2, in some embodiments, performing the difference analysis of step 130 on the target feature maps of the two time phases to generate a target difference map includes: calculating the target feature maps of the two time phases based on a difference operator and a logarithmic ratio operator (LR operator for short) respectively, to obtain a difference map based on the difference operator and a difference map based on the logarithmic ratio operator; and fusing the two difference maps to generate the target difference map.
In this way, noise interference is suppressed to the greatest extent, the detail and global information in the two target feature maps are fused more effectively, and the change detection accuracy of SAR images is further improved.
The difference maps of the two operators can be fused with an image fusion method at the pixel level, feature level or decision level, according to the level at which the difference maps are generated. Pixel-level fusion is easy to implement and has low computational complexity, and among pixel-level methods, weighted fusion is the simplest to realize. Based on this, in some examples, the fusing of the difference-operator-based difference map and the log-ratio-operator-based difference map to generate the target difference map includes: performing weighted fusion processing on the two difference maps to generate the target difference map.
Specifically, the calculation formula for generating the target difference map can be expressed as follows:

X_C = α · |X_1 − X_2| + (1 − α) · |log(X_2 + 1) − log(X_1 + 1)|

wherein X_1 and X_2 represent the target feature maps of the two time phases; |X_1 − X_2| is the difference map generated by the difference operator; |log(X_2 + 1) − log(X_1 + 1)| is the difference map generated by the logarithmic ratio operator; α is a weight coefficient with a value between 0 and 1; and X_C represents the target difference map.
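This weighted fusion is direct to express in code; `alpha = 0.5` below is an illustrative default, not a value taken from the patent.

```python
import torch

def target_difference_map(x1, x2, alpha=0.5):
    """Fuse the difference-operator and log-ratio difference maps of two feature maps."""
    # assumes non-negative feature values, so the log terms are well defined
    diff = (x1 - x2).abs()                                      # difference operator
    log_ratio = (torch.log(x2 + 1) - torch.log(x1 + 1)).abs()   # log-ratio operator
    return alpha * diff + (1 - alpha) * log_ratio
```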
Step 140, determining, according to the target difference map, the target image area in which the target SAR image of the later time phase has changed relative to the target SAR image of the earlier time phase among the target SAR images of the two time phases.
Here, the determined target image area in which the later time phase has changed relative to the earlier time phase is the change detection result and may be represented as a binary result map, in which changed pixels are shown in black and unchanged pixels in white.
In this step, pixels are divided into two classes: changed and unchanged. A threshold is set for the target difference map and each pixel's class is determined against it: a pixel exceeding the threshold is judged to belong to the changed class, otherwise it belongs to the unchanged class. Finally, a binary result map representing the change information is generated from these judgments.
In some embodiments, a Kittler & Illingworth (KI) thresholding method may be used: a criterion function is constructed, the class-conditional distributions of the changed and unchanged pixels are fitted to the histogram, and the minimum of the function is taken as the optimal threshold. Other threshold analysis methods may also be used to calculate the threshold and classify the pixels in the target difference map accordingly. The embodiment of the invention is not particularly limited in this respect.
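A NumPy sketch of KI thresholding on the target difference map follows. The criterion below is the standard Kittler & Illingworth minimum-error form over a 256-bin histogram; the patent does not spell out its exact formulation, so this is a reconstruction under that assumption (the input is taken to be a NumPy array).

```python
import numpy as np

def ki_threshold(diff_map, bins=256):
    """Return the threshold minimising the Kittler-Illingworth criterion."""
    hist, edges = np.histogram(diff_map.ravel(), bins=bins)
    hist = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_j = centers[0], np.inf
    for t in range(1, bins - 1):
        p1, p2 = hist[:t].sum(), hist[t:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        m1 = (hist[:t] * centers[:t]).sum() / p1
        m2 = (hist[t:] * centers[t:]).sum() / p2
        v1 = (hist[:t] * (centers[:t] - m1) ** 2).sum() / p1
        v2 = (hist[t:] * (centers[t:] - m2) ** 2).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        # minimum-error criterion: small J means two Gaussians explain the histogram well
        j = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_j, best_t = j, centers[t]
    return best_t

# changed = diff_map > ki_threshold(diff_map)   # binary result map
```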
Fig. 3 is a flowchart of a training method of a feature extraction network according to an embodiment of the present invention. As shown in fig. 3, in some embodiments, the training method of the feature extraction network includes steps 310 through 360.
Step 310, acquiring a plurality of sample SAR image groups, wherein each sample SAR image group comprises sample SAR images of two phases and a sample label, and the sample label is used for indicating an actual image area in which a sample SAR image of a later phase in the sample SAR images of the two phases changes relative to a sample SAR image of a previous phase.
In practical applications, large-scale public data sets for training SAR image deep learning models are scarce, accurate annotation of sample data is difficult, and model overfitting is a serious problem. Based on this, in some embodiments, the acquiring of a plurality of sample SAR image groups comprises: acquiring original SAR images of a plurality of time phases; performing data enhancement processing on the original SAR images of the multiple time phases to obtain sample SAR images of the multiple time phases; and constructing a plurality of sample SAR image groups from the sample SAR images of the plurality of time phases.
Performing data enhancement on the original SAR images of the multiple time phases increases the number and diversity of images in the sample data set, improving the stability and generalization capability of the SAR image deep learning model.
The data enhancement methods may include rotation, scaling, noise addition and occlusion, which the embodiment of the invention does not particularly limit.
Before data enhancement, the original SAR images may be preprocessed by radiometric calibration, adaptive image filtering, geocoding and similar operations; they may also be converted to 8-bit images by linear 2% stretching, images of the same area cropped to the same size, and fine registration performed with a block adjustment tool. The preprocessed images can then be cut into non-overlapping tiles of 256×256 pixels, and data enhancement performed by rotation, scaling, noise addition, occlusion and the like.
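As a sketch of the tiling and enhancement step (assuming 8-bit NumPy arrays; only rotation, mirroring and additive noise are shown, and the noise scale is an arbitrary choice):

```python
import numpy as np

def tile(image, size=256):
    """Cut a registered image into non-overlapping size x size patches."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def augment(patch, rng):
    """Random rotation, mirroring and additive noise on one patch."""
    patch = np.rot90(patch, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        patch = np.fliplr(patch)
    noisy = patch.astype(np.float64) + rng.normal(0.0, 2.0, patch.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

A generator such as `np.random.default_rng(0)` would supply `rng`; scaling and occlusion, also mentioned above, would be added in the same style.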
Each sample SAR image group also includes a sample label. The change differences between the original SAR images (or the images preprocessed with image processing software) can be roughly computed with a color-composite tool, and the change samples are then manually annotated with the help of high-precision map data and prior knowledge to generate the sample labels. A sample label may be implemented as a binary reference map.
Step 320, performing feature extraction on the sample SAR images of the two time phases in each sample SAR image group through the feature extraction network to be trained, to obtain sample feature maps of the two time phases.
Step 330, performing differential analysis on the sample feature maps of the two time phases corresponding to each sample SAR image group, to generate a sample difference map corresponding to each sample SAR image group.
Step 340, determining, according to the sample difference map corresponding to each sample SAR image group, the image area in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase.
Step 350, determining loss information according to the image area and the sample label of the sample SAR image of the next time phase in each sample SAR image group, which are changed relative to the sample SAR image of the previous time phase.
In some embodiments, in step 350, the determining the loss information according to the image area and the sample label of the sample SAR image of the last phase in each sample SAR image group, where the sample SAR image of the last phase is changed relative to the sample SAR image of the previous phase, includes: respectively determining aggregate similarity loss information and cross entropy loss information according to an image area and a sample label of a sample SAR image of a later time phase in each sample SAR image group relative to a sample SAR image of a previous time phase; and determining mixed loss information according to the aggregate similarity loss and the cross entropy loss.
The aggregate similarity loss function (Dice Loss) focuses on global information, while the cross entropy loss function (CE Loss) focuses on microscopic information. The embodiment of the invention constructs a mixed loss function from the two to calculate the mixed loss information, attending to both global and microscopic information; training can thus be evaluated comprehensively and efficiently, and the poor learning caused by imbalanced positive and negative samples is alleviated.
In some examples, the mixed loss information may be computed as a weighted sum of the aggregate similarity loss information and the cross entropy loss information; giving the two losses different weights adjusts the influence of global or microscopic information on the feature extraction network. Alternatively, the two losses may simply be added, which is equivalent to giving them the same weight and balances the consideration of global and microscopic information.
The calculation formula of the aggregate similarity loss function Dice Loss is as follows:

L_D = 1 − 2Σ(R_T · R_P) / (Σ R_T + Σ R_P)

The calculation formula of the cross entropy loss function CE Loss is as follows:

L_CE = −R_T · log(R_P)

wherein the sums run over all pixels; L_D represents the aggregate similarity loss information; L_CE represents the cross entropy loss information; R_T represents the sample label, taking the value 0 or 1 to indicate unchanged and changed respectively; and R_P represents the change prediction result, with a value range of [0, 1].
Then, in some examples, the calculation formula of the mixing loss function L may be expressed as:
L = L_D + L_CE
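A compact PyTorch sketch of this mixed loss; the soft-Dice form with a small smoothing term `eps` is a standard reconstruction rather than the patent's exact formula.

```python
import torch
import torch.nn.functional as F

def mixed_loss(pred, target, eps=1e-6):
    """pred: change probabilities in [0, 1]; target: 0/1 labels as floats."""
    inter = (pred * target).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)  # L_D
    ce = F.binary_cross_entropy(pred, target)                          # L_CE
    return dice + ce                                                   # L = L_D + L_CE
```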
and step 360, training the feature extraction network to be trained according to the loss information.
During training of the feature extraction network, repeated iterative training is performed and the parameters of the feature extraction network are adjusted in each iteration until the training end condition is reached. The training end condition may be a number of iterations or a set detection accuracy threshold. The embodiment of the invention is not particularly limited in this respect.
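Putting the pieces together, an illustrative training step under the assumptions above; `FirstFeatureExtractor`, `ChannelAttention`, `SpatialAttention`, `target_difference_map` and `mixed_loss` refer to the hypothetical sketches earlier in this description, and normalising the difference map into [0, 1] as a change probability is an added assumption.

```python
import torch
import torch.nn as nn

class FeatureExtractionNetwork(nn.Module):
    """First feature extraction module followed by the two attention modules."""
    def __init__(self):
        super().__init__()
        self.backbone = FirstFeatureExtractor()
        self.channel_att = ChannelAttention()
        self.spatial_att = SpatialAttention()

    def forward(self, x):
        return self.spatial_att(self.channel_att(self.backbone(x)))

def train(model, loader, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for img1, img2, label in loader:           # two phases plus sample label
            f1 = torch.relu(model(img1))           # non-negative features for the log-ratio term
            f2 = torch.relu(model(img2))
            diff = target_difference_map(f1, f2)   # sample difference map
            prob = diff / (diff.amax() + 1e-6)     # crude [0, 1] normalisation
            loss = mixed_loss(prob, label.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
```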
It should be appreciated that the feature extraction performed in step 320 on the sample SAR images, using the feature extraction network to be trained, proceeds in the same way as the feature extraction performed in step 120 on the target SAR images using the trained network. Likewise, the differential analysis of step 330, which generates the sample difference map for each sample SAR image group, is consistent with the differential analysis of step 130 that generates the target difference map; and the determination in step 340 of the image areas in which the later time phase has changed relative to the earlier one is consistent with the corresponding determination in step 140.
In summary, the embodiment of the invention provides a SAR image change detection method. In the method, target SAR images of two time phases are first acquired. Feature extraction is then performed on the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases, wherein the feature extraction network includes: a first feature extraction module, configured to perform feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase; and a second feature extraction module, which includes a channel attention module and a spatial attention module. The channel attention module processes the first feature map of each time phase based on an attention mechanism, determines the channel attention weight of the first feature map of each time phase, and weights the first feature map of each time phase based on the channel attention weight to obtain the second feature map of each time phase. The spatial attention module processes the second feature map of each time phase based on an attention mechanism, determines the spatial attention weight of the second feature map of each time phase, and weights the second feature map of each time phase based on the spatial attention weight to obtain the target feature map of each time phase. Difference analysis is performed on the target feature maps of the two time phases to generate a target difference map. Finally, according to the target difference map, a target image region in which the target SAR image of the later time phase changed relative to the target SAR image of the earlier time phase is determined. Based on this method, basic feature information in the target SAR image is extracted by the first feature extraction module, and deep, multi-dimensional, fine features are further extracted by the channel attention module and the spatial attention module in the second feature extraction module, so that fine change information of small-scale targets in SAR images of different time phases is captured and the change detection precision of the SAR image is improved.
The following provides a specific implementation scenario to further illustrate the SAR image change detection method provided by the embodiment of the present invention.
Fig. 4 is a flowchart of a SAR image change detection method according to an embodiment of the present invention. The SAR image change detection method provided by the embodiment of the invention mainly comprises three parts: data set creation, model building, and model training.
Step 410, data set creation
Fig. 5 is a flowchart of data set creation according to an embodiment of the present invention. As shown in fig. 5, step 410 of the embodiment of the present invention further includes steps 411 to 415.
Step 411, acquiring an original SAR image
A plurality of 1 m resolution original SAR images of the same region acquired at different times are downloaded; the processing objects mainly include ships, buildings, barren land, and the like located near ports.
Step 412, image preprocessing
Radiometric calibration, adaptive speckle filtering, geocoding, and similar operations are performed on the original SAR images in sequence; the images are converted to 8-bit with a linear 2% stretch, images of the same region are cropped to the same size, and fine registration is performed with a block (regional network) adjustment tool.
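As an illustration of the linear 2% stretch, a numpy sketch follows; interpreting the stretch bounds as the 2nd and 98th percentiles is an assumption, and the actual preprocessing tool used in the embodiment is not specified.

```python
import numpy as np

def linear_percent_stretch(img, percent=2.0):
    """Stretch image intensities to 8-bit, clipping the darkest and
    brightest `percent` of pixels (a linear 2% stretch by default)."""
    lo, hi = np.percentile(img, [percent, 100.0 - percent])
    out = np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)
```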
Step 413, sample marking
The change difference between images of different time phases is roughly computed with the color composite tool of image processing software, and change samples are manually and precisely annotated in combination with high-precision map data and prior knowledge to generate binary reference images as label images.
Step 414, data clipping and enhancement
All images are cropped to 256 × 256 pixels with no overlap between crops. The number and diversity of images in the data set are then increased through data enhancement operations such as rotation, scaling, noise addition, and occlusion, which improves the stability and generalization ability of the deep learning model. In total, 95040 image groups are obtained, each consisting of the two time-phase images and the label image.
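A possible torchvision sketch of such an augmentation pipeline is shown below; the rotation range, scale range, noise level, and erasing probability are illustrative assumptions. Note that for change detection the same geometric transform must be applied to both phase images and the label image so that each group stays aligned, which this simple per-image pipeline does not handle by itself.

```python
import torch
import torchvision.transforms as T

# Illustrative augmentation for 256×256 patches (input assumed to be a
# PIL image); parameters are assumptions, not values from the embodiment.
augment = T.Compose([
    T.RandomRotation(degrees=90),                  # rotation
    T.RandomResizedCrop(256, scale=(0.8, 1.0)),    # scaling
    T.ToTensor(),
    T.Lambda(lambda x: (x + 0.02 * torch.randn_like(x)).clamp(0.0, 1.0)),  # noise
    T.RandomErasing(p=0.5),                        # occlusion
])
```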
Step 415, partitioning the data set
Considering both the split proportion and the image transformation types, 60000 of all the groups are used as the training set, 20000 groups as the validation set, and the remaining 12040 groups as the test set.
Step 420, building a change detection model
Fig. 2 is a schematic diagram of a change detection model according to an embodiment of the present invention. As shown in fig. 2, the change detection model includes a feature extraction network and a difference generation and analysis module, wherein the feature extraction network includes a first feature extraction module and a second feature extraction module.
Specifically, the first feature extraction module is implemented with a modified ResNet-34 residual network, shown in Fig. 2. The modified ResNet-34 residual network comprises 2 convolutional layers Conv and 4 residual modules, where the residual modules ResBlock1, ResBlock2, ResBlock3, and ResBlock4 contain 3, 4, 6, and 3 residual structures, respectively (Fig. 2 shows that each residual structure in the first residual module consists of two 3×3 convolutional layers); one convolutional layer is connected before the first residual module ResBlock1 and another after the last residual module ResBlock4.
The target SAR images I_1 and I_2 of the two time phases are input to the first feature extraction module separately for feature extraction; the image currently fed into the module is the input image. During feature extraction, the input image first passes through the first convolutional layer for dimension reduction, so that the dimension of its output features meets the input dimension requirement of the first residual module; each residual module then performs feature extraction on its input features; finally, the second convolutional layer performs dimension-raising processing on the output features of the last residual module, restoring them to the dimensions of the target SAR image. The output features of the second convolutional layer constitute the first feature map.
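A condensed PyTorch sketch of this first feature extraction module is given below. It follows the described layout (one convolution, residual modules with 3, 4, 6, and 3 residual structures, and one convolution back to the input dimensions) but keeps a single channel width `mid` throughout; that width, the 3×3 kernels of the outer convolutions, and the batch normalization are assumptions rather than the exact modified ResNet-34 of the embodiment.

```python
import torch
import torch.nn as nn

class ResidualStructure(nn.Module):
    """Residual structure of two 3×3 convolutional layers (as in Fig. 2)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)        # skip connection

class FirstFeatureExtractor(nn.Module):
    """Conv -> ResBlock1..4 (3, 4, 6, 3 residual structures) -> Conv."""
    def __init__(self, in_ch=1, mid=64):
        super().__init__()
        self.head = nn.Conv2d(in_ch, mid, 3, padding=1)   # adjust dimensions for ResBlock1
        self.body = nn.Sequential(
            *[ResidualStructure(mid) for n in (3, 4, 6, 3) for _ in range(n)]
        )
        self.tail = nn.Conv2d(mid, in_ch, 3, padding=1)   # restore input dimensions

    def forward(self, x):                        # x: target SAR image of one phase
        return self.tail(self.body(self.head(x)))  # first feature map
```

For example, FirstFeatureExtractor(in_ch=1)(torch.randn(1, 1, 256, 256)) returns a 1×1×256×256 first feature map.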
The second feature extraction module includes a channel attention module and a spatial attention module.
The channel attention module comprises a first max pooling layer MaxPool, a first average pooling layer AvgPool, a third convolutional layer Conv, a first fusion layer, a first sigmoid activation function, and a first output layer; the first max pooling layer and the first average pooling layer are connected in parallel to the input of the third convolutional layer, and the third convolutional layer, the first fusion layer, the first sigmoid activation function, and the first output layer are connected in sequence. When the first feature map is processed by the channel attention module, the first max pooling layer and the first average pooling layer perform max pooling and average pooling over the spatial dimensions of the first feature map, yielding two output features. Given that the dimension of the first feature map is H×W×C, where H, W, and C are its height, width, and number of channels, the spatial max pooling and average pooling each produce a 1×1×C output feature. The third convolutional layer then performs convolution processing on the two pooled features to obtain two channel attention weight matrices. The first fusion layer adds the two channel attention weight matrices element-wise to obtain a total channel attention weight matrix, the first sigmoid activation function activates the total channel attention weight matrix to obtain the channel attention weight, and the first output layer finally multiplies the channel attention weight with the first feature map to obtain the second feature map.
The calculation formula of the channel attention module for the processing procedure of the first feature map can be expressed as follows:
M_C(F) = σ(f(AvgPool(F)) + f(MaxPool(F)))

F' = M_C(F) ⊗ F

where F represents the first feature map; AvgPool(F) and MaxPool(F) denote average pooling and max pooling of the first feature map over the spatial dimensions; σ is the sigmoid activation function; f denotes convolution processing; M_C(F) represents the channel attention weight; ⊗ denotes element-wise multiplication; and F' represents the second feature map.
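A PyTorch sketch of a channel attention module matching these formulas is shown below; the internal structure of the shared convolution f (two 1×1 convolutions with a reduction ratio r) is an assumption.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """M_C(F) = σ(f(AvgPool(F)) + f(MaxPool(F))); F' = M_C(F) ⊗ F."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.avg = nn.AdaptiveAvgPool2d(1)       # spatial average pooling -> 1×1×C
        self.max = nn.AdaptiveMaxPool2d(1)       # spatial max pooling -> 1×1×C
        self.f = nn.Sequential(                  # shared convolution f (assumed form)
            nn.Conv2d(channels, channels // r, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 1),
        )

    def forward(self, x):                        # x: first feature map (N, C, H, W)
        w = torch.sigmoid(self.f(self.avg(x)) + self.f(self.max(x)))  # M_C(F)
        return x * w                             # second feature map F'
```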
The spatial attention module comprises a second max pooling layer, a second average pooling layer, a second fusion layer, a fourth convolutional layer, a second sigmoid activation function, and a second output layer; the second max pooling layer and the second average pooling layer are connected in parallel to the input of the second fusion layer, and the second fusion layer, the fourth convolutional layer, the second sigmoid activation function, and the second output layer are connected in sequence. The spatial attention module processes the second feature map as follows. The second max pooling layer and the second average pooling layer perform max pooling and average pooling over the channel dimension of the second feature map, yielding two output features. Given that the dimension of the second feature map is H×W×C, where H, W, and C are its height, width, and number of channels, the channel-wise max pooling and average pooling each produce an H×W×1 output feature. The second fusion layer concatenates the output features of the second max pooling layer and the second average pooling layer, the fourth convolutional layer performs convolution processing on the concatenated features to obtain a spatial attention weight matrix, and the second sigmoid activation function activates the spatial attention weight matrix to obtain the spatial attention weight. The second output layer finally multiplies the spatial attention weight with the second feature map to obtain the target feature map, which is the output feature of the second feature extraction module.
The calculation formula of the processing procedure of the second feature map by the spatial attention module can be expressed as follows:
M_S(F') = σ(f([AvgPool(F'); MaxPool(F')]))

F'' = M_S(F') ⊗ F'

where F' represents the second feature map; AvgPool(F') and MaxPool(F') denote average pooling and max pooling of the second feature map over the channel dimension; [·;·] denotes concatenation; σ is the sigmoid activation function; f denotes convolution processing; M_S(F') represents the spatial attention weight; ⊗ denotes element-wise multiplication; and F'' represents the target feature map.
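A corresponding PyTorch sketch of the spatial attention module follows; the 7×7 kernel size of the fourth convolution layer is an assumption.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """M_S(F') = σ(f([AvgPool(F'); MaxPool(F')])); F'' = M_S(F') ⊗ F'."""
    def __init__(self, k=7):
        super().__init__()
        self.f = nn.Conv2d(2, 1, k, padding=k // 2)   # fourth convolution layer

    def forward(self, x):                        # x: second feature map (N, C, H, W)
        avg = torch.mean(x, dim=1, keepdim=True)      # channel average pooling -> H×W×1
        mx, _ = torch.max(x, dim=1, keepdim=True)     # channel max pooling -> H×W×1
        w = torch.sigmoid(self.f(torch.cat([avg, mx], dim=1)))  # M_S(F')
        return x * w                             # target feature map F''
```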
The embodiment of the invention adopts a hybrid loss function combining the aggregate similarity loss function Dice Loss and the cross entropy loss function CE Loss.
The calculation formula of the aggregate similarity loss function Dice Loss is as follows:

L_D = 1 - 2|R_T ∩ R_P| / (|R_T| + |R_P|)

The calculation formula of the cross entropy loss function CE Loss is as follows:

L_CE = -R_T log R_P

where L_D represents the aggregate similarity loss information; L_CE represents the cross entropy loss information; R_T represents the sample label, taking the value 0 or 1 to indicate unchanged or changed, respectively; and R_P represents the change prediction result, with values in the range [0, 1].

The calculation formula of the hybrid loss function L can be expressed as:

L = L_D + L_CE
next, two phase target feature map X 1 And X 2 And inputting the data to a difference generation and analysis module for subsequent processing. Firstly, calculating target feature graphs of two time phases based on a difference operator and a logarithmic comparison operator (LR operator for short) respectively to obtain a difference graph based on the difference operator and a difference graph based on the logarithmic comparison operator, then carrying out fusion processing on the difference graph based on the difference operator and the difference graph based on the logarithmic comparison operator to generate a target difference graph, carrying out threshold analysis on the target difference graph, finally determining a target image area where a target SAR image of a later time phase changes relative to a target SAR image of a former time phase, and outputting a change detection result.
The calculation formula of the generation process of the target difference map can be expressed as follows:
X_C = α(|X_1 - X_2|) + (1 - α)(|log(X_2 + 1) - log(X_1 + 1)|)
where X_1 and X_2 represent the target feature maps of the two time phases; |X_1 - X_2| represents the difference map generated based on the difference operator; |log(X_2 + 1) - log(X_1 + 1)| represents the difference map generated based on the logarithmic ratio operator; α is a weight coefficient with a value between 0 and 1; and X_C represents the target difference map.
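A direct PyTorch transcription of this fusion is sketched below; the default α = 0.5 is an illustrative assumption, and the feature maps are assumed to satisfy X > -1 so that the logarithms are defined.

```python
import torch

def target_difference_map(x1, x2, alpha=0.5):
    """X_C = α|X_1 - X_2| + (1 - α)|log(X_2 + 1) - log(X_1 + 1)|."""
    diff = torch.abs(x1 - x2)                                     # difference operator
    log_ratio = torch.abs(torch.log(x2 + 1) - torch.log(x1 + 1))  # LR operator
    return alpha * diff + (1 - alpha) * log_ratio                 # target difference map
```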
Step 430, model training and testing
Iterative training of the model is carried out on an NVIDIA M4000 GPU using the deep learning framework PyTorch. Parameters such as convolution weights and biases are updated by the AdaMax optimizer, with the initial learning rate set to 0.001 and decay factors β1 = 0.9 and β2 = 0.999.
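A minimal training-loop sketch with these settings is given below; `model`, `train_loader`, `num_epochs`, and the use of the `hybrid_loss` sketched earlier are placeholders and assumptions, not details fixed by the embodiment.

```python
import torch

def train(model, train_loader, num_epochs, device="cuda"):
    """Train the change detection model with the settings described above."""
    optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3,
                                   betas=(0.9, 0.999))
    model.to(device).train()
    for epoch in range(num_epochs):
        for img1, img2, label in train_loader:   # two-phase patches and label map
            img1, img2, label = img1.to(device), img2.to(device), label.to(device)
            pred = model(img1, img2)             # change prediction R_P in [0, 1]
            loss = hybrid_loss(pred, label)      # hybrid Dice + CE loss (see above)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```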
The performance of the trained model is tested using the test data set and compared with existing models to verify the effectiveness of the method provided by the embodiment of the invention. The evaluation indexes are precision (P), recall (R), the comprehensive evaluation index F1, and overall accuracy (OA), computed as follows:

P = TP / (TP + FP)

R = TP / (TP + FN)

F1 = 2 × P × R / (P + R)

OA = (TP + TN) / (TP + TN + FP + FN)
Here TP, TN, FP, and FN are the four judgment types in the classical confusion matrix: TP is the number of changed pixels predicted correctly; TN is the number of unchanged pixels predicted correctly; FP is the number of unchanged pixels predicted as changed; and FN is the number of changed pixels predicted as unchanged.
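These indexes can be computed from binary prediction and label maps as in the following numpy sketch; degenerate zero-denominator cases are not handled.

```python
import numpy as np

def change_metrics(pred, label):
    """Precision, recall, F1, and overall accuracy from 0/1 change maps."""
    tp = np.sum((pred == 1) & (label == 1))      # changed, predicted changed
    tn = np.sum((pred == 0) & (label == 0))      # unchanged, predicted unchanged
    fp = np.sum((pred == 1) & (label == 0))      # unchanged, predicted changed
    fn = np.sum((pred == 0) & (label == 1))      # changed, predicted unchanged
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    oa = (tp + tn) / (tp + tn + fp + fn)
    return p, r, f1, oa
```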
In the change detection task, a higher precision indicates fewer errors in the prediction result, and a higher recall indicates that more positive samples are detected; F1 and overall accuracy measure the prediction result as a whole, and larger values of these indexes indicate better prediction by the model. The index evaluation results are shown in Table 1. As can be seen from Table 1, the overall performance of the method provided by the embodiment of the invention is significantly better than that of the other methods.
Table 1 index evaluation results
Model name | Precision P | Recall R | F1 | Overall accuracy OA
---|---|---|---|---
STANet | 0.934 | 0.846 | 0.887 | 0.990
DSAMNet | 0.928 | 0.869 | 0.898 | 0.987
SNUNet | 0.950 | 0.906 | 0.927 | 0.983
Embodiment of the invention | 0.988 | 0.968 | 0.978 | 0.994
Fig. 6a is a target SAR image of the earlier time phase provided by an embodiment of the present invention; fig. 6b is a target SAR image of the later time phase; fig. 6c is the label map; fig. 6d is the SAR image change detection result. As shown in figs. 6a to 6d, the method provided by the embodiment of the invention can detect fine change information in SAR images, has strong detection capability, extracts information accurately and completely, suppresses false alarms and omissions, and produces a prediction result close to the manually annotated reference sample.
In summary, the SAR image change detection method provided by the embodiment of the invention implements the first feature extraction module with a residual network and forms the second feature extraction module from a channel attention module and a spatial attention module to build the change detection model. In comparative experiments on a large-scale data set containing different scenes, the model obtains the best prediction results and higher precision, demonstrating its ability to exploit the rich information and complex features in high-resolution SAR images, to improve model convergence and optimization efficiency, to accurately extract dynamic information of targets such as ships and buildings, and to improve ground-feature representation and change detection. The method not only overcomes the shortcomings of traditional methods and improves algorithmic efficiency, but also provides reliable references for applications in fields such as urban development and geological disaster monitoring, so that practical problems in these fields can be addressed more accurately and purposefully.
Fig. 7 is a schematic structural diagram of a SAR image change detection apparatus according to an embodiment of the present invention. As shown in fig. 7, the SAR image change detection apparatus 700 includes: a target image acquisition module 710, configured to acquire target SAR images of two time phases; a target feature map extraction module 720, configured to perform feature extraction on the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases, wherein the feature extraction network includes: a first feature extraction module, configured to perform feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase; and a second feature extraction module, which includes a channel attention module and a spatial attention module, the channel attention module being configured to process the first feature map of each time phase based on an attention mechanism, determine the channel attention weight of the first feature map of each time phase, and weight the first feature map of each time phase based on the channel attention weight to obtain the second feature map of each time phase, and the spatial attention module being configured to process the second feature map of each time phase based on an attention mechanism, determine the spatial attention weight of the second feature map of each time phase, and weight the second feature map of each time phase based on the spatial attention weight to obtain the target feature map of each time phase; a target difference map generation module 730, configured to perform difference analysis on the target feature maps of the two time phases to generate a target difference map; and a target image area determination module 740, configured to determine, according to the target difference map, a target image area in which the target SAR image of the later time phase changed relative to the target SAR image of the earlier time phase.
In some embodiments, the first feature extraction module includes a first convolution layer, a plurality of residual modules, and a second convolution layer connected in sequence; the first convolution layer is used for performing dimension reduction processing on the target SAR image of each time phase; each residual error module is used for extracting the characteristics of the input characteristics of the residual error module, and the input characteristics of the first residual error module are the output characteristics of the first convolution layer; and the second convolution layer is used for carrying out dimension-lifting processing on the output characteristics of the last residual error module to obtain a first characteristic diagram with the same dimension as the target SAR image of each time phase.
In some embodiments, the channel attention module includes a first max-pooling layer, a first average pooling layer, a third convolution layer, a first fusion layer, a first sigmoid activation function, and a first output layer, where the first max-pooling layer and the first average pooling layer are connected in parallel to an input of the third convolution layer, and the third convolution layer, the first fusion layer, the first sigmoid activation function, and the first output layer are connected in sequence;
the first maximum pooling layer and the first average pooling layer are respectively used for carrying out maximum pooling processing and average pooling processing of space dimension on the first feature map of each time phase;
The third convolution layer is used for respectively carrying out convolution processing on the output characteristics of the first maximum pooling layer and the first average pooling layer so as to obtain two channel attention weight matrixes;
the first fusion layer is used for adding the attention weight matrixes of the two channels to obtain a total channel attention weight matrix;
the first sigmoid activation function is used for activating the total channel attention weight matrix to obtain channel attention weights;
the first output layer is used for carrying out weighting processing on the first characteristic map of each time phase based on the channel attention weight to obtain a second characteristic map of each time phase.
In some embodiments, the spatial attention module includes a second maximum pooling layer, a second average pooling layer, a second fusion layer, a fourth convolution layer, a second sigmoid activation function, and a second output layer, where the second maximum pooling layer and the second average pooling layer are connected in parallel to an input of the second fusion layer, and the second fusion layer, the fourth convolution layer, and the second output layer are connected in sequence;
the second maximum pooling layer and the second average pooling layer are respectively used for carrying out maximum pooling treatment and average pooling treatment on the channel dimension of the second feature map of each time phase;
The second fusion layer is used for vector splicing of the output features of the second maximum pooling layer and the second average pooling layer;
the fourth convolution layer is used for carrying out convolution processing on the output characteristics obtained by vector splicing so as to obtain a space attention weight matrix;
the second sigmoid activation function is used for activating the spatial attention weight matrix to obtain spatial attention weight;
and the second output layer is used for carrying out weighting processing on the second characteristic map of each time phase based on the spatial attention weight to obtain a target characteristic map of each time phase.
In some embodiments, the target difference map generation module includes:

the difference map generation sub-module is used for calculating the target feature maps of the two time phases based on a difference operator and a logarithmic ratio operator respectively to obtain a difference map based on the difference operator and a difference map based on the logarithmic ratio operator;
and the difference map fusion sub-module is used for carrying out fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map.
In some embodiments, the difference map fusion sub-module is specifically configured to:
And carrying out weighted fusion processing on the difference graph based on the difference operator and the difference graph based on the logarithmic ratio operator to generate the target difference graph.
In some embodiments, the apparatus further comprises:
the system comprises a sample image group acquisition module, a sampling image group acquisition module and a sampling image processing module, wherein the sample image group acquisition module is used for acquiring a plurality of sample SAR image groups, each sample SAR image group comprises two time-phase sample SAR images and a sample label, and the sample label is used for indicating an actual image area in which a sample SAR image of a later time phase in the two time-phase sample SAR images is changed relative to a sample SAR image of a previous time phase;
the sample feature map extraction module is used for carrying out feature extraction on the sample SAR images of two time phases in each sample SAR image group through a feature extraction network to be trained to obtain sample feature maps of the two time phases;
the sample difference map generation module is used for carrying out difference analysis on the sample feature maps of the two time phases corresponding to each sample SAR image group and generating a sample difference map corresponding to each sample SAR image group;
the image area determining module is used for determining an image area in which the sample SAR image of the next time phase in each sample SAR image group changes relative to the sample SAR image of the previous time phase according to the sample difference image corresponding to each sample SAR image group;
The loss information determining module is used for determining loss information according to an image area and a sample label of a sample SAR image of a next time phase in each sample SAR image group, wherein the image area changes relative to the sample SAR image of a previous time phase;
and the training module is used for training the feature extraction network to be trained according to the loss information.
In some embodiments, the sample image group acquisition module comprises:
the original image acquisition sub-module is used for acquiring original SAR images of a plurality of time phases;
the data enhancement sub-module is used for performing data enhancement processing on the original SAR images of the plurality of time phases to obtain sample SAR images of the plurality of time phases;
and the sample image group constructing submodule is used for constructing a plurality of sample SAR image groups according to the sample SAR images of the plurality of time phases.
In some embodiments, the loss information determining module is specifically configured to:
respectively determining aggregate similarity loss information and cross entropy loss information according to an image area and a sample label of a sample SAR image of a later time phase in each sample SAR image group relative to a sample SAR image of a previous time phase;
and determining mixed loss information according to the aggregate similarity loss information and the cross entropy loss information.
Fig. 8 shows an electronic device according to an embodiment of the present invention. As shown in fig. 8, the electronic device 800 includes: at least one processor 810, and a memory 820 communicatively coupled to the at least one processor 810, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to cause the at least one processor to perform the SAR image change detection method described above.
Specifically, the memory 820 and the processor 810 are connected via the bus 830 and may be a general-purpose memory and processor, which are not limited herein; when the processor 810 runs a computer program stored in the memory 820, the various operations and functions described in connection with Figs. 1 to 7 in the embodiments of the present invention can be performed.
In an embodiment of the present invention, electronic device 800 may include, but is not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile computing devices, smart phones, tablet computers, personal Digital Assistants (PDAs), handsets, messaging devices, wearable computing devices, and the like.
The embodiment of the invention also provides a storage medium on which a computer program is stored; when executed by a processor, the program implements the SAR image change detection method described above. For specific implementation, reference may be made to the method embodiments, which are not repeated here. In particular, a system or apparatus may be provided with a storage medium on which software program code implementing the functions of any of the above embodiments is stored, whose computer or processor reads and executes the instructions stored in the storage medium. The program code read from the storage medium can itself implement the functions of any of the above embodiments, and thus the machine-readable code and the storage medium storing it form part of the present invention.
Storage media include, but are not limited to, floppy diskettes, hard disks, magneto-optical disks, magnetic tape, nonvolatile memory cards, and ROM. Program code may also be downloaded from a server computer or cloud over a communications network.
It should be noted that not all of the steps and modules in the above processes and system structures are necessary; some steps or units may be omitted according to actual needs. The order in which the steps are executed is not fixed and may be determined as required. The device structures described in the above embodiments may be physical structures or logical structures: a module or unit may be implemented by a single physical entity, by several physical entities, or jointly by components in several independent devices.
Although embodiments of the present invention have been disclosed above, they are not limited to the uses listed in the specification and the embodiments, and can be applied to various fields suitable for the embodiments of the present invention. Additional modifications will readily occur to those skilled in the art; therefore, the embodiments of the invention are not limited to the specific details shown and described herein, without departing from the general concept defined by the claims and their equivalents.
Claims (7)
1. A SAR image change detection method, comprising:
acquiring target SAR images of two time phases;
extracting features of the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases;
the feature extraction network includes:
the first feature extraction module is used for carrying out feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase;
the second feature extraction module comprises a channel attention module and a spatial attention module; the channel attention module is used for processing the first feature map of each time phase based on an attention mechanism, determining the channel attention weight of the first feature map of each time phase, and weighting the first feature map of each time phase based on the channel attention weight to obtain the second feature map of each time phase; the spatial attention module is used for processing the second feature map of each time phase based on an attention mechanism, determining the spatial attention weight of the second feature map of each time phase, and carrying out weighting processing on the second feature map of each time phase based on the spatial attention weight to obtain a target feature map of each time phase;
performing difference analysis on the target feature maps of the two time phases to generate a target difference map;
according to the target difference map, determining a target image area in which a target SAR image of a later time phase in the target SAR images of the two time phases changes relative to a target SAR image of a previous time phase;
the first feature extraction module comprises a first convolution layer, a plurality of residual error modules and a second convolution layer which are sequentially connected;
the first convolution layer is used for performing dimension reduction processing on the target SAR image of each time phase;
each residual error module is used for extracting the characteristics of the input characteristics of the residual error module, and the input characteristics of the first residual error module are the output characteristics of the first convolution layer;
the second convolution layer is used for carrying out dimension-lifting processing on the output characteristics of the last residual error module to obtain a first characteristic diagram with the same dimension as the target SAR image of each time phase;
performing difference analysis on the target feature maps of the two time phases to generate a target difference map, wherein the difference analysis comprises the following steps:

calculating the target feature maps of the two time phases based on a difference operator and a logarithmic ratio operator respectively to obtain a difference map based on the difference operator and a difference map based on the logarithmic ratio operator;

carrying out fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map;

wherein the carrying out fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map comprises the following step:

carrying out weighted fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map.
2. The SAR image change detection method of claim 1, wherein the channel attention module comprises a first max-pooling layer, a first average pooling layer, a third convolution layer, a first fusion layer, a first sigmoid activation function, and a first output layer, wherein the first max-pooling layer and the first average pooling layer are connected in parallel with an input end of the third convolution layer, and the third convolution layer, the first fusion layer, the first sigmoid activation function, and the first output layer are sequentially connected;
the first maximum pooling layer and the first average pooling layer are respectively used for carrying out maximum pooling processing and average pooling processing of space dimension on the first feature map of each time phase;
the third convolution layer is used for respectively carrying out convolution processing on the output characteristics of the first maximum pooling layer and the first average pooling layer so as to obtain two channel attention weight matrixes;
The first fusion layer is used for adding the attention weight matrixes of the two channels to obtain a total channel attention weight matrix;
the first sigmoid activation function is used for activating the total channel attention weight matrix to obtain channel attention weights;
the first output layer is used for carrying out weighting processing on the first characteristic map of each time phase based on the channel attention weight to obtain a second characteristic map of each time phase.
3. The SAR image change detection method of claim 1, wherein the spatial attention module comprises a second maximum pooling layer, a second average pooling layer, a second fusion layer, a fourth convolution layer, a second sigmoid activation function, and a second output layer, wherein the second maximum pooling layer and the second average pooling layer are connected in parallel with an input end of the second fusion layer, and the second fusion layer, the fourth convolution layer, and the second output layer are sequentially connected;
the second maximum pooling layer and the second average pooling layer are respectively used for carrying out maximum pooling treatment and average pooling treatment on the channel dimension of the second feature map of each time phase;
the second fusion layer is used for vector splicing of the output features of the second maximum pooling layer and the second average pooling layer;
The fourth convolution layer is used for carrying out convolution processing on the output characteristics obtained by vector splicing so as to obtain a space attention weight matrix;
the second sigmoid activation function is used for activating the spatial attention weight matrix to obtain spatial attention weight;
and the second output layer is used for carrying out weighting processing on the second characteristic map of each time phase based on the spatial attention weight to obtain a target characteristic map of each time phase.
4. The SAR image change detection method of claim 1, wherein the method further comprises:
acquiring a plurality of sample SAR image groups, wherein each sample SAR image group comprises two time-phase sample SAR images and a sample label, and the sample label is used for indicating an actual image area in which a sample SAR image of a later time phase in the two time-phase sample SAR images is changed relative to a sample SAR image of a previous time phase;
carrying out feature extraction on the sample SAR images of the two time phases in each sample SAR image group through a feature extraction network to be trained to obtain sample feature maps of the two time phases;

performing difference analysis on the sample feature maps of the two time phases corresponding to each sample SAR image group to generate a sample difference map corresponding to each sample SAR image group;
According to the sample difference map corresponding to each sample SAR image group, determining an image area in which the sample SAR image of the next time phase in each sample SAR image group changes relative to the sample SAR image of the previous time phase;
determining loss information according to an image area and a sample label of a sample SAR image of a next time phase in each sample SAR image group, wherein the image area changes relative to the sample SAR image of a previous time phase;
and training the feature extraction network to be trained according to the loss information.
5. The SAR image change detection method of claim 4, wherein the acquiring a plurality of sample SAR image sets comprises:
acquiring original SAR images of a plurality of time phases;
performing data enhancement processing on the original SAR images of the multiple time phases to obtain sample SAR images of the multiple time phases;
and constructing a plurality of sample SAR image groups according to the sample SAR images of the plurality of phases.
6. The SAR image change detection method of claim 4, wherein the determining the loss information based on the image area and the sample label in which the sample SAR image of the subsequent phase in each sample SAR image group changes with respect to the sample SAR image of the previous phase comprises:
Respectively determining aggregate similarity loss information and cross entropy loss information according to an image area and a sample label of a sample SAR image of a later time phase in each sample SAR image group relative to a sample SAR image of a previous time phase;
and determining mixed loss information according to the aggregate similarity loss information and the cross entropy loss information.
7. A SAR image change detection apparatus, comprising:
the target image acquisition module is used for acquiring target SAR images of two time phases;
the target feature map extraction module is used for carrying out feature extraction on the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases;
the feature extraction network includes:
the first feature extraction module is used for carrying out feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase;
the second feature extraction module comprises a channel attention module and a spatial attention module; the channel attention module is used for processing the first feature map of each time phase based on an attention mechanism, determining the channel attention weight of the first feature map of each time phase, and weighting the first feature map of each time phase based on the channel attention weight to obtain the second feature map of each time phase; the spatial attention module is used for processing the second feature map of each time phase based on an attention mechanism, determining the spatial attention weight of the second feature map of each time phase, and carrying out weighting processing on the second feature map of each time phase based on the spatial attention weight to obtain a target feature map of each time phase;
The target difference map generation module is used for carrying out difference analysis on the target feature maps of the two time phases to generate a target difference map;
the target image area determining module is used for determining a target image area of which the target SAR image of the next time phase in the target SAR images of the two time phases is changed relative to the target SAR image of the previous time phase according to the target difference image;
the first feature extraction module comprises a first convolution layer, a plurality of residual error modules and a second convolution layer which are sequentially connected; the first convolution layer is used for performing dimension reduction processing on the target SAR image of each time phase; each residual error module is used for extracting the characteristics of the input characteristics of the residual error module, and the input characteristics of the first residual error module are the output characteristics of the first convolution layer; the second convolution layer is used for carrying out dimension-lifting processing on the output characteristics of the last residual error module to obtain a first characteristic diagram with the same dimension as the target SAR image of each time phase;
the target difference map generation module comprises:

the difference map generation sub-module is used for calculating the target feature maps of the two time phases based on a difference operator and a logarithmic ratio operator respectively to obtain a difference map based on the difference operator and a difference map based on the logarithmic ratio operator;
The difference map fusion sub-module is used for carrying out fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map;
the difference map fusion submodule is specifically configured to:
and carrying out weighted fusion processing on the difference graph based on the difference operator and the difference graph based on the logarithmic ratio operator to generate the target difference graph.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310101309.9A CN116012364B (en) | 2023-01-28 | 2023-01-28 | SAR image change detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116012364A CN116012364A (en) | 2023-04-25 |
CN116012364B true CN116012364B (en) | 2024-01-16 |
Family
ID=86024952
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310101309.9A Active CN116012364B (en) | 2023-01-28 | 2023-01-28 | SAR image change detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116012364B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116524206B (en) * | 2023-06-30 | 2023-10-03 | 深圳须弥云图空间科技有限公司 | Target image identification method and device |
CN117173104B (en) * | 2023-08-04 | 2024-04-16 | 山东大学 | Low-altitude unmanned aerial vehicle image change detection method and system |
CN117494765A (en) * | 2023-10-23 | 2024-02-02 | 昆明理工大学 | Ultra-high spatial resolution remote sensing image change detection twin network and method |
CN117745688B (en) * | 2023-12-25 | 2024-06-14 | 中国科学院空天信息创新研究院 | Multi-scale SAR image change detection visualization system, electronic equipment and storage medium |
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021000906A1 (en) * | 2019-07-02 | 2021-01-07 | 五邑大学 | Sar image-oriented small-sample semantic feature enhancement method and apparatus |
CN111539316A (en) * | 2020-04-22 | 2020-08-14 | 中南大学 | High-resolution remote sensing image change detection method based on double attention twin network |
WO2022000426A1 (en) * | 2020-06-30 | 2022-01-06 | 中国科学院自动化研究所 | Method and system for segmenting moving target on basis of twin deep neural network |
CN112488025A (en) * | 2020-12-10 | 2021-03-12 | 武汉大学 | Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion |
CN113034471A (en) * | 2021-03-25 | 2021-06-25 | 重庆大学 | SAR image change detection method based on FINCH clustering |
CN113378686A (en) * | 2021-06-07 | 2021-09-10 | 武汉大学 | Two-stage remote sensing target detection method based on target center point estimation |
CN113536929A (en) * | 2021-06-15 | 2021-10-22 | 南京理工大学 | SAR image target detection method under complex scene |
CN113420662A (en) * | 2021-06-23 | 2021-09-21 | 西安电子科技大学 | Remote sensing image change detection method based on twin multi-scale difference feature fusion |
CN113567984A (en) * | 2021-07-30 | 2021-10-29 | 长沙理工大学 | Method and system for detecting artificial small target in SAR image |
CN113743383A (en) * | 2021-11-05 | 2021-12-03 | 航天宏图信息技术股份有限公司 | SAR image water body extraction method and device, electronic equipment and storage medium |
CN114119621A (en) * | 2021-11-30 | 2022-03-01 | 云南电网有限责任公司输电分公司 | SAR remote sensing image water area segmentation method based on depth coding and decoding fusion network |
CN114283120A (en) * | 2021-12-01 | 2022-04-05 | 武汉大学 | End-to-end multi-source heterogeneous remote sensing image change detection method based on domain self-adaptation |
CN114494870A (en) * | 2022-01-21 | 2022-05-13 | 山东科技大学 | Double-time-phase remote sensing image change detection method, model construction method and device |
CN114841924A (en) * | 2022-04-11 | 2022-08-02 | 中国人民解放军战略支援部队航天工程大学 | Unsupervised change detection method for heterogeneous remote sensing image |
CN114841319A (en) * | 2022-04-29 | 2022-08-02 | 哈尔滨工程大学 | Multispectral image change detection method based on multi-scale self-adaptive convolution kernel |
CN114926746A (en) * | 2022-05-25 | 2022-08-19 | 西北工业大学 | SAR image change detection method based on multi-scale differential feature attention mechanism |
CN115187861A (en) * | 2022-07-13 | 2022-10-14 | 哈尔滨理工大学 | Hyperspectral image change detection method and system based on depth twin network |
CN115457390A (en) * | 2022-09-13 | 2022-12-09 | 中国人民解放军国防科技大学 | Remote sensing image change detection method and device, computer equipment and storage medium |
CN115331087A (en) * | 2022-10-11 | 2022-11-11 | 水利部交通运输部国家能源局南京水利科学研究院 | Remote sensing image change detection method and system fusing regional semantics and pixel characteristics |
Non-Patent Citations (5)
- Yufei Yang et al., "A Deep Multiscale Pyramid Network Enhanced With Spatial-Spectral Residual Attention for Hyperspectral Image Change Detection," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-13.
- Chenxiao Zhang et al., "A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 166, pp. 183-200.
- Swalpa Kumar Roy et al., "Attention-Based Adaptive Spectral-Spatial Kernel ResNet for Hyperspectral Image Classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 9, pp. 7831-7843.
- Yuanxin Ye et al., "Feature Decomposition-Optimization-Reorganization Network for Building Change Detection in Remote Sensing Images," Remote Sensing, vol. 14, no. 3, pp. 1-18.
- Mei Jie et al., "Change Detection Based on Global Structural Difference and Local Attention" (in Chinese), SCIENTIA SINICA Informationis, vol. 52, no. 11, pp. 2058-2074.
Also Published As
Publication number | Publication date |
---|---|
CN116012364A (en) | 2023-04-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |