CN116012702A - Remote sensing image scene level change detection method


Info

Publication number
CN116012702A
CN116012702A
Authority
CN
China
Prior art keywords
scene
change
level
network
pseudo
Prior art date
Legal status
Pending
Application number
CN202211553744.7A
Other languages
Chinese (zh)
Inventor
林聪
傅俊豪
方宏
徐佳伟
周梦潇
沈雨
Current Assignee
Nanjing Surveying And Mapping Research Institute Co ltd
Original Assignee
Nanjing Surveying And Mapping Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Surveying And Mapping Research Institute Co ltd filed Critical Nanjing Surveying And Mapping Research Institute Co ltd
Priority to CN202211553744.7A
Publication of CN116012702A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for detecting scene-level changes in remote sensing images comprises the following steps: 1) extracting the first three principal components of the bi-temporal images by principal component analysis, extracting scene depth features with a pretrained VGG-16, and generating scene-level pseudo-change map 1 by change vector analysis and fuzzy C-means clustering; 2) generating pixel-level classification maps of the bi-temporal images with a decision tree, generating a pixel-level change map by post-classification comparison, and, with a proposed pixel-level-to-scene-level conversion strategy, converting the pixel-level change map into scene-level pseudo-change map 2; 3) fusing pseudo-change map 1 and pseudo-change map 2 to generate reliable changed and unchanged training samples; 4) training a ternary change detection network with the automatically selected samples; 5) inputting all scene pairs into the trained network to generate the scene-level change detection result. The invention can effectively capture semantic-level changes between the bi-temporal images and provides a new approach for dynamically monitoring changes in urban functional areas.

Description

Remote sensing image scene level change detection method
Technical Field
The invention relates to the technical field of remote sensing mapping, in particular to a method for detecting scene level change of a remote sensing image.
Background
China has entered a key period of high-quality urbanization. Intense human activities such as urban renewal, infrastructure construction, and new-district development have caused significant changes in urban structure, buildings, and functions. Accurately monitoring and interpreting urban scene changes is of great significance for comprehensively assessing urban development trends, optimizing the territorial spatial structure, and supporting sustainable urban decision-making.
The urban change information obtained by traditional manual ground surveys can hardly meet current demands. Remote sensing Earth observation enables long-term, large-area, and periodic observation of the land surface and has transformed the way urban change information is acquired. Change detection methods can be classified as pixel-level, object-level, or scene-level according to the granularity of the analysis unit. Compared with pixel-level and object-level methods, scene-level change detection can capture higher-level semantic or functional changes, such as the conversion of an industrial area into a residential area, and has become a new research direction in the change detection field.
Scene-level change detection methods fall into two main categories: traditional methods and deep-learning-based methods. Traditional methods mainly use a bag-of-visual-words model or a topic model to extract mid-level image features for detecting scene-level changes. Although efficient, they cannot mine the deep features of high-resolution images and therefore tend to perform poorly on images with complex land-cover distributions. Deep learning methods, with their strong deep-feature extraction capability, are currently the mainstream approach to scene change detection. However, existing deep-learning scene change detection methods all require large numbers of training samples, offer a low degree of automation, and are time- and labor-intensive. It is therefore urgent to develop a fully automatic change detection method that can monitor scene-level changes more efficiently.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a remote sensing image scene-level change detection method. The method solves the problems of insufficient deep-feature mining and low automation in existing scene change detection methods, and achieves efficient and accurate extraction of scene changes from bi-temporal images.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a method for detecting scene level change of remote sensing images comprises the following steps:
s1: for the same region, acquiring two-stage remote sensing images, extracting first three principal component components of the two-stage images based on a principal component analysis method as parameters, inputting the first three principal component components into a pretrained VGG-16 neural network to extract scene depth features of the two-stage images, and generating a first scene-level pseudo-change map by utilizing a change vector analysis method and fuzzy C uniform clustering;
s2: aiming at the two-stage remote sensing images in the step S1, generating a pixel-level classification chart of the two-stage images based on a decision tree, generating a pixel-level change chart by using a comparison method after classification, and converting the pixel-level change chart into a second scene-level pseudo-change chart by using a pixel-level to scene-level conversion method;
s3: fusing the first scene-level pseudo-change diagram and the second scene-level pseudo-change diagram to generate a training sample which is changed and unchanged;
s4: constructing a ternary change detection network, and training through the samples in the step S3;
s5: and using the trained ternary change detection network for other scene recognition to generate a scene level change detection result.
In order to optimize the technical scheme, the specific measures adopted further comprise:
further, in step S1, the specific content of generating the first scene-level pseudo-change map by using the change vector analysis method and fuzzy C-average clustering is as follows:
$$x(m,n)=\begin{cases}\omega_c, & D(m,n)>K_2\\ \omega_u, & K_1\le D(m,n)\le K_2\\ \omega_n, & D(m,n)<K_1\end{cases}$$
wherein x(m,n) denotes the scene in row m, column n of the first scene-level pseudo-change map; D(m,n) denotes the difference feature value of the scene in row m, column n of the bi-temporal images computed by change vector analysis, from which the difference image comparing the bi-temporal images is formed; K_1 and K_2 (K_1 < K_2) are the intensity values of the two cluster centers of the difference image computed by fuzzy C-means clustering; and ω_c, ω_u, and ω_n denote the changed, uncertain, and unchanged classes, respectively.
Further, in step S2, the pixel-level classification maps of the bi-temporal images are generated with a decision tree as follows:
calculating the normalized difference water index NDWI of every pixel in the bi-temporal images, determining a threshold X1 by a histogram filtering method, and judging a pixel to be water when its NDWI exceeds the threshold X1 and non-water otherwise, thereby partitioning each of the bi-temporal images into water and non-water;
for the non-water pixels, further calculating the normalized difference vegetation index NDVI, determining a threshold X2 by the OTSU method, and judging a pixel to be vegetation when its NDVI exceeds the threshold X2 and impervious surface otherwise, thereby partitioning vegetation and impervious surface in each of the bi-temporal images;
and obtaining the pixel-level classification map of each of the bi-temporal images.
Further, in step S2, the pixel-level change map is converted into the second scene-level pseudo-change map by the pixel-level-to-scene-level conversion method as follows:
$$x(m,n)=\begin{cases}\omega_c, & \dfrac{N(m,n)}{a\times b}>\beta\\ \omega_n, & \text{otherwise}\end{cases}$$
wherein x(m,n) denotes the scene in row m, column n of the second scene-level pseudo-change map; N(m,n) denotes the number of changed pixels within the scene in row m, column n of the pixel-level change map; β denotes a user-defined threshold; a and b denote the numbers of rows and columns of pixels in the scene, respectively; and ω_c and ω_n denote the changed and unchanged classes, respectively.
Further, the specific content of step S3 is: fusing the first scene-level pseudo-change map and the second scene-level pseudo-change map; if a scene belongs to the changed class in both pseudo-change maps, it is selected as a changed scene; if a scene belongs to the unchanged class in both pseudo-change maps, it is selected as an unchanged scene; otherwise, the scene is regarded as uncertain and ignored; changed and unchanged training scene samples are thereby generated.
Further, the specific content of step S4 is as follows:
the ternary change detection network comprises a late fusion sub-network and an early fusion sub-network;
the late fusion sub-network inputs the training scene sample of one phase into the first basic feature extraction module, whose output is processed by the first Ghost multi-scale feature module to obtain a first feature value, and simultaneously inputs the training scene sample of the other phase into the second basic feature extraction module, whose output is processed by the second Ghost multi-scale feature module to obtain a second feature value; a feature difference is obtained from the first and second feature values and input into the first global average pooling layer to obtain a feature value reflecting the scene change probability, namely the sample change probability value of the late fusion sub-network;
the early fusion sub-network inputs the training scene sample into the third basic feature extraction module, whose output is processed by the third Ghost multi-scale feature module to obtain a third feature value; the third feature value is then input into the second global average pooling layer to obtain a feature value reflecting the scene change probability, namely the sample change probability value of the early fusion sub-network;
the sample change probability values of the late fusion sub-network and the early fusion sub-network are concatenated to obtain the final sample change probability value of the whole ternary neural network, thereby completing the construction and training of the ternary change detection network.
Further, the first basic feature extraction module, the second basic feature extraction module and the third basic feature extraction module each comprise four convolution layers and four pooling layers, with 32, 64 and 64 convolution kernels respectively, a kernel size of 3×3, and a ReLU activation function;
the first Ghost multi-scale feature module, the second Ghost multi-scale feature module and the third Ghost multi-scale feature module each comprise several Ghost convolution layers with different kernel sizes; each Ghost convolution layer outputs 64 features, with a compression ratio of 2 and a ReLU activation function.
Further, the loss function of the ternary change detection network, Loss, is calculated as:
$$\mathrm{Loss}=-\frac{1}{M}\sum_{i=1}^{M}\left[\ell\!\left(p_i,\hat p_i\right)+\ell\!\left(p_i,\hat p_i^{E}\right)+\ell\!\left(p_i,\hat p_i^{L}\right)\right],\qquad \ell(p,\hat p)=p\log\hat p+(1-p)\log(1-\hat p)$$
wherein M denotes the number of training scene samples; p_i denotes the true label of the i-th training sample; and $\hat p_i$, $\hat p_i^{E}$, and $\hat p_i^{L}$ denote the sample change probability values output by the ternary change detection network, the early fusion sub-network, and the late fusion sub-network, respectively.
A computer-readable storage medium storing a computer program that causes a computer to execute the scene-level-change detection method according to any one of the above.
An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the scene-level-change-detection method according to any of the preceding claims when the computer program is executed.
The beneficial effects of the invention are as follows: the method can effectively exploit both the temporal features between the two images and the deep features of each image, achieving scene-level change detection with high accuracy and high efficiency and better serving the dynamic monitoring of urban land use and functional areas. Meanwhile, it avoids the problems of existing deep-learning scene change detection methods, which require large numbers of training samples, offer a low degree of automation, and are time- and labor-intensive.
Drawings
FIG. 1 is a schematic diagram of the bi-temporal images and the scene-level change ground truth of the study area in an embodiment of the present invention.
Fig. 2 is a flow chart of the overall technical scheme of the invention.
FIG. 3 is a schematic diagram of a decision tree for pixel-level classification in accordance with the present invention.
FIG. 4 is a schematic diagram showing a fusion of two pre-detected pseudo-change maps according to the present invention.
FIG. 5 is a schematic diagram of a lightweight multi-scale feature extraction module in an embodiment of the invention.
FIG. 6 is a schematic diagram of the change detection results of the method according to the embodiment of the present invention and of the other comparison methods.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings.
The study area of the embodiment of the invention is part of the city of Nanjing; the bi-temporal images and the scene-level change ground truth are shown in FIG. 1. Each image has a size of 20480×20480 pixels and a spatial resolution of 0.5 m, and contains four bands: blue, green, red, and near-infrared.
Fig. 2 is a flowchart of the remote sensing image scene-level automatic change detection method based on knowledge-guided sample selection and a ternary neural network; the method mainly comprises the following steps, which are described in detail below.
Step 1: extract the first three principal components of the bi-temporal images by principal component analysis (each component is a linear combination of the spectral bands), extract scene depth features with a pretrained VGG-16, and generate scene-level pseudo-change map 1 by change vector analysis and fuzzy C-means clustering. The calculation is as follows:
$$x(m,n)=\begin{cases}\omega_c, & D(m,n)>K_2\\ \omega_u, & K_1\le D(m,n)\le K_2\\ \omega_n, & D(m,n)<K_1\end{cases}$$
wherein ω_c denotes the changed class, ω_n the unchanged class, and ω_u the uncertain class; D(m,n) is the difference feature value of the scene in row m, column n computed by change vector analysis; and K_1 and K_2 (K_1 < K_2) are the two cluster-center intensity values computed by fuzzy C-means clustering.
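By way of illustration only, a minimal Python sketch of this step is given below. It is a sketch under stated assumptions, not the patented implementation: the scene tiling, the library choices (scikit-learn, PyTorch/torchvision), the hand-rolled two-cluster fuzzy C-means, and every function and variable name are hypothetical.

```python
# Minimal sketch of step 1 (illustrative only): PCA -> VGG-16 depth features
# -> change vector analysis -> fuzzy C-means thresholding.
import numpy as np
import torch
from sklearn.decomposition import PCA
from torchvision import models, transforms

def top3_pca(img):
    """Project a (H, W, 4) multispectral image onto its first three
    principal components (each component mixes all four bands)."""
    h, w, b = img.shape
    comps = PCA(n_components=3).fit_transform(img.reshape(-1, b).astype(np.float32))
    return comps.reshape(h, w, 3)

# Pretrained VGG-16 backbone (torchvision >= 0.13 weight API assumed).
vgg = models.vgg16(weights="DEFAULT").features.eval()

def scene_feature(scene_rgb):
    """Pooled deep-feature vector of one scene tile of the PCA image
    (input scaling/normalization omitted for brevity)."""
    x = transforms.functional.to_tensor(scene_rgb).unsqueeze(0)
    with torch.no_grad():
        f = vgg(x)
    return f.mean(dim=(2, 3)).squeeze(0).numpy()

# Change vector analysis over the scene grid:
# D[m, n] = || feature_t1[m, n] - feature_t2[m, n] ||_2

def fcm_two_centers(d, m=2.0, iters=100):
    """Tiny two-cluster fuzzy C-means on the flattened difference image;
    returns the two cluster-center intensities K1 < K2."""
    v = np.array([d.min(), d.max()], dtype=np.float64)
    for _ in range(iters):
        dist = np.abs(d.reshape(-1, 1) - v) + 1e-12
        u = dist ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
        v = (u ** m * d.reshape(-1, 1)).sum(0) / (u ** m).sum(0)
    return np.sort(v)

def pseudo_change_map_1(D):
    """Apply the three-way rule above to the scene difference grid D."""
    K1, K2 = fcm_two_centers(D.ravel())
    labels = np.full(D.shape, "uncertain", dtype=object)
    labels[D > K2] = "changed"
    labels[D < K1] = "unchanged"
    return labels
```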
Step 2: generate pixel-level classification maps of the bi-temporal images with a decision tree, generate a pixel-level change map by post-classification comparison, and, with the proposed pixel-level-to-scene-level conversion strategy, convert the pixel-level change map into scene-level pseudo-change map 2. The conversion strategy is defined as follows:
$$x(m,n)'=\begin{cases}\omega_c', & \dfrac{N(m,n)}{a\times b}>\beta\\ \omega_n', & \text{otherwise}\end{cases}$$
wherein x(m,n)' is the scene in row m, column n; N(m,n) is the number of changed pixels within x(m,n)'; β denotes a user-defined threshold; a and b denote the numbers of rows and columns of pixels in the scene, respectively; and ω_c' and ω_n' denote the changed and unchanged classes, respectively. For the study area of this example, β was set to 0.25 by tuning.
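A hedged numpy sketch of this conversion strategy follows; the scene size and all identifiers are assumptions introduced for illustration.

```python
# Hedged numpy sketch of the pixel-to-scene conversion; the scene size
# a x b = 224 x 224 and all names are assumptions of this sketch.
import numpy as np

def pixel_to_scene(change_pixels, a=224, b=224, beta=0.25):
    """change_pixels: binary (H, W) pixel-level change map.
    Returns a boolean scene-level map (True = changed scene)."""
    H, W = change_pixels.shape
    rows, cols = H // a, W // b
    # Count changed pixels per (a x b) scene block.
    blocks = change_pixels[:rows * a, :cols * b].reshape(rows, a, cols, b)
    ratio = blocks.sum(axis=(1, 3)) / float(a * b)   # N(m, n) / (a * b)
    return ratio > beta
```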
The decision tree is constructed from the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI), as shown in FIG. 3.
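For illustration, a sketch of the decision-tree classification is given below; the band order is assumed, and OTSU is substituted for the histogram filtering step that determines X1, so the sketch approximates rather than reproduces the described procedure.

```python
# Illustrative sketch of the NDWI/NDVI decision tree of FIG. 3. The band
# order [blue, green, red, nir] is an assumption, and OTSU thresholding is
# used as a stand-in for the histogram filtering method that sets X1.
import numpy as np
from skimage.filters import threshold_otsu

def classify_pixels(img):
    """img: (H, W, 4) array. Returns 0 = water, 1 = vegetation,
    2 = impervious surface."""
    g, r, nir = img[..., 1], img[..., 2], img[..., 3]
    ndwi = (g - nir) / (g + nir + 1e-12)   # normalized difference water index
    ndvi = (nir - r) / (nir + r + 1e-12)   # normalized difference vegetation index
    X1 = threshold_otsu(ndwi)              # stand-in for histogram filtering
    water = ndwi > X1
    X2 = threshold_otsu(ndvi[~water])      # OTSU on the non-water pixels only
    out = np.full(img.shape[:2], 2, dtype=np.uint8)   # default: impervious
    out[~water & (ndvi > X2)] = 1                     # vegetation
    out[water] = 0                                    # water
    return out
```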
Step 3: fuse pseudo-change map 1 and pseudo-change map 2 to generate reliable changed and unchanged training samples. The fusion rule is: if a scene belongs to the changed class in both pseudo-change maps, it is selected as a changed scene; if a scene belongs to the unchanged class in both pseudo-change maps, it is selected as an unchanged scene; the remaining scenes are regarded as uncertain, as shown in FIG. 4.
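A minimal sketch of this fusion rule, assuming a simple integer label encoding:

```python
# Sketch of the sample-selection fusion rule of FIG. 4; the label encoding
# (1 = changed, 0 = unchanged, -1 = uncertain/ignored) is an assumption.
import numpy as np

def fuse_pseudo_maps(pcm1, pcm2):
    """pcm1, pcm2: scene-level pseudo-change maps encoded with 1/0
    (pcm1 may also contain -1 for its own uncertain scenes)."""
    fused = np.full(pcm1.shape, -1, dtype=np.int8)   # default: uncertain
    fused[(pcm1 == 1) & (pcm2 == 1)] = 1             # agreed changed sample
    fused[(pcm1 == 0) & (pcm2 == 0)] = 0             # agreed unchanged sample
    return fused
```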
Step 4: train the ternary change detection network with the automatically selected samples. The structure of the network is as follows:
1) The ternary scene change detection network consists of a late fusion sub-network and an early fusion sub-network. Each sub-network comprises a basic feature extraction module and a lightweight multi-scale feature extraction module. The basic feature extraction module consists of four convolution layers and four pooling layers, with 32, 64 and 64 convolution kernels, a kernel size of 3×3, and a ReLU activation function. The lightweight multi-scale feature extraction module combines Ghost convolutions with different kernel sizes and different pooling operations; each Ghost convolution layer outputs 64 features, with a compression ratio of 2 and a ReLU activation function. The specific structure is shown in FIG. 5.
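A compact PyTorch sketch of this architecture is given below as one possible reading of the text and FIG. 5, not as the patented implementation: the width of the fourth convolution layer, the linear classification heads, and the summation fusion inside the multi-scale module are assumptions, and the different pooling modes of the Ghost module are omitted.

```python
# Compact PyTorch sketch of the ternary change detection network; layer
# widths follow the text where given (32, 64, 64; the fourth width is an
# assumption), and the heads and the multi-scale fusion are assumptions.
import torch
import torch.nn as nn

class BasicExtractor(nn.Sequential):
    """Four 3x3 convolution layers, each followed by pooling, ReLU activations."""
    def __init__(self, in_ch):
        layers, chs = [], [in_ch, 32, 64, 64, 64]
        for i in range(4):
            layers += [nn.Conv2d(chs[i], chs[i + 1], 3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        super().__init__(*layers)

class GhostConv(nn.Module):
    """Ghost convolution, compression ratio 2: half of the 64 output
    features come from a cheap depthwise convolution."""
    def __init__(self, in_ch, out_ch=64, k=3):
        super().__init__()
        half = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, half, k, padding=k // 2), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1, groups=half),
            nn.ReLU(inplace=True))
    def forward(self, x):
        p = self.primary(x)
        return torch.cat([p, self.cheap(p)], dim=1)

class GhostMultiScale(nn.Module):
    """Ghost convolutions with different kernel sizes; summation fusion is
    an assumption (the pooling variants of FIG. 5 are omitted)."""
    def __init__(self, in_ch, kernels=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(GhostConv(in_ch, 64, k) for k in kernels)
    def forward(self, x):
        return sum(b(x) for b in self.branches)

class TernaryChangeNet(nn.Module):
    def __init__(self, bands=4):
        super().__init__()
        # Late fusion: one extractor per date, then a feature difference.
        self.ext1 = nn.Sequential(BasicExtractor(bands), GhostMultiScale(64))
        self.ext2 = nn.Sequential(BasicExtractor(bands), GhostMultiScale(64))
        # Early fusion: the two dates stacked along the channel axis.
        self.ext3 = nn.Sequential(BasicExtractor(2 * bands), GhostMultiScale(64))
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.head_late = nn.Linear(64, 1)
        self.head_early = nn.Linear(64, 1)
        self.head_main = nn.Linear(128, 1)
    def forward(self, x1, x2):
        late = self.gap(torch.abs(self.ext1(x1) - self.ext2(x2))).flatten(1)
        early = self.gap(self.ext3(torch.cat([x1, x2], dim=1))).flatten(1)
        p_late = torch.sigmoid(self.head_late(late))
        p_early = torch.sigmoid(self.head_early(early))
        p_main = torch.sigmoid(self.head_main(torch.cat([late, early], dim=1)))
        return p_main, p_early, p_late
```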
2) The network incorporates a deep supervision strategy to calculate the loss function. The specific calculation mode is as follows:
$$\mathrm{Loss}=-\frac{1}{M}\sum_{i=1}^{M}\left[\ell\!\left(p_i,\hat p_i\right)+\ell\!\left(p_i,\hat p_i^{E}\right)+\ell\!\left(p_i,\hat p_i^{L}\right)\right],\qquad \ell(p,\hat p)=p\log\hat p+(1-p)\log(1-\hat p)$$
wherein M is the number of training samples, p_i is the true label of the i-th sample, and $\hat p_i$, $\hat p_i^{E}$, and $\hat p_i^{L}$ are the sample change probability values output by the ternary neural network, the early fusion sub-network, and the late fusion sub-network, respectively.
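A minimal sketch of this deeply supervised loss, assuming binary cross-entropy on each of the three outputs (consistent with the variables defined above; the exact formula is rendered as an image in the original publication):

```python
# Sketch of the deeply supervised loss, assuming binary cross-entropy on
# each of the three outputs; this BCE form matches the stated variables.
import torch.nn.functional as F

def ternary_loss(p_main, p_early, p_late, target):
    """target: float tensor of 0/1 labels, same shape as each output."""
    return (F.binary_cross_entropy(p_main, target)
            + F.binary_cross_entropy(p_early, target)
            + F.binary_cross_entropy(p_late, target))
```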
Step 5: input all scene pairs into the trained network to generate the scene-level change detection result. To better illustrate the advantages of the invention, ten methods were used for comparison; the detection results of the different methods on the example study area are shown in FIG. 6, and the detection accuracies are listed in Table 1. The invention detects scene-level changes better: the F1 score and overall accuracy (OA) of its results are higher than those of all comparison methods.
TABLE 1
(Table 1 is presented as an image in the original publication; the per-method accuracy values are not reproducible here.)
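As a usage illustration of step 5, a minimal inference loop over the trained network is sketched below; the helper name, the batch shape, and the 0.5 decision threshold are assumptions.

```python
# Illustrative inference loop for step 5; all names are hypothetical.
import torch

@torch.no_grad()
def detect_changes(model, scene_pairs):
    """scene_pairs: iterable of (x1, x2) tensors of shape (1, 4, H, W)."""
    model.eval()
    results = []
    for x1, x2 in scene_pairs:
        p_main, _, _ = model(x1, x2)
        results.append(bool(p_main.item() > 0.5))  # True = changed scene
    return results
```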
The ternary change detection network is a hybrid network: it integrates an early fusion sub-network and a late fusion sub-network and makes full use of both the temporal features between the bi-temporal images and the deep features of each image.
It should be noted that terms such as "upper", "lower", "left", "right", "front", and "rear" are used for descriptive purposes only and are not intended to limit the scope in which the invention may be practiced; the relative relationships they describe may be altered or adjusted without materially changing the technical content of the invention.
The above is only a preferred embodiment of the present invention, and the scope of protection is not limited to this example; all technical solutions within the concept of the present invention fall within its scope of protection. It should be noted that modifications and adaptations that do not depart from the principles of the invention are also regarded as within its scope of protection.

Claims (10)

1. A method for detecting scene-level changes of remote sensing images, characterized by comprising the following steps:
S1: for the same region, acquiring bi-temporal remote sensing images, extracting the first three principal components of the bi-temporal images by principal component analysis, inputting them into a pretrained VGG-16 neural network to extract the scene depth features of the bi-temporal images, and generating a first scene-level pseudo-change map by change vector analysis and fuzzy C-means clustering;
S2: for the bi-temporal remote sensing images of step S1, generating pixel-level classification maps of the bi-temporal images with a decision tree, generating a pixel-level change map by post-classification comparison, and converting the pixel-level change map into a second scene-level pseudo-change map with a pixel-level-to-scene-level conversion method;
S3: fusing the first scene-level pseudo-change map and the second scene-level pseudo-change map to generate changed and unchanged training samples;
S4: constructing a ternary change detection network and training it with the samples from step S3;
S5: inputting all scene pairs into the trained ternary change detection network to generate the scene-level change detection result.
2. The method for detecting scene-level changes of remote sensing images according to claim 1, wherein in step S1, the first scene-level pseudo-change map is generated by change vector analysis and fuzzy C-means clustering as follows:
$$x(m,n)=\begin{cases}\omega_c, & D(m,n)>K_2\\ \omega_u, & K_1\le D(m,n)\le K_2\\ \omega_n, & D(m,n)<K_1\end{cases}$$
wherein x(m,n) denotes the scene in row m, column n of the first scene-level pseudo-change map; D(m,n) denotes the difference feature value of the scene in row m, column n of the bi-temporal images computed by change vector analysis, from which the difference image comparing the bi-temporal images is formed; K_1 and K_2 (K_1 < K_2) are the intensity values of the two cluster centers of the difference image computed by fuzzy C-means clustering; and ω_c, ω_u, and ω_n denote the changed, uncertain, and unchanged classes, respectively.
3. The method for detecting scene-level changes of remote sensing images according to claim 1, wherein in step S2, the pixel-level classification maps of the bi-temporal images are generated with a decision tree as follows:
calculating the normalized difference water index NDWI of every pixel in the bi-temporal images, determining a threshold X1 by a histogram filtering method, and judging a pixel to be water when its NDWI exceeds the threshold X1 and non-water otherwise, thereby partitioning each of the bi-temporal images into water and non-water;
for the non-water pixels, further calculating the normalized difference vegetation index NDVI, determining a threshold X2 by the OTSU method, and judging a pixel to be vegetation when its NDVI exceeds the threshold X2 and impervious surface otherwise, thereby partitioning vegetation and impervious surface in each of the bi-temporal images;
and obtaining the pixel-level classification map of each of the bi-temporal images.
4. The method for detecting scene-level changes of remote sensing images according to claim 1, wherein in step S2, the pixel-level change map is converted into the second scene-level pseudo-change map by the pixel-level-to-scene-level conversion method as follows:
$$x(m,n)'=\begin{cases}\omega_c', & \dfrac{N(m,n)}{a\times b}>\beta\\ \omega_n', & \text{otherwise}\end{cases}$$
wherein x(m,n)' denotes the scene in row m, column n of the second scene-level pseudo-change map; N(m,n) denotes the number of changed pixels within the scene in row m, column n of the pixel-level change map; β denotes a user-defined threshold; a and b denote the numbers of rows and columns of pixels in the scene, respectively; and ω_c' and ω_n' denote the changed and unchanged classes, respectively.
5. The method for detecting scene-level changes of remote sensing images according to claim 1, wherein the specific content of step S3 is: fusing the first scene-level pseudo-change map and the second scene-level pseudo-change map; if a scene belongs to the changed class in both pseudo-change maps, it is selected as a changed scene; if a scene belongs to the unchanged class in both pseudo-change maps, it is selected as an unchanged scene; otherwise, the scene is regarded as uncertain and ignored; changed and unchanged training scene samples are thereby generated.
6. The method for detecting scene-level changes of remote sensing images according to claim 1, wherein the specific content of step S4 is as follows:
the ternary change detection network comprises a late fusion sub-network and an early fusion sub-network;
the late fusion sub-network inputs the training scene sample of one phase into the first basic feature extraction module, whose output is processed by the first Ghost multi-scale feature module to obtain a first feature value, and simultaneously inputs the training scene sample of the other phase into the second basic feature extraction module, whose output is processed by the second Ghost multi-scale feature module to obtain a second feature value; a feature difference is obtained from the first and second feature values and input into the first global average pooling layer to obtain a feature value reflecting the scene change probability, namely the sample change probability value of the late fusion sub-network;
the early fusion sub-network inputs the training scene sample into the third basic feature extraction module, whose output is processed by the third Ghost multi-scale feature module to obtain a third feature value; the third feature value is then input into the second global average pooling layer to obtain a feature value reflecting the scene change probability, namely the sample change probability value of the early fusion sub-network;
the sample change probability values of the late fusion sub-network and the early fusion sub-network are concatenated to obtain the final sample change probability value of the whole ternary neural network, thereby completing the construction and training of the ternary change detection network.
7. The method of claim 6, wherein:
the first basic feature extraction module, the second basic feature extraction module and the third basic feature extraction module each comprise four convolution layers and four pooling layers, with 32, 64 and 64 convolution kernels respectively, a kernel size of 3×3, and a ReLU activation function;
the first Ghost multi-scale feature module, the second Ghost multi-scale feature module and the third Ghost multi-scale feature module each comprise several Ghost convolution layers with different kernel sizes; each Ghost convolution layer outputs 64 features, with a compression ratio of 2 and a ReLU activation function.
8. The method for detecting scene-level changes of remote sensing images according to claim 6, wherein the loss function of the ternary change detection network, Loss, is calculated as:
$$\mathrm{Loss}=-\frac{1}{M}\sum_{i=1}^{M}\left[\ell\!\left(p_i,\hat p_i\right)+\ell\!\left(p_i,\hat p_i^{E}\right)+\ell\!\left(p_i,\hat p_i^{L}\right)\right],\qquad \ell(p,\hat p)=p\log\hat p+(1-p)\log(1-\hat p)$$
wherein M denotes the number of training scene samples; p_i denotes the true label of the i-th training sample; and $\hat p_i$, $\hat p_i^{E}$, and $\hat p_i^{L}$ denote the sample change probability values output by the ternary change detection network, the early fusion sub-network, and the late fusion sub-network, respectively.
9. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the scene-level change detection method according to any one of claims 1 to 8.
10. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the scene-level-change detection method according to any of claims 1-8 when the computer program is executed.
CN202211553744.7A 2022-12-06 2022-12-06 Remote sensing image scene level change detection method Pending CN116012702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211553744.7A CN116012702A (en) 2022-12-06 2022-12-06 Remote sensing image scene level change detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211553744.7A CN116012702A (en) 2022-12-06 2022-12-06 Remote sensing image scene level change detection method

Publications (1)

Publication Number Publication Date
CN116012702A (en) 2023-04-25

Family

ID=86023839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211553744.7A Pending CN116012702A (en) 2022-12-06 2022-12-06 Remote sensing image scene level change detection method

Country Status (1)

Country Link
CN (1) CN116012702A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117612020A (en) * 2024-01-24 2024-02-27 西安宇速防务集团有限公司 SGAN-based detection method for resisting neural network remote sensing image element change


Similar Documents

Publication Publication Date Title
Song et al. Automated pavement crack damage detection using deep multiscale convolutional features
Zhang et al. Supervision by fusion: Towards unsupervised learning of deep salient object detector
CN114119582B (en) Synthetic aperture radar image target detection method
CN113160234B (en) Unsupervised remote sensing image semantic segmentation method based on super-resolution and domain self-adaptation
CN114565860B (en) Multi-dimensional reinforcement learning synthetic aperture radar image target detection method
CN111461083A (en) Rapid vehicle detection method based on deep learning
CN108537824B (en) Feature map enhanced network structure optimization method based on alternating deconvolution and convolution
CN111753677B (en) Multi-angle remote sensing ship image target detection method based on characteristic pyramid structure
CN111667030B (en) Method, system and storage medium for realizing remote sensing image target detection based on deep neural network
CN111753682B (en) Hoisting area dynamic monitoring method based on target detection algorithm
CN110334719B (en) Method and system for extracting building image in remote sensing image
CN113609896A (en) Object-level remote sensing change detection method and system based on dual-correlation attention
Wang et al. The poor generalization of deep convolutional networks to aerial imagery from new geographic locations: an empirical study with solar array detection
Xia et al. A deep Siamese postclassification fusion network for semantic change detection
CN111461129B (en) Context prior-based scene segmentation method and system
CN113313094B (en) Vehicle-mounted image target detection method and system based on convolutional neural network
CN116342894B (en) GIS infrared feature recognition system and method based on improved YOLOv5
Liu et al. Survey of road extraction methods in remote sensing images based on deep learning
Chen et al. ASF-Net: Adaptive screening feature network for building footprint extraction from remote-sensing images
CN112101153A (en) Remote sensing target detection method based on receptive field module and multiple characteristic pyramid
CN116012702A (en) Remote sensing image scene level change detection method
CN116168240A (en) Arbitrary-direction dense ship target detection method based on attention enhancement
Fang et al. Automatic urban scene-level binary change detection based on a novel sample selection approach and advanced triplet neural network
CN111881915A (en) Satellite video target intelligent detection method based on multiple prior information constraints
Cui et al. Deep saliency detection via spatial-wise dilated convolutional attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination