CN109977968B - SAR change detection method based on deep learning classification comparison

SAR change detection method based on deep learning classification comparison

Info

Publication number
CN109977968B
CN109977968B (application CN201910226864.8A, publication CN201910226864A)
Authority
CN
China
Prior art keywords
sar
network
features
image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910226864.8A
Other languages
Chinese (zh)
Other versions
CN109977968A (en)
Inventor
杨学志
吴聪聪
汪骏
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN201910226864.8A priority Critical patent/CN109977968B/en
Publication of CN109977968A publication Critical patent/CN109977968A/en
Application granted granted Critical
Publication of CN109977968B publication Critical patent/CN109977968B/en
Legal status: Expired - Fee Related


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features


Abstract

The invention discloses a SAR change detection method based on deep learning followed by post-classification comparison, comprising the following steps. The MDPS-CNN, which has two network channels whose weights need not be shared, extracts the deep features of the SAR images acquired before and after the disaster. Meanwhile, the texture features and gray-level features of the two original SAR images are computed and then fused with the deep features inside the MDPS-CNN network. In addition, to balance network depth against accuracy for a given SAR image, the MDPS-CNN is optimized with segmented back-propagation, which automatically selects an appropriate number of layers. The two fused feature maps are classified by thresholding. Finally, the two classification maps are compared to obtain the final water-area change detection map.

Description

SAR change detection method based on deep learning classification comparison
Technical Field
The invention relates to the field of SAR image change detection, and in particular to a SAR change detection method based on deep learning and post-classification comparison.
Background
In recent years, large-scale natural disasters such as earthquakes and tsunamis have seriously threatened human life and property. Detecting and analysing the changes in a disaster area is of great significance for post-disaster rescue and reconstruction, and obtaining disaster information from remote sensing data is an important research approach. Synthetic Aperture Radar (SAR), being unaffected by weather and by day-night changes, is a powerful tool for monitoring the state of a disaster area. However, the multiplicative speckle noise inherent to SAR images makes change detection on them more difficult.
Generally, SAR image change detection methods can be divided into two categories. (1) Post-comparison analysis first generates a difference image (DI) between the SAR images acquired before and after the disaster, and then classifies the DI to obtain the change detection result; it is therefore also called DI analysis. However, the quality of the DI limits the final detection result. (2) Post-classification comparison classifies the two SAR images separately and then compares the classification results to reveal the changed and unchanged areas. Because the two images may come from different imaging times and different sensors, the first category requires radiometric normalization, a problem the second category avoids; its drawbacks are that classification errors accumulate and that both SAR images must be classified with high accuracy. Weighing these trade-offs, we propose a method based on the latter idea.
The invention aims to provide a SAR image change detection method based on the MDPS-CNN, used to detect and evaluate water-area changes before and after a disaster. The trained MDPS-CNN model extracts the deep features of the SAR images. The texture and gray-level features of the two original SAR images are then obtained and fused with the deep features in the MDPS-CNN network. The two fused feature maps are classified by thresholding, and after classification the disaster area is detected by differencing the classification maps. Finally, comparison with several classical change detection methods shows, by quantitative analysis, that change detection with the MDPS-CNN achieves higher detection accuracy.
In order to achieve this purpose, the technical solution adopted by the invention is as follows:
A SAR change detection method based on deep learning and post-classification comparison, characterized by comprising the following steps:
Step 1) Feature extraction, with the following specific steps:
1a) Depth feature extraction:
Deep features are rarely used directly in deep learning: most networks output a classification result map from the predicted probabilities. We therefore decided to start from the classification pipeline and process the deep features explicitly. Taking one channel as an example, the deep feature of the SAR image is extracted as follows: an input SAR image of size M x N is trained with the MDPS-CNN network, and every layer of the network outputs its feature map; assuming the network has C layers in total, the feature map output just before the fully connected layer is recorded as the deep feature of the SAR image;
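The pre-fully-connected feature-map idea can be sketched in plain NumPy. The toy layers below (random 3 x 3 kernels, ReLU, 2 x 2 max pooling) are illustrative stand-ins for the trained MDPS-CNN, not the patent's actual network:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = k.shape
    windows = np.lib.stride_tricks.sliding_window_view(x, (kh, kw))
    return np.einsum("ijkl,kl->ij", windows, k)

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    h, w = x.shape
    return x[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def deep_feature(image, kernels):
    """Run the convolutional stack and return the feature map that would
    feed the fully connected layer -- the 'deep feature' of the image."""
    fm = image
    for k in kernels:
        fm = maxpool2(relu(conv2d(fm, k)))
    return fm

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))              # toy M x N SAR patch
kernels = [rng.standard_normal((3, 3)) for _ in range(2)]
feat = deep_feature(img, kernels)                # feature map before the FC layer
```

In a real deep-learning framework the same effect is obtained by reading out the activations of the last layer before the classifier instead of the class probabilities.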
1b) Texture feature extraction:
Gabor texture features have good scale and orientation selectivity and have been applied successfully to image segmentation and to target detection and recognition. Multi-orientation, multi-scale Gabor texture features are fused. The 2-D Gabor basis function is usually expressed as a Gaussian envelope modulated by a complex sinusoid;
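A minimal sketch of the Gaussian-modulated complex sinusoid and of a small multi-scale, multi-orientation bank; the kernel size, frequencies, and sigmas below are arbitrary illustrative choices, not parameters given in the patent:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """2-D Gabor kernel: Gaussian envelope modulated by a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotate to orientation theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier

# A small multi-scale (two sigmas), multi-orientation (four thetas) bank.
bank = [gabor_kernel(15, sigma, 0.2, theta)
        for sigma in (2.0, 4.0)
        for theta in np.linspace(0, np.pi, 4, endpoint=False)]

def gabor_response(patch, kernel):
    """Magnitude response of one kernel at the centre of a patch."""
    return abs((patch * np.conj(kernel)).sum())

patch = gabor_kernel(15, 4.0, 0.2, 0.0).real            # texture aligned with theta = 0
r_match = gabor_response(patch, bank[4])                # same scale and orientation
r_cross = gabor_response(patch, bank[6])                # orthogonal orientation
```

The orientation selectivity shows up directly: `r_match` is much larger than `r_cross`, which is why responses from several orientations and scales are stacked into the texture feature vector.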
1c) Gray-level feature extraction:
The gray-level features of a SAR image reflect how ground objects backscatter the radar wave: objects with different backscattering behaviour show different gray levels in the SAR image. The gray-level information is extracted with neighbourhood statistics, i.e. the mean and the standard deviation of the gray levels in a window around each pixel are computed and used as that pixel's gray-level feature;
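The neighbourhood statistic can be computed directly; the 3 x 3 window and reflect padding below are assumptions, since the patent only says "a certain range" around the pixel:

```python
import numpy as np

def gray_features(image, win=3):
    """Per-pixel neighbourhood mean and standard deviation of the gray level."""
    pad = win // 2
    padded = np.pad(image, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(-2, -1))
    std = windows.std(axis=(-2, -1))
    return mean, std

img = np.arange(25, dtype=float).reshape(5, 5)   # toy gray-level image
mean, std = gray_features(img, win=3)
```

The two maps have the same spatial size as the input, so they stack cleanly with the deep and texture features at the fusion stage.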
1d) Feature fusion:
When training the designed network we need an appropriate objective function whose optimization yields good performance on the feature fusion task, preparing the subsequent classification. The purpose of training is to find the fusion function F_w(X). Assume the SAR image X has size h x w x c, where h and w are the spatial dimensions and c is the number of channels, and X = {x(i, j) | 1 <= i <= h, 1 <= j <= w}. Thus x(i, j) is the intensity vector at position (i, j) in the image; its dimension c equals the number of channels of the input image. In the proposed method the SAR image has a single channel, so c = 1; let X1 and X2 be the SAR images fed to the two networks.
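Since the text leaves the form of F_w unspecified, a simple per-pixel concatenation fusion illustrates the h x w x c bookkeeping; the channel counts below are made-up values for the three feature families:

```python
import numpy as np

def fuse(feature_maps):
    """Stack per-pixel feature maps of equal spatial size into a single
    h x w x c fused feature image (simple concatenation fusion)."""
    maps = [np.atleast_3d(f) for f in feature_maps]
    return np.concatenate(maps, axis=-1)

h, w = 8, 8
deep = np.random.rand(h, w, 4)       # e.g. 4 deep-feature channels
texture = np.random.rand(h, w, 8)    # e.g. 8 Gabor responses (4 orientations x 2 scales)
gray = np.random.rand(h, w, 2)       # neighbourhood mean and std
fused = fuse([deep, texture, gray])  # h x w x 14 fused feature image
```

A learned fusion F_w would replace the plain concatenation with trainable weights, but the shape constraints are the same.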
Step 2) MDPS-CNN network construction, with the following specific steps:
2a) Construction of the multi-depth CNN network:
The multi-depth CNN is a network that uses several depths to classify ground objects. It contains register (input), dropout, convolutional, max-pooling, and fully connected layers. Compared with a conventional convolutional neural network, the proposed network has two key points: multi-depth forward propagation (M-FP) and segmented back-propagation (P-BP);
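M-FP can be sketched as a single forward pass that exposes both a shallow exit and a deep exit. The "layers" below are toy stand-ins (ReLU plus 2x down-sampling in place of conv + max-pool stages), not the patented architecture:

```python
import numpy as np

def forward_multi_depth(x, shallow_stack, deep_stack):
    """Multi-depth forward propagation: run the shared shallow stack and
    record its output as the shallow exit, then continue through the extra
    layers for the deep exit."""
    for layer in shallow_stack:
        x = layer(x)
    shallow_out = x
    for layer in deep_stack:
        x = layer(x)
    return shallow_out, x

relu = lambda t: np.maximum(t, 0.0)
halve = lambda t: t[::2, ::2]              # stand-in for 2 x 2 max pooling
shallow = [relu, halve, relu, halve]       # two conv+pool stages (convs omitted)
deep = [relu, halve]                       # one further conv+pool stage

x = np.random.rand(16, 16)
s, d = forward_multi_depth(x, shallow, deep)
```

Having both exits available in one pass is what lets the later error comparison (error_shallow vs. error_deep) choose the depth without retraining.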
2b) Pseudo-Siamese network:
For processing a single SAR image, one of many single-channel CNNs could be chosen; since change detection feeds two SAR images into the network, a two-channel pseudo-Siamese network is selected.
The pseudo-Siamese network is obtained from the Siamese network by not sharing the weights between corresponding layers of the two channels. Both the Siamese and the pseudo-Siamese network are two-channel models in which each channel can be treated as a complete CNN; the difference between them is whether the weights of corresponding layers are shared. We choose the pseudo-Siamese network because the SAR images we process are acquired at different times and possibly by different sensors, so the weights of corresponding layers in the two channels should not necessarily be shared;
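The unshared-weights point can be shown by giving each branch its own kernel list. The two-layer valid-convolution branch below is an illustrative assumption, not the patent's layer configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_branch():
    """One channel of the pseudo-Siamese network, with its own weights."""
    return [rng.standard_normal((3, 3)) for _ in range(2)]

def branch_forward(x, kernels):
    for k in kernels:
        # valid convolution followed by ReLU
        win = np.lib.stride_tricks.sliding_window_view(x, k.shape)
        x = np.maximum(np.einsum("ijkl,kl->ij", win, k), 0.0)
    return x

branch1, branch2 = make_branch(), make_branch()     # weights NOT shared
x1 = rng.standard_normal((12, 12))                  # pre-disaster image
x2 = rng.standard_normal((12, 12))                  # post-disaster image
f1 = branch_forward(x1, branch1)
f2 = branch_forward(x2, branch2)
```

A Siamese network would pass `branch1` to both calls; the pseudo-Siamese variant keeps the two weight sets independent so each channel can adapt to its own sensor and acquisition time.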
2c) MDPS-CNN network:
The network combines the pseudo-Siamese network with the multi-depth network; the SAR image, its texture features, and its gray-level features are the inputs of the input layer. Taking one channel as an example, the SAR image is first fed into the MDPS-CNN network. After two convolutional layers and two max-pooling layers the output is marked and its error recorded, denoted error_shallow; after one further convolutional layer and one max-pooling layer the output is marked again and its error recorded, denoted error_deep. Finally, error_shallow and error_deep are compared and the smaller one is selected for the following steps, which are: the layer with the smaller error provides the deep feature of the SAR image in the network; the texture and gray-level features obtained from the SAR image are fused with this deep feature; a feature map is output and classified by thresholding. Finally, the classification maps obtained by the two channels are compared to generate the difference map;
2d) Selection of the number of layers of the MDPS-CNN network:
The SAR image is fed into the MDPS-CNN network; after two convolutional layers and two max-pooling layers the output is marked and its error recorded as error_shallow, and after one further convolutional layer and one max-pooling layer the output is marked again and its error recorded as error_deep. Finally, error_shallow and error_deep are compared and the exit with the smaller error is used for the following steps;
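The depth selection then reduces to comparing the two recorded errors (here assumed to come from labels on a held-out set; the numeric values are made up):

```python
def select_depth(error_shallow, error_deep):
    """Pick the exit with the smaller error; that exit's feature map is the
    one used for fusion and classification."""
    if error_shallow <= error_deep:
        return ("shallow", error_shallow)
    return ("deep", error_deep)

exit_name, err = select_depth(0.12, 0.09)   # deep exit wins in this toy case
```

Because the comparison is made per image pair, the effective number of layers adapts automatically to the data, which is the point of the segmented back-propagation design.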
Step 3) Post-classification detection, as follows:
The obtained deep, texture, and gray-level features are fused in the MDPS-CNN, and each fused result is classified by thresholding to obtain F_w(X1) and F_w(X2). The classification result maps obtained by the two channels are then compared to generate the difference map, i.e. the final change map F_w(X1, X2).
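A minimal sketch of the thresholding and post-classification comparison; the threshold 0.5 and the toy feature maps are made-up values standing in for the fused MDPS-CNN outputs:

```python
import numpy as np

def threshold_classify(feature_map, t):
    """Binary classification of a fused feature map by a global threshold."""
    return (feature_map > t).astype(np.uint8)

def change_map(class1, class2):
    """Post-classification comparison: pixels whose class differs are 'changed'."""
    return (class1 != class2).astype(np.uint8)

f1 = np.array([[0.1, 0.9], [0.8, 0.2]])   # fused map, pre-disaster channel
f2 = np.array([[0.1, 0.9], [0.1, 0.9]])   # fused map, post-disaster channel
c1 = threshold_classify(f1, 0.5)
c2 = threshold_classify(f2, 0.5)
cm = change_map(c1, c2)                    # 1 = changed, 0 = unchanged
```

Because the comparison happens between classification maps rather than raw intensities, differences in radiometry between the two acquisitions never enter the change map.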
Compared with the prior art, the invention has the following advantages:
1) Deep features are extracted with deep learning, so preprocessing such as filtering can be avoided and robustness to noise is improved.
2) The feature maps are fused, so boundaries are preserved on top of the original deep-learning classification.
Drawings
FIG. 1 is a flow chart of the SAR change detection method based on deep learning and post-classification comparison according to the present invention.
FIG. 2 is a framework diagram of the SAR change detection method based on deep learning and post-classification comparison.
FIG. 3 shows the detection results of the SAR change detection method based on deep learning and post-classification comparison of the present invention.
Detailed Description
As shown in fig. 1, the SAR change detection method based on deep learning and post-classification comparison proceeds through the feature extraction, MDPS-CNN construction, and post-classification detection steps described above.
This completes the SAR change detection by deep learning and post-classification comparison.
The effectiveness of the invention is further illustrated with airborne SAR image experiments.
Airborne polarized SAR image comparison experiment:
1. Experimental setup:
The experimental data are a SAR image of a reach of the Huaihe River in Anhui Province acquired by the Gaofen-3 (GF-3) system, with 4 looks, a resolution of 10 m x 10 m, and a size of 2058 x 2578 pixels. The comparison experiments implement a Markov random field method, a classic CNN algorithm, and a threshold algorithm, respectively.
2. Result analysis:
As can be seen from fig. 2, the CNN algorithm smooths the image well, but it also smooths the texture of non-water areas in the SAR image, losing image detail.
As can be seen from fig. 3, the proposed SAR change detection method based on deep learning and post-classification comparison not only suppresses speckle noise in homogeneous regions but also better preserves the structural features of regions rich in image detail.

Claims (5)

1. A SAR change detection method based on deep learning and post-classification comparison, characterized by comprising the following steps:
step 1) extracting the deep features of the SAR image with a trained MDPS-CNN model, in preparation for feature fusion;
step 2) obtaining the texture features and gray-level features of the two original SAR images, and fusing the texture, gray-level, and deep features in the MDPS-CNN network;
step 3) generating a difference map from the fused images;
wherein the MDPS-CNN model combines a pseudo-Siamese network with a multi-depth network and uses two channels; the SAR image and its texture and gray-level features are the inputs of the input layer; the SAR image is first fed into the MDPS-CNN network; after two convolutional layers and two max-pooling layers the output is marked and its error recorded, denoted error_shallow; after one further convolutional layer and one max-pooling layer the output is marked again and its error recorded, denoted error_deep; error_shallow and error_deep are compared and the smaller one is selected; the layer with the smaller error provides the deep feature of the SAR image in the network, which is fused with the texture and gray-level features obtained from the SAR image; a feature map is output and classified by thresholding; and the classification maps obtained by the two channels are compared to generate the difference map.
2. The method for SAR change detection based on deep learning and post-classification comparison according to claim 1, wherein the deep feature extraction of step 1) is: starting from the classification pipeline, the deep features are processed; a single channel is trained with the MDPS-CNN network on an input SAR image of size M x N, and every layer of the network outputs its feature map; assuming the network has C layers in total, the feature map output before the fully connected layer is selected and recorded as the deep feature.
3. The method for SAR change detection based on deep learning and post-classification comparison according to claim 1, wherein the texture feature extraction of step 2) is: multi-orientation, multi-scale Gabor texture features are fused; the 2-D Gabor basis function is usually expressed as a Gaussian envelope modulated by a complex sinusoid.
4. The method for SAR change detection based on deep learning and post-classification comparison according to claim 1, wherein the gray-level feature extraction of step 2) is: image gray-level information is extracted with neighbourhood statistics; the mean and standard deviation of the gray levels in a window around each pixel are computed and used as that pixel's gray-level feature.
5. The method for SAR change detection based on deep learning and post-classification comparison according to claim 1, wherein the feature fusion of step 2) is: during training of the designed network an objective function is determined whose optimization performs well on the feature fusion task and prepares the subsequent classification; the purpose of training is to find the fusion function F_w(X); the SAR image X has size h x w x c, where h and w are the spatial dimensions and c is the number of channels, and X = {x(i, j) | 1 <= i <= h, 1 <= j <= w}; x(i, j) is the intensity vector at position (i, j) in the image, its dimension c being equal to the number of channels of the input image; the SAR image has one channel, so c = 1; let X1 and X2 be the SAR images fed to the two networks.
CN201910226864.8A 2019-03-25 2019-03-25 SAR change detection method based on deep learning classification comparison Expired - Fee Related CN109977968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910226864.8A CN109977968B (en) 2019-03-25 2019-03-25 SAR change detection method based on deep learning classification comparison

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910226864.8A CN109977968B (en) 2019-03-25 2019-03-25 SAR change detection method based on deep learning classification comparison

Publications (2)

Publication Number Publication Date
CN109977968A CN109977968A (en) 2019-07-05
CN109977968B true CN109977968B (en) 2021-03-12

Family

ID=67080299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910226864.8A Expired - Fee Related CN109977968B (en) 2019-03-25 2019-03-25 SAR change detection method based on deep learning classification comparison

Country Status (1)

Country Link
CN (1) CN109977968B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570397B (en) * 2019-08-13 2020-12-04 创新奇智(重庆)科技有限公司 Method for detecting ready-made clothes printing defects based on deep learning template matching algorithm
CN111914686B (en) * 2020-07-15 2022-10-18 云南电网有限责任公司带电作业分公司 SAR remote sensing image water area extraction method, device and system based on surrounding area association and pattern recognition
CN112837221B (en) * 2021-01-26 2022-08-19 合肥工业大学 SAR image super-resolution reconstruction method based on dual discrimination
CN113297942B (en) * 2021-05-18 2022-09-27 合肥工业大学 Layered compression excitation network-based outdoor multi-scene rapid classification and identification method
CN113420771B (en) * 2021-06-30 2024-04-19 扬州明晟新能源科技有限公司 Colored glass detection method based on feature fusion

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105608698A (en) * 2015-12-25 2016-05-25 西北工业大学 Remote image change detection method based on SAE
CN106971402A (en) * 2017-04-21 2017-07-21 西安电子科技大学 A kind of SAR image change detection aided in based on optics

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US8620093B2 (en) * 2010-03-15 2013-12-31 The United States Of America As Represented By The Secretary Of The Army Method and system for image registration and change detection
US10037477B2 (en) * 2015-08-31 2018-07-31 Massachusetts Institute Of Technology Combined intensity and coherent change detection in images
CN105844279B (en) * 2016-03-22 2019-04-23 西安电子科技大学 SAR image change detection based on deep learning and SIFT feature
CN107239795A (en) * 2017-05-19 2017-10-10 西安电子科技大学 SAR image change detecting system and method based on sparse self-encoding encoder and convolutional neural networks
CN108447057B (en) * 2018-04-02 2021-11-30 西安电子科技大学 SAR image change detection method based on significance and depth convolution network

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN105608698A (en) * 2015-12-25 2016-05-25 西北工业大学 Remote image change detection method based on SAE
CN106971402A (en) * 2017-04-21 2017-07-21 西安电子科技大学 A kind of SAR image change detection aided in based on optics

Also Published As

Publication number Publication date
CN109977968A (en) 2019-07-05


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210312