CN109242889B - SAR image change detection method based on context significance detection and SAE - Google Patents


Info

Publication number: CN109242889B (application CN201810982878.8A)
Authority: CN (China)
Prior art keywords: SAR image, SAE, change detection
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN109242889A
Inventors: 王洪玉, 耿杰, 周晓君, 马晓瑞, 王兵, 吴尚阳, 赵雪松, 韩科, 谢蓓敏, 尹维崴, 李睿
Current and original assignees (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list): Maintenance Company State Grid Jilinsheng Electric Power Supply Co; Dalian University of Technology; State Grid Corp of China SGCC; State Grid Jilin Electric Power Corp
Application filed by: Maintenance Company State Grid Jilinsheng Electric Power Supply Co, Dalian University of Technology, State Grid Corp of China SGCC, State Grid Jilin Electric Power Corp
Priority claimed from: CN201810982878.8A
Publication of CN109242889A; application granted; publication of CN109242889B
Legal status: Active; anticipated expiration pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/32 Determination of transform parameters for the alignment of images, i.e. image registration, using correlation-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A SAR image change detection method based on context saliency detection and SAE: input two temporal SAR images of the same area; calculate a log-ratio difference map; extract the salient region of the difference map with a context saliency detection algorithm; mask the backgrounds of the two temporal SAR images according to the salient region of the difference map to obtain masked SAR images; extract the changed region of the masked SAR images with fuzzy C-means clustering to obtain pseudo-label training samples; train a multi-layer sparse autoencoder with the masked SAR images and the pseudo-label training samples; and extract the final changed region of the SAR images with the trained network. The method effectively suppresses the influence of speckle noise on SAR image change detection; at the same time, the designed multi-layer sparse autoencoder extracts effective change features of the SAR images, improving change detection accuracy. The method can be used in the technical field of multi-temporal SAR image change detection and related fields.

Description

SAR image change detection method based on context significance detection and SAE
Technical Field
The invention belongs to the technical field of remote sensing image processing, in particular SAR image change detection, and specifically relates to a SAR image change detection method based on context saliency detection and SAE (Sparse Auto-Encoders), which can be used in multi-temporal SAR image change detection and related technical fields.
Background
Synthetic Aperture Radar (SAR) is an active microwave imaging system with all-weather, day-and-night imaging capability and high imaging resolution, and is an important means of obtaining geographic information. Under severe and extreme weather conditions, SAR can acquire stable data and thus has clear advantages over optical remote sensing. SAR image change detection mines the change information of ground targets from data acquired at different times, and is applied in fields such as urban coverage surveys and land development and utilization.
SAR image change detection mainly comprises image preprocessing, change information extraction, and change information processing. Image preprocessing includes geometric correction, image registration, radiometric correction, image filtering and the like. Change information extraction mainly generates a difference map by the difference method or the ratio method: the difference method subtracts the two temporal SAR images and is easy to implement but severely affected by noise; the ratio method takes the ratio of the two temporal SAR images and can suppress the interference of multiplicative speckle noise. Change information processing divides the difference map into changed and unchanged regions, generally by thresholding or clustering. To improve the detection effect, classical classification algorithms are combined with manual labelling for supervised classification, which better overcomes the influence of speckle noise on the detection result.
In recent years, SAR image change detection has found increasingly wide application, and many researchers have produced good results. For example, in 2009 Celik published "Unsupervised change detection in satellite images using principal component analysis and k-means clustering" in IEEE Geoscience and Remote Sensing Letters, volume 6, issue 4, proposing a change detection method based on principal component analysis and k-means clustering that effectively reduces data redundancy, highlights change information, and improves detection accuracy. In 2012, Gong et al. published "A neighborhood-based ratio approach for change detection in SAR images" in IEEE Geoscience and Remote Sensing Letters, volume 9, issue 2, proposing a SAR image change detection method based on a neighborhood ratio that reduces speckle noise while preserving detail information. "Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images", published by Gong et al. in IEEE Transactions on Fuzzy Systems, volume 22, issue 1, classifies changed and unchanged regions with fuzzy C-means clustering and a Markov random field energy function, improving the precision of change detection results. In 2017, Zheng, Jiao, et al. published "Unsupervised saliency-guided SAR image change detection" in Pattern Recognition, proposing saliency-map-guided SAR image change detection: context saliency detection extracts the salient regions of the images at different times, and principal component extraction on the log-ratio difference map yields the final change detection result, effectively weakening the influence of speckle noise.
The above methods mainly process the difference map and do not deeply mine the change features of the SAR images at different times, so there is still room to improve the change detection performance.
Disclosure of Invention
The invention aims to overcome the above problems in the prior art by providing a SAR image change detection method based on context saliency detection and SAE, thereby improving SAR image change detection accuracy.
The technical scheme of the invention is as follows:
a SAR image change detection method based on context significance detection and SAE comprises the following steps:
(1) two time-phase SAR images of the same area are input:
inputting two time phase SAR images of the same area for image registration to obtain two preprocessed time phase SAR images A and B;
(2) calculating a log ratio difference map:
calculating a logarithmic ratio difference graph D of the two time phase SAR images A and B;
(3) extracting a saliency region of the difference map based on a context saliency detection algorithm:
(3a) partitioning the logarithm ratio difference graph D into blocks and dividing the blocks into a plurality of scales;
(3b) at multiple scales, calculating the similarity between pixel blocks from their gray values and spatial distance to obtain the saliency value of each central pixel block;
(3c) adding context pixels for saliency correction to obtain a final saliency map;
(4) the background of the two phase SAR images is masked:
(4a) selecting a threshold value to carry out binarization on the significance map, namely obtaining a 0-1 binary map, wherein 0 represents an insignificant area, and 1 represents a significant area;
(4b) masking the two time phase SAR images A and B by using a binary image, removing a background unchanged part, and obtaining masked SAR images A 'and B';
(5) extracting a change area of the SAR image after the mask based on fuzzy C-means clustering:
(5a) calculating a neighborhood ratio difference map D ' of the SAR images A ' and B ' after the mask;
(5b) dividing the neighborhood ratio difference graph D' into three classes by adopting fuzzy C-means clustering, and respectively representing a non-change class, a middle uncertain class and a change class;
(5c) selecting p% of samples from the unchanged class and the changed class to form a pseudo label training sample set;
(6) the multi-layer SAE is trained using the post-mask SAR images and the pseudo-label samples:
(6a) constructing three layers of SAE networks, and randomly initializing the weight and the bias parameters of the networks;
(6b) pre-training each network layer in turn with the pseudo-label training sample set to obtain the pre-trained network parameters and the hidden output of the third-layer SAE, which serves as the optimized sample feature;
(6c) inputting the optimized sample characteristics and the pseudo labels into a softmax classifier, and training to obtain the weight and the offset parameters of the softmax classifier;
(6d) performing reverse fine adjustment on parameters of the softmax classifier and the three layers of SAEs by using a pseudo label training sample set;
(7) extracting a final SAR image change detection result by adopting a trained network:
(7a) inputting the sample set of the whole masked image into the three-layer SAE, with the hidden output of the third-layer SAE serving as the optimized sample feature;
(7b) inputting the optimized sample characteristics into a softmax classifier to obtain a prediction label;
(7c) and drawing a final SAR image change detection result graph according to the predicted label vector and the spatial position.
Compared with the prior art, the invention mainly has the following advantages:
firstly, a significance region is obtained by using a context-based significance detection algorithm, and a training sample with high confidence coefficient is obtained by adopting fuzzy C-means clustering, so that the influence of speckle noise on SAR image change detection can be effectively overcome, and the change detection accuracy is improved;
secondly, the invention adopts the network structure of SAE, utilizes the excellent feature expression capability of the deep network, can obtain the difference feature with better discrimination of two time-phase SAR images, and extracts the effective change feature of the SAR images, thereby improving the effect of change detection.
Drawings
FIG. 1 is a flowchart of an implementation of a method for detecting SAR image change based on context saliency detection and SAE;
fig. 2 shows the San Francisco SAR data used by the present invention, in which fig. 2(a) is a SAR image acquired in August 2003, fig. 2(b) is a SAR image acquired in May 2004, and fig. 2(c) is the ground-truth map of the changed region;
fig. 3 is a graph showing the results of the change detection experiment of San Francisco data applicable to the present invention, in which fig. 3(a) is a graph showing the results of the change detection of SG-FCM in the comparative method, fig. 3(b) is a graph showing the results of the change detection of SAE in the comparative method, fig. 3(c) is a graph showing the results of the change detection of SGK in the comparative method, and fig. 3(d) is a graph showing the results of the change detection using the method of the present invention.
Detailed Description
The present invention will now be described in detail with reference to the following examples and accompanying drawings.
According to fig. 1, the method for detecting the change of the SAR image based on the context significance detection and the SAE comprises the following steps:
(1) two time-phase SAR images of the same area are input:
inputting two time phase SAR images of the same area for image registration to obtain two preprocessed time phase SAR images A and B;
(2) calculating a log ratio difference map:
calculating the log-ratio difference map D of the two temporal SAR images A and B, with the calculation formula:

D = \left| \log \frac{B}{A} \right| = \left| \log B - \log A \right|
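The log-ratio operator of step (2) can be sketched in NumPy as follows; the function name and the small `eps` stabiliser are illustrative choices, not taken from the patent:

```python
import numpy as np

def log_ratio_difference(img_a, img_b, eps=1e-6):
    """Absolute log-ratio difference map |log(B/A)| of two co-registered
    SAR intensity images; eps guards against log(0) on dark pixels."""
    a = np.asarray(img_a, dtype=np.float64) + eps
    b = np.asarray(img_b, dtype=np.float64) + eps
    return np.abs(np.log(b) - np.log(a))

# toy example: only the centre pixel changes between the two acquisitions
a = np.ones((5, 5))
b = np.ones((5, 5))
b[2, 2] = 10.0
d = log_ratio_difference(a, b)
```

Because the ratio is taken before the logarithm, multiplicative speckle that affects both acquisitions equally cancels out of the map.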
(3) extracting a saliency region of the difference map based on a context saliency detection algorithm:
(3a) partitioning the log-ratio difference map D into 7 × 7 blocks with 50% overlap, at 4 scales;
(3b) at the 4 scales, computing the similarity measure between pixel blocks p_t and p_j from their gray values and spatial distance:

d(p_t, p_j) = \frac{d_{gray}(p_t, p_j)}{1 + \mu \cdot d_{pos}(p_t, p_j)}

where d_{gray} is the gray-value distance, d_{pos} the normalized position distance, and μ the position-distance weighting factor, here μ = 3; then computing the saliency value of the central pixel block at scale r:

S_t^r = 1 - \exp\left( -\frac{1}{M} \sum_{j=1}^{M} d(p_t^r, p_j) \right)

where M denotes the number of most-similar image blocks of the central pixel block, here M = 64, and the scale r ∈ {100%, 80%, 50%, 30%};
(3c) adding context pixels for saliency correction of the central pixel block, computed as:

\hat{S}_t = \frac{1}{4} \sum_{r \in R} S_t^r \left( 1 - d_{foci}^r(t) \right)

wherein the saliency map at each scale is normalized and resized from the current scale back to the size of the original image, and d_{foci}^r(t) represents the normalized position distance between pixel t and the nearest salient (attended) pixel t_{nearest} at scale r; after the saliency value of every central pixel is obtained, the final saliency map is generated;
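A minimal single-scale sketch of the patch-saliency idea in step (3) is given below. It is an assumption-laden illustration, not the patent's exact algorithm: one scale only, no context correction, Euclidean gray-value distance, and the position damping uses the μ weighting described above.

```python
import numpy as np

def patch_saliency(gray, patch=7, step=3, mu=3.0, m=64):
    """Single-scale sketch of context-aware patch saliency: a patch is
    salient when even its m most similar patches (gray-value distance,
    damped by spatial distance with weight mu) remain dissimilar."""
    h, w = gray.shape
    coords, feats = [], []
    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            coords.append((i, j))
            feats.append(gray[i:i + patch, j:j + patch].ravel())
    coords = np.asarray(coords, dtype=np.float64)
    feats = np.asarray(feats, dtype=np.float64)

    # pairwise gray-value and (normalized) position distances
    d_gray = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    d_pos = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d_pos /= d_pos.max() + 1e-12
    dissim = d_gray / (1.0 + mu * d_pos)

    # saliency: mean dissimilarity to the m most similar patches
    k = min(m, dissim.shape[1] - 1)
    nearest = np.sort(dissim, axis=1)[:, 1:k + 1]  # drop self-distance 0
    return 1.0 - np.exp(-nearest.mean(axis=1)), coords

gray = np.zeros((32, 32))
gray[12:20, 12:20] = 1.0          # a bright "changed" block
sal, coords = patch_saliency(gray)
```

On this toy image the most salient patches are those overlapping the bright block, while flat background patches, whose nearest neighbours are identical, score near zero.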
(4) the background of the two phase SAR images is masked:
(4a) selecting a threshold value to carry out binarization on the significance map, namely obtaining a 0-1 binary map, wherein 0 represents an insignificant area, and 1 represents a significant area;
(4b) masking the two time phase SAR images A and B by using a binary image, removing a background unchanged part, and obtaining masked SAR images A 'and B';
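Step (4) can be sketched as below. The patent only says "select a threshold"; using the saliency mean as the default is an assumption of this sketch.

```python
import numpy as np

def mask_background(img_a, img_b, saliency, threshold=None):
    """Binarise the saliency map (0 = non-salient, 1 = salient) and zero
    out the non-salient background in both temporal images."""
    if threshold is None:
        threshold = float(saliency.mean())  # assumed default, not from the patent
    binary = (saliency > threshold).astype(np.float64)
    return img_a * binary, img_b * binary, binary

sal = np.array([[0.9, 0.1], [0.2, 0.8]])
a = np.full((2, 2), 5.0)
b = np.full((2, 2), 7.0)
a_m, b_m, binary = mask_background(a, b, sal)
```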
(5) extracting a change area of the SAR image after the mask based on fuzzy C-means clustering:
(5a) calculating the neighborhood-ratio difference map D' of the masked SAR images A' and B', with the calculation formula:

D'(t) = \theta \cdot \frac{\min(A'_t, B'_t)}{\max(A'_t, B'_t)} + (1 - \theta) \cdot \frac{\sum_{j \in \Omega_t} \min(A'_j, B'_j)}{\sum_{j \in \Omega_t} \max(A'_j, B'_j)}

where the first term is the ratio difference between individual pixels, the second term is the ratio difference over the pixel neighborhood, Ω_t represents the neighborhood of pixel t, and θ is a factor balancing the two terms;
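A sketch of a neighbourhood-ratio map with the two terms described above follows; the balance weight `theta`, the window radius, and the inversion so that changed pixels score high are assumptions of this illustration, not values fixed by the patent.

```python
import numpy as np

def neighborhood_ratio(a, b, radius=1, theta=0.5, eps=1e-6):
    """Pixelwise min/max ratio blended with the same ratio aggregated
    over a (2*radius+1)^2 neighbourhood; returned as 1 - ratio so that
    changed pixels take large values."""
    a = np.asarray(a, dtype=np.float64) + eps
    b = np.asarray(b, dtype=np.float64) + eps
    point = np.minimum(a, b) / np.maximum(a, b)
    h, w = a.shape
    neigh = np.zeros_like(a)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            wa, wb = a[i0:i1, j0:j1], b[i0:i1, j0:j1]
            neigh[i, j] = np.minimum(wa, wb).sum() / np.maximum(wa, wb).sum()
    return 1.0 - (theta * point + (1 - theta) * neigh)

a = np.ones((5, 5))
b = np.ones((5, 5))
b[2, 2] = 10.0
d = neighborhood_ratio(a, b)
```

Averaging the ratio over the neighbourhood is what gives the operator its robustness to isolated speckle outliers.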
(5b) dividing the neighborhood ratio difference graph D' into three classes by adopting fuzzy C-means clustering, and respectively representing a non-change class, a middle uncertain class and a change class;
(5c) selecting 30% of samples from the unchanged class and the changed class to form a pseudo label training sample set;
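Steps (5b)-(5c) can be sketched with a minimal fuzzy C-means on the difference-map values. The percentile initialisation and the deterministic "first 30%" selection are simplifying assumptions; the patent does not specify either.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=3, m=2.0, iters=100):
    """Minimal fuzzy C-means on a 1-D feature (difference-map values).
    Returns centres sorted ascending and the membership matrix U (c x N)."""
    x = np.asarray(x, dtype=np.float64).ravel()
    centres = np.percentile(x, np.linspace(10, 90, c))  # deterministic init
    for _ in range(iters):
        dist = np.abs(x[None, :] - centres[:, None]) + 1e-12
        u = dist ** (-2.0 / (m - 1.0))          # standard FCM membership
        u /= u.sum(axis=0, keepdims=True)
        um = u ** m
        centres = (um @ x) / um.sum(axis=1)     # weighted centre update
    order = np.argsort(centres)
    return centres[order], u[order]

# three value groups -> unchanged / uncertain / changed
vals = np.concatenate([np.full(50, 0.1), np.full(30, 0.5), np.full(20, 0.9)])
centres, u = fuzzy_cmeans_1d(vals)
labels = u.argmax(axis=0)

# pseudo-labels: keep 30% of the unchanged and changed classes,
# discarding the uncertain middle class
def take_fraction(idx, frac=0.3):
    return idx[: int(frac * len(idx))]

pseudo_idx = np.concatenate([take_fraction(np.where(labels == 0)[0]),
                             take_fraction(np.where(labels == 2)[0])])
```

Discarding the middle class is what keeps the pseudo labels high-confidence: only pixels the clustering is sure about are allowed to train the network.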
(6) training the multi-layer SAE with the masked SAR image and the pseudo label sample set:
(6a) constructing a three-layer SAE network and randomly initializing the network weight and bias parameters {W_1^k, b_1^k, W_2^k, b_2^k}, k = 1, 2, 3;
(6b) pre-training each network layer in turn with the pseudo-label training sample set. Let the feature of a pseudo-label training sample be x_i and its label y_i. The encoding process of the k-th layer SAE is:

h_i^k = f\left( W_1^k x_i^k + b_1^k \right)

The decoding process of the SAE is:

\hat{x}_i^k = g\left( W_2^k h_i^k + b_2^k \right)

The pre-training loss function of the SAE is:

J_k = \frac{1}{N} \sum_{i=1}^{N} \left\| \hat{x}_i^k - x_i^k \right\|^2 + \lambda \left( \left\| W_1^k \right\|_F^2 + \left\| W_2^k \right\|_F^2 \right) + \alpha \sum_{z=1}^{M_k} KL\left( \rho \parallel \hat{\rho}_z \right)

The first term of the above formula is the reconstruction error of encoding and decoding, whose aim is to make the decoded output of the network approach the input before encoding as closely as possible; the second term is a weight constraint term that prevents the weights from becoming too large; the third term is a sparsity constraint term that ensures the sparsity of the hidden units. W_1^k and b_1^k denote the encoding weight and bias, W_2^k and b_2^k the decoding weight and bias, x_i^k the input of this layer, h_i^k the hidden output of this layer, and \hat{x}_i^k the decoded output of this layer. f(·) and g(·) denote the activation functions of encoding and decoding respectively, KL(·) denotes the Kullback-Leibler divergence, ρ is the sparsity parameter, \hat{\rho}_z denotes the mean activation value of the z-th hidden unit, M_k denotes the number of hidden units, N denotes the number of training samples, λ and α are balance factors, and ||·||_F denotes the Frobenius norm. Optimization with gradient descent yields the pre-trained network parameters and the hidden output h_i^3 of the third-layer SAE, which serves as the optimized sample feature;
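The three-term loss of step (6b) can be sketched for a single layer as below; the sigmoid activations, the clipping guard on the mean activation, and the specific hyperparameter values are illustrative assumptions.

```python
import numpy as np

def sae_layer_loss(x, w1, b1, w2, b2, rho=0.05, lam=1e-4, alpha=0.1):
    """Forward pass and pre-training loss of one sparse-autoencoder layer:
    mean squared reconstruction error + Frobenius weight penalty
    + KL(rho || rho_hat) sparsity penalty on the mean hidden activations."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sigmoid(x @ w1 + b1)        # encoding h = f(W1 x + b1)
    x_hat = sigmoid(h @ w2 + b2)    # decoding x_hat = g(W2 h + b2)
    recon = np.mean(np.sum((x_hat - x) ** 2, axis=1))
    weight = lam * (np.sum(w1 ** 2) + np.sum(w2 ** 2))
    rho_hat = np.clip(h.mean(axis=0), 1e-8, 1 - 1e-8)  # mean activation per unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + weight + alpha * kl, h

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(10, 8))
w1 = rng.normal(scale=0.1, size=(8, 4)); b1 = np.zeros(4)
w2 = rng.normal(scale=0.1, size=(4, 8)); b2 = np.zeros(8)
loss, h = sae_layer_loss(x, w1, b1, w2, b2)
```

Stacking three such layers, with each layer's hidden output feeding the next layer's input, gives the greedy layer-wise pre-training described in the text.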
(6c) inputting the optimized sample features and the pseudo labels into a softmax classifier and training to obtain the weight and bias parameters of the softmax classifier, with loss function:

J_s = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{c} 1\{ y_i = j \} \log p\left( y_i = j \mid h_i^3 \right) + \lambda \left\| W_4 \right\|_F^2

wherein the first term of the above equation represents the prediction error and the second term a weight penalty term; p(y_i = j | h_i^3) represents the classifier prediction, W_4 represents the weight of the softmax classifier, and λ represents a balance factor. The target equation is solved with the back-propagation algorithm to obtain the softmax parameters;
(6d) performing reverse fine-tuning of the parameters of the softmax classifier and the three-layer SAE with the pseudo-label training sample set, with loss function:

J_f = \frac{1}{N} \sum_{i=1}^{N} \left\| \hat{x}_i - x_i \right\|^2 + \lambda \sum_{k=1}^{3} \left\| W_1^k \right\|_F^2

wherein the first term of the above equation is the average reconstruction error of the whole network and the second term a weight constraint term; h_i^k denotes the hidden output of the k-th layer and x_i^k the input of the k-th layer, with x_i^{k+1} = h_i^k. Optimization uses the back-propagation algorithm.
(7) Extracting a final SAR image change detection result by adopting a trained network:
(7a) inputting the sample set of the entire masked image into the three-layer SAE; the hidden output h_i^3 of the third-layer SAE is the optimized sample feature;
(7b) inputting the optimized sample feature h_i^3 into the softmax classifier, obtaining the prediction probability:

p\left( y_i = j \mid h_i^3 \right) = \frac{\exp\left( W_4^j h_i^3 + b_4^j \right)}{\sum_{c'=1}^{c} \exp\left( W_4^{c'} h_i^3 + b_4^{c'} \right)}

wherein W_4^j and b_4^j are the partial weight and bias corresponding to class j, and W_4^{c'} and b_4^{c'} the partial weight and bias corresponding to class c'. Finally, the prediction label of sample x_i is:

\hat{y}_i = \arg\max_j \, p\left( y_i = j \mid h_i^3 \right)
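The softmax prediction of step (7b) is a standard computation; a sketch with toy weights (all values below are illustrative, not the trained parameters) is:

```python
import numpy as np

def softmax_predict(features, w4, b4):
    """Class probabilities and argmax labels from a linear softmax layer."""
    logits = features @ w4 + b4
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return p.argmax(axis=1), p

# 2 classes (unchanged / changed); toy weights make the second feature
# dimension vote for "changed"
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
w4 = np.array([[2.0, -2.0], [-2.0, 2.0]])
b4 = np.zeros(2)
labels, probs = softmax_predict(feats, w4, b4)
```

Subtracting the row-wise maximum before exponentiating leaves the probabilities unchanged but avoids overflow on large logits.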
(7c) and drawing a final SAR image change detection result graph according to the predicted label vector and the spatial position.
The technical effects of the invention are explained by simulation experiments as follows:
1. simulation conditions and content
The experimental data of the present invention is the San Francisco data set, shown in fig. 2, with a pixel size of 256 × 256; in fig. 2(c), the white part is the changed area and the black part the unchanged area. Fig. 3 shows the change detection results on the San Francisco data, in which fig. 3(a) is the result of the comparison method SG-FCM, fig. 3(b) that of the comparison method SAE, fig. 3(c) that of the comparison method SGK, and fig. 3(d) that of the method of the present invention. Table 1 compares the change detection accuracy of these techniques. In the simulation experiments, both the proposed method and the comparison methods were implemented in Matlab R2017a.
Of the three comparison methods adopted here, SG-FCM combines context saliency detection with fuzzy C-means clustering, SGK combines context saliency detection with PCA and k-means, and SAE combines the neighborhood ratio with an autoencoder.
2. Analysis of simulation results
Table 1. Comparison of change detection accuracy on the San Francisco data

Method     False alarms   Missed detections   Total errors   Detection accuracy   Kappa coefficient
SG-FCM     549            534                 1083           0.9835               0.8757
SAE        669            300                 969            0.9852               0.8925
SGK        702            383                 1085           0.9834               0.8791
Proposed   343            439                 782            0.9881               0.9093
As the results in the table show, the method of the invention achieves higher change detection precision than the existing SG-FCM, SAE and SGK methods, demonstrating that it effectively improves SAR image change detection precision. The comparison in fig. 3 shows that the result of the proposed method is closest to the true change map, indicating that the context saliency detection algorithm effectively suppresses SAR speckle noise and that the SAE-based deep network obtains effective change features, thereby improving change detection accuracy.

Claims (1)

1. A SAR image change detection method based on context significance detection and SAE is characterized by comprising the following steps:
(1) two time-phase SAR images of the same area are input:
inputting two time phase SAR images of the same area for image registration to obtain two preprocessed time phase SAR images A and B;
(2) calculating a log ratio difference map:
calculating the log-ratio difference map D of the two temporal SAR images A and B, with the calculation formula:

D = \left| \log \frac{B}{A} \right| = \left| \log B - \log A \right|
(3) extracting a saliency region of the difference map based on a context saliency detection algorithm:
(3a) partitioning the log-ratio difference map D into blocks at a plurality of scales;
(3b) calculating, at multiple scales, the similarity measure between pixel blocks p_t and p_j from their gray values and spatial distance:

d(p_t, p_j) = \frac{d_{gray}(p_t, p_j)}{1 + \mu \cdot d_{pos}(p_t, p_j)}

wherein μ represents the position-distance weighting factor; then calculating the saliency value of the central pixel block at scale r:

S_t^r = 1 - \exp\left( -\frac{1}{M} \sum_{j=1}^{M} d(p_t^r, p_j) \right)

wherein M represents the number of most-similar image blocks of the central pixel block;
(3c) adding context pixels for saliency correction of the central pixel block, computed as:

\hat{S}_t = \frac{1}{4} \sum_{r \in R} S_t^r \left( 1 - d_{foci}^r(t) \right)

wherein the saliency map at each scale is normalized and resized from the current scale back to the size of the original image, and d_{foci}^r(t) represents the normalized position distance between pixel t and the nearest salient (attended) pixel t_{nearest} at scale r; after the saliency value of every central pixel is obtained, the final saliency map is generated;
(4) the background of the two phase SAR images is masked:
(4a) selecting a threshold value to carry out binarization on the significance map, namely obtaining a 0-1 binary map, wherein 0 represents an insignificant area, and 1 represents a significant area;
(4b) masking the two time phase SAR images A and B by using a binary image, removing a background unchanged part, and obtaining masked SAR images A 'and B';
(5) extracting a change area of the SAR image after the mask based on fuzzy C-means clustering:
(5a) calculating the neighborhood-ratio difference map D' of the masked SAR images A' and B', with the calculation formula:

D'(t) = \theta \cdot \frac{\min(A'_t, B'_t)}{\max(A'_t, B'_t)} + (1 - \theta) \cdot \frac{\sum_{j \in \Omega_t} \min(A'_j, B'_j)}{\sum_{j \in \Omega_t} \max(A'_j, B'_j)}

wherein the first term is the ratio difference between individual pixels, the second term is the ratio difference over the pixel neighborhood, Ω_t represents the neighborhood of pixel t, and θ is a factor balancing the two terms;
(5b) dividing the neighborhood ratio difference graph D' into three classes by adopting fuzzy C-means clustering, and respectively representing a non-change class, a middle uncertain class and a change class;
(5c) selecting p% of samples from the unchanged class and the changed class to form a pseudo label training sample set;
(6) training the multi-layer SAE with the masked SAR image and the pseudo label sample set:
(6a) constructing a three-layer SAE network and randomly initializing the network weight and bias parameters {W_1^k, b_1^k, W_2^k, b_2^k}, k = 1, 2, 3;
(6b) pre-training each network layer in turn with the pseudo-label training sample set. Let the feature of a pseudo-label training sample be x_i and its label y_i. The encoding process of the k-th layer SAE is:

h_i^k = f\left( W_1^k x_i^k + b_1^k \right)

The decoding process of the SAE is:

\hat{x}_i^k = g\left( W_2^k h_i^k + b_2^k \right)

The pre-training loss function of the SAE is:

J_k = \frac{1}{N} \sum_{i=1}^{N} \left\| \hat{x}_i^k - x_i^k \right\|^2 + \lambda \left( \left\| W_1^k \right\|_F^2 + \left\| W_2^k \right\|_F^2 \right) + \alpha \sum_{z=1}^{M_k} KL\left( \rho \parallel \hat{\rho}_z \right)

the first term of the above formula is the reconstruction error of encoding and decoding, whose aim is to make the decoded output of the network approach the input before encoding as closely as possible; the second term is a weight constraint term that prevents the weights from becoming too large; the third term is a sparsity constraint term that ensures the sparsity of the hidden units; W_1^k and b_1^k denote the encoding weight and bias, W_2^k and b_2^k the decoding weight and bias, x_i^k the input of this layer, h_i^k the hidden output of this layer, and \hat{x}_i^k the decoded output of this layer; f(·) and g(·) denote the activation functions of encoding and decoding respectively, KL(·) denotes the Kullback-Leibler divergence, ρ is the sparsity parameter, \hat{\rho}_z denotes the mean activation value of the z-th hidden unit, M_k represents the number of hidden units, N represents the number of training samples, λ and α represent balance factors, and ||·||_F represents the Frobenius norm; optimization with gradient descent yields the pre-trained network parameters and the hidden output h_i^3 of the third-layer SAE, which serves as the optimized sample feature;
(6c) inputting the optimized sample features and the pseudo labels into a softmax classifier and training to obtain the weight and bias parameters of the softmax classifier, with loss function:

J_s = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{c} 1\{ y_i = j \} \log p\left( y_i = j \mid h_i^3 \right) + \lambda \left\| W_4 \right\|_F^2

wherein the first term of the above formula represents the prediction error and the second term a weight penalty term; p(y_i = j | h_i^3) represents the classifier prediction, W_4 represents the weight of the softmax classifier, and λ represents a balance factor; the target equation is solved with the back-propagation algorithm to obtain the softmax parameters;
(6d) performing reverse fine-tuning of the parameters of the softmax classifier and the three-layer SAE with the pseudo-label training sample set, with loss function:

J_f = \frac{1}{N} \sum_{i=1}^{N} \left\| \hat{x}_i - x_i \right\|^2 + \lambda \sum_{k=1}^{3} \left\| W_1^k \right\|_F^2

wherein the first term of the above equation is the average reconstruction error of the whole network and the second term a weight constraint term; h_i^k represents the hidden output of the k-th layer and x_i^k the input of the k-th layer, with x_i^{k+1} = h_i^k; optimization uses the back-propagation algorithm;
(7) extracting a final SAR image change detection result by adopting a trained network:
(7a) inputting the sample set of the entire masked image into the three-layer SAE; the hidden output h_i^3 of the third-layer SAE is the optimized sample feature;
(7b) inputting the optimized sample feature h_i^3 into the softmax classifier, obtaining the prediction probability:

p\left( y_i = j \mid h_i^3 \right) = \frac{\exp\left( W_4^j h_i^3 + b_4^j \right)}{\sum_{c'=1}^{c} \exp\left( W_4^{c'} h_i^3 + b_4^{c'} \right)}

wherein W_4^j and b_4^j are the partial weight and bias corresponding to class j, and W_4^{c'} and b_4^{c'} the partial weight and bias corresponding to class c'; finally, the prediction label of sample x_i is:

\hat{y}_i = \arg\max_j \, p\left( y_i = j \mid h_i^3 \right)
(7c) and drawing a final SAR image change detection result graph according to the predicted label vector and the spatial position.
CN201810982878.8A 2018-08-27 2018-08-27 SAR image change detection method based on context significance detection and SAE Active CN109242889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810982878.8A CN109242889B (en) 2018-08-27 2018-08-27 SAR image change detection method based on context significance detection and SAE

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810982878.8A CN109242889B (en) 2018-08-27 2018-08-27 SAR image change detection method based on context significance detection and SAE

Publications (2)

Publication Number Publication Date
CN109242889A CN109242889A (en) 2019-01-18
CN109242889B true CN109242889B (en) 2020-06-16

Family

ID=65069287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810982878.8A Active CN109242889B (en) 2018-08-27 2018-08-27 SAR image change detection method based on context significance detection and SAE

Country Status (1)

Country Link
CN (1) CN109242889B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110133651B (en) * 2019-05-24 2021-04-06 中国科学院电子学研究所 Sparse SAR imaging self-adaptive sparsity estimation method and device
CN110210416B (en) * 2019-06-05 2021-10-01 中国科学技术大学 Sign language recognition system optimization method and device based on dynamic pseudo tag decoding
CN110263845B (en) * 2019-06-18 2023-05-02 西安电子科技大学 SAR image change detection method based on semi-supervised countermeasure depth network
CN110555841B (en) * 2019-09-10 2021-11-23 西安电子科技大学 SAR image change detection method based on self-attention image fusion and DEC
CN111783811B (en) * 2019-10-30 2024-06-21 北京京东尚科信息技术有限公司 Pseudo tag generation method and device
CN111161229B (en) * 2019-12-23 2022-11-18 大连理工大学 Change detection method based on geometric active contour model and sparse self-coding
CN112200137B (en) * 2020-10-29 2022-11-25 内蒙古工业大学 Image recognition method and corresponding device, storage medium and electronic equipment
CN112926484B (en) * 2021-03-11 2022-07-01 新疆大学 Low-illumination image change detection method and device based on automatic discrimination strategy
CN113033510B (en) * 2021-05-21 2021-10-15 浙江大华技术股份有限公司 Training and detecting method, device and storage medium for image change detection model
CN117173587B (en) * 2023-08-23 2024-07-12 哈尔滨工程大学 Feature refinement fusion change detection method based on heterogeneous image depth conversion
CN118570506B (en) * 2024-08-05 2024-10-18 山东科技大学 Railway perimeter prediction method based on multi-time-phase SAR image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923711A (en) * 2010-07-16 2010-12-22 西安电子科技大学 SAR (Synthetic Aperture Radar) image change detection method based on neighborhood similarity and mask enhancement
CN103198480A (en) * 2013-04-02 2013-07-10 西安电子科技大学 Remote sensing image change detection method based on area and Kmeans clustering
CN105869146A (en) * 2016-03-22 2016-08-17 西安电子科技大学 Saliency fusion-based SAR image change detection method
CN107256409A (en) * 2017-05-22 2017-10-17 西安电子科技大学 The High Resolution SAR image change detection method detected based on SAE and conspicuousness

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608698B (en) * 2015-12-25 2018-12-25 西北工业大学 A kind of method for detecting change of remote sensing image based on SAE
GB2553284B (en) * 2016-08-23 2020-02-05 Thales Holdings Uk Plc Multilook coherent change detection
CN107909109B (en) * 2017-11-17 2019-06-21 西安电子科技大学 SAR image classification method based on conspicuousness and multiple dimensioned depth network model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923711A (en) * 2010-07-16 2010-12-22 西安电子科技大学 SAR (Synthetic Aperture Radar) image change detection method based on neighborhood similarity and mask enhancement
CN103198480A (en) * 2013-04-02 2013-07-10 西安电子科技大学 Remote sensing image change detection method based on area and Kmeans clustering
CN105869146A (en) * 2016-03-22 2016-08-17 西安电子科技大学 Saliency fusion-based SAR image change detection method
CN107256409A (en) * 2017-05-22 2017-10-17 西安电子科技大学 The High Resolution SAR image change detection method detected based on SAE and conspicuousness

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A Neighborhood-Based Ratio Approach for Change Detection in SAR Images; Maoguo Gong et al.; IEEE Geoscience and Remote Sensing Letters; 20120331; Vol. 9, No. 2; full text *
Change detection of SAR images based on supervised contractive autoencoders and fuzzy clustering; Jie Geng et al.; 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP); 20170626; full text *
Feature learning and change feature classification based on deep learning for ternary change detection in SAR images; Maoguo Gong et al.; ISPRS Journal of Photogrammetry and Remote Sensing; 20170518; Vol. 2017, No. 129; full text *
Saliency-Guided Change Detection for Aerial and Remote Sensing Imageries; Huili Tan et al.; 2017 IEEE 2nd International Conference on Signal and Image Processing; 20171201; full text *
Unsupervised saliency-guided SAR image change detection; Yaoguo Zheng et al.; Pattern Recognition; 20160804; full text *
Change detection of high-resolution remote sensing images based on visual saliency; Liu Yaqiong et al.; Journal of Beijing University of Civil Engineering and Architecture; 20180331; Vol. 34, No. 1; full text *

Also Published As

Publication number Publication date
CN109242889A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109242889B (en) SAR image change detection method based on context significance detection and SAE
CN109871902B (en) SAR small sample identification method based on super-resolution countermeasure generation cascade network
EP1127281B1 (en) Target acquisition system and radon transform based method for target azimuth aspect estimation
Venugopal Automatic semantic segmentation with DeepLab dilated learning network for change detection in remote sensing images
Schwegmann et al. Manifold adaptation for constant false alarm rate ship detection in South African oceans
CN107992891B (en) Multispectral remote sensing image change detection method based on spectral vector analysis
CN111914686A (en) SAR remote sensing image water area extraction method, device and system based on surrounding area association and pattern recognition
Nivedita Priyadarshini et al. A comparative study of advanced land use/land cover classification algorithms using Sentinel-2 data
Wang et al. Revisiting SLIC: Fast superpixel segmentation of marine SAR images using density features
CN112270285B (en) SAR image change detection method based on sparse representation and capsule network
Chen et al. A semisupervised context-sensitive change detection technique via gaussian process
CN106845343B (en) Automatic detection method for optical remote sensing image offshore platform
CN111666801A (en) Large-scene SAR image ship target detection method
CN109145993B (en) SAR image classification method based on multi-feature and non-negative automatic encoder
CN116778341A (en) Multi-view feature extraction and identification method for radar image
Shen et al. A dual sparsity constrained approach for hyperspectral target detection
CN112784777B (en) Unsupervised hyperspectral image change detection method based on countermeasure learning
Bao et al. Shadow-enhanced self-attention and anchor-adaptive network for video SAR moving target tracking
CN112419266B (en) Remote sensing image change detection method based on ground surface coverage category constraint
CN111967526B (en) Remote sensing image change detection method and system based on edge mapping and deep learning
Habibi et al. Classification of polarimetric SAR data based on object-based multiple classifiers for urban land-cover
CN115861669A (en) Infrared dim target detection method based on clustering idea
Zhang et al. Bipartite residual network for change detection in heterogeneous optical and radar images
CN114694014A (en) SAR image ship target detection method based on multilayer neural network
Raj et al. Change detection of images based on multivariate alteration detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant