CN109242889B: SAR image change detection method based on context saliency detection and SAE


Info

Publication number: CN109242889B
Application number: CN201810982878.8A
Authority: CN (China)
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109242889A
Inventor
王洪玉
耿杰
周晓君
马晓瑞
王兵
吴尚阳
赵雪松
韩科
谢蓓敏
尹维崴
李睿
Current Assignee: Maintenance Company State Grid Jilinsheng Electric Power Supply Co; Dalian University of Technology; State Grid Corp of China SGCC; State Grid Jilin Electric Power Corp
Original Assignee: Maintenance Company State Grid Jilinsheng Electric Power Supply Co; Dalian University of Technology; State Grid Corp of China SGCC; State Grid Jilin Electric Power Corp
Application filed by Maintenance Company State Grid Jilinsheng Electric Power Supply Co, Dalian University of Technology, State Grid Corp of China SGCC, and State Grid Jilin Electric Power Corp
Priority to application CN201810982878.8A
Published as CN109242889A (application) and granted as CN109242889B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A SAR image change detection method based on context saliency detection and SAE (sparse auto-encoders). Two temporal SAR images of the same area are input; a log-ratio difference map is computed; the salient region of the difference map is extracted with a context saliency detection algorithm; the backgrounds of the two temporal SAR images are masked according to the salient region of the difference map, yielding masked SAR images; the changed region of the masked SAR images is extracted by fuzzy C-means clustering to obtain pseudo-label training samples; a multilayer sparse auto-encoder is trained with the masked SAR images and the pseudo-label training samples; and the trained network extracts the final changed region of the SAR images. The method effectively suppresses the influence of speckle noise on SAR image change detection, and the designed multilayer sparse auto-encoder extracts effective change features of the SAR images, improving change detection accuracy. The method can be used in multi-temporal SAR image change detection and related technical fields.

Description

A SAR Image Change Detection Method Based on Context Saliency Detection and SAE

Technical Field

The invention belongs to the technical field of remote sensing image processing, in particular SAR image change detection. Specifically, it is a SAR image change detection method based on context saliency detection and SAE (Sparse Auto-Encoders), applicable to multi-temporal SAR image change detection and related technical fields.

Background Art

Synthetic aperture radar (SAR) is an active microwave imaging system with all-weather, day-and-night operation and high imaging resolution, making it an important means of acquiring geographic information. Under harsh and extreme weather conditions, SAR can acquire stable data and therefore has a clear advantage over optical remote sensing. SAR image change detection mines change information about ground objects from data acquired at different times, and is applied in urban coverage surveys, land development and utilization, and other fields.

The main steps of SAR image change detection are image preprocessing, change information extraction, and change information processing. Image preprocessing includes geometric correction, image registration, radiometric correction, and image filtering. Change information extraction mainly generates a difference map by the subtraction method or the ratio method: the subtraction method subtracts the two temporal SAR images, which is easy to implement but severely affected by noise; the ratio method divides the two temporal SAR images, which suppresses the interference of multiplicative speckle noise. Change information processing divides the difference map into changed and unchanged regions, usually with thresholding or clustering. To improve detection, classical classification algorithms combined with manual annotation are used for supervised classification, which better overcomes the influence of speckle noise on the detection results.

In recent years, the applications of SAR image change detection have broadened steadily, and many scholars have achieved good research results. For example, Celik published "Unsupervised change detection in satellite images using principal component analysis and k-means clustering" in IEEE Geoscience and Remote Sensing Letters, vol. 6, no. 4, 2009, proposing a change detection method based on principal component analysis and k-means clustering, which effectively reduces data redundancy, highlights change information, and improves detection accuracy. Gong Maoguo et al. published "Neighborhood-based ratio approach for change detection in SAR images" in IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 2, 2012, proposing a SAR image change detection method based on the neighborhood ratio, which reduces speckle noise while preserving fine detail. Gong Maoguo et al. published "Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images" in IEEE Transactions on Fuzzy Systems, vol. 22, no. 1, 2014, which classifies changed and unchanged regions through fuzzy C-means clustering with a Markov random field energy function, improving the accuracy of the change detection results. Zheng Yaoguo, Jiao Licheng, et al. published "Unsupervised saliency-guided SAR image change detection" in Pattern Recognition, 2017, proposing saliency-guided SAR image change detection that uses context saliency detection to extract the salient regions of the images at different times and extracts principal components from the log-ratio difference map to obtain the final change detection result, effectively reducing the influence of speckle noise. These methods mainly process the difference map and do not deeply mine the change features of the temporal SAR images themselves, so the change detection performance still has room for improvement.

Summary of the Invention

The present invention aims to overcome the above problems of the prior art and proposes a SAR image change detection method based on context saliency detection and SAE, with the goal of improving the accuracy of SAR image change detection.

The technical solution of the present invention:

The SAR image change detection method based on context saliency detection and SAE comprises the following steps:

(1) Input two temporal SAR images of the same area:

Input two temporal SAR images of the same area and perform image registration, obtaining the preprocessed temporal SAR images A and B;

(2) Compute the log-ratio difference map:

Compute the log-ratio difference map D of the two temporal SAR images A and B;

(3) Extract the salient region of the difference map with a context saliency detection algorithm:

(3a) Partition the log-ratio difference map D into blocks and divide it into multiple scales;

(3b) At each scale, compute the similarity between pixel blocks from their gray values and spatial distance, obtaining the saliency value of each block's central pixel;

(3c) Add context pixels for saliency correction to obtain the final saliency map;

(4) Mask the backgrounds of the two temporal SAR images:

(4a) Select a threshold and binarize the saliency map into a 0-1 binary map, where 0 marks non-salient regions and 1 marks salient regions;

(4b) Mask the two temporal SAR images A and B with the binary map, removing the unchanged background and obtaining the masked SAR images A' and B';

(5) Extract the changed region of the masked SAR images by fuzzy C-means clustering:

(5a) Compute the neighborhood-ratio difference map D' of the masked SAR images A' and B';

(5b) Divide the neighborhood-ratio difference map D' into three classes by fuzzy C-means clustering, representing the unchanged class, the intermediate uncertain class, and the changed class;

(5c) Select p% of the samples from the unchanged and changed classes to form the pseudo-label training sample set;

(6) Train the multilayer SAE with the masked SAR images and the pseudo-label samples:

(6a) Construct a three-layer SAE network and randomly initialize its weights and biases;

(6b) Pre-train the network layer by layer with the pseudo-label training sample set, obtaining the pre-trained network parameters and the hidden output of the third SAE layer, which serves as the optimized sample feature;

(6c) Input the optimized sample features and pseudo-labels to the softmax classifier, and train it to obtain the classifier's weights and biases;

(6d) Fine-tune the parameters of the softmax classifier and the three-layer SAE in the backward direction with the pseudo-label training sample set;

(7) Use the trained network to extract the final SAR image change detection result:

(7a) Input the sample set of the whole masked image into the three-layer SAE; the hidden output of the third layer serves as the optimized sample feature;

(7b) Input the optimized sample features to the softmax classifier to obtain the predicted labels;

(7c) Draw the final SAR image change detection result map from the predicted label vector and the spatial positions.

Compared with the prior art, the present invention has the following main advantages:

First, the invention uses a context saliency detection algorithm to obtain the salient region and fuzzy C-means clustering to obtain high-confidence training samples, which effectively overcomes the influence of speckle noise on SAR image change detection and helps improve change detection accuracy;

Second, the invention adopts an SAE network structure and exploits the strong feature representation ability of deep networks to obtain more discriminative difference features of the two temporal SAR images and extract effective change features, thereby improving the change detection performance.

Description of the Drawings

Figure 1 is the flow chart of the SAR image change detection method based on context saliency detection and SAE;

Figure 2 shows the San Francisco SAR data used by the present invention: Figure 2(a) is a SAR image acquired in August 2003, Figure 2(b) is a SAR image acquired in May 2004, and Figure 2(c) is the ground-truth map of the changed region;

Figure 3 shows the change detection results on the San Francisco data: Figure 3(a) is the result of the comparison method SG-FCM, Figure 3(b) that of the comparison method SAE, Figure 3(c) that of the comparison method SGK, and Figure 3(d) that of the proposed method.

Detailed Description

The present invention is described in detail below with reference to specific examples and the accompanying drawings.

As shown in Figure 1, the SAR image change detection method based on context saliency detection and SAE comprises the following steps:

(1) Input two temporal SAR images of the same area:

Input two temporal SAR images of the same area and perform image registration, obtaining the preprocessed temporal SAR images A and B;

(2) Compute the log-ratio difference map:

Compute the log-ratio difference map D of the two temporal SAR images A and B as

D = |log(B / A)| = |log B - log A|
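As a concrete illustration, here is a minimal NumPy sketch of step (2). The |log(B/A)| form and the small epsilon guard are assumptions for illustration, since the patent gives the exact formula only as an embedded image:

```python
import numpy as np

def log_ratio_difference(a: np.ndarray, b: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Log-ratio difference map D = |log(B / A)| for two co-registered
    SAR intensity images; eps guards against log(0) at dark pixels."""
    a = a.astype(np.float64) + eps
    b = b.astype(np.float64) + eps
    return np.abs(np.log(b) - np.log(a))

# Unchanged pixels (A ~ B) give values near 0; changed pixels give large values.
A = np.array([[10.0, 10.0], [10.0, 10.0]])
B = np.array([[10.0, 10.0], [10.0, 80.0]])
D = log_ratio_difference(A, B)
```

The ratio form turns the multiplicative speckle model into an additive one, which is why the log-ratio operator is preferred over subtraction for SAR.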

(3) Extract the salient region of the difference map with the context saliency detection algorithm:

(3a) Partition the log-ratio difference map D into blocks with a 7×7 window and 50% overlap, and divide it into 4 scales;

(3b) At the 4 scales, compute the dissimilarity between pixel blocks p_t and p_j from their gray values and spatial distance:

d(p_t, p_j) = d_gray(p_t, p_j) / (1 + μ · d_pos(p_t, p_j))

where μ is the position-distance weight factor, here μ = 3. Then compute the saliency value of the central pixel block at scale r:

S_t^r = 1 - exp{ -(1/M) Σ_{j=1}^{M} d(p_t, p_j) }

where M is the number of similar image blocks of the central pixel block, here M = 64, and the scales are r ∈ {100%, 80%, 50%, 30%};
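Step (3b) can be sketched as follows, in the spirit of Goferman-style context-aware saliency; the Euclidean gray-level distance and the normalization of the spatial distance are assumptions not spelled out in the text:

```python
import numpy as np

def patch_saliency(center: np.ndarray, patches: np.ndarray,
                   positions: np.ndarray, center_pos: np.ndarray,
                   mu: float = 3.0, M: int = 64) -> float:
    """Saliency of a central patch: 1 - exp(-mean dissimilarity to its
    M most similar patches). Dissimilarity combines gray-level distance
    (Euclidean over patch vectors) and normalized spatial distance."""
    d_gray = np.linalg.norm(patches - center.ravel(), axis=1)
    d_pos = np.linalg.norm(positions - center_pos, axis=1)
    d_pos = d_pos / (d_pos.max() + 1e-12)          # normalize spatial distance
    d = d_gray / (1.0 + mu * d_pos)                # combined dissimilarity
    nearest = np.sort(d)[:M]                       # M most similar patches
    return float(1.0 - np.exp(-nearest.mean()))
```

A patch that resembles even its best matches poorly gets saliency close to 1; a patch identical to its neighbors gets saliency 0, which is what lets the algorithm separate changed areas from homogeneous background.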

(3c) Add context pixels to correct the saliency of the central pixel block:

Ŝ_t = (1/R) Σ_{r∈R} [S_t^r] (1 - d_foci^r(t))

where [·] is the normalization operator that rescales the map at the current scale back to the original image size, and d_foci^r(t) is the normalized position distance between pixel t and the nearest salient pixel t_nearest at scale r. After the saliency value of every central pixel is obtained, the final saliency map is generated;

(4) Mask the backgrounds of the two temporal SAR images:

(4a) Select a threshold and binarize the saliency map into a 0-1 binary map, where 0 marks non-salient regions and 1 marks salient regions;

(4b) Mask the two temporal SAR images A and B with the binary map, removing the unchanged background and obtaining the masked SAR images A' and B';
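A minimal sketch of step (4); the threshold value 0.5 here is a hypothetical placeholder for whatever threshold the method selects:

```python
import numpy as np

def mask_background(saliency: np.ndarray, img_a: np.ndarray,
                    img_b: np.ndarray, thresh: float = 0.5):
    """Step (4): binarize the saliency map at `thresh` (1 = salient),
    then zero out the non-salient background of both temporal images."""
    binary = (saliency >= thresh).astype(img_a.dtype)
    return img_a * binary, img_b * binary

sal = np.array([[0.9, 0.1], [0.2, 0.8]])
A = np.full((2, 2), 7.0)
B = np.full((2, 2), 3.0)
A_m, B_m = mask_background(sal, A, B)
```

Applying the same binary mask to both dates keeps the two masked images co-registered, so the later per-pixel comparison remains valid.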

(5) Extract the changed region of the masked SAR images by fuzzy C-means clustering:

(5a) Compute the neighborhood-ratio difference map D' of the masked SAR images A' and B':

D'(t) = 1 - [ θ · min(A'(t), B'(t)) / max(A'(t), B'(t)) + (1 - θ) · Σ_{j∈Ω} min(A'(j), B'(j)) / Σ_{j∈Ω} max(A'(j), B'(j)) ]

where the first bracketed term is the pixel-wise ratio, the second is the ratio over the pixel neighborhood, θ is a balance weight, and Ω denotes the neighborhood of the pixel;
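The patent gives the neighborhood-ratio formula only as an image; the following sketch assumes a Gong-style neighborhood ratio (pixel-wise min/max ratio plus a 3×3 neighborhood ratio, balanced by a hypothetical weight θ, with the difference map taken as 1 - NR):

```python
import numpy as np

def neighborhood_ratio_diff(a: np.ndarray, b: np.ndarray,
                            theta: float = 0.5, eps: float = 1e-12) -> np.ndarray:
    """Two-term ratio difference: a pixel-wise min/max ratio plus the same
    ratio aggregated over a 3x3 neighborhood Omega, weighted by theta.
    The ratio is near 1 for unchanged pixels, so 1 - NR is the difference."""
    lo = np.minimum(a, b) + eps
    hi = np.maximum(a, b) + eps
    pixel_term = lo / hi
    # 3x3 neighborhood sums via edge-replicated padding
    pl = np.pad(lo, 1, mode="edge")
    ph = np.pad(hi, 1, mode="edge")
    num = sum(pl[i:i + a.shape[0], j:j + a.shape[1]] for i in range(3) for j in range(3))
    den = sum(ph[i:i + a.shape[0], j:j + a.shape[1]] for i in range(3) for j in range(3))
    nr = theta * pixel_term + (1.0 - theta) * num / den
    return 1.0 - nr
```

The neighborhood term averages out isolated speckle spikes, so a single noisy pixel produces a much weaker response than a genuine changed patch.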

(5b) Divide the neighborhood-ratio difference map D' into three classes by fuzzy C-means clustering, representing the unchanged class, the intermediate uncertain class, and the changed class;

(5c) Select 30% of the samples from the unchanged and changed classes to form the pseudo-label training sample set;
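Steps (5b)-(5c) can be sketched with a small self-contained fuzzy C-means on the difference values; the quantile-based initialization and the fuzzifier m = 2 are implementation assumptions:

```python
import numpy as np

def fcm_pseudo_labels(d_vals, p=0.30, m=2.0, n_iter=100):
    """Steps (5b)-(5c): fuzzy C-means with 3 clusters on the difference
    values; clusters are ranked by center (low = unchanged, high = changed)
    and the p fraction of highest-membership samples is kept from the
    unchanged (label 0) and changed (label 1) classes as pseudo-labels."""
    x = np.ravel(d_vals).astype(np.float64)
    centers = np.quantile(x, [0.1, 0.5, 0.9])         # deterministic init
    for _ in range(n_iter):
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = dist ** (-2.0 / (m - 1.0))                # membership ~ 1/d^(2/(m-1))
        u /= u.sum(axis=0)
        um = u ** m
        centers = um @ x / um.sum(axis=1)
    rank = np.empty(3, dtype=int)
    rank[np.argsort(centers)] = np.arange(3)          # 0 unchanged, 1 uncertain, 2 changed
    cls = rank[np.argmax(u, axis=0)]
    conf = u.max(axis=0)
    idx, lab = [], []
    for cluster, label in ((0, 0), (2, 1)):           # skip the uncertain cluster
        members = np.where(cls == cluster)[0]
        keep = members[np.argsort(-conf[members])][:max(1, int(p * members.size))]
        idx.extend(keep.tolist())
        lab.extend([label] * keep.size)
    return np.array(idx), np.array(lab)
```

Dropping the intermediate uncertain cluster is what makes the pseudo-labels high-confidence: only pixels that FCM assigns firmly to one extreme enter the training set.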

(6) Train the multilayer SAE with the masked SAR images and the pseudo-label sample set:

(6a) Construct a three-layer SAE network and randomly initialize the weights W_1^(k), W_2^(k) and biases b_1^(k), b_2^(k) of each layer k;

(6b) Pre-train each layer of the network, layer by layer, with the pseudo-label training sample set. Let the feature of a pseudo-label training sample be x_i and its label y_i. The encoding of the k-th SAE layer is:

h_i^(k) = f(W_1^(k) x_i^(k) + b_1^(k))

The SAE decoding is:

x̂_i^(k) = g(W_2^(k) h_i^(k) + b_2^(k))

The SAE pre-training loss function is:

J = (1/N) Σ_{i=1}^{N} ||x̂_i^(k) - x_i^(k)||² + λ (||W_1^(k)||_F² + ||W_2^(k)||_F²) + α Σ_{z=1}^{M_k} KL(ρ || ρ̂_z)

with KL(ρ || ρ̂_z) = ρ log(ρ/ρ̂_z) + (1 - ρ) log((1 - ρ)/(1 - ρ̂_z)).

The first term is the reconstruction error of encoding and decoding, which makes the decoding output of the network approach the input before encoding as closely as possible; the second term is a weight constraint that prevents the weights from becoming too large; and the third term is a sparsity penalty that ensures the sparsity of the hidden units. W_1^(k) and b_1^(k) denote the encoding weights and bias, W_2^(k) and b_2^(k) the decoding weights and bias, x_i^(k) the input of the layer, h_i^(k) its hidden output, and x̂_i^(k) its decoded output. f(·) and g(·) are the encoding and decoding activation functions, KL(·) is the Kullback–Leibler divergence, ρ is the sparsity parameter, ρ̂_z is the average activation of the z-th hidden unit, M_k is the number of hidden units, N is the number of training samples, λ and α are balance factors, and ||·||_F is the Frobenius norm. The network is optimized by gradient descent, yielding the pre-trained parameters; the hidden output of the third SAE layer, h_i^(3), serves as the optimized sample feature.
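A NumPy sketch of the single-layer objective in (6b); the sigmoid activations and the values of λ, α, ρ are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sae_layer_loss(X, W1, b1, W2, b2, lam=1e-4, alpha=0.1, rho=0.05):
    """Single-layer sparse-autoencoder objective of step (6b):
    reconstruction error + Frobenius weight penalty + KL sparsity penalty.
    Shapes: X (N, d); W1 (h, d); b1 (h,); W2 (d, h); b2 (d,)."""
    N = X.shape[0]
    H = sigmoid(X @ W1.T + b1)                 # encoding f(W1 x + b1)
    Xhat = sigmoid(H @ W2.T + b2)              # decoding g(W2 h + b2)
    recon = np.sum((Xhat - X) ** 2) / N
    weight = lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    rho_hat = H.mean(axis=0)                   # average activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + weight + alpha * kl
```

In practice this loss would be minimized by gradient descent for each layer in turn, feeding each layer's hidden output H to the next layer as its input.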

(6c) Input the optimized sample features and pseudo-labels to the softmax classifier and train it to obtain its weights and bias. The loss function is:

J(W_4, b_4) = -(1/N) Σ_{i=1}^{N} Σ_{j=1}^{C} 1{y_i = j} log p(y_i = j | h_i^(3)) + λ ||W_4||_F²

where the first term is the prediction error and the second a weight penalty; ŷ_i denotes the classifier prediction, W_4 the softmax weights, C the number of classes, and λ a balance factor. The objective is solved by the back-propagation algorithm to obtain the softmax parameters;

(6d) Fine-tune the parameters of the softmax classifier and the three-layer SAE in the backward direction with the pseudo-label training sample set. The loss function is:

J = (1/N) Σ_{i=1}^{N} ||x̂_i - x_i||² + λ Σ_{k=1}^{3} (||W_1^(k)||_F² + ||W_2^(k)||_F²)

where the first term is the average reconstruction error of the entire network, with x̂_i the reconstruction of x_i after passing through all three layers, and the second term is a weight constraint; h_i^(k) denotes the hidden output of the k-th layer and x_i^(k) = h_i^(k-1) its input. The network is optimized by the back-propagation algorithm.

(7) Use the trained network to extract the final SAR image change detection result:

(7a) Input the sample set of the whole masked image into the three-layer SAE; the hidden output of the third layer, h_i^(3), serves as the optimized sample feature;

(7b) Input the optimized sample features h_i^(3) to the softmax classifier, obtaining the predicted probabilities:

p(y_i = j | h_i^(3)) = exp(W_4^(j) h_i^(3) + b_4^(j)) / Σ_{c=1}^{C} exp(W_4^(c) h_i^(3) + b_4^(c))

where W_4^(j) and b_4^(j) are the softmax weights and bias for class j, and W_4^(c) and b_4^(c) those for class c. Finally, the predicted label of sample x_i is:

ŷ_i = argmax_j p(y_i = j | h_i^(3))
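Steps (7a)-(7b) reduce to a standard softmax evaluation. A minimal sketch, where W4 and b4 stand for the trained softmax parameters:

```python
import numpy as np

def softmax_predict(H, W4, b4):
    """Step (7b): class probabilities and argmax labels for features H
    (N, d) under softmax weights W4 (C, d) and biases b4 (C,)."""
    logits = H @ W4.T + b4
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return p, np.argmax(p, axis=1)
```

Subtracting the row-wise maximum before exponentiating leaves the probabilities unchanged but avoids overflow for large logits.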

(7c) Draw the final SAR image change detection result map from the predicted label vector and the spatial positions.

The technical effect of the present invention is illustrated below through simulation experiments:

1. Simulation conditions and content

The experimental data of the present invention are the San Francisco data shown in Figure 2, with an image size of 256×256 pixels; in Figure 2(c), the white part is the changed region and the black part the unchanged region. Figure 3 shows the experimental results on the San Francisco data: Figure 3(a) is the change detection result of the comparison method SG-FCM, Figure 3(b) that of the comparison method SAE, Figure 3(c) that of the comparison method SGK, and Figure 3(d) that of the proposed method. Table 1 compares the change detection accuracy of these methods. In the simulation experiments, both the present invention and the comparison methods were implemented in Matlab R2017a.

Among the three comparison methods, SG-FCM combines the context saliency detection method with fuzzy C-means clustering; SGK combines the context saliency detection method with PCA and k-means; and SAE is based on the neighborhood ratio and auto-encoders.

2. Analysis of simulation results

Table 1. Change detection accuracy comparison on the San Francisco data

Method   | False alarms | Missed detections | Total errors | Accuracy | Kappa
SG-FCM   | 549          | 534               | 1083         | 0.9835   | 0.8757
SAE      | 669          | 300               | 969          | 0.9852   | 0.8925
SGK      | 702          | 383               | 1085         | 0.9834   | 0.8791
Proposed | 343          | 439               | 782          | 0.9881   | 0.9093

As Table 1 shows, the proposed method achieves higher change detection accuracy than the existing SG-FCM, SAE, and SGK methods, demonstrating that it effectively improves SAR image change detection accuracy. The comparison in Figure 3 shows that the result of the proposed method is closest to the ground truth, indicating that the context saliency detection algorithm effectively suppresses SAR speckle noise and that the SAE-based deep network obtains effective change features, thereby improving change detection accuracy.
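The accuracy and Kappa columns of Table 1 follow from the confusion counts of the binary change map; a short sketch assuming the usual definitions, with changed pixels as the positive class:

```python
def change_detection_metrics(tp, tn, fp, fn):
    """Overall accuracy and Cohen's kappa from the confusion counts of a
    binary change map (tp/fn count changed pixels, tn/fp unchanged ones)."""
    n = tp + tn + fp + fn
    acc = (tp + tn) / n
    # expected agreement by chance, computed from the marginals
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kappa = (acc - pe) / (1.0 - pe)
    return acc, kappa
```

Kappa discounts chance agreement, which matters here because the unchanged class dominates: a detector that marks everything unchanged already scores high accuracy but kappa near 0.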

Claims (1)

1. A SAR image change detection method based on context significance detection and SAE is characterized by comprising the following steps:
(1) two time-phase SAR images of the same area are input:
inputting two time phase SAR images of the same area for image registration to obtain two preprocessed time phase SAR images A and B;
(2) calculating a log ratio difference map:
calculating a logarithmic ratio difference graph D of the two time phase SAR images A and B, wherein the calculation formula is as follows:
Figure FDA0002448461720000011
(3) extracting a saliency region of the difference map based on a context saliency detection algorithm:
(3a) partitioning the logarithm ratio difference graph D and dividing the logarithm ratio difference graph D into a plurality of scales;
(3b) calculating pixel blocks p from gray values and spatial distances at multiple scalestAnd pjSimilarity between them
Figure FDA0002448461720000012
Wherein μ represents a location distance weighting factor; then calculating the significant value of the central pixel block under the scale r
Figure FDA0002448461720000013
Wherein M represents the number of similarity image blocks of the central pixel block;
(3c) adding context pixel to carry out significance correction of central pixel block, and calculating as follows
Figure FDA0002448461720000014
Wherein [ ·]Representing a normalized operator, changing the current scale into the size of the original picture,
Figure FDA0002448461720000015
representing the pixel t and the nearest pixel t at the scale rnearestNormalized position distance of (a); after the significant value of each central pixel is obtained, a final significant graph is generated;
(4) the background of the two phase SAR images is masked:
(4a) selecting a threshold value to carry out binarization on the significance map, namely obtaining a 0-1 binary map, wherein 0 represents an insignificant area, and 1 represents a significant area;
(4b) masking the two time phase SAR images A and B by using a binary image, removing a background unchanged part, and obtaining masked SAR images A 'and B';
(5) extracting the changed region of the masked SAR images based on fuzzy C-means clustering:
(5a) calculating the neighborhood ratio difference map D' of the masked SAR images A' and B' as:

D'(t) = θ · min(A'_t, B'_t) / max(A'_t, B'_t) + (1 − θ) · [ Σ_{j∈Ω} min(A'_j, B'_j) ] / [ Σ_{j∈Ω} max(A'_j, B'_j) ]

wherein the first term is the ratio difference between pixel points, the second term is the ratio difference over the pixel's neighborhood, Ω represents the neighborhood of pixel t, and θ is a coefficient balancing the two terms;
(5b) dividing the neighborhood ratio difference map D' into three classes by fuzzy C-means clustering, representing the unchanged class, the intermediate uncertain class, and the changed class, respectively;
(5c) selecting p% of the samples from the unchanged class and the changed class to form a pseudo-label training sample set;
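Steps (5b)-(5c) can be sketched with a minimal one-dimensional fuzzy C-means and a pseudo-label sampler (an illustrative implementation, not the claimed one; initial centres from quantiles and the fuzzifier m = 2 are assumptions):

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=3, m=2.0, iters=100):
    """Split 1-D difference values into c ordered classes
    (0 = unchanged, 1 = uncertain, 2 = changed for c = 3)."""
    v = np.quantile(x, np.linspace(0.1, 0.9, c))    # spread initial centres
    for _ in range(iters):
        d = np.abs(x[None, :] - v[:, None]) + 1e-12  # (c, n) distances
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                        # fuzzy memberships
        v = (u ** m @ x) / (u ** m).sum(axis=1)      # updated centres
    rank = np.empty(c, dtype=int)
    rank[np.argsort(v)] = np.arange(c)               # 0 = lowest centre
    return rank[u.argmax(axis=0)]                    # hard labels

def pseudo_labels(labels, p=0.3, seed=0):
    """Sample a fraction p of the unchanged (0) and changed (2) pixels
    to build the pseudo-label training set; the uncertain class is skipped."""
    rng = np.random.default_rng(seed)
    idx = []
    for cls in (0, 2):
        members = np.flatnonzero(labels == cls)
        idx.append(rng.choice(members, size=max(1, int(p * members.size)),
                              replace=False))
    return np.concatenate(idx)
```

Only the two confident classes contribute training samples, which is what lets the later SAE stage be trained without ground truth.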
(6) training the multi-layer SAE with the masked SAR images and the pseudo-label sample set:
(6a) constructing a three-layer SAE network and randomly initializing the network weight and bias parameters W_1^k, b_1^k, W_2^k, b_2^k (k = 1, 2, 3);
(6b) pre-training each network layer by layer with the pseudo-label training sample set. Denoting the features of a pseudo-label training sample by x_i and its label by y_i, the encoding of the k-th SAE layer is:

h_i^k = f(W_1^k x_i^k + b_1^k)

the SAE decoding is:

x̂_i^k = g(W_2^k h_i^k + b_2^k)

and the SAE pre-training loss function is:

J = (1/N) Σ_{i=1}^{N} (1/2) ‖x̂_i^k − x_i^k‖² + λ ( ‖W_1^k‖_F² + ‖W_2^k‖_F² ) + α Σ_{z=1}^{M_k} KL(ρ ‖ ρ̂_z)
the first term of the above formula is the encode-decode reconstruction error, which drives the decoded output of the network to approach the input as closely as possible; the second term is a weight-constraint term that prevents the weights from becoming too large; and the third term is a sparsity penalty that ensures the sparsity of the hidden units. W_1^k and b_1^k represent the encoding weight and bias, W_2^k and b_2^k the decoding weight and bias, x_i^k the input of the layer, h_i^k its hidden output, and x̂_i^k its decoded output; f(·) and g(·) denote the activation functions of encoding and decoding, respectively, KL(·) denotes the Kullback-Leibler divergence, ρ is the sparsity parameter, ρ̂_z represents the mean activation value of the z-th hidden unit, M_k represents the number of hidden units, N the number of training samples, λ and α the balance factors, and ‖·‖_F the Frobenius norm. Optimizing by gradient descent yields the pre-trained network parameters; the hidden output of the third SAE layer serves as the optimized sample feature h_i^3;
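The encoding, decoding, and three-term loss of step (6b) can be written out for a single SAE layer (a sketch only; sigmoid activations for f and g are an assumption, and the gradient-descent update itself is omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sae_loss(X, W1, b1, W2, b2, lam=1e-3, alpha=0.1, rho=0.05):
    """Forward pass and pre-training loss of one SAE layer: the
    reconstruction error, the Frobenius weight penalty, and the KL
    sparsity penalty on the mean hidden activations."""
    H = sigmoid(X @ W1 + b1)            # encoding h = f(W1 x + b1)
    X_hat = sigmoid(H @ W2 + b2)        # decoding x_hat = g(W2 h + b2)
    recon = np.mean(np.sum((X_hat - X) ** 2, axis=1)) / 2.0
    weight = lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    rho_hat = H.mean(axis=0)            # mean activation of each hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + weight + alpha * kl, H
```

Because the KL term compares the target activation ρ with each unit's mean activation ρ̂_z, it pushes most hidden units toward being inactive on average, which is the sparsity constraint the claim describes.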
(6c) inputting the optimized sample features and pseudo labels into a softmax classifier and training it to obtain its weight and bias parameters; the loss function is:

J(W_4) = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{c} 1{y_i = j} log p(y_i = j | h_i^3) + λ ‖W_4‖_F²

wherein the first term of the above formula represents the prediction error and the second term a weight penalty term; p(y_i = j | h_i^3) represents the classifier prediction, W_4 represents the weight of the softmax classifier, and λ represents a balance factor; solving this objective with a back-propagation algorithm yields the parameters of the softmax;
(6d) performing reverse fine-tuning of the parameters of the softmax classifier and the three SAE layers with the pseudo-label training sample set; the loss function is:

J = (1/N) Σ_{i=1}^{N} (1/2) ‖x̂_i − x_i‖² + λ Σ_{k=1}^{3} ( ‖W_1^k‖_F² + ‖W_2^k‖_F² )

wherein the first term of the above equation is the average reconstruction error of the whole network and the second term a weight-constraint term; h_i^k denotes the hidden output of the k-th network layer and x_i^k the input of the k-th network layer; the network is optimized with a back-propagation algorithm;
(7) extracting the final SAR image change detection result with the trained network:
(7a) inputting the sample set of the entire masked image into the three-layer SAE; the hidden output of the third SAE layer is the optimized sample feature h_i^3;
(7b) inputting the optimized sample features h_i^3 into the softmax classifier to obtain the prediction probabilities:

p(y_i = j | h_i^3) = exp(W_4^j h_i^3 + b_4^j) / Σ_{c} exp(W_4^c h_i^3 + b_4^c)

wherein W_4^j and b_4^j are the partial weights and biases corresponding to class j, and W_4^c and b_4^c the partial weights and biases corresponding to class c; finally, the predicted label of sample x_i is:

ŷ_i = argmax_j p(y_i = j | h_i^3)
(7c) drawing the final SAR image change detection result map from the predicted label vector and the spatial positions of the samples.
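At inference time, step (7) reduces to a softmax over the learned features followed by scattering labels back to pixel positions; a sketch under the claim's notation (here `W4`, `b4` stand for the trained softmax parameters):

```python
import numpy as np

def softmax_predict(H, W4, b4):
    """Class probabilities and hard labels from the optimised features H
    (one row per pixel sample), as in steps (7b)-(7c)."""
    logits = H @ W4 + b4
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return p, p.argmax(axis=1)                   # probabilities, argmax labels

def change_map(labels, positions, shape):
    """Scatter predicted labels back to their pixel positions to form
    the final change detection result map."""
    out = np.zeros(shape, dtype=int)
    out[tuple(np.asarray(positions).T)] = labels
    return out
```

Subtracting the row maximum before exponentiating leaves the softmax output unchanged but avoids overflow for large logits.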
CN201810982878.8A 2018-08-27 2018-08-27 SAR image change detection method based on context significance detection and SAE Active CN109242889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810982878.8A CN109242889B (en) 2018-08-27 2018-08-27 SAR image change detection method based on context significance detection and SAE

Publications (2)

Publication Number Publication Date
CN109242889A (en) 2019-01-18
CN109242889B (en) 2020-06-16

Family

ID=65069287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810982878.8A Active CN109242889B (en) 2018-08-27 2018-08-27 SAR image change detection method based on context significance detection and SAE

Country Status (1)

Country Link
CN (1) CN109242889B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110133651B (en) * 2019-05-24 2021-04-06 中国科学院电子学研究所 An adaptive sparsity estimation method and device for sparse SAR imaging
CN110210416B (en) * 2019-06-05 2021-10-01 中国科学技术大学 Optimization method and device for sign language recognition system based on dynamic pseudo-tag decoding
CN110263845B (en) * 2019-06-18 2023-05-02 西安电子科技大学 SAR Image Change Detection Method Based on Semi-supervised Adversarial Deep Network
CN110555841B (en) * 2019-09-10 2021-11-23 西安电子科技大学 SAR image change detection method based on self-attention image fusion and DEC
CN111783811B (en) * 2019-10-30 2024-06-21 北京京东尚科信息技术有限公司 Pseudo tag generation method and device
CN111161229B (en) * 2019-12-23 2022-11-18 大连理工大学 A Change Detection Method Based on Geometric Active Contour Model and Sparse Autoencoder
CN112200137B (en) * 2020-10-29 2022-11-25 内蒙古工业大学 Image recognition method and corresponding device, storage medium and electronic equipment
CN113610746A (en) * 2021-02-23 2021-11-05 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
CN112926484B (en) * 2021-03-11 2022-07-01 新疆大学 Method and device for detecting changes in low-light images based on automatic discrimination strategy
CN113033510B (en) * 2021-05-21 2021-10-15 浙江大华技术股份有限公司 Training and detecting method, device and storage medium for image change detection model
CN115131660A (en) * 2022-06-13 2022-09-30 武汉大学 Time sequence SAR image building area change time point detection method based on regional difference
CN117173587B (en) * 2023-08-23 2024-07-12 哈尔滨工程大学 Feature refinement fusion change detection method based on heterogeneous image depth conversion
CN118570506B (en) * 2024-08-05 2024-10-18 山东科技大学 Railway perimeter prediction method based on multi-time-phase SAR image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923711A (en) * 2010-07-16 2010-12-22 西安电子科技大学 SAR Image Change Detection Method Based on Neighborhood Similarity and Mask Enhancement
CN103198480A (en) * 2013-04-02 2013-07-10 西安电子科技大学 Remote sensing image change detection method based on area and Kmeans clustering
CN105869146A (en) * 2016-03-22 2016-08-17 西安电子科技大学 Saliency fusion-based SAR image change detection method
CN107256409A (en) * 2017-05-22 2017-10-17 西安电子科技大学 The High Resolution SAR image change detection method detected based on SAE and conspicuousness

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608698B (en) * 2015-12-25 2018-12-25 西北工业大学 A kind of method for detecting change of remote sensing image based on SAE
GB2553284B (en) * 2016-08-23 2020-02-05 Thales Holdings Uk Plc Multilook coherent change detection
CN107909109B (en) * 2017-11-17 2019-06-21 西安电子科技大学 SAR image classification method based on saliency and multi-scale deep network model

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A Neighborhood-Based Ratio Approach for Change Detection in SAR Images; Maoguo Gong et al.; IEEE Geoscience and Remote Sensing Letters; 2012-03-31; Vol. 9, No. 2; full text *
Change detection of SAR images based on supervised contractive autoencoders and fuzzy clustering; Jie Geng et al.; 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP); 2017-06-26; full text *
Feature learning and change feature classification based on deep learning for ternary change detection in SAR images; Maoguo Gong et al.; ISPRS Journal of Photogrammetry and Remote Sensing; 2017-05-18; Vol. 2017, No. 129; full text *
Saliency-Guided Change Detection for Aerial and Remote Sensing Imageries; Huili Tan et al.; 2017 IEEE 2nd International Conference on Signal and Image Processing; 2017-12-01; full text *
Unsupervised saliency-guided SAR image change detection; Yaoguo Zheng et al.; Pattern Recognition; 2016-08-04; full text *
Change detection of high-resolution remote sensing imagery based on visual saliency; Liu Yaqiong et al.; Journal of Beijing University of Civil Engineering and Architecture; 2018-03-31; Vol. 34, No. 1; full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant