CN111275680B - SAR image change detection method based on Gabor convolution network - Google Patents


Info

Publication number
CN111275680B
CN111275680B
Authority
CN
China
Prior art keywords
gabor
convolution
data set
layer
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010056677.2A
Other languages
Chinese (zh)
Other versions
CN111275680A (en)
Inventor
高峰
张珊
董军宇
吕越
王俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN202010056677.2A priority Critical patent/CN111275680B/en
Publication of CN111275680A publication Critical patent/CN111275680A/en
Application granted granted Critical
Publication of CN111275680B publication Critical patent/CN111275680B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10044: Radar image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20224: Image subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The SAR image change detection method based on the Gabor convolution network comprises the following steps: first, difference analysis is carried out using a logarithmic ratio to obtain the differential image of the multi-temporal SAR images; second, the differential image is pre-classified with a multi-layer fuzzy C-means clustering algorithm to generate three categories of sample pixel points, namely a changed category, an unchanged category and a fuzzy category; then the pre-classification results are used to construct a training data set and a test data set; finally, the training data set is used to train the Gabor convolution network and the test data set is used to test it, yielding the final change result map. A Gabor filter is used to modulate a learning filter to obtain Gabor direction filters, which are applied in a convolutional neural network; this captures spatial information better and reduces model complexity. Meanwhile, the pre-classification strategy yields a more reliable training data set and further improves classification accuracy.

Description

SAR image change detection method based on Gabor convolution network
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a SAR (Synthetic Aperture Radar) image change detection method based on a Gabor convolution network, which can detect ground-feature changes in multi-temporal SAR images and is of great significance in fields such as natural disaster detection and assessment, urban planning, and land use.
Background
The rapid development of science and technology worldwide, especially the progress of satellite remote sensing, has made remote sensing image data much easier to acquire. China's remote sensing technology now covers complete spectral bands with mature satellite and airborne systems, and application satellites for meteorology, resources and other purposes have been put into service. At the same time, rapid population growth and increasing human activity are accelerating changes in the surface landscape, such as changes in land use, the demolition of buildings in certain areas, urban expansion, and the debris flows and collapses caused by flash floods. Detecting changes in a given surface landscape therefore has important application value, and the problem has attracted the attention of more and more scholars and experts.
Synthetic aperture radar has all-weather, day-and-night imaging capability, is unaffected by climatic conditions, and can accurately capture change information of the surface landscape, making it a current research hotspot. SAR image change detection obtains the required ground-object change information by performing difference analysis on SAR images of the same place acquired at different times. At present, SAR image change detection technology is widely applied in many fields: in the military field it can be used to assess the battlefield situation and evaluate damage effects; in the civil field it can be used for resource and environment monitoring, crop growth monitoring, and natural disaster monitoring and assessment. However, since SAR images contain a large amount of speckle noise, current methods often have difficulty accurately detecting the changed regions in the images.
In recent years, researchers in China and abroad have carried out a great deal of work on SAR image change detection. Existing methods are mainly classified into supervised and unsupervised methods according to whether prior knowledge is required. Supervised methods require prior knowledge: learning models such as restricted Boltzmann machines, extreme learning machines and convolutional neural networks need a large number of labeled samples for training, and it is difficult to obtain excellent performance when labels are of poor quality or insufficient quantity. Unsupervised methods, such as expectation maximization and thresholding, do not need prior knowledge, but their noise robustness and adaptability are poor. In summary, current methods still leave considerable room for improvement in change detection for multi-temporal SAR images.
Disclosure of Invention
The invention provides a remote sensing image change detection method based on a Gabor convolution network, which aims to solve technical problems such as the large influence of noise on classification accuracy and low classification accuracy.
The image change detection method based on the Gabor convolution network comprises the following steps:
obtaining a difference image by using a logarithmic ratio for two multi-temporal SAR images of the same geographic position;
pre-classifying the differential images to construct a training data set and a test data set;
using the training data set for Gabor convolutional network training;
and using the test data set to test the Gabor convolutional network, obtaining the change result map.
The specific steps of the invention are as follows:
(1) Carrying out difference analysis on two multi-temporal SAR images in the same geographic position by using a logarithmic ratio to obtain a differential image of the multi-temporal SAR images;
the calculation process of the differential analysis is as follows:

I_DI = |log I_1 - log I_2|

where I_1 and I_2 respectively denote the two multi-temporal SAR images, I_DI denotes their differential image, |·| is the absolute-value operation, and log denotes the base-10 logarithm;
(2) For the differential image I_DI, pre-classifying to construct a training data set and a test data set;
(2.1) pre-classifying the differential image by using a multi-layer fuzzy C-means clustering algorithm to obtain a pseudo-label matrix;
(2.2) extracting the spatial positions labeled 0 and 1 in the pseudo-label matrix, and taking the L×L neighborhood pixels around the pixel points corresponding to those spatial positions in the differential image obtained in step 1 as the training data set, where L is an odd number not less than 3; the number of samples in the resulting training data set is denoted T_1;
(2.3) extracting the spatial positions labeled 0.5 in the pseudo-label matrix, and taking the L×L neighborhood pixels around the pixel points corresponding to those spatial positions in the differential image obtained in step 1 as the test data set, where L is an odd number not less than 3; the number of samples in the resulting test data set is denoted T_2;
The method is characterized by further comprising the following steps:
(3) The training data set of step 2.2 is used for Gabor convolutional network training,
the constructed Gabor convolutional network has the structure: input layer → data enhancement layer → small-scale Gabor convolution layer → medium-scale Gabor convolution layer → large-scale Gabor convolution layer → output layer:
(3.1) generating the Gabor direction filters;
a Gabor direction filter is obtained by modulating a learning filter with a Gabor filter; the Gabor filter is denoted G(u, v), where u denotes the filtering direction and v denotes the filtering scale (i.e. wavelength);
at a given scale v, the Gabor filter consists of u Gabor direction operators with different directions and can be expressed as:

G(u, v) = {g_1, g_2, ..., g_u}

where g_1, ..., g_u denote the Gabor direction operators, each of size W×W, with W an odd number not less than 3;
the learning filter is a three-dimensional convolution kernel in the convolutional neural network, denoted C, of size M×M×N, where M×M denotes the height and width of the kernel (M an odd number not less than 3) and N denotes the number of channels (a natural number); the three-dimensional convolution kernel can therefore be expressed as:

C = {c_1, c_2, ..., c_N}

where c_1, ..., c_N denote the convolution kernels on each channel, each of size M×M;
the Gabor direction filters are obtained by modulating the learning filter channel by channel with each Gabor direction operator in the Gabor filter; the calculation process is:

GoF(u, v) = {g_1 ∘ C, g_2 ∘ C, ..., g_u ∘ C}

where GoF(u, v) denotes the Gabor direction filter and ∘ denotes channel-by-channel modulation; the calculation on each channel is:

g_s ∘ C = {g_s ⊙ c_1, g_s ⊙ c_2, ..., g_s ⊙ c_N}, s = 1, 2, ..., u

where ⊙ denotes pixel-by-pixel multiplication;
through the above operation, u Gabor direction filters of size M×M×N are obtained;
(3.2) data enhancement;
the T_1 training samples in the training data set obtained in step 2.2 are used as input data and fed in turn to the input layer; through the copy operation of the data enhancement layer, each training sample becomes N pictures of size L×L, denoted F_i, i = 1, 2, 3, ..., T_1, which serve as the input to the next layer;
(3.3) enhanced feature maps are obtained through the three Gabor convolution layers of different scales:
(3.3.1) extracting features through the small-scale Gabor convolution layer;
the small-scale Gabor convolution layer contains H_1 Gabor direction filters GoF(u, v_1), with convolution kernels of size M×M×N; the output feature map O_i^1 of the convolution operation is:

O_i^1 = G_conv(F_i, G_1)

where H_1 and v_1 are arbitrary natural numbers, G_conv(·) denotes the convolution operation, G_1 denotes the Gabor direction filters GoF(u, v_1) at scale v_1, and F_i is the picture obtained in step 3.2; the feature map O_i^1 obtained after the Gabor convolution operation is then normalized and max-pooled; finally it is activated with the ReLU function, and the resulting feature map is denoted F_i^1;
(3.3.2) extracting features through the medium-scale Gabor convolution layer;
the medium-scale Gabor convolution layer contains H_2 Gabor direction filters GoF(u, v_2), with convolution kernels of size M×M×N; the output feature map O_i^2 of the convolution operation is:

O_i^2 = G_conv(F_i^1, G_2)

where H_2 is a natural number greater than H_1, v_2 is a natural number greater than v_1, G_conv(·) denotes the convolution operation, G_2 denotes the Gabor direction filters GoF(u, v_2) at scale v_2, and F_i^1 is the feature map obtained in step 3.3.1; the feature map O_i^2 obtained after the Gabor convolution operation is then normalized and max-pooled; finally it is activated with the ReLU function, and the resulting feature map is denoted F_i^2;
(3.3.3) extracting features through the large-scale Gabor convolution layer;
the large-scale Gabor convolution layer contains H_3 Gabor direction filters GoF(u, v_3), with convolution kernels of size M×M×N; the output feature map O_i^3 of the convolution operation is:

O_i^3 = G_conv(F_i^2, G_3)

where H_3 is a natural number greater than H_2, v_3 is a natural number greater than v_2, G_conv(·) denotes the convolution operation, G_3 denotes the Gabor direction filters GoF(u, v_3) at scale v_3, and F_i^2 is the feature map obtained in step 3.3.2; the enhanced feature map O_i^3 obtained after the Gabor convolution operation is then normalized; finally it is activated with the ReLU function, yielding the final feature map F_i^3;
(3.4) the extracted feature map F_i^3 is input to the output layer; let ŷ_i denote the predictive label of the i-th training sample output by the output layer, i = 1, 2, 3, ..., T_1;
(3.5) calculating the cross-entropy loss and performing back propagation;
Loss denotes the cross-entropy loss, calculated by the following formula:

Loss = -Σ_{i=1}^{T_1} [ y_i · log ŷ_i + (1 - y_i) · log(1 - ŷ_i) ]

where y_i is the true label of the i-th sample in the training data set of step 2.2: y_i = 1 means the label of the input sample is 1, i.e. the pixel at that position has changed, and y_i = 0 means the label of the input sample is 0, i.e. the pixel at that position is unchanged; ŷ_i is the predictive label output by the output layer in step 3.4, and log denotes the base-10 logarithm;
(4) Inputting the test data set of step 2.3 into the Gabor convolution network trained in step 3, and obtaining predictive labels for the test data set following the process of step 3;
(5) Combining the training data set of step 2.2 with the predictive labels obtained in step 4 to obtain the change result map for the geographic position in step 1.
The remote sensing image change detection method based on the Gabor convolution network provided by the embodiment of the invention has the following advantages:
1. The differential image is generated using the logarithmic ratio, which effectively suppresses speckle noise and improves robustness to noise.
2. The differential images are pre-classified by using multi-layer fuzzy C-means clustering, so that a more reliable training data set can be obtained. The method can meet the requirements of the change detection task in the unlabeled scene and improve the application range of the method.
3. The Gabor filter is used for modulating the learning filter to obtain a Gabor direction filter, and the Gabor direction filter is applied to a convolutional neural network, so that spatial information can be better captured, and particularly robustness in direction and scale change is improved. The Gabor direction filter also effectively reduces the complexity of network training, learns fewer parameters, and can improve the accuracy of change detection classification.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of an image processing method according to the present invention;
FIG. 3 is a schematic diagram of Gabor direction filter generation of the present invention;
FIG. 4 is a schematic diagram of a Gabor convolution network of the present invention;
FIG. 5 is a schematic diagram of input data according to the present invention;
FIG. 6 is a graph showing the comparison of the effects of the method of the embodiment and the prior art method.
Specific dimensions, structures and devices are labeled in the drawings in order to clearly realize the structure of the embodiment of the present invention, but this is only for illustrative purposes and is not intended to limit the present invention to the specific dimensions, structures, devices and environments, and those skilled in the art may make adjustments or modifications to these devices and environments according to specific needs, and the adjustments or modifications made remain included in the scope of the appended claims.
Detailed Description
In the following description, various aspects of the present invention will be described, however, it will be apparent to those skilled in the art that the present invention may be practiced with only some or all of the structures or processes of the present invention. For purposes of explanation, specific numbers, configurations and orders are set forth, it is apparent that the invention may be practiced without these specific details. In other instances, well-known features will not be described in detail so as not to obscure the invention.
Referring to fig. 1, the specific steps of the implementation of the present invention are as follows:
step 1: carrying out difference analysis on two multi-temporal SAR images in the same geographic position by using a logarithmic ratio to obtain a differential image of the multi-temporal SAR images;
the calculation process of the differential analysis is as follows:

I_DI = |log I_1 - log I_2|

where I_1 and I_2 respectively denote the two multi-temporal SAR images, I_DI denotes their differential image, |·| is the absolute-value operation, and log denotes the base-10 logarithm;
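As an illustration, the log-ratio operator of step 1 can be sketched in a few lines of NumPy; the small `eps` guard against log(0) is an assumption, since the patent does not say how zero-valued pixels are handled.

```python
import numpy as np

def log_ratio_difference(i1, i2, eps=1e-10):
    """Differential image I_DI = |log10(I1) - log10(I2)| (step 1).

    eps is an assumed guard against taking the logarithm of zero-valued
    SAR pixels; the patent does not specify this detail.
    """
    i1 = np.asarray(i1, dtype=np.float64)
    i2 = np.asarray(i2, dtype=np.float64)
    return np.abs(np.log10(i1 + eps) - np.log10(i2 + eps))
```

Because the two intensities enter only through the ratio I_1/I_2, multiplicative speckle common to both acquisitions largely cancels, which is the noise-suppression property claimed for the log-ratio operator.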
step 2: for the differential image I_DI, pre-classifying to construct a training data set and a test data set;
step 2.1: pre-classifying the differential image by using a multi-layer fuzzy C-means clustering algorithm to obtain a pseudo-label matrix;
step 2.2: extracting the spatial positions labeled 0 and 1 in the pseudo-label matrix, and taking the 7×7 neighborhood pixels around the pixel points corresponding to those spatial positions in the differential image obtained in step 1 as the training data set; the number of samples in the resulting training data set is denoted T_1;
step 2.3: extracting the spatial positions labeled 0.5 in the pseudo-label matrix, and taking the 7×7 neighborhood pixels around the pixel points corresponding to those spatial positions in the differential image obtained in step 1 as the test data set; the number of samples in the resulting test data set is denoted T_2;
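Steps 2.1 to 2.3 might be sketched as follows. This is a simplification: a single-pass fuzzy C-means stands in for the patent's multi-layer variant, and the quantile-based initialization, fuzzifier m = 2 and iteration count are assumptions.

```python
import numpy as np

def fcm(values, n_clusters=3, m=2.0, n_iter=100):
    """Plain fuzzy C-means on a 1-D array (stand-in for multi-layer FCM)."""
    x = np.asarray(values, dtype=float).reshape(-1, 1)
    centers = np.quantile(x, np.linspace(0.1, 0.9, n_clusters)).reshape(-1, 1)
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-10            # (n_pixels, n_clusters)
        u = d ** (-2.0 / (m - 1.0))                  # membership degrees
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
    return u, centers.ravel()

def pseudo_label_matrix(di):
    """Map the three clusters, sorted by center, to labels 0 / 0.5 / 1."""
    u, centers = fcm(np.asarray(di).ravel())
    rank = np.empty(len(centers), dtype=int)
    rank[np.argsort(centers)] = np.arange(len(centers))
    labels = np.array([0.0, 0.5, 1.0])[rank]         # unchanged / fuzzy / changed
    return labels[np.argmax(u, axis=1)].reshape(np.asarray(di).shape)

def extract_patches(di, mask, L=7):
    """L x L neighborhoods around the pixels selected by a boolean mask."""
    r = L // 2
    padded = np.pad(np.asarray(di, dtype=float), r, mode="edge")
    ys, xs = np.nonzero(mask)
    return np.stack([padded[y:y + L, x:x + L] for y, x in zip(ys, xs)])
```

Patches at positions labeled 0 or 1 form the training set (T_1 samples), those labeled 0.5 the test set (T_2 samples), e.g. `extract_patches(di, (pl == 0.0) | (pl == 1.0))`.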
The method is characterized by further comprising the following steps:
step 3: the training data set of step 2.2 is used for Gabor convolutional network training,
the constructed Gabor convolutional network has the structure: input layer → data enhancement layer → small-scale Gabor convolution layer → medium-scale Gabor convolution layer → large-scale Gabor convolution layer → output layer:
step 3.1: generating the Gabor direction filters;
a Gabor direction filter is obtained by modulating a learning filter with a Gabor filter; the Gabor filter is denoted G(u, v), where u denotes the filtering direction and v denotes the filtering scale (i.e. wavelength);
at a given scale v, the Gabor filter consists of u Gabor direction operators with different directions and can be expressed as:

G(u, v) = {g_1, g_2, ..., g_u}

where g_1, ..., g_u denote the Gabor direction operators, each of size W×W, with W an odd number not less than 3;
the learning filter is a three-dimensional convolution kernel in the convolutional neural network, denoted C, of size M×M×N, where M×M denotes the height and width of the kernel (M an odd number not less than 3) and N denotes the number of channels (a natural number); the three-dimensional convolution kernel can therefore be expressed as:

C = {c_1, c_2, ..., c_N}

where c_1, ..., c_N denote the convolution kernels on each channel, each of size M×M;
the Gabor direction filters are obtained by modulating the learning filter channel by channel with each Gabor direction operator in the Gabor filter; the calculation process is:

GoF(u, v) = {g_1 ∘ C, g_2 ∘ C, ..., g_u ∘ C}

where GoF(u, v) denotes the Gabor direction filter and ∘ denotes channel-by-channel modulation; the calculation on each channel is:

g_s ∘ C = {g_s ⊙ c_1, g_s ⊙ c_2, ..., g_s ⊙ c_N}, s = 1, 2, ..., u

where ⊙ denotes pixel-by-pixel multiplication;
through the above operation, u Gabor direction filters of size M×M×N are obtained;
In this embodiment, u = 4, W×W = 3×3, and M×M×N = 3×3×4 are taken.
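With these embodiment values (u = 4 directions, 3×3 operators, a 3×3×4 learning filter), the filter generation of step 3.1 might be sketched as below. The real-part Gabor formula and its envelope parameters `sigma` and `gamma` are assumptions, since the patent does not spell out the exact operator definition.

```python
import numpy as np

def gabor_operator(theta, lam, sigma=1.0, gamma=0.5, W=3):
    """Real-valued W x W Gabor direction operator for direction theta and
    wavelength lam (sigma and gamma are assumed envelope parameters)."""
    r = W // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / lam)

def gabor_direction_filters(C, u=4, lam=1.0):
    """Modulate a learned kernel C (shape N x M x M) channel by channel with
    u Gabor direction operators, giving GoF(u, v): u filters of N x M x M."""
    N, M, _ = C.shape
    thetas = np.arange(u) * np.pi / u                # u directions in [0, pi)
    return np.stack([gabor_operator(t, lam, W=M)[None, :, :] * C
                     for t in thetas])
```

Note that all u direction filters share the single learned kernel C, which is why the network has fewer parameters to learn than a bank of u independent kernels would require.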
Step 3.2: enhancing data;
t in the training data set obtained in the step 2.2 1 The training samples are used as input data of an input layer, are sequentially input into the input layer, and are changed into 4 7 multiplied by 7 pictures through the copying operation of a data enhancement layer, and are marked as F i ,i=1,2,3,...,T 1 As input to the next layer;
step 3.3: enhanced feature maps are obtained through the three Gabor convolution layers of different scales:
step 3.3.1: extracting features through the small-scale Gabor convolution layer;
the small-scale Gabor convolution layer contains 20 Gabor direction filters GoF(4, 1), with convolution kernels of size 3×3×4; the output feature map O_i^1 of the convolution operation is:

O_i^1 = G_conv(F_i, G_1)

where G_conv(·) denotes the convolution operation, G_1 denotes the Gabor direction filters GoF(4, 1) at scale 1, and F_i is the picture obtained in step 3.2; the feature map O_i^1 obtained after the Gabor convolution operation is then normalized and max-pooled; finally it is activated with the ReLU function, and the resulting feature map is denoted F_i^1;
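One Gabor convolution layer of step 3.3.1 (convolution with every direction filter, normalization, max pooling, ReLU) can be sketched in plain NumPy as below; the 2×2 pooling window and the zero-mean, unit-variance normalization are assumptions.

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' cross-correlation of a multi-channel image x (N x L x L)
    with one multi-channel kernel k (N x M x M), summed over channels."""
    _, L, _ = x.shape
    _, M, _ = k.shape
    out = np.zeros((L - M + 1, L - M + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + M, j:j + M] * k)
    return out

def gabor_conv_layer(x, filters, pool=2):
    """Convolve with each Gabor direction filter, then normalize,
    max-pool and apply ReLU (pooling size and normalization assumed)."""
    maps = np.stack([conv2d_valid(x, f) for f in filters])
    maps = (maps - maps.mean()) / (maps.std() + 1e-8)   # normalization
    H, h, w = maps.shape
    maps = maps[:, :h - h % pool, :w - w % pool]        # crop for pooling
    maps = maps.reshape(H, h // pool, pool, w // pool, pool).max(axis=(2, 4))
    return np.maximum(maps, 0.0)                        # ReLU
```

For this embodiment, `x` would be a 4×7×7 input picture and `filters` the 20 Gabor direction filters of shape 4×3×3; the medium- and large-scale layers repeat the same pattern with 40 and 80 filters.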
step 3.3.2: extracting features through the medium-scale Gabor convolution layer;
the medium-scale Gabor convolution layer contains 40 Gabor direction filters GoF(4, 2), with convolution kernels of size 3×3×4; the output feature map O_i^2 of the convolution operation is:

O_i^2 = G_conv(F_i^1, G_2)

where G_conv(·) denotes the convolution operation, G_2 denotes the Gabor direction filters GoF(4, 2) at scale 2, and F_i^1 is the feature map obtained in step 3.3.1; the feature map O_i^2 obtained after the Gabor convolution operation is then normalized and max-pooled; finally it is activated with the ReLU function, and the resulting feature map is denoted F_i^2;
step 3.3.3: extracting features through the large-scale Gabor convolution layer;
the large-scale Gabor convolution layer contains 80 Gabor direction filters GoF(4, 3), with convolution kernels of size 3×3×4; the output feature map O_i^3 of the convolution operation is:

O_i^3 = G_conv(F_i^2, G_3)

where G_conv(·) denotes the convolution operation, G_3 denotes the Gabor direction filters GoF(4, 3) at scale 3, and F_i^2 is the feature map obtained in step 3.3.2; the enhanced feature map O_i^3 obtained after the Gabor convolution operation is then normalized; finally it is activated with the ReLU function, yielding the final feature map F_i^3;
step 3.4: the extracted feature map F_i^3 is input to the output layer; let ŷ_i denote the predictive label of the i-th training sample output by the output layer, i = 1, 2, 3, ..., T_1;
step 3.5: calculating the cross-entropy loss and performing back propagation;
Loss denotes the cross-entropy loss, calculated by the following formula:

Loss = -Σ_{i=1}^{T_1} [ y_i · log ŷ_i + (1 - y_i) · log(1 - ŷ_i) ]

where y_i is the true label of the i-th sample in the training data set of step 2.2: y_i = 1 means the label of the input sample is 1, i.e. the pixel at that position has changed, and y_i = 0 means the label of the input sample is 0, i.e. the pixel at that position is unchanged; ŷ_i is the predictive label output by the output layer in step 3.4, and log denotes the base-10 logarithm;
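Step 3.5 is ordinary binary cross-entropy except for the base-10 logarithms in the patent's formula; a sketch, where the `eps` clipping is an assumed numerical guard:

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Loss = -sum_i [ y_i * log10(yhat_i) + (1 - y_i) * log10(1 - yhat_i) ].

    eps clips predictions away from 0 and 1 to keep log10 finite
    (an assumption; the patent does not mention clipping)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    return -np.sum(y_true * np.log10(y_pred)
                   + (1.0 - y_true) * np.log10(1.0 - y_pred))
```

Using log10 instead of the natural logarithm only rescales the loss by 1/ln 10, so the gradients used during back propagation point in the same direction.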
step 4: inputting the test data set of step 2.3 into the Gabor convolution network trained in step 3, and obtaining predictive labels for the test data set following the process of step 3;
step 5: combining the training data set of step 2.2 with the predictive labels obtained in step 4 to obtain the change result map for the geographic position in step 1.
The effects of the present invention are further described below in conjunction with simulation experiments:
The simulation experiments of the invention were carried out in a hardware environment of an Intel(R) Xeon(R) CPU E5-2620, an NVIDIA GTX 1080 and 32 GB of memory, with a software environment of Ubuntu 16.04.6 and Matlab 2012a. The experimental objects are two groups of multi-temporal SAR images: the San Francisco dataset and the Farmland dataset. The San Francisco dataset was acquired by the ERS-2 satellite over San Francisco in August 2004 and May 2003; a region of 256×256 pixels was selected from the original 7749×7713-pixel images, as shown in the first row of fig. 5. The Farmland dataset was acquired by the Radarsat satellite over the Yellow River region in June 2008 and June 2009, with a size of 306×291 pixels, as shown in the second row of fig. 5. The simulation experimental data of the invention are shown in fig. 5, and fig. 5(c) is the change detection reference map for the real SAR images.
The comparison between the method of the invention and more advanced existing change detection methods is shown in fig. 6. The Principal Component Analysis and K-means Clustering (hereinafter abbreviated as PCAKM) method in the comparative test is proposed in the article "Unsupervised change detection in satellite images using principal component analysis and k-means clustering"; the Neighborhood-based Ratio and Extreme Learning Machine (hereinafter abbreviated as NR-ELM) method is proposed in the article "Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine"; the Gabor Feature Based on Two-Level Clustering (hereinafter abbreviated as Gabor TLC) method is proposed in the article "Gabor Feature Based Unsupervised Change Detection of Multitemporal SAR Images Based on Two-Level Clustering"; the Fuzzy Clustering with a Modified MRF Energy Function (hereinafter abbreviated as MRFFCM) method is proposed in the article "Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images". As can be seen from fig. 6, even when the input images suffer serious noise interference or differ in noise characteristics, the method of the invention can still accurately extract the fine change information in the multi-temporal SAR images, showing good noise robustness.
As shown in the first four columns of fig. 6, the other methods are susceptible to noise interference and have difficulty expressing the change information accurately; the method of the invention uses Gabor direction filters, which suppress the influence of noise well; in particular, the method still achieves excellent performance in the presence of the differing noise characteristics of the Farmland dataset.
The invention compares objective indexes with the above methods using the classification accuracy (PCC) and the Kappa coefficient (KC), calculated as follows:

PCC = (N − OE) / N

KC = (PCC − PRE) / (1 − PRE)

where N is the total number of pixels and OE = FP + FN is the total number of errors; FP is the number of false detections, i.e. the number of pixels that are unchanged in the reference map but detected as changed in the final change result map; FN is the number of missed detections, i.e. the number of pixels that are changed in the reference map but detected as unchanged in the final change result map. PRE represents the expected agreement derived from the false-detection and missed-detection proportions, PRE = [(TP + FP) × (TP + FN) + (TN + FN) × (TN + FP)] / (N × N), where TP is the number of correctly detected changed pixels and TN is the number of correctly detected unchanged pixels. Larger PCC and KC values indicate that the change detection result is more accurate and the noise suppression capability is stronger. Tables 1 and 2 show the comparison of the present invention with the above methods. It can be seen from the tables that the PCC and KC values of the method of the present invention are both highest, which means that the method of the present invention detects the change information in the input images more accurately and better suppresses noise interference.
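The two indexes above can be sketched as a small Python function. This is a minimal illustrative sketch, not the patent's code; the confusion-matrix names (TP, TN, FP, FN) follow the definitions in the text, and the expected-agreement term PRE uses the standard Kappa form:

```python
import numpy as np

def change_detection_metrics(result, reference):
    """PCC and KC for a binary change map against a binary reference map.

    `result` and `reference` are {0, 1} arrays of equal shape,
    where 1 = changed and 0 = unchanged.
    """
    result = np.asarray(result).ravel()
    reference = np.asarray(reference).ravel()
    n = result.size
    tp = np.sum((result == 1) & (reference == 1))  # correctly detected changed
    tn = np.sum((result == 0) & (reference == 0))  # correctly detected unchanged
    fp = np.sum((result == 1) & (reference == 0))  # false detections
    fn = np.sum((result == 0) & (reference == 1))  # missed detections
    oe = fp + fn                                   # overall error
    pcc = (n - oe) / n
    # expected agreement for the Kappa coefficient (standard form)
    pre = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kc = (pcc - pre) / (1 - pre)
    return pcc, kc
```

Both values are returned as fractions; multiply by 100 to obtain the percentages reported in the tables.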
TABLE 1 Experimental results of the change detection methods on the San Francisco dataset
(the numerical results of Table 1 appear only as an image in the original publication)
TABLE 2 Experimental results of the change detection methods on the Farmland dataset
Method PCC(%) KC(%)
PCAKM 94.03 62.92
MRFFCM 89.14 48.67
Gabor TLC 94.76 65.91
NR-ELM 97.70 76.05
The method of the invention 98.91 88.68
The method based on the Gabor convolution network is mainly used to improve the analysis and understanding of multi-temporal remote sensing images. Obviously, however, the method is also suitable for analyzing images captured by common imaging devices such as digital cameras, with similar beneficial effects.
The remote sensing image change detection method based on the Gabor convolution network provided by the invention has been described in detail above, but obviously the specific implementation form of the invention is not limited thereto. Various obvious modifications made by those skilled in the art without departing from the scope of the claims fall within the protection scope of the invention.

Claims (2)

1. A SAR image change detection method based on Gabor convolution network comprises the following steps:
step 1: carrying out difference analysis on two multi-temporal SAR images in the same geographic position by using a logarithmic ratio to obtain a differential image of the multi-temporal SAR images;
the calculation process of the difference analysis is as follows:

I_DI = |log I_1 − log I_2|

where I_1 and I_2 respectively represent the two multi-temporal SAR images, I_DI represents the differential image of the two multi-temporal SAR images, |·| is the absolute-value operation, and log represents the base-10 logarithm;
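The log-ratio operator of step 1 can be sketched as below; the `eps` guard against log(0) on dark pixels is an assumption, not stated in the claim:

```python
import numpy as np

def log_ratio_difference(i1, i2, eps=1e-10):
    """Differential image I_DI = |log10(I1) - log10(I2)| from step 1.

    `eps` avoids taking log10 of zero-valued pixels (an assumption
    added for numerical safety; the claim does not specify it).
    """
    i1 = np.asarray(i1, dtype=np.float64)
    i2 = np.asarray(i2, dtype=np.float64)
    return np.abs(np.log10(i1 + eps) - np.log10(i2 + eps))
```

The log-ratio compresses the multiplicative speckle noise of SAR imagery, which is why it is preferred over a simple difference.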
step 2: pre-classifying the differential image I_DI to construct a training data set and a test data set;
step 2.1: pre-classifying the differential images by using a multi-layer fuzzy C-means clustering algorithm to obtain a pseudo tag matrix;
step 2.2: extracting the spatial positions marked 0 and 1 in the pseudo tag matrix, and taking the L × L neighborhood pixels around the pixel points corresponding to these spatial positions in the differential image obtained in step 1 as the training data set, wherein L is an odd number not less than 3; the number of samples in the obtained training data set is denoted T_1;
step 2.3: extracting the spatial positions marked 0.5 in the pseudo tag matrix, and taking the L × L neighborhood pixels around the pixel points corresponding to these spatial positions in the differential image obtained in step 1 as the test data set, wherein L is an odd number not less than 3; the number of samples in the obtained test data set is denoted T_2;
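The patch extraction of steps 2.2 and 2.3 might be sketched as follows. The edge padding used for border pixels is an assumption, since the claim does not specify border handling:

```python
import numpy as np

def extract_patches(diff_img, pseudo_labels, target, L=7):
    """Collect L x L neighborhoods around pixels whose pseudo-label is in
    `target` (e.g. {0.0, 1.0} for the training set, {0.5} for the test set).

    L must be an odd number not less than 3, as required by the claim.
    Border pixels are handled by edge padding (an assumption).
    """
    assert L % 2 == 1 and L >= 3
    r = L // 2
    padded = np.pad(diff_img, r, mode='edge')
    patches, positions = [], []
    for (y, x), lab in np.ndenumerate(pseudo_labels):
        if lab in target:
            patches.append(padded[y:y + L, x:x + L])
            positions.append((y, x))
    return np.stack(patches), positions
```

The returned `positions` list makes it possible to write predicted labels back to their pixel locations when assembling the final change map in step 5.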
The method is characterized by further comprising the following steps:
step 3: using the training data set of step 2.2 for Gabor convolution network training, wherein the constructed Gabor convolution network has the structure: input layer → data enhancement layer → small-scale Gabor convolution layer → medium-scale Gabor convolution layer → large-scale Gabor convolution layer → output layer;
step 3.1: generating a Gabor direction filter;
the Gabor direction filter is obtained by modulating a learning filter with a Gabor filter, wherein the Gabor filter is denoted G(u, v), u representing the filtering direction and v the filtering scale;

at a given scale v, the Gabor filter is composed of u Gabor direction operators of different directions, which can be expressed as:

G(u, v) = {g_1, g_2, ..., g_u}

where g_1, ..., g_u represent the Gabor direction operators, each of size W × W, W being an odd number not less than 3;

the learning filter is a three-dimensional convolution kernel of the convolutional neural network, denoted C, of size M × M × N, where M × M is the height and width of the convolution kernel, M is an odd number not less than 3, and N, a natural number, is the number of channels; the three-dimensional convolution kernel can therefore be expressed as:

C = {c_1, c_2, ..., c_N}

where c_1, ..., c_N represent the convolution kernel on each channel, each of size M × M;
the Gabor direction filter is obtained by modulating the learning filter channel by channel with each Gabor direction operator in the Gabor filter, and the calculation process is as follows:

GoF(u, v) = {g_1 ∘ C, g_2 ∘ C, ..., g_u ∘ C}

where GoF(u, v) denotes the Gabor direction filter and ∘ denotes the channel-by-channel modulation, computed on each channel as:

g_s ∘ C = {c_1 ⊙ g_s, c_2 ⊙ g_s, ..., c_N ⊙ g_s}

where s = 1, 2, ..., u and ⊙ denotes pixel-by-pixel multiplication;

through the above operation, u Gabor direction filters of size M × M × N are obtained;
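The modulation of step 3.1 can be illustrated with a short sketch. The Gabor kernel parameters (`sigma`, wavelength `lam`) are illustrative assumptions, since the claim fixes only the number of directions u and the kernel size W:

```python
import numpy as np

def gabor_direction_operators(u=4, W=3, sigma=1.0, lam=2.0):
    """Real part of Gabor kernels in u orientations, each W x W.

    sigma (Gaussian envelope) and lam (wavelength) are illustrative
    assumptions; the claim specifies only u and W.
    """
    ops = []
    half = W // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    for s in range(u):
        theta = s * np.pi / u                     # orientation of operator g_s
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        yr = -xx * np.sin(theta) + yy * np.cos(theta)
        g = (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
             * np.cos(2 * np.pi * xr / lam))
        ops.append(g)
    return ops  # list of u arrays, each W x W

def modulate(learning_filter, gabor_ops):
    """GoF(u, v): modulate an M x M x N learning filter channel by channel
    with each Gabor direction operator (pixel-by-pixel multiplication)."""
    # learning_filter: (M, M, N); each g_s broadcasts over the channel axis
    return [learning_filter * g[:, :, None] for g in gabor_ops]
```

With u = 4 and a 3 × 3 × 4 learning filter (as in claim 2), `modulate` returns four 3 × 3 × 4 Gabor direction filters.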
step 3.2: enhancing the data;

the T_1 training samples in the training data set obtained in step 2.2 are used as the input data of the input layer and are input into it in sequence; each training sample is turned into N pictures of size L × L by the copy operation of the data enhancement layer, denoted F_i, i = 1, 2, ..., T_1, as the input of the next layer;
step 3.3: obtaining enhanced feature maps through the three scales of Gabor convolution layers;

step 3.3.1: extracting features through the small-scale Gabor convolution layer;

the small-scale Gabor convolution layer contains H_1 Gabor direction filters GoF(u, v_1), with convolution kernels of size M × M × N; convolution is applied to each F_i in turn, outputting the feature maps O_i^(1):

O_i^(1) = G_conv(F_i, G_1)

where H_1 and v_1 are arbitrary natural numbers, G_conv(·) denotes the convolution operation, G_1 denotes the Gabor direction filter GoF(u, v_1) at the aforementioned scale v_1, and F_i is the picture obtained in step 3.2; the feature maps O_i^(1) obtained by the Gabor convolution operation are then normalized and max-pooled, i = 1, 2, ..., T_1; finally they are activated using the ReLU function, and the resulting feature maps are denoted F_i^(1);
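One Gabor convolution layer (convolution → normalization → max pooling → ReLU, as in step 3.3.1) might be sketched in miniature as follows. This is a didactic illustration on a single square input patch, not the trained network, and the zero-mean/unit-variance normalization is an assumption:

```python
import numpy as np

def gabor_conv_block(x, gof, pool=2):
    """One Gabor convolution layer in miniature.

    x   : square input of shape (L, L, N)
    gof : list of Gabor direction filters, each of shape (M, M, N)
    Applies 'valid' convolution with each filter, then normalization,
    2x2 max pooling and ReLU, returning one 2-D map per filter.
    """
    outs = []
    for f in gof:                       # one output map per direction filter
        M = f.shape[0]
        H = x.shape[0] - M + 1          # 'valid' output size
        y = np.empty((H, H))
        for i in range(H):
            for j in range(H):
                y[i, j] = np.sum(x[i:i + M, j:j + M, :] * f)
        y = (y - y.mean()) / (y.std() + 1e-8)          # normalization
        h2 = (y.shape[0] // pool) * pool               # truncate odd remainder
        y = (y[:h2, :h2]
             .reshape(h2 // pool, pool, h2 // pool, pool)
             .max(axis=(1, 3)))                        # max pooling
        outs.append(np.maximum(y, 0.0))                # ReLU activation
    return outs
```

The real network stacks three such layers with H_1, H_2 and H_3 filters at increasing scales v_1 < v_2 < v_3; the large-scale layer omits the pooling step.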
Step 3.3.2: extracting features through a mesoscale Gabor convolution layer;
inclusion of H in a mesoscale Gabor convolutional layer 2 Gabor direction filters GoF (u, v 2 ) The size of the convolution kernel is m×m×n; sequentially to
Figure FDA0004129170130000027
Performing convolution operation to obtain characteristic diagram +.>
Figure FDA0004129170130000028
The method comprises the following steps:
Figure FDA0004129170130000029
wherein H is 2 Is greater than H 1 Natural number v of (v) 2 Is greater than v 1 Natural number of G conv (g) Representing convolution operations, G 2 Representing the aforementioned scale v 2 Gabor direction filter GoF (u, v) 2 ),
Figure FDA0004129170130000031
Is the characteristic diagram obtained in the step 3.3.1; then, the feature map obtained after Gabor convolution operation is +.>
Figure FDA0004129170130000032
Normalization and maximum pooling were performed, i=1, 2,3,.. 1 The method comprises the steps of carrying out a first treatment on the surface of the Finally, activation using the ReLu function, the resulting profile is noted +.>
Figure FDA0004129170130000033
Step 3.3.3: extracting features through a large-scale Gabor convolution layer;
inclusion of H in a large scale Gabor convolutional layer 3 Gabor direction filters GoF (u, v 3 ) The size of the convolution kernel is m×m×n; sequentially to
Figure FDA0004129170130000034
Performing convolution operation to obtain characteristic diagram +.>
Figure FDA0004129170130000035
The method comprises the following steps:
Figure FDA0004129170130000036
wherein H is 3 Is greater than H 2 Natural number v of (v) 3 Is greater than v 2 Natural number of G conv (g) Representing convolution operations, G 3 Representing the aforementioned scale v 3 Gabor direction filter GoF (u, v) 3 ),
Figure FDA0004129170130000037
Is the characteristic diagram obtained in the step 3.3.2; then, the enhanced feature map obtained after Gabor convolution operation is +.>
Figure FDA0004129170130000038
Normalization processing was performed, i=1, 2,3,.. 1 The method comprises the steps of carrying out a first treatment on the surface of the Finally, activation using ReLu function, resulting in the final profile +.>
Figure FDA0004129170130000039
Step 3.4: feature map to be extracted
Figure FDA00041291701300000310
Input to the output layer, in order->
Figure FDA00041291701300000311
Ith representing output of output layerPredictive label of training samples, i=1, 2, 3.. 1
Step 3.5: calculating cross entropy loss and carrying out back propagation;
loss represents cross entropy Loss, and Loss is calculated by the following formula:
Figure FDA00041291701300000312
wherein y is i True tag for the ith sample in training dataset in step 2.2, y i =1 means that the label of the input sample is 1, i.e. the position pixel is changed, y i =0 means that the label of the input sample is 0, i.e. the position pixel is unchanged;
Figure FDA00041291701300000313
the predictive label of the ith training sample output by the output layer in the step 3.4 is represented, and log represents the logarithm operation based on 10;
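The loss of step 3.5 corresponds to binary cross-entropy. A sketch using the base-10 logarithm as defined in the claim; the `eps` clipping is an assumption added to avoid log10(0):

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-10):
    """Binary cross-entropy averaged over the training samples.

    Uses the base-10 logarithm, following the claim's definition of log.
    `eps` (an assumption) clips predictions away from 0 and 1.
    """
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.clip(np.asarray(y_pred, dtype=np.float64), eps, 1 - eps)
    return -np.mean(y_true * np.log10(y_pred)
                    + (1 - y_true) * np.log10(1 - y_pred))
```

The gradient of this loss with respect to the network parameters drives the back propagation of step 3.5.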
step 4: inputting the test data set of step 2.3 into the Gabor convolution network trained in step 3, and obtaining T_2 predictive labels for the test data set according to the procedure described in step 3;

step 5: combining the training data set of step 2.2 with the predictive labels obtained in step 4 to obtain the change result map of the geographic position of step 1.
2. The SAR image change detection method based on a Gabor convolution network according to claim 1, wherein L × L = 7 × 7 in steps 2.2, 2.3 and 3.2; and in step 3, N = 4, u = 4, M × M × N = 3 × 3 × 4, v_1 = 1, v_2 = 2, v_3 = 3, H_1 = 20, H_2 = 40, H_3 = 80.
CN202010056677.2A 2020-01-18 2020-01-18 SAR image change detection method based on Gabor convolution network Active CN111275680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010056677.2A CN111275680B (en) 2020-01-18 2020-01-18 SAR image change detection method based on Gabor convolution network


Publications (2)

Publication Number Publication Date
CN111275680A CN111275680A (en) 2020-06-12
CN111275680B true CN111275680B (en) 2023-05-26

Family

ID=70998741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010056677.2A Active CN111275680B (en) 2020-01-18 2020-01-18 SAR image change detection method based on Gabor convolution network

Country Status (1)

Country Link
CN (1) CN111275680B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734695B (en) * 2020-12-23 2022-03-22 中国海洋大学 SAR image change detection method based on regional enhancement convolutional neural network
CN112906497A (en) * 2021-01-29 2021-06-04 中国海洋大学 Embedded safety helmet detection method and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358203A (en) * 2017-07-13 2017-11-17 西安电子科技大学 A kind of High Resolution SAR image classification method based on depth convolution ladder network
CN107862741A (en) * 2017-12-10 2018-03-30 中国海洋大学 A kind of single-frame images three-dimensional reconstruction apparatus and method based on deep learning
CN108765465A (en) * 2018-05-31 2018-11-06 西安电子科技大学 A kind of unsupervised SAR image change detection
CN110060212A (en) * 2019-03-19 2019-07-26 中国海洋大学 A kind of multispectral photometric stereo surface normal restoration methods based on deep learning
CN110659591A (en) * 2019-09-07 2020-01-07 中国海洋大学 SAR image change detection method based on twin network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic change detection in synthetic aperture radar images based on PCANet; Feng Gao; 《IEEE》; full text *
Application of a radar oil spill monitoring system in marine management; Qu Liang; Dong Junyu; Keji Feng (No. 22); full text *

Also Published As

Publication number Publication date
CN111275680A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
Pan Cloud removal for remote sensing imagery via spatial attention generative adversarial network
CN103456018B (en) Remote sensing image change detection method based on fusion and PCA kernel fuzzy clustering
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN107229918A (en) A kind of SAR image object detection method based on full convolutional neural networks
CN103258324B (en) Based on the method for detecting change of remote sensing image that controlled kernel regression and super-pixel are split
CN111339827A (en) SAR image change detection method based on multi-region convolutional neural network
CN104299232B (en) SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM
CN109635726B (en) Landslide identification method based on combination of symmetric deep network and multi-scale pooling
CN116012364B (en) SAR image change detection method and device
CN111275680B (en) SAR image change detection method based on Gabor convolution network
CN112381144B (en) Heterogeneous deep network method for non-European and Euclidean domain space spectrum feature learning
Xu et al. Feature-based constraint deep CNN method for mapping rainfall-induced landslides in remote regions with mountainous terrain: An application to Brazil
CN103473559A (en) SAR image change detection method based on NSCT domain synthetic kernels
CN105184804A (en) Sea surface small target detection method based on airborne infrared camera aerially-photographed image
CN111414954A (en) Rock image retrieval method and system
CN112669249A (en) Infrared and visible light image fusion method combining improved NSCT (non-subsampled Contourlet transform) transformation and deep learning
CN112734695B (en) SAR image change detection method based on regional enhancement convolutional neural network
CN114612315A (en) High-resolution image missing region reconstruction method based on multi-task learning
CN106971402B (en) SAR image change detection method based on optical assistance
Kekre et al. SAR image segmentation using vector quantization technique on entropy images
CN111696167A (en) Single image super-resolution reconstruction method guided by self-example learning
CN111126508A (en) Hopc-based improved heterogeneous image matching method
CN112686871B (en) SAR image change detection method based on improved logarithmic comparison operator and Gabor_ELM
CN114120129B (en) Three-dimensional identification method for landslide slip surface based on unmanned aerial vehicle image and deep learning
CN107341798A (en) High Resolution SAR image change detection method based on global local SPP Net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant