CN111275680A - SAR image change detection method based on Gabor convolution network - Google Patents
SAR image change detection method based on Gabor convolution network

- Publication number: CN111275680A (application CN202010056677.2A)
- Authority: CN (China)
- Prior art keywords: gabor, data set, convolution, filter, layer
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06F18/23 — Pattern recognition; clustering techniques
- G06F18/24 — Pattern recognition; classification techniques
- G06N3/045 — Neural networks; combinations of networks
- G06N3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/70
- G06T2207/10032 — Satellite or aerial image; remote sensing
- G06T2207/10044 — Radar image
- G06T2207/20224 — Image subtraction
Abstract
The SAR image change detection method based on the Gabor convolutional network comprises the following steps: first, difference analysis using the logarithmic ratio yields a difference image of the multi-temporal SAR images; second, the difference image is pre-classified with a multi-layer fuzzy C-means clustering algorithm, producing three classes of sample pixels, namely a changed class, an unchanged class and a fuzzy class; next, the pre-classification result is used to construct a training data set and a test data set; finally, the training data set is used to train the Gabor convolutional network and the test data set to test it, yielding the final change result map. The Gabor directional filters, obtained by modulating learning filters with Gabor filters and applied in a convolutional neural network, capture spatial information better and reduce model complexity; meanwhile, the pre-classification strategy yields a more reliable training data set, further improving classification accuracy.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a Synthetic Aperture Radar (SAR) image change detection method based on a Gabor convolutional network.
Background
The rapid development of science and technology worldwide, especially advances in satellite remote sensing, has made remote sensing image data far easier to acquire. China's remote sensing program now covers a complete range of spectral bands, and its satellite and airborne systems are well established, with application satellites for meteorology, resources and other purposes in operation. At the same time, rapid population growth and ever-increasing human activity have accelerated changes in the surface landscape, such as changes in land use, the demolition and construction of buildings in certain areas, urban expansion, and debris flows and collapses caused by flash floods. Change detection for such landscapes therefore has important application value, and the problem has attracted growing attention from scholars and experts.
Synthetic aperture radar has all-weather, day-and-night imaging capability, is unaffected by weather conditions, and can accurately acquire information about changes in the surface landscape, making it a current research hotspot. SAR image change detection obtains the required ground-feature change information through difference analysis of SAR images of the same place at different times. SAR image change detection technology is now widely applied in various fields: in the military field it can be used to assess the battlefield situation and evaluate damage effects; in the civil field it can be used for resource and environmental monitoring, crop growth monitoring, and natural disaster monitoring and assessment. However, since SAR images contain a large amount of speckle noise, current methods often have difficulty accurately detecting the changed regions in an image.
In recent years, researchers in China and abroad have studied SAR image change detection extensively. The methods can be broadly divided into supervised and unsupervised according to whether prior knowledge is needed. Supervised methods require prior knowledge: learning models such as the restricted Boltzmann machine, the extreme learning machine and the convolutional neural network need a large number of labeled samples for model training, and struggle to achieve excellent performance when labels are of poor quality or insufficient in number. Unsupervised methods, such as expectation maximization and thresholding, need no prior knowledge, but have poor noise robustness and adaptability. In summary, for change detection of multi-temporal SAR images, current methods leave considerable room for improvement.
Disclosure of Invention
Embodiments of the invention provide a remote sensing image change detection method based on a Gabor convolutional network, so as to solve technical problems such as the large influence of noise on classification accuracy and low classification accuracy.
The image change detection method based on the Gabor convolution network comprises the following steps:
obtaining a difference image from two multi-temporal SAR images of the same geographic location using the logarithmic ratio;
pre-classifying the difference image to construct a training data set and a test data set;
using the training data set to train the Gabor convolutional network;
and using the test data set to test the Gabor convolutional network, obtaining the change result map.
The method comprises the following specific steps:
(1) performing difference analysis on two multi-temporal SAR images in the same geographic position by using a logarithmic ratio to obtain a differential image of the multi-temporal SAR images;
the calculation process of the difference analysis comprises the following steps:
I_DI = |log I_1 - log I_2|

wherein I_1 and I_2 respectively represent the two multi-temporal SAR images, I_DI represents the difference image of the two multi-temporal SAR images, |·| is the absolute value operation, and log denotes the base-10 logarithm;
(2) pre-classifying the difference image I_DI to construct a training data set and a test data set;
(2.1) pre-classifying the difference images by using a multilayer fuzzy C mean value clustering algorithm to obtain a pseudo label matrix;
(2.2) extracting the spatial positions marked 0 and 1 in the pseudo-label matrix, and taking the L × L neighborhood of pixels around each corresponding pixel in the difference image obtained in step 1 as a training sample, where L is an odd number not less than 3; the number of samples in the resulting training data set is denoted T_1;
(2.3) extracting the spatial positions marked 0.5 in the pseudo-label matrix, and taking the L × L neighborhood of pixels around each corresponding pixel in the difference image obtained in step 1 as a test sample, where L is an odd number not less than 3; the number of samples in the resulting test data set is denoted T_2;
The method is characterized by further comprising the following steps:
(3) the training data set of step 2.2 is used for Gabor convolutional network training,
the structure of the constructed Gabor convolution network is as follows: input layer → data enhancement layer → Gabor convolutional layer of small scale → Gabor convolutional layer of medium scale → Gabor convolutional layer of large scale → output layer:
(3.1) generating the Gabor directional filters;
a Gabor directional filter is obtained by modulating a learning filter with a Gabor filter; the Gabor filter is denoted G(u, v), where u denotes the direction of filtering and v denotes the scale (i.e. wavelength) of filtering;
at a given scale v, the Gabor filter consists of u Gabor directional operators in different directions, which can be expressed as:
G(u, v) = {g_1, g_2, ..., g_u}
wherein g_1, ..., g_u represent the Gabor directional operators, each of size W × W, where W is an odd number not less than 3;
the learning filter is a three-dimensional convolution kernel in the convolutional neural network, denoted C, of size M × M × N, where M × M is the height and width of the convolution kernel, M is an odd number not less than 3, and N, a natural number, is the number of channels; the three-dimensional convolution kernel can therefore be expressed as:
C = {c_1, c_2, ..., c_N}
wherein c_1, ..., c_N represent the convolution kernels on the individual channels, each of size M × M;
a Gabor directional filter is obtained by modulating the learning filter channel by channel with each Gabor directional operator of the Gabor filter:
GoF(u, v) = {g_1 ∘ C, g_2 ∘ C, ..., g_u ∘ C}
wherein GoF(u, v) denotes the Gabor directional filters and ∘ denotes channel-by-channel modulation; on each channel the calculation is g_i ∘ C = {g_i ⊙ c_1, g_i ⊙ c_2, ..., g_i ⊙ c_N}, where ⊙ denotes element-wise multiplication;
the above operation yields u Gabor directional filters, each of size M × M × N;
(3.2) data enhancement;
the T_1 training samples in the training data set obtained in step 2.2 are used as input data and fed to the input layer in turn; the data enhancement layer turns each training sample into N pictures of size L × L, denoted F_i, i = 1, 2, 3, ..., T_1, which serve as input to the next layer;
(3.3) obtaining enhanced feature maps through Gabor convolutional layers at three scales:
(3.3.1) extracting features through the small-scale Gabor convolutional layer;
the small-scale Gabor convolutional layer comprises H_1 Gabor directional filters GoF(u, v_1), with convolution kernels of size M × M × N; the convolution operation yields the output feature map:
O_i^1 = Gconv(F_i, G_1)
wherein H_1 and v_1 are arbitrary natural numbers, Gconv(·) denotes the Gabor convolution operation, G_1 denotes the Gabor directional filters GoF(u, v_1) at scale v_1, and F_i is the picture obtained in step 3.2; the feature map O_i^1 obtained from the Gabor convolution is then normalized and max-pooled, and finally activated using the ReLU function; the resulting feature map is denoted F_i^1;
(3.3.2) extracting features through the medium-scale Gabor convolutional layer;
the medium-scale Gabor convolutional layer comprises H_2 Gabor directional filters GoF(u, v_2), with convolution kernels of size M × M × N; the convolution operation yields the output feature map:
O_i^2 = Gconv(F_i^1, G_2)
wherein H_2 is a natural number greater than H_1, v_2 is a natural number greater than v_1, Gconv(·) denotes the Gabor convolution operation, G_2 denotes the Gabor directional filters GoF(u, v_2) at scale v_2, and F_i^1 is the feature map obtained in step 3.3.1; the feature map O_i^2 obtained from the Gabor convolution is then normalized and max-pooled, and finally activated using the ReLU function; the resulting feature map is denoted F_i^2;
(3.3.3) extracting features through the large-scale Gabor convolutional layer;
the large-scale Gabor convolutional layer comprises H_3 Gabor directional filters GoF(u, v_3), with convolution kernels of size M × M × N; the convolution operation yields the output feature map:
O_i^3 = Gconv(F_i^2, G_3)
wherein H_3 is a natural number greater than H_2, v_3 is a natural number greater than v_2, Gconv(·) denotes the Gabor convolution operation, G_3 denotes the Gabor directional filters GoF(u, v_3) at scale v_3, and F_i^2 is the feature map obtained in step 3.3.2; the enhanced feature map O_i^3 obtained from the Gabor convolution is then normalized, and finally activated using the ReLU function to obtain the final feature map F_i^3;
(3.4) the extracted feature map F_i^3 is input to the output layer; ŷ_i denotes the prediction label of the i-th training sample output by the output layer, i = 1, 2, 3, ..., T_1;
(3.5) calculating the cross-entropy loss and performing back-propagation;
Loss denotes the cross-entropy loss, calculated as:
Loss = -(1/T_1) Σ_{i=1}^{T_1} [y_i log ŷ_i + (1 - y_i) log(1 - ŷ_i)]
wherein y_i is the true label of the i-th sample in the training data set of step 2.2; y_i = 1 means the label of the input sample is 1, i.e. the pixel at that position has changed, and y_i = 0 means the label of the input sample is 0, i.e. the pixel at that position is unchanged; ŷ_i is the prediction label output by the output layer in step 3.4, and log denotes the base-10 logarithm;
(4) inputting the test data set of step 2.3 into the Gabor convolutional network trained in step 3 and, following the process of step 3, obtaining prediction labels for the test data set;
(5) combining the training data set of step 2.2 with the prediction labels obtained in step 4 to obtain the change result map for the geographic location of step 1.
The remote sensing image change detection method based on the Gabor convolutional network provided by the embodiments of the invention has the following advantages:
1. The difference image is generated using the logarithmic ratio, which effectively suppresses speckle noise and improves robustness to noise.
2. Pre-classifying the difference image with multi-layer fuzzy C-means clustering yields a more reliable training data set. This meets the needs of change detection tasks in label-free scenarios and broadens the method's range of application.
3. Applying Gabor directional filters, obtained by modulating learning filters with Gabor filters, in a convolutional neural network captures spatial information better and is particularly robust to changes in direction and scale. The Gabor directional filters also reduce the complexity of network training, require fewer learned parameters, and improve the accuracy of change detection classification.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram illustrating an image processing method according to the present invention;
FIG. 3 is a schematic diagram of the generation of a Gabor directional filter of the present invention;
FIG. 4 is a schematic diagram of a Gabor convolutional network of the present invention;
FIG. 5 is a schematic diagram of input data according to the present invention;
FIG. 6 is a graph comparing the effects of the method of the embodiment with those of the prior art.
To clearly illustrate the structure of embodiments of the present invention, certain dimensions, structures and devices are shown in the drawings, which are for illustrative purposes only and are not intended to limit the invention to the particular dimensions, structures, devices and environments, which may be adjusted or modified by one of ordinary skill in the art according to particular needs and are still included in the scope of the appended claims.
Detailed Description
In the following description, various aspects of the invention will be described, but it will be apparent to those skilled in the art that the invention may be practiced with only some or all of the structures or processes of the invention. Specific numbers, configurations and sequences are set forth in order to provide clarity of explanation, but it will be apparent that the invention may be practiced without these specific details. In other instances, well-known features have not been set forth in detail in order not to obscure the invention.
Referring to fig. 1, the method comprises the following specific steps:
step 1: performing difference analysis on two multi-temporal SAR images in the same geographic position by using a logarithmic ratio to obtain a differential image of the multi-temporal SAR images;
the calculation process of the difference analysis comprises the following steps:
I_DI = |log I_1 - log I_2|

wherein I_1 and I_2 respectively represent the two multi-temporal SAR images, I_DI represents the difference image of the two multi-temporal SAR images, |·| is the absolute value operation, and log denotes the base-10 logarithm;
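The log-ratio computation of step 1 can be sketched in a few lines of NumPy; the `eps` guard against log(0) and the final normalization to [0, 1] are illustrative assumptions not stated in the text:

```python
import numpy as np

def log_ratio_difference(i1, i2, eps=1e-6):
    """Log-ratio difference image I_DI = |log10(I1) - log10(I2)|.

    eps guards against log(0); the [0, 1] normalization prepares the
    image for the later clustering step. Both are assumptions.
    """
    i1 = np.asarray(i1, dtype=np.float64)
    i2 = np.asarray(i2, dtype=np.float64)
    di = np.abs(np.log10(i1 + eps) - np.log10(i2 + eps))
    return di / di.max()

# toy example with two 2x2 "images"
t1 = np.array([[10.0, 20.0], [30.0, 40.0]])
t2 = np.array([[10.0, 40.0], [30.0, 10.0]])
di = log_ratio_difference(t1, t2)
```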
Step 2: pre-classifying the difference image I_DI to construct a training data set and a test data set;
Step 2.1: pre-classifying the difference image using the multi-layer fuzzy C-means clustering algorithm to obtain a pseudo-label matrix;
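The pre-classification idea can be sketched with plain two-class fuzzy C-means standing in for the patent's multi-layer variant (the layered refinement is omitted); the quantile initialization and the membership thresholds `lo`/`hi` are illustrative assumptions:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50):
    """Plain fuzzy C-means on a flattened difference image.

    A minimal stand-in for the multi-layer fuzzy C-means of step 2.1;
    centers are initialised from data quantiles (an illustrative choice).
    """
    centers = np.quantile(x, np.linspace(0.25, 0.75, c))
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12   # pixel-center distances
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=0)                           # fuzzy memberships
        um = u ** m
        centers = um @ x / um.sum(axis=1)                   # weighted center update
    return u, centers

def pseudo_labels(di, lo=0.3, hi=0.7):
    """Three-way pseudo-label matrix: 1 = changed, 0 = unchanged, 0.5 = fuzzy."""
    u, centers = fcm(di.ravel())
    changed = int(np.argmax(centers))       # cluster with the larger center = changed
    mem = u[changed].reshape(di.shape)
    out = np.full(di.shape, 0.5)
    out[mem >= hi] = 1.0
    out[mem <= lo] = 0.0
    return out

# bimodal toy difference image: left half unchanged, right half changed
di = np.r_[np.full(50, 0.05), np.full(50, 0.95)].reshape(10, 10)
labs = pseudo_labels(di)
```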
Step 2.2: extracting the spatial positions marked 0 and 1 in the pseudo-label matrix, and taking the 7 × 7 neighborhood of pixels around each corresponding pixel in the difference image obtained in step 1 as a training sample; the number of samples in the resulting training data set is denoted T_1;
Step 2.3: extracting the spatial positions marked 0.5 in the pseudo-label matrix, and taking the 7 × 7 neighborhood of pixels around each corresponding pixel in the difference image obtained in step 1 as a test sample; the number of samples in the resulting test data set is denoted T_2;
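Steps 2.2 and 2.3 amount to patch extraction driven by the pseudo-label matrix; reflect-padding at the image border is an assumption the text does not spell out:

```python
import numpy as np

def build_datasets(di, pseudo, L=7):
    """Build training/test patch sets from the pseudo-label matrix.

    Pixels labelled 0/1 yield L x L training patches; pixels labelled
    0.5 yield test patches (reflect-padding handles borders).
    """
    assert L % 2 == 1 and L >= 3
    r = L // 2
    padded = np.pad(di, r, mode="reflect")
    train_x, train_y, test_x, test_pos = [], [], [], []
    for (i, j), lab in np.ndenumerate(pseudo):
        patch = padded[i:i + L, j:j + L]    # L x L neighborhood centered on (i, j)
        if lab == 0.5:                      # fuzzy class -> classified by the network
            test_x.append(patch)
            test_pos.append((i, j))
        else:                               # changed (1) / unchanged (0) -> training
            train_x.append(patch)
            train_y.append(int(lab))
    return np.stack(train_x), np.array(train_y), np.stack(test_x), test_pos

# toy example: 4x4 difference image with mixed pseudo-labels, L = 3
di = np.arange(16, dtype=float).reshape(4, 4) / 15
pseudo = np.array([[0, 1, 0.5, 0], [1, 0.5, 0, 1], [0, 0, 1, 0.5], [1, 0.5, 0, 0]])
tx, ty, sx, pos = build_datasets(di, pseudo, L=3)
```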
The method is characterized by further comprising the following steps:
and step 3: the training data set of step 2.2 is used for Gabor convolutional network training,
the structure of the constructed Gabor convolution network is as follows: input layer → data enhancement layer → Gabor convolutional layer of small scale → Gabor convolutional layer of medium scale → Gabor convolutional layer of large scale → output layer:
step 3.1: generating a Gabor directional filter;
a Gabor directional filter is obtained by modulating a learning filter with a Gabor filter; the Gabor filter is denoted G(u, v), where u denotes the direction of filtering and v denotes the scale (i.e. wavelength) of filtering;
at a given scale v, the Gabor filter consists of u Gabor directional operators in different directions, which can be expressed as:
G(u, v) = {g_1, g_2, ..., g_u}
wherein g_1, ..., g_u represent the Gabor directional operators, each of size W × W, where W is an odd number not less than 3;
the learning filter is a three-dimensional convolution kernel in the convolutional neural network, denoted C, of size M × M × N, where M × M is the height and width of the convolution kernel, M is an odd number not less than 3, and N, a natural number, is the number of channels; the three-dimensional convolution kernel can therefore be expressed as:
C = {c_1, c_2, ..., c_N}
wherein c_1, ..., c_N represent the convolution kernels on the individual channels, each of size M × M;
a Gabor directional filter is obtained by modulating the learning filter channel by channel with each Gabor directional operator of the Gabor filter:
GoF(u, v) = {g_1 ∘ C, g_2 ∘ C, ..., g_u ∘ C}
wherein GoF(u, v) denotes the Gabor directional filters and ∘ denotes channel-by-channel modulation; on each channel the calculation is g_i ∘ C = {g_i ⊙ c_1, g_i ⊙ c_2, ..., g_i ⊙ c_N}, where ⊙ denotes element-wise multiplication;
the above operation yields u Gabor directional filters, each of size M × M × N;
In this patent, u = 4, W × W = 3 × 3, and M × M × N = 3 × 3 × 4.
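With these concrete values (u = 4 directions, 3 × 3 operators, a 3 × 3 × 4 learning filter), the channel-by-channel modulation of step 3.1 can be sketched as follows; the Gabor kernel parameters `sigma` and `gamma` are illustrative assumptions the text does not fix:

```python
import numpy as np

def gabor_operator(theta, lam, size=3, sigma=1.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta and wavelength lam."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yp = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2))  # Gaussian envelope
    return env * np.cos(2 * np.pi * xp / lam)                   # carrier wave

def modulate(learned, thetas, lam):
    """GoF = {g_u o C}: each orientation operator multiplies every
    channel of the learned kernel element-wise."""
    return np.stack([gabor_operator(t, lam)[None, :, :] * learned for t in thetas])

# u = 4 orientations; learned kernel C of size 3 x 3 x 4 (M x M x N),
# stored here channels-first as (N, M, M)
thetas = np.arange(4) * np.pi / 4
C = np.random.default_rng(0).standard_normal((4, 3, 3))
gof = modulate(C, thetas, lam=2.0)   # 4 directional filters, each 4 x 3 x 3
```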
Step 3.2: data enhancement;
t in the training data set obtained in step 2.21Using training samples as input data of input layer, inputting them into input layer in turn, making each training sample become 4 7 × 7 pictures, and recording as Fi,i=1,2,3,...,T1As input for the next layer;
Step 3.3: obtaining enhanced feature maps through Gabor convolutional layers at three scales:
Step 3.3.1: extracting features through the small-scale Gabor convolutional layer;
the small-scale Gabor convolutional layer comprises 20 Gabor directional filters GoF(4, 1), with convolution kernels of size 3 × 3 × 4; the convolution operation yields the output feature map:
O_i^1 = Gconv(F_i, G_1)
wherein Gconv(·) denotes the Gabor convolution operation, G_1 denotes the Gabor directional filters GoF(4, 1) at scale 1, and F_i is the picture obtained in step 3.2; the feature map O_i^1 is then normalized and max-pooled, and finally activated using the ReLU function; the resulting feature map is denoted F_i^1;
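The convolution, normalization, max-pooling and ReLU pipeline of this layer can be sketched naively in NumPy; the 2 × 2 pooling window and the per-map normalization are illustrative assumptions:

```python
import numpy as np

def conv2d_valid(img, kern):
    """Naive 'valid' 2-D correlation."""
    H, W = img.shape
    k = kern.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * kern)
    return out

def gabor_conv_layer(x, filters):
    """One Gabor convolutional layer: convolution -> normalization ->
    2x2 max pooling -> ReLU. x: (N, L, L) input channels; filters:
    (H1, N, k, k) Gabor directional filters."""
    maps = []
    for f in filters:  # each filter spans all N input channels
        m = sum(conv2d_valid(x[c], f[c]) for c in range(x.shape[0]))
        m = (m - m.mean()) / (m.std() + 1e-8)                 # normalization
        h, w = m.shape[0] // 2 * 2, m.shape[1] // 2 * 2
        m = m[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))  # 2x2 max pool
        maps.append(np.maximum(m, 0.0))                       # ReLU
    return np.stack(maps)

# toy run: one 4-channel 7x7 sample through 20 filters of size 3x3x4
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 7, 7))
filters = rng.standard_normal((20, 4, 3, 3))
out = gabor_conv_layer(x, filters)
```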
Step 3.3.2: extracting features through a mesoscale Gabor convolutional layer;
the mesoscale Gabor convolutional layer comprises 40 Gabor directional filters GoF (4,2), and the size of a convolutional kernel is 3 multiplied by 4; by convolution operation, the output characteristic diagramComprises the following steps:
wherein G isconv(g) Representing a convolution operation, G2With the expression scale v2Of a Gabor directional filter GoF (u, v)2),Is the characteristic diagram obtained in step 3.3.1; then, the characteristic diagram obtained after the Gabor convolution operation is carried outCarrying out normalization and maximum pooling; finally, activation is carried out by using the ReLu function, and the obtained characteristic diagram is marked as
Step 3.3.3: extracting features through a large-scale Gabor convolution layer;
the large-scale Gabor convolutional layer comprises 80 Gabor directional filters GoF (4,3), and the size of a convolutional kernel is 3 multiplied by 4; by convolution operation, the output characteristic diagramComprises the following steps:
wherein G isconv(g) Representing a convolution operation, G3With the expression scale v3Of a Gabor directional filter GoF (u, v)3),Is the characteristic map obtained in step 3.3.2; then, the enhanced feature map obtained after the Gabor convolution operation is carried outCarrying out normalization processing; finally, activating by using a ReLu function to obtain a final characteristic diagram
Step 3.4: extracting the feature mapInput to the output layer toA prediction tag, i ═ 1,2,3,. and T, representing the ith training sample output by the output layer1;
Step 3.5: calculating cross entropy loss and performing backward propagation;
loss represents the cross entropy Loss, which is calculated by the following formula:
wherein, yiFor the true label, y, of the ith sample in the training dataset in step 2.2i1 denotes that the label of the input sample is 1, i.e. the position pixel is changed, yi0 indicates that the label of the input sample is 0, i.e. the position pixel is unchanged;a prediction label output by the output layer in the step 3.4 is represented, and log represents logarithm operation with a base 10;
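The loss of step 3.5 can be sketched as follows, keeping the base-10 logarithm used throughout the text; the `eps` clamp away from 0 and 1 is an implementation detail the text does not mention:

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy with base-10 log, averaged over the batch."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    return -np.mean(y_true * np.log10(y_pred) + (1 - y_true) * np.log10(1 - y_pred))

loss = cross_entropy_loss([1.0, 0.0], [0.9, 0.1])
```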
Step 4: inputting the test data set of step 2.3 into the Gabor convolutional network trained in step 3 and, following the process of step 3, obtaining prediction labels for the test data set;
Step 5: combining the training data set of step 2.2 with the prediction labels obtained in step 4 to obtain the change result map for the geographic location of step 1.
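The final merge of step 5 can be sketched as follows: the pseudo-labels of the training pixels are kept, and the network's predictions fill in the fuzzy pixels:

```python
import numpy as np

def change_map(pseudo, test_pos, test_pred):
    """Merge training pseudo-labels with network predictions for the
    fuzzy (0.5) pixels into the final binary change result map."""
    out = (pseudo == 1.0).astype(np.uint8)   # training pixels keep their labels
    for (i, j), p in zip(test_pos, test_pred):
        out[i, j] = int(p)                   # fuzzy pixels take the predicted label
    return out

pseudo = np.array([[0.0, 1.0], [0.5, 0.0]])
cm = change_map(pseudo, [(1, 0)], [1])
```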
The effect of the present invention is further explained by combining simulation experiments as follows:
the simulation experiment of the invention is carried out under the hardware environment of Intel (R) Xeon (R) CPU E5-2620, NVIDIA GTX 1080 and memory 32GB and the software environment of Ubuntu 16.04.6 and Matlab2012a, and the experimental objects are two groups of multi-time-phase SAR image San Francisco data sets and Farmland data sets. The San Francisco data set is shot by ERS-2 satellites in 8 months in 2003 and 5 months in 2004 in the San Francisco area, and has an original size of 7749 × 7713 pixels, wherein 256 × 256 pixels are selected, as shown in the first row of FIG. 5; the Farmland data set was captured by Radarsat satellites in the yellow river region at months 2008 and 6 2009 with a size of 306 x 291 pixels, as shown in the second row of fig. 5. The simulation experimental data of the present invention is shown in fig. 5. Fig. 5(c) is a change detection reference diagram of a simulation diagram of a real SAR image.
The results comparing the method of the invention with state-of-the-art change detection methods are shown in fig. 6. The Principal Component Analysis and K-means Clustering method (hereinafter PCAKM) of the comparative experiments was proposed in the article "Unsupervised change detection in satellite images using principal component analysis and k-means clustering"; the Neighborhood-based Ratio and Extreme Learning Machine method (hereinafter NR-ELM) was proposed in the article "Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine"; the Gabor Feature with Two-Level Clustering method (hereinafter Gabor TLC) was proposed in the article "Gabor feature based unsupervised change detection of multitemporal SAR images based on two-level clustering"; the Fuzzy Clustering with a Modified MRF Energy Function method (hereinafter MRFFCM) was proposed in the article "Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images". As can be seen from fig. 6, even when the input images suffer serious noise interference or differ in noise characteristics, the method of the invention still extracts the fine change information in the multi-temporal SAR images accurately, showing good noise robustness.
As shown in the first four columns of fig. 6, the other methods are susceptible to noise interference and struggle to express the change information accurately; the method of the invention uses Gabor directional filters, which better suppress the influence of noise; in particular, it still achieves excellent performance on the Farmland data set despite its differing noise characteristics.
The invention is compared with the above methods on objective indexes using the classification accuracy (PCC) and the Kappa coefficient (KC), calculated as follows:
PCC = (N - OE)/N × 100%
KC = (PCC - PRE)/(1 - PRE) × 100%
wherein N is the total number of pixels and OE = FP + FN is the total number of errors; FP is the number of false detections, i.e. pixels that have not changed but are detected as changed in the final change result map; FN is the number of missed detections, i.e. pixels that have changed in the reference map but are detected as unchanged in the final change result map. PRE denotes the expected proportion of chance agreement between detections and the reference, PRE = [(TP + FP) × (TP + FN) + (TN + FN) × (TN + FP)]/(N × N), where TP is the number of correctly detected changed pixels and TN is the number of correctly detected unchanged pixels. Larger PCC and KC values indicate a more accurate change detection result and stronger noise suppression. Tables 1 and 2 show the comparison of the invention with the above methods. As can be seen from the tables, the PCC and KC values of the method of the invention are the highest, showing that it detects the change information in the input images more accurately while suppressing noise interference.
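The PCC and KC computation can be sketched as follows; the PRE expression is the standard chance-agreement term of Cohen's kappa, reconstructed from the variables named above:

```python
import numpy as np

def pcc_kc(result, reference):
    """PCC and Kappa coefficient (KC), in percent, for a binary change
    map against a reference map."""
    result = np.asarray(result).ravel().astype(bool)
    reference = np.asarray(reference).ravel().astype(bool)
    n = result.size
    tp = int(np.sum(result & reference))     # changed, detected as changed
    tn = int(np.sum(~result & ~reference))   # unchanged, detected as unchanged
    fp = int(np.sum(result & ~reference))    # false detections
    fn = int(np.sum(~result & reference))    # missed detections
    pcc = (tp + tn) / n                      # = (N - OE) / N with OE = FP + FN
    pre = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    kc = (pcc - pre) / (1 - pre)
    return pcc * 100, kc * 100

# toy example: one missed change out of four pixels
pcc, kc = pcc_kc([1, 0, 0, 0], [1, 1, 0, 0])
```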
TABLE 1 Change detection method for San Francisco data set Experimental results
TABLE 2 Experimental results of the change detection method of the Farmland data set
Method | PCC (%) | KC (%) |
---|---|---|
PCAKM | 94.03 | 62.92 |
MRFFCM | 89.14 | 48.67 |
Gabor TLC | 94.76 | 65.91 |
NR-ELM | 97.70 | 76.05 |
The method of the invention | 98.91 | 88.68 |
The method based on the Gabor convolutional network described above is mainly intended to improve the analysis and understanding of multi-temporal remote sensing images. The method is, however, equally applicable to analyzing images captured by ordinary imaging devices such as digital cameras, with similar beneficial effects.
The SAR image change detection method based on a Gabor convolutional network provided by the present invention has been described in detail above, but the specific implementation of the present invention is obviously not limited thereto. Various obvious modifications may be made by those skilled in the art without departing from the scope of the invention as defined by the appended claims.
Claims (2)
1. A SAR image change detection method based on a Gabor convolution network comprises the following steps:
step 1: performing difference analysis on two multi-temporal SAR images in the same geographic position by using a logarithmic ratio to obtain a differential image of the multi-temporal SAR images;
the calculation process of the difference analysis comprises the following steps:
I_DI = |log I_1 - log I_2|
where I_1 and I_2 denote the two multi-temporal SAR images, I_DI denotes their difference image, |·| denotes the absolute value operation, and log denotes the base-10 logarithm;
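Step 1 can be sketched in a few lines of numpy (the epsilon guard against log of zero is an addition of this sketch, not part of the patent):

```python
import numpy as np

def log_ratio_difference(i1, i2, eps=1e-10):
    """Difference image I_DI = |log10(I1) - log10(I2)| of two co-registered
    SAR intensity images; eps avoids log(0) on dark pixels."""
    i1 = np.asarray(i1, dtype=np.float64)
    i2 = np.asarray(i2, dtype=np.float64)
    return np.abs(np.log10(i1 + eps) - np.log10(i2 + eps))
```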
step 2: pre-classify the difference image I_DI to construct a training data set and a test data set;
step 2.1: pre-classifying the difference image by using a multilayer fuzzy C-means clustering algorithm to obtain a pseudo tag matrix;
step 2.2: extract the spatial positions labeled 0 and 1 in the pseudo-label matrix, and take the L × L neighborhood pixels around the corresponding pixel points in the difference image obtained in step 1 as the training data set, where L is an odd number not less than 3; the number of samples in the resulting training data set is denoted T_1;
Step 2.3: extract the spatial positions labeled 0.5 in the pseudo-label matrix, and take the L × L neighborhood pixels around the corresponding pixel points in the difference image obtained in step 1 as the test data set, where L is an odd number not less than 3; the number of samples in the resulting test data set is denoted T_2;
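Steps 2.2 and 2.3 amount to gathering L × L neighbourhoods around selected pixels. A sketch, with function and argument names of this sketch's own choosing (the reflect padding for border pixels is an assumption the patent does not specify):

```python
import numpy as np

def extract_patches(di, pseudo_labels, wanted, L=7):
    """Collect the L x L neighbourhood around every pixel whose pseudo-label
    is in `wanted`; reflect-padding handles border pixels."""
    r = L // 2
    padded = np.pad(np.asarray(di, dtype=np.float64), r, mode='reflect')
    ys, xs = np.nonzero(np.isin(pseudo_labels, wanted))
    patches = [padded[y:y + L, x:x + L] for y, x in zip(ys, xs)]
    return np.stack(patches) if patches else np.empty((0, L, L))

# training set: pixels pseudo-labelled 0 and 1; test set: ambiguous pixels (0.5)
```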
The method is characterized by further comprising the following steps:
step 3: use the training data set of step 2.2 to train the Gabor convolution network;
the structure of the constructed Gabor convolution network is: input layer → data enhancement layer → small-scale Gabor convolution layer → medium-scale Gabor convolution layer → large-scale Gabor convolution layer → output layer;
step 3.1: generating a Gabor directional filter;
the Gabor directional filter is obtained by modulating a learning filter by a Gabor filter, wherein the Gabor filter is represented by G (u, v), u represents the direction of filtering, and v represents the scale (i.e. wavelength) of filtering;
at a given scale v, a Gabor filter is composed of u Gabor directional operators in different directions, which can be expressed as:
G(u, v) = {g_1, g_2, ..., g_u}
where g_1, ..., g_u denote the Gabor directional operators, each of size W × W, where W is an odd number not less than 3;
the learning filter is a three-dimensional convolution kernel in a convolutional neural network, denoted C, of size M × M × N, where M × M is the height and width of the kernel, M is an odd number not less than 3, and N is the number of channels, N being a natural number; the three-dimensional convolution kernel can therefore be expressed as:
C = {c_1, c_2, ..., c_N}
where c_1, ..., c_N denote the M × M convolution kernels on the individual channels;
the Gabor directional filter is obtained by performing channel-by-channel modulation on the learning filter by each Gabor directional operator in the Gabor filter, and the calculation process is as follows:
GoF(u, v) = {g_1 ∘ C, g_2 ∘ C, ..., g_u ∘ C}
where GoF(u, v) denotes the Gabor directional filter and ∘ denotes channel-by-channel modulation; on each channel the calculation is the element-wise product, i.e. g_i ∘ C = {g_i ⊙ c_1, g_i ⊙ c_2, ..., g_i ⊙ c_N}, i = 1, 2, ..., u;
the above operation yields u Gabor directional filters of size M × M × N;
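A sketch of step 3.1 in numpy. The patent specifies only the direction u and scale v; the Gabor envelope parameters (sigma, gamma) and the mapping of the scale to a wavelength are assumptions of this sketch:

```python
import numpy as np

def gabor_operator(theta, lam, size=3, sigma=1.0, gamma=0.5):
    """W x W Gabor directional operator at orientation theta and wavelength lam
    (sigma and gamma are assumed values, not specified in the patent)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam)

def gabor_orientation_filter(C, u=4, lam=2.0, size=3):
    """Modulate a learning filter C of shape (N, M, M) channel by channel with
    u orientation operators, giving GoF of shape (u, N, M, M)."""
    return np.stack([C * gabor_operator(k * np.pi / u, lam, size)
                     for k in range(u)])
```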
step 3.2: data enhancement;
the T_1 training samples of the training data set obtained in step 2.2 are used as the input data of the input layer and are fed in one by one; the data enhancement layer turns each training sample into N pictures of size L × L, denoted F_i, i = 1, 2, 3, ..., T_1, which serve as the input of the next layer;
step 3.3: obtain enhanced feature maps through the three scales of Gabor convolution layers:
step 3.3.1: extracting features through a small-scale Gabor convolution layer;
the small-scale Gabor convolution layer contains H_1 Gabor directional filters GoF(u, v_1), with convolution kernels of size M × M × N; convolution is applied to each F_i in turn, and the output feature map is:

F̂_i^(1) = Gconv(F_i, G_1)

where H_1 and v_1 are arbitrary natural numbers, Gconv(·) denotes the convolution operation, G_1 denotes the aforementioned Gabor directional filters GoF(u, v_1) of scale v_1, and F_i is the picture obtained in step 3.2; the feature map F̂_i^(1) obtained after the Gabor convolution is then normalized and max-pooled, i = 1, 2, 3, ..., T_1; finally, it is activated with the ReLU function, and the resulting feature map is denoted F_i^(1);
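The convolve, normalize, pool, activate pipeline of this step can be sketched in plain numpy (the whole-tensor standardization stands in for the patent's unspecified normalization, an assumption of this sketch; the loops are for clarity, not speed):

```python
import numpy as np

def gconv_block(F, filters, eps=1e-8):
    """One block of step 3.3: valid 2-D multi-channel convolution with a bank
    of H1 filters, normalization, 2x2 max pooling, then ReLU.
    F: (N, H, W) input; filters: (H1, N, M, M); returns (H1, H', W')."""
    N, H, W = F.shape
    H1, _, M, _ = filters.shape
    out = np.zeros((H1, H - M + 1, W - M + 1))
    for h in range(H1):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[h, i, j] = np.sum(F[:, i:i + M, j:j + M] * filters[h])
    out = (out - out.mean()) / (out.std() + eps)           # normalization
    Hp, Wp = out.shape[1] // 2 * 2, out.shape[2] // 2 * 2  # crop to even size
    pooled = out[:, :Hp, :Wp].reshape(H1, Hp // 2, 2, Wp // 2, 2).max(axis=(2, 4))
    return np.maximum(pooled, 0.0)                         # ReLU activation
```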
Step 3.3.2: extracting features through a mesoscale Gabor convolutional layer;
the medium-scale Gabor convolution layer contains H_2 Gabor directional filters GoF(u, v_2), with convolution kernels of size M × M × N; convolution is applied to each F_i^(1) in turn, and the output feature map is:

F̂_i^(2) = Gconv(F_i^(1), G_2)

where H_2 is a natural number greater than H_1, v_2 is a natural number greater than v_1, Gconv(·) denotes the convolution operation, G_2 denotes the aforementioned Gabor directional filters GoF(u, v_2) of scale v_2, and F_i^(1) is the feature map obtained in step 3.3.1; the feature map F̂_i^(2) obtained after the Gabor convolution is then normalized and max-pooled, i = 1, 2, 3, ..., T_1; finally, it is activated with the ReLU function, and the resulting feature map is denoted F_i^(2);
Step 3.3.3: extracting features through a large-scale Gabor convolution layer;
the large-scale Gabor convolution layer contains H_3 Gabor directional filters GoF(u, v_3), with convolution kernels of size M × M × N; convolution is applied to each F_i^(2) in turn, and the output feature map is:

F̂_i^(3) = Gconv(F_i^(2), G_3)

where H_3 is a natural number greater than H_2, v_3 is a natural number greater than v_2, Gconv(·) denotes the convolution operation, G_3 denotes the aforementioned Gabor directional filters GoF(u, v_3) of scale v_3, and F_i^(2) is the feature map obtained in step 3.3.2; the enhanced feature map F̂_i^(3) obtained after the Gabor convolution is then normalized, i = 1, 2, 3, ..., T_1; finally, it is activated with the ReLU function to obtain the final feature map F_i^(3);
Step 3.4: input the feature map F_i^(3) into the output layer; ŷ_i denotes the prediction label of the i-th training sample output by the output layer, i = 1, 2, 3, ..., T_1;
Step 3.5: calculating cross entropy loss and performing backward propagation;
Loss denotes the cross-entropy loss, calculated by the following formula:

Loss = -(1/T_1) Σ_{i=1}^{T_1} [y_i · log ŷ_i + (1 - y_i) · log(1 - ŷ_i)]
where y_i is the true label of the i-th sample of the training data set in step 2.2: y_i = 1 means the label of the input sample is 1, i.e. the pixel at that position has changed, and y_i = 0 means the label is 0, i.e. the pixel is unchanged; ŷ_i denotes the prediction label of the i-th training sample output by the output layer in step 3.4, and log denotes the base-10 logarithm;
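A minimal numpy sketch of this loss, using base-10 logarithms as the text specifies (a constant factor versus the usual natural-log form; the clipping epsilon is an addition of this sketch to avoid log of zero):

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy averaged over the T1 training samples,
    with base-10 logs as in the text."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.clip(np.asarray(y_pred, dtype=np.float64), eps, 1 - eps)
    return -np.mean(y_true * np.log10(y_pred)
                    + (1 - y_true) * np.log10(1 - y_pred))
```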
step 4: input the test data set of step 2.3 into the Gabor convolution network trained in step 3, and obtain, following the procedure of step 3, T_2 prediction labels for the test data set;
step 5: combine the training data set of step 2.2 with the prediction labels obtained in step 4 to obtain the change result map for the geographic location of step 1.
2. The SAR image change detection method based on a Gabor convolution network of claim 1, wherein L × L in step 2.2, step 2.3 and step 3.2 is 7 × 7;
and in step 3, N = 4, u = 4, M × M × N = 3 × 3 × 4, v_1 = 1, v_2 = 2, v_3 = 3; H_1 = 20, H_2 = 40, H_3 = 80.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010056677.2A CN111275680B (en) | 2020-01-18 | 2020-01-18 | SAR image change detection method based on Gabor convolution network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111275680A true CN111275680A (en) | 2020-06-12 |
CN111275680B CN111275680B (en) | 2023-05-26 |
Family
ID=70998741
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010056677.2A Active CN111275680B (en) | 2020-01-18 | 2020-01-18 | SAR image change detection method based on Gabor convolution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111275680B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358203A (en) * | 2017-07-13 | 2017-11-17 | 西安电子科技大学 | A kind of High Resolution SAR image classification method based on depth convolution ladder network |
CN107862741A (en) * | 2017-12-10 | 2018-03-30 | 中国海洋大学 | A kind of single-frame images three-dimensional reconstruction apparatus and method based on deep learning |
CN108765465A (en) * | 2018-05-31 | 2018-11-06 | 西安电子科技大学 | A kind of unsupervised SAR image change detection |
CN110060212A (en) * | 2019-03-19 | 2019-07-26 | 中国海洋大学 | A kind of multispectral photometric stereo surface normal restoration methods based on deep learning |
CN110659591A (en) * | 2019-09-07 | 2020-01-07 | 中国海洋大学 | SAR image change detection method based on twin network |
- 2020-01-18 CN CN202010056677.2A patent/CN111275680B/en active Active
Non-Patent Citations (2)
Title |
---|
FENG GAO: "Automatic change detection in synthetic aperture radar images based on PCANet", IEEE * |
QU Liang; DONG Junyu: "Application of a radar oil-spill monitoring system in ocean management", Technology Wind * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112734695A (en) * | 2020-12-23 | 2021-04-30 | 中国海洋大学 | SAR image change detection method based on regional enhancement convolutional neural network |
CN112734695B (en) * | 2020-12-23 | 2022-03-22 | 中国海洋大学 | SAR image change detection method based on regional enhancement convolutional neural network |
CN112906497A (en) * | 2021-01-29 | 2021-06-04 | 中国海洋大学 | Embedded safety helmet detection method and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111275680B (en) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110659591B (en) | SAR image change detection method based on twin network | |
CN104376330B (en) | Polarimetric SAR Image Ship Target Detection method based on super-pixel scattering mechanism | |
CN108596108B (en) | Aerial remote sensing image change detection method based on triple semantic relation learning | |
CN107229918A (en) | A kind of SAR image object detection method based on full convolutional neural networks | |
CN109871823B (en) | Satellite image ship detection method combining rotating frame and context information | |
CN111339827A (en) | SAR image change detection method based on multi-region convolutional neural network | |
CN110135438B (en) | Improved SURF algorithm based on gradient amplitude precomputation | |
Fang et al. | SAR-optical image matching by integrating Siamese U-Net with FFT correlation | |
CN115018773A (en) | SAR image change detection method based on global dynamic convolution neural network | |
CN105405132A (en) | SAR image man-made target detection method based on visual contrast and information entropy | |
CN103258324A (en) | Remote sensing image change detection method based on controllable kernel regression and superpixel segmentation | |
CN111275680B (en) | SAR image change detection method based on Gabor convolution network | |
CN112734695B (en) | SAR image change detection method based on regional enhancement convolutional neural network | |
CN114119621A (en) | SAR remote sensing image water area segmentation method based on depth coding and decoding fusion network | |
Gui et al. | A scale transfer convolution network for small ship detection in SAR images | |
CN111126508A (en) | Hopc-based improved heterogeneous image matching method | |
CN112686871B (en) | SAR image change detection method based on improved logarithmic comparison operator and Gabor_ELM | |
CN111967292B (en) | Lightweight SAR image ship detection method | |
CN107341798A (en) | High Resolution SAR image change detection method based on global local SPP Net | |
Sahu et al. | Digital image texture classification and detection using radon transform | |
CN114037897A (en) | Polarization SAR image change detection method based on dotted line singularity fusion | |
CN112766032A (en) | SAR image saliency map generation method based on multi-scale and super-pixel segmentation | |
Mutreja et al. | Comparative Assessment of Different Deep Learning Models for Aircraft Detection | |
Yuan et al. | Satellite attitude change recognition based on multi-frame image by 3d convolutional neural networks | |
Wang et al. | Deep convolutional neural network and its application in image recognition of road safety projects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||