CN108846835B - Image change detection method based on depth separable convolutional network


Info

Publication number
CN108846835B
Authority
CN
China
Prior art keywords
image
sample set
convolutional network
network
separable convolutional
Prior art date
Legal status
Active
Application number
CN201810550412.0A
Other languages
Chinese (zh)
Other versions
CN108846835A (en)
Inventor
焦李成
刘若辰
张浪浪
任蕊
冯捷
慕彩红
李阳阳
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201810550412.0A
Publication of CN108846835A
Application granted
Publication of CN108846835B
Legal status: Active

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection (G: Physics; G06: Computing; G06T: Image data processing or generation)
    • G06N 3/045: Neural networks; architectures; combinations of networks (G06N: Computing arrangements based on specific computational models)
    • G06T 2207/10004: Indexing scheme for image analysis; image acquisition modality; still image, photographic image
    • G06T 2207/20081: Indexing scheme for image analysis; special algorithmic details; training, learning
    • G06T 2207/20084: Indexing scheme for image analysis; special algorithmic details; artificial neural networks [ANN]

Abstract

The invention provides an image change detection method based on a depth separable convolutional network, which solves the technical problem of low detection accuracy in existing image change detection methods. The implementation steps are: construct a training sample set, a verification sample set and a test sample set; build a depth separable convolutional network using U-Net, a variant of the fully convolutional network, as the base network; construct a loss function for training the depth separable convolutional network; train, test and verify the depth separable convolutional network; and test with the verified, finally trained depth separable convolutional network to obtain a change detection result map. The image features extracted by the depth separable convolutional network are rich in semantic and structural information and have strong expression capability and discriminability for the image, which improves the change detection accuracy. The method can be used in technical fields such as land cover detection, disaster assessment and video monitoring.

Description

Image change detection method based on depth separable convolutional network
Technical Field
The invention belongs to the technical field of image processing, relates to an image change detection method, and particularly relates to an image change detection method based on a depth separable convolution network in the technical field of remote sensing image change detection, which can be used in the technical fields of land cover detection, disaster assessment, video monitoring and the like.
Background
Image change detection refers to determining and analyzing surface changes using multi-temporal images covering the same surface area together with other ancillary data. It uses a computer image processing system to identify and analyze changes in the state of a target or phenomenon across different time periods; it can determine changes of ground features or phenomena within a certain time interval and provide qualitative and quantitative information on their spatial distribution and change. According to the basic unit of image data processing, change detection methods fall mainly into three categories: 1) pixel-based methods; 2) window-based methods; 3) image-based methods.
Pixel-based methods take the pixel as the basic unit of image analysis and make full use of spectral characteristics, but do not consider spatial context; they are simple and easy to understand, yet have poor robustness to noise and struggle with high-resolution remote sensing images. The common pipeline of window-based methods is: obtain a difference image through an operator such as differencing or ratioing, slide a window over the difference image to obtain blocks, and classify the center pixel of the window (or all pixels in the window) into two categories, where the window size generally does not exceed 9 × 9. Compared with pixel-based methods, window-based methods consider the spatial neighborhood and context information and are more robust to noise. However, when processing large batches of image data and complex high-resolution remote sensing images, applying a sliding window to large volumes of data incurs a huge time cost, and when the image resolution is very high and the terrain is complex (as in building change detection), the efficiency of such methods drops sharply.
The main difference between image-based and window-based methods is that image-based methods train with image labels rather than class scalars. Image-based methods not only greatly reduce the time overhead, enabling real-time detection, but also fully consider the global information of the image, so the context information is richer and the detection accuracy is improved. Zhan et al., in the paper "Change Detection Based on Deep Siamese Convolutional Network for Optical Aerial Images" (IEEE Geoscience & Remote Sensing Letters, 2017, PP(99):1-5), disclose a change detection method for optical aerial images based on a weight-coupled (Siamese) deep convolutional network. The method trains the whole network end to end, uses 5 convolutional layers, and keeps the input and output sizes of the network equal. At prediction time, the images of the two moments are input directly, a Euclidean distance map between them is output, and post-processing operations such as segmentation or clustering are applied to the Euclidean distance map to obtain the final change detection result map. The drawbacks of this method are limited image characterization capability and strong dependence on post-processing, resulting in low change detection accuracy.
Depth separable convolution is an efficient convolution operation that performs a spatial convolution independently over each channel and then a pointwise convolution, i.e. a 1 × 1 convolution, to map the depthwise-convolved channels to a new channel space. Depth separable convolution reduces the parameters needed to fit the data, reduces the risk of overfitting during model training, and improves the effectiveness of the convolution.
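As an illustration, the following minimal Keras sketch (assuming TensorFlow 2.x; the input shape and filter counts are arbitrary, not taken from the patent) contrasts a standard convolution with the two-step depth separable convolution described above:

```python
# Depth separable convolution: a per-channel 3x3 spatial convolution followed
# by a 1x1 pointwise convolution that mixes channels. Sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(64, 64, 32))  # an H x W x C feature map

# Standard convolution: each 3x3 kernel spans all 32 input channels,
# costing 3*3*32*64 = 18432 weights for 64 output channels.
standard = layers.Conv2D(64, kernel_size=3, padding="same")(inputs)

# Depth separable convolution: one 3x3 filter per input channel (spatial step),
# then a 1x1 convolution across channels (pointwise step),
# costing 3*3*32 + 32*64 = 2336 weights for the same 64 output channels.
depthwise = layers.DepthwiseConv2D(kernel_size=3, padding="same")(inputs)
separable = layers.Conv2D(64, kernel_size=1, padding="same")(depthwise)

# Keras also offers the two steps as a single fused layer:
fused = layers.SeparableConv2D(64, kernel_size=3, padding="same")(inputs)
```

The roughly eight-fold reduction in weights in this example underlies the reduced overfitting risk mentioned above.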
Disclosure of Invention
The invention aims to provide an image change detection method based on a depth separable convolutional network aiming at the defects of the prior art, which is used for solving the technical problem of low detection accuracy in the existing image change detection method.
The idea of the invention for realizing the above purpose is as follows: construct a training sample set, a verification sample set and a test sample set; build a depth separable convolutional network using U-Net, a variant of the fully convolutional network, as the base network; construct a loss function for training the depth separable convolutional network; train, test and verify the depth separable convolutional network; and test with the verified, finally trained depth separable convolutional network to obtain a change detection result map.
According to the technical idea, the technical scheme adopted for achieving the purpose of the invention comprises the following steps:
(1) constructing a training sample set, a verification sample set and a test sample set:
(1a) obtaining an original image sample set:
acquiring a plurality of groups of image sample pairs shot at different moments in the same place, and labeling real change areas of each group of image sample pairs to obtain an original image sample set containing a plurality of real manual labeling images;
(1b) normalizing each group of image sample pairs in the original image sample set:
normalizing each group of image sample pairs in the original image sample set to obtain a plurality of groups of normalized image sample pairs;
(1c) updating the normalized multiple groups of image sample pairs:
stacking the t2-time image sample in each group of normalized image sample pairs onto the t1-time image sample to obtain a plurality of two-channel image samples;
(1d) acquiring a training sample set, a verification sample set and a test sample set:
taking most of the image samples of the multiple two-channel image samples as a training sample set, taking one part of the image samples in the rest image samples as a verification sample set, and taking the other part of the image samples as a test sample set;
(2) building a depth separable convolutional network:
(2a) taking the variant U-Net of the full convolution network as a basic network of the deep separable convolution network to be built;
(2b) configuring the convolution layer of the basic network of the depth separable convolution network to be constructed into depth separable convolution to obtain a depth separable convolution network;
(3) constructing a Loss function Loss of the training depth separable convolutional network:
(3a) constructing a cross entropy loss function BCE:
BCE = -\frac{1}{n}\sum_{i=1}^{n}\bigl[y_i\log\hat{y}_i+(1-y_i)\log(1-\hat{y}_i)\bigr]

wherein n is the number of pixel points, y_i is the category of the true pixel point, \hat{y}_i is the output value of the depth separable convolutional network, and i denotes the ith sample;
(3b) constructing a weighted DICE coefficient loss function DICE_loss:

DICE\_loss = 1 - \frac{(1+w^2)\sum_{i=1}^{n} y_i\hat{y}_i}{w^2\sum_{i=1}^{n} y_i+\sum_{i=1}^{n}\hat{y}_i}

wherein w is the preference control parameter for precision ratio and recall ratio;
(3c) obtaining a Loss function Loss of the training depth separable convolutional network:
Loss=BCE+DICE_loss;
(4) training the deep separable convolutional network:
(4a) setting the iteration number N = 1, the minimum threshold of the loss function Δl, and the learning rate α;
(4b) inputting the training sample set into the depth separable convolutional network to obtain a plurality of probability maps, and calculating the loss between each probability map and its corresponding real manual labeling map to obtain the overall loss Loss_N:

Loss_N = \frac{1}{m}\sum_{i=1}^{m} Loss^{(i)}

wherein m is the number of probability maps and Loss^{(i)} is the loss function of the ith probability map and its corresponding real manual labeling map;
(4c) judging whether Loss_N is less than the minimum threshold Δl; if so, taking the depth separable convolutional network corresponding to Loss_N as the trained depth separable convolutional network; otherwise, executing step (4d);
(4d) updating the parameter θ of the convolutional layers of the depth separable convolutional network by gradient descent to obtain the network with convolutional-layer parameter θ_new, calculated as:

\theta_{new} = \theta - \alpha\frac{\partial Loss_N}{\partial\theta}

(4e) letting N = N + 1, replacing the depth separable convolutional network with the one whose convolutional-layer parameter is θ_new, and executing step (4b);
(5) testing the trained deep separable convolutional network:
inputting a plurality of two-channel image samples in a verification sample set into a trained depth separable convolution network to obtain a probability map of changes of each group of image sample pairs corresponding to the image samples in the verification sample set in an original image sample set, and binarizing each probability map to obtain a plurality of binary maps;
(6) verifying the trained deep separable convolutional network:
calculating the average accuracy rate of the change detection of the plurality of binary images output by the trained deep separable convolutional network, judging whether the average accuracy rate is smaller than a set accuracy rate threshold value, if so, adjusting the learning rate α, and executing the step (4), otherwise, taking the deep separable convolutional networks corresponding to the plurality of binary images as the finally trained deep separable convolutional network, and executing the step (7);
(7) testing the finally trained deep separable convolutional network:
(7a) inputting a plurality of image samples of two channels in a test sample set into a finally trained depth separable convolution network to obtain a probability graph of the change of each group of image sample pairs corresponding to the image samples in the test sample set in an original image sample set;
(7b) binarizing each probability map to obtain a plurality of change detection result maps.
Compared with the prior art, the invention has the following advantages:
Firstly, when testing the finally trained depth separable convolutional network, the invention extracts features with a depth separable convolutional network. Compared with the features extracted by the deep convolutional network adopted in the prior art, these features are more abstract and carry richer semantic and structural information, so the expression capability and discriminability for the image are stronger, which effectively improves the detection accuracy.
Secondly, because the invention outputs the image end to end when testing the finally trained depth separable convolutional network, whereas the prior art applies post-processing operations such as segmentation or clustering to the output of its deep convolutional network, the invention omits post-processing, which removes its negative influence on the change detection result and saves detection time, improving both detection accuracy and detection efficiency.
Thirdly, because the loss function constructed for training the depth separable convolutional network adds preference control over precision ratio and recall ratio, the invention improves detection flexibility when performing image change detection.
Drawings
FIG. 1 is a block diagram of an implementation flow of the present invention;
FIG. 2 is a block diagram of the deep separable convolutional network of the present invention;
FIG. 3 is a schematic diagram of the depth separable convolution of the present invention;
FIG. 4 shows the change detection result maps output by the present invention and by the prior art on the test sample set;
FIG. 5 shows the change detection result maps output by the present invention on the test sample set under different preference control parameters w.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples:
referring to fig. 1, the image change detection method based on the depth separable convolutional network includes the following steps:
step 1, constructing a training sample set, a verification sample set and a test sample set:
(1a) in this embodiment, each group of image sample pairs in the existing SZTAKI AirChange Benchmark sample set is normalized to obtain multiple groups of normalized image sample pairs, and the t2-time image sample in each group of normalized image sample pairs is stacked onto the t1-time image sample to obtain a plurality of two-channel image samples, wherein the normalization formula used is:

I_A' = \frac{I_A - \min(I_A)}{\max(I_A) - \min(I_A)}

I_B' = \frac{I_B - \min(I_B)}{\max(I_B) - \min(I_B)}

wherein I_A and I_B represent image samples of the same location at different times, I_A' represents the normalized I_A, and I_B' represents the normalized I_B;
(1b) to enlarge the scale of the sample set, this example uses 3 of the two-channel image samples as the test sample set and the remaining 9 image samples to construct the training and verification sets: the 9 image samples are expanded to 1800 images by randomly cropping corresponding positions on each image sample and its real manual labeling map, from which 1440 image samples are randomly selected as the training sample set and the remaining 360 serve as the verification sample set.
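A sketch of this sample-construction step is given below, assuming single-channel image samples (so that stacking t1 and t2 yields a two-channel input) and min-max normalization; the function names, crop size and crop count are illustrative, not taken from the patent:

```python
# Sample construction sketch: normalize each image, stack the t2 sample onto
# the t1 sample as a two-channel array, and expand the set by random cropping.
import numpy as np

def min_max_normalize(img):
    """Min-max normalization of one image sample to [0, 1]."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def make_two_channel_sample(img_t1, img_t2):
    """Normalize both time instants, then stack t2 onto t1 along the channel axis."""
    return np.stack([min_max_normalize(img_t1),
                     min_max_normalize(img_t2)], axis=-1)

def random_crops(sample, label, crop=112, n=200, seed=0):
    """Crop the same random window from the two-channel sample and its
    real manual labeling map, yielding n augmented pairs."""
    rng = np.random.default_rng(seed)
    h, w = sample.shape[:2]
    for _ in range(n):
        y = int(rng.integers(0, h - crop + 1))
        x = int(rng.integers(0, w - crop + 1))
        yield (sample[y:y + crop, x:x + crop],
               label[y:y + crop, x:x + crop])
```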
Step 2, building a depth separable convolutional network, and referring to fig. 2:
(2a) taking U-Net, a variant of the fully convolutional network, as the base network of the depth separable convolutional network to be constructed. The U-Net architecture combines low-level and high-level feature maps through skip connections, which brings accurate pixel-level localization. The whole network has only convolutional and pooling layers, and comprises a contracting path that captures context information and a symmetric expanding path for accurate localization. The contracting path follows a typical convolutional network architecture, alternating convolution and pooling operations, progressively downsampling the feature maps while increasing their number layer by layer. Each stage of the expanding path consists of an upsampled feature map followed by convolutional layers. The last layer of the network is a 1 × 1 convolution that integrates information across channels;
(2b) configuring the convolutional layers of the base network as depth separable convolutions to obtain the depth separable convolutional network. The depth separable convolution process is shown in fig. 3: convolution kernels of size 3 × 3 × 1 are convolved with each channel of the input data separately, generating new feature maps equal in number to the input channels; the newly obtained feature maps then undergo information fusion between channels using 1 × 1 convolution kernels. The purpose of decoupling spatial information from depth (channel) information is to use parameters effectively and improve convolution efficiency; applying depth separable convolution to the U-Net architecture improves the accuracy of change detection.
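The following reduced Keras sketch illustrates such a network: a U-Net-style contracting/expanding architecture whose convolutions are depth separable, ending in a 1 × 1 convolution. The depth, filter counts and input size are illustrative; the patent's exact configuration (fig. 2) is not reproduced here:

```python
# A reduced U-Net-style network with depth separable convolutions.
import tensorflow as tf
from tensorflow.keras import layers

def sep_block(x, filters):
    """Two stacked depth separable convolutions with ReLU activations."""
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)

def build_separable_unet(input_shape=(112, 112, 2)):
    inputs = tf.keras.Input(shape=input_shape)
    # Contracting path: alternate separable convolution and pooling.
    c1 = sep_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = sep_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck.
    b = sep_block(p2, 128)
    # Expanding path: upsample and fuse with the skip connection.
    u2 = layers.UpSampling2D()(b)
    c3 = sep_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.UpSampling2D()(c3)
    c4 = sep_block(layers.Concatenate()([u1, c1]), 32)
    # Final 1x1 convolution integrates channels into a change probability map.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inputs, outputs)
```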
Step 3, constructing a Loss function Loss of the training depth separable convolutional network:
(3a) constructing a cross entropy loss function BCE:

BCE = -\frac{1}{n}\sum_{i=1}^{n}\bigl[y_i\log\hat{y}_i+(1-y_i)\log(1-\hat{y}_i)\bigr]

wherein n is the number of pixel points, y_i is the category of the true pixel point, \hat{y}_i is the output value of the depth separable convolutional network, and i denotes the ith sample;
(3b) for the segmentation problem, the DICE coefficient is an important observation index of model performance, and in general the evaluation index should be as close as possible to the training target; the weighted DICE coefficient loss function DICE_loss is therefore constructed by adding a term related to the evaluation index to the loss function:

DICE\_loss = 1 - \frac{(1+w^2)\sum_{i=1}^{n} y_i\hat{y}_i}{w^2\sum_{i=1}^{n} y_i+\sum_{i=1}^{n}\hat{y}_i}

wherein w is the preference control parameter for precision ratio and recall ratio;
(3c) obtaining a Loss function Loss of the training depth separable convolutional network:
Loss = BCE + DICE_loss.
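Under the reconstruction of DICE_loss given above (the original formula is printed as an image, so its exact form is an assumption here), the combined loss can be sketched in Keras as:

```python
# Combined loss sketch: binary cross-entropy plus a weighted Dice term whose
# preference between precision and recall is controlled by w (assumed form).
import tensorflow as tf

def make_loss(w=1.0, eps=1e-7):
    def loss(y_true, y_pred):
        bce = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(y_true, y_pred))
        inter = tf.reduce_sum(y_true * y_pred)
        dice = ((1.0 + w ** 2) * inter + eps) / (
            w ** 2 * tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)
        return bce + (1.0 - dice)  # DICE_loss = 1 - weighted Dice coefficient
    return loss
```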
Step 4, training the depth separable convolutional network:
(4a) setting the iteration number N = 1, the minimum threshold of the loss function Δl, and the learning rate α;
(4b) inputting the training sample set into the depth separable convolutional network to obtain a plurality of probability maps, and calculating the loss between each probability map and its corresponding real manual labeling map to obtain the overall loss Loss_N:

Loss_N = \frac{1}{m}\sum_{i=1}^{m} Loss^{(i)}

wherein m is the number of probability maps and Loss^{(i)} is the loss function of the ith probability map and its corresponding real manual labeling map;
(4c) judging whether Loss_N is less than the minimum threshold Δl; if so, taking the depth separable convolutional network corresponding to Loss_N as the trained depth separable convolutional network; otherwise, executing step (4d);
(4d) updating the parameter θ of the convolutional layers of the depth separable convolutional network by gradient descent to obtain the network with convolutional-layer parameter θ_new, calculated as:

\theta_{new} = \theta - \alpha\frac{\partial Loss_N}{\partial\theta}

(4e) letting N = N + 1, replacing the depth separable convolutional network with the one whose convolutional-layer parameter is θ_new, and executing step (4b).
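A training sketch for this step, reusing build_separable_unet and make_loss from the sketches above; the optimizer choice, learning rate α and threshold Δl values are illustrative:

```python
# Training sketch: gradient descent on the combined loss, stopping once the
# epoch loss drops below the threshold delta_l, as in step (4c).
import tensorflow as tf

class LossThresholdStop(tf.keras.callbacks.Callback):
    """Stop training when the training loss falls below delta_l."""
    def __init__(self, delta_l):
        super().__init__()
        self.delta_l = delta_l

    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get("loss", float("inf")) < self.delta_l:
            self.model.stop_training = True

model = build_separable_unet(input_shape=(112, 112, 2))
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
              loss=make_loss(w=1.0))
# With train_x of shape (1440, 112, 112, 2) and train_y of shape
# (1440, 112, 112, 1), training would run as:
# model.fit(train_x, train_y, batch_size=8, epochs=200,
#           callbacks=[LossThresholdStop(delta_l=0.05)])
```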
Step 5, testing the trained depth separable convolution network:
inputting a plurality of two-channel image samples in a verification sample set into a trained depth separable convolution network to obtain a probability map of changes of each group of image sample pairs corresponding to the image samples in the verification sample set in an original image sample set, and binarizing each probability map to obtain a plurality of binary maps.
Step 6, verifying the trained deep separable convolutional network:
calculating the average accuracy of change detection over the plurality of binary maps output by the trained depth separable convolutional network, and judging whether the average accuracy is smaller than the set accuracy threshold; if so, adjusting the learning rate α and executing step 4; otherwise, taking the depth separable convolutional network corresponding to the plurality of binary maps as the finally trained depth separable convolutional network and executing step 7. The accuracy of change detection is calculated as:

Accuracy = \frac{TP + TN}{N}

wherein TP is the number of pixels correctly detected as changed in the binary map, TN is the number of pixels correctly detected as unchanged in the binary map, and N is the total number of pixels in the binary map.
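A sketch of this verification metric, with the binarization threshold (0.5 here) as an assumption:

```python
# Accuracy sketch for step 6: binarize the probability map and score
# (TP + TN) / N against the real manual labeling map.
import numpy as np

def change_accuracy(prob_map, truth, threshold=0.5):
    pred = prob_map >= threshold          # binarize the probability map
    tp = np.sum(pred & (truth == 1))      # correctly detected as changed
    tn = np.sum(~pred & (truth == 0))     # correctly detected as unchanged
    return (tp + tn) / truth.size
```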
Step 7, testing the finally trained depth separable convolutional network:
(7a) inputting a plurality of image samples of two channels in a test sample set into a finally trained depth separable convolution network to obtain a probability graph of the change of each group of image sample pairs corresponding to the image samples in the test sample set in an original image sample set;
(7b) binarizing each probability map to obtain a plurality of change detection result maps.
The technical effects of the invention are further described by combining simulation experiments as follows:
1. Simulation conditions and contents:
the experimental data set adopts an SZTAKI AirChange Benchmark set image change detection data set, and the algorithm simulation platform of the embodiment is as follows: the main frequency is CPU of 4.00GHz, memory of 16.0GB, graphic card gtx1070ti, Windows 10(64 bits) operating system, Keras and Python development platform.
The simulation parameters used in the simulation experiment of the invention are as follows:
Precision ratio: the proportion of the detected change pixels that are actually changed in the experimental result map:

P = \frac{TP}{TP + FP}

wherein TP is the number of correctly detected change pixels and FP is the number of falsely detected change pixels.
Recall ratio: the proportion of the actually changed pixels that are correctly detected in the experimental result map:

R = \frac{TP}{TP + FN}

wherein TP is the number of pixels correctly detected as changed and FN is the number of changed pixels falsely detected as unchanged.
F1 value: a comprehensive index, the weighted harmonic mean of precision ratio and recall ratio:

F_1 = \frac{2PR}{P + R}

wherein P is the precision ratio and R is the recall ratio.
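These three indices can be computed from a binarized change detection result map and its real manual labeling map as follows (a sketch; the small epsilon guards against division by zero when a class is empty):

```python
# Precision, recall and F1 sketch for the simulation metrics.
import numpy as np

def precision_recall_f1(pred, truth, eps=1e-8):
    tp = np.sum((pred == 1) & (truth == 1))  # correctly detected changes
    fp = np.sum((pred == 1) & (truth == 0))  # falsely detected changes
    fn = np.sum((pred == 0) & (truth == 1))  # missed changes
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f1
```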
Simulation 1, using the invention and the prior art to perform change detection on a test sample set, wherein a graph of the output change detection result is shown in fig. 4;
simulation 2, using the invention to perform change detection on the test sample set under different preference control parameters w, wherein the output change detection result graph is shown in fig. 5;
2. Simulation result analysis:
referring to fig. 4, in which,
FIGS. 4(a) and 4(b) are the image samples of scene one: FIG. 4(a) is the t1-time image sample and FIG. 4(b) is the t2-time image sample; FIG. 4(c) is the real manual labeling map of scene one; FIG. 4(d) is the change detection result map of scene one output by the present invention; FIG. 4(e) is the change detection result map of scene one output by the prior art;
FIGS. 4(f) and 4(g) are the image samples of scene two: FIG. 4(f) is the t1-time image sample and FIG. 4(g) is the t2-time image sample; FIG. 4(h) is the real manual labeling map of scene two; FIG. 4(i) is the change detection result map of scene two output by the present invention; FIG. 4(j) is the change detection result map of scene two output by the prior art;
FIGS. 4(k) and 4(l) are the image samples of scene three: FIG. 4(k) is the t1-time image sample and FIG. 4(l) is the t2-time image sample; FIG. 4(m) is the real manual labeling map of scene three; FIG. 4(n) is the change detection result map of scene three output by the present invention; FIG. 4(o) is the change detection result map of scene three output by the prior art;
Comparing the change detection result maps of fig. 4(d), fig. 4(i) and fig. 4(n) with the real manual labeling maps of the corresponding scenes in fig. 4(c), fig. 4(h) and fig. 4(m), it can be seen that, because the features extracted by the depth separable convolutional network of the present invention are more abstract and richer in semantic and structural information, with stronger expression capability and discriminability for the image, the obtained change detection results contain few noise points and the change detection accuracy is high. Comparing the change detection result maps of fig. 4(e), fig. 4(j) and fig. 4(o) with the same real manual labeling maps, it can be seen that, because the prior art's capability of representing the image is weak, its change detection results contain many noise points, so its change detection accuracy is low.
Referring to fig. 5, in which,
FIG. 5(a) is a diagram of a variation detection result output for a scene three image sample in a test sample set under the condition that a preference control parameter w is equal to 0.5;
FIG. 5(b) is a diagram of the variation detection result output for the scene three image samples in the test sample set under the condition that the preference control parameter w is equal to 1.0;
FIG. 5(c) is a diagram of the variation detection result output for the scene three image samples in the test sample set under the condition that the preference control parameter w is equal to 2.0;
The precision ratio, recall ratio and F1 value were calculated from the scene-three change detection result maps in FIG. 5; the results are shown in Table 1 below.
Table 1. Change detection results obtained under different preference control parameters w (the table values are rendered as an image in the original document)
As can be seen from fig. 5, the smaller the preference control parameter w, the larger the white (detected change) area and the higher the recall ratio; the larger w, the better the image details are retained and the higher the precision ratio. Table 1 verifies this conclusion; w = 1 is a compromise between precision ratio and recall ratio.

Claims (3)

1. An image change detection method based on a depth separable convolutional network is characterized by comprising the following steps:
(1) constructing a training sample set, a verification sample set and a test sample set:
(1a) obtaining an original image sample set:
acquiring a plurality of groups of image sample pairs shot at different moments in the same place, and labeling real change areas of each group of image sample pairs to obtain an original image sample set containing a plurality of real manual labeling images;
(1b) normalizing each group of image sample pairs in the original image sample set:
normalizing each group of image sample pairs in the original image sample set to obtain a plurality of groups of normalized image sample pairs;
(1c) updating the normalized multiple groups of image sample pairs:
stacking the t2-time image sample in each group of normalized image sample pairs onto the t1-time image sample to obtain a plurality of two-channel image samples;
(1d) acquiring a training sample set, a verification sample set and a test sample set:
taking most of the image samples of the multiple two-channel image samples as a training sample set, taking one part of the image samples in the rest image samples as a verification sample set, and taking the other part of the image samples as a test sample set;
(2) building a depth separable convolutional network:
(2a) taking the variant U-Net of the full convolution network as a basic network of the deep separable convolution network to be built;
(2b) configuring the convolution layer of the basic network of the depth separable convolution network to be constructed into depth separable convolution to obtain a depth separable convolution network;
(3) constructing a Loss function Loss of the training depth separable convolutional network:
(3a) constructing a cross entropy loss function BCE:
BCE = -\frac{1}{n}\sum_{i=1}^{n}\bigl[y_i\log\hat{y}_i+(1-y_i)\log(1-\hat{y}_i)\bigr]

wherein n is the number of pixel points, y_i is the category of the true pixel point, \hat{y}_i is the output value of the depth separable convolutional network, and i denotes the ith sample;
(3b) constructing a weighted DICE coefficient loss function DICE_loss:

DICE\_loss = 1 - \frac{(1+w^2)\sum_{i=1}^{n} y_i\hat{y}_i}{w^2\sum_{i=1}^{n} y_i+\sum_{i=1}^{n}\hat{y}_i}

wherein w is the preference control parameter for precision ratio and recall ratio, and the value range of DICE_loss is [0, 1];
(3c) obtaining a Loss function Loss of the training depth separable convolutional network:
Loss=BCE+DICE_loss;
(4) training the deep separable convolutional network:
(4a) setting the iteration number N = 1, the minimum threshold of the loss function Δl, and the learning rate α;
(4b) inputting the training sample set into the depth separable convolutional network to obtain a plurality of probability maps, and calculating the loss between each probability map and its corresponding real manual labeling map to obtain the overall loss Loss_N:

Loss_N = \frac{1}{m}\sum_{i=1}^{m} Loss^{(i)}

wherein m is the number of probability maps and Loss^{(i)} is the loss function of the ith probability map and its corresponding real manual labeling map;
(4c) judging whether Loss_N is less than the minimum threshold Δl; if so, taking the depth separable convolutional network corresponding to Loss_N as the trained depth separable convolutional network; otherwise, executing step (4d);
(4d) updating the parameter θ of the convolutional layers of the depth separable convolutional network by gradient descent to obtain the network with convolutional-layer parameter θ_new, calculated as:

\theta_{new} = \theta - \alpha\frac{\partial Loss_N}{\partial\theta}

(4e) letting N = N + 1, replacing the depth separable convolutional network with the one whose convolutional-layer parameter is θ_new, and executing step (4b);
(5) testing the trained deep separable convolutional network:
inputting a plurality of two-channel image samples in a verification sample set into a trained depth separable convolution network to obtain a probability map of changes of each group of image sample pairs corresponding to the image samples in the verification sample set in an original image sample set, and binarizing each probability map to obtain a plurality of binary maps;
(6) verifying the trained deep separable convolutional network:
calculating the average accuracy rate of the change detection of the plurality of binary images output by the trained deep separable convolutional network, judging whether the average accuracy rate is smaller than a set accuracy rate threshold value, if so, adjusting the learning rate α, and executing the step (4), otherwise, taking the deep separable convolutional networks corresponding to the plurality of binary images as the finally trained deep separable convolutional network, and executing the step (7);
(7) testing the finally trained deep separable convolutional network:
(7a) inputting a plurality of image samples of two channels in a test sample set into a finally trained depth separable convolution network to obtain a probability graph of the change of each group of image sample pairs corresponding to the image samples in the test sample set in an original image sample set;
(7b) binarizing each probability map to obtain a plurality of change detection result maps.
2. The method of image change detection based on a depth separable convolutional network of claim 1, wherein: normalizing each group of image sample pairs in the original image sample set in the step (1b), wherein a normalization formula is as follows:
I_A' = \frac{I_A - \min(I_A)}{\max(I_A) - \min(I_A)}

I_B' = \frac{I_B - \min(I_B)}{\max(I_B) - \min(I_B)}

wherein I_A and I_B represent image samples of the same location at different times, I_A' represents the normalized I_A, and I_B' represents the normalized I_B.
3. The method of image change detection based on a depth separable convolutional network of claim 1, wherein: calculating the average accuracy rate of change detection of the binary image in the step (6), wherein the calculation formula is as follows:
Accuracy = \frac{TP + TN}{N}
wherein, TP is the number of pixels correctly detected as changed in the binary image, TN is the number of pixels correctly detected as unchanged in the binary image, and N is the total number of pixels in the binary image.

Priority Applications (1)

CN201810550412.0A, priority date 2018-05-31, filing date 2018-05-31: Image change detection method based on depth separable convolutional network (granted as CN108846835B)


Publications (2)

Publication Number Publication Date
CN108846835A CN108846835A (en) 2018-11-20
CN108846835B (en) 2020-04-14

Family

ID=64211039

Family Applications (1)

CN201810550412.0A (Active): Image change detection method based on depth separable convolutional network

Country Status (1)

CN: CN108846835B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113168684B (en) 2018-11-26 2024-04-05 Oppo广东移动通信有限公司 Method, system and computer readable medium for improving quality of low brightness images
CN109711280B (en) * 2018-12-10 2020-10-16 北京工业大学 ST-Unet-based video anomaly detection method
CN109740759A (en) * 2018-12-13 2019-05-10 平安科技(深圳)有限公司 Learning model optimization and selection method, electronic device and computer equipment
CN109766936B (en) * 2018-12-28 2021-05-18 西安电子科技大学 Image change detection method based on information transfer and attention mechanism
CN109766467B (en) * 2018-12-28 2019-12-13 珠海大横琴科技发展有限公司 Remote sensing image retrieval method and system based on image segmentation and improved VLAD
CN109902717A (en) * 2019-01-23 2019-06-18 平安科技(深圳)有限公司 Lesion automatic identifying method, device and computer readable storage medium
CN110059658B (en) * 2019-04-26 2020-11-24 北京理工大学 Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network
CN110163852B (en) * 2019-05-13 2021-10-15 北京科技大学 Conveying belt real-time deviation detection method based on lightweight convolutional neural network
CN111950723A (en) * 2019-05-16 2020-11-17 武汉Tcl集团工业研究院有限公司 Neural network model training method, image processing method, device and terminal equipment
CN110176024B (en) * 2019-05-21 2023-06-02 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for detecting target in video
CN110309836B (en) * 2019-07-01 2021-05-18 北京地平线机器人技术研发有限公司 Image feature extraction method, device, storage medium and equipment
CN110516761A (en) * 2019-09-03 2019-11-29 成都容豪电子信息科技有限公司 Object detection system, method, storage medium and terminal based on deep learning
CN110706232A (en) * 2019-09-29 2020-01-17 五邑大学 Texture image segmentation method, electronic device and computer storage medium
CN111259853A (en) * 2020-02-04 2020-06-09 中国科学院计算技术研究所 High-resolution remote sensing image change detection method, system and device
CN111680553A (en) * 2020-04-29 2020-09-18 北京联合大学 Pathological image identification method and system based on depth separable convolution
CN113421194B * 2021-06-04 2022-07-15 贵州省地质矿产勘查开发局 Method for extracting hidden faults from Bouguer gravity anomaly images


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11157814B2 (en) * 2016-11-15 2021-10-26 Google Llc Efficient convolutional neural networks and techniques to reduce associated computational costs

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016665A (en) * 2017-02-16 2017-08-04 浙江大学 A kind of CT pulmonary nodule detection methods based on depth convolutional neural networks
CN107516082A (en) * 2017-08-25 2017-12-26 西安电子科技大学 Based on the SAR image change region detection method from step study
CN107948529A (en) * 2017-12-28 2018-04-20 北京麒麟合盛网络技术有限公司 Image processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Yang et al., "An Aircraft Detection Framework Based on Reinforcement Learning and Convolutional Neural Networks in Remote Sensing Images", ResearchGate, 2018-02-06, full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant