CN115526886A - Optical satellite image pixel level change detection method based on multi-scale feature fusion

Publication number: CN115526886A
Authority: CN (China)
Prior art keywords: feature, change detection, image, feature fusion, characteristic diagram
Legal status: Granted
Application number: CN202211320523.5A
Other languages: Chinese (zh)
Other versions: CN115526886B (en)
Inventors: 周文明, 赵利华, 张志军, 丁峰, 胡朝鹏, 张浩, 甘俊, 张冠军, 谭兆, 齐春雨, 王爱辉, 李平苍, 赵振洋
Current Assignee: Tianjin Surveying And Mapping Geographic Information Research Center; China Railway Design Corp
Original Assignee: Tianjin Surveying And Mapping Geographic Information Research Center; China Railway Design Corp
Application filed by Tianjin Surveying And Mapping Geographic Information Research Center and China Railway Design Corp
Priority: CN202211320523.5A
Publication of CN115526886A; application granted; publication of CN115526886B
Legal status: Active

Classifications

    • G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • G06T2207/10032 — Satellite or aerial image; remote sensing
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/20221 — Image fusion; image merging


Abstract

The invention discloses an optical satellite image pixel-level change detection method based on multi-scale feature fusion, belonging to the field of remote sensing image processing. The invention comprises a new convolutional neural network in which a new multi-scale feature fusion strategy is designed; the strategy resists the registration errors that exist between bi-temporal satellite remote sensing images, thereby effectively improving the change detection precision of remote sensing images. The network also provides a half-group convolution module; embedding this module in the network model effectively increases the inference speed of the network and improves change detection efficiency. The method processes two input remote sensing images of the same size, resolution and geographic coverage through the network to obtain a change detection result map of the same size, and achieves excellent remote sensing image change detection precision.

Description

Optical satellite image pixel level change detection method based on multi-scale feature fusion
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to an optical satellite image pixel level change detection method based on multi-scale feature fusion.
Background
Most existing remote sensing image change detection models focus on accurately locating changed areas but neglect efficiency, which limits the application of change detection technology in practical tasks, especially large-scale and time-critical ones. Although some researchers have designed efficient processing algorithms to improve change detection efficiency, such design strategies often cause a loss of precision in the change detection result. For example, in the processing described in patent document CN201710220588.5, "a remote sensing image building detection method based on multi-scale and multi-feature fusion", all high-resolution remote sensing images are uniformly down-sampled, and edge images at different scales are fused after multiple groups of feature calculations; this process sacrifices sampling precision and reduces the dimensionality of feature extraction, so it lacks the ability to handle registration errors between satellite images, which also severely limits the performance of existing remote sensing image change detection models in practical applications. A pixel-level change detection network better suited to remote sensing images therefore urgently needs to be proposed. Aiming at these problems, the invention provides an efficient optical satellite image pixel-level change detection method based on a multi-scale feature fusion strategy.
Disclosure of Invention
Therefore, the object of the invention is to provide an optical satellite image pixel-level change detection method based on multi-scale feature fusion, so as to overcome insufficient registration precision and achieve accurate pixel-level change detection in remote sensing images.
In order to achieve the above object, the present invention provides a method for detecting pixel-level changes of an optical satellite image based on multi-scale feature fusion, which comprises the following steps:
S1, acquiring a plurality of groups of change image pairs to be detected and inputting them into a trained change detection model; the change detection model adopts a convolutional neural network incorporating a multi-scale feature fusion strategy; the multi-scale feature fusion strategy applies different processing to different types of images to obtain a plurality of feature maps with different scales and different feature levels;
S2, taking the feature maps processed by the multi-scale feature fusion strategy as the input of the convolutional neural network, wherein the convolutional neural network comprises a plurality of half-group convolution modules, each comprising a separation part and a cascade part; the separation part regroups part of the input feature maps to form a plurality of sub-feature maps, and the cascade part cascades the input feature maps with the sub-feature maps;
S3, performing binary classification on the final fused feature map formed after multiple separations and cascades with a classifier to obtain the final change detection result map.
Further preferably, in S1, the images to be detected are large-range remote sensing images acquired at two different periods and subjected to geometric correction, orthorectification and resampling.
Further preferably, in S1, the multi-scale feature fusion strategy includes setting a plurality of spatial feature extraction branches and a plurality of image down-sampling processing branches, where the number of the spatial feature extraction branches and the number of the down-sampling processing branches are the same.
Further preferably, each spatial feature extraction branch uses two 3 × 3 convolution kernels followed by a pooling layer to extract features, obtaining the feature map of that branch.
Further preferably, the down-sampling process of each image down-sampling processing branch is performed by using a bilinear interpolation method.
Further preferably, in S2, the step of using the feature maps processed by the multi-scale feature fusion strategy as the input of the convolutional neural network and finally forming a fused feature map with a plurality of half-group convolution modules comprises the following steps:
S201, the change image pairs {(I1, I2, CM*)_t | t = 1, 2, …, T} are used as the input of the change detection model Φ, and input feature maps L1-0, …, L4-0 at different scales are obtained through the multi-scale feature fusion strategy;
S202, high-level features are extracted from L1-0 through two branches to obtain feature maps L2-1 and L2-2; feature maps L2-0, L2-1 and L2-2 form the input of the first half-group convolution module Ω1; the separation part of Ω1 yields the output feature map set Sp1 = {L3-1, L3-2, L3-3}, and the cascade part yields the output feature map Cp1 = {L2-c};
S203, feature map L3-0 is cascaded with the output feature map of Ω1 to obtain L3-4; feature map L3-0, the separation-part output Sp1 of Ω1 and feature map L3-4 together form the input of the second half-group convolution module Ω2; the outputs of Ω2 are Sp2 = {L4-1, L4-2, L4-3, L4-4} and Cp2 = {L3-c};
S204, feature map L4-0 is cascaded with the output feature map of Ω2 to obtain L4-5;
S205, to supplement deep image information, the model Φ cascades L3-c and L4-5 to obtain feature map L4-6, and performs a channel compression operation to obtain L4-c;
S206, feature map L4-c is subjected to deconvolution and up-sampling operations and cascaded with feature map L3-c to obtain feature map L3-u; this step is repeated for the feature map sets {L3-u, L2-c} and {L2-u, L2-c} to obtain L2-u and L1-u in sequence.
Further preferably, in S3, when the final fused feature map formed by multiple separations and cascades is classified by the classifier, the classifier is represented by the following formula:
F(f_i) = exp(f_i) / Σ_{j=1}^{2} exp(f_j)
wherein f_i is the output vector of the convolutional layer, exp() is the exponential function, and F(f_i) is the classification output; as a binary classification task, the output range of F(f_i) is [0, 1], indicating the probability of a pixel change.
Further preferably, S3 further includes binarizing the change probability results of all pixels to obtain the change detection prediction result CM, and calculating the loss function from the degree of similarity between the prediction result CM and the truth value CM*.
Further preferably, the loss function is expressed by the following formula:
E = E_bce + λ·E_dc
wherein λ is a weight control parameter regulating the ratio between E_bce and E_dc, E_bce is the binary cross-entropy loss, and E_dc is the Dice coefficient loss.
Compared with the prior art, the optical satellite image pixel-level change detection method based on multi-scale feature fusion has at least the following advantages:
For detecting change locations between satellite remote sensing images acquired at different times, the invention proposes a multi-scale feature fusion strategy that effectively resists registration errors between satellite images and thereby improves the precision of pixel-level change detection. The half-group convolution designed by the invention effectively improves the processing efficiency of the model and reduces the processing time of change detection. Built on the multi-scale feature fusion strategy and the half-group convolution module, the change detection model constructed by the method has stronger feature extraction capability and higher processing efficiency, can withstand registration errors between satellite images, and is better suited to pixel-level change detection of optical satellite images.
Drawings
FIG. 1 is a schematic diagram of a multi-scale feature fusion strategy proposed by the present invention;
FIG. 2 is a diagram of a half set of convolution modules according to the present invention;
FIG. 3 is a schematic structural diagram of a change detection model according to the present invention;
FIG. 4 is a diagram of some examples of change detection on test data;
FIG. 5 is a flowchart of the optical satellite image pixel-level change detection method based on multi-scale feature fusion according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the detailed description.
As shown in FIG. 1 and in order of implementation, the optical satellite image pixel-level change detection method based on multi-scale feature fusion according to an embodiment of the present invention comprises input-output definition, model training, and use.
As shown in FIG. 5, in use the method comprises:
S1, acquiring a plurality of groups of change image pairs to be detected and inputting them into a trained change detection model; the change detection model adopts a convolutional neural network incorporating a multi-scale feature fusion strategy; the multi-scale feature fusion strategy applies different processing to different types of images to obtain a plurality of feature maps with different scales and different feature levels;
S2, taking the feature maps processed by the multi-scale feature fusion strategy as the input of the convolutional neural network, wherein the convolutional neural network comprises a plurality of half-group convolution modules, each comprising a separation part and a cascade part; the separation part regroups part of the input feature maps to form a plurality of sub-feature maps, and the cascade part cascades the input feature maps with the sub-feature maps;
S3, performing binary classification on the final fused feature map formed after multiple separations and cascades with a classifier to obtain the final change detection result map.
The model definition and training comprises the following steps:
input-output definition: the input data of the method are two remote sensing images I which need to be subjected to change detection 1 And I 2 .I 1 And I 2 The method is characterized in that large-range remote sensing images acquired in different periods are acquired through strict geometric correction, orthorectification, resampling and other steps. The output data of the method is a binary change detection result image, namely a binary image CM. Image I 1 、I 2 And CM have identical image size, ground resolution and geographic coverage. In the image CM, the pixel value C (m, n) =0 in the mth column and nth row indicates that no feature change has occurred at the position, and C (m, n) =1 indicates that a feature change has occurred at the position.
Model training: the proposed change detection model Φ = {Θ, K, Γ} is trained with a large amount of manually annotated data {(I1, I2, CM*)_t | t = 1, 2, …, T}. Here CM* is the manually annotated change patch, hereinafter referred to as the truth value; Θ represents the model parameters to be trained; K represents the designed network feature maps; Γ denotes the change detection classifier. The invention adopts a multi-scale feature fusion strategy in the proposed change detection model Φ to improve the robustness of the model to image registration errors and the change detection precision; the structure of the strategy is shown in FIG. 1. A half-group convolution module is designed to improve the detection efficiency of the model; its structure is shown in FIG. 2. During training, the output of the network Φ is K_c, where c denotes the number of feature map channels. Feature dimension reduction is performed on K_c to obtain the feature map K_1, and the change detection classifier Γ = {K_1, 2} then performs binary classification on K_1 to obtain the binary change detection result map CM. The training process is supervised by computing the degree of similarity between the model prediction result map CM and the actual change condition CM*, and the learnable parameters in the model are updated by a back-propagation strategy. Training requires iteration: the loss function is reduced and model performance improved by continuously updating the model parameters until an iteration stop condition is met.
Model prediction: change detection is performed on the images to be detected using the fully trained model Φ = {Θ, K, Γ} to obtain a binary change detection map. During use, the model parameters Φ = {Θ, K, Γ} are fixed.
Preferably, the convolutional neural network model Φ used in model training includes a multi-scale feature fusion strategy Ψ = {F_i, DI_j}. The strategy includes two types of image processing operations: a spatial feature extraction operation F and an image down-sampling operation DI. The subscript i in F_i indicates the output feature map obtained by each of the i spatial feature extraction branches; the subscript j in DI_j indicates the output obtained by each of the j image down-sampling branches.
A specific use of this structure comprises the following sub-steps:
step (i): fig. 1 shows a multi-scale feature fusion strategy proposed by the present invention. For an input image, the processing procedure outputs a plurality of image features with different scales and different feature levels. Preferably, the spatial feature extraction branch and the downsampling processing branch in the multi-scale feature fusion strategy are the same in number, i.e. i = j.
Step (ii): preferably, each spatial feature extraction branch uses two convolution kernels of size {3 × 3}, and the convolution stride of F_i is 2^(i-1). The spatial feature extraction operation is followed by a pooling operation to obtain the output feature map F_i of the branch. Preferably, maximum pooling is used, with a pooling stride of 2.
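As an illustration of the pooling described above, the following is a minimal pure-Python sketch of a stride-2 maximum pooling over a 2D grid; the two preceding {3 × 3} convolutions are omitted, and the function name is illustrative, not from the patent:

```python
# Sketch of the stride-2 max pooling that closes each spatial
# feature extraction branch F_i (convolutions omitted; illustrative).
def max_pool_2x2(img):
    """Stride-2 max pooling over a 2D grid given as a list of lists;
    each output cell is the maximum of a 2x2 input block."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[max(img[2 * i][2 * j], img[2 * i][2 * j + 1],
                 img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1])
             for j in range(w)] for i in range(h)]

pooled = max_pool_2x2([[1, 2, 5, 6],
                       [3, 4, 7, 8],
                       [9, 1, 2, 3],
                       [1, 1, 4, 0]])
# pooled is a 2x2 grid: each value is the max of one 2x2 block
```

Each pooling halves both spatial dimensions, which is what makes the branch outputs form a scale pyramid.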
Step (iii): the down-sampling operation is preferably performed using bilinear interpolation, and the down-sampling scale of DI_1 is 2^(i+1); the subsequent branches DI_2 to DI_j have scales {2^(i+2), …, 2^(i+j)}. Preferably, the image is down-sampled and then convolved to obtain the output DI_j of the branch; preferably, a convolution kernel of size {1 × 1} is used.
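The bilinear down-sampling used by the image processing branches DI_j can be sketched in pure Python as follows; this is a hedged illustration, not the patent's implementation: the function name, the sampling at target pixel centres, and the omission of the trailing {1 × 1} convolution are all illustrative choices:

```python
# Illustrative bilinear down-sampling of a 2D grid by an integer scale.
def bilinear_downsample(img, scale):
    """Down-sample a 2D grid (list of lists of floats) by `scale`,
    bilinearly interpolating at the new pixel centres."""
    h, w = len(img), len(img[0])
    nh, nw = h // scale, w // scale
    out = []
    for i in range(nh):
        row = []
        for j in range(nw):
            # source coordinates of the target pixel centre
            y = (i + 0.5) * scale - 0.5
            x = (j + 0.5) * scale - 0.5
            y0 = max(0, min(h - 2, int(y)))
            x0 = max(0, min(w - 2, int(x)))
            dy, dx = y - y0, x - x0
            v = (img[y0][x0] * (1 - dy) * (1 - dx)
                 + img[y0][x0 + 1] * (1 - dy) * dx
                 + img[y0 + 1][x0] * dy * (1 - dx)
                 + img[y0 + 1][x0 + 1] * dy * dx)
            row.append(v)
        out.append(row)
    return out

# one pyramid level per branch: scales 2, 4, 8, ...
img = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
half = bilinear_downsample(img, 2)
```

Running the branch repeatedly with scales 2, 4, 8, … yields the multi-scale inputs the fusion strategy consumes.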
Preferably, the convolutional neural network model Φ used in model training includes half-group convolution modules Ω = {S_p, C_p}. The half-group convolution module Ω contains two components: a separation part S_p and a cascade part C_p. The separation part regroups part of the channels of the input feature map, and the cascade part cascades the remaining, non-separated channels of the input feature map. The feature maps of the separation part are processed by corresponding convolution operations and further cascaded with the feature maps of the cascade part to obtain the final output feature map.
The specific use of the module comprises the following sub-steps:
step (i): fig. 2 shows a half set of convolution modules designed by the present invention. The half-set convolution splits a process from an input profile to an output profile into a split part and a concatenated part S p And C p . Preferably, S is p And C p Have the sameThe number of channels in (2) is 1/2 of the number of channels of the input feature map.
Step (ii): preferably, the separation part S_p divides its feature map into g groups, each group holding 1/g of the separated channels. After grouping, each group is followed by two convolution kernels of size {3 × 3} for high-level feature extraction.
Step (iii): preferably, the cascade part cascades the given feature maps along the channel dimension, i.e. the feature maps are joined together in the channel dimension.
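The channel bookkeeping of the half-group convolution module Ω = {S_p, C_p} can be illustrated with plain Python lists standing in for feature-map channels; the group convolutions themselves are omitted and all names are illustrative, not the patent's:

```python
# Illustrative channel split/cascade of a half-group convolution module.
def half_group_split(channels, g):
    """Split a channel list in half: the separated part S_p is divided
    into g groups (for per-group convolution, omitted here), while the
    cascade part C_p is passed through unchanged."""
    half = len(channels) // 2
    sep, passthrough = channels[:half], channels[half:]
    group_size = len(sep) // g
    groups = [sep[k * group_size:(k + 1) * group_size] for k in range(g)]
    return groups, passthrough

def cascade(parts):
    """Cascade (concatenate) channel lists along the channel dimension."""
    out = []
    for p in parts:
        out.extend(p)
    return out

feats = [f"ch{i}" for i in range(8)]          # an 8-channel feature map
groups, keep = half_group_split(feats, g=2)   # S_p: 2 groups of 2; C_p: 4 channels
merged = cascade(groups + [keep])             # back to 8 channels
```

Because only half the channels pass through convolutions, the module does roughly half the convolution work of a full layer, which is the efficiency argument the patent makes.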
Fig. 3 shows an efficient optical satellite image pixel-level variation detection model based on a multi-scale feature fusion strategy designed by the present invention. Preferably, the specific training process of the model Φ includes the following sub-steps:
step I: will change the detection image to { (I) 1 ,I 2 ,CM * ) t I T =1,2, \8230, T is used as the input of the model phi, and an input characteristic diagram L under different scales is obtained through a multi-scale characteristic fusion strategy 1-0, …L (i+j)-0
Step II: through two branch pairs L 1-0 Extracting high-level features to obtain a feature map L 2-1 And L 2-2 . Characteristic diagram L 2-0 ,L 2-1 And L 2-2 Form the first packet convolution omega 1 Is input. Omega 1 The output characteristic map S obtained by the separation section of p1 ={L 3-1 ,L 3-2 ,L 3-3 Fourthly, obtaining an output characteristic diagram C of the cascade part p1 ={L 2-c }。
Step III: characteristic diagram L 3-0 And omega 1 The output characteristic diagram is cascaded to obtain L 3-4 . Characteristic diagram L 3-0 ,Ω 1 The separation section of (2) outputs a result S p1 And characteristic diagram L 3-4 Together forming a second packet convolution omega 2 Is input. The outputs of which are respectively S p2 ={L 4-1 ,L 4-2 ,L 4-3 ,L 4-4 } and C p2 ={L 3-c }。
Step IV, feature map L 4-0 And omega 2 The output characteristic diagram is cascaded to obtain L 4-5
V, supplementing deep image information, model omega to L 3-c And L 4-5 Carrying out cascade processing to obtain a characteristic diagram L 4-6 And performing a channel compression operation to obtain L 4-c
VI, comparing the characteristic diagram L 4-c Performing deconvolution operation and upsampling operation, and comparing with the feature map L 3-c Cascading to obtain a characteristic diagram L 3-u . For feature map set { L 3-u ,L 2-c And { L } 2-u ,L 2-c Repeating the step VI to obtain L in sequence 2-u And L 1-u
Step VII: deconvolution is performed on L1-u to obtain the convolutional-layer feature map Kc(m, n), where c denotes the number of feature map channels and (m, n) the row and column indices of the image. Dimension transformation is performed on Kc to obtain K1, where 1 indicates that the result is a single-channel vector. A classifier Γ = {K1, 2} is added after the convolutional layers; Γ performs binary classification on the input feature vector K1. Preferably, the classifier Γ may be defined as:
F(f_i) = exp(f_i) / Σ_{j=1}^{2} exp(f_j)
wherein f_i is the output vector of the convolutional layer, exp() is the exponential function, and F(f_i) is the classification output. As a binary classification task, the output range of F(f_i) is [0, 1], representing the probability that pixel (m, n) has changed. The change probability results of all pixels are binarized to obtain the change detection prediction result CM.
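A hedged sketch of this classification and binarization step, assuming a two-class softmax over per-pixel logit vectors and a 0.5 binarization threshold (the threshold value is an assumption, not stated in the patent):

```python
import math

# Illustrative two-class softmax classifier Γ followed by binarization
# into the change map CM (per-pixel; names and threshold are assumed).
def softmax(f):
    e = [math.exp(v) for v in f]
    s = sum(e)
    return [v / s for v in e]

def classify(logits, threshold=0.5):
    """logits: per-pixel two-element vectors [no-change, change].
    Returns the change probabilities and the binarized change map."""
    probs = [softmax(f)[1] for f in logits]          # P(change) per pixel
    cm = [1 if p > threshold else 0 for p in probs]  # binarized CM
    return probs, cm

probs, cm = classify([[2.0, 0.0], [0.0, 3.0]])
# first pixel strongly favours "no change", second favours "change"
```

The softmax guarantees each probability lies in [0, 1], matching the stated output range of F(f_i).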
Preferably, the loss function used in the model training step consists of a binary cross-entropy loss function E_bce and a Dice coefficient loss function E_dc, where E_bce and E_dc can be defined as:
E_bce = -(1/N) · Σ_{n=1}^{N} [ y_n·log(p_n) + (1 - y_n)·log(1 - p_n) ]
wherein N is the total number of pixels of image I1; y_n = 1 indicates that pixel n has changed, y_n = 0 that it has not, and p_n denotes the predicted change probability.
E_dc = 1 - (2·|Y ∩ Ŷ|) / (|Y| + |Ŷ|)
wherein Y denotes the given ground-truth change map and Ŷ denotes the predicted change result map.
Preferably, the loss function used in the model training process may be defined as:
E = E_bce + λ·E_dc
wherein λ is a weight control parameter regulating the ratio between E_bce and E_dc. Preferably, it is set to 0.5.
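The combined loss E = E_bce + λ·E_dc with λ = 0.5 can be sketched in pure Python as follows; the exact per-pixel weighting of the patent's E_bce is reconstructed here as the standard binary cross-entropy, which is an assumption, and all function names are illustrative:

```python
import math

# Illustrative combined loss E = E_bce + lambda * E_dc (lambda = 0.5).
def bce_loss(y_true, p_pred, eps=1e-7):
    """Standard binary cross-entropy over pixel labels/probabilities."""
    n = len(y_true)
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for y, p in zip(y_true, p_pred)) / n

def dice_loss(y_true, p_pred, eps=1e-7):
    """Dice coefficient loss: 1 - 2|Y ∩ Ŷ| / (|Y| + |Ŷ|)."""
    inter = sum(y * p for y, p in zip(y_true, p_pred))
    return 1 - (2 * inter + eps) / (sum(y_true) + sum(p_pred) + eps)

def total_loss(y_true, p_pred, lam=0.5):
    return bce_loss(y_true, p_pred) + lam * dice_loss(y_true, p_pred)

y = [1, 0, 1, 1]          # ground-truth change labels per pixel
p = [0.9, 0.1, 0.8, 0.7]  # predicted change probabilities
loss = total_loss(y, p)
```

The Dice term counteracts class imbalance (changed pixels are usually rare), while the cross-entropy term provides smooth per-pixel gradients.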
The training process requires iteration: the loss function is reduced and network performance improved by continuously updating the model parameters until an iteration stop condition is met.
Model prediction: the network Φ is fixed, and change detection is performed on each pair of images to be detected to obtain a change detection result map CM of corresponding size. FIG. 4 shows the change detection truth values and the change detection results obtained by the method of the present invention for images 1 and 2.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments. Obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (9)

1. An optical satellite image pixel level change detection method based on multi-scale feature fusion is characterized by comprising the following steps:
S1, acquiring a plurality of groups of change image pairs to be detected and inputting them into a trained change detection model; the change detection model adopts a convolutional neural network incorporating a multi-scale feature fusion strategy; the multi-scale feature fusion strategy applies different processing to different types of images to obtain a plurality of feature maps with different scales and different feature levels;
S2, taking the feature maps processed by the multi-scale feature fusion strategy as the input of the convolutional neural network, wherein the convolutional neural network comprises a plurality of half-group convolution modules, each comprising a separation part and a cascade part; the separation part regroups part of the input feature maps to form a plurality of sub-feature maps, and the cascade part cascades the input feature maps with the sub-feature maps;
S3, performing binary classification on the final fused feature map formed after multiple separations and cascades with a classifier to obtain the final change detection result map.
2. The optical satellite image pixel-level change detection method based on multi-scale feature fusion according to claim 1, wherein in S1, the change images to be detected are large-range remote sensing images acquired at two different periods and subjected to geometric correction, orthorectification and resampling.
3. The method for detecting pixel-level variation of optical satellite images based on multi-scale feature fusion according to claim 1, wherein in S1, the multi-scale feature fusion strategy includes setting a plurality of spatial feature extraction branches and a plurality of image down-sampling processing branches, and the number of the spatial feature extraction branches and the number of the down-sampling processing branches are the same.
4. The optical satellite image pixel-level change detection method based on multi-scale feature fusion according to claim 3, wherein each spatial feature extraction branch uses two 3 × 3 convolution kernels followed by a pooling layer to extract features, obtaining the feature map of each spatial feature extraction branch.
5. The method for detecting pixel-level changes of optical satellite images based on multi-scale feature fusion as claimed in claim 3, wherein the down-sampling process of each image down-sampling processing branch is performed by using a bilinear interpolation method.
6. The method for detecting pixel-level changes in optical satellite images based on multi-scale feature fusion according to claim 1, wherein in S2, the feature maps processed by the multi-scale feature fusion strategy serve as the input of a convolutional neural network, and a fused feature map is finally formed using a plurality of half-group convolution modules, comprising the following steps:
S201, the change image pairs {(I1, I2, CM*)t | t = 1, 2, …, T} are used as the input of the change detection model Φ, and the input feature maps L1-0, …, L4-0 at different scales are obtained through the multi-scale feature fusion strategy;
S202, high-level features are extracted from L1-0 through two branches to obtain feature maps L2-1 and L2-2; feature maps L2-0, L2-1 and L2-2 form the input of the first half-group convolution module Ω1; the separation part of Ω1 yields the output feature map set Sp1 = {L3-1, L3-2, L3-3}, and the concatenation part yields the output feature map Cp1 = {L2-c};
S203, feature map L3-0 is concatenated with the output feature map of Ω1 to obtain L3-4; the separation-part output Sp1 of Ω1 and feature map L3-4 together form the input of the second half-group convolution module Ω2; the separation and concatenation outputs of Ω2 are Sp2 = {L4-1, L4-2, L4-3, L4-4} and Cp2 = {L3-c}, respectively;
S204, feature map L4-0 is concatenated with the output feature map of Ω2 to obtain L4-5;
S205, to supplement deep image information, the model Φ concatenates L3-c and L4-5 to obtain feature map L4-6, and a channel compression operation is performed to obtain L4-c;
S206, feature map L4-c is deconvolved and up-sampled, then concatenated with feature map L3-c to obtain feature map L3-u; this step is repeated for the feature map sets {L3-u, L2-c} and {L2-u, L2-c} to obtain L2-u and L1-u in turn.
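The claim fixes only the data flow of a half-group convolution module — a separation part that emits one map per input and a concatenation part that cascades the inputs — not its internal convolutions. A shape-level sketch under that reading (the 2 × 2 mean pooling standing in for the separation part's learned convolutions is an assumption):

```python
import numpy as np

def half_group_module(inputs):
    """Shape-level sketch of one half-group convolution module.

    Separation part: one spatially halved output per input map
    (a 2x2 mean pool stands in for the module's convolutions).
    Concatenation part: a channel-wise cascade of the same-size inputs.
    """
    sep = [x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))
           for x in inputs]
    cat = np.stack(inputs, axis=0)
    return sep, cat
```

Chaining two such modules reproduces the bookkeeping of S202–S203: the separation outputs of the first module join the next scale's input set, while the concatenation output is carried forward for the decoder-side cascades of S205–S206.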
7. The method for detecting pixel-level changes in optical satellite images based on multi-scale feature fusion according to claim 1, wherein in S3, when the final fused feature map formed by multiple separations and concatenations is classified by a classifier, the classifier is expressed by the following formula:
F(f_i) = exp(f_i) / Σ_j exp(f_j)
wherein f_i is the output vector of the convolutional layer, exp() is the exponential function, and F(f_i) is the classification output; as a binary classification task, the output range of F(f_i) is [0, 1], indicating the probability that a pixel has changed.
8. The method for detecting pixel-level changes in optical satellite images based on multi-scale feature fusion according to claim 7, wherein S3 further comprises binarizing the change probabilities of all pixels to obtain the change detection prediction result CM, and computing the loss function from the degree of similarity between the prediction result CM and the ground truth CM*.
9. The method for detecting pixel-level changes in optical satellite images based on multi-scale feature fusion according to claim 8, wherein the loss function is expressed by the following formula:
E = E_bce + λ·E_dc
wherein λ is a weight control parameter regulating the ratio of E_bce to E_dc, E_bce is the binary cross-entropy loss, and E_dc is the Dice coefficient loss.
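A minimal sketch of the combined loss on flattened probability maps (the clipping constant and the Dice smoothing term eps are implementation assumptions not fixed by the claim):

```python
import numpy as np

def bce_dice_loss(pred, target, lam=1.0, eps=1e-7):
    """E = E_bce + lambda * E_dc for per-pixel change probabilities."""
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    # Binary cross-entropy term.
    e_bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Dice-coefficient term (1 - Dice overlap).
    inter = np.sum(pred * target)
    e_dc = 1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return e_bce + lam * e_dc
```

A perfect prediction drives both terms toward zero; λ trades pixel-wise calibration (BCE) against region-overlap quality (Dice), which matters when changed pixels are a small minority of the image.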
CN202211320523.5A 2022-10-26 2022-10-26 Optical satellite image pixel level change detection method based on multi-scale feature fusion Active CN115526886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211320523.5A CN115526886B (en) 2022-10-26 2022-10-26 Optical satellite image pixel level change detection method based on multi-scale feature fusion


Publications (2)

Publication Number Publication Date
CN115526886A true CN115526886A (en) 2022-12-27
CN115526886B CN115526886B (en) 2023-05-26

Family

ID=84703555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211320523.5A Active CN115526886B (en) 2022-10-26 2022-10-26 Optical satellite image pixel level change detection method based on multi-scale feature fusion

Country Status (1)

Country Link
CN (1) CN115526886B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348036A (en) * 2020-11-26 2021-02-09 北京工业大学 Self-adaptive target detection method based on lightweight residual learning and deconvolution cascade
CN113420662A (en) * 2021-06-23 2021-09-21 西安电子科技大学 Remote sensing image change detection method based on twin multi-scale difference feature fusion
CN113706482A (en) * 2021-08-16 2021-11-26 武汉大学 High-resolution remote sensing image change detection method
CN114913434A (en) * 2022-06-02 2022-08-16 大连理工大学 High-resolution remote sensing image change detection method based on global relationship reasoning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ABOLFAZL ABDOLLAHI et al.: "Building footprint extraction from high resolution aerial images using generative adversarial network architecture" *
LIMING ZHOU: "Ship target detection in optical remote sensing images based on multiscale feature enhancement" *
YAN Hui; XU Lifeng; XU Kan: "Combinatorial convergence of sequences of probability measures on locally compact H-semigroups", Advances in Mathematics (China) *

Also Published As

Publication number Publication date
CN115526886B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
US10984289B2 (en) License plate recognition method, device thereof, and user equipment
CN109102469B (en) Remote sensing image panchromatic sharpening method based on convolutional neural network
CN111695467B (en) Spatial spectrum full convolution hyperspectral image classification method based on super-pixel sample expansion
CN110443805B (en) Semantic segmentation method based on pixel density
CN111612017B (en) Target detection method based on information enhancement
CN112561910A (en) Industrial surface defect detection method based on multi-scale feature fusion
CN112862774B (en) Accurate segmentation method for remote sensing image building
CN111860683B (en) Target detection method based on feature fusion
CN107491793B (en) Polarized SAR image classification method based on sparse scattering complete convolution
CN114283120B (en) Domain-adaptive-based end-to-end multisource heterogeneous remote sensing image change detection method
CN113159452B (en) Wind power cluster power prediction method based on time-space correlation
CN110119805B (en) Convolutional neural network algorithm based on echo state network classification
CN111325134B (en) Remote sensing image change detection method based on cross-layer connection convolutional neural network
CN115457311B (en) Hyperspectral remote sensing image band selection method based on self-expression transfer learning
CN113298032A (en) Unmanned aerial vehicle visual angle image vehicle target detection method based on deep learning
CN114898217A (en) Hyperspectral classification method based on neural network architecture search
CN115861260A (en) Deep learning change detection method for wide-area city scene
CN112801204B (en) Hyperspectral classification method with lifelong learning ability based on automatic neural network
CN107358625B (en) SAR image change detection method based on SPP Net and region-of-interest detection
CN113221997A (en) High-resolution image rape extraction method based on deep learning algorithm
CN112529908A (en) Digital pathological image segmentation method based on cascade convolution network and model thereof
CN117392065A (en) Cloud edge cooperative solar panel ash covering condition autonomous assessment method
CN115526886A (en) Optical satellite image pixel level change detection method based on multi-scale feature fusion
CN112084941A (en) Target detection and identification method based on remote sensing image
CN116597203A (en) Knowledge distillation-based anomaly detection method for asymmetric self-encoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhao Dou, Tan Zhao, Qi Chunyu, Wang Aihui, Li Pingcang, Zhao Zhenyang, Zhou Wenming, Zhao Lihua, Zhang Zhijun, Ding Feng, Hu Chaopeng, Zhang Hao, Gan Jun, Zhang Guanjun

Inventor before: Zhou Wenming, Qi Chunyu, Wang Aihui, Li Pingcang, Zhao Zhenyang, Zhao Lihua, Zhang Zhijun, Ding Feng, Hu Chaopeng, Zhang Hao, Gan Jun, Zhang Guanjun, Tan Zhao

GR01 Patent grant