CN114897880A - Remote sensing image change detection method based on adaptive image regression - Google Patents
Remote sensing image change detection method based on adaptive image regression
- Publication number
- CN114897880A (application CN202210650366.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- regression
- remote sensing
- images
- adaptive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 230000008859 change Effects 0.000 title claims abstract description 59
- 238000001514 detection method Methods 0.000 title claims abstract description 59
- 238000012549 training Methods 0.000 claims abstract description 38
- 230000004927 fusion Effects 0.000 claims abstract description 13
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 11
- 238000012545 processing Methods 0.000 claims abstract description 5
- 238000005728 strengthening Methods 0.000 claims abstract description 3
- 238000000034 method Methods 0.000 claims description 45
- 230000003044 adaptive effect Effects 0.000 claims description 10
- 238000013145 classification model Methods 0.000 claims description 8
- 238000012360 testing method Methods 0.000 claims description 7
- 238000004458 analytical method Methods 0.000 claims description 5
- 239000000284 extract Substances 0.000 claims description 2
- 230000008569 process Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 238000003384 imaging method Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 3
- 238000013507 mapping Methods 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 230000002457 bidirectional effect Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 238000002474 experimental method Methods 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 230000004913 activation Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Molecular Biology (AREA)
- Quality & Reliability (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a remote sensing image change detection method based on adaptive image regression, comprising the following steps. Step 1): process the pre-event and post-event images with a multi-output adaptive regression model to generate a regression image. Step 2): calculate the difference images between the regression image obtained in step 1) and the pre-event and post-event images. Step 3): analyze the difference image obtained in step 2) with a fuzzy local information C-means algorithm to obtain a number of significant sample pairs, i.e. changed sample pairs and unchanged sample pairs. Step 4): enhance the features of the pre-event and post-event images with an association-relation-driven fusion method (AF), and fuse the original image features with the enhanced features. Step 5): construct a training set from the significant sample pairs of step 3), train a classifier model with the fused features of step 4), and predict the change detection result. The invention achieves high detection accuracy at low time cost, effectively avoids interference from noise information, and adapts to change detection tasks in complex ground-object scenes.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing and pattern recognition, in particular to a remote sensing image change detection method based on adaptive image regression.
Background
Because heterogeneous remote sensing images are produced by sensors with different imaging characteristics, the feature spaces of the pre-event and post-event images are inconsistent, and directly comparing them often generates a large amount of noise. Homogeneous change detection methods therefore struggle to reach high accuracy in heterogeneous scenes, a limitation that heterogeneous change detection methods aim to overcome. Existing heterogeneous methods are mainly based on classification, similarity, or deep learning. Classification-based methods depend on classifier performance and easily accumulate errors; in particular, they are susceptible to the speckle noise of synthetic aperture radar (SAR) images and often misclassify. Similarity-based methods rely only on unchanged pixel pairs, which poorly characterize the heterogeneity between the data, yielding low accuracy in complex ground-object scenes. Deep-learning-based methods require a large amount of labeled training data for supervision, and their training process is complex and time-consuming. In summary, the main shortcomings of existing heterogeneous remote sensing image change detection methods are: 1) susceptibility to noise interference and misclassification; 2) low detection accuracy and poor adaptability to complex ground-object scenes; 3) dependence on labeled training data, with a complex and time-consuming training process.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide a remote sensing image change detection method based on adaptive image regression that achieves high detection accuracy at low time cost, effectively avoids interference from noise information, and adapts to change detection tasks in complex ground-object scenes.
To achieve this purpose, the invention adopts the following technical scheme:
A remote sensing image change detection method based on adaptive image regression comprises the following steps:
Step 1): process the pre-event and post-event images with a multi-output adaptive regression model to generate a regression image.
Step 2): calculate the difference images between the regression image obtained in step 1) and the pre-event and post-event images.
Step 3): analyze the difference image obtained in step 2) with a fuzzy local information C-means algorithm to obtain a number of significant sample pairs, i.e. changed sample pairs and unchanged sample pairs.
Step 4): enhance the features of the pre-event and post-event images with an association-relation-driven fusion method (AF), and fuse the original image features with the enhanced features.
Step 5): construct a training set from the significant sample pairs of step 3), train a classifier model with the fused features of step 4), and predict the change detection result.
In step 1), based on the multi-output adaptive regression model, whichever of the pre-event and post-event images contains more information serves as the features of the training and test sets, and the other image serves as the labels of the training set; the regression model is trained along the adaptive regression direction to obtain the regression image for subsequent analysis. This strategy ensures that the model extracts latent features from the information-richer remote sensing image to represent the information-poorer one.
The quality of the difference image in step 2) directly influences the final change detection result: the larger the gray value of the difference image, the more likely the corresponding position has changed. Since the regression image has a more consistent feature representation with the original remote sensing image, a higher-quality difference image can be obtained; the difference images between the regression image and the pre-event and post-event originals are therefore computed to describe the changed areas more accurately.
Step 3) segments the difference image with the FLICM clustering method, with the number of clusters set to 3 (an unchanged region with small difference values, an uncertain region with intermediate difference values, and a changed region with large difference values), so that the hard-to-divide uncertain region in the middle can be identified. Isolated noise is then filtered out to further ensure the reliability of the significant samples, finally yielding reliable significant sample pairs, i.e. changed sample pairs and unchanged sample pairs.
Step 4) encodes the data into a new feature space using the association relations between the high-order information and the features of the data. First, the pre-event and post-event remote sensing images are stacked to construct an initial data set, whose high-order information is modeled to improve the nonlinear expression capability of the original data and obtain enhanced features. The resulting high-order data set can then assist in training a classification model to obtain a better classification result, so the original remote sensing data and the enhanced features are fused into a feature data set for decision making.
Step 5) constructs a training data set from the reliable significant sample pairs obtained in step 3), trains a multi-layer perceptron classifier model assisted by the fused features of step 4), inputs the pre-event and post-event remote sensing images into the model, and finally generates a high-precision binary change detection result image.
The invention has the beneficial effects that:
the invention provides a remote sensing image change detection method based on self-adaptive image regression, which adopts the expression capability of the nonlinear depth characteristics of a multilayer perceptron to obtain a regression image, and does not consume a large amount of calculation time while utilizing the depth characteristics; the difference analysis is carried out based on the fuzzy local information C mean value algorithm, so that uncertain regions which are difficult to divide in the middle can be identified, isolated noise is filtered, and the reliability of the obvious sample pairs is further ensured; in addition, incidence relations among different features are considered, high-order information of remote sensing image data is fully utilized, and the expression capacity of the features is enhanced.
The method achieves high detection accuracy, effectively suppresses the influence of noise, and still provides high-precision change detection results in complex ground-object scenes.
Specifically, the method first exploits the information-content difference among multi-modal data to adapt the regression direction based on information-entropy theory, obtaining, via a multi-output multi-layer perceptron image regression algorithm, a regression image whose feature space resembles that of the pre-event or post-event image. Second, difference images are computed from the regression image and the original remote sensing data, and a difference analysis method based on the fuzzy local information C-means (FLICM) algorithm is proposed to find a number of significant sample pairs for subsequent detection. Third, to account for the association relations among different features of the remote sensing data and fully utilize the latent high-order information in the data, a fusion algorithm based on association-relation features enhances the stacked features of the original images. Finally, a classification model is trained with the fused features to generate a high-precision change detection result map.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a schematic diagram of the multi-output multi-layered perceptron adaptive image regression algorithm of the present invention.
FIG. 3 is a graph showing the effect of the hyperparameter feature enhancement rate L on the change detection result of the method of the present invention.
Fig. 4 is a diagram of a binary change detection result of the remote sensing image change detection method based on adaptive image regression according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The specific steps of the present invention are described in detail with reference to fig. 1 and 2.
Step one: process the pre-event and post-event images with the multi-output adaptive regression model to generate the regression image. The first step determines the regression direction through a learning strategy that adapts the optimal regression direction, in order to obtain the optimal regression image adaptively and ensure that the regression image has a more consistent visual representation with the original image. The core idea is to generate the regression image with the multi-output adaptive regression model: whichever of the pre-event and post-event images contains more information serves as the features of the training and test sets, the other image serves as the labels of the training set, and the regression model is trained along the adaptive regression direction to obtain the regression image for subsequent analysis.
Because different sensors follow different imaging principles, the imaging effects of multi-modal data often differ greatly, and so does the information content of the remote sensing images. When the regression model is trained, reconstruction errors in the mapping between the pre-event and post-event images inevitably lose information during regression. In fact, bidirectional regression of the pre-event and post-event images does not necessarily yield a better regression effect; to obtain an optimal regression image adaptively and ensure that it has a more consistent visual representation with the original image, the invention designs a learning strategy for the adaptive optimal regression direction.
Assume the pre-event and post-event images are denoted X ∈ ℝ^(H×W×B_X) and Y ∈ ℝ^(H×W×B_Y), where H and W are the height and width of the image and B_X and B_Y are the numbers of channels of X and Y, respectively. Entropy measures the uncertainty, i.e. the degree of disorder, of a random variable; the information entropy of a remote sensing image reflects the amount of information it contains and is directly related to its degree of gray-level variation. The information entropy is therefore used to measure the amount of information contained in a remote sensing image. Taking the pre-event image X as an example, suppose Q gray levels are distributed over the H × W pixels, p_i = f_i / (H × W) is the probability of occurrence of the i-th gray level, and f_i is its frequency. The amount of information contained in X is then

H(X) = -Σ_{i=1}^{Q} p_i log p_i
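The entropy measure above can be sketched as follows (a minimal NumPy illustration; the toy image arrays and the gray-level quantization are assumptions, not part of the patent):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy of an image's gray-level histogram: H = -sum p_i log p_i."""
    # Count the frequency f_i of each gray level over all H*W pixels.
    q = np.clip(img, 0, levels - 1).astype(int)
    counts = np.bincount(q.ravel(), minlength=levels)
    p = counts / counts.sum()          # p_i = f_i / (H * W)
    p = p[p > 0]                       # only occurring gray levels contribute
    return float(-np.sum(p * np.log(p)))

# A constant image carries no information; a varied one carries more.
flat = np.zeros((8, 8), dtype=int)
varied = np.arange(64).reshape(8, 8) % 4   # 4 equally frequent gray levels
```

For the `varied` image the four gray levels are equally likely, so the entropy equals log 4, the maximum for four levels.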
according to the difference of information content in the remote sensing image, the following self-adaptive regression direction strategy is provided:
if H (X) > H (Y) and | H (X) < H > < Y > | ε, the regression training set is Regression test set is T e Training a regression model to predict and obtain a regression image R [ { X | X ∈ X } ] X ;
If H (X) < H (Y) and | H (X) < H (Y) | > ε, the regression training set is Regression test set is T e Training a regression model to predict and obtain a regression image R [ [ Y ] epsilon ] Y ] ] Y ;
Thirdly, if | H (X) -H (Y) | is less than epsilon, simultaneously carrying out strategies of (i) and (ii), respectively using X and Y as test sets, and predicting to obtain a regression image R X And R Y And fusing the two parts of difference images when calculating the difference images in the second step.
Here x and y are the feature values of X and Y at the positions of the unchanged pixel pairs given by the prior information, each pair corresponding to a single pixel. In practice a small amount of unchanged prior information is available, i.e. the original image data contains labels for a portion of unchanged samples. In the invention the parameter ε defaults to 0 and is used to improve robustness in practical change detection scenarios: when the information contents of the pre-event and post-event images differ little, the results of forward and backward regression also differ little, so bidirectional regression can be enabled by adjusting ε, and combining the forward and backward differences yields a more robust difference image, providing an alternative for variable change detection scenarios.
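The adaptive regression-direction rule can be sketched as below (a hedged illustration: the `entropy` helper and the toy images are assumptions; the decision logic follows the three cases above):

```python
import numpy as np

def entropy(img, levels=256):
    # H = -sum p_i log p_i over the gray-level histogram (helper, assumed here).
    counts = np.bincount(np.clip(img.ravel(), 0, levels - 1), minlength=levels)
    p = counts[counts > 0] / img.size
    return float(-np.sum(p * np.log(p)))

def regression_direction(X, Y, eps=0.0):
    """Pick the regression direction from the entropy gap |H(X) - H(Y)|."""
    hx, hy = entropy(X), entropy(Y)
    if abs(hx - hy) <= eps:
        return "both"                          # strategy 3: run both and fuse
    # The information-richer image serves as the feature (test-set) side.
    return "X->Y" if hx > hy else "Y->X"

X = np.arange(64).reshape(8, 8) % 8            # 8 gray levels -> higher entropy
Y = np.arange(64).reshape(8, 8) % 2            # 2 gray levels -> lower entropy
```

With ε = 0 the bidirectional branch only triggers when the two entropies are exactly equal, matching the patent's default.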
As hidden layers are added, the feature learning capability of the network grows. For the regression requirements of remote sensing data, the invention builds a multi-layer perceptron regression model comprising 1 input layer, 8 hidden layers, and 1 output layer; the network structure is B_Y-16-32-64-128-128-64-32-16-B_X, with ReLU as the activation function. The training details are as follows: the maximum number of iterations is set to 100, the Adam optimizer is used to optimize the weights, and the L2 penalty factor is set to 0.0001. The regression direction is first determined by the adaptive regression strategy; then N = 8 × 100 training samples are selected from the pre-event and post-event images along the regression direction to form the regression training set T_r. After the multi-layer perceptron regression model is trained, it predicts on the regression test set T_e to obtain a regression matrix R, which is finally imaged to obtain the regression image R_Y.
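The regression step can be sketched with scikit-learn's `MLPRegressor` (the library choice and the random toy data are assumptions; the layer sizes, activation, optimizer, iteration cap, and L2 factor follow the paragraph above):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy stand-ins: B_Y-channel inputs regressed onto B_X-channel targets.
# The patent draws its N = 8 x 100 pairs from unchanged-pixel positions;
# random vectors with a linear toy mapping stand in for them here.
B_Y, B_X, N = 3, 4, 800
T_r_features = rng.random((N, B_Y))
T_r_labels = T_r_features @ rng.random((B_Y, B_X))

model = MLPRegressor(
    hidden_layer_sizes=(16, 32, 64, 128, 128, 64, 32, 16),  # 8 hidden layers
    activation="relu",
    solver="adam",
    alpha=1e-4,          # L2 penalty factor 0.0001
    max_iter=100,
    random_state=0,
)
model.fit(T_r_features, T_r_labels)

# Predict the regression matrix over the full test set, then image it.
T_e = rng.random((64 * 64, B_Y))
R = model.predict(T_e).reshape(64, 64, B_X)
```

`MLPRegressor` handles the multi-output case natively, so one model predicts all B_X output channels at once.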
Step two: calculate the difference images between the regression image obtained in step one and the pre-event and post-event images.
In the invention, the multi-temporal difference of the remote sensing images is used to extract pseudo-labels for a small number of samples; compared with samples whose feature differences are not obvious, significant samples better represent the real change trend. The quality of the difference image directly affects the final change detection result: the larger the gray value of the difference image, the more likely the corresponding position has changed. The regression image has a more consistent feature representation with the original remote sensing image, so a higher-quality difference image can be obtained and the changed areas can be described more accurately.
The difference image formulas corresponding to the different regression strategies are:

D①(h, w) = |r_X(h, w) − y(h, w)|
D②(h, w) = |r_Y(h, w) − x(h, w)|
D③(h, w) = (D①(h, w) + D②(h, w)) / 2

where |·| denotes the absolute distance, 1 ≤ h ≤ H, 1 ≤ w ≤ W, r_X and y are the pixel values of R_X and Y at the corresponding position, and r_Y and x are the pixel values of R_Y and X at the corresponding position.
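The three difference-image formulas translate directly into array operations (the toy single-channel images are assumptions for illustration):

```python
import numpy as np

def difference_images(R_X, Y, R_Y, X):
    """Absolute-distance difference images for the three regression strategies."""
    D1 = np.abs(R_X - Y)        # strategy 1: forward regression vs. post-event image
    D2 = np.abs(R_Y - X)        # strategy 2: backward regression vs. pre-event image
    D3 = (D1 + D2) / 2.0        # strategy 3: fused bidirectional difference
    return D1, D2, D3

# Toy example: a changed 2x2 block in Y shows up as large gray values in D1.
X = np.zeros((4, 4))
Y = np.zeros((4, 4)); Y[1:3, 1:3] = 1.0
R_X = np.zeros((4, 4))          # ideal regression of the unchanged scene
R_Y = np.zeros((4, 4))
D1, D2, D3 = difference_images(R_X, Y, R_Y, X)
```

Pixels inside the changed block get difference value 1.0 while the unchanged background stays at 0, matching the rule that larger gray values indicate more likely change.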
Step three: analyze the difference image obtained in step two with the fuzzy local information C-means algorithm to obtain a number of significant sample pairs, i.e. changed sample pairs and unchanged sample pairs.
The FLICM clustering method is adopted to segment the difference image, with the number of clusters set to 3 (an unchanged region with small difference values, an uncertain region with intermediate difference values, and a changed region with large difference values), so that the hard-to-divide uncertain region in the middle can be identified. Isolated noise is then filtered out to further ensure the reliability of the significant samples, finally yielding reliable significant sample pairs.
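The three-cluster difference analysis can be sketched with a plain fuzzy C-means — a deliberate simplification, since the patent's FLICM additionally weights each pixel by a local spatial-information term that this sketch omits; the synthetic difference values are also an assumption:

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=3, m=2.0, iters=50):
    """Plain fuzzy C-means on difference-image gray values (FLICM's local
    spatial term is omitted in this simplified sketch)."""
    v = values.ravel().astype(float)
    centers = np.linspace(v.min(), v.max(), c)      # deterministic init
    for _ in range(iters):
        d = np.abs(v[:, None] - centers[None, :]) + 1e-12
        # membership u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        centers = (u ** m).T @ v / (u ** m).sum(axis=0)
    return u, centers

# Synthetic difference values: unchanged (small), uncertain (middle), changed (large).
D = np.concatenate([np.full(40, 0.05), np.full(20, 0.50), np.full(40, 0.95)])
D = D + np.random.default_rng(1).normal(0.0, 0.01, D.size)
u, centers = fuzzy_cmeans_1d(D, c=3)
labels = u.argmax(axis=1)    # 0 = unchanged, 1 = uncertain, 2 = changed
```

The middle cluster is exactly the hard-to-divide uncertain region the patent sets aside before selecting significant sample pairs from the two outer clusters.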
Step four: enhance the features of the pre-event and post-event images with the association-relation-driven fusion method (AF), and fuse the original image features with the enhanced features. The fourth step designs the fusion rule in an interpretable manner, encoding the data into a new feature space using the association relations between the high-order information and the features of the data. First, the pre-event and post-event remote sensing images are stacked to construct an initial data set, whose high-order information is modeled to improve the nonlinear expression capability of the original data and obtain enhanced features. The resulting high-order data set can then assist in training a classification model to obtain a better classification result, so the original remote sensing data and the enhanced features are fused into a feature data set for decision making. This association-relation-based feature fusion makes full use of the complementary information among the different modalities of heterogeneous remote sensing image data to fuse the intrinsic features of the data; the fused features have stronger expression capability, effectively improving the change detection accuracy.
First, X and Y are stacked to construct an initial data set S. The high-order information of the features of S is modeled to improve the nonlinear expression capability of the original data, establishing a mapping Φ: S → E that lifts the data from the original (B_X + B_Y) dimensions to (B_X + B_Y)·L dimensions. The feature enhancement process can be written as

E = Φ(S) = [S, S², …, S^L]

where the hyperparameter L is the feature enhancement rate, i.e. the highest power used in the enhancement. After the initial data set S is converted into E, each sample s_i is correspondingly expanded to {e_j, 1 ≤ j ≤ (B_X + B_Y)·L}.
Association-relation-based feature fusion (AF): to measure the association relations between different features, a relation fusion matrix R is defined on the enhanced data E. Each element r(e_i, e_j) expresses the degree of correlation between features e_i and e_j, measured by the Pearson correlation coefficient:

r(e_i, e_j) = E[(e_i − μ_{e_i})(e_j − μ_{e_j})] / (σ_{e_i} · σ_{e_j})

where σ_{e_i} and σ_{e_j} are the standard deviations of e_i and e_j, and μ_{e_i} and μ_{e_j} are their mean values. The relation fusion matrix is then used to fuse the enhanced data E, establishing a mapping Ψ: E → C that fuses the different enhanced features; the larger L is, the more diverse the association relations. Following the Taylor series, the fusion strategy

c_j = Σ_k R_{kj} · e_k, with R_{kj} = r(e_k, e_j)

is adopted, where R_{kj} denotes the degree of correlation between e_k and e_j. The converted data set is C; given an original sample s_i, the corresponding enhanced and fused features are obtained.
The high-order data set C obtained by AF can assist in training the classifier to obtain a better classification result, so the original remote sensing data S and C are fused into the feature data set used for decision making. As can be seen from fig. 3, when L is 2 or 3 the change detection performance is better than that of the method based on the original features, while the number of features is smaller and the execution time shorter. Considering performance and time cost together, the invention recommends 2 or 3 as the default value of the hyperparameter L. In addition, experiments show that the association-relation-based feature fusion effectively improves the change detection performance of the method, because the fusion space has strong feature expression capability.
Step five: constructing a training set from the significant sample pairs of step three, training the classifier model with the fused features of step four, and predicting the change detection result.
In step five, a classifier model is trained to generate the change detection result. The purpose of change detection is to highlight differences between image pairs acquired at different times, which can essentially be cast as a binary classification task. Compared with clustering or thresholding the difference image, a classification-model-based method can weaken the noise influence in the difference image and obtain a more robust change result image. The method adopts a multilayer perceptron as the self-supervised classification model: a training data set is constructed from the reliable significant sample pairs obtained in step 3), the multilayer perceptron is trained with the fused features of step 4) as auxiliary input, the pre-event and post-event remote sensing images are input into the model, and a high-precision binary change detection result map is finally generated.
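As a didactic stand-in for the multilayer perceptron classifier described above (a minimal one-hidden-layer network in NumPy rather than the patent's model; the training data, hyperparameters, and function names are all assumptions):

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=500, seed=0):
    """Train a one-hidden-layer perceptron for binary change detection.

    X : (n, d) feature vectors of significant sample pairs.
    y : (n,) labels, 1 = changed, 0 = unchanged.
    Returns a predict(Xn) -> {0,1} function.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # change probability
        g = (p - y) / len(y)                      # log-loss gradient wrt logits
        gh = np.outer(g, W2) * (1 - h ** 2)       # back-prop through tanh
        W2 -= lr * h.T @ g
        b2 -= lr * g.sum()
        W1 -= lr * X.T @ gh
        b1 -= lr * gh.sum(axis=0)
    return lambda Xn: (1.0 / (1.0 + np.exp(-(np.tanh(Xn @ W1 + b1) @ W2 + b2))) > 0.5).astype(int)

# Toy "significant sample pairs": changed samples cluster high, unchanged low.
X = np.vstack([np.full((20, 3), 2.0), np.full((20, 3), -2.0)])
y = np.array([1] * 20 + [0] * 20)
predict = train_mlp(X, y)
```

In the method itself, X would hold the fused features of step 4) at the significant-pair locations of step 3), and `predict` would be applied to every pixel to produce the binary change map.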
Example: change detection on actual pre-event and post-event heterogeneous remote sensing images
The effectiveness and accuracy of the method are demonstrated by a set of change detection experiments on actual pre-event and post-event heterogeneous remote sensing images. As shown in fig. 4(a) and 4(b), the image pair, of size 412 × 300, is from the island of Sardinia, Italy: the pre-event remote sensing image fig. 4(a) is a Landsat-5 TM image taken by the Thematic Mapper (TM) aboard the American Landsat satellite in September 1995, and the post-event remote sensing image fig. 4(b) is an RGB optical image of the same location taken from Google Earth in July 1996. Further, fig. 4(c) is the ground-truth change reference image drawn from a field survey.
The invention is compared with the heterogeneous remote sensing image change detection method based on a symmetric convolutional coupling network (SCCN). The comparison results are shown in fig. 4(d) and 4(e): fig. 4(d) is the change detection result map generated by the SCCN method, and fig. 4(e) is that of the method of the invention; white pixels represent changed pixels and black pixels represent unchanged pixels. Although the SCCN method detects most of the changed regions, a large number of false-alarm pixels (actually unchanged pixels erroneously detected as changed) also appear. The change detection result map obtained by the method of the invention is very clear, with few missed and false-alarm pixels, for three reasons. First, for heterogeneous remote sensing image change detection, the proposed multi-output multilayer perceptron adaptive regression method quantifies the information content of the heterogeneous images at different time phases, exploiting the information-content difference of multi-modal data, and adapts the regression direction to obtain the optimal regression image for change detection. Second, the FLICM algorithm, which considers spatial neighborhood information, is adopted to identify the uncertain region in the difference map, effectively avoiding the negative influence of the fuzzy region between change and invariance on the acquisition of significant sample pairs, so that significant sample pairs can be accurately identified. Finally, the association-relation feature fusion models the high-order information of the features while improving the nonlinear expression capability of the original data. As can be seen from the change detection result map in fig. 4(e), the method of the invention yields the fewest false-alarm and missed-detection regions, indicating that it effectively suppresses noise in the final result and improves change detection performance. Table 1 evaluates the accuracy of the results of the two change detection methods, where OE is the overall error, R_m is the miss rate, R_f is the false-alarm rate, the F_1 score is an index measuring the accuracy of the binary model, and the Kappa coefficient (KC) comprehensively reflects the accuracy of change detection. Comparing the heterogeneous change detection result of the method of the invention against the reference image, the miss rate R_m and false-alarm rate R_f are obviously reduced, the F_1 score is significantly improved, and the method performs better than the SCCN method on the Kappa coefficient, reaching 0.7961.
TABLE 1 Accuracy evaluation of change detection results by different methods

| Method | OE | R_m | R_f | F_1 | KC |
|---|---|---|---|---|---|
| SCCN method | 8298 | 0.4100 | 0.4319 | 0.4191 | 0.5506 |
| Method of the invention | 2867 | 0.2069 | 0.1582 | 0.1757 | 0.7961 |
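The indices in Table 1 can be computed from a binary confusion matrix. The following is a hedged sketch using the standard definitions of these indices (the patent does not spell out its exact formulas, and the function name is hypothetical):

```python
def change_metrics(tp, fp, fn, tn):
    """Compute OE, miss rate R_m, false-alarm rate R_f, F_1 and Kappa (KC)
    from a binary change-detection confusion matrix, where tp counts changed
    pixels detected as changed, fp unchanged pixels detected as changed, etc.
    Standard definitions are assumed, not the patent's exact formulas.
    """
    n = tp + fp + fn + tn
    oe = fp + fn                          # overall error: all misclassified pixels
    r_m = fn / (tp + fn)                  # miss rate: fraction of changes missed
    r_f = fp / (fp + tn)                  # false-alarm rate on unchanged pixels
    f1 = 2 * tp / (2 * tp + fp + fn)      # F_1 score of the "changed" class
    po = (tp + tn) / n                    # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kc = (po - pe) / (1 - pe)             # Cohen's Kappa coefficient
    return oe, r_m, r_f, f1, kc

# Hypothetical counts, purely to illustrate the calculation:
oe, r_m, r_f, f1, kc = change_metrics(tp=40, fp=10, fn=10, tn=40)
```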
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (6)
1. A remote sensing image change detection method based on self-adaptive image regression, characterized by comprising the following steps:
step 1): processing the pre-event and post-event images to generate a regression image based on a multi-output self-adaptive regression model;
step 2): calculating the difference images between the regression image obtained in step 1) and the pre-event and post-event images;
step 3): analyzing the difference images obtained in step 2) with the fuzzy local information C-means (FLICM) algorithm to obtain a certain number of significant sample pairs, namely changed sample pairs and unchanged sample pairs;
step 4): strengthening the features of the pre-event and post-event images based on an association-relation-driven fusion method (AF), and further fusing the original image features with the strengthened features;
step 5): constructing a training set from the significant sample pairs of step 3), and combining the fused features of step 4) to train a classifier model and predict the change detection result.
2. The remote sensing image change detection method based on self-adaptive image regression as claimed in claim 1, wherein in step 1) a regression image is generated based on a multi-output self-adaptive regression model: the one of the pre-event and post-event images with the larger information content is used as the features of the training and test sets, and the other image is used as the labels of the training set; the regression model is then trained along the self-adaptive regression direction to obtain the regression image for subsequent analysis. This strategy ensures that the model extracts latent features from the remote sensing image containing more information to represent the remote sensing image containing less information.
3. The remote sensing image change detection method based on self-adaptive image regression as claimed in claim 1, wherein the quality of the difference image in step 2) directly affects the final change detection result: the larger the gray value of the difference image, the greater the possibility that the corresponding position has changed. Since the regression image and the original remote sensing image have a more consistent feature representation, a higher-quality difference image can be obtained; therefore, the difference images between the regression image and the original pre-event and post-event remote sensing images are calculated, so that the changed area can be described more accurately.
4. The remote sensing image change detection method based on self-adaptive image regression as claimed in claim 1, wherein in step 3) the FLICM clustering method is adopted to segment the difference image with the cluster number set to 3 (namely an unchanged region with small difference values, an uncertain region with intermediate difference values, and a changed region with large difference values), so that the uncertain region that is difficult to assign can be identified; isolated noise is then filtered to further ensure the reliability of the significant samples, and finally reliable significant sample pairs, namely changed sample pairs and unchanged sample pairs, are obtained.
5. The remote sensing image change detection method based on self-adaptive image regression as claimed in claim 1, wherein step 4) encodes the data into a new feature space by using the high-order information of the data and the association relations between features: firstly, the pre-event and post-event remote sensing images are stacked to construct an initial data set, whose high-order information is modeled to improve the nonlinear expression capability of the original data and obtain enhanced features; furthermore, the obtained high-order data set can assist in training the classification model to obtain a better classification result, so the original remote sensing data and the enhanced features are fused to obtain a feature data set for decision making.
6. The remote sensing image change detection method based on self-adaptive image regression as claimed in claim 1, wherein in step 5) a training data set is constructed from the reliable significant sample pairs obtained in step 3), a multilayer perceptron classifier model is trained with the features fused in step 4) as auxiliary input, the pre-event and post-event remote sensing images are input into the model, and a high-precision binary change detection result map is finally generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210650366.8A CN114897880A (en) | 2022-06-10 | 2022-06-10 | Remote sensing image change detection method based on self-adaptive image regression |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114897880A (en) | 2022-08-12 |
Family
ID=82728752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210650366.8A Pending CN114897880A (en) | 2022-06-10 | 2022-06-10 | Remote sensing image change detection method based on self-adaptive image regression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897880A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116740182A (en) * | 2023-08-11 | 2023-09-12 | 摩尔线程智能科技(北京)有限责任公司 | Ghost area determining method and device, storage medium and electronic equipment |
Similar Documents

| Publication | Title |
|---|---|
| CN110472627B | End-to-end SAR image recognition method, device and storage medium |
| CN111539316B | High-resolution remote sensing image change detection method based on dual-attention twin network |
| US11809485B2 | Method for retrieving footprint images |
| Lei et al. | Multiscale superpixel segmentation with deep features for change detection |
| Liu et al. | Remote sensing image change detection based on information transmission and attention mechanism |
| CN111126202A | Optical remote sensing image target detection method based on void feature pyramid network |
| Zhan et al. | Iterative feature mapping network for detecting multiple changes in multi-source remote sensing images |
| CN108492298B | Multispectral image change detection method based on generation countermeasure network |
| CN106295124A | Method for comprehensively analyzing the likelihood probability of gene polyadenylation signal figures using multiple image detection techniques |
| CN112132012B | High-resolution SAR ship image generation method based on generation countermeasure network |
| Sekar et al. | Automatic road crack detection and classification using multi-tasking faster RCNN |
| CN112288778B | Infrared small target detection method based on multi-frame regression depth network |
| CN105787950A | Infrared image sea-sky-line detection algorithm based on line gradient accumulation |
| CN111709487A | Underwater multi-source acoustic image substrate classification method and system based on decision-level fusion |
| Yang et al. | Graph evolution-based vertex extraction for hyperspectral anomaly detection |
| CN116563262A | Building crack detection algorithm based on multiple modes |
| CN114897880A | Remote sensing image change detection method based on self-adaptive image regression |
| CN117455868A | SAR image change detection method based on significant fusion difference map and deep learning |
| CN112784777B | Unsupervised hyperspectral image change detection method based on countermeasure learning |
| CN105654042B | Temperature character identification method for glass-stem thermometer verification |
| Xiao et al. | Multiresolution-Based Rough Fuzzy Possibilistic C-Means Clustering Method for Land Cover Change Detection |
| CN117197682B | Method for blind pixel detection and removal in long-wave infrared remote sensing images |
| CN117351194A | Graffiti-type weakly supervised salient target detection method based on complementary graph inference network |
| CN115375925A | Underwater sonar image matching algorithm based on phase information and deep learning |
| CN114973164A | Image style migration-based ship target fusion identification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||