CN105046241A - Target level remote sensing image change detection method based on RBM model - Google Patents


Info

Publication number
CN105046241A
Authority
CN
China
Prior art keywords: layer, represent, Boltzmann machine, restricted Boltzmann machine, RBM
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510512212.2A
Other languages
Chinese (zh)
Other versions
CN105046241B (en)
Inventor
马文萍
焦李成
胡天妤
刘嘉
李豪
李志舟
王倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201510512212.2A
Publication of CN105046241A
Application granted
Publication of CN105046241B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G06F18/2111 Selection of the most significant subset of features by using evolutionary computational techniques, e.g. genetic algorithms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques

Abstract

The present invention discloses a target-level remote sensing image change detection method based on an RBM model. Aiming at the shortcomings of existing change detection methods, it combines a restricted Boltzmann machine (RBM) with target-level change detection and applies the combination to remote sensing image change detection. The implementation steps are: (1) input the gray matrices of two remote sensing images; (2) apply fuzzy clustering to obtain the segmented gray matrices of the two images; (3) construct the log-ratio difference gray matrix to be detected; (4) pre-classify the log-ratio difference gray matrix; (5) select training samples; (6) train the RBM; and (7) output the change detection result. The invention reduces the dependence on remote sensing image registration accuracy, has good noise immunity, and improves the accuracy and classification precision of remote sensing image change detection.

Description

Target-level remote sensing image change detection method based on an RBM model
Technical field
The invention belongs to the field of computer technology and further relates to a target-level remote sensing image change detection method based on an RBM model in the technical field of image processing. Remote sensing image change detection can be applied to land cover and land use analysis, natural disaster monitoring and assessment, urban planning, map updating, and other fields.
Background art
Remote sensing image change detection is an important technique for analyzing and understanding multi-temporal remote sensing images. It analyzes multi-temporal remote sensing images of the same area acquired at different times and is, in essence, a pattern classification problem: a difference image obtained in some way is divided into a changed class and an unchanged class, with the emphasis on identifying changes of ground objects between the two remote sensing images.
From the viewpoint of the abstraction level of image processing, remote sensing image change detection methods fall into three categories: pixel level, feature level, and target (object) level. Traditional pixel-level methods demand high registration accuracy and are not suitable for high-resolution remote sensing images. Feature-level methods require algorithm parameters and decision thresholds to be set under manual intervention, so their degree of automation is low. The main feature of target-level methods is that the image is regarded as a combination of objects carrying semantic information, and these objects are taken as the basic processing units of change detection. As the spatial resolution of remote sensing images increases, target-level methods can better extract the geometric shape and structural information of ground objects.
In its patent application "A remote sensing image change detection method based on texture primitives" (application number CN201410363422.5, publication number CN104143191A), the Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences proposed a remote sensing image change detection method based on texture primitives. The method segments the first-phase image, computes the texture features of the segmented image blocks, and obtains their texture-primitive histograms; it then uses the segmentation result of the first-phase image as a boundary constraint, computes the texture features of each segmented block of the second-phase image, and obtains the corresponding texture-primitive histograms. The texture features of corresponding image blocks in the two images are compared for similarity: if the distance exceeds a certain threshold the block is judged to have changed, otherwise it is judged unchanged. The changed regions are marked on the original segmentation result to obtain a change detection image based on the extracted texture features. This method exploits the spatial and texture information of the images and avoids false detections, but its remaining shortcoming is that a suitable classification threshold must be chosen according to the specific remote sensing images and the problem at hand, so it cannot be applied generally.
In its patent application "Remote sensing image change detection method based on regions and K-means clustering" (application number CN201310114150.0, publication number CN103198480A), Xidian University proposed a remote sensing image change detection method based on regions and K-means clustering. The method extracts a region of interest and a non-changed region by maximum-entropy threshold analysis of the difference image, and uses the features of these two regions to classify them with the K-means method, thereby completing change detection on the remote sensing images. Its remaining shortcoming is that it processes the gray values of the difference image pixels directly without using the spatial information of the images, so it depends heavily on registration accuracy.
Summary of the invention
The object of the present invention is to overcome the above shortcomings of the prior art and to propose a target-level remote sensing image change detection method based on an RBM model. By segmenting the remote sensing images to be detected, the invention obtains the gray matrices of the target-level remote sensing images and makes full use of the spectral and spatial information of the pixels. On the basis of traditional change detection, a restricted Boltzmann machine (RBM) is added, and change detection is performed through the unsupervised learning of the trained RBM, which effectively improves the accuracy of remote sensing image change detection.
The concrete steps of the present invention are as follows:
(1) Input the gray matrices.
Input the gray matrices of two registered remote sensing images of the same area acquired at different times.
(2) Segment the gray matrices.
(2a) Apply the fuzzy C-means clustering method to the gray matrix I_1 of one of the two registered remote sensing images to perform fuzzy clustering, obtaining the first segmented gray matrix X_1.
(2b) Apply the fuzzy C-means clustering method to the gray matrix I_2 of the other remote sensing image to perform fuzzy clustering, obtaining the second segmented gray matrix X_2.
(3) Construct the log-ratio difference gray matrix to be detected.
Calculate the log-ratio difference gray matrix of X_1 and X_2 according to the following formula, obtaining the log-ratio difference gray matrix to be detected:
D = |log(X_2 + 1) - log(X_1 + 1)|
where D denotes the log-ratio difference gray matrix of X_1 and X_2, X_2 denotes the second segmented gray matrix, X_1 denotes the first segmented gray matrix, | | denotes the absolute value operation, and log denotes the base-10 logarithm.
(4) Pre-classify.
(4a) Apply the fuzzy C-means clustering method to perform fuzzy clustering on the log-ratio difference gray matrix to be detected, obtaining the fuzzy clustering matrix of the log-ratio difference gray matrix.
(4b) In the fuzzy clustering matrix, assign each pixel whose membership value for the changed class is larger to the changed class, and each pixel whose membership value for the unchanged class is larger to the unchanged class; after all pixels have been divided into the changed and unchanged classes, the initial change detection gray matrix is obtained.
(5) Select training samples.
(5a) In the initial change detection gray matrix, take a square window of size 5 × 5 centered on the j-th pixel; the total number of pixels in the square window is 25.
(5b) Calculate the gray mean of the pixels in the square window according to the following formula:
E = \frac{1}{25} \sum_{y=1}^{25} I_y
where E denotes the gray mean of the pixels in the square window, Σ denotes summation, y indexes the y-th pixel of the square window, and I_y denotes the gray value of the y-th pixel in the square window.
(5c) Select training samples according to the following criterion:
|I_j - E| ≤ 0.3
where I_j denotes the gray value of the j-th pixel, E denotes the gray mean of the pixels in the square window, and | | denotes the absolute value operation.
(5d) Judge whether all pixels of the initial change detection gray matrix have been examined; if so, the training samples are obtained and step (6) is performed; otherwise, return to step (5a).
(6) Train the restricted Boltzmann machine (RBM).
(6a) Set the network structure of the restricted Boltzmann machine RBM to two visible layers and two hidden layers.
(6b) Initialize the weights of the restricted Boltzmann machine RBM with random numbers in the interval [-0.001, 0.001].
(6c) Initialize the bias vector of the visible layer and the bias vector of the hidden layer of the restricted Boltzmann machine RBM to zero vectors.
(6d) Input the training samples into the restricted Boltzmann machine RBM for training, using the contrastive divergence method, to obtain the trained restricted Boltzmann machine RBM.
(7) Output the change detection result.
Input the log-ratio difference gray matrix to be detected into the trained restricted Boltzmann machine RBM to obtain the final change detection result.
Compared with the prior art, the present invention has the following advantages:
First, because the present invention applies the fuzzy C-means clustering method to the gray matrices of the remote sensing images, it makes use of the spatial information of the images. This overcomes the shortcomings of traditional change detection methods, which are very sensitive to image noise and depend heavily on registration accuracy; the present invention is therefore little affected by image noise and yields change detection results of high accuracy.
Second, because the present invention uses a restricted Boltzmann machine RBM, selects from the initial change detection gray matrix the training samples that are most likely to be correctly classified, and performs change detection on the remote sensing images through the unsupervised learning of the RBM, it overcomes the shortcoming of traditional change detection methods that require a specific classification threshold and therefore cannot be applied generally; the present invention is generally applicable to remote sensing image change detection and gives change detection results with high classification accuracy.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 shows the simulation results of the present invention.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings.
With reference to Fig. 1, the implementation steps of the present invention are as follows.
Step 1, input the gray matrices.
Input two registered remote sensing images of the same area acquired at different times and obtain their gray matrices.
Step 2, segment the gray matrices.
Apply the fuzzy C-means clustering method to the gray matrix I_1 of one of the two registered remote sensing images to perform fuzzy clustering, obtaining the first segmented gray matrix X_1.
Apply the fuzzy C-means clustering method to the gray matrix I_2 of the other remote sensing image to perform fuzzy clustering, obtaining the second segmented gray matrix X_2.
The gray matrix of a remote sensing image is thus divided into a combination of target-level objects carrying semantic information, and these target-level objects serve as the basic processing units of change detection, which exploits the spatial and structural information of the remote sensing images.
The concrete steps of the fuzzy C-means clustering method are as follows:
1st step: initialize the membership degrees of the pixels in the gray matrix with random numbers in the interval [0, 1], subject to the constraint
\sum_{k=1}^{2} u_{ki} = 1, \quad i = 1, \ldots, N
where N denotes the number of pixels in the gray matrix, i indexes the i-th pixel, k indexes the fuzzy clustering class, Σ denotes summation, and u_{ki} denotes the membership degree of the i-th pixel in the k-th class.
2nd step: calculate the cluster centers of the gray matrix according to the following formula:
v_k = \frac{\sum_{i=1}^{N} u_{ki} x_i}{\sum_{i=1}^{N} u_{ki}}
where v_k denotes the cluster center of the k-th class, k indexes the fuzzy clustering class, N denotes the number of pixels, i indexes the i-th pixel, Σ denotes summation, u_{ki} denotes the membership degree of the i-th pixel in the k-th class, and x_i denotes the feature of the i-th pixel in the gray matrix.
3rd step: update the membership degrees of the pixels in the gray matrix according to the following formula:
u_{ki} = \left[ \sum_{l=1}^{2} \frac{d(x_i, v_k)}{d(x_i, v_l)} \right]^{-1}
where u_{ki} denotes the membership degree of the i-th pixel in the k-th class, k and l index the fuzzy clustering classes, i indexes the i-th pixel, x_i denotes the feature of the i-th pixel, v_k denotes the cluster center of the k-th class, and d(x_i, v_k) denotes the Euclidean distance from the i-th pixel to the cluster center of the k-th class.
4th step: calculate the value of the objective function of the fuzzy C-means clustering method according to the following formula:
J = \sum_{i=1}^{N} \sum_{k=1}^{2} (u_{ki})^2 \, d(x_i, v_k)
where J denotes the objective function value, N denotes the number of pixels, i indexes the i-th pixel, k indexes the fuzzy clustering class, Σ denotes summation, ( )^2 denotes squaring, u_{ki} denotes the membership degree of the i-th pixel in the k-th class, x_i denotes the feature of the i-th pixel, v_k denotes the cluster center of the k-th class, and d(x_i, v_k) denotes the Euclidean distance from the i-th pixel to the cluster center of the k-th class.
5th step: judge whether the change of the objective function value between successive iterations is less than 0.000001; if so, stop iterating; otherwise, return to the 2nd step.
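For illustration only, the following Python sketch (not part of the patent) implements the two-class fuzzy C-means procedure described above: random membership initialization, alternating cluster-center and membership updates, and the squared-membership objective with the 10^-6 stopping tolerance. It assumes the gray matrix is supplied as a NumPy array and that each pixel's feature x_i is simply its gray value; the function name fcm_two_class and the parameter defaults are illustrative assumptions.

```python
import numpy as np

def fcm_two_class(gray, max_iter=100, tol=1e-6, eps=1e-12):
    """Two-class fuzzy C-means over the pixels of a gray matrix.

    gray : 2-D array of gray values; each pixel's feature x_i is its gray value.
    Returns the membership maps u (2 x H x W) and the cluster centers v (2,).
    """
    x = gray.astype(float).ravel()           # pixel features x_i
    n = x.size
    # 1st step: random memberships in [0, 1], normalized so that the two
    # class memberships of every pixel sum to 1
    u = np.random.rand(2, n)
    u /= u.sum(axis=0, keepdims=True)

    j_prev = np.inf
    for _ in range(max_iter):
        # 2nd step: cluster centers v_k = sum_i u_ki * x_i / sum_i u_ki
        v = (u * x).sum(axis=1) / (u.sum(axis=1) + eps)
        # Euclidean distance of every pixel to every center
        d = np.abs(x[None, :] - v[:, None]) + eps
        # 3rd step: membership update u_ki = 1 / sum_l (d_ki / d_li)
        u = 1.0 / (d * (1.0 / d).sum(axis=0, keepdims=True))
        # 4th step: objective J = sum_i sum_k u_ki^2 * d(x_i, v_k)
        j = ((u ** 2) * d).sum()
        # 5th step: stop when the objective changes by less than the tolerance
        if abs(j_prev - j) < tol:
            break
        j_prev = j
    return u.reshape(2, *gray.shape), v
```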
Step 3, construct the log-ratio difference gray matrix to be detected.
Calculate the log-ratio difference gray matrix of X_1 and X_2 according to the following formula, obtaining the log-ratio difference gray matrix to be detected:
D = |log(X_2 + 1) - log(X_1 + 1)|
where D denotes the log-ratio difference gray matrix of X_1 and X_2, X_2 denotes the second segmented gray matrix, X_1 denotes the first segmented gray matrix, | | denotes the absolute value operation, and log denotes the base-10 logarithm.
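A minimal sketch of this construction, assuming the two segmented gray matrices are NumPy arrays of the same shape and that log is the base-10 logarithm as stated above; the function name is illustrative.

```python
import numpy as np

def log_ratio_difference(x1, x2):
    """Log-ratio difference matrix D = |log10(X2 + 1) - log10(X1 + 1)|.

    x1, x2 : segmented gray matrices of the two registered images
    (NumPy arrays of identical shape).
    """
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    return np.abs(np.log10(x2 + 1.0) - np.log10(x1 + 1.0))
```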
Step 4, pre-classify.
Apply the fuzzy C-means clustering method to perform fuzzy clustering on the log-ratio difference gray matrix to be detected, obtaining the fuzzy clustering matrix of the log-ratio difference gray matrix.
In the fuzzy clustering matrix, assign each pixel whose membership value for the changed class is larger to the changed class, and each pixel whose membership value for the unchanged class is larger to the unchanged class; after all pixels have been divided into the changed and unchanged classes, the initial change detection gray matrix is obtained.
The concrete steps of the fuzzy C-means clustering method used here are the same as those described in step 2.
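The pre-classification of step 4 can be sketched as follows, reusing the fcm_two_class sketch from step 2. The assumption that the cluster with the larger center corresponds to the changed class is an illustrative choice; the patent only states that each pixel is assigned to the class whose membership value is larger.

```python
import numpy as np

def preclassify(diff):
    """Pre-classify the log-ratio difference matrix into changed (1) and
    unchanged (0) pixels using two-class fuzzy C-means memberships."""
    u, v = fcm_two_class(diff)        # memberships (2, H, W), centers (2,)
    changed = int(np.argmax(v))       # assumption: larger center = changed class
    # each pixel goes to the class with the larger membership value
    return (np.argmax(u, axis=0) == changed).astype(np.uint8)
```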
Step 5, select training samples.
The selected training samples are the pixels of the initial change detection gray matrix that are most likely to have been correctly classified into the changed and unchanged classes.
The concrete steps of selecting the training samples are as follows.
1st step: in the initial change detection gray matrix, take a square window of size 5 × 5 centered on the j-th pixel; the total number of pixels in the square window is 25.
2nd step: calculate the gray mean of the pixels in the square window according to the following formula:
E = \frac{1}{25} \sum_{y=1}^{25} I_y
where E denotes the gray mean of the pixels in the square window, Σ denotes summation, y indexes the y-th pixel of the square window, and I_y denotes the gray value of the y-th pixel in the square window.
3rd step: select training samples according to the following criterion:
|I_j - E| ≤ 0.3
where I_j denotes the gray value of the j-th pixel, E denotes the gray mean of the pixels in the square window, and | | denotes the absolute value operation.
4th step: judge whether all pixels of the initial change detection gray matrix have been examined; if so, the training samples for the restricted Boltzmann machine RBM are obtained and step 6 is performed; otherwise, return to the 1st step.
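A sketch of the training-sample selection rule above, assuming the initial change detection gray matrix is a 2-D NumPy array and that border pixels are handled by edge padding (the patent does not say how the window is formed at the image border); function and parameter names are illustrative.

```python
import numpy as np

def select_training_samples(initial_map, threshold=0.3, win=5):
    """Mark pixels of the initial change detection matrix whose gray value
    differs from the mean of their win x win neighborhood by at most
    `threshold`; these reliably classified pixels are kept as training samples.

    Returns a boolean mask of the selected pixel positions.
    """
    pad = win // 2
    padded = np.pad(initial_map.astype(float), pad, mode="edge")
    h, w = initial_map.shape
    mask = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            window = padded[r:r + win, c:c + win]   # 5 x 5 window centered on (r, c)
            e = window.mean()                        # gray mean E of the window
            mask[r, c] = abs(float(initial_map[r, c]) - e) <= threshold
    return mask
```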
Step 6, train the restricted Boltzmann machine RBM.
Set the network structure of the restricted Boltzmann machine RBM to two visible layers and two hidden layers.
Initialize the weights of the restricted Boltzmann machine RBM with random numbers in the interval [-0.001, 0.001].
Initialize the bias vector of the visible layer and the bias vector of the hidden layer of the restricted Boltzmann machine RBM to zero vectors.
Input the training samples into the restricted Boltzmann machine RBM for training, using the contrastive divergence method, to obtain the trained restricted Boltzmann machine RBM.
Because the selected training samples are the pixels most likely to have been correctly classified, the trained restricted Boltzmann machine RBM is a network that can classify the pixels of the input log-ratio difference gray matrix fairly accurately.
The concrete steps of the contrastive divergence method are as follows:
1st step: input the training samples into the 1st visible layer of the restricted Boltzmann machine RBM to obtain the output of the 1st visible layer.
2nd step: sample the output of the 1st hidden layer of the restricted Boltzmann machine RBM according to the following formula:
h_{1n} \sim P(h_{1n} = 1 \mid v_1) = \frac{1}{1 + e^{-(w_{1n} \cdot v_1 + b_{1n})}}
where h_{1n} denotes the output of the n-th node of the 1st hidden layer, n indexes the nodes of the 1st hidden layer, ~ denotes the sampling operation, P(h_{1n} = 1 | v_1) denotes the probability that the n-th node of the 1st hidden layer outputs 1 when the output of the 1st visible layer is v_1, v_1 denotes the output of the 1st visible layer, e denotes Euler's constant, w_{1n} \cdot v_1 denotes the weighted sum over all nodes of the 1st visible layer with the weights w_{1n} connecting the n-th node of the 1st hidden layer to the 1st visible layer, and b_{1n} denotes the bias of the n-th node of the 1st hidden layer.
3rd step: sample the output of the 2nd visible layer of the restricted Boltzmann machine RBM according to the following formula:
v_{2m} \sim P(v_{2m} = 1 \mid h_1) = \frac{1}{1 + e^{-(w_{2m} \cdot h_1 + a_{2m})}}
where v_{2m} denotes the output of the m-th node of the 2nd visible layer, m indexes the nodes of the 2nd visible layer, ~ denotes the sampling operation, P(v_{2m} = 1 | h_1) denotes the probability that the m-th node of the 2nd visible layer outputs 1 when the output of the 1st hidden layer is h_1, h_1 denotes the output of the 1st hidden layer, e denotes Euler's constant, w_{2m} \cdot h_1 denotes the weighted sum over all nodes of the 1st hidden layer with the weights w_{2m} connecting the m-th node of the 2nd visible layer to the 1st hidden layer, and a_{2m} denotes the bias of the m-th node of the 2nd visible layer.
4th step: sample the output of the 2nd hidden layer of the restricted Boltzmann machine RBM according to the following formula:
h_{2n} \sim P(h_{2n} = 1 \mid v_2) = \frac{1}{1 + e^{-(w_{2n} \cdot v_2 + b_{2n})}}
where h_{2n} denotes the output of the n-th node of the 2nd hidden layer, n indexes the nodes of the 2nd hidden layer, ~ denotes the sampling operation, P(h_{2n} = 1 | v_2) denotes the probability that the n-th node of the 2nd hidden layer outputs 1 when the output of the 2nd visible layer is v_2, v_2 denotes the output of the 2nd visible layer, e denotes Euler's constant, w_{2n} \cdot v_2 denotes the weighted sum over all nodes of the 2nd visible layer with the weights w_{2n} connecting the n-th node of the 2nd hidden layer to the 2nd visible layer, and b_{2n} denotes the bias of the n-th node of the 2nd hidden layer.
5th step: update the weights of the restricted Boltzmann machine RBM according to the following formula:
w_{t+1} = w_t + (P(h_1 = 1 | v_1) × v_1) - (P(h_2 = 1 | v_2) × v_2)
where w_{t+1} denotes the weights of the restricted Boltzmann machine RBM after the (t+1)-th update, w_t denotes the weights after the t-th update, P(h_1 = 1 | v_1) denotes the probability that the 1st hidden layer outputs 1 when the output of the 1st visible layer is v_1, h_1 denotes the output of the 1st hidden layer, v_1 denotes the output of the 1st visible layer, × denotes the product operation, P(h_2 = 1 | v_2) denotes the probability that the 2nd hidden layer outputs 1 when the output of the 2nd visible layer is v_2, h_2 denotes the output of the 2nd hidden layer, and v_2 denotes the output of the 2nd visible layer.
6th step: update the bias vector of the visible layer of the restricted Boltzmann machine RBM according to the following formula:
a_{t+1} = a_t + v_1 - v_2
where a_{t+1} denotes the bias vector of the visible layer after the (t+1)-th update, a_t denotes the bias vector of the visible layer after the t-th update, v_1 denotes the output of the 1st visible layer, and v_2 denotes the output of the 2nd visible layer.
7th step: update the bias vector of the hidden layer of the restricted Boltzmann machine RBM according to the following formula:
b_{t+1} = b_t + P(h_1 = 1 | v_1) - P(h_2 = 1 | v_2)
where b_{t+1} denotes the bias vector of the hidden layer after the (t+1)-th update, b_t denotes the bias vector of the hidden layer after the t-th update, P(h_1 = 1 | v_1) denotes the probability that the 1st hidden layer outputs 1 when the output of the 1st visible layer is v_1, h_1 denotes the output of the 1st hidden layer, v_1 denotes the output of the 1st visible layer, P(h_2 = 1 | v_2) denotes the probability that the 2nd hidden layer outputs 1 when the output of the 2nd visible layer is v_2, h_2 denotes the output of the 2nd hidden layer, and v_2 denotes the output of the 2nd visible layer.
8th step: judge whether the training time of the restricted Boltzmann machine RBM has reached 300 seconds; if so, stop updating the weights, the visible-layer bias vector, and the hidden-layer bias vector, obtaining the trained restricted Boltzmann machine RBM; otherwise, return to the 2nd step.
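The following sketch gives one plausible reading of this procedure as standard one-step contrastive divergence (CD-1), in which the patent's 1st/2nd visible layer and 1st/2nd hidden layer are interpreted as the positive-phase and negative-phase states of a single visible/hidden pair. That interpretation, the learning rate (the patent's update formulas have none), the epoch count used in place of the fixed training budget, and all names are assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_cd1(samples, n_hidden, lr=0.01, epochs=300, seed=0):
    """One-step contrastive divergence training of a binary RBM.

    samples : (num_samples, n_visible) array of training vectors.
    Weights start uniformly in [-0.001, 0.001] and both bias vectors start
    at zero, as described in step 6.
    """
    rng = np.random.default_rng(seed)
    n_visible = samples.shape[1]
    w = rng.uniform(-0.001, 0.001, size=(n_visible, n_hidden))
    a = np.zeros(n_visible)            # visible-layer biases
    b = np.zeros(n_hidden)             # hidden-layer biases

    for _ in range(epochs):
        for v1 in samples:
            # positive phase: sample h1 ~ P(h = 1 | v1)
            p_h1 = sigmoid(v1 @ w + b)
            h1 = (rng.random(n_hidden) < p_h1).astype(float)
            # negative phase: reconstruct v2 ~ P(v = 1 | h1), then p(h2 | v2)
            p_v2 = sigmoid(h1 @ w.T + a)
            v2 = (rng.random(n_visible) < p_v2).astype(float)
            p_h2 = sigmoid(v2 @ w + b)
            # parameter updates; a learning rate is added for stability,
            # and the weight step is written as an outer product here
            w += lr * (np.outer(v1, p_h1) - np.outer(v2, p_h2))
            a += lr * (v1 - v2)
            b += lr * (p_h1 - p_h2)
    return w, a, b
```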
Step 7, output the change detection result.
Input the log-ratio difference gray matrix to be detected into the trained restricted Boltzmann machine RBM to obtain the final change detection result.
The effect of the present invention is further described below in conjunction with simulation experiments.
1. Simulation conditions:
The hardware platform of the simulation experiments is an Intel Core i5 CPU at 2.40 GHz with 2 GB of memory; the software platform is the Windows 8.1 operating system and Matlab R2014a.
2. Simulation content:
The simulation experiments perform change detection on remote sensing images of the Ottawa area flood. The pixel-level change detection method based on thresholding, the pixel-level change detection method based on K-means clustering, and the target-level change detection method based on the RBM model of the present invention are each applied to the Ottawa flood remote sensing images, and the results of the three change detection methods are compared.
The simulation results are shown in Fig. 2; the image size is 290 × 350. Fig. 2(a) and Fig. 2(b) are the remote sensing images of the Ottawa area flood in April 2004 and May 2004, respectively, and Fig. 2(c) is the change detection reference map. Fig. 2(d), Fig. 2(e) and Fig. 2(f) are the results of the threshold-based pixel-level method, the K-means-based pixel-level method, and the RBM-based target-level method, respectively; white points denote pixels detected as changed and black points denote pixels detected as unchanged.
3. Analysis of simulation results:
Comparing with the reference map Fig. 2(c), the result of the threshold-based pixel-level method, Fig. 2(d), contains a great many noise points, so its change detection accuracy and classification precision are low. The result of the K-means-based pixel-level method, Fig. 2(e), also has many noise points, loses much detail, and misses some changed regions. The result of the RBM-based target-level method, Fig. 2(f), has few noise points, preserves detail well, and has smooth edges.
The simulation results can also be analyzed quantitatively against the reference map.
Calculate the overall accuracy of the change detection result according to the following formula:
PCC = \frac{TP + TN}{TP + FP + TN + FN}
where PCC denotes the overall accuracy of the change detection result, TP denotes the number of pixels that changed in the reference map and are correctly detected as changed in the experimental result, TN denotes the number of pixels that did not change in the reference map and are correctly detected as unchanged, FP denotes the number of pixels that did not change in the reference map but are erroneously detected as changed, and FN denotes the number of pixels that changed in the reference map but are erroneously detected as unchanged.
Calculate the classification accuracy of the detection result according to the following formulas:
Kappa = \frac{PCC - PRE}{1 - PRE}
PRE = \frac{(TP + FP)(TP + FN) + (FN + TN)(FP + TN)}{(TP + TN + FP + FN)^2}
where Kappa denotes the classification accuracy of the detection result, PCC denotes the overall accuracy of the change detection result, and TP, TN, FP and FN are defined as above.
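A small sketch of the PCC and Kappa computation from the four confusion counts, assuming the squared total in the denominator of PRE (the standard Kappa definition, taken here as the intended reading of the formula above); the function name is illustrative.

```python
def pcc_and_kappa(tp, tn, fp, fn):
    """Overall accuracy (PCC) and Kappa coefficient from the confusion counts."""
    total = tp + tn + fp + fn
    pcc = (tp + tn) / total
    # expected proportion of chance agreement PRE, with the squared total
    pre = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (total ** 2)
    kappa = (pcc - pre) / (1.0 - pre)
    return pcc, kappa
```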
In summary, a quantitative change detection analysis is performed on the results of the three change detection methods. PCC denotes the overall accuracy of change detection: the larger its value, the better the classification effect. Kappa denotes the classification accuracy of change detection: the larger its value, the better the classification effect.
Table 1 gives the quantitative change detection analysis of the results of the three change detection methods.
Table 1. Simulation results of the three change detection methods
Method | PCC | Kappa
Threshold method | 0.8666 | 0.6379
K-means method | 0.9621 | 0.8456
The present invention | 0.9754 | 0.9057
As can be seen from Table 1, the overall accuracy of the present invention on the Ottawa flood remote sensing images is higher than those of the threshold method and the K-means method, and its classification accuracy is the highest, which shows that the method of the present invention improves the effect of remote sensing image change detection.

Claims (3)

1. A target-level remote sensing image change detection method based on an RBM model, comprising the steps of:
(1) inputting the gray matrices:
inputting the gray matrices of two registered remote sensing images of the same area acquired at different times;
(2) segmenting the gray matrices:
(2a) applying the fuzzy C-means clustering method to the gray matrix I_1 of one of the two registered remote sensing images to perform fuzzy clustering, obtaining the first segmented gray matrix X_1;
(2b) applying the fuzzy C-means clustering method to the gray matrix I_2 of the other remote sensing image to perform fuzzy clustering, obtaining the second segmented gray matrix X_2;
(3) constructing the log-ratio difference gray matrix to be detected:
calculating the log-ratio difference gray matrix of X_1 and X_2 according to the following formula, obtaining the log-ratio difference gray matrix to be detected:
D = |log(X_2 + 1) - log(X_1 + 1)|
wherein D denotes the log-ratio difference gray matrix of X_1 and X_2, X_2 denotes the second segmented gray matrix, X_1 denotes the first segmented gray matrix, | | denotes the absolute value operation, and log denotes the base-10 logarithm;
(4) pre-classifying:
(4a) applying the fuzzy C-means clustering method to perform fuzzy clustering on the log-ratio difference gray matrix to be detected, obtaining the fuzzy clustering matrix of the log-ratio difference gray matrix;
(4b) in the fuzzy clustering matrix, assigning each pixel whose membership value for the changed class is larger to the changed class, and each pixel whose membership value for the unchanged class is larger to the unchanged class, so that all pixels are divided into the changed and unchanged classes and the initial change detection gray matrix is obtained;
(5) selecting training samples:
(5a) in the initial change detection gray matrix, taking a square window of size 5 × 5 centered on the j-th pixel, the total number of pixels in the square window being 25;
(5b) calculating the gray mean of the pixels in the square window according to the following formula:
E = \frac{1}{25} \sum_{y=1}^{25} I_y
wherein E denotes the gray mean of the pixels in the square window, Σ denotes summation, y indexes the y-th pixel of the square window, and I_y denotes the gray value of the y-th pixel in the square window;
(5c) selecting training samples according to the following criterion:
|I_j - E| ≤ 0.3
wherein I_j denotes the gray value of the j-th pixel, E denotes the gray mean of the pixels in the square window, and | | denotes the absolute value operation;
(5d) judging whether all pixels of the initial change detection gray matrix have been examined; if so, obtaining the training samples and performing step (6); otherwise, performing step (5a);
(6) training the restricted Boltzmann machine RBM:
(6a) setting the network structure of the restricted Boltzmann machine RBM to two visible layers and two hidden layers;
(6b) initializing the weights of the restricted Boltzmann machine RBM with random numbers in the interval [-0.001, 0.001];
(6c) initializing the bias vector of the visible layer and the bias vector of the hidden layer of the restricted Boltzmann machine RBM to zero vectors;
(6d) inputting the training samples into the restricted Boltzmann machine RBM for training, using the contrastive divergence method, to obtain the trained restricted Boltzmann machine RBM;
(7) outputting the change detection result:
inputting the log-ratio difference gray matrix to be detected into the trained restricted Boltzmann machine RBM to obtain the final change detection result.
2. The target-level remote sensing image change detection method based on an RBM model according to claim 1, characterized in that the concrete steps of the fuzzy C-means clustering method described in steps (2a), (2b) and (4a) are as follows:
1st step: initializing the membership degrees of the pixels in the gray matrix with random numbers in the interval [0, 1], subject to the constraint
\sum_{k=1}^{2} u_{ki} = 1, \quad i = 1, \ldots, N
wherein N denotes the number of pixels in the gray matrix, i indexes the i-th pixel, k indexes the fuzzy clustering class, Σ denotes summation, and u_{ki} denotes the membership degree of the i-th pixel in the k-th class;
2nd step: calculating the cluster centers of the gray matrix according to the following formula:
v_k = \frac{\sum_{i=1}^{N} u_{ki} x_i}{\sum_{i=1}^{N} u_{ki}}
wherein v_k denotes the cluster center of the k-th class and x_i denotes the feature of the i-th pixel in the gray matrix;
3rd step: updating the membership degrees of the pixels in the gray matrix according to the following formula:
u_{ki} = \left[ \sum_{l=1}^{2} \frac{d(x_i, v_k)}{d(x_i, v_l)} \right]^{-1}
wherein l indexes the fuzzy clustering classes and d(x_i, v_k) denotes the Euclidean distance from the i-th pixel to the cluster center of the k-th class;
4th step: calculating the value of the objective function of the fuzzy C-means clustering method according to the following formula:
J = \sum_{i=1}^{N} \sum_{k=1}^{2} (u_{ki})^2 \, d(x_i, v_k)
wherein J denotes the objective function value and ( )^2 denotes squaring;
5th step: judging whether the change of the objective function value between successive iterations is less than 0.000001; if so, stopping the iteration; otherwise, performing the 2nd step.
3. The target-level remote sensing image change detection method based on an RBM model according to claim 1, characterized in that the concrete steps of the contrastive divergence method described in step (6d) are as follows:
1st step: inputting the training samples into the first visible layer of the restricted Boltzmann machine RBM to obtain the output of the first visible layer;
2nd step: sampling the output of the first hidden layer of the restricted Boltzmann machine RBM according to the following formula:
h_{1n} \sim P(h_{1n} = 1 \mid v_1) = \frac{1}{1 + e^{-(w_{1n} \cdot v_1 + b_{1n})}}
wherein h_{1n} denotes the output of the n-th node of the first hidden layer, n indexes the nodes of the first hidden layer, ~ denotes the sampling operation, P(h_{1n} = 1 | v_1) denotes the probability that the n-th node of the first hidden layer outputs 1 when the output of the first visible layer is v_1, v_1 denotes the output of the first visible layer, e denotes Euler's constant, w_{1n} denotes the weights connecting the n-th node of the first hidden layer to the first visible layer, and b_{1n} denotes the bias of the n-th node of the first hidden layer;
3rd step: sampling the output of the second visible layer of the restricted Boltzmann machine RBM according to the following formula:
v_{2m} \sim P(v_{2m} = 1 \mid h_1) = \frac{1}{1 + e^{-(w_{2m} \cdot h_1 + a_{2m})}}
wherein v_{2m} denotes the output of the m-th node of the second visible layer, m indexes the nodes of the second visible layer, P(v_{2m} = 1 | h_1) denotes the probability that the m-th node of the second visible layer outputs 1 when the output of the first hidden layer is h_1, h_1 denotes the output of the first hidden layer, w_{2m} denotes the weights connecting the m-th node of the second visible layer to the first hidden layer, and a_{2m} denotes the bias of the m-th node of the second visible layer;
4th step: sampling the output of the second hidden layer of the restricted Boltzmann machine RBM according to the following formula:
h_{2n} \sim P(h_{2n} = 1 \mid v_2) = \frac{1}{1 + e^{-(w_{2n} \cdot v_2 + b_{2n})}}
wherein h_{2n} denotes the output of the n-th node of the second hidden layer, n indexes the nodes of the second hidden layer, P(h_{2n} = 1 | v_2) denotes the probability that the n-th node of the second hidden layer outputs 1 when the output of the second visible layer is v_2, v_2 denotes the output of the second visible layer, w_{2n} denotes the weights connecting the n-th node of the second hidden layer to the second visible layer, and b_{2n} denotes the bias of the n-th node of the second hidden layer;
5th step: updating the weights of the restricted Boltzmann machine RBM according to the following formula:
w_{t+1} = w_t + (P(h_1 = 1 | v_1) × v_1) - (P(h_2 = 1 | v_2) × v_2)
wherein w_{t+1} denotes the weights of the restricted Boltzmann machine RBM after the (t+1)-th update, w_t denotes the weights after the t-th update, h_1 and h_2 denote the outputs of the first and second hidden layers, v_1 and v_2 denote the outputs of the first and second visible layers, and × denotes the product operation;
6th step: updating the bias vector of the visible layer of the restricted Boltzmann machine RBM according to the following formula:
a_{t+1} = a_t + v_1 - v_2
wherein a_{t+1} denotes the bias vector of the visible layer after the (t+1)-th update and a_t denotes the bias vector of the visible layer after the t-th update;
7th step: updating the bias vector of the hidden layer of the restricted Boltzmann machine RBM according to the following formula:
b_{t+1} = b_t + P(h_1 = 1 | v_1) - P(h_2 = 1 | v_2)
wherein b_{t+1} denotes the bias vector of the hidden layer after the (t+1)-th update and b_t denotes the bias vector of the hidden layer after the t-th update;
8th step: judging whether the training time of the restricted Boltzmann machine RBM has reached 300 seconds; if so, stopping the updating of the weights, the visible-layer bias vector and the hidden-layer bias vector of the restricted Boltzmann machine RBM, obtaining the trained restricted Boltzmann machine RBM; otherwise, performing the 2nd step.
CN201510512212.2A 2015-08-19 2015-08-19 Target level method for detecting change of remote sensing image based on RBM models Active CN105046241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510512212.2A CN105046241B (en) 2015-08-19 2015-08-19 Target level method for detecting change of remote sensing image based on RBM models


Publications (2)

Publication Number Publication Date
CN105046241A true CN105046241A (en) 2015-11-11
CN105046241B CN105046241B (en) 2018-04-17

Family

ID=54452770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510512212.2A Active CN105046241B (en) 2015-08-19 2015-08-19 Target level method for detecting change of remote sensing image based on RBM models

Country Status (1)

Country Link
CN (1) CN105046241B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512661A (en) * 2015-11-25 2016-04-20 中国人民解放军信息工程大学 Multi-mode-characteristic-fusion-based remote-sensing image classification method
CN105608698A (en) * 2015-12-25 2016-05-25 西北工业大学 Remote image change detection method based on SAE
CN107437091A (en) * 2016-03-23 2017-12-05 西安电子科技大学 Multilayer limits the positive and negative class change detecting method of SAR image of Boltzmann machine
CN107545548A (en) * 2017-07-05 2018-01-05 河南师范大学 Displacement aliased image blind separating method and system based on limited Boltzmann machine
CN108764090A (en) * 2018-05-18 2018-11-06 腾讯大地通途(北京)科技有限公司 It is a kind of regionality unusual fluctuation determine method, apparatus, server and storage medium
CN109635836A (en) * 2018-11-09 2019-04-16 广西壮族自治区遥感信息测绘院 Remote sensing image Forest road hierarchy detection method based on sparse DBN model
CN111832647A (en) * 2020-07-10 2020-10-27 上海交通大学 Abnormal flow detection system and method
CN112991352A (en) * 2021-03-04 2021-06-18 扬州微地图地理信息科技有限公司 High-resolution remote sensing image segmentation system based on information tracing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1022682A3 (en) * 1999-01-20 2001-03-21 University of Washington Color clustering for scene change detection and object tracking in video sequences
EP1881454A1 (en) * 2006-07-17 2008-01-23 Mitsubishi Electric Information Technology Centre Europe B.V. Image processing for change detection
CN102968790A (en) * 2012-10-25 2013-03-13 西安电子科技大学 Remote sensing image change detection method based on image fusion
CN103077525A (en) * 2013-01-27 2013-05-01 西安电子科技大学 Treelet image fusion-based remote sensing image change detection method
CN103218823A (en) * 2013-05-08 2013-07-24 西安电子科技大学 Remote sensing image change detection method based on nuclear transmission



Also Published As

Publication number Publication date
CN105046241B (en) 2018-04-17


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant