CN105405133B - A kind of remote sensing image variation detection method - Google Patents
A remote sensing image change detection method
- Publication number
- CN105405133B CN105405133B CN201510742564.7A CN201510742564A CN105405133B CN 105405133 B CN105405133 B CN 105405133B CN 201510742564 A CN201510742564 A CN 201510742564A CN 105405133 B CN105405133 B CN 105405133B
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
Abstract
The invention discloses a remote sensing image change detection method, comprising: obtaining two high-resolution optical remote sensing images X1 and X2 acquired at different times; registering X1 and X2; applying the Multivariate Alteration Detection method to X1 and X2 for relative radiometric normalization; computing the change-vector magnitude XM and the spectral-angle information XSA from the radiometrically corrected X1 and X2; computing the optimal segmentation threshold T from XM using the Bayes principle and the EM algorithm; selecting pseudo-training regions according to T and XM; taking the combination of XM and XSA as the input of a kernel FCM and selecting optimal model parameter values for the kernel FCM model with spatial neighborhood information on the pseudo-training regions; and, with the selected parameter values, determining the changed and unchanged regions of the optical remote sensing images by the kernel FCM method with spatial neighborhood information. The invention is more robust and more accurate.
Description
Technical field
The present invention relates to the field of remote sensing image change detection, and in particular to a remote sensing image change detection method.
Background technology
With the continuous accumulation of multi-temporal high-resolution remote sensing data and the progressive establishment of spatial databases, extracting and detecting change information from these data has become an important research topic in remote sensing science and geographic information science. From remote sensing images of the same area acquired at different times, dynamic change information about cities, the environment and so on can be extracted, providing a scientific basis for decision-making by departments such as resource management, planning and environmental protection. China's Twelfth Five-Year Plan expands the high-resolution Earth observation program launched under the Eleventh Five-Year Plan; key research topics such as feature analysis and highly reliable automatic interpretation of high-resolution remote sensing targets and the space environment have become research hotspots addressing major needs of national security and socio-economic development.
Change detection in remote sensing imagery quantitatively analyzes and determines the features and processes of surface change from remote sensing data acquired at different times. Researchers have proposed many effective detection algorithms from different perspectives, such as Change Vector Analysis (CVA) and clustering based on Fuzzy C-Means (FCM). In the traditional FCM-based change detection of multi-temporal optical imagery, a CVA transform is applied first and FCM clustering is then performed on the magnitude of the change vectors to obtain the detection result. The shortcomings of FCM in this setting are that it is only suited to spherical or ellipsoidal clusters and that it is highly sensitive to noise and outliers. Moreover, using only the magnitude of the change vectors leaves the original multispectral information insufficiently exploited, so the result is neither robust nor accurate enough.
To address these problems, many researchers have tried to add various spatial-neighborhood constraints to the FCM objective function. However, the complexity of the detection environment in high-resolution imagery and the scarcity of target prior information limit all of these algorithms, and their accuracy remains low. New change detection techniques for high-resolution optical remote sensing imagery are therefore needed to overcome these difficulties.
Summary of the invention
The technical problem to be solved by the invention is to provide a remote sensing image change detection method. The method is a multi-temporal remote sensing image change detection method based on an adaptive kernel FCM that jointly uses CVA and SAM, and its change detection results are more robust and more accurate.
To solve the above technical problem, the invention provides a remote sensing image change detection method, comprising:
obtaining two high-resolution optical remote sensing images X1 and X2 acquired at different times;
registering the optical images X1 and X2;
applying the Multivariate Alteration Detection method to X1 and X2 for relative radiometric normalization;
computing the change-vector magnitude XM and the spectral-angle information XSA from the radiometrically corrected X1 and X2;
computing the optimal segmentation threshold T from XM using the Bayes principle and the EM algorithm;
selecting pseudo-training regions according to T and XM;
taking the combination of XM and XSA as the input of the kernel FCM, and selecting optimal model parameter values for the kernel FCM model with spatial neighborhood information on the pseudo-training regions;
with the selected optimal model parameter values, determining the changed and unchanged regions of the optical remote sensing images by the kernel FCM method with spatial neighborhood information.
Implementing the invention has the following beneficial effects: the invention jointly uses the multi-temporal change-vector magnitude and the multi-temporal spectral-angle map (Spectral Angle Mapper, SAM) as the input of the kernel FCM, and obtains the final change detection result by the kernel FCM method with spatial neighborhood information. The kernel parameter and the other parameters of the kernel FCM objective function are selected using pseudo-training samples obtained by the CVA technique, so the change detection result is more robust and more accurate.
Brief description of the drawings
To explain the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of an embodiment of the remote sensing image change detection method provided by the invention;
Fig. 2 shows the original high-resolution optical remote sensing images;
Fig. 3 compares the experimental results of the proposed method with those of the other methods.
Embodiment
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the scope of protection of the invention.
Fig. 1 is a flow chart of an embodiment of the remote sensing image change detection method provided by the invention. The invention is a multi-temporal remote sensing image change detection method, mainly applicable to high-resolution optical remote sensing imagery. As shown in Fig. 1, the invention comprises the following steps:
S101, obtain two high-resolution optical remote sensing images X1 and X2.
Here X1 and X2 are two high-resolution optical remote sensing images of the same area acquired at different times.
S102, register the optical images X1 and X2.
Specifically, step S102 comprises:
S1021, perform rough geometric correction of X1 and X2 using the ENVI14.8 remote sensing software.
The rough geometric correction proceeds as follows: (1) display the reference image and the image to be corrected; (2) collect ground control points (GCPs); the GCPs should be evenly distributed over the whole image and their number should be at least 9; (3) compute the error; (4) select a polynomial model; (5) resample the output using bilinear interpolation. The bilinear interpolation is as follows: to find the value of an unknown function f at a point P = (x, y), suppose the values of f are known at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1) and Q22 = (x2, y2). Choosing a coordinate system in which these four points have coordinates (0,0), (0,1), (1,0) and (1,1), the bilinear interpolation formula can be written as:
f(x, y) ≈ f(0,0)(1−x)(1−y) + f(1,0)x(1−y) + f(0,1)(1−x)y + f(1,1)xy.
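The unit-square bilinear formula above can be checked with a few lines of Python (a direct transcription of the formula; the function and argument names are ours):

```python
def bilinear(f00, f10, f01, f11, x, y):
    """Bilinear interpolation on the unit square.

    f00..f11 are the function values at (0,0), (1,0), (0,1), (1,1);
    (x, y) lies inside the square.
    """
    return (f00 * (1 - x) * (1 - y)
            + f10 * x * (1 - y)
            + f01 * (1 - x) * y
            + f11 * x * y)
```

At the four corners the formula reproduces the known values exactly, and at the centre it returns the average of the four corner values.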
S1022, perform fine geometric correction of the roughly corrected X1 and X2 using automatic matching and a triangulation method.
The triangulation method builds a Delaunay triangulation with an incremental algorithm. For each triangle, the affine transformation parameters of its interior are determined from the row/column numbers of its three vertices and the geographic coordinates of the corresponding tie points in the reference image, and the image to be corrected is rectified accordingly to obtain the corrected image.
S103, apply the Multivariate Alteration Detection method (MAD) to the optical images X1 and X2 for relative radiometric normalization.
Specifically, step S103 comprises:
S1031, form linear combinations of the band brightness values of X1 and X2 to obtain a difference image with enhanced change information;
S1032, determine changed and unchanged regions from the difference image by thresholding;
S1033, complete the relative radiometric correction through the mapping equation of the pixel pairs of the two dates in the unchanged regions.
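MAD itself derives the no-change pixel set via canonical correlation analysis; as an illustrative sketch of the final mapping step S1033 only, the following Python fits a per-band least-squares linear mapping on a given no-change mask (the function name and the use of `np.polyfit` are our choices, not the patent's):

```python
import numpy as np

def relative_normalization(band1, band2, nochange_mask):
    """Fit a per-band linear mapping a*x + b from image-2 radiometry to
    image-1 radiometry using only pixels flagged as unchanged, then
    apply it to the whole band.  This is a simplified stand-in for the
    MAD-based normalization step."""
    x = band2[nochange_mask].ravel()
    y = band1[nochange_mask].ravel()
    a, b = np.polyfit(x, y, 1)          # least-squares slope and intercept
    return a * band2 + b
```

If the two dates differ by an exactly linear radiometric shift, the fitted mapping recovers it and the normalized band matches the reference band.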
S104, compute the change-vector magnitude XM and the spectral-angle information XSA from the radiometrically corrected optical images X1 and X2.
Specifically, step S104 comprises:
S1041, compute the change-vector magnitude XM from the corrected X1 and X2:

$$X_M(i,j)=\sqrt{\sum_{b=1}^{B}\big[X_{2b}(i,j)-X_{1b}(i,j)\big]^2}$$

where B is the number of bands of each image, (i, j) are the image coordinates, X1b denotes band b of X1 and X2b denotes band b of X2;
S1042, compute the spectral-angle information XSA from the corrected X1 and X2:

$$X_{SA}(i,j)=\arccos\frac{\sum_{b=1}^{B}X_{1b}(i,j)\,X_{2b}(i,j)}{\sqrt{\sum_{b=1}^{B}X_{1b}(i,j)^2}\,\sqrt{\sum_{b=1}^{B}X_{2b}(i,j)^2}}.$$
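Steps S1041 and S1042 amount to a per-pixel Euclidean norm over bands and a per-pixel spectral angle; a NumPy sketch (the `(rows, cols, bands)` array layout is our assumption):

```python
import numpy as np

def cva_magnitude(x1, x2):
    """Change-vector magnitude per pixel: Euclidean norm of the band
    differences.  x1, x2 have shape (rows, cols, bands)."""
    return np.sqrt(np.sum((x2 - x1) ** 2, axis=-1))

def spectral_angle(x1, x2, eps=1e-12):
    """Spectral angle (radians) between the two spectral vectors at each
    pixel: arccos of the normalized inner product."""
    dot = np.sum(x1 * x2, axis=-1)
    norm = np.linalg.norm(x1, axis=-1) * np.linalg.norm(x2, axis=-1)
    return np.arccos(np.clip(dot / (norm + eps), -1.0, 1.0))
```

For two orthogonal two-band spectra (3, 0) and (0, 4) the magnitude is 5 and the angle is π/2, as expected.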
S105, compute the optimal segmentation threshold T from the change-vector magnitude XM using the Bayes principle and the Expectation-Maximization (EM) algorithm.
Specifically, step S105 comprises:
S1051, estimate with the EM algorithm the mean mn and variance σn of the unchanged class ωn and the mean mc and variance σc of the changed class ωc on the image XM. At each iteration the class statistics are re-estimated from the class posteriors; for the unchanged-class mean, for example,

$$m_n^{t+1}=\frac{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_n)\,p^t(X(i,j)\mid\omega_n)}{p^t(X(i,j))}\,X(i,j)}{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_n)\,p^t(X(i,j)\mid\omega_n)}{p^t(X(i,j))}}$$

where t is the iteration index and a superscript t denotes the value at iteration t (so m_n^{t+1} is the value of mn at iteration t+1; the updates of (σ_n²)^{t+1}, m_c^{t+1} and (σ_c²)^{t+1} are analogous), I and J are the numbers of rows and columns of the image, p(X(i,j)|ωn) is the Gaussian density obeyed by the unchanged class ωn on XM, and p(X(i,j)|ωc) is the Gaussian density obeyed by the changed class ωc;
S1052, according to the Bayes minimum-error principle, obtain the optimal segmentation threshold T by solving

$$p(\omega_n)\,p(T\mid\omega_n)=p(\omega_c)\,p(T\mid\omega_c).$$
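Step S105 can be sketched as a two-component 1-D Gaussian-mixture EM followed by a numeric search for the minimum-error crossing point (the median-split initialisation and the dense-scan threshold search are our simplifications; the patent gives closed-form iteration formulas):

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture on the magnitude
    values x (component 0 = unchanged class, component 1 = changed).
    Initialised by a median split; returns (priors, means, variances)."""
    x = x.ravel()
    med = np.median(x)
    m = np.array([x[x <= med].mean(), x[x > med].mean()])
    v = np.array([x[x <= med].var() + 1e-6, x[x > med].var() + 1e-6])
    p = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior of each class at every pixel
        dens = np.stack([p[k] / np.sqrt(2 * np.pi * v[k])
                         * np.exp(-(x - m[k]) ** 2 / (2 * v[k]))
                         for k in range(2)])
        post = dens / dens.sum(axis=0)
        # M-step: reweighted priors, means, variances
        w = post.sum(axis=1)
        p = w / len(x)
        m = (post * x).sum(axis=1) / w
        v = (post * (x - m[:, None]) ** 2).sum(axis=1) / w + 1e-6
    return p, m, v

def bayes_threshold(p, m, v):
    """Minimum-error threshold: the point between the two means where
    p(n)p(x|n) = p(c)p(x|c), found by a dense numeric scan."""
    t = np.linspace(m[0], m[1], 10001)
    g = [p[k] / np.sqrt(2 * np.pi * v[k])
         * np.exp(-(t - m[k]) ** 2 / (2 * v[k])) for k in range(2)]
    return t[np.argmin(np.abs(g[0] - g[1]))]
```

On a synthetic magnitude histogram with a low unchanged mode and a high changed mode, the recovered threshold falls between the two modes.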
S106, select the pseudo-training regions according to the optimal segmentation threshold T and the change-vector magnitude XM.
Specifically, step S106 comprises:
S1061, select the unchanged-class pseudo-training samples as the pixels with XM ≤ T − δ;
S1062, select the changed-class pseudo-training samples as the pixels with XM ≥ T + δ;
where δ is 15% of the dynamic range of XM.
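A sketch of the S106 selection rule, reading the two inequalities (whose formulas appear only as images in the source) as symmetric offsets of δ around the threshold T:

```python
import numpy as np

def pseudo_training_masks(xm, T):
    """Pick confident pseudo-training regions around the Bayes
    threshold T.  delta is 15% of the dynamic range of the magnitude
    image, as in the patent; the <= / >= forms of the inequalities are
    our reading of the image-elided formulas."""
    delta = 0.15 * (xm.max() - xm.min())
    unchanged = xm <= T - delta   # confidently below the threshold
    changed = xm >= T + delta     # confidently above the threshold
    return unchanged, changed
```

Pixels in the ambiguous band (T − δ, T + δ) belong to neither pseudo-training set and are left for the clustering stage to decide.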
S107, take the combination of XM and XSA as the input of the kernel FCM and select the optimal model parameter values for the kernel FCM model with spatial neighborhood information on the pseudo-training regions.
Specifically, step S107 comprises:
S1071, take the combination of XM and XSA as the input of the kernel FCM and build the kernel FCM model with spatial neighborhood information. In the model, C is the number of clusters, N is the number of samples, uck is the fuzzy membership of sample k to cluster center c, m is the membership weighting exponent, uck ∈ [0, 1] with Σc uck = 1, the parameter α controls the strength of the spatial penalty, and the penalty term acts on the combination of the local-mean image of XM and the local-mean image of XSA;
S1072, set the ranges of the parameter α and the kernel parameter g, and search, using the pseudo-training sample set, for the values of α and g that minimize the variability index Cindex; these are taken as the optimal model parameter values.
Here Cindex = Dindex / kT, where kT is the Kappa coefficient of the model on the pseudo-training set, Nn(α, g) is the number of unchanged pixels of the whole image obtained by minimizing the objective function for given α and g, Nc(α, g) is the number of changed pixels of the whole image, TNn(α, g) is the number of unchanged pixels in the pseudo-training set, and TNc(α, g) is the number of changed pixels in the pseudo-training set.
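The S1072 search is a small exhaustive grid search; a generic skeleton (the grid values are illustrative, and `evaluate` stands in for a full kernel-FCM run returning the variability index Cindex on the pseudo-training set):

```python
from itertools import product

def select_parameters(evaluate,
                      alphas=(0.5, 1.0, 2.0),
                      gs=(0.5, 1.0, 2.0, 4.0)):
    """Exhaustive search over a small (alpha, g) grid, keeping the pair
    with the smallest variability index C_index.  evaluate(alpha, g) is
    expected to run the kernel-FCM model and score it on the
    pseudo-training set."""
    return min(product(alphas, gs), key=lambda p: evaluate(*p))
```

With a toy score function whose minimum sits at (1.0, 2.0), the search returns exactly that grid point.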
S108, with the selected optimal model parameter values, determine the changed and unchanged regions of the optical remote sensing images by the kernel FCM method with spatial neighborhood information.
Specifically, step S108 comprises:
S1081, set the number of clusters C = 2 in the kernel FCM model with spatial neighborhood information; as the initial centers of the unchanged class and the changed class, select the vectors corresponding to the minimum and maximum of the change-vector magnitude XM; set the membership weighting exponent m = 2, let ε be a constant greater than 0, and set the parameter α and the kernel parameter g to the selected optimal model parameter values;
S1082, compute the local-window means of XM and XSA, with the window size set to 3 × 3;
S1083, update the fuzzy partition matrix;
S1084, update the cluster centers;
S1085, repeat the updates of the fuzzy partition matrix and the cluster centers until the cluster centers of two adjacent iterations differ by less than ε;
S1086, determine the final change detection map from the fuzzy partition matrix uck, obtaining the changed and unchanged regions of the optical image.
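The S108 iteration can be sketched as follows. This is a minimal two-class Python/NumPy sketch assuming a Gaussian kernel K(x, v) = exp(−‖x − v‖²/g) and the membership/centre updates of the KFCM_S family (Chen et al.); the patent's own update formulas appear only as images and may differ in detail. `local_mean` implements the 3 × 3 window of step S1082.

```python
import numpy as np

def local_mean(a):
    """3 x 3 local-mean image (edge-padded), as in step S1082."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def kfcm_s_change_map(xm, xsa, alpha=1.0, g=1.0, m=2.0, eps=1e-4, iters=100):
    """Two-class kernel FCM with a spatial penalty on (X_M, X_SA) features."""
    feats = np.stack([xm, xsa], axis=-1).reshape(-1, 2)              # N x 2 samples
    bar = np.stack([local_mean(xm), local_mean(xsa)], axis=-1).reshape(-1, 2)
    # initial centres: feature vectors at the min / max magnitude pixels (S1081)
    v = feats[[np.argmin(xm.ravel()), np.argmax(xm.ravel())]].astype(float)
    for _ in range(iters):
        K = np.exp(-((feats[None] - v[:, None]) ** 2).sum(-1) / g)   # C x N kernel
        Kb = np.exp(-((bar[None] - v[:, None]) ** 2).sum(-1) / g)    # on local means
        d = (1 - K) + alpha * (1 - Kb) + 1e-12                       # kernel distance
        u = d ** (-1.0 / (m - 1))
        u /= u.sum(axis=0)                                           # partition matrix (S1083)
        um = u ** m
        num = (um * K) @ feats + alpha * (um * Kb) @ bar             # centre update (S1084)
        den = (um * (K + alpha * Kb)).sum(axis=1)[:, None]
        v_new = num / den
        if np.linalg.norm(v_new - v) < eps:                          # S1085 stop rule
            v = v_new
            break
        v = v_new
    changed = int(np.argmax(v[:, 0]))        # class with the larger magnitude centre
    return (u.argmax(axis=0) == changed).reshape(xm.shape)           # S1086
```

On a synthetic scene with a square of elevated magnitude and spectral angle, the map recovers the square (up to boundary pixels).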
The effect of the invention can be further illustrated by the following experimental results and analysis:
The experimental data are multi-temporal SPOT high-resolution images of the Littoral area of France; the image size is 400 × 400 and the three bands B1, B2 and B3 are used. To verify the effectiveness of the invention, the proposed change detection method is compared with the following change detection methods:
(1) the EM method based on CVA (CVA-EM) [the method proposed by L. Bruzzone et al. in "Automatic analysis of the difference image for unsupervised change detection", IEEE Transactions on Geoscience and Remote Sensing, 2000, 38(3): 1171-1182];
(2) the FCM classification method with spatial neighborhood information (FCM-S) [the method proposed by Chen Songcan et al. in "Robust Image Segmentation Using FCM With Spatial Constraints Based on New Kernel-Induced Distance Measure", IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, 2004, 34(4): 1907-1916];
(3) the proposed method.
Detection performance is measured by four indexes: the number of false positives FP, the number of missed detections FN, the total number of errors OE, and the Kappa coefficient. The closer FP, FN and OE are to 0, and the closer the Kappa coefficient is to 1, the better the change detection method performs. The detection results are shown in Table 1. As can be seen from Fig. 2, Fig. 3 and Table 1, the proposed detection method outperforms the other two detection methods, which shows that the proposed change detection method is effective.
Table 1. Change detection results on the multi-temporal SPOT5 images of the Littoral area

| Method          | FP   | FN   | OE    | Kappa |
|-----------------|------|------|-------|-------|
| CVA-EM          | 7919 | 3882 | 11801 | 0.705 |
| FCM-S           | 1822 | 6928 | 8750  | 0.737 |
| Proposed method | 2511 | 4689 | 7200  | 0.797 |
| Ideal           | 0    | 0    | 0     | 1     |
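The four indexes of Table 1 can be computed from a binary change map and a reference map as follows (Kappa here is the standard two-class form computed from the confusion-matrix marginals):

```python
import numpy as np

def change_detection_scores(detected, reference):
    """FP, FN, OE and the Kappa coefficient for a binary change map
    against a reference map (True = changed)."""
    fp = int(np.sum(detected & ~reference))   # false alarms
    fn = int(np.sum(~detected & reference))   # missed changes
    oe = fp + fn                              # total errors
    n = detected.size
    po = (n - oe) / n                         # observed agreement
    # expected agreement from the two marginal distributions
    pe = (detected.sum() * reference.sum()
          + (~detected).sum() * (~reference).sum()) / n ** 2
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    return fp, fn, oe, kappa
```

A perfect map scores (0, 0, 0, 1); one missed change out of four pixels with two true changes scores Kappa = 0.5.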
It should be noted that, herein, the terms "comprise", "include" and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device comprising that element.
In the several embodiments provided in this application, it should be understood that the disclosed method may be implemented in other ways. For example, the system embodiments described above are merely illustrative. A skilled person will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the invention. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
- 1. A remote sensing image change detection method, characterized by comprising: obtaining two high-resolution optical remote sensing images X1 and X2 acquired at different times; registering the optical images X1 and X2; applying the Multivariate Alteration Detection method to X1 and X2 for relative radiometric normalization; computing the change-vector magnitude XM and the spectral-angle information XSA from the radiometrically corrected X1 and X2; computing the optimal segmentation threshold T from XM using the Bayes principle and the EM algorithm; selecting pseudo-training regions according to T and XM; taking the combination of XM and XSA as the input of the kernel FCM, and selecting optimal model parameter values for the kernel FCM model with spatial neighborhood information on the pseudo-training regions; with the selected optimal model parameter values, determining the changed and unchanged regions of the optical remote sensing images by the kernel FCM method with spatial neighborhood information; wherein taking the combination of XM and XSA as the input of the kernel FCM and selecting the optimal model parameters on the pseudo-training regions specifically comprises: taking the combination of XM and XSA as the input of the kernel FCM and building the kernel FCM model with spatial neighborhood information, in which C is the number of clusters, N is the number of samples, uck is the fuzzy membership of sample k to cluster center c, m is the membership weighting exponent, uck ∈ [0, 1] with Σc uck = 1, the parameter α controls the strength of the spatial penalty, and the penalty term acts on the combination of the local-mean image of XM and the local-mean image of XSA; setting the ranges of the parameter α and the kernel parameter g, and searching, on the pseudo-training sample set, for the values of α and g that minimize the variability index Cindex as the optimal model parameter values; wherein Cindex = Dindex / kT, kT is the Kappa coefficient of the model on the pseudo-training set, Nn(α, g) is the number of unchanged pixels of the whole image obtained by minimizing the objective function for given α and g, Nc(α, g) is the number of changed pixels of the whole image, TNn(α, g) is the number of unchanged pixels in the pseudo-training set, and TNc(α, g) is the number of changed pixels in the pseudo-training set.
- 2. The remote sensing image change detection method of claim 1, characterized in that registering the optical images X1 and X2 specifically comprises: performing rough geometric correction of X1 and X2 using the ENVI14.8 remote sensing software; and performing fine geometric correction of the roughly corrected X1 and X2 using automatic matching and a triangulation method.
- 3. The remote sensing image change detection method of claim 1, characterized in that applying the Multivariate Alteration Detection method to the optical images X1 and X2 for relative radiometric normalization specifically comprises: forming linear combinations of the band brightness values of X1 and X2 to obtain a difference image with enhanced change information; determining changed and unchanged regions from the difference image by thresholding; and completing the relative radiometric correction through the mapping equation of the pixel pairs of the two dates in the unchanged regions.
- 4. The remote sensing image change detection method of claim 1, characterized in that computing the change-vector magnitude XM and the spectral-angle information XSA from the radiometrically corrected X1 and X2 specifically comprises: computing the change-vector magnitude $X_M(i,j)=\sqrt{\sum_{b=1}^{B}\big[X_{2b}(i,j)-X_{1b}(i,j)\big]^2}$, where B is the number of bands of each image, (i, j) are the image coordinates, X1b denotes band b of X1 and X2b denotes band b of X2; and computing the spectral-angle information $X_{SA}(i,j)=\arccos\dfrac{\sum_{b=1}^{B}X_{1b}(i,j)\,X_{2b}(i,j)}{\sqrt{\sum_{b=1}^{B}X_{1b}(i,j)^2}\,\sqrt{\sum_{b=1}^{B}X_{2b}(i,j)^2}}$.
- 5. The remote sensing image change detection method of claim 1, characterized in that computing the optimal segmentation threshold T from XM using the Bayes principle and the EM algorithm specifically comprises: estimating with the EM algorithm the mean mn and variance σn of the unchanged class ωn and the mean mc and variance σc of the changed class ωc on the image XM, wherein

$$m_n^{t+1}=\frac{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_n)\,p^t(X(i,j)\mid\omega_n)}{p^t(X(i,j))}\,X(i,j)}{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_n)\,p^t(X(i,j)\mid\omega_n)}{p^t(X(i,j))}}$$

$$(\sigma_n^2)^{t+1}=\frac{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_n)\,p^t(X(i,j)\mid\omega_n)}{p^t(X(i,j))}\,\big[X(i,j)-m_n^t\big]^2}{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_n)\,p^t(X(i,j)\mid\omega_n)}{p^t(X(i,j))}}$$

$$m_c^{t+1}=\frac{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_c)\,p^t(X(i,j)\mid\omega_c)}{p^t(X(i,j))}\,X(i,j)}{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_c)\,p^t(X(i,j)\mid\omega_c)}{p^t(X(i,j))}}$$

$$(\sigma_c^2)^{t+1}=\frac{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_c)\,p^t(X(i,j)\mid\omega_c)}{p^t(X(i,j))}\,\big[X(i,j)-m_c^t\big]^2}{\sum_{X(i,j)\in X_M}\dfrac{p^t(\omega_c)\,p^t(X(i,j)\mid\omega_c)}{p^t(X(i,j))}}$$

where t is the iteration index and a superscript t denotes the value at iteration t, and

$$p^{t+1}(\omega_n)=\frac{1}{IJ}\sum_{X(i,j)\in X_M}\frac{p^t(\omega_n)\,p^t(X(i,j)\mid\omega_n)}{p^t(X(i,j))}$$
<mo>|</mo> <msub> <mi>&omega;</mi> <mi>n</mi> </msub> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mrow> <msup> <mi>p</mi> <mi>t</mi> </msup> <mrow> <mo>(</mo> <mi>X</mi> <mo>(</mo> <mi>i</mi> <mo>,</mo> <mi>j</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> </mfrac> </mrow> <mrow> <mi>I</mi> <mi>J</mi> </mrow> </mfrac> <mo>,</mo> <msup> <mi>p</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mrow> <mo>(</mo> <msub> <mi>&omega;</mi> <mi>c</mi> </msub> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <munder> <mo>&Sigma;</mo> <mrow> <mi>X</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>,</mo> <mi>j</mi> <mo>)</mo> </mrow> <mo>&Element;</mo> <msub> <mi>X</mi> <mi>M</mi> </msub> </mrow> </munder> <mfrac> <mrow> <msup> <mi>p</mi> <mi>t</mi> </msup> <mrow> <mo>(</mo> <msub> <mi>&omega;</mi> <mi>c</mi> </msub> <mo>)</mo> </mrow> <msup> <mi>p</mi> <mi>t</mi> </msup> <mrow> <mo>(</mo> <mi>X</mi> <mo>(</mo> <mi>i</mi> <mo>,</mo> <mi>j</mi> <mo>|</mo> <msub> <mi>&omega;</mi> <mi>c</mi> </msub> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mrow> <msup> <mi>p</mi> <mi>t</mi> </msup> <mrow> <mo>(</mo> <mi>X</mi> <mo>(</mo> <mi>i</mi> <mo>,</mo> <mi>j</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> </mfrac> </mrow> <mrow> <mi>I</mi> <mi>J</mi> </mrow> </mfrac> <mo>,</mo> </mrow>I and J represents the line number and columns of image respectively,Represent XMShadow Class ω is not changed as uppernThe Gaussian Profile of obedience,Represent XMOn image Change class ωcThe Gaussian Profile of obedience;According to Bayes minimum error principles, solution formulaObtain optimum segmentation threshold value T.
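The EM updates and Bayes thresholding of claim 5 can be sketched as follows. This is a minimal NumPy illustration of fitting a two-Gaussian mixture to the difference magnitudes and locating the minimum-error crossing point, not the patented implementation; the mean-based initialisation and the grid search for the crossing are assumptions.

```python
import numpy as np

def em_two_gaussians(xm, iters=50):
    """Fit a two-component 1-D Gaussian mixture (unchanged/changed classes)
    to the difference-magnitude values xm using the EM updates of claim 5."""
    x = xm.ravel().astype(float)
    # Initialise the two components from the lower/upper halves of the data.
    t0 = x.mean()
    mn, mc = x[x <= t0].mean(), x[x > t0].mean()
    sn, sc = x[x <= t0].std() + 1e-6, x[x > t0].std() + 1e-6
    pn = pc = 0.5
    gauss = lambda a, m, s: np.exp(-(a - m) ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)
    for _ in range(iters):
        wn = pn * gauss(x, mn, sn)
        wc = pc * gauss(x, mc, sc)
        total = wn + wc + 1e-12
        rn, rc = wn / total, wc / total        # posterior weights w^t(omega)
        mn = (rn * x).sum() / rn.sum()         # mean updates
        mc = (rc * x).sum() / rc.sum()
        sn = np.sqrt((rn * (x - mn) ** 2).sum() / rn.sum()) + 1e-6  # variance updates
        sc = np.sqrt((rc * (x - mc) ** 2).sum() / rc.sum()) + 1e-6
        pn, pc = rn.mean(), rc.mean()          # priors: sum of weights over IJ pixels
    return (pn, mn, sn), (pc, mc, sc)

def bayes_threshold(params_n, params_c, grid):
    """Pick T where the prior-weighted class densities cross (Bayes minimum error)."""
    (pn, mn, sn), (pc, mc, sc) = params_n, params_c
    gauss = lambda a, m, s: np.exp(-(a - m) ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)
    diff = np.abs(pn * gauss(grid, mn, sn) - pc * gauss(grid, mc, sc))
    # The crossing of interest lies between the two class means.
    mask = (grid > min(mn, mc)) & (grid < max(mn, mc))
    return grid[mask][np.argmin(diff[mask])]
```

On well-separated bimodal data the fitted means straddle the recovered threshold, which is the property claim 6 relies on when carving out pseudo-training regions around $T$.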
- 6. The remote sensing image change detection method of claim 1, wherein selecting the pseudo-training sample regions according to the optimal segmentation threshold $T$ and the difference vector magnitude $X_M$ specifically comprises: selecting the unchanged-class pseudo-training set as the pixels with $X_M(i,j) < T - \delta$, and the changed-class pseudo-training set as the pixels with $X_M(i,j) > T + \delta$, where $\delta$ is 15% of the dynamic range of $X_M$.
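The pseudo-training selection of claim 6 can be sketched as below. Note that the claim's selection formulas appear only as images in the original patent; the $T \pm \delta$ inequalities used here are an assumption consistent with the stated 15%-of-dynamic-range margin.

```python
import numpy as np

def select_pseudo_training(xm, T, margin_frac=0.15):
    """Select confident pseudo-training pixels around the optimal threshold T.
    delta is 15% of the dynamic range of xm (claim 6); the T +/- delta
    inequalities are an assumption, as the claim's formula images are not
    reproduced in the text."""
    delta = margin_frac * (xm.max() - xm.min())
    unchanged = xm < (T - delta)   # confidently unchanged pseudo-labels
    changed = xm > (T + delta)     # confidently changed pseudo-labels
    return unchanged, changed
```

The margin leaves an unlabeled band of width $2\delta$ around $T$, so that only pixels the threshold classifies with high confidence seed the subsequent clustering.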
- 7. The remote sensing image change detection method of claim 1, wherein determining the changed and unchanged regions of the high-resolution optical remote sensing image from the selected optimal model parameter values, using kernel FCM combined with spatial neighborhood information, specifically comprises: setting the number of clusters $C = 2$ in the kernel-FCM-with-spatial-neighborhood model, and taking as the initial centers of the unchanged and changed classes the vectors corresponding to the minimum and maximum of the difference vector magnitude $X_M$; setting the membership weighting exponent $m = 2$, letting $\varepsilon$ be a constant greater than 0, and setting the parameter $\alpha$ and the kernel parameter $g$ to the selected optimal model parameter values; computing the local-window means of $X_M$ and $X_{SA}$, with the window size set to $3 \times 3$; updating the fuzzy partition matrix; updating the cluster centers; repeating the fuzzy-partition and cluster-center updates until the change in the cluster centers between two adjacent iterations is less than $\varepsilon$; and determining the final change detection map from the fuzzy partition matrix $u_{ck}$, thereby obtaining the changed and unchanged regions of the optical remote sensing image.
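The clustering loop of claim 7 can be sketched as follows. The claim's membership and center update formulas appear only as images in the original patent, so this sketch uses the standard kernel-FCM-with-spatial-constraint (KFCM_S-style) update forms as an assumption; the kernel parameter value and the 1-D feature simplification are likewise illustrative.

```python
import numpy as np

def kfcm_spatial(x, xbar, g=0.005, alpha=1.0, m=2.0, eps=1e-4, iters=100):
    """Two-class kernel FCM with a spatial-neighborhood term (claim 7 sketch).
    x: per-pixel feature values; xbar: 3x3 local-mean filtered values.
    Update forms follow the standard KFCM_S scheme, assumed here because the
    claim's formula images are not reproduced in the text."""
    K = lambda a, v: np.exp(-g * (a - v) ** 2)   # Gaussian kernel
    v = np.array([x.min(), x.max()], float)      # initial centers: data min/max
    for _ in range(iters):
        # Kernel-space distance to each center plus alpha-weighted spatial term.
        d = (1 - K(x[None, :], v[:, None])) + alpha * (1 - K(xbar[None, :], v[:, None]))
        d = np.maximum(d, 1e-12)
        u = d ** (-1.0 / (m - 1))
        u /= u.sum(axis=0, keepdims=True)        # fuzzy partition matrix u_ck
        um = u ** m
        kx = K(x[None, :], v[:, None])
        kxb = K(xbar[None, :], v[:, None])
        v_new = (um * (kx * x + alpha * kxb * xbar)).sum(1) \
                / (um * (kx + alpha * kxb)).sum(1)
        if np.abs(v_new - v).max() < eps:        # stop when centers stabilise
            v = v_new
            break
        v = v_new
    return u, v
```

Thresholding the membership matrix (here, `u.argmax(axis=0)`) yields the final change map, with the spatial term pulling isolated noisy pixels toward the label of their neighborhood.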
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510742564.7A CN105405133B (en) | 2015-11-04 | 2015-11-04 | A kind of remote sensing image variation detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105405133A CN105405133A (en) | 2016-03-16 |
CN105405133B true CN105405133B (en) | 2018-01-19 |
Family
ID=55470600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510742564.7A Expired - Fee Related CN105405133B (en) | 2015-11-04 | 2015-11-04 | A kind of remote sensing image variation detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105405133B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106372612A (en) * | 2016-09-09 | 2017-02-01 | 河海大学 | Multi-temporal remote sensing image change detection method combining FCM with MRF model |
CN106384352A (en) * | 2016-09-09 | 2017-02-08 | 河海大学 | Multi-temporal remote sensing image change detection method based on fusion strategy and FCM |
CN109934799B (en) * | 2016-09-09 | 2021-06-25 | 南京工程学院 | Multi-time-phase difference image module value calculation and change detection method |
CN106373120B (en) * | 2016-09-09 | 2019-01-08 | 河海大学 | Multi-temporal remote sensing image change detecting method based on Non-negative Matrix Factorization and core FCM |
CN107248172A (en) * | 2016-09-27 | 2017-10-13 | 中国交通通信信息中心 | A kind of remote sensing image variation detection method based on CVA and samples selection |
CN106646469B (en) * | 2016-12-21 | 2019-01-29 | 中国科学院遥感与数字地球研究所 | SAR ship detection optimization method based on VC Method |
CN107346549B (en) * | 2017-06-09 | 2020-04-14 | 中国矿业大学 | Multi-class change dynamic threshold detection method utilizing multiple features of remote sensing image |
CN109146933B (en) * | 2017-06-28 | 2020-12-01 | 中国石油化工股份有限公司 | Multi-scale digital core modeling method and computer-readable storage medium |
CN107688777B (en) * | 2017-07-21 | 2022-11-18 | 同济大学 | Urban green land extraction method for collaborative multi-source remote sensing image |
CN107480634A (en) * | 2017-08-12 | 2017-12-15 | 天津市测绘院 | A kind of geographical national conditions ground mulching monitoring method based on multistage target classification |
CN107481235A (en) * | 2017-08-24 | 2017-12-15 | 河海大学 | The multi-temporal remote sensing image change detecting method that a kind of mathematical morphology filter converts with reference to card side |
CN107578040A (en) * | 2017-09-30 | 2018-01-12 | 中南大学 | A kind of house change detecting method based on Pulse Coupled Neural Network |
CN107992891B (en) * | 2017-12-01 | 2022-01-25 | 西安电子科技大学 | Multispectral remote sensing image change detection method based on spectral vector analysis |
CN110232302B (en) * | 2018-03-06 | 2020-08-25 | 香港理工大学深圳研究院 | Method for detecting change of integrated gray value, spatial information and category knowledge |
CN109191503B (en) * | 2018-08-23 | 2021-08-27 | 河海大学 | Remote sensing image change detection method and system based on conditional random field |
CN109300115B (en) * | 2018-09-03 | 2021-11-30 | 河海大学 | Object-oriented multispectral high-resolution remote sensing image change detection method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1760888A (en) * | 2005-11-03 | 2006-04-19 | 复旦大学 | Method for recognizing change of earth's surface by using satellite SAR carried images at multiple time phases |
CN101950361A (en) * | 2010-09-06 | 2011-01-19 | 中国科学院遥感应用研究所 | Adaptive extraction method of remote sensing image thematic information based on spectrum matching degree |
CN103150580A (en) * | 2013-03-18 | 2013-06-12 | 武汉大学 | Method and device for Hyperspectral image semi-supervised classification |
CN104200458A (en) * | 2014-07-30 | 2014-12-10 | 浙江工业大学 | MeanShift based high-resolution remote sensing image segmentation distance measurement optimization method |
CN104751166A (en) * | 2013-12-30 | 2015-07-01 | 中国科学院深圳先进技术研究院 | Spectral angle and Euclidean distance based remote-sensing image classification method |
Non-Patent Citations (4)
Title |
---|
An Automatic Unsupervised Method Based on Context-Sensitive Spectral Angle Mapper for Change Detection of Remote Sensing Images; Tauqir Ahmed Moughal and Fusheng Yu; ADMA 2014; 2014-12-31; pp. 151-162 * |
A classification method based on automatic weighted fusion of spectral angle and spectral distance; Yu Xianchuan et al.; Journal of Geology; 2012-03-31; Vol. 36, No. 1 * |
Radiometric normalization of hyperspectral images based on spectral angle and Euclidean distance; Sun Yanli et al.; Journal of Remote Sensing; 2015-02-12; Vol. 19, No. 4; Sections 3.1.1-3.1.2 * |
Unsupervised multichannel remote sensing image change detection based on fuzzy C-means clustering and neighborhood analysis; Zhao Lei et al.; Journal of Data Acquisition and Processing; 2011-07-31; Vol. 26, No. 4; p. 396, left column para. 3, right column para. 2 * |
Also Published As
Publication number | Publication date |
---|---|
CN105405133A (en) | 2016-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105405133B (en) | A kind of remote sensing image variation detection method | |
Nutkiewicz et al. | Data-driven Urban Energy Simulation (DUE-S): A framework for integrating engineering simulation and machine learning methods in a multi-scale urban energy modeling workflow | |
CN105389817B (en) | A kind of two phase remote sensing image variation detection methods | |
Gilleland et al. | Intercomparison of spatial forecast verification methods | |
Szcześniak et al. | A method for using street view imagery to auto-extract window-to-wall ratios and its relevance for urban-level daylighting and energy simulations | |
CN105551031B (en) | Multi-temporal remote sensing image change detecting method based on FCM and evidence theory | |
CN102222313B (en) | Urban evolution simulation structure cell model processing method based on kernel principal component analysis (KPCA) | |
Ranjan et al. | Review of preprocessing methods for univariate volatile time-series in power system applications | |
Biard et al. | Automated detection of weather fronts using a deep learning neural network | |
CN107481235A (en) | The multi-temporal remote sensing image change detecting method that a kind of mathematical morphology filter converts with reference to card side | |
CN106447653B (en) | The multi-temporal remote sensing image change detecting method that card side based on space constraint converts | |
Peeters et al. | Automated recognition of urban objects for morphological urban analysis | |
Huang et al. | Automatic building change image quality assessment in high resolution remote sensing based on deep learning | |
CN106372612A (en) | Multi-temporal remote sensing image change detection method combining FCM with MRF model | |
CN113657324A (en) | Urban functional area identification method based on remote sensing image ground object classification | |
Kim et al. | Automated classification of thermal defects in the building envelope using thermal and visible images | |
CN106373120B (en) | Multi-temporal remote sensing image change detecting method based on Non-negative Matrix Factorization and core FCM | |
Christiansen | Analysis of ensemble mean forecasts: The blessings of high dimensionality | |
CN105354845B (en) | A kind of semi-supervised change detecting method of remote sensing image | |
CN106096622A (en) | Semi-supervised Classification of hyperspectral remote sensing image mask method | |
Palacios-Rodríguez et al. | Generalized Pareto processes for simulating space-time extreme events: an application to precipitation reanalyses | |
Fan | Research on deep learning energy consumption prediction based on generating confrontation network | |
CN102609721B (en) | Remote sensing image clustering method | |
CN106384352A (en) | Multi-temporal remote sensing image change detection method based on fusion strategy and FCM | |
Arbia et al. | Contextual classification in image analysis: an assessment of accuracy of ICM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20180119 |