CN104794497B - Multi-center fitting method for hyperspectral image classification - Google Patents

Multi-center fitting method for hyperspectral image classification

Info

Publication number
CN104794497B
CN104794497B
Authority
CN
China
Prior art keywords
classification
division
sample
class
sample number
Prior art date
Legal status
Active
Application number
CN201510227125.2A
Other languages
Chinese (zh)
Other versions
CN104794497A (en)
Inventor
刘治
唐波
肖晓燕
郑成云
李晓梅
聂明钰
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN201510227125.2A
Publication of CN104794497A
Application granted
Publication of CN104794497B
Status: Active

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a multi-center fitting method for hyperspectral image classification, comprising: randomly selecting labeled samples of known classes and building a training sample set X and a label matrix y; initializing the control parameters: the maximum number of splits K per class, the deviation splitting threshold σ_t, and the minimum per-sub-class sample count after splitting, N_min; computing each class's fitting center c_i and mean deviation σ̄_i; making the splitting decision; and evaluating split validity by comparing the sample counts of the sub-classes produced by a split with N_min. If both sub-classes contain more than N_min samples, the split is valid and splitting continues until the class converges; if either sub-class contains fewer than N_min samples, the class is declared converged and splitting ends. Aimed at the mixed-pixel problem in hyperspectral classification, the invention partitions decision regions in the multi-dimensional feature space more accurately.

Description

Multi-center fitting method for hyperspectral image classification
Technical field
The present invention relates to the field of hyperspectral image processing, and in particular to a multi-center fitting method for hyperspectral classification.
Background technology
Hyperspectral imaging builds on multispectral imaging: an imaging spectrometer images the target object continuously across tens or hundreds of spectral bands, from the ultraviolet to the near infrared. While acquiring the spatial image of the object, it also records the spectral information of the measured target. Spectral imaging technology features a very large number of bands, high spectral resolution, narrow bands, a wide spectral range, and the unification of image and spectrum. Its advantages are rich image information, comparatively high resolution, and many available data description models. Owing to its excellent performance in detection, it is widely used in practice.
In recent years, with the widespread application of hyperspectral imaging technology, hyperspectral image analysis and processing techniques have developed rapidly, and the hyperspectral classification problem has attracted much attention. Hyperspectral images have inherent defects, such as excessively high dimensionality, large data redundancy, and mixed pixels, which seriously hinder the development of hyperspectral techniques. A mixed pixel arises because the real spatial region corresponding to a single pixel during hyperspectral imaging rarely contains only one material; the information recorded in that pixel is therefore a superposition of the information of all targets in the region. The existence of mixed pixels severely degrades the classification accuracy of remote sensing. Moreover, the reference samples for supervised hyperspectral classification are very limited, generally chosen from known regions of the hyperspectral image, so the training data contain a high degree of spectral mixing. Classical hyperspectral classification algorithms, such as spectral angle mapping (SAM) and k-nearest neighbors (KNN), all need to fit the training set; because of mixed pixels, the per-class sample mean center underfits, i.e., a single center fits the overall training set poorly, which directly leads to low classification accuracy.
Summary of the invention
To remedy the deficiencies of the prior art, the invention discloses a multi-center fitting method for hyperspectral image classification. Aimed at small-sample learning and the mixed-pixel problem, constrained splitting yields multiple fitting centers per class, achieving a global multi-center fit while effectively improving the classification accuracy of the whole classification system.
To achieve the above object, the concrete scheme of the invention is as follows:
A multi-center fitting method for hyperspectral image classification comprises the following steps:
Step (1): randomly select labeled samples of known classes and build the training sample set X and label matrix y;
Step (2): initialize the control parameters: the maximum number of splits K per class, the deviation splitting threshold σ_t, and the minimum per-sub-class sample count after splitting, N_min;
Step (3): compute each class's fitting center c_i and mean deviation σ̄_i;
Step (4): splitting decision: if the mean deviation σ̄_i of a class exceeds the set deviation splitting threshold σ_t, the class needs to be split; k-means clustering produces the sub-classes of the split, which are given distinct labels; otherwise splitting ends;
Step (5): split validity evaluation: compare the sample counts of the sub-classes produced in step (4) with N_min. If both sub-classes contain more than N_min samples, the split is valid, and steps (3)-(5) are repeated on the sub-classes until the class converges. If either sub-class contains fewer than N_min samples, the class has converged and splitting ends; after splitting ends, a set of post-split sub-classes is obtained;
Step (6): compute the fitting center c_ij of each post-split sub-class, where i = 1, 2, ..., C, C is the number of classes, and j indexes the sub-classes of class i.
The concrete method by which step (1) builds the training sample set X and label matrix y is:
Use bootstrap sampling to randomly draw the training samples; that is, for a specific class i, i ∈ {1, 2, ..., C}, where C is the number of classes, sample randomly with replacement to obtain the training sample set and label matrix

$X_i = [x_{i1}, x_{i2}, \ldots, x_{iN_i}], \quad y_i = [y_{i1}, y_{i2}, \ldots, y_{iN_i}]$

where $x_{ij} \in \mathbb{R}^l$ is the j-th sample in class i, l is the number of features, y_ij = i, i = 1, 2, ..., C, j = 1, 2, ..., N_i, N_i is the total number of samples in class i, and $\mathbb{R}$ is the set of real numbers.
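For illustration, a minimal Python sketch of the bootstrap draw of step (1), assuming the spectra of one class are stacked in a NumPy array; the function name and parameters are illustrative, not part of the patent:

```python
import numpy as np

def bootstrap_sample(X_class, n_train, seed=None):
    """Draw n_train training samples for one class by sampling with
    replacement, as in the bootstrap sampling of step (1)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_class), size=n_train)  # indices drawn with replacement
    return X_class[idx]  # shape (n_train, l), l = number of features
```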
The concrete method by which step (3) computes each class's fitting center c_i is:
The fitting center of a class is the mean of all samples in the class, as shown below:

$c_i = \frac{1}{N_i} \sum_{j=1}^{N_i} x_{ij} \quad (1)$

where c_i is the fitting center of class i, x_ij is the j-th sample in class i, N_i is the total number of samples in class i, and i = 1, 2, ..., C with C the number of classes.
Each class's mean deviation σ̄_i is computed as follows:

$\sigma_j = \| x_{ij} - c_i \|_2$

$\bar{\sigma}_i = \frac{1}{N_i} \sum_{j=1}^{N_i} \sigma_j$

where σ_j is the deviation of the j-th sample x_ij of class i from the class center c_i, j = 1, 2, ..., N_i, N_i is the total number of samples in class i, σ̄_i is the mean deviation of class i, and i = 1, 2, ..., C with C the number of classes.
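A minimal sketch of step (3), following the reconstruction above (σ_j taken as the Euclidean norm of the residual; the helper name is hypothetical and continues the NumPy conventions of the earlier sketch):

```python
def fit_center_and_deviation(X_i):
    """Fitting center c_i (equation (1)) and mean deviation of one class.

    X_i : (N_i, l) array holding the class's samples row-wise.
    """
    c_i = X_i.mean(axis=0)                      # c_i = (1/N_i) * sum_j x_ij
    sigma = np.linalg.norm(X_i - c_i, axis=1)   # sigma_j = ||x_ij - c_i||_2
    return c_i, sigma.mean()                    # (center, mean deviation)
```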
The method of the splitting decision in step (4) is:
Step (4-1): compare the mean deviation σ̄_i of class i from step (3) with the deviation splitting threshold σ_t from step (2); if σ̄_i > σ_t and the current number of splits is less than the maximum number of splits K, the class needs an intra-class binary split; otherwise the class has converged and the splitting process stops;
Step (4-2): the intra-class binary split uses k-means clustering with k = 2: all samples of class i are clustered into two groups, splitting class i into two sub-classes, which are given distinct class labels and denoted C_sub1 and C_sub2.
The k-means clustering method of step (4-2) is:
Step (4-2-1): for class i, randomly select any two samples of class i as the initial cluster centers, denoted μ_1 and μ_2, with μ_1 corresponding to sub-class C_sub1 and μ_2 corresponding to sub-class C_sub2;
Step (4-2-2): for every sample x_ij, j = 1, 2, ..., N_i, compute the squared Euclidean distances from x_ij to the two cluster centers μ_1, μ_2 of step (4-2-1):

$d_1 = \| x_{ij} - \mu_1 \|^2, \quad d_2 = \| x_{ij} - \mu_2 \|^2$

where d_1 is the squared Euclidean distance from x_ij to cluster center μ_1 and d_2 is the squared Euclidean distance from x_ij to cluster center μ_2; if d_1 ≤ d_2, sample x_ij is assigned to sub-class C_sub1, otherwise to sub-class C_sub2;
Step (4-2-3): update the cluster centers μ_1, μ_2 as follows:

$\mu_1 = \frac{1}{N_{\mathrm{sub1}}} \sum_{x \in C_{\mathrm{sub1}}} x, \quad \mu_2 = \frac{1}{N_{\mathrm{sub2}}} \sum_{x \in C_{\mathrm{sub2}}} x$

where N_sub1 is the number of samples in sub-class C_sub1, N_sub2 is the number of samples in sub-class C_sub2, $x \in \mathbb{R}^l$ is a sample, and l is the number of features;
Step (4-2-4): repeat steps (4-2-2) and (4-2-3) until convergence, obtaining the two sub-classes C_sub1 and C_sub2.
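A compact sketch of the two-way k-means of step (4-2): random seeding from the class's own samples, squared-Euclidean assignment with ties going to C_sub1, and mean updates. The empty-cluster corner case is ignored for brevity, and the function name is illustrative:

```python
def binary_kmeans(X, seed=None, max_iter=100):
    """Split one class into two sub-classes with k-means, k = 2.

    Returns a 0/1 label per sample (0 -> C_sub1, 1 -> C_sub2)
    and the two cluster centers mu_1, mu_2.
    """
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=2, replace=False)]     # step (4-2-1): two random seeds
    for _ in range(max_iter):
        # step (4-2-2): squared distances to both centers; argmin breaks ties toward C_sub1
        d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # step (4-2-3): recompute each center as the mean of its sub-class
        new_mu = np.stack([X[labels == k].mean(axis=0) for k in (0, 1)])
        if np.allclose(new_mu, mu):                       # step (4-2-4): converged
            return labels, new_mu
        mu = new_mu
    return labels, mu
```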
The method of the split validity evaluation in step (5) is:
Compare the sample counts of the sub-classes C_sub1, C_sub2 from step (4) with the minimum per-sub-class sample count N_min. If both sub-classes contain more than N_min samples, the split is valid, and steps (3)-(5) are repeated on the two sub-classes until convergence. If either sub-class contains fewer than N_min samples, or the maximum number of splits K has been reached, the class has converged and splitting ends. After splitting ends, a set of post-split sub-classes is obtained.
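Putting steps (3) through (5) together, one plausible sketch of the constrained splitting loop for a single class, reusing the helpers sketched above; sigma_t, n_min, and k_max are illustrative parameter names standing in for σ_t, N_min, and K:

```python
def multicenter_fit(X_i, sigma_t, n_min, k_max):
    """Return the list of fitting centers c_ij for one class.

    A class (or sub-class) keeps splitting while its mean deviation
    exceeds sigma_t, fewer than k_max splits have been made, and both
    halves of the split would keep at least n_min samples (step (5)).
    """
    centers, queue, n_splits = [], [X_i], 0
    while queue:
        X = queue.pop()
        c, dev = fit_center_and_deviation(X)
        if dev <= sigma_t or n_splits >= k_max:   # step (4-1): class converged
            centers.append(c)
            continue
        labels, _ = binary_kmeans(X)              # step (4-2): binary split
        sub1, sub2 = X[labels == 0], X[labels == 1]
        if min(len(sub1), len(sub2)) < n_min:     # step (5): split invalid, keep parent
            centers.append(c)
            continue
        n_splits += 1                             # split accepted
        queue += [sub1, sub2]                     # repeat steps (3)-(5) on the sub-classes
    return centers                                # fitting centers c_ij of step (6)
```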
Step (6) computes the fitting center c_ij of each post-split sub-class, where i = 1, 2, ..., C, C is the number of classes, and j indexes the sub-classes of class i; the fitting center of each sub-class is the mean of all samples in that sub-class.
Beneficial effects of the invention:
(1) The core idea of the invention is to fit multiple centers per class under preset constraints, minimizing the intra-class mean deviation, with good stability;
(2) By setting the maximum number of splits and the splitting threshold, the invention makes class splitting controllable;
(3) Aimed at the mixed-pixel problem in hyperspectral classification, the invention partitions decision regions in the multi-dimensional feature space more accurately;
(4) Classes progressively split to convergence under the condition constraints, yielding multiple fitting centers and effectively improving the classification accuracy of the whole classification system.
Brief description of the drawings
Fig. 1 is a flow chart of the multi-center fitting method in hyperspectral classification of the invention;
Fig. 2a shows the single mean-center fitting result for three classes in a two-dimensional space;
Fig. 2b shows the multi-center fitting result for the three classes in the two-dimensional space.
Embodiment:
The invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the procedure of the multi-center fitting method in hyperspectral image classification is:
(1) Bootstrap sampling of training samples: randomly draw samples with replacement from the labeled samples as training samples.
(2) Parameter initialization: the parameters comprise the maximum number of splits K per class, the deviation splitting threshold σ_t, and the minimum per-sub-class sample count after splitting, N_min.
(3) Compute each class's fitting center c_i and mean deviation σ̄_i.
(4) Compare the mean deviation σ̄_i of class i from step (3) with the deviation splitting threshold σ_t initialized in step (2); if σ̄_i > σ_t and the current number of splits is less than the maximum K, perform k-means clustering with k = 2, splitting the current class into two sub-classes with distinct class labels; otherwise the class has converged and the splitting process stops.
(5) Split validity evaluation: for the two sub-classes produced in step (4), if both contain more than N_min samples, the split is valid, and steps (3)-(5) are repeated on the sub-classes until convergence. If either sub-class contains fewer than N_min samples, or the maximum number of splits K has been reached, the class has converged and splitting ends. After splitting ends, a set of post-split sub-classes is obtained.
(6) Compute the fitting center c_ij of each post-split sub-class, i = 1, 2, ..., C, C being the number of classes and j the sub-class index within class i.
The detailed implementations of step (1) (building the training set), step (3) (computing the fitting centers and mean deviations), step (4) (the splitting decision and k-means clustering), step (5) (split validity evaluation), and step (6) (computing the post-split sub-class centers) are as described in the summary above.
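By way of a hypothetical end-to-end usage of the sketches above (synthetic data and illustrative parameter values; the nearest-center decision rule is one natural way to use the fitted centers, not spelled out in the patent):

```python
rng = np.random.default_rng(0)
# Synthetic stand-in for one class of 103-band spectra with two spectral modes,
# mimicking a mixed-pixel class that a single mean center would underfit.
X_class = np.vstack([rng.normal(0.0, 1.0, (100, 103)),
                     rng.normal(5.0, 1.0, (100, 103))])

train = bootstrap_sample(X_class, n_train=150, seed=1)
centers = multicenter_fit(train, sigma_t=12.0, n_min=10, k_max=4)  # ~2 centers expected

pixel = rng.normal(0.0, 1.0, 103)
nearest = min(centers, key=lambda c: np.linalg.norm(pixel - c))    # nearest fitting center
```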
Splitting results for three classes in a two-dimensional space: Fig. 2a shows the per-class single mean-center fitting result, and Fig. 2b shows the multi-center fitting result.
Although the specific embodiments of the invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the invention. Those skilled in the art should understand that, on the basis of the technical solution of the invention, various modifications or variations that can be made without creative effort still fall within the scope of protection of the invention.

Claims (6)

1. A multi-center fitting method for hyperspectral image classification, characterized by comprising the following steps:
Step (1): randomly selecting labeled samples of known classes and building a training sample set X and a label matrix y;
Step (2): initializing control parameters: setting the maximum number of splits K per class, the deviation splitting threshold σ_t, and the minimum per-sub-class sample count after splitting, N_min;
Step (3): computing each class's fitting center c_i and mean deviation σ̄_i;
Step (4): splitting decision: if the mean deviation σ̄_i of a class exceeds the set deviation splitting threshold σ_t, the class needs to be split; k-means clustering produces the sub-classes of the split, which are given distinct labels; otherwise splitting ends;
Step (5): split validity evaluation: comparing the sample counts of the sub-classes produced in step (4) with N_min; if both sub-classes contain more than N_min samples, the split is valid, and steps (3)-(5) are repeated on the sub-classes until the class converges; if either sub-class contains fewer than N_min samples, the class has converged and splitting ends; after splitting ends, a set of post-split sub-classes is obtained;
Step (6): computing the fitting center c_ij of each post-split sub-class, where i = 1, 2, ..., C, C is the number of classes, and j indexes the sub-classes of class i;
wherein the method of the splitting decision in step (4) is:
Step (4-1): comparing the mean deviation σ̄_i of class i from step (3) with the deviation splitting threshold σ_t from step (2); if σ̄_i > σ_t and the current number of splits is less than the maximum number of splits K, the class needs an intra-class binary split; otherwise the class has converged and the splitting process stops;
Step (4-2): the intra-class binary split uses k-means clustering with k = 2: all samples of class i are clustered into two groups, splitting class i into two sub-classes, which are given distinct class labels and denoted C_sub1 and C_sub2;
wherein the k-means clustering method of step (4-2) is:
Step (4-2-1): for class i, randomly selecting any two samples of class i as initial cluster centers, denoted μ_1 and μ_2, with μ_1 corresponding to sub-class C_sub1 and μ_2 corresponding to sub-class C_sub2;
Step (4-2-2): for every sample x_ij, j = 1, 2, ..., N_i, computing the squared Euclidean distances from x_ij to the two cluster centers μ_1, μ_2 of step (4-2-1):

$d_1 = \| x_{ij} - \mu_1 \|^2, \quad d_2 = \| x_{ij} - \mu_2 \|^2$

where d_1 is the squared Euclidean distance from x_ij to cluster center μ_1 and d_2 is the squared Euclidean distance from x_ij to cluster center μ_2; if d_1 ≤ d_2, sample x_ij is assigned to sub-class C_sub1, otherwise to sub-class C_sub2;
Step (4-2-3): updating the cluster centers μ_1, μ_2 as follows:

$\mu_1 = \frac{1}{N_{\mathrm{sub1}}} \sum_{x \in C_{\mathrm{sub1}}} x, \quad \mu_2 = \frac{1}{N_{\mathrm{sub2}}} \sum_{x \in C_{\mathrm{sub2}}} x$

where N_sub1 is the number of samples in sub-class C_sub1, N_sub2 is the number of samples in sub-class C_sub2, $x \in \mathbb{R}^l$ is a sample, and l is the number of features;
Step (4-2-4): repeating steps (4-2-2) and (4-2-3) until convergence, obtaining the two sub-classes C_sub1 and C_sub2.
2. The multi-center fitting method for hyperspectral image classification of claim 1, characterized in that the concrete method by which step (1) builds the training sample set X and label matrix y is:
using bootstrap sampling to randomly draw the training samples: for a specific class i, i ∈ {1, 2, ..., C}, C being the number of classes, sampling randomly with replacement to obtain the training sample set and label matrix

$X_i = [x_{i1}, x_{i2}, \ldots, x_{iN_i}], \quad y_i = [y_{i1}, y_{i2}, \ldots, y_{iN_i}]$

where $x_{ij} \in \mathbb{R}^l$ is the j-th sample in class i, l is the number of features, y_ij = i, i = 1, 2, ..., C, j = 1, 2, ..., N_i, N_i is the total number of samples in class i, and $\mathbb{R}$ is the set of real numbers.
3. The multi-center fitting method for hyperspectral image classification of claim 1, characterized in that the concrete method by which step (3) computes each class's fitting center c_i is:
the fitting center of each class is the mean of all samples in the class, as shown below:

$c_i = \frac{1}{N_i} \sum_{j=1}^{N_i} x_{ij} \quad (1)$

where c_i is the class center of class i, x_ij is the j-th sample in class i, N_i is the total number of samples in class i, and i = 1, 2, ..., C with C the number of classes.
4. The multi-center fitting method for hyperspectral image classification of claim 1 or 3, characterized in that each class's mean deviation σ̄_i is computed as follows:

$\sigma_j = \| x_{ij} - c_i \|_2$

$\bar{\sigma}_i = \frac{1}{N_i} \sum_{j=1}^{N_i} \sigma_j$

where σ_j is the deviation of the j-th sample x_ij of class i from the class center c_i, j = 1, 2, ..., N_i, N_i is the total number of samples in class i, σ̄_i is the mean deviation of class i, and i = 1, 2, ..., C with C the number of classes.
5. The multi-center fitting method for hyperspectral image classification of claim 1, characterized in that the method of the split validity evaluation in step (5) is:
comparing the sample counts of the sub-classes C_sub1, C_sub2 from step (4) with the minimum per-sub-class sample count N_min; if both sub-classes contain more than N_min samples, the split is valid, and steps (3)-(5) are repeated on the two sub-classes until convergence; if either sub-class contains fewer than N_min samples, or the maximum number of splits K has been reached, the class has converged and splitting ends; after splitting ends, a set of post-split sub-classes is obtained.
6. The multi-center fitting method for hyperspectral image classification of claim 1, characterized in that step (6) computes the fitting center c_ij of each post-split sub-class, i = 1, 2, ..., C, C being the number of classes and j the sub-class index within class i; the fitting center of each sub-class is the mean of all samples in that sub-class.
CN201510227125.2A 2015-05-06 2015-05-06 Multi-center fitting method for hyperspectral image classification Active CN104794497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510227125.2A CN104794497B (en) 2015-05-06 2015-05-06 Multi-center fitting method for hyperspectral image classification

Publications (2)

Publication Number Publication Date
CN104794497A (en) 2015-07-22
CN104794497B (en) 2016-04-13

Family

ID=53559284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510227125.2A Active CN104794497B (en) 2015-05-06 2015-05-06 Multicenter approximating method in a kind of classification hyperspectral imagery

Country Status (1)

Country Link
CN (1) CN104794497B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345906B (en) * 2018-02-09 2022-02-22 无锡英臻科技有限公司 Non-invasive electrical appliance identification method based on Boost model
CN109784398B (en) * 2019-01-11 2023-12-05 广东奥普特科技股份有限公司 Classifier based on feature scale and subclass splitting
CN113255832B (en) * 2021-06-23 2021-10-01 成都考拉悠然科技有限公司 Method for identifying long tail distribution of double-branch multi-center

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012159320A1 (en) * 2011-07-07 2012-11-29 华为技术有限公司 Method and device for clustering large-scale image data
CN103456019A (en) * 2013-09-08 2013-12-18 西安电子科技大学 Image segmentation method of semi-supervised kernel k-mean clustering based on constraint pairs
CN104463247A (en) * 2014-12-09 2015-03-25 山东大学 Extracting method of optical spectrum vector cross-correlation features in hyper-spectral image classification
CN104484880A (en) * 2014-12-20 2015-04-01 西安电子科技大学 SAR image segmentation method based on coherence map migration and clustering

Also Published As

Publication number Publication date
CN104794497A (en) 2015-07-22

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant