CN104778482A - Hyperspectral image classifying method based on tensor semi-supervised scale cutting dimension reduction - Google Patents


Publication number
CN104778482A
Authority
CN
China
Legal status: Granted
Application number
CN201510224055.5A
Other languages
Chinese (zh)
Other versions
CN104778482B (en)
Inventor
张向荣
焦李成
莫玉
冯婕
侯彪
马文萍
白静
李阳阳
郭智
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201510224055.5A priority Critical patent/CN104778482B/en
Publication of CN104778482A publication Critical patent/CN104778482A/en
Application granted granted Critical
Publication of CN104778482B publication Critical patent/CN104778482B/en


Abstract

The invention discloses a hyperspectral image classification method based on tensor semi-supervised scale-cut dimensionality reduction, mainly addressing the excessive dimensionality of hyperspectral images, the large computational cost, and the loss of spatial information in existing methods. The method comprises: representing the hyperspectral data set as a set of full-band sub-data cubes; selecting a labelled training set, a test set, and a total training set from the sub-data cubes; constructing a between-class dissimilarity matrix and a within-class dissimilarity matrix of the labelled training set and a sample similarity matrix of the total training set; building a tensor semi-supervised scale-cut objective function from these three matrices; solving the objective function to obtain a projection matrix; projecting the labelled training set and the test set into a low-dimensional space to obtain a new labelled training set and a new test set; and inputting these into a support vector machine for classification to obtain the class information of the test set. The method achieves high classification accuracy and can be used for map making and vegetation surveys.

Description

Hyperspectral image classification method based on tensor semi-supervised scale-cut dimensionality reduction
Technical field
The invention belongs to the technical field of image processing and relates to the dimensionality reduction of high-dimensional data for the classification of hyperspectral remote sensing images.
Background technology
Hyperspectral remote sensing image processing is an emerging frontier technology of remote sensing. An imaging spectrometer images the Earth's surface simultaneously in tens or hundreds of bands at nanometre-level spectral resolution, acquiring the continuous spectra of ground objects and making it possible to obtain their spatial, radiometric, and spectral information synchronously. Its most distinctive characteristic is the unification of image and spectrum, and its development has brought a qualitative leap in Earth observation and information-acquisition capability.
Hyperspectral data sets commonly used in research include the Indian Pines data set, acquired by the airborne visible/infrared imaging spectrometer (AVIRIS) of the NASA Jet Propulsion Laboratory, and the University of Pavia data set, acquired by the ROSIS spectrometer.
Hyperspectral image classification is the process of assigning each pixel of a hyperspectral image to its corresponding class by combining the image's features. Although the rich spectral, spatial, and radiometric information contained in hyperspectral images makes classification easier, the task still faces great difficulties and challenges: (1) the data volume is large, storage and display are difficult, the number of bands is high, and the computational cost is large; (2) the curse of dimensionality: the redundant information brought by excessive dimensionality reduces classification accuracy; (3) the bands are numerous and highly correlated, which increases the demand for training samples; without enough samples, the reliability of the classification-model parameters is reduced to some extent. Therefore, to obtain higher classification accuracy, the dimensionality of the hyperspectral data must be reduced and the data volume decreased.
Many dimensionality reduction methods exist. According to how labelled samples are used, they divide into supervised, unsupervised, and semi-supervised methods: principal component analysis (PCA) and locally linear embedding (LLE) are unsupervised, linear discriminant analysis (LDA) is supervised, and semi-supervised discriminant analysis (SDA) is semi-supervised. Supervised methods train on labelled data and derive the low-dimensional space from class information. Unsupervised methods, by contrast, have no label information; they find the intrinsic structure of the data and select the low-dimensional features that best represent it. Semi-supervised methods combine the characteristics of both: they consider class information while mining the structural information of the data itself, maximizing the use of the available resources and thus obtaining a better low-dimensional space.
These vector-based methods, however, require vectorizing the image, so they rely only on spectral properties and ignore the spatial distribution. To overcome this shortcoming, a tensor-based hyperspectral image representation has been introduced that analyses the spatial and spectral structures simultaneously, with good results.
Dacheng Tao et al. proposed a tensor LDA method for gait recognition in the paper "General Tensor Discriminant Analysis and Gabor Features for Gait Recognition" (PAMI 2007), which mainly generalizes LDA to tensor computation. Although the method exploits label information, it cannot handle heteroscedastic and multimodal data.
Summary of the invention
The object of the invention is to propose a new hyperspectral image classification method based on tensor semi-supervised scale-cut dimensionality reduction, which exploits a small number of labelled samples together with a large amount of unlabelled data to better mine the intrinsic structure of the data, solves the prior-art problems of image vectorization, loss of spatial information, and inability to handle heteroscedastic and multimodal data, and improves classification accuracy.
The technical idea of the invention is: by learning the features in the image, properly characterize the commonality and the differences between data; use tensor computation to reduce the dimensionality of the hyperspectral image and overcome the curse of dimensionality; and, by finding the valuable intrinsic low-dimensional structure embedded in the high-dimensional data, seek the essential laws of the data and project it into a low-dimensional feature space to obtain a more compact representation of the raw data. The implementation steps are as follows:
(1) Input a hyperspectral data set A ∈ R^{m×n×D} containing c classes of ground objects, where m×n is the spatial size of the image (the number of pixels) and D is the total number of spectral bands;
(2) Taking the 5×5 neighborhood block centered at each pixel of A, obtain Q full-band sub-data cubes; each sub-data cube is represented as a third-order tensor sample, yielding the sample set {χ_a ∈ R^{5×5×D}}_{a=1}^{Q}, where χ_a denotes the a-th sample and Q = m×n is the total number of samples;
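The patch-extraction step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the border rule (edge-padding here) is an assumption, since the patent does not specify how boundary pixels are handled.

```python
import numpy as np

def extract_patches(A, win=5):
    """Represent each pixel of an m x n x D hyperspectral cube as a
    win x win x D third-order tensor (full-band sub-data cube) centred
    on that pixel. Border pixels are edge-padded (an assumption)."""
    m, n, D = A.shape
    r = win // 2
    Ap = np.pad(A, ((r, r), (r, r), (0, 0)), mode="edge")
    patches = np.empty((m * n, win, win, D))
    for i in range(m):
        for j in range(n):
            # patch centred at pixel (i, j) of the original cube
            patches[i * n + j] = Ap[i:i + win, j:j + win, :]
    return patches  # Q = m*n samples, each a win x win x D tensor
```

Each row of the returned array is one sample tensor χ_a; the centre entry of patch a = i·n + j is the original pixel A[i, j].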
(3) From the sample set, randomly select N labelled samples to form the labelled training set X = {χ_t}_{t=1}^{N} with class-label vector L = {l_t}_{t=1}^{N}; the remaining Q−N unlabelled samples form the test set Y = {y_u}_{u=1}^{Q−N}, y_u ∈ R^{5×5×D}, where χ_t denotes the t-th labelled training sample, l_t the class label of the t-th labelled training sample, and y_u the u-th test sample;
(4) From the Q−N unlabelled samples, select η unlabelled samples and combine them with the N labelled samples to form the total training set S = {s_k}_{k=1}^{N+η}, where s_k denotes the k-th training sample of the total training set, N+η is the sample count of the total training set, and 1 ≤ η ≤ Q−N;
(5) Construct the between-class dissimilarity matrix B of the labelled training set X:

B = \sum_{p=1}^{c} \sum_{i \in V_p} \sum_{j \in V'_p} \frac{1}{n_p n_{c(j)}} \mathrm{mat}_3\big((\chi_i - \chi_j) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_i - \chi_j) \times_1 U_1^T \times_2 U_2^T\big)

where V_p is the set of labelled training samples of class p; V'_p is the set of all labelled training samples except those of class p; n_p is the number of labelled training samples of class p; n_{c(j)} is the number of samples of the class to which the j-th labelled training sample belongs; χ_i is the i-th labelled training sample of class p; χ_j is the j-th labelled training sample in V'_p; U_1 = 1_{5×5} is the projection matrix in the horizontal direction of the full-band sub-data cube and U_2 = 1_{5×5} that in the vertical direction, every element of U_1 and U_2 being 1; ×_1 and ×_2 denote the mode-1 and mode-2 tensor–matrix products; T denotes transposition; mat_3(·) ∈ R^{D×(5×5)} denotes the mode-3 unfolding of a third-order tensor (·) ∈ R^{5×5×D} into a matrix; and the product mat_3(·) mat_3^T(·) contracts the first and second orders of the two tensors, giving a matrix of size D×D;
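A hedged sketch of the accumulation in step (5), under the patent's choice U_1 = U_2 = 1_{5×5} (all-ones). Function and variable names are illustrative, not from the patent; the double loop is written for clarity rather than speed.

```python
import numpy as np

def mode3_unfold(T):
    # Unfold a win x win x D tensor along mode 3 into a D x (win*win) matrix.
    return T.reshape(-1, T.shape[2]).T

def mode12_product(T, U1, U2):
    # Mode-1 product with U1^T followed by mode-2 product with U2^T.
    t = np.einsum('aA,abd->Abd', U1, T)
    return np.einsum('bB,Abd->ABd', U2, t)

def between_class_scatter(X, labels):
    """X: list of win x win x D sample tensors; labels: class per sample.
    Accumulates mat3(diff) mat3(diff)^T over all pairs from different
    classes, weighted by 1/(n_p * n_c(j)) as in the B formula above."""
    win, _, D = X[0].shape
    U1 = np.ones((win, win))  # patent sets U1 = U2 = all-ones
    U2 = np.ones((win, win))
    labels = np.asarray(labels)
    counts = {c: int(np.sum(labels == c)) for c in np.unique(labels)}
    B = np.zeros((D, D))
    for xi, ci in zip(X, labels):
        for xj, cj in zip(X, labels):
            if ci == cj:
                continue  # only pairs from different classes contribute
            m = mode3_unfold(mode12_product(xi - xj, U1, U2))  # D x win^2
            B += m @ m.T / (counts[ci] * counts[cj])
    return B
```

B is symmetric positive semi-definite by construction (a weighted sum of Gram matrices); the within-class matrix W of step (6) differs only in looping over same-class pairs with weight 1/(n_p·n_p).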
(6) Construct the within-class dissimilarity matrix W of the labelled training set X:

W = \sum_{p=1}^{c} \sum_{i \in V_p} \sum_{h \in V_p} \frac{1}{n_p n_p} \mathrm{mat}_3\big((\chi_i - \chi_h) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_i - \chi_h) \times_1 U_1^T \times_2 U_2^T\big)

where χ_h denotes the h-th labelled training sample in V_p;
(7) Construct the similarity matrix M of all samples of the total training set S:

M = \frac{1}{2} \sum_{i',j'}^{N+\eta} m_{i'j'}\, \mathrm{mat}_3\big((\chi_{i'} - \chi_{j'}) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_{i'} - \chi_{j'}) \times_1 U_1^T \times_2 U_2^T\big)

where m_{i'j'} denotes the similarity between samples χ_{i'} and χ_{j'}, and χ_{i'} and χ_{j'} denote the i'-th and j'-th training samples of S;
(8) From the between-class dissimilarity matrix B and the within-class dissimilarity matrix W of the labelled training set and the similarity matrix M, construct the tensor semi-supervised scale-cut objective function:

U_3^{*} = \arg\max_{U_3} \frac{\mathrm{tr}(U_3^T B U_3)}{\mathrm{tr}\big(U_3^T (W + \beta M) U_3\big)}

where β is a tuning parameter, manually set to 0.001; U_3 is the sought projection matrix in the feature dimension; and tr denotes the trace of a matrix;
(9) Solve the tensor semi-supervised scale-cut objective function to obtain the projection matrix U_3^* in the feature dimension;
(10) Project the labelled training set X and the test set Y into the low-dimensional space spanned by U_3^*, obtaining the new labelled training set X' = {χ'_t}_{t=1}^{N} and the new test set Y' = {y'_u}_{u=1}^{Q−N} after projection, where χ'_t = χ_t ×_3 U_3^{*T} is the new feature tensor of the t-th labelled training sample, y'_u = y_u ×_3 U_3^{*T} is the new feature tensor of the u-th test sample, and ×_3 denotes the mode-3 tensor–matrix product;
(11) Input the new labelled training set X', the class-label set L, and the new test set Y' into a support vector machine (SVM) for classification, obtaining the classification result {l'_u}_{u=1}^{Q−N} of the test set, where l'_u denotes the class label of the u-th test sample.
Compared with the prior art, the invention has the following advantages:
First, the invention applies dimensionality reduction to the hyperspectral data before classification, greatly reducing the computation and speeding up classification.
Second, exploiting the fact that within a small spatial region the land-cover class of a hyperspectral image is homogeneous and neighbouring pixels are highly similar, the invention represents each sample as a full-band sub-data cube and processes it with tensor computation; this avoids vectorizing the image and makes full use of spatial and spectral correlation.
Third, compared with the existing tensor LDA method, the invention does not require each class to follow a Gaussian distribution with equal variance; instead, it constructs dissimilarity matrices from pairwise sample dissimilarities, eliminating the influence of class centres.
Fourth, the invention makes full use of the information provided by labelled samples to find a projection space that better preserves class separability, and, by using unlabelled samples to mine the geometric structure of the data, reflects the intrinsic geometry of the data.
Comparative experiments show that the invention effectively reduces computational complexity and improves the classification accuracy of hyperspectral remote sensing images.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the invention;
Fig. 2 is the Indian Pines image used in the simulations;
Fig. 3 shows the classification result maps of the invention and existing methods on the Indian Pines image.
Detailed description of the embodiments
The technical scheme and effects of the invention are further described below with reference to the drawings.
With reference to Fig. 1, the implementation steps of the invention are as follows:
Step 1. Input a hyperspectral data set A ∈ R^{m×n×D} containing c = 16 classes of ground objects, where m×n is the spatial size of the image (the number of pixels), D is the total number of spectral bands, and R denotes the real field;
Step 2. Choose the labelled training set X, the test set Y, and the total training set S.
2a) Taking the 5×5 neighborhood block centered at each pixel of A, obtain Q full-band sub-data cubes; each sub-data cube is represented as a third-order tensor sample, yielding the sample set {χ_a ∈ R^{5×5×D}}_{a=1}^{Q}, where χ_a denotes the a-th sample and Q = m×n is the total number of samples;
2b) From the sample set, randomly select N labelled samples to form the labelled training set X = {χ_t}_{t=1}^{N} with class-label vector L = {l_t}_{t=1}^{N}; the remaining Q−N unlabelled samples form the test set Y = {y_u}_{u=1}^{Q−N}, y_u ∈ R^{5×5×D}, where χ_t denotes the t-th labelled training sample, l_t the class label of the t-th labelled training sample, and y_u the u-th test sample;
2c) From the Q−N unlabelled samples, select η unlabelled samples and combine them with the N labelled samples to form the total training set S = {s_k}_{k=1}^{N+η}, where s_k denotes the k-th training sample of the total training set, N+η is the sample count of the total training set, and 1 ≤ η ≤ Q−N.
Step 3. Construct the between-class dissimilarity matrix B and the within-class dissimilarity matrix W of the labelled training set X.
3a) Group the labelled samples of class p in X into the same-class sample set V_p = {χ_i}_{i=1}^{n_p}, p = 1, 2, …, c, where χ_i denotes the i-th labelled training sample of class p and n_p the number of labelled training samples in V_p;
3b) Group all labelled training samples except those of class p into the different-class sample set V'_p, where χ_j denotes the j-th labelled training sample in V'_p and n_{c(j)} the number of samples of the class to which the j-th labelled training sample belongs;
3c) Compute the dissimilarity between the labelled samples in the same-class set V_p and those in the different-class set V'_p, obtaining the between-class dissimilarity matrix B_p of each class:

B_p = \sum_{i \in V_p} \sum_{j \in V'_p} \frac{1}{n_p n_{c(j)}} \mathrm{mat}_3\big((\chi_i - \chi_j) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_i - \chi_j) \times_1 U_1^T \times_2 U_2^T\big)

where U_1 = 1_{5×5} is the projection matrix in the horizontal direction of the full-band sub-data cube and U_2 = 1_{5×5} that in the vertical direction, every element of U_1 and U_2 being 1; ×_1 and ×_2 denote the mode-1 and mode-2 tensor–matrix products; T denotes transposition; mat_3(·) ∈ R^{D×(5×5)} denotes the mode-3 unfolding of a third-order tensor (·) ∈ R^{5×5×D} into a matrix; and the product mat_3(·) mat_3^T(·) contracts the first and second orders of the two tensors, giving a matrix of size D×D;
3d) Compute the dissimilarity between the labelled samples within the same-class set V_p, obtaining the within-class dissimilarity matrix W_p of each class:

W_p = \sum_{i \in V_p} \sum_{h \in V_p} \frac{1}{n_p n_p} \mathrm{mat}_3\big((\chi_i - \chi_h) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_i - \chi_h) \times_1 U_1^T \times_2 U_2^T\big)

where χ_h denotes the h-th labelled training sample in V_p;
3e) Sum the per-class between-class dissimilarity matrices B_p of step 3c) to obtain the between-class dissimilarity matrix B of the labelled training set:

B = \sum_{p=1}^{c} B_p = \sum_{p=1}^{c} \sum_{i \in V_p} \sum_{j \in V'_p} \frac{1}{n_p n_{c(j)}} \mathrm{mat}_3\big((\chi_i - \chi_j) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_i - \chi_j) \times_1 U_1^T \times_2 U_2^T\big)
3f) Sum the per-class within-class dissimilarity matrices W_p of step 3d) to obtain the within-class dissimilarity matrix W of the labelled training set:

W = \sum_{p=1}^{c} W_p = \sum_{p=1}^{c} \sum_{i \in V_p} \sum_{h \in V_p} \frac{1}{n_p n_p} \mathrm{mat}_3\big((\chi_i - \chi_h) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_i - \chi_h) \times_1 U_1^T \times_2 U_2^T\big)
Step 4. Construct the unsupervised sample similarity matrix M from the total training set S.
4a) Compute the similarity between any two samples of the total training set S:
where m_{i'j'} denotes the similarity between samples χ_{i'} and χ_{j'}, χ_{i'} and χ_{j'} denote the i'-th and j'-th training samples of S, and δ is the kernel parameter;
4b) Compute the unsupervised sample similarity matrix M from the total training set S:

M = \frac{1}{2} \sum_{i',j'}^{N+\eta} m_{i'j'}\, \mathrm{mat}_3\big((\chi_{i'} - \chi_{j'}) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_{i'} - \chi_{j'}) \times_1 U_1^T \times_2 U_2^T\big)
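The pairwise similarity m_{i'j'} of step 4a) is not legible in the source (the formula was an image). A Gaussian kernel on the Frobenius distance with kernel parameter δ is assumed below, which is consistent with "δ is the kernel parameter" and with the setting σ = 1 used in the experiments; this is a sketch, not the patent's exact formula.

```python
import numpy as np

def similarity(xi, xj, delta=1.0):
    """Assumed Gaussian (heat-kernel) similarity between two tensor
    samples: exp(-||xi - xj||_F^2 / (2 * delta^2))."""
    d2 = float(np.sum((xi - xj) ** 2))  # squared Frobenius distance
    return np.exp(-d2 / (2.0 * delta ** 2))
```

The similarity is 1 for identical samples, symmetric, and decays toward 0 as the samples grow apart, which is what the weighting in the M formula above requires.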
Step 5. From the between-class dissimilarity matrix B and the within-class dissimilarity matrix W of the labelled training set and the similarity matrix M, construct the tensor semi-supervised scale-cut objective function:

U_3^{*} = \arg\max_{U_3} \frac{\mathrm{tr}(U_3^T B U_3)}{\mathrm{tr}\big(U_3^T (W + \beta M) U_3\big)}

where β is a tuning parameter, manually set to 0.001; U_3 is the sought projection matrix in the feature dimension; and tr denotes the trace of a matrix;
Step 6. Solve the tensor semi-supervised scale-cut objective function to obtain the projection matrix U_3^* in the feature dimension.
6a) Transform the tensor semi-supervised scale-cut objective function into the following form:
where the adjustment parameter is the largest eigenvalue corresponding to (B+W+β·M)^{-1}B, and (B+W+β·M)^{-1} denotes the inverse of B+W+β·M;
6b) Set the value of the reduced feature dimensionality d, perform singular value decomposition on the above objective term, and obtain the d largest eigenvalues and the eigenvectors u_1, u_2, …, u_d corresponding to these d largest eigenvalues, where d is an integer and 0 < d ≤ D;
6c) Form the projection matrix in the feature dimension from the eigenvectors u_1, u_2, …, u_d: U_3^* = [u_1, u_2, …, u_d] ∈ R^{D×d}.
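Steps 6a)–6c) can be sketched as an eigendecomposition of (B + W + β·M)^{-1}B, whose d leading eigenvectors form U_3^*. The use of `numpy.linalg.eig` on the explicitly formed product is an implementation choice for illustration (the patent speaks of a singular value decomposition); it assumes B + W + β·M is invertible.

```python
import numpy as np

def solve_projection(B, W, M, beta=0.001, d=25):
    """Return U3* (D x d): the d eigenvectors of (B + W + beta*M)^{-1} B
    with the largest eigenvalues, as in steps 6a)-6c)."""
    S = B + W + beta * M                       # denominator matrix
    vals, vecs = np.linalg.eig(np.linalg.solve(S, B))
    idx = np.argsort(vals.real)[::-1][:d]      # d largest eigenvalues
    return vecs[:, idx].real                   # U3* = [u1, ..., ud]
```

Each returned column u satisfies the generalized eigen-relation B u = λ (B + W + β·M) u, which is the stationarity condition of the trace-ratio objective of step 5.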
Step 7. Project the labelled training set X and the test set Y through the projection matrix U_3^* into the low-dimensional space it spans, obtaining the new labelled training set X' and the new test set Y' after projection.
7a) Project the original labelled training set into the space spanned by the projection matrix U_3^*, obtaining the new labelled training set X' = {χ'_t = χ_t ×_3 U_3^{*T}}_{t=1}^{N}, where χ'_t is the new feature tensor of the t-th labelled training sample;
7b) Project the original test set into the space spanned by the projection matrix U_3^*, obtaining the new test set Y' = {y'_u = y_u ×_3 U_3^{*T}}_{u=1}^{Q−N}, where y'_u is the new feature tensor of the u-th test sample;
Step 8. Input the new labelled training set X', the class-label set L, and the new test set Y' into a support vector machine (SVM) for classification, obtaining the classification result {l'_u}_{u=1}^{Q−N} of the test set, where l'_u denotes the class label of the u-th test sample.
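Steps 7 and 8 can be sketched as follows. The mode-3 projection is exact per the formulas above; the classifier part is a hedged stand-in (scikit-learn's `SVC` for "support vector machine SVM", with C = 100 as in the experiments, assuming scikit-learn is available), and flattening the 5×5×d feature tensor into a vector for the SVM is an assumption the patent does not spell out.

```python
import numpy as np

def project_mode3(x, U3):
    # Mode-3 product of a win x win x D tensor with U3^T:
    # the spectral axis D is mapped to the reduced axis d.
    return np.einsum('Dd,abD->abd', U3, x)

def svm_classify(train_x, train_y, test_x, U3, C=100.0):
    # Project both sets, flatten the feature tensors, and classify
    # with an RBF-kernel SVM (implementation choice, not from patent).
    from sklearn.svm import SVC  # assumed available
    Xtr = np.array([project_mode3(x, U3).ravel() for x in train_x])
    Xte = np.array([project_mode3(x, U3).ravel() for x in test_x])
    return SVC(kernel='rbf', C=C).fit(Xtr, train_y).predict(Xte)
```

For each pixel position (a, b), the projected tensor satisfies χ'[a, b] = U_3^{*T} χ[a, b], i.e. every spectral fibre is reduced from D to d dimensions.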
The effects of the invention are further illustrated by the following simulation experiments:
1. Simulation conditions:
The image used in the simulation experiments is the Indian Pines image acquired in June 1992 over northwestern Indiana by the airborne visible/infrared imaging spectrometer (AVIRIS) of the NASA Jet Propulsion Laboratory, shown in Fig. 2. It contains 16 classes of ground objects; the class name and sample count of each class are listed in Table 1.
Table 1. Classes of the Indian Pines data set
Class  Name                          Samples
1      Alfalfa                       54
2      Corn-notill                   1434
3      Corn-min                      834
4      Corn                          234
5      Grass/Pasture                 497
6      Grass/Trees                   747
7      Grass/Pasture-mowed           26
8      Hay-windrowed                 489
9      Oats                          20
10     Soybeans-notill               968
11     Soybeans-min                  2468
12     Soybeans-clean                614
13     Wheat                         212
14     Buildings-Grass-Trees-Drives  380 (class 15); Woods 1294 (class 14)
16     Stone-Steel Towers            95
The image in Fig. 2 is 145 × 145 pixels with 220 bands in total; 200 bands remain after removing bands affected by noise and by atmospheric and water absorption. The simulation experiments were run in MATLAB 2011b on a platform with an AMD A8 CPU at 1.90 GHz, 8 GB of memory, and Windows 7 (64-bit).
2. Simulation content and analysis
Dimensionality reduction is applied to the Indian Pines hyperspectral image with the invention and with two existing methods: scale-cut dimensionality reduction (SC) and tensor linear discriminant analysis (TLDA).
The three reduced images obtained by the invention and by the existing SC and TLDA methods are then classified, where the kernel parameter γ of the SVM classifier is obtained by five-fold cross validation and the penalty factor is set to 100. The kernel parameter σ of the similarity matrix M is set to 1, the weight parameter β is set to 0.001, and the number of unlabelled training samples η is fixed at 2000.
Simulation 1. From each of the 16 classes in Table 1, 10 samples are selected as labelled samples and the remaining samples of these 16 classes are treated as unlabelled. Twenty dimensionality-reduction-and-classification experiments are run on the 16-class data with the invention and the two existing methods, and the mean of the 20 classification results is taken as the final classification accuracy, shown in Table 2.
Table 2. Overall classification accuracy of the different methods on the Indian Pines data set
As can be seen from Table 2, the invention has a clear advantage over the two existing methods; when the feature dimensionality is greater than 10, the classification accuracy of the proposed method exceeds 60%, markedly higher than that of the existing methods.
Table 2 also shows that the results of the invention stabilize once the dimensionality exceeds 25; only 25 feature dimensions are therefore needed to obtain a high recognition rate, which greatly reduces the computation.
Simulation 2. From each of the 16 classes in Table 1, 10 pixels are selected as labelled pixels and the remaining pixels of the whole Indian Pines image are treated as unlabelled. All pixels of the image are classified with the three methods above, with the reduced feature dimensionality of each method set to 25. The results are shown in Fig. 3, where Fig. 3(a) is the classification result map of the invention, Fig. 3(b) that of the existing SC+SVM, and Fig. 3(c) that of the existing TLDA+SVM.
As can be seen from Figs. 3(a), 3(b), and 3(c), the result map of the invention is smoother than those obtained by the two existing methods, and the classification result is better.
In summary, the invention classifies the hyperspectral image with an SVM after dimensionality reduction. On the one hand, tensor operations avoid vectorizing the image and make full use of spatial information; on the other hand, labelled and unlabelled information are both exploited to fully mine the geometric structure of the data, improving classification accuracy and giving the invention an advantage over existing methods.

Claims (3)

1. A hyperspectral image classification method based on tensor semi-supervised scale-cut dimensionality reduction, comprising the following steps:
(1) Input a hyperspectral data set A ∈ R^{m×n×D} containing c classes of ground objects, where m×n is the spatial size of the image (the number of pixels), D is the total number of spectral bands, and R denotes the real field;
(2) Taking the 5×5 neighborhood block centered at each pixel of A, obtain Q full-band sub-data cubes; each sub-data cube is represented as a third-order tensor sample, yielding the sample set {χ_a ∈ R^{5×5×D}}_{a=1}^{Q}, where χ_a denotes the a-th sample and Q = m×n is the total number of samples;
(3) From the sample set, randomly select N labelled samples to form the labelled training set X = {χ_t}_{t=1}^{N} with class-label vector L = {l_t}_{t=1}^{N}; the remaining Q−N unlabelled samples form the test set Y = {y_u}_{u=1}^{Q−N}, y_u ∈ R^{5×5×D}, where χ_t denotes the t-th labelled training sample, l_t the class label of the t-th labelled training sample, and y_u the u-th test sample;
(4) From the Q−N unlabelled samples, select η unlabelled samples and combine them with the N labelled samples to form the total training set S = {s_k}_{k=1}^{N+η}, where s_k denotes the k-th training sample of the total training set, N+η is the sample count of the total training set, and 1 ≤ η ≤ Q−N;
(5) Construct the between-class dissimilarity matrix B of the labelled training set X:

B = \sum_{p=1}^{c} \sum_{i \in V_p} \sum_{j \in V'_p} \frac{1}{n_p n_{c(j)}} \mathrm{mat}_3\big((\chi_i - \chi_j) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_i - \chi_j) \times_1 U_1^T \times_2 U_2^T\big)

where V_p is the set of labelled training samples of class p; V'_p is the set of all labelled training samples except those of class p; n_p is the number of labelled training samples of class p; n_{c(j)} is the number of samples of the class to which the j-th labelled training sample belongs; χ_i is the i-th labelled training sample of class p; χ_j is the j-th labelled training sample in V'_p; U_1 = 1_{5×5} is the projection matrix in the horizontal direction of the full-band sub-data cube and U_2 = 1_{5×5} that in the vertical direction, every element of U_1 and U_2 being 1; ×_1 and ×_2 denote the mode-1 and mode-2 tensor–matrix products; T denotes transposition; mat_3(·) ∈ R^{D×(5×5)} denotes the mode-3 unfolding of a third-order tensor (·) ∈ R^{5×5×D} into a matrix; and the product mat_3(·) mat_3^T(·) contracts the first and second orders of the two tensors, giving a matrix of size D×D;
(6) Construct the within-class dissimilarity matrix W of the labelled training set X:

W = \sum_{p=1}^{c} \sum_{i \in V_p} \sum_{h \in V_p} \frac{1}{n_p n_p} \mathrm{mat}_3\big((\chi_i - \chi_h) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_i - \chi_h) \times_1 U_1^T \times_2 U_2^T\big)

where χ_h denotes the h-th labelled training sample in V_p;
(7) Construct the similarity matrix M of all samples of the total training set S:

M = \frac{1}{2} \sum_{i',j'}^{N+\eta} m_{i'j'}\, \mathrm{mat}_3\big((\chi_{i'} - \chi_{j'}) \times_1 U_1^T \times_2 U_2^T\big)\, \mathrm{mat}_3^T\big((\chi_{i'} - \chi_{j'}) \times_1 U_1^T \times_2 U_2^T\big)

where m_{i'j'} denotes the similarity between samples χ_{i'} and χ_{j'}, and χ_{i'} and χ_{j'} denote the i'-th and j'-th training samples of S;
(8) From the between-class dissimilarity matrix B and the within-class dissimilarity matrix W of the labelled training set and the similarity matrix M, construct the tensor semi-supervised scale-cut objective function:

U_3^{*} = \arg\max_{U_3} \frac{\mathrm{tr}(U_3^T B U_3)}{\mathrm{tr}\big(U_3^T (W + \beta M) U_3\big)}

where β is a tuning parameter, manually set to 0.001; U_3 is the sought projection matrix in the feature dimension; and tr denotes the trace of a matrix;
(9) Solve the tensor semi-supervised scale-cut objective function to obtain the projection matrix U_3^* in the feature dimension;
(10) Project the labelled training set X and the test set Y into the low-dimensional space spanned by U_3^*, obtaining the new labelled training set X' = {χ'_t}_{t=1}^{N} and the new test set Y' = {y'_u}_{u=1}^{Q−N} after projection, where χ'_t = χ_t ×_3 U_3^{*T} is the new feature tensor of the t-th labelled training sample, y'_u = y_u ×_3 U_3^{*T} is the new feature tensor of the u-th test sample, and ×_3 denotes the mode-3 tensor–matrix product;
(11) Input the new labelled training set X', the class-label set L, and the new test set Y' into a support vector machine (SVM) for classification, obtaining the classification result {l'_u}_{u=1}^{Q−N} of the test set, where l'_u denotes the class label of the u-th test sample.
2. The hyperspectral image classification method based on tensor semi-supervised scale-cut dimensionality reduction according to claim 1, wherein the similarity m_{i'j'} between samples χ_{i'} and χ_{j'} in step (7) is computed by the following formula:
where δ is the kernel parameter.
3. The hyperspectral image classification method based on tensor semi-supervised scale-cut dimension reduction according to claim 1, wherein in step (9) the tensor semi-supervised scale-cut objective function is solved as follows:
9a) Transform the tensor semi-supervised scale-cut objective function into the following form:
where the adjustment parameter is the largest eigenvalue of (B + W + \beta M)^{-1} B, and (B + W + \beta M)^{-1} denotes the inverse of B + W + \beta M;
9b) Set the value of the feature dimension d after reduction, carry out singular value decomposition on the above objective-function term, and obtain the d largest eigenvalues and the corresponding eigenvectors u_1, u_2, ..., u_d, where d is an integer and 0 < d ≤ D;
9c) Form the projection matrix along the feature dimension from the eigenvectors u_1, u_2, ..., u_d:

U_3^* = [u_1, u_2, \ldots, u_d] \in \mathbb{R}^{D \times d}.
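The solution in claim 3 can be sketched as a generalized symmetric eigenproblem, under the assumption that B, W, and M are symmetric and B + W + βM is positive definite; SciPy's `eigh` on the pair (B, B + W + βM) is used here in place of the claimed singular value decomposition of (B + W + βM)^{-1}B, which is equivalent for such a symmetric-definite pair:

```python
import numpy as np
from scipy.linalg import eigh

def solve_projection(B, W, M, beta=0.001, d=2):
    # Steps 9a)-9c): the d leading eigenvectors of (B + W + beta*M)^{-1} B,
    # obtained from the generalized problem B u = lam (B + W + beta*M) u.
    S = B + W + beta * M
    lam, U = eigh(B, S)                # requires S positive definite
    order = np.argsort(lam)[::-1]      # largest eigenvalues first
    return U[:, order[:d]]             # U3* = [u_1, ..., u_d], a D x d matrix
```

In the claimed method, B and W would be the between-class and within-class dissimilarity matrices of the labeled training set and M the similarity matrix of step (7).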
CN201510224055.5A 2015-05-05 2015-05-05 Hyperspectral image classification method based on tensor semi-supervised scale-cut dimension reduction Active CN104778482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510224055.5A CN104778482B (en) 2015-05-05 2015-05-05 Hyperspectral image classification method based on tensor semi-supervised scale-cut dimension reduction


Publications (2)

Publication Number Publication Date
CN104778482A true CN104778482A (en) 2015-07-15
CN104778482B CN104778482B (en) 2018-03-13

Family

ID=53619935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510224055.5A Active CN104778482B (en) Hyperspectral image classification method based on tensor semi-supervised scale-cut dimension reduction

Country Status (1)

Country Link
CN (1) CN104778482B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814148A (en) * 2010-04-30 2010-08-25 霍振国 Remote sensing hyperspectral image classification method based on semi-supervised kernel adaptive learning
CN102024153A (en) * 2011-01-06 2011-04-20 西安电子科技大学 Hyperspectral image supervised classification method
CN102208037A (en) * 2011-06-10 2011-10-05 西安电子科技大学 Hyper-spectral image classification method based on Gaussian process classifier collaborative training algorithm
CN102208034A (en) * 2011-07-16 2011-10-05 西安电子科技大学 Semi-supervised dimension reduction-based hyper-spectral image classification method
US20140029793A1 (en) * 2012-07-30 2014-01-30 Wei Chen Method of optimal out-of-band correction for multispectral remote sensing


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740884B (en) * 2016-01-22 2019-06-07 厦门理工学院 Hyperspectral Image Classification method based on singular value decomposition and neighborhood space information
CN105740884A (en) * 2016-01-22 2016-07-06 厦门理工学院 Hyper-spectral image classification method based on singular value decomposition and neighborhood space information
CN105956603A (en) * 2016-04-15 2016-09-21 天津大学 Video sequence classifying method based on tensor time domain association model
CN108875958A (en) * 2017-05-11 2018-11-23 广州异构智能科技有限公司 Native tensor processor using an outer product unit
CN108874745A (en) * 2017-05-11 2018-11-23 北京异构智能科技有限公司 Native tensor processor with tensor splitting and contraction
CN108595555B (en) * 2018-04-11 2020-12-08 西安电子科技大学 Image retrieval method based on semi-supervised tensor subspace regression
CN108595555A (en) * 2018-04-11 2018-09-28 西安电子科技大学 Image retrieval method based on semi-supervised tensor subspace regression
US11748393B2 (en) 2018-11-28 2023-09-05 International Business Machines Corporation Creating compact example sets for intent classification
CN111368691A (en) * 2020-02-28 2020-07-03 西南电子技术研究所(中国电子科技集团公司第十研究所) Unsupervised hyperspectral remote sensing image space spectrum feature extraction method
CN111368691B (en) * 2020-02-28 2022-06-14 西南电子技术研究所(中国电子科技集团公司第十研究所) Unsupervised hyperspectral remote sensing image space spectrum feature extraction method
CN111898710A (en) * 2020-07-15 2020-11-06 中国人民解放军火箭军工程大学 Method and system for selecting characteristics of graph
CN111898710B (en) * 2020-07-15 2023-09-29 中国人民解放军火箭军工程大学 Feature selection method and system of graph
CN112101381A (en) * 2020-08-30 2020-12-18 西南电子技术研究所(中国电子科技集团公司第十研究所) Tensor collaborative drawing discriminant analysis remote sensing image feature extraction method
CN114972118A (en) * 2022-06-30 2022-08-30 抖音视界(北京)有限公司 Noise reduction method and device for inspection image, readable medium and electronic equipment

Also Published As

Publication number Publication date
CN104778482B (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN104778482A (en) Hyperspectral image classifying method based on tensor semi-supervised scale cutting dimension reduction
CN106815601B (en) Hyperspectral image classification method based on recurrent neural network
CN104281855B (en) Hyperspectral image classification method based on multi-task low rank
CN103971123B (en) Hyperspectral image classification method based on linear regression Fisher discrimination dictionary learning (LRFDDL)
CN103632168B (en) Classifier integration method for machine learning
CN102208034B (en) Semi-supervised dimension reduction-based hyper-spectral image classification method
CN104392251B (en) Hyperspectral image classification method based on semi-supervised dictionary learning
CN107145836B (en) Hyperspectral image classification method based on stacked boundary identification self-encoder
CN110309868A (en) In conjunction with the hyperspectral image classification method of unsupervised learning
CN105069468A (en) Hyper-spectral image classification method based on ridgelet and depth convolution network
CN107451545B (en) The face identification method of Non-negative Matrix Factorization is differentiated based on multichannel under soft label
CN105989336B (en) Scene recognition method based on deconvolution deep network learning with weight
CN103208011B (en) Based on average drifting and the hyperspectral image space-spectral domain classification method organizing sparse coding
CN104408478A (en) Hyperspectral image classification method based on hierarchical sparse discriminant feature learning
CN105678261B (en) Based on the direct-push Method of Data with Adding Windows for having supervision figure
CN103886336A (en) Polarized SAR image classifying method based on sparse automatic encoder
CN103413151A (en) Hyperspectral image classification method based on image regular low-rank expression dimensionality reduction
CN104331698A (en) Remote sensing type urban image extracting method
CN106897669A (en) A kind of pedestrian based on consistent iteration various visual angles transfer learning discrimination method again
CN104298999B (en) EO-1 hyperion feature learning method based on recurrence autocoding
CN109492625A (en) A kind of human face identification work-attendance checking method based on width study
CN105335756A (en) Robust learning model and image classification system
CN105205449A (en) Sign language recognition method based on deep learning
CN104298977A (en) Low-order representing human body behavior identification method based on irrelevance constraint
CN103268485A (en) Sparse-regularization-based face recognition method capable of realizing multiband face image information fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant