CN104778482B - Hyperspectral image classification method based on tensor semi-supervised scale cut dimensionality reduction - Google Patents


Info

Publication number
CN104778482B
CN104778482B (granted from application CN201510224055.5A)
Authority
CN
China
Prior art keywords
tensor
sample
Prior art date
Legal status
Active
Application number
CN201510224055.5A
Other languages
Chinese (zh)
Other versions
CN104778482A (en)
Inventor
张向荣
焦李成
莫玉
冯婕
侯彪
马文萍
白静
李阳阳
郭智
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201510224055.5A
Publication of CN104778482A
Application granted
Publication of CN104778482B

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a hyperspectral image classification method based on tensor semi-supervised scale cut dimensionality reduction, which mainly addresses the problems that the excessively high dimensionality of hyperspectral images leads to a large computational load and that existing methods lose spatial information. The implementation steps are: represent the hyperspectral data set as a set of full-band sub-data cubes; select a labeled training set, a test set, and a total training set from the sub-data cube set; construct the inter-class dissimilarity matrix and the intra-class dissimilarity matrix of the labeled training set and the sample similarity matrix of the total training set; construct the tensor semi-supervised scale cut objective function from the above three matrices; solve the objective function for the projection matrix; project the labeled training set and the test set into the low-dimensional space to obtain a new labeled training set and a new test set; and input the new labeled training set and test set into a support vector machine for classification to obtain the class information of the test set. The invention achieves a higher classification accuracy rate and can be used for map making and vegetation surveys.

Description

Hyperspectral image classification method based on tensor semi-supervised scale cut dimensionality reduction
Technical Field
The invention belongs to the technical field of image processing, relates to dimension reduction of high-dimensional data, and is used for classifying hyperspectral remote sensing images.
Background
Hyperspectral remote sensing image processing, which has emerged in recent years, is a leading-edge technology of remote sensing. It uses an imaging spectrometer to image earth-surface objects simultaneously in dozens to hundreds of wave bands at nanometer-level spectral resolution, thereby obtaining continuous spectra of the surface objects and making the synchronous acquisition of their spatial, radiation, and spectral information possible; its most remarkable characteristic is the unification of image and spectrum. These developments have given human beings a qualitative leap in earth observation and information acquisition capability.
Hyperspectral image data commonly used in research include the Indian Pines data obtained by the airborne visible/infrared imaging spectrometer AVIRIS of the NASA Jet Propulsion Laboratory and the University of Pavia data obtained by the ROSIS spectrometer.
Hyperspectral image classification is the process of assigning each pixel in a hyperspectral image to its corresponding category by combining the characteristics of the image. Although the images obtained by hyperspectral remote sensing contain abundant spectral, spatial, and radiation information, which makes classification easier, classifying hyperspectral images still faces huge difficulties and challenges: (1) the data volume is large, storage and display are difficult, the number of wave bands is high, and the amount of computation is large; (2) the curse of dimensionality: the redundant information brought by an excessively high dimension reduces classification precision; (3) the number of wave bands is large and their correlation is high, which increases the required number of training samples, and insufficient samples reduce the reliability of the classification model parameters to a certain extent. Therefore, to obtain high classification accuracy, the dimensionality of the hyperspectral image must be reduced to reduce the data size.
There are many dimensionality-reduction methods; according to how labeled samples are used, they divide into supervised, unsupervised, and semi-supervised methods. For example, principal component analysis PCA and locally linear embedding LLE are unsupervised, linear discriminant analysis LDA is supervised, and semi-supervised discriminant analysis SDA is semi-supervised. Supervised dimensionality reduction trains with labeled data and obtains a low-dimensional space according to category information. In contrast, unsupervised dimensionality reduction has no class information and instead selects the low-dimensional feature vectors that best embody the data structure by finding the inherent structural features of the data. Semi-supervised methods combine the characteristics of both: they take category information into account while also mining the structural information of the data, maximizing the use of resources to obtain a better low-dimensional space.
These vector-based methods require vectorization of the image, so they rely only on spectral characteristics and ignore the spatial distribution. To overcome this defect, tensor-based hyperspectral image representations have been introduced that analyze the spatial and inter-spectral structures simultaneously, with good results.
Dacheng Tao et al., in the article "General Tensor Discriminant Analysis and Gabor Features for Gait Recognition" (PAMI 2007), proposed the tensor LDA method for gait recognition, which mainly generalizes LDA to tensor computation. This method, while utilizing label information, is unable to handle heteroscedastic and multimodal data.
Disclosure of Invention
The invention aims to provide a novel hyperspectral image classification method based on tensor semi-supervised scale cut dimensionality reduction, which uses a small number of labeled samples and a large amount of unlabeled data to better mine the internal structure of the data, solves the problems of the prior art (required image vectorization, loss of spatial information, and inability to handle heteroscedastic and multimodal data), and improves classification precision.
The technical idea of the invention is as follows: by learning the features in the image, the commonality and difference between different data are represented properly; dimensionality reduction of the hyperspectral image is realized with tensor computation, overcoming the curse of dimensionality; the essential law of the objects is sought by finding the valuable intrinsic low-dimensional structure embedded in the high-dimensional data; and the original data are represented more compactly by projecting them into a low-dimensional feature space. The method comprises the following implementation steps:
(1) Input a hyperspectral data set A ∈ R^(m×n×D), which contains c types of ground objects, where m×n is the size of the image space, i.e., the number of pixels, and D is the total number of wave bands of the data set;
(2) Take a 5×5 neighborhood block centered on each pixel of A to obtain Q full-band sub-data cubes, and express each sub-data cube as a sample with a third-order tensor to obtain a sample set {χ_a}, a = 1, ..., Q, where χ_a denotes the a-th sample, Q denotes the total number of samples, and Q = m×n;
(3) From the sample set {χ_a}, randomly select N labeled samples to form a labeled training sample set X = {x_t}, t = 1, ..., N, with the corresponding class-label vector L = {l_t}; the other Q−N unlabeled samples form a test sample set Y = {y_u}, u = 1, ..., Q−N, y_u ∈ R^(5×5×D), where x_t denotes the t-th sample of the labeled training set, l_t denotes the class label of the t-th labeled training sample, and y_u denotes the u-th sample of the test set;
(4) Select η unlabeled samples from the Q−N unlabeled samples to form, together with the N labeled samples, a total training set S = {s_k}, k = 1, ..., N+η, where s_k denotes the k-th training sample of the total training set, N+η is the number of samples of the total training set, and 1 ≤ η ≤ Q−N;
(5) Construct the inter-class dissimilarity matrix B of the labeled training set X:

B = Σ_{p=1..c} Σ_{i∈V_p} Σ_{j∈V'_p} 1/(n_p n_c(j)) · mat_3((χ_i − χ_j) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_i − χ_j) ×_1 U_1^T ×_2 U_2^T)

where V_p denotes the set of p-th-class labeled training samples, V'_p denotes the set of all samples in the labeled training sample set except the p-th class, n_p denotes the number of samples of the p-th labeled training set, n_c(j) denotes the number of samples of the class to which the j-th labeled training sample belongs, χ_i denotes the i-th labeled training sample of the p-th class, χ_j denotes the j-th labeled training sample of V'_p, U_1 denotes the projection matrix in the horizontal direction of the full-band sub-data cube, U_1 = 1^(5×5), U_2 denotes the projection matrix in the vertical direction of the full-band sub-data cube, U_2 = 1^(5×5), every element of U_1 and U_2 has the value 1, ×_1 and ×_2 denote the mode-1 and mode-2 products of a tensor with a matrix, T denotes transposition, (·) ∈ R^(5×5×D) denotes a third-order tensor whose first and second orders are compressed by these products so that each term of the sum is a matrix of size D×D, and mat_3(·), i.e., (·)_(3) ∈ R^(D×(5×5)), denotes the mode-3 unfolding of the tensor (·) into a matrix;
(6) Construct the intra-class dissimilarity matrix W of the labeled training set X:

W = Σ_{p=1..c} Σ_{i∈V_p} Σ_{h∈V_p} 1/(n_p n_p) · mat_3((χ_i − χ_h) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_i − χ_h) ×_1 U_1^T ×_2 U_2^T)

where χ_h denotes the h-th labeled training sample in V_p;
(7) Construct the similarity matrix M of all samples in the total training set S:

M = (1/2) Σ_{i',j'=1..N+η} m_{i'j'} · mat_3((χ_{i'} − χ_{j'}) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_{i'} − χ_{j'}) ×_1 U_1^T ×_2 U_2^T)

where m_{i'j'} denotes the similarity between samples χ_{i'} and χ_{j'}, χ_{i'} denotes the i'-th training sample in S, and χ_{j'} denotes the j'-th training sample in S;
(8) Construct the tensor semi-supervised scale cut objective function from the inter-class dissimilarity matrix B of the labeled training set, the intra-class dissimilarity matrix W of the labeled training set, and the similarity matrix M:

U_3^* = argmax_{U_3} tr(U_3^T B U_3) / tr(U_3^T (W + βM) U_3)

where the parameter β is a fine-tuning parameter whose value is manually set to 0.001, U_3 is the projection matrix in the direction of the required feature dimension, and tr denotes the trace of a matrix;
(9) Solve the tensor semi-supervised scale cut objective function to obtain the projection matrix U_3^* in the feature-dimension direction;
(10) Project the labeled training set X = {x_t} and the test set Y = {y_u} into the low-dimensional space spanned by U_3^* to obtain the projected new labeled training set X' = {x'_t} and new test set Y' = {y'_u}, where x'_t = x_t ×_3 (U_3^*)^T is the new feature tensor of the t-th labeled training sample, y'_u = y_u ×_3 (U_3^*)^T is the new feature tensor of the u-th test sample, and ×_3 denotes the mode-3 product of a tensor with a matrix;
(11) Input the new labeled training set X', the class-label set L, and the new test set Y' into a support vector machine (SVM) for classification to obtain the classification result L' = {l'_u} of the test set, where l'_u denotes the class label assigned to the u-th test sample.

Compared with the prior art, the invention has the following advantages:
Firstly, the invention reduces the dimensionality of the hyperspectral image data with a dimensionality-reduction algorithm before classification, which greatly reduces the amount of computation and improves the classification speed.
Secondly, based on the fact that within a small spatial region the ground-object class of a hyperspectral image is single and the pixels are highly similar, each sample is expressed as a full-band sub-data cube; through tensor computation, vectorization of the image is avoided and the regional spatial correlation and inter-spectral correlation can be exploited to the maximum extent;
Thirdly, compared with the existing tensor LDA method, the invention does not require each class of data to satisfy a Gaussian equal-variance distribution, and the dissimilarity matrices are constructed by computing the dissimilarity between samples, which eliminates the influence of class centers;
Fourthly, the method makes full use of the information provided by the labeled samples to search for a projection space that better preserves the separability of the classes, and at the same time, because the geometric structure information mined from the unlabeled samples is used, the essential geometric characteristics of the data can be reflected;
Comparative experiments show that the method effectively reduces computational complexity and improves the classification accuracy of hyperspectral remote sensing images.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is an Indian Pines image used in the simulation of the present invention;
FIG. 3 is a diagram of the classification results of Indian Pines images according to the present invention and the prior art method.
Detailed description of the preferred embodiments
The technical solution and effects of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1, the implementation steps of the invention are as follows:
Step 1. Input a hyperspectral data set A ∈ R^(m×n×D), which contains 16 types of ground objects, where m×n is the size of the image space, i.e., the number of pixels, D is the total number of wave bands of the data set, and R denotes the real number domain;
Step 2. Select the labeled training set X, the test set Y, and the total training set S.
2a) Take a 5×5 neighborhood block centered on each pixel of A to obtain Q full-band sub-data cubes, and express each sub-data cube as a sample with a third-order tensor to obtain a sample set {χ_a}, a = 1, ..., Q, where χ_a denotes the a-th sample, Q denotes the total number of samples, and Q = m×n;
2b) From the sample set {χ_a}, randomly select N labeled samples to form a labeled training sample set X = {x_t}, t = 1, ..., N, with the corresponding class-label vector L = {l_t}; the other Q−N unlabeled samples form a test sample set Y = {y_u}, u = 1, ..., Q−N, y_u ∈ R^(5×5×D), where x_t denotes the t-th sample of the labeled training set, l_t denotes the class label of the t-th labeled training sample, and y_u denotes the u-th sample of the test set;
2c) Select η unlabeled samples from the Q−N unlabeled samples to form, together with the N labeled samples, a total training set S = {s_k}, k = 1, ..., N+η, where s_k denotes the k-th training sample of the total training set, N+η is the number of samples of the total training set, and 1 ≤ η ≤ Q−N.
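As an illustration, the patch extraction of step 2a) can be sketched in Python/NumPy (the patent's experiments were run in MATLAB, so this is purely illustrative, and the mirror-padding of the image border is our assumption: the text does not say how edge pixels obtain full 5×5 neighborhoods):

```python
import numpy as np

def extract_patches(A, win=5):
    """Extract a win x win x D neighborhood cube around every pixel of A.

    A is an m x n x D hyperspectral cube; the border is mirror-padded so
    that every one of the Q = m*n pixels yields a full-band sub-cube.
    """
    m, n, D = A.shape
    r = win // 2
    Ap = np.pad(A, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches = np.empty((m * n, win, win, D), dtype=A.dtype)
    for i in range(m):
        for j in range(n):
            # patch centered on pixel (i, j) of the original image
            patches[i * n + j] = Ap[i:i + win, j:j + win, :]
    return patches
```

Each row of the result is one third-order tensor sample χ_a of shape 5×5×D, ready to be split into labeled, test, and total training sets as in steps 2b) and 2c).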
Step 3. Construct the inter-class dissimilarity matrix B and the intra-class dissimilarity matrix W of the labeled training set X.
3a) Form the p-th class of labeled samples in the labeled training set X into a same-class sample set V_p, p = 1, 2, ..., c, where χ_i denotes the i-th labeled training sample of the p-th class and n_p denotes the number of labeled training samples in V_p;
3b) Form all samples in the labeled training sample set except the p-th class into a different-class sample set V'_p, where χ_j denotes the j-th labeled training sample of V'_p and n_c(j) denotes the number of samples of the class to which the j-th labeled training sample belongs;
3c) Compute the dissimilarity between the labeled samples in the same-class set V_p and the labeled samples in the different-class set V'_p to obtain the inter-class dissimilarity matrix B_p of each class:

B_p = Σ_{i∈V_p} Σ_{j∈V'_p} 1/(n_p n_c(j)) · mat_3((χ_i − χ_j) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_i − χ_j) ×_1 U_1^T ×_2 U_2^T)

where U_1 denotes the projection matrix in the horizontal direction of the full-band sub-data cube, U_1 = 1^(5×5), U_2 denotes the projection matrix in the vertical direction of the full-band sub-data cube, U_2 = 1^(5×5), every element of U_1 and U_2 has the value 1, ×_1 and ×_2 denote the mode-1 and mode-2 products of a tensor with a matrix, T denotes transposition, the mode-1 and mode-2 products compress the first and second orders of the tensor so that each term is a matrix of size D×D, (·) ∈ R^(5×5×D) denotes a third-order tensor, and mat_3(·), i.e., (·)_(3) ∈ R^(D×(5×5)), denotes the mode-3 unfolding of the tensor (·) into a matrix;
3d) Compute the dissimilarity among the labeled samples in the same-class set V_p to obtain the intra-class dissimilarity matrix W_p of each class:

W_p = Σ_{i∈V_p} Σ_{h∈V_p} 1/(n_p n_p) · mat_3((χ_i − χ_h) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_i − χ_h) ×_1 U_1^T ×_2 U_2^T)

where χ_h denotes the h-th labeled training sample in V_p;
3e) Sum the inter-class dissimilarity matrices B_p of all classes of labeled training samples from step 3c) to obtain the inter-class dissimilarity matrix B of the labeled training set: B = Σ_{p=1..c} B_p;
3f) Sum the intra-class dissimilarity matrices W_p of all classes of labeled training samples from step 3d) to obtain the intra-class dissimilarity matrix W of the labeled training set: W = Σ_{p=1..c} W_p.
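A minimal NumPy sketch of steps 3a)-3f), assuming the labeled patches are stacked in an array `X` of shape (N, 5, 5, D) with a label per row (an illustration, not the patent's MATLAB code). Because U_1 and U_2 are all-ones matrices, the mode-1/mode-2 products reduce each band to its spatial sum, so each sample pair contributes a D×D scatter term:

```python
import numpy as np

def pair_scatter(x, y):
    """D x D contribution of one sample pair (x, y are 5 x 5 x D tensors).

    Multiplying modes 1 and 2 by the all-ones matrices U1 = U2 = 1_(5x5)
    and unfolding along mode 3 (mat_3) gives a D x 25 matrix whose columns
    all equal the per-band spatial sum of (x - y); the product
    mat_3 * mat_3^T is therefore 25 times an outer product.
    """
    d = (x - y).reshape(-1, x.shape[-1])  # 25 x D difference, spatial dims flattened
    s = d.sum(axis=0)                     # per-band spatial sums (mode-1/2 compression)
    return 25.0 * np.outer(s, s)          # D x D

def dissimilarity_matrices(X, labels):
    """Inter-class B and intra-class W, normalized per pair as in 3c)/3d)."""
    D = X.shape[-1]
    B = np.zeros((D, D))
    W = np.zeros((D, D))
    classes, counts = np.unique(labels, return_counts=True)
    n = dict(zip(classes, counts))        # n_p: samples per class
    for i in range(len(X)):
        for j in range(len(X)):
            S = pair_scatter(X[i], X[j]) / (n[labels[i]] * n[labels[j]])
            if labels[i] == labels[j]:
                W += S                    # same class: intra-class term (weight 1/n_p^2)
            else:
                B += S                    # different class: weight 1/(n_p * n_c(j))
    return B, W
```

The double loop is O(N^2) but N (the number of labeled samples, e.g. 10 per class) is small in the patent's experiments.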
Step 4. Construct the unsupervised sample similarity matrix M from the total training set S.
4a) Compute the similarity m_{i'j'} between any two samples in the total training set S, where m_{i'j'} denotes the similarity between samples χ_{i'} and χ_{j'}, χ_{i'} denotes the i'-th training sample in S, χ_{j'} denotes the j'-th training sample in S, and δ is a kernel parameter;
4b) Compute the unsupervised sample similarity matrix M from the total training set S:

M = (1/2) Σ_{i',j'=1..N+η} m_{i'j'} · mat_3((χ_{i'} − χ_{j'}) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_{i'} − χ_{j'}) ×_1 U_1^T ×_2 U_2^T).
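Step 4 can be sketched the same way. Note that the exact formula for the pairwise weight m_{i'j'} is not preserved in this text (only the kernel parameter δ survives), so the Gaussian kernel below is an assumption; the scatter term per pair is the same mode-compressed D×D matrix used for B and W:

```python
import numpy as np

def similarity_scatter(S_train, delta=1.0):
    """D x D similarity matrix M over the total training set (step 4).

    S_train has shape (K, 5, 5, D).  The pairwise weight m_{i'j'} is
    taken here as exp(-||chi_i' - chi_j'||^2 / delta^2) -- an assumed
    Gaussian kernel, since the source does not reproduce the formula.
    """
    K, _, _, D = S_train.shape
    sums = S_train.reshape(K, -1, D).sum(axis=1)  # per-band spatial sums, K x D
    M = np.zeros((D, D))
    for i in range(K):
        for j in range(K):
            w = np.exp(-np.sum((S_train[i] - S_train[j]) ** 2) / delta ** 2)
            d = sums[i] - sums[j]
            # 1/2 * m_{i'j'} * mat_3(.) mat_3^T(.); the factor 25 comes from
            # the 25 identical columns of the mode-3 unfolding
            M += 0.5 * w * 25.0 * np.outer(d, d)
    return M
```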
Step 5. Construct the tensor semi-supervised scale cut objective function from the inter-class dissimilarity matrix B of the labeled training set, the intra-class dissimilarity matrix W of the labeled training set, and the similarity matrix M:

U_3^* = argmax_{U_3} tr(U_3^T B U_3) / tr(U_3^T (W + βM) U_3)

where the parameter β is a fine-tuning parameter whose value is manually set to 0.001, U_3 is the projection matrix in the direction of the required feature dimension, and tr denotes the trace of a matrix;
Step 6. Solve the tensor semi-supervised scale cut objective function to obtain the projection matrix U_3 in the feature-dimension direction.
6a) Transform the tensor semi-supervised scale cut objective function into an equivalent eigenvalue problem, in which the adjustment parameter λ is the largest eigenvalue of (B + W + β×M)^(−1) B, and (B + W + β×M)^(−1) denotes the inverse of B + W + β×M;
6b) Set the value of the feature dimension d after dimensionality reduction, and perform singular value decomposition on the above term to obtain the d largest eigenvalues and their corresponding eigenvectors u_1, u_2, ..., u_d, where d is an integer and 0 < d ≤ D;
6c) Form the projection matrix U_3^* = [u_1, u_2, ..., u_d] in the feature-dimension direction from the eigenvectors u_1, u_2, ..., u_d.
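Steps 6a)-6c) can be sketched as follows; the small ridge added before inversion is our own numerical safeguard, not part of the described method:

```python
import numpy as np

def solve_projection(B, W, M, d, beta=0.001):
    """Top-d projection matrix U3 for the trace-ratio objective (step 6).

    Following the description, the ratio tr(U^T B U) / tr(U^T (W+beta*M) U)
    is maximized by taking the d leading eigenvectors of
    (B + W + beta*M)^(-1) B.
    """
    D = B.shape[0]
    G = B + W + beta * M + 1e-10 * np.eye(D)  # regularized matrix to invert
    vals, vecs = np.linalg.eig(np.linalg.solve(G, B))
    order = np.argsort(-vals.real)            # eigenvalues in descending order
    return vecs[:, order[:d]].real            # D x d projection matrix U3*
```

With diagonal inputs the solution is easy to check by hand: the selected columns are the coordinate axes with the largest ratios B_ii / (B_ii + W_ii + βM_ii).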
Step 7. Project the labeled training set X and the test set Y into the low-dimensional space spanned by the projection matrix U_3^* to obtain the projected new labeled training set X' and new test set Y'.
7a) Project the original labeled training set X = {x_t} into the space spanned by the projection matrix U_3^* to obtain the new labeled training set X' = {x'_t}, where x'_t = x_t ×_3 (U_3^*)^T is the new feature tensor of the t-th labeled training sample;
7b) Project the original test set Y = {y_u} into the space spanned by the projection matrix U_3^* to obtain the new test set Y' = {y'_u}, where y'_u = y_u ×_3 (U_3^*)^T is the new feature tensor of the u-th test sample.
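The mode-3 product of step 7, applied to a whole stack of patches at once, is a single einsum (illustrative sketch; `patches` is assumed shaped (Q, 5, 5, D)):

```python
import numpy as np

def project_mode3(patches, U3):
    """Mode-3 product chi x_3 U3^T for every patch in a stack.

    Each spectral fiber chi[a, b, :] (length D) is replaced by
    U3^T chi[a, b, :] (length d), giving 5 x 5 x d feature tensors.
    """
    # patches: (Q, 5, 5, D); U3: (D, d) -> result: (Q, 5, 5, d)
    return np.einsum('qabD,Dd->qabd', patches, U3)
```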
Step 8. Input the new labeled training set X', the class-label set L, and the new test set Y' into a support vector machine (SVM) for classification to obtain the classification result L' = {l'_u} of the test set, where l'_u denotes the class label assigned to the u-th test sample.
The effect of the invention can be further illustrated by the following simulation experiment:
1. Simulation conditions:
The image used in the simulation experiment is the Indian Pines image, acquired by the AVIRIS of the NASA Jet Propulsion Laboratory over northwestern Indiana in June 1992, as shown in fig. 2. It contains 16 types of ground objects in total; the class name and the number of samples of each type are shown in table 1.
TABLE 1 Indian Pines dataset Categories case
Category Category name Number of samples
1 Alfalfa 54
2 Corn-notill 1434
3 Corn-min 834
4 Corn 234
5 Grass/Pasture 497
6 Grass/Trees 747
7 Grass/Pasture-mowed 26
8 Hay-windrowed 489
9 Oats 20
10 Soybeans-notill 968
11 Soybeans-min 2468
12 Soybeans-clean 614
13 Wheat 212
14 Woods 1294
15 Building-Grass-Trees-Drives 380
16 Stone-steel Towers 95
The image in fig. 2 has a size of 145 × 145 with 220 wave bands, of which 200 remain after removing bands affected by noise and by atmospheric and water absorption. The simulation experiments of the invention are implemented in MATLAB 2011b on a platform with an AMD(TM) A8 CPU at 1.90 GHz, 8 GB of memory, and Windows 7 (64-bit).
2. Simulation content and analysis
The invention and two existing methods are used to reduce the dimensionality of the hyperspectral image Indian Pines; the two existing methods are: scale cut SC and tensor linear discriminant analysis TLDA.
The dimensionality-reduced images obtained by the method of the invention and by the two existing methods SC and TLDA are classified, where the kernel parameter γ of the SVM classifier is obtained by five-fold cross-validation, the penalty factor C is set to 100, the kernel parameter δ of the similarity matrix M is set to 1, the weight parameter β is set to 0.001, and the number η of unlabeled training samples is fixed at 2000.
Simulation 1. Select 10 samples from each of the 16 classes of data shown in table 1 as labeled samples and use the remaining samples of the 16 classes as unlabeled samples; perform 20 dimensionality-reduction and classification experiments on the 16 classes of data with the method of the invention and the two existing methods, and take the average of the 20 classification results as the final classification accuracy. The results are shown in table 2.
TABLE 2 Overall Classification accuracy of different methods on Indian Pines datasets
As can be seen from table 2, the invention is markedly superior to the two existing comparison methods; when the feature dimension is larger than 10, the classification accuracy of the method exceeds 60%, obviously higher than that of the existing methods;
As can also be seen from table 2, once the dimension exceeds 25 the results of the invention become stable, so only 25-dimensional features are needed to obtain a high recognition rate, which greatly reduces the amount of computation.
Simulation 2. Select 10 pixels from each of the 16 classes of data shown in table 1 as labeled pixels and use the remaining pixels of the whole Indian Pines image as unlabeled pixels; classify all pixels of the whole Indian Pines image with the above three methods, with the feature dimension after reduction set to 25 for each method. The results are shown in fig. 3, where graph (3a) is the classification result of the invention, graph (3b) is the classification result of the existing SC+SVM, and graph (3c) is the classification result of the existing TLDA+SVM.
As can be seen from the three graphs of figs. 3a, 3b, and 3c, the result of the invention is smoother and better classified than those obtained by the two existing methods.
In conclusion, the invention first reduces the dimensionality of the hyperspectral image and then classifies with an SVM; on the one hand, tensor operations avoid vectorization of the image and make full use of spatial information; on the other hand, the geometric structure information of the data is fully mined using both labeled and unlabeled information, improving classification precision. The invention therefore has advantages over the existing methods.

Claims (3)

1. A hyperspectral image classification method based on tensor semi-supervised scale cut dimensionality reduction, comprising the following steps:
(1) Input a hyperspectral data set A ∈ R^(m×n×D), which contains c types of ground objects, where m×n is the size of the image space, i.e., the number of pixels, D is the total number of wave bands of the data set, and R denotes the real number domain;
(2) Take a 5×5 neighborhood block centered on each pixel of A to obtain Q full-band sub-data cubes, and express each sub-data cube as a sample with a third-order tensor to obtain a sample set {χ_a}, a = 1, ..., Q, where χ_a denotes the a-th sample, Q denotes the total number of samples, and Q = m×n;
(3) From the sample set {χ_a}, randomly select N labeled samples to form a labeled training sample set X = {x_t}, t = 1, ..., N, with the corresponding class-label vector L = {l_t}; the other Q−N unlabeled samples form a test sample set Y = {y_u}, u = 1, ..., Q−N, y_u ∈ R^(5×5×D), where x_t denotes the t-th sample of the labeled training set, l_t denotes the class label of the t-th labeled training sample, and y_u denotes the u-th sample of the test set;
(4) Select η unlabeled samples from the Q−N unlabeled samples to form, together with the N labeled samples, a total training set S = {s_k}, k = 1, ..., N+η, where s_k denotes the k-th training sample of the total training set, N+η is the number of samples of the total training set, and 1 ≤ η ≤ Q−N;
(5) Construct the inter-class dissimilarity matrix B of the labeled training set X:
B = Σ_{p=1..c} Σ_{i∈V_p} Σ_{j∈V'_p} 1/(n_p n_c(j)) · mat_3((χ_i − χ_j) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_i − χ_j) ×_1 U_1^T ×_2 U_2^T)
where V_p denotes the set of p-th-class labeled training samples, V'_p denotes the set of all samples in the labeled training sample set except the p-th class, n_p denotes the number of samples of the p-th labeled training set, n_c(j) denotes the number of samples of the class to which the j-th labeled training sample belongs, χ_i denotes the i-th labeled training sample of the p-th class, χ_j denotes the j-th labeled training sample of V'_p, U_1 denotes the projection matrix in the horizontal direction of the full-band sub-data cube, U_1 = 1^(5×5), U_2 denotes the projection matrix in the vertical direction of the full-band sub-data cube, U_2 = 1^(5×5), every element of U_1 and U_2 has the value 1, ×_1 and ×_2 denote the mode-1 and mode-2 products of a tensor with a matrix, T denotes transposition, (·) ∈ R^(5×5×D) denotes a third-order tensor whose first and second orders are compressed by these products so that each term of the sum is a matrix of size D×D, and mat_3(·), i.e., (·)_(3) ∈ R^(D×(5×5)), denotes the mode-3 unfolding of the tensor (·) into a matrix;
(6) Construct the intra-class dissimilarity matrix W of the labeled training set X:
W = Σ_{p=1..c} Σ_{i∈V_p} Σ_{h∈V_p} 1/(n_p n_p) · mat_3((χ_i − χ_h) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_i − χ_h) ×_1 U_1^T ×_2 U_2^T)
where χ_h denotes the h-th labeled training sample in V_p;
(7) Construct the similarity matrix M of all samples in the total training set S:
M = (1/2) Σ_{i',j'=1..N+η} m_{i'j'} · mat_3((χ_{i'} − χ_{j'}) ×_1 U_1^T ×_2 U_2^T) · mat_3^T((χ_{i'} − χ_{j'}) ×_1 U_1^T ×_2 U_2^T)
wherein m_{i'j'} represents the similarity between samples χ_{i'} and χ_{j'}, χ_{i'} denotes the i'-th training sample in S, and χ_{j'} denotes the j'-th training sample in S;
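Step (7) differs from the within-class construction only in ranging over all pairs of the total training set and weighting each term by the pairwise similarity. A minimal sketch, assuming the similarity weights m[i, j] are supplied as a precomputed matrix (their kernel form is given separately in claim 2); `_mat3_proj` and `similarity_scatter` are illustrative names:

```python
import numpy as np

def _mat3_proj(X, U1, U2):
    """(X x_1 U1^T x_2 U2^T) followed by mode-3 unfolding to D3 x (d1*d2)."""
    Y = np.tensordot(U1.T, X, axes=(1, 0))                      # d1 x D2 x D3
    Y = np.moveaxis(np.tensordot(U2.T, Y, axes=(1, 1)), 0, 1)   # d1 x d2 x D3
    return np.moveaxis(Y, 2, 0).reshape(Y.shape[2], -1)

def similarity_scatter(samples, m, U1, U2):
    """Sketch of M: a similarity-weighted sum (with factor 1/2) of
    mat3(diff) mat3(diff)^T over all sample pairs of the total training set."""
    D3 = samples[0].shape[2]
    M = np.zeros((D3, D3))
    n = len(samples)
    for i in range(n):
        for j in range(n):
            A = _mat3_proj(samples[i] - samples[j], U1, U2)
            M += 0.5 * m[i, j] * (A @ A.T)
    return M
```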
(8) constructing the tensor semi-supervised scale-cut objective function from the inter-class dissimilarity matrix B of the labeled training set, the intra-class dissimilarity matrix W of the labeled training set, and the similarity matrix M:
$$U_3^{*}=\arg\max_{U_3}\frac{\mathrm{tr}\!\left(U_3^T B\,U_3\right)}{\mathrm{tr}\!\left(U_3^T (W+\beta M)\,U_3\right)}$$
wherein the parameter β is a fine-tuning parameter whose value is manually set to 0.001, U_3 is the projection matrix to be solved in the direction of the feature dimension, and tr(·) denotes the trace of a matrix;
(9) solving the tensor semi-supervised scale-cut objective function to obtain the projection matrix U_3^* in the direction of the feature dimension;
(10) projecting the labeled training set and the test set, respectively, onto the low-dimensional space formed by U_3^*, i.e., taking the mode-3 product of each sample tensor with U_3^*, to obtain a new labeled training set and a new test set after projection, wherein the t-th element of the new labeled training set is the new feature tensor of the t-th labeled training sample, and the u-th element of the new test set is the new feature tensor of the u-th test sample;
(11) inputting the new labeled training set, its class-label set, and the new test set into a support vector machine (SVM) for classification to obtain the classification result of the test set, wherein l'_u denotes the class label to which the u-th test sample belongs.
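Steps (10) and (11) can be sketched as follows. The patent does not fix a particular SVM implementation; scikit-learn's `SVC` is assumed here, and `project_mode3` and `classify` are illustrative names. Each low-dimensional tensor is flattened to a vector before being fed to the SVM:

```python
import numpy as np
from sklearn.svm import SVC

def project_mode3(samples, U3):
    """Step (10): mode-3 product of each D1 x D2 x D3 sample tensor with U3
    (shape D3 x d), reducing the spectral dimension from D3 to d."""
    return [np.tensordot(X, U3, axes=(2, 0)) for X in samples]

def classify(train, train_labels, test, U3, **svm_kw):
    """Step (11) sketch: project both sets, flatten each projected tensor
    to a feature vector, train an SVM, and predict test-set labels."""
    Xtr = np.stack([t.ravel() for t in project_mode3(train, U3)])
    Xte = np.stack([t.ravel() for t in project_mode3(test, U3)])
    clf = SVC(**svm_kw).fit(Xtr, train_labels)
    return clf.predict(Xte)
```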
2. The hyperspectral image classification method based on tensor semi-supervised scale-cut dimension reduction according to claim 1, wherein the similarity m_{i'j'} between samples χ_{i'} and χ_{j'} in step (7) is calculated by the following formula:
wherein δ is the kernel parameter.
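The formula image itself is not reproduced in this text. A Gaussian (heat-kernel) similarity with kernel parameter δ is a common choice for such graph weights and is assumed here purely for illustration; it may not match the claim's exact formula:

```python
import numpy as np

def gaussian_similarity(xi, xj, delta):
    """Illustrative heat-kernel similarity between two sample tensors:
    exp(-||xi - xj||^2 / delta^2), equal to 1 when the samples coincide."""
    return np.exp(-np.sum((xi - xj) ** 2) / (delta ** 2))
```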
3. The hyperspectral image classification method based on tensor semi-supervised scale-cut dimension reduction according to claim 1, wherein the tensor semi-supervised scale-cut objective function in step (9) is solved according to the following steps:
9a) transforming the tensor semi-supervised scale-cut objective function into an equivalent eigenvalue problem, wherein λ is an adjustment parameter whose value is the maximum eigenvalue of (B+W+β×M)^{-1}B, and (B+W+β×M)^{-1} denotes the inverse of B+W+β×M;
9b) setting the value of the feature dimension d after dimension reduction, and performing singular value decomposition on the matrix (B+W+β×M)^{-1}B obtained above to obtain the d largest eigenvalues and their corresponding eigenvectors u_1, u_2, ..., u_d, wherein d is an integer and 0 < d ≤ D;
9c) forming the projection matrix U_3^* = [u_1, u_2, ..., u_d] in the direction of the feature dimension from the eigenvectors u_1, u_2, ..., u_d.
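The procedure in claim 3 amounts to an eigendecomposition of (B+W+β×M)^{-1}B followed by selecting the top-d eigenvectors. A minimal NumPy sketch with β = 0.001 as in step (8); the `solve_projection` name and the use of a pseudo-inverse (to guard against a singular denominator matrix, which the claim does not discuss) are my additions:

```python
import numpy as np

def solve_projection(B, W, M, beta=0.001, d=2):
    """Top-d eigenvectors of (B + W + beta*M)^{-1} B, stacked as the
    columns of the projection matrix U3 in the feature-dimension direction."""
    S = np.linalg.pinv(B + W + beta * M) @ B
    # the product need not be symmetric, so use the general eigendecomposition
    vals, vecs = np.linalg.eig(S)
    order = np.argsort(vals.real)[::-1]          # eigenvalues, largest first
    return vecs[:, order[:d]].real
```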
CN201510224055.5A 2015-05-05 2015-05-05 Hyperspectral image classification method based on tensor semi-supervised scale-cut dimension reduction Active CN104778482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510224055.5A CN104778482B (en) 2015-05-05 2015-05-05 Hyperspectral image classification method based on tensor semi-supervised scale-cut dimension reduction

Publications (2)

Publication Number Publication Date
CN104778482A CN104778482A (en) 2015-07-15
CN104778482B true CN104778482B (en) 2018-03-13

Family

ID=53619935



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814148A (en) * 2010-04-30 2010-08-25 霍振国 Remote sensing hyperspectral image classification method based on semi-supervised kernel adaptive learning
CN102024153A (en) * 2011-01-06 2011-04-20 西安电子科技大学 Hyperspectral image supervised classification method
CN102208037A (en) * 2011-06-10 2011-10-05 西安电子科技大学 Hyper-spectral image classification method based on Gaussian process classifier collaborative training algorithm
CN102208034A (en) * 2011-07-16 2011-10-05 西安电子科技大学 Semi-supervised dimension reduction-based hyper-spectral image classification method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8942424B2 (en) * 2012-07-30 2015-01-27 The United States Of America, As Represented By The Secretary Of The Navy Method of optimal out-of-band correction for multispectral remote sensing




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant