CN109190511B - Hyperspectral classification method based on local and structural constraint low-rank representation - Google Patents
Hyperspectral classification method based on local and structural constraint low-rank representation
- Publication number: CN109190511B
- Application number: CN201810919458.5A
- Authority: CN (China)
- Prior art keywords: matrix, hyperspectral image, pixel, low-rank
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
- G06V20/194—Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
Abstract
The invention provides a hyperspectral classification method based on local and structural constraint low-rank representation. First, the input hyperspectral image is normalized; next, an objective function based on local and structural constraint low-rank representation is constructed; the objective function is then solved with the augmented Lagrange multiplier method and an alternating iterative update algorithm to obtain a low-rank decomposition matrix; finally, the class label of each test pixel is computed from the low-rank decomposition matrix, completing the classification of the hyperspectral image. The method adapts to hyperspectral images whose classes differ in pixel compactness, is robust to noise and outliers, and significantly improves classification accuracy.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a hyperspectral classification method based on local and structural constraint low-rank representation.
Background
Different from traditional remote sensing images, a hyperspectral image contains not only the spatial position information of surface targets, i.e. image information, but also the spectral curve corresponding to each band, i.e. spectral information; hyperspectral imagery therefore has an important characteristic: the unification of image and spectrum. Owing to this characteristic, hyperspectral images carry rich and diverse ground-object information and can capture subtle differences that ordinary images cannot resolve. Hyperspectral classification separates different types of ground objects according to this rich information and has been widely studied in recent years. The document "Sumarsono, Alex, and Qian Du, Low-Rank Subspace Representation for Supervised and Unsupervised Classification of Hyperspectral Imagery, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 9, pp. 4158-4171, 2016" proposes a hyperspectral image classification method based on low-rank representation. It points out that although hyperspectral data has very high dimensionality, most of the useful image information lies in a few low-dimensional subspaces while the noise forms a sparse matrix, so the original hyperspectral image can be decomposed into a low-rank data matrix and a sparse noise matrix. Based on this low-rank property, the method first applies low-rank decomposition to the original hyperspectral image to obtain a denoised low-rank matrix, and then classifies the resulting low-rank image with an existing efficient classification algorithm; extensive experiments show that this low-rank decomposition preprocessing step improves classifier accuracy.
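The low-rank-plus-sparse assumption described above can be illustrated with a small synthetic example (purely illustrative; the array sizes and values are invented for this sketch):

```python
import numpy as np

# A rank-1 "clean" data matrix (all columns lie in a one-dimensional
# subspace) corrupted by a few large sparse entries, mimicking the
# low-rank-data + sparse-noise model of a hyperspectral image.
rng = np.random.default_rng(0)
L = np.outer(rng.random(50), rng.random(40))                 # low-rank part
S = np.zeros_like(L)
S[rng.integers(0, 50, 20), rng.integers(0, 40, 20)] = 5.0    # sparse noise
X = L + S                                                    # observed data
print(np.linalg.matrix_rank(L))                              # rank stays 1
```

Low-rank decomposition methods aim to recover the denoised part L given only the observation X.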
However, this method has two shortcomings. First, the low-rank property of the hyperspectral image is used only to preprocess the image data and does not assist the design of the classifier. Second, the method adopts only the most common low-rank decomposition algorithm, which has low applicability to hyperspectral images, so the resulting low-rank decomposition matrix is not the optimal representation of the original hyperspectral image.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a hyperspectral classification method based on local and structural constraint low-rank representation. First, the input hyperspectral image is normalized; next, an objective function based on local and structural constraint low-rank representation is constructed; the objective function is then solved with the augmented Lagrange multiplier method and an alternating iterative update algorithm to obtain a low-rank decomposition matrix; finally, the class label of each test pixel is computed from the low-rank decomposition matrix, completing the classification of the hyperspectral image. The method adapts to hyperspectral images whose classes differ in pixel compactness, is robust to noise and outliers, and significantly improves classification accuracy.
A hyperspectral classification method based on local and structural constraint low-rank representation is characterized by comprising the following steps:
step 1: normalize the hyperspectral image data with the linear min-max normalization method to obtain the normalized hyperspectral image matrix X, where each column of X is the spectral vector of one pixel and every spectral reflectance value lies between 0 and 1;
step 2: based on the local constraint and the structure-preserving criterion, establish the following objective function of the local and structural constraint low-rank representation:

min_{Z,E} ||Z||_* + λ||E||_{2,1} + α||M ⊙ Z||_1 + β||Z − Q||_F^2, s.t. X = X_1 Z + E (1)

wherein Z is the low-rank decomposition matrix and E is the error matrix; λ is the regularization coefficient of the error term, λ ≥ 0; α is the regularization coefficient of the local constraint term, α ≥ 0; β is the regularization coefficient of the structural constraint term, β ≥ 0; M is a distance matrix and Q is a predefined matrix. The normalized hyperspectral image X can be divided into a training set and a test set, i.e. X = [X_1, X_2], where X_1 is the training set matrix and X_2 is the test set matrix; the training set consists of 5%-15% of the pixels selected from each class, and the test set consists of the remaining hyperspectral pixels. Q and Z can be divided in the same way, i.e. Q = [Q_1, Q_2] and Z = [Z_1, Z_2]. Each element of the distance matrix M is computed as M_ij = ||x_i − x_j||_2 + m·||l_i − l_j||_2, where x_i and x_j are the spectral vectors of the i-th and j-th pixels of the normalized hyperspectral image X, l_i and l_j are the spatial coordinate vectors of the i-th and j-th pixels, and m is a parameter balancing the spectral and spatial features, m ≥ 0, i = 1, …, n_1, j = 1, …, n; n_1 is the number of pixels in the training set X_1 and n is the total number of pixels in X. Each element of the predefined matrix Q is computed as Q_ij = exp(−M_ij/σ), where σ is a parameter controlling the number of neighboring pixel points, σ ≥ 0, i = 1, …, n_1, j = 1, …, n. ||·||_* is the nuclear norm of a matrix, i.e. the sum of all its singular values; ||·||_{2,1} is the L_{2,1} norm, computed as ||E||_{2,1} = Σ_{j=1}^{n} sqrt(Σ_{i=1}^{d} E_ij^2), where d is the dimension of the pixel spectral vector in the hyperspectral image; ||·||_1 is the L_1 norm of a matrix, i.e. the sum of the absolute values of all its elements; ||·||_F is the Frobenius norm of a matrix, i.e. the square root of the sum of squares of all its elements; ⊙ is the Hadamard operator, denoting elementwise multiplication of two matrices;
and step 3: introduce auxiliary variables H and J, and convert formula (1) with the augmented Lagrange multiplier method into:

L = ||J||_* + λ||E||_{2,1} + α||M ⊙ H||_1 + β||Z − Q||_F^2 + ⟨Y_1, X − X_1 Z − E⟩ + ⟨Y_2, Z − J⟩ + ⟨Y_3, H − Z⟩ + (μ/2)(||X − X_1 Z − E||_F^2 + ||Z − J||_F^2 + ||H − Z||_F^2) (2)

wherein ⟨A, B⟩ = trace(A^T B), trace denotes the trace of a matrix, μ is a penalty factor, μ > 0, and Y_1, Y_2 and Y_3 are Lagrange multipliers;
and then solve for the optimal H, J, Z and E with an alternating iterative update algorithm, specifically:
step 3.1: initialize λ = 20, α = 0.8, β = 0.6, Y_1^k = Y_2^k = Y_3^k = 0, H^k = J^k = Z^k = E^k = 0, μ^k = 10^{-6}, where the superscript k denotes the iteration number, initially k = 1;
step 3.2: fix J, Z and E, and update each element of H according to:

H_ij^{k+1} = Θ_{ω_ij}(Z_ij^k − Y_{3,ij}^k/μ^k) (3)

where Θ_ω(x) = max(x − ω, 0) + min(x + ω, 0), ω is the matrix with elements ω_ij = (α/μ^k) M_ij, i = 1, …, n_1, j = 1, …, n; Z_ij^k denotes the element in row i, column j of Z^k, and Y_{3,ij}^k the element in row i, column j of Y_3^k;
step 3.3: fix H, Z and E, and update J by singular value thresholding:

J^{k+1} = U Θ_{1/μ^k}(Σ) V^T (4)

where U Σ V^T is the singular value decomposition of Z^k + Y_2^k/μ^k and Θ_{1/μ^k} shrinks each singular value by 1/μ^k;
step 3.4: fix H, J and E, and update Z according to:

Z^{k+1} = B^{-1}(X_1^T A^k + J^k − Y_2^k/μ^k + C^k + (2β/μ^k) Q) (5)

wherein B = X_1^T X_1 + 2I + (2β/μ^k) I, I is the identity matrix, A^k = X − E^k + Y_1^k/μ^k, C^k = H^k + Y_3^k/μ^k, and X_1^T is the transpose of the training set matrix X_1.
Step 3.5: fix H, J and Z, and update each column of E according to:

E_{:,j}^{k+1} = max(1 − λ/(μ^k ||G_{:,j}^k||_2), 0) · G_{:,j}^k, with G^k = X − X_1 Z^{k+1} + Y_1^k/μ^k (6)
step 3.6: update the penalty factor according to:

μ^{k+1} = min(ρ μ^k, max_μ) (7)

where max_μ is the upper bound of μ, set to max_μ = 10^10, and ρ is a step-size control parameter in the range 1 ≤ ρ ≤ 2;
then, the Lagrange multipliers are updated separately as follows:

Y_1^{k+1} = Y_1^k + μ^{k+1}(X − X_1 Z^{k+1} − E^{k+1}) (8)
Y_2^{k+1} = Y_2^k + μ^{k+1}(Z^{k+1} − J^{k+1}) (9)
Y_3^{k+1} = Y_3^k + μ^{k+1}(H^{k+1} − Z^{k+1}) (10)
step 3.7: if ||X − X_1 Z^{k+1} − E^{k+1}||_∞ < ε, ||Z^{k+1} − J^{k+1}||_∞ < ε and ||H^{k+1} − Z^{k+1}||_∞ < ε are satisfied simultaneously, stop the iteration; the H, J, Z and E computed at this point are the final solutions. Otherwise set the iteration number k = k + 1 and return to step 3.2. Here ||·||_∞ denotes the maximum absolute value of the elements of a matrix, and ε is an error tolerance parameter set to ε = 10^{-4}.
And 4, step 4: compute the class label of each test pixel y_q according to label(y_q) = argmax_{l ∈ {1, …, c}} Σ_{i: x_i belongs to class l} Z_{2,iq}, where Z_2 is the test-set part of the low-rank decomposition matrix Z, Z_{2,iq} is the element in row i, column q of Z_2, c is the total number of classes of hyperspectral image pixels, q = 1, … n_2, and n_2 is the number of pixels in the test set X_2.
The invention has the following beneficial effects: because an objective function of local and structural constraint low-rank representation is adopted, spectral and spatial features are better balanced and different types of hyperspectral images are better accommodated; because the objective function contains the reconstruction-error minimization term λ||E||_{2,1}, the noise of the hyperspectral image is removed during the solving process, which improves robustness to outliers and noise; and because the obtained low-rank decomposition matrix directly encodes the similarity between pixels, it can be used for pixel classification directly, which is simple to implement and effectively improves classification efficiency.
Drawings
FIG. 1 is a flow chart of a hyperspectral classification method based on local and structural constraint low-rank representation according to the invention;
FIG. 2 is a schematic diagram of the local and structural constraint low-rank representation of the present invention;
FIG. 3 is a diagram of the classification results of different algorithms on an Indian Pines dataset;
FIG. 4 is a diagram of the classification results of different algorithms on a Pavia University dataset;
in the figures, (a) is the ground-truth map; (b) is the SVM algorithm result; (c) is the SVMCK algorithm result; (d) is the JRSRC algorithm result; (e) is the cdSRC algorithm result; (f) is the LRR algorithm result; (g) is the LGIDL algorithm result; (h) is the LSLRR algorithm result.
Detailed Description
The present invention will be further described with reference to the drawings and embodiments; the invention includes, but is not limited to, the following examples.
As shown in fig. 1, the hyperspectral classification method based on local and structural constraint low-rank representation of the invention is basically implemented as follows:
1. Hyperspectral image normalization processing
Because the spectral values in the original hyperspectral image data reach the thousands, numerical overflow may occur during computation and the algorithm may run more slowly; a preprocessing step solves this problem. Therefore, the given hyperspectral image data is normalized with the linear min-max normalization method to obtain the normalized hyperspectral image matrix X, where each column of X is the spectral vector of one pixel and every spectral reflectance value in X lies between 0 and 1. The specific steps are as follows:
calculate the minimum value p_1 and maximum value p_2 of the pixel spectral reflectance over the whole three-dimensional hyperspectral image, and then normalize each pixel point according to:

x = (x_o − p_1)/(p_2 − p_1) (11)

where x_o and x denote the original and normalized spectral reflectance values of the pixel point, respectively.
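The normalization step above can be sketched in numpy (the function name and the rows × columns × bands array layout are assumptions for this sketch):

```python
import numpy as np

def normalize_hsi(cube):
    """Global min-max normalization of a hyperspectral cube
    (rows x cols x bands), returning a d x n matrix X whose columns
    are pixel spectra with values in [0, 1]."""
    p1, p2 = cube.min(), cube.max()        # global min/max over the cube
    scaled = (cube - p1) / (p2 - p1)       # formula (11)
    rows, cols, bands = scaled.shape
    # flatten the spatial dimensions: each column of X is one pixel
    return scaled.reshape(rows * cols, bands).T
```

After this step every spectral reflectance value lies between 0 and 1, as required by step 1.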
2. Constructing an objective function of local and structural constraint low-rank representation
The normalized hyperspectral image X can be divided into a training set and a test set, i.e. X = [X_1, X_2], where X_1 is the training set matrix and X_2 is the test set matrix; the training set consists of a certain proportion of samples selected from each class of pixels (generally 5%-15% per class, depending on the hyperspectral dataset), and the remaining hyperspectral pixels form the test set.
First, the distance M_ij between each pixel and the other pixels of the normalized hyperspectral image X is computed according to the following distance metric, yielding the distance matrix M:

M_ij = ||x_i − x_j||_2 + m·||l_i − l_j||_2 (12)

where x_i and x_j are the spectral vectors of the i-th and j-th pixels of the normalized hyperspectral image X, l_i and l_j are the spatial coordinate vectors of the i-th and j-th pixels, and m is a parameter balancing the spectral and spatial features, taking an arbitrary value m ≥ 0; i = 1, …, n_1, j = 1, …, n, where n_1 is the number of pixels in the training set X_1 and n is the number of all pixels in X. Because the distance matrix M is computed as a weighted combination of different types of features, adjusting the balance parameter m adjusts the weights of the spectral and spatial features, so the method adapts well to different types of hyperspectral images.
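A numpy sketch of such a spectral-spatial distance between the n_1 training pixels and all n pixels, assuming a weighted-sum combination of the two distances (the patent's exact formula is rendered as an image and may combine the terms differently; names are illustrative):

```python
import numpy as np

def distance_matrix(X, coords, train_idx, m=25.0):
    """n1 x n distance matrix combining the spectral distance
    ||x_i - x_j|| with the spatial distance ||l_i - l_j||, weighted
    by the balance parameter m."""
    Xs = X[:, train_idx]                    # d x n1 training spectra
    Ls = coords[:, train_idx]               # 2 x n1 training coordinates
    # broadcast pairwise differences, then take norms over the feature axis
    spec = np.linalg.norm(Xs[:, :, None] - X[:, None, :], axis=0)
    spat = np.linalg.norm(Ls[:, :, None] - coords[:, None, :], axis=0)
    return spec + m * spat
```

Increasing m emphasizes spatial proximity over spectral similarity, matching the role of the balance parameter described above.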
Since the element Z_ij of the low-rank decomposition matrix Z can represent the similarity between the i-th and j-th pixels, there is the prior information that the larger the distance between two pixels, the smaller their similarity, and hence the smaller the product of distance and similarity; such a local constraint criterion can be described mathematically as:

min_Z ||M ⊙ Z||_1 (13)

where ⊙ is the Hadamard operator, denoting elementwise multiplication of two matrices. This local constraint criterion enables the low-rank representation to learn the local features of the hyperspectral data.
Then, each element of the predefined matrix Q is calculated according to:

Q_ij = exp(−M_ij/σ) (14)

where σ is a parameter controlling the number of neighboring pixel points, satisfying σ ≥ 0.
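A sketch of one plausible form of Q (the exponential kernel is an assumption; the patent's own formula is rendered as an image). A larger σ lets more distant pixels keep a non-negligible weight, i.e. more neighbors count:

```python
import numpy as np

def predefined_q(M, sigma=0.8):
    """Build the predefined matrix Q from the distance matrix M with an
    exponentially decaying kernel: nearby pixels get weights close to 1,
    distant pixels decay toward 0 at a rate controlled by sigma."""
    return np.exp(-M / sigma)
```

With σ = 0.8 (the value used in the embodiment), only pixels within a small spectral-spatial distance retain significant weight.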
Likewise, both Q and Z can be divided into training and test parts in the same manner as X, i.e. Q = [Q_1, Q_2] and Z = [Z_1, Z_2]. By setting the diagonal block of the training part Q_1 to 0 and properly adjusting the parameter σ, the test part Q_2 can be made to approximate a block-diagonal matrix; the structure-preserving strategy is therefore expressed as:

min_Z ||Z − Q||_F^2 (15)

where ||·||_F is the Frobenius norm of a matrix, i.e. the square root of the sum of squares of all its elements.
Combining the local constraint criterion of formula (13) and the structure-preserving strategy of formula (15), the objective function of the local and structural constraint low-rank representation constructed by the invention is:

min_{Z,E} ||Z||_* + λ||E||_{2,1} + α||M ⊙ Z||_1 + β||Z − Q||_F^2, s.t. X = X_1 Z + E (16)

where λ is the regularization coefficient of the error term, α that of the local constraint term, and β that of the structural constraint term, all three being arbitrary non-negative values; ||·||_* is the nuclear norm of a matrix, i.e. the sum of all its singular values; ||·||_{2,1} is the L_{2,1} norm, ||E||_{2,1} = Σ_{j=1}^{n} sqrt(Σ_{i=1}^{d} E_ij^2), where d is the dimension of a hyperspectral pixel; ||·||_1 is the L_1 norm of a matrix, i.e. the sum of the absolute values of all its elements.
3. Solving an objective function using an augmented Lagrange multiplier method and an alternate iterative update algorithm
The correlation between the variables Z and E to be solved in the objective function is strong, which makes a direct solution troublesome; therefore two auxiliary variables H and J are first introduced, and formula (16) is converted with the augmented Lagrange multiplier method into:

L = ||J||_* + λ||E||_{2,1} + α||M ⊙ H||_1 + β||Z − Q||_F^2 + ⟨Y_1, X − X_1 Z − E⟩ + ⟨Y_2, Z − J⟩ + ⟨Y_3, H − Z⟩ + (μ/2)(||X − X_1 Z − E||_F^2 + ||Z − J||_F^2 + ||H − Z||_F^2) (17)

where ⟨A, B⟩ = trace(A^T B), μ > 0 is a penalty factor, and Y_1, Y_2 and Y_3 are Lagrange multipliers.
Then, following the idea of fixing the other variables and optimizing the given one, an alternating iterative update algorithm is used to solve for the optimal H, J, Z and E in turn. The specific process is as follows:
step 3.1: initialize λ = 20, α = 0.8, β = 0.6, Y_1^k = Y_2^k = Y_3^k = 0, H^k = J^k = Z^k = E^k = 0, μ^k = 10^{-6}, where the superscript k denotes the iteration number, initially k = 1;
step 3.2: fix J, Z and E; then H can be updated by solving

min_H (α/μ^k) ||M ⊙ H||_1 + (1/2) ||H − (Z^k − Y_3^k/μ^k)||_F^2 (18)

By derivation, the optimal solution is:

H_ij^{k+1} = Θ_{ω_ij}(Z_ij^k − Y_{3,ij}^k/μ^k) (19)

where Θ_ω(x) = max(x − ω, 0) + min(x + ω, 0), ω is the matrix with elements ω_ij = (α/μ^k) M_ij, i = 1, …, n_1, j = 1, …, n; Z_ij^k denotes the element in row i, column j of Z^k, and Y_{3,ij}^k the element in row i, column j of Y_3^k.
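The shrinkage operator Θ and the H update can be written directly in numpy (variable names are illustrative; the sign of the Y_3 term is reconstructed from the multiplier update of formula (10) and should be treated as an assumption):

```python
import numpy as np

def soft_threshold(x, omega):
    """Elementwise shrinkage Theta(x) = max(x - w, 0) + min(x + w, 0)."""
    return np.maximum(x - omega, 0.0) + np.minimum(x + omega, 0.0)

def update_H(Z, Y3, M, mu, alpha):
    """H_{ij} <- Theta_{(alpha/mu) M_{ij}}(Z_{ij} - Y3_{ij}/mu):
    entries with a large distance M_{ij} are shrunk harder toward 0."""
    return soft_threshold(Z - Y3 / mu, (alpha / mu) * M)
```

Note that the threshold ω_ij grows with the distance M_ij, which is exactly how the local constraint suppresses similarity between far-apart pixels.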
Step 3.3: H, Z and E are fixed, and J is updated by singular value thresholding:

J^{k+1} = U Θ_{1/μ^k}(Σ) V^T (20)

where U Σ V^T is the singular value decomposition of Z^k + Y_2^k/μ^k and Θ_{1/μ^k} shrinks each singular value by 1/μ^k.
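With the other variables fixed, the J subproblem is a nuclear-norm proximal step, whose standard closed-form solution is singular value thresholding (the patent's own formula is rendered as an image, so this standard form is an assumption):

```python
import numpy as np

def update_J(Z, Y2, mu):
    """Singular value thresholding: the closed-form minimizer of
    (1/mu)||J||_* + 0.5 * ||J - (Z + Y2/mu)||_F^2."""
    U, s, Vt = np.linalg.svd(Z + Y2 / mu, full_matrices=False)
    s_shrunk = np.maximum(s - 1.0 / mu, 0.0)   # shrink the singular values
    return (U * s_shrunk) @ Vt
```

Shrinking singular values toward zero is what drives J (and through the constraint Z = J, the representation Z) toward low rank.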
step 3.4: fix H, J and E; then Z can be updated by minimizing the terms of formula (17) that depend on Z:

min_Z β||Z − Q||_F^2 + (μ^k/2)(||X − X_1 Z − E^k + Y_1^k/μ^k||_F^2 + ||Z − J^k + Y_2^k/μ^k||_F^2 + ||H^k − Z + Y_3^k/μ^k||_F^2) (21)

This is a quadratic minimization problem whose closed-form solution is obtained by setting its derivative to 0; the optimal solution is:

Z^{k+1} = B^{-1}(X_1^T A^k + J^k − Y_2^k/μ^k + C^k + (2β/μ^k) Q) (22)

wherein B = X_1^T X_1 + 2I + (2β/μ^k) I, I is the identity matrix, A^k = X − E^k + Y_1^k/μ^k, C^k = H^k + Y_3^k/μ^k, and X_1^T is the transpose of the training set matrix X_1.
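A numpy sketch of this closed-form Z update, with A^k, C^k and B reconstructed from the definitions in the text (treat the exact right-hand side as an assumption; names are illustrative):

```python
import numpy as np

def update_Z(X, X1, E, H, J, Q, Y1, Y2, Y3, mu, beta):
    """Solve B Z = X1^T A + (J - Y2/mu) + C + (2 beta/mu) Q with
    B = X1^T X1 + 2 I + (2 beta/mu) I, A = X - E + Y1/mu, C = H + Y3/mu."""
    n1 = X1.shape[1]
    A = X - E + Y1 / mu
    C = H + Y3 / mu
    B = X1.T @ X1 + (2.0 + 2.0 * beta / mu) * np.eye(n1)
    rhs = X1.T @ A + (J - Y2 / mu) + C + (2.0 * beta / mu) * Q
    return np.linalg.solve(B, rhs)   # solve, rather than forming B^{-1}
```

Using `np.linalg.solve` instead of an explicit inverse is numerically preferable; B is only n_1 × n_1 (the number of training pixels), so this solve is cheap.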
Step 3.5: fix H, J and Z; E can then be updated by solving

min_E (λ/μ^k) ||E||_{2,1} + (1/2) ||E − G^k||_F^2, with G^k = X − X_1 Z^{k+1} + Y_1^k/μ^k (23)

The optimal solution is obtained columnwise:

E_{:,j}^{k+1} = max(1 − λ/(μ^k ||G_{:,j}^k||_2), 0) · G_{:,j}^k (24)
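The columnwise shrinkage can be sketched as follows (the standard closed form of the L_{2,1} proximal operator; variable names are illustrative):

```python
import numpy as np

def update_E(X, X1, Z, Y1, mu, lam):
    """Columnwise L_{2,1} shrinkage: columns of G = X - X1 Z + Y1/mu
    with small norm are zeroed, so noisy pixels land in E."""
    G = X - X1 @ Z + Y1 / mu
    norms = np.linalg.norm(G, axis=0)                        # per-column norms
    scale = np.maximum(1.0 - (lam / mu) / np.maximum(norms, 1e-12), 0.0)
    return G * scale                                         # broadcast scaling
```

Because whole columns are shrunk at once, E collects entire corrupted pixel spectra, which is what gives the method its robustness to outliers.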
step 3.6: update the corresponding parameters of the augmented Lagrange multiplier method, namely the penalty factor:

μ^{k+1} = min(ρ μ^k, max_μ) (25)

where max_μ is the upper bound of μ, set to max_μ = 10^10, and ρ is a step-size control parameter in the range 1 ≤ ρ ≤ 2;
then, the Lagrange multipliers are updated:

Y_1^{k+1} = Y_1^k + μ^{k+1}(X − X_1 Z^{k+1} − E^{k+1}) (26)
Y_2^{k+1} = Y_2^k + μ^{k+1}(Z^{k+1} − J^{k+1}) (27)
Y_3^{k+1} = Y_3^k + μ^{k+1}(H^{k+1} − Z^{k+1}) (28)
step 3.7: check the iteration convergence conditions, i.e. whether the optimization variables satisfy:

||X − X_1 Z^{k+1} − E^{k+1}||_∞ < ε (29)
||Z^{k+1} − J^{k+1}||_∞ < ε (30)
||H^{k+1} − Z^{k+1}||_∞ < ε (31)

where ||·||_∞ denotes the maximum absolute value of the elements of a matrix, and ε is an error tolerance parameter set to ε = 10^{-4}.
If the three conditions are satisfied simultaneously, the iteration stops; the computed H, J, Z and E are the final solutions, and subsequent classification processing is performed. Otherwise, set the iteration number k = k + 1 and return to step 3.2 to continue updating the variables H, J, Z and E.
4. Sorting process
After the low-rank decomposition matrix Z of the hyperspectral image data has been solved, first use the test-set part Z_2 of the matrix to compute, for each column q, the sum of the elements belonging to class l, denoted s_q^(l), where l ∈ [1, c] and c is the number of classes of the hyperspectral data; then the class label of the test pixel y_q is obtained by:

label(y_q) = argmax_{l ∈ [1, c]} s_q^(l) (32)

where q = 1, … n_2 and n_2 is the number of pixels in the test set X_2. This step is very simple: only additions and a maximum operation are needed, and no other complex classifier is required to obtain the classification result.
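The label assignment can be sketched in numpy (names are illustrative; `train_labels` is assumed to hold the class index of each training pixel, i.e. of each row of Z_2):

```python
import numpy as np

def classify(Z2, train_labels, c):
    """For each test pixel (column q of the test part Z2 of the low-rank
    matrix), sum the representation coefficients over the training rows of
    each class and take the argmax; no separate classifier is needed."""
    scores = np.zeros((c, Z2.shape[1]))
    for l in range(c):
        scores[l] = Z2[train_labels == l].sum(axis=0)   # sum of class-l rows
    return scores.argmax(axis=0)                        # label per test pixel
```

A test pixel is assigned to the class whose training pixels contribute most to its low-rank representation.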
The implementation environment of this embodiment is a computer with an Intel Core i7-3770 3.40 GHz central processing unit, 32 GB of memory and a 64-bit Windows 7 operating system, with simulation performed in MATLAB R2015a software. The data used are two public hyperspectral datasets: Indian Pines (200 bands, each containing 145 × 145 pixels) and Pavia University (103 bands, each containing 610 × 340 pixels). On the Indian Pines dataset, 10% of the pixels are randomly selected as the training set and the remaining pixels form the test set; the parameter controlling the number of neighboring pixel points is set to σ = 0.8, the parameter balancing spectral and spatial features to m = 25, and the step-size control parameter to ρ = 1.2. On the Pavia University dataset, 5% of the pixels are randomly selected as the training set and the remaining pixels form the test set, with the same settings σ = 0.8, m = 25 and ρ = 1.2. Seven different algorithms are used to classify the two datasets; FIG. 3 and FIG. 4 show the classification results of the different algorithms on the Indian Pines and Pavia University datasets respectively, compared with the ground-truth map.
These algorithms include: the Support Vector Machine (SVM) algorithm; the Support Vector Machine with Composite Kernels (SVMCK) algorithm; the Joint Robust Sparse Representation Classification (JRSRC) algorithm; the Class-Dependent Sparse Representation Classifier (cdSRC) algorithm; the Low-Rank Representation (LRR) algorithm; the Low-Rank Group-Inspired Dictionary Learning (LGIDL) algorithm; and the Local and Structural constrained Low-Rank Representation (LSLRR) algorithm of the present invention.
The Overall Accuracy (OA) index is calculated to measure the accuracy of the hyperspectral image classification. The results are shown in Table 1; the overall accuracy of the method of the invention is the highest on both datasets, which illustrates the superiority of the invention for hyperspectral image classification.
TABLE 1
Claims (1)
1. A hyperspectral classification method based on local and structural constraint low-rank representation is characterized by comprising the following steps:
step 1: normalize the hyperspectral image data with the linear min-max normalization method to obtain the normalized hyperspectral image matrix X, where each column of X is the spectral vector of one pixel and every spectral reflectance value lies between 0 and 1;
step 2: based on the local constraint and the structure-preserving criterion, establish the following objective function of the local and structural constraint low-rank representation:

min_{Z,E} ||Z||_* + λ||E||_{2,1} + α||M ⊙ Z||_1 + β||Z − Q||_F^2, s.t. X = X_1 Z + E (1)

wherein Z is the low-rank decomposition matrix and E is the error matrix; λ is the regularization coefficient of the error term, λ ≥ 0; α is the regularization coefficient of the local constraint term, α ≥ 0; β is the regularization coefficient of the structural constraint term, β ≥ 0; M is a distance matrix and Q is a predefined matrix. The normalized hyperspectral image X can be divided into a training set and a test set, i.e. X = [X_1, X_2], where X_1 is the training set matrix and X_2 is the test set matrix; the training set consists of 5%-15% of the pixels selected from each class, and the test set consists of the remaining hyperspectral pixels. Q and Z can be divided in the same way, i.e. Q = [Q_1, Q_2] and Z = [Z_1, Z_2]. Each element of the distance matrix M is computed as M_ij = ||x_i − x_j||_2 + m·||l_i − l_j||_2, where x_i and x_j are the spectral vectors of the i-th and j-th pixels of the normalized hyperspectral image X, l_i and l_j are the spatial coordinate vectors of the i-th and j-th pixels, and m is a parameter balancing the spectral and spatial features, m ≥ 0, i = 1, …, n_1, j = 1, …, n; n_1 is the number of pixels in the training set X_1 and n is the total number of pixels in X. Each element of the predefined matrix Q is computed as Q_ij = exp(−M_ij/σ), where σ is a parameter controlling the number of neighboring pixel points, σ ≥ 0, i = 1, …, n_1, j = 1, …, n. ||·||_* is the nuclear norm of a matrix, i.e. the sum of all its singular values; ||·||_{2,1} is the L_{2,1} norm, computed as ||E||_{2,1} = Σ_{j=1}^{n} sqrt(Σ_{i=1}^{d} E_ij^2), where d is the dimension of the pixel spectral vector in the hyperspectral image; ||·||_1 is the L_1 norm of a matrix, i.e. the sum of the absolute values of all its elements; ||·||_F is the Frobenius norm of a matrix, i.e. the square root of the sum of squares of all its elements; ⊙ is the Hadamard operator, denoting elementwise multiplication of two matrices;
and step 3: introduce auxiliary variables H and J, and convert formula (1) with the augmented Lagrange multiplier method into:

L = ||J||_* + λ||E||_{2,1} + α||M ⊙ H||_1 + β||Z − Q||_F^2 + ⟨Y_1, X − X_1 Z − E⟩ + ⟨Y_2, Z − J⟩ + ⟨Y_3, H − Z⟩ + (μ/2)(||X − X_1 Z − E||_F^2 + ||Z − J||_F^2 + ||H − Z||_F^2) (2)

wherein ⟨A, B⟩ = trace(A^T B), trace denotes the trace of a matrix, μ is a penalty factor, μ > 0, and Y_1, Y_2 and Y_3 are Lagrange multipliers;
and then solve for the optimal H, J, Z and E with an alternating iterative update algorithm, specifically:
step 3.1: initialize λ = 20, α = 0.8, β = 0.6, Y_1^k = Y_2^k = Y_3^k = 0, H^k = J^k = Z^k = E^k = 0, μ^k = 10^{-6}, where the superscript k denotes the iteration number, initially k = 1;
step 3.2: fix J, Z and E, and update each element of H according to:

H_ij^{k+1} = Θ_{ω_ij}(Z_ij^k − Y_{3,ij}^k/μ^k) (3)

where Θ_ω(x) = max(x − ω, 0) + min(x + ω, 0), ω is the matrix with elements ω_ij = (α/μ^k) M_ij, i = 1, …, n_1, j = 1, …, n; Z_ij^k denotes the element in row i, column j of Z^k, and Y_{3,ij}^k the element in row i, column j of Y_3^k;
step 3.3: fix H, Z and E, and update J by singular value thresholding:

J^{k+1} = U Θ_{1/μ^k}(Σ) V^T (4)

where U Σ V^T is the singular value decomposition of Z^k + Y_2^k/μ^k and Θ_{1/μ^k} shrinks each singular value by 1/μ^k;
step 3.4: fix H, J and E, and update Z according to:

Z^{k+1} = B^{-1}(X_1^T A^k + J^k − Y_2^k/μ^k + C^k + (2β/μ^k) Q) (5)

wherein B = X_1^T X_1 + 2I + (2β/μ^k) I, I is the identity matrix, A^k = X − E^k + Y_1^k/μ^k, C^k = H^k + Y_3^k/μ^k, and X_1^T is the transpose of the training set matrix X_1;
step 3.5: fix H, J and Z, and update each column of E according to:

E_{:,j}^{k+1} = max(1 − λ/(μ^k ||G_{:,j}^k||_2), 0) · G_{:,j}^k, with G^k = X − X_1 Z^{k+1} + Y_1^k/μ^k (6)
step 3.6: update the penalty factor according to:

μ^{k+1} = min(ρ μ^k, max_μ) (7)

where max_μ is the upper bound of μ, set to max_μ = 10^10, and ρ is a step-size control parameter in the range 1 ≤ ρ ≤ 2;
then, the Lagrange multipliers are updated separately as follows:

Y_1^{k+1} = Y_1^k + μ^{k+1}(X − X_1 Z^{k+1} − E^{k+1}) (8)
Y_2^{k+1} = Y_2^k + μ^{k+1}(Z^{k+1} − J^{k+1}) (9)
Y_3^{k+1} = Y_3^k + μ^{k+1}(H^{k+1} − Z^{k+1}) (10)
step 3.7: if ||X − X_1 Z^{k+1} − E^{k+1}||_∞ < ε, ||Z^{k+1} − J^{k+1}||_∞ < ε and ||H^{k+1} − Z^{k+1}||_∞ < ε are satisfied simultaneously, stop the iteration; the H, J, Z and E computed at this point are the final solutions; otherwise, set the iteration number k = k + 1 and return to step 3.2; wherein ||·||_∞ denotes the maximum absolute value of the elements of a matrix, and ε is an error tolerance parameter set to ε = 10^{-4}.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810919458.5A | 2018-08-14 | 2018-08-14 | Hyperspectral classification method based on local and structural constraint low-rank representation
Publications (2)

Publication Number | Publication Date
---|---
CN109190511A | 2019-01-11
CN109190511B | 2021-04-20
Family
ID=64921261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810919458.5A Active CN109190511B (en) | 2018-08-14 | 2018-08-14 | Hyperspectral classification method based on local and structural constraint low-rank representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109190511B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110335201A (en) * | 2019-03-27 | 2019-10-15 | 浙江工业大学 | The high spectrum image denoising method restored in conjunction with Moreau enhancing TV and local low-rank matrix |
CN110599466B (en) * | 2019-08-29 | 2022-04-29 | 武汉大学 | Hyperspectral anomaly detection method for component projection optimization separation |
CN111161199B (en) * | 2019-12-13 | 2023-09-19 | 中国地质大学(武汉) | Space spectrum fusion hyperspectral image mixed pixel low-rank sparse decomposition method |
CN111079838B (en) * | 2019-12-15 | 2024-02-09 | 烟台大学 | Hyperspectral band selection method based on double-flow-line low-rank self-expression |
CN112560975B (en) * | 2020-12-23 | 2024-05-14 | 西北工业大学 | S-based1/2Hyperspectral anomaly detection method of norm low-rank representation model |
CN113409261B (en) * | 2021-06-13 | 2024-05-14 | 西北工业大学 | Hyperspectral anomaly detection method based on spatial spectrum feature joint constraint |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105513102A (en) * | 2015-12-15 | 2016-04-20 | 西安电子科技大学 | Hyper-spectral compression perception reconstruction method based on nonlocal total variation and low-rank sparsity |
CN107832790A (en) * | 2017-11-03 | 2018-03-23 | 南京农业大学 | A kind of semi-supervised hyperspectral image classification method based on local low-rank representation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10235600B2 (en) * | 2015-06-22 | 2019-03-19 | The Johns Hopkins University | System and method for structured low-rank matrix factorization: optimality, algorithm, and applications to image processing |
2018-08-14: Application CN201810919458.5A filed (CN); granted as CN109190511B; legal status: Active
Non-Patent Citations (2)
Title |
---|
Hyperspectral unmixing by reweighted low rank and total variation; Rui Wang et al.; IEEE; 2017-10-19; pp. 1-4 *
Hyperspectral image classification based on image segmentation and LSSVM; Chu Heng et al.; Modern Electronics Technique (现代电子技术); 2016-12-31; Vol. 39, No. 24; pp. 14-17, 21 *
Also Published As
Publication number | Publication date |
---|---|
CN109190511A (en) | 2019-01-11 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN109190511B (en) | Hyperspectral classification method based on local and structural constraint low-rank representation | |
CN110728224B (en) | Remote sensing image classification method based on attention-mechanism deep Contourlet network | |
CN108491849B (en) | Hyperspectral image classification method based on three-dimensional densely connected convolutional neural network | |
CN107316013B (en) | Hyperspectral image classification method based on NSCT (non-subsampled Contourlet transform) and DCNN (deep convolutional neural network) | |
WO2021003951A1 | Hyperspectral image classification method based on label-constrained elastic network graph model
Lin et al. | Hyperspectral image denoising via matrix factorization and deep prior regularization
CN109615008B (en) | Hyperspectral image classification method and system based on stacked broad learning | |
CN107563442B (en) | Hyperspectral image classification method based on sparse low-rank regular graph tensor embedding | |
CN107145836B (en) | Hyperspectral image classification method based on stacked boundary-identification autoencoder | |
CN108460391B (en) | Hyperspectral image unsupervised feature extraction method based on generative adversarial network | |
CN108734199B (en) | Hyperspectral image robust classification method based on segmented depth features and low-rank representation | |
CN112733659B (en) | Hyperspectral image classification method based on self-learning dual-stream multi-scale densely connected network | |
CN108229551B (en) | Hyperspectral remote sensing image classification method based on compact dictionary sparse representation | |
CN107545279B (en) | Image identification method based on convolutional neural network and weighted kernel feature analysis | |
CN112633386A (en) | SACVAEGAN-based hyperspectral image classification method | |
CN113139512B (en) | Deep network hyperspectral image classification method based on residuals and attention | |
CN113344045B (en) | Method for improving SAR ship classification precision by combining HOG features | |
CN112836671A (en) | Data dimension reduction method based on ratio maximization and linear discriminant analysis | |
CN115564996A (en) | Hyperspectral remote sensing image classification method based on attention union network | |
CN111680579B (en) | Remote sensing image classification method based on adaptive-weight multi-view metric learning | |
Ge et al. | Adaptive hash attention and lower triangular network for hyperspectral image classification
CN113052130B (en) | Hyperspectral image classification method based on deep residual network and edge-preserving filtering | |
CN107273919A (en) | Hyperspectral unsupervised classification method constructing class dictionaries based on confidence level | |
CN108460412B (en) | Image classification method based on subspace joint sparse low-rank structure learning | |
CN114511735A (en) | Hyperspectral image classification method and system based on cascaded spatial-spectral feature fusion and kernel extreme learning machine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |