CN115861683B - Rapid dimension reduction method for hyperspectral image - Google Patents

Rapid dimension reduction method for hyperspectral image

Info

Publication number: CN115861683B
Application number: CN202211432621.8A
Authority: CN (China)
Prior art keywords: matrix, node, nodes, hyperspectral, hyperspectral image
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN115861683A
Inventors: 苏远超, 白晋颖, 蒋梦莹, 李朋飞, 刘�英, 杨军, 郝希刘荣, 刘乐
Current Assignee: Xian University of Science and Technology (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Xian University of Science and Technology
Application filed by Xian University of Science and Technology

Landscapes

  • Image Analysis (AREA)
Abstract

The invention discloses a fast dimension reduction method for hyperspectral images, relating to the field of hyperspectral data processing, comprising the following steps: first, the hyperspectral image is converted into grid structure data according to adjacency; then the local correlation characteristics of the hyperspectral image are obtained through a learnable iterative filter; an undirected graph is established based on the grid structure data and the local correlation characteristics of the hyperspectral image; finally, similar vertices in the undirected graph are converged by a manifold geometry aggregation mechanism to obtain a low-dimensional hyperspectral image containing global correlation characteristics. The method not only reduces the data dimension but also preserves the correlation characteristics of ground objects from local to global; it lightens the storage burden, can improve the accuracy of ground-object classification, and meets the market demand for hyperspectral data applications that are otherwise limited by storage burden.

Description

Rapid dimension reduction method for hyperspectral image
Technical Field
The invention relates to the field of hyperspectral data processing, in particular to a rapid dimension reduction method for hyperspectral images.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
The hyperspectral sensor can acquire spatial details and spectral signals of the land surface, so hyperspectral remote-sensing images can well discriminate land-cover types; the high spectral resolution provides rich spectral information to distinguish different ground objects. However, the many spectral bands of a hyperspectral image also bring serious information redundancy, increase the computation and storage burden of a computer, and easily produce the Hughes phenomenon when the data are used for ground-object classification, adversely affecting classification accuracy. To alleviate these problems, dimension reduction of hyperspectral data is required: hyperspectral data typically contain hundreds to thousands of bands, which provide rich spectral information but are extremely inconvenient to store; in addition, the large number of bands carries a large amount of redundant information that adversely affects the accuracy of hyperspectral image classification.
Spectral-data dimension reduction is a data compression technique that can mine the bidirectional correlation between the spatial information and the spectral characteristics of ground objects. It is mainly applied to reduce the redundant information of hyperspectral data, lighten the storage burden of the data, prevent the curse of dimensionality in data processing, lower the memory requirement that hyperspectral data places on a computer, and improve the accuracy of ground-object classification.
At present, common dimension reduction methods comprise supervised and unsupervised dimension reduction. Supervised dimension reduction requires prior knowledge in advance and is therefore limited in practical use; unsupervised dimension reduction is highly automated and convenient to use, but causes greater information loss and generally cannot meet high-precision classification requirements.
Disclosure of Invention
The invention aims to solve the following problem: existing dimension reduction methods can reduce the data dimension and relieve the storage burden, but cause hyperspectral data to lose substantial spatial and spectral correlation information, seriously affecting the accuracy of ground-object classification performed with those data. A rapid dimension reduction method for hyperspectral images is therefore provided to solve these problems.
The technical scheme of the invention is as follows:
A fast dimension reduction method for hyperspectral images specifically comprises the following steps:
Step S1: converting the hyperspectral image into grid structure data according to adjacency;
Step S2: obtaining local correlation characteristics of the hyperspectral image through a learnable iterative filter; that is, in an undirected graph, a vertex aggregates the local characteristics of neighboring pixels, which provides an initial connection between pixels; to establish such vertices, the invention develops a learnable iterative filter which aggregates the neighboring spatial characteristics of each node through iterative learning;
Step S3: establishing an undirected graph based on the grid structure data and the local correlation characteristics of the hyperspectral image;
Step S4: converging similar vertices in the undirected graph by a manifold geometry aggregation mechanism to obtain a low-dimensional hyperspectral image containing global correlation characteristics.
Further, the step S1 includes:
converting the hyperspectral image into grid structure data according to the adjacency relation between pixels in the hyperspectral image;
wherein nodes in the grid structure data correspond to pixels in the hyperspectral image; i.e. the hyperspectral image can be regarded as grid structure data in which a node corresponds to a pixel.
Further, in step S1, all elements of the pixels are normalized to the range [0,1] to ensure reasonable spectral reflectance; i.e. all elements of the pixels are normalized to [0,1] before running the learnable iterative filter.
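As a minimal sketch of the normalization described above (assuming a simple global min-max rescaling; the patent does not fix the exact scheme, and the function name is illustrative):

```python
import numpy as np

def normalize_cube(cube):
    """Globally rescale every element of a hyperspectral cube into [0, 1].

    The patent only states that all pixel elements are normalized to
    [0, 1] before the learnable iterative filter runs; a global
    min-max rescaling is one straightforward way to do that."""
    lo, hi = cube.min(), cube.max()
    return (cube - lo) / (hi - lo)

# Toy cube: 4 x 4 pixels, 3 bands, raw radiance-like values.
cube = np.random.default_rng(0).uniform(50.0, 300.0, size=(4, 4, 3))
norm = normalize_cube(cube)
print(norm.min(), norm.max())  # 0.0 1.0
```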
Further, the step S2 includes:
Let a vector x_i ∈ R^d define a node;
set the initial node matrix X = [x_1, x_2, …, x_n]^T ∈ R^{n×d};
let (a_i, b_i) denote the coordinates of node x_i, and let w be the width of the neighborhood window; it should be noted that w is an odd hyperparameter;
regard node x_i as the center of the neighborhood window; its neighborhood Ω_i is defined as:
Ω_i = { x_j | a_i − r ≤ a_j ≤ a_i + r, b_i − r ≤ b_j ≤ b_i + r } (1)
wherein: the coordinate ranges [a_i − r, a_i + r] and [b_i − r, b_i + r], with r = (w − 1)/2, define the neighborhood nodes;
it should be noted that, for the filter, a large window collects many heterogeneous pixels, while a small window significantly increases the computational burden; relative to the central node x_i, the similarity of the nodes within the window is of primary concern, because neighboring pixels are more likely to belong to the same material than distant pixels; therefore a Gaussian kernel function is adopted to calculate the similarity between each node and the other nodes in its neighborhood window; meanwhile, the similarity value of nodes outside the window is defined as 0; because the window contains w² pixels, w² similarity values are obtained; the number of nodes outside the window is n − w², and their similarity value is 0; the number of bands is usually much smaller than the number of pixels, i.e. d ≪ n.
According to the similarity between the nodes, the nodes with local consistency are determined, so that the local correlation characteristics between pixels in the hyperspectral image are determined.
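The windowed Gaussian similarity described above can be sketched as follows; the helper name and the toy data are illustrative, and exp(−‖x_i − x_j‖²/(2σ²)) is the standard Gaussian kernel the text names:

```python
import numpy as np

def similarity_row(X, coords, i, w, sigma):
    """Sparse similarity vector s_i for node i: Gaussian kernel inside
    the w x w neighborhood window, 0 for all nodes outside it."""
    n = X.shape[0]
    r = (w - 1) // 2                      # w is an odd hyperparameter
    ai, bi = coords[i]
    s = np.zeros(n)
    for j in range(n):
        aj, bj = coords[j]
        if abs(aj - ai) <= r and abs(bj - bi) <= r:
            d2 = np.sum((X[i] - X[j]) ** 2)
            s[j] = np.exp(-d2 / (2 * sigma ** 2))
    return s

# 3 x 3 image, 2 bands: nodes listed row by row.
coords = [(a, b) for a in range(3) for b in range(3)]
X = np.random.default_rng(1).random((9, 2))
s = similarity_row(X, coords, i=4, w=3, sigma=0.5)  # center pixel
print(np.count_nonzero(s))  # 9 (every pixel lies in the 3x3 window)
```

For a corner pixel (i=0) the same window covers only 4 pixels, so the rest of the row stays zero, matching the n − w² zero entries described above.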
Further, the calculating the similarity between each node and other nodes in the adjacent window by using the gaussian kernel function includes:
The i-th node is extended to a sparse vector s_i ∈ R^n by using the index of the pixel; it is written as:
s_ij = exp(−‖x_i − x_j‖² / (2σ²)) if x_j ∈ Ω_i, otherwise s_ij = 0 (2)
wherein: exp(·) is the Gaussian kernel function, and σ is a hyperparameter determining the width of the Gaussian kernel;
the sparse vectors s_i represent the similarities between node x_i and all nodes; these vectors form a sparse matrix S = [s_1, s_2, …, s_n]^T ∈ R^{n×n}; wherein: the matrix S is a symmetric matrix, and its diagonal elements are 1;
the node matrix X is updated by means of the sparse matrix S.
Further, the node matrix X is updated by means of the sparse matrix S as follows:
X ← (S X) / (S 1) (3)
wherein: / represents the division of the corresponding elements of the two matrices (i.e. "dot division"), and 1 ∈ R^{n×d} represents an all-ones matrix (i.e. all elements of the matrix are 1).
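A minimal sketch of the update in equation (3); since each row of S1 equals the corresponding row sum of S, the update is a similarity-weighted average over each node's window (the function name is an assumption):

```python
import numpy as np

def filter_step(S, X):
    """One learnable-filter iteration: X <- (S X) / (S 1), where "/" is
    element-wise division and 1 is an all-ones matrix the size of X.
    Each row of X becomes a similarity-weighted average of its window."""
    ones = np.ones_like(X)
    return (S @ X) / (S @ ones)

S = np.array([[1.0, 0.5], [0.5, 1.0]])   # symmetric, unit diagonal
X = np.array([[0.0, 2.0], [3.0, 1.0]])
X_new = filter_step(S, X)
print(X_new)  # the two rows are pulled toward each other
```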
Further, the step S3 includes:
taking the nodes with local consistency as the vertices of the undirected graph, thereby completing construction of the undirected graph; preferably, the center node x_i of each window aggregates spatial information from neighboring pixels by means of the filtering; although a finite difference may be used to detect convergence and avoid over-smoothing, the first-order difference faces a problem: if all non-zero elements of S are very close to 1, their change between iterations may be small; to avoid this phenomenon, the invention uses a second-order difference for this task; as the iteration proceeds, the change of the non-zero elements in the sparse matrix S becomes dynamically stable; in terms of similarity, nodes of the same class have higher consistency, and thus the nodes with local consistency form the vertices of the undirected graph.
In the invention, specifically, the objective function of the manifold geometry aggregation mechanism in step S4 is:
max_B J(B) = tr(Bᵀ E B), s.t. Bᵀ B = I (4)
wherein: J(B) represents the objective function, E = D^{−1/2} W D^{−1/2} represents the regularized similarity matrix, W is the similarity (connection) matrix, D is the degree matrix of W, and matrix B consists of the eigenvectors corresponding to the first c largest eigenvalues of matrix E;
the objective function is obtained as follows:
the initial objective function of the manifold geometry aggregation mechanism is:
min_B tr(Bᵀ (D − W) B), s.t. Bᵀ B = I (5)
through the initial objective function, an optimal solution of B can be obtained in theory;
let E = D^{−1/2} W D^{−1/2} be the regularized similarity matrix, where D is the degree matrix of W, D_ii = Σ_j w_ij; the initial objective function is then converted into the form of (4);
the connection weight between the nodes is calculated by using a linear kernel function, and the process can be expressed as:
W = X Xᵀ (6)
wherein: W is the connection matrix used for representing the weights, and H = D^{−1/2} X is the transformation matrix of X, so that E = D^{−1/2} W D^{−1/2} = H Hᵀ; at this time, the eigendecomposition of E is equivalent to the singular value decomposition of H, H = U Σ Vᵀ; Σ is the singular-value matrix whose diagonal elements are non-negative real numbers, and U ∈ R^{n×n} and V ∈ R^{d×d} are two orthogonal matrices;
then E can be expressed as E = U Λ Uᵀ, wherein Λ = Σ² is a diagonal matrix whose first c diagonal entries are the c largest eigenvalues of E;
at this time, the undirected graph completes the cut, and the columns of U are the eigenvectors of E;
then, the first c largest eigenvalues of Λ are extracted together with the corresponding columns of U, and these eigenvectors are combined into the matrix B (i.e. B = U_c); at this time, matrix B is the data after preliminary dimension reduction.
Furthermore, the mechanism can be accelerated to further improve operational efficiency:
in step S4, before the connection weights between the nodes are calculated with the linear kernel function, k nodes are randomly selected to construct anchor points, with k much smaller than the number of nodes n;
a linear kernel function is adopted to obtain the link matrix Z ∈ R^{n×k} from the nodes to the anchors; then W can be expressed as:
W ≈ Z Zᵀ, Z = X Aᵀ (7)
wherein: A ∈ R^{k×d} represents the anchor matrix, each row vector of which represents an anchor node, i.e. z_pq = x_pᵀ a_q represents the link from node p to anchor q, p = 1, …, n, q = 1, …, k;
using Z Zᵀ instead of X Xᵀ in equation (6), the expression of E becomes E = D^{−1/2} Z Zᵀ D^{−1/2};
let H′ = D^{−1/2} Z be the transformation matrix of Z;
performing on H′ the same singular value decomposition as on H finally yields the matrix B.
Through these steps, the requirements of big-data applications are better met: the operational efficiency of the equipment is further improved, the running time is shortened, and the memory consumption of the computer during dimension reduction is further reduced; the efficiency gain and memory saving come with a certain information loss, so this acceleration is used only for processing large-scale hyperspectral data.
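The anchor-based acceleration can be sketched as follows; choosing anchors by uniform random sampling and the values k = 20, c = 4 are illustrative assumptions:

```python
import numpy as np

def anchor_embedding(X, k, c, seed=0):
    """Anchor-accelerated variant: W ~ Z Z^T with Z = X A^T, where A holds
    k randomly chosen nodes; the SVD then runs on the n x k matrix
    H' = D^{-1/2} Z instead of the full connection matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    A = X[rng.choice(n, size=k, replace=False)]  # anchor matrix, k x d
    Z = X @ A.T                                  # link matrix, n x k
    deg = Z @ (Z.T @ np.ones(n))                 # row sums of W = Z Z^T, never formed
    H = Z / np.sqrt(deg)[:, None]
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :c]

X = np.random.default_rng(3).random((200, 10)) + 0.1   # strictly positive features
B = anchor_embedding(X, k=20, c=4)
print(B.shape)  # (200, 4)
```

Computing the degrees as Z (Zᵀ 1) keeps the cost at O(nk) rather than the O(n²) of forming W explicitly, which is the point of the acceleration.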
Further, in practical engineering applications, singular value decomposition may produce some outliers; in order to enhance the robustness of the mechanism, in step S4 all elements of the matrix B are normalized to [0,1] by the following formula:
β̃_m = (β_m − β_min) / (β_max − β_min) (8)
wherein: β is a column of B, β_m is the m-th element of β, and β_min and β_max are respectively the minimum and maximum values of β;
at this time, β̃ is the aggregated characteristic after normalization, and the matrix B̃ is the reduced-dimension data, containing both local and global correlations.
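A minimal sketch of the normalization in equation (8), under the assumption that β ranges over the columns of B:

```python
import numpy as np

def normalize_embedding(B):
    """Equation (8): min-max normalize each column (eigenvector) of B
    into [0, 1] to damp outliers produced by the SVD.
    (Reading beta as a column of B is an assumption; the patent text
    could also be read per-row.)"""
    lo = B.min(axis=0)
    hi = B.max(axis=0)
    return (B - lo) / (hi - lo)

B = np.array([[0.2, -3.0], [0.6, 1.0], [1.0, 5.0]])
print(normalize_embedding(B))  # each column now spans exactly [0, 1]
```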
Compared with the prior art, the invention has the beneficial effects that:
1. A fast dimension reduction method for hyperspectral images, comprising: Step S1: converting the hyperspectral image into grid structure data according to adjacency; Step S2: obtaining local correlation characteristics of the hyperspectral image through a learnable iterative filter; Step S3: establishing an undirected graph based on the grid structure data and the local correlation characteristics of the hyperspectral image; Step S4: converging similar vertices in the undirected graph by a manifold geometry aggregation mechanism to obtain a low-dimensional hyperspectral image containing global correlation characteristics. Through this dimension reduction method, although the data dimension is reduced, the correlation characteristics of the ground objects are maintained from local to global; the data dimension is reduced, the storage burden is lightened, the accuracy of ground-object classification can be improved, and the market demand for hyperspectral data applications limited by storage burden is met.
2. The rapid dimension reduction method aggregates the correlations of all nodes in a fully connected manner, so the original spatial and spectral correlation information of the pixels is preserved after dimension reduction. Training of the learnable iterative filter does not involve a deep network, and global correlation aggregation is realized by eigendecomposition of the undirected graph, so the method runs fast and has a wide application range. The invention can reduce the dimension of the hyperspectral image without providing prior knowledge in advance, and better preserves the correlation between pixels; after dimension reduction of the hyperspectral data, the computational load of subsequent classification tasks is reduced, and the accuracy of ground-object classification can be further improved.
Drawings
FIG. 1 is a block flow diagram of a fast dimension reduction method for hyperspectral images;
FIG. 2 is a schematic diagram of the technical architecture and implementation principle of the learnable iterative filter;
FIG. 3 is a schematic diagram of a manifold geometry aggregation mechanism implementing dimension reduction.
Detailed Description
It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with examples.
Example 1
Spectral-data dimension reduction is a data compression technique that can mine the bidirectional correlation between the spatial information and the spectral characteristics of ground objects. It is mainly applied to reduce the redundant information of hyperspectral data, lighten the storage burden of the data, prevent the curse of dimensionality in data processing, lower the memory requirement that hyperspectral data places on a computer, and improve the accuracy of ground-object classification.
At present, common dimension reduction methods comprise supervised and unsupervised dimension reduction. Supervised dimension reduction requires prior knowledge in advance and is therefore limited in practical use; unsupervised dimension reduction is highly automated and convenient to use, but causes greater information loss and generally cannot meet high-precision classification requirements.
To address the above problems, this embodiment provides a rapid dimension reduction method for hyperspectral images: the hyperspectral image is regarded as an undirected fully connected graph, and correlation convergence is realized by cutting the undirected graph; at the same time, a novel aggregation mechanism for the spatial-spectral correlation of ground objects is produced based on the rapid dimension reduction method.
Specifically, starting from Euclidean distance and manifold geometry theory, correlation features from local to global are aggregated.
First, a novel learnable iterative filter is designed that can adaptively obtain local correlation characteristics between pixels in Euclidean space; that is, the vertices of the undirected graph are optimized by this filter: each node corresponds to a pixel and, as a center, aggregates the local correlation characteristics of its neighboring pixels; nodes with the same attribute form a vertex, and all nodes and vertices are updated through kernel-based iterative learning. As the nodes are updated, the consistency between neighboring pixels increases, and their similarity relationship gradually stabilizes; meanwhile, through the kernel-based learning method, the new filter can adaptively calculate the aggregation weights between the central pixel and its neighboring pixels.
Then, the global correlation characteristics between pixels are acquired by the manifold geometry aggregation mechanism provided in this embodiment. After the processing of this embodiment, although the data dimension is reduced, the correlation characteristics of the ground objects are maintained from local to global; the data dimension is reduced, the storage burden is lightened, the accuracy of ground-object classification can be improved, and the market demand for hyperspectral data applications limited by storage burden is met.
Referring to fig. 1-3, a fast dimension reduction method for hyperspectral images specifically includes:
Step S1: converting the hyperspectral image into grid structure data according to adjacency;
Step S2: obtaining local correlation characteristics of the hyperspectral image through a learnable iterative filter; that is, in an undirected graph, a vertex aggregates the local characteristics of neighboring pixels, which provides an initial connection between pixels; to establish such vertices, this embodiment develops a learnable iterative filter that aggregates the neighboring spatial characteristics of each node through iterative learning;
Step S3: establishing an undirected graph based on the grid structure data and the local correlation characteristics of the hyperspectral image;
Step S4: converging similar vertices in the undirected graph by a manifold geometry aggregation mechanism to obtain a low-dimensional hyperspectral image containing global correlation characteristics.
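The four steps above can be sketched end to end on a toy cube as follows; the window width, kernel width, iteration count, and target dimension are all illustrative assumptions:

```python
import numpy as np

def reduce_hyperspectral(cube, w=3, sigma=0.5, iters=3, c=3):
    """End-to-end sketch of steps S1-S4 on a toy cube (rows x cols x bands).
    All parameter choices here are illustrative assumptions."""
    rows, cols, bands = cube.shape
    # S1: grid structure data -- one node per pixel, normalized to [0, 1].
    X = cube.reshape(-1, bands).astype(float)
    X = (X - X.min()) / (X.max() - X.min())
    coords = np.array([(a, b) for a in range(rows) for b in range(cols)])
    # S2: learnable iterative filter (windowed Gaussian similarity).
    r = (w - 1) // 2
    inside = (np.abs(coords[:, None, 0] - coords[None, :, 0]) <= r) & \
             (np.abs(coords[:, None, 1] - coords[None, :, 1]) <= r)
    for _ in range(iters):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        S = np.where(inside, np.exp(-d2 / (2 * sigma ** 2)), 0.0)
        X = (S @ X) / (S @ np.ones_like(X))          # equation (3)
    # S3/S4: undirected graph plus manifold geometry aggregation via SVD.
    deg = (X @ X.T).sum(axis=1)
    H = X / np.sqrt(deg)[:, None]                    # E = H H^T
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    B = U[:, :c]
    lo, hi = B.min(axis=0), B.max(axis=0)
    return (B - lo) / (hi - lo)                      # equation (8)

cube = np.random.default_rng(4).random((5, 5, 8))
Y = reduce_hyperspectral(cube, c=3)
print(Y.shape)  # (25, 3)
```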
In this embodiment, specifically, the step S1 includes:
converting the hyperspectral image into grid structure data according to the adjacency relation between pixels in the hyperspectral image;
wherein nodes in the grid structure data correspond to pixels in the hyperspectral image; i.e. the hyperspectral image can be regarded as grid structure data in which a node corresponds to a pixel.
In this embodiment, specifically, in step S1, all elements of the pixels are normalized to the range [0,1] to ensure reasonable spectral reflectance; i.e. all elements of the pixels are normalized to [0,1] before running the learnable iterative filter.
In this embodiment, specifically, the step S2 includes:
Let a vector x_i ∈ R^d define a node;
set the initial node matrix X = [x_1, x_2, …, x_n]^T ∈ R^{n×d};
let (a_i, b_i) denote the coordinates of node x_i, and let w be the width of the neighborhood window; it should be noted that w is an odd hyperparameter;
regard node x_i as the center of the neighborhood window; its neighborhood Ω_i is defined as:
Ω_i = { x_j | a_i − r ≤ a_j ≤ a_i + r, b_i − r ≤ b_j ≤ b_i + r } (1)
wherein: the coordinate ranges [a_i − r, a_i + r] and [b_i − r, b_i + r], with r = (w − 1)/2, define the neighborhood nodes;
it should be noted that, for the filter, a large window collects many heterogeneous pixels, while a small window significantly increases the computational burden; relative to the central node x_i, the similarity of the nodes within the window is of primary concern, because neighboring pixels are more likely to belong to the same material than distant pixels; therefore a Gaussian kernel function is adopted to calculate the similarity between each node and the other nodes in its neighborhood window; meanwhile, the similarity value of nodes outside the window is defined as 0; because the window contains w² pixels, w² similarity values are obtained; the number of nodes outside the window is n − w², and their similarity value is 0; the number of bands is usually much smaller than the number of pixels, i.e. d ≪ n.
According to the similarity between the nodes, the nodes with local consistency are determined, so that the local correlation characteristics between pixels in the hyperspectral image are determined.
In this embodiment, specifically, the calculating, using a gaussian kernel function, the similarity between each node in the adjacent window and other nodes includes:
The i-th node is extended to a sparse vector s_i ∈ R^n by using the index of the pixel; it is written as:
s_ij = exp(−‖x_i − x_j‖² / (2σ²)) if x_j ∈ Ω_i, otherwise s_ij = 0 (2)
wherein: exp(·) is the Gaussian kernel function, and σ is a hyperparameter determining the width of the Gaussian kernel;
the sparse vectors s_i represent the similarities between node x_i and all nodes; these vectors form a sparse matrix S = [s_1, s_2, …, s_n]^T ∈ R^{n×n}; wherein: the matrix S is a symmetric matrix, and its diagonal elements are 1;
the node matrix X is updated by means of the sparse matrix S.
In this embodiment, specifically, the node matrix X is updated by means of the sparse matrix S as follows:
X ← (S X) / (S 1) (3)
wherein: / represents the division of the corresponding elements of the two matrices (i.e. "dot division"), and 1 ∈ R^{n×d} represents an all-ones matrix (i.e. all elements of the matrix are 1).
In this embodiment, specifically, the step S3 includes:
taking the nodes with local consistency as the vertices of the undirected graph, thereby completing construction of the undirected graph; preferably, the center node x_i of each window aggregates spatial information from neighboring pixels by means of the filtering; although a finite difference may be used to detect convergence and avoid over-smoothing, the first-order difference faces a problem: if all non-zero elements of S are very close to 1, their change between iterations may be small; to avoid this phenomenon, this embodiment uses a second-order difference for this task; as the iteration proceeds, the change of the non-zero elements in the sparse matrix S becomes dynamically stable; in terms of similarity, nodes of the same class have higher consistency, and thus the nodes with local consistency form the vertices of the undirected graph.
In this embodiment, specifically, the objective function of the manifold geometry aggregation mechanism in step S4 is:
max_B J(B) = tr(Bᵀ E B), s.t. Bᵀ B = I (4)
wherein: J(B) represents the objective function, E = D^{−1/2} W D^{−1/2} represents the regularized similarity matrix, W is the similarity (connection) matrix, D is the degree matrix of W, and matrix B consists of the eigenvectors corresponding to the first c largest eigenvalues of matrix E;
the objective function is obtained as follows:
the initial objective function of the manifold geometry aggregation mechanism is:
min_B tr(Bᵀ (D − W) B), s.t. Bᵀ B = I (5)
through the initial objective function, an optimal solution of B can be obtained in theory;
let E = D^{−1/2} W D^{−1/2} be the regularized similarity matrix, where D is the degree matrix of W, D_ii = Σ_j w_ij; the initial objective function is then converted into the form of (4);
the connection weight between the nodes is calculated by using a linear kernel function, and the process can be expressed as:
W = X Xᵀ (6)
wherein: W is the connection matrix used for representing the weights, and H = D^{−1/2} X is the transformation matrix of X, so that E = D^{−1/2} W D^{−1/2} = H Hᵀ; at this time, the eigendecomposition of E is equivalent to the singular value decomposition of H, H = U Σ Vᵀ; Σ is the singular-value matrix whose diagonal elements are non-negative real numbers, and U ∈ R^{n×n} and V ∈ R^{d×d} are two orthogonal matrices;
then E can be expressed as E = U Λ Uᵀ, wherein Λ = Σ² is a diagonal matrix whose first c diagonal entries are the c largest eigenvalues of E;
at this time, the undirected graph completes the cut, and the columns of U are the eigenvectors of E;
then, the first c largest eigenvalues of Λ are extracted together with the corresponding columns of U, and these eigenvectors are combined into the matrix B (i.e. B = U_c); at this time, matrix B is the data after preliminary dimension reduction; preferably, the dimension-reduced data may provide training samples for the classifier.
In this embodiment, specifically, the above mechanism can be accelerated to further improve operational efficiency:
in step S4, before the connection weights between the nodes are calculated with the linear kernel function, k nodes are randomly selected to construct anchor points, with k much smaller than the number of nodes n;
a linear kernel function is adopted to obtain the link matrix Z ∈ R^{n×k} from the nodes to the anchors; then W can be expressed as:
W ≈ Z Zᵀ, Z = X Aᵀ (7)
wherein: A ∈ R^{k×d} represents the anchor matrix, each row vector of which represents an anchor node, i.e. z_pq = x_pᵀ a_q represents the link from node p to anchor q, p = 1, …, n, q = 1, …, k;
using Z Zᵀ instead of X Xᵀ in equation (6), the expression of E becomes E = D^{−1/2} Z Zᵀ D^{−1/2};
let H′ = D^{−1/2} Z be the transformation matrix of Z;
performing on H′ the same singular value decomposition as on H finally yields the matrix B.
Through these steps, the requirements of big-data applications are better met: the operational efficiency of the equipment is further improved, the running time is shortened, and the memory consumption of the computer during dimension reduction is further reduced; the efficiency gain and memory saving come with a certain information loss, so this acceleration is used only for processing large-scale hyperspectral data.
In this embodiment, in particular, in practical engineering applications, singular value decomposition may produce some outliers; in order to enhance the robustness of the mechanism, in step S4 all elements of the matrix B are normalized to [0,1] by the following formula:
β̃_m = (β_m − β_min) / (β_max − β_min) (8)
wherein: β is a column of B, β_m is the m-th element of β, and β_min and β_max are respectively the minimum and maximum values of β;
at this time, β̃ is the aggregated characteristic after normalization, and the matrix B̃ is the reduced-dimension data, containing both local and global correlations.
The foregoing examples merely represent specific embodiments of the present application; they are described in some detail but are not to be construed as limiting the scope of the application. It should be noted that, for those skilled in the art, several variations and modifications can be made without departing from the technical solution of the present application, and these fall within the protection scope of the present application.
This background section is provided to generally present the context of the present invention. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.

Claims (4)

1. A method for fast dimension reduction for hyperspectral images, comprising:
step S1: converting the hyperspectral image into grid structure data according to adjacency;
step S2: obtaining local correlation characteristics of the hyperspectral image through a learnable iterative filter;
step S3: establishing an undirected graph based on the grid structure data and the local correlation characteristics of the hyperspectral image;
step S4: converging similar vertices in the undirected graph by a manifold geometry aggregation mechanism to obtain a low-dimensional hyperspectral image containing global correlation characteristics;
the step S2 includes:
a vector x_i ∈ R^d is set to define a node;
an initial node matrix X = [x_1, x_2, …, x_n]^T ∈ R^{n×d} is set;
(a_i, b_i) represents the coordinates of node x_i, and w is the width of the neighborhood window;
node x_i is regarded as the center of the neighborhood window, and its neighborhood Ω_i is defined as:
Ω_i = { x_j | a_i − r ≤ a_j ≤ a_i + r, b_i − r ≤ b_j ≤ b_i + r }
wherein: the coordinate ranges [a_i − r, a_i + r] and [b_i − r, b_i + r], with r = (w − 1)/2, define the neighborhood nodes;
Calculating the similarity between each node and other nodes in the adjacent windows by adopting a Gaussian kernel function; meanwhile, defining the similarity value of the nodes outside the window as 0;
according to the similarity between the nodes, determining the nodes with local consistency, thereby determining local correlation characteristics between pixels in the hyperspectral image;
the step of calculating, by the Gaussian kernel function, the similarity between each node and the other nodes within its neighborhood window comprises:
first, the i-th node is extended to a sparse vector s_i by using the pixel indices, written as:
s_i(j) = K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2)) for x_j ∈ Ω_i, and s_i(j) = 0 otherwise
wherein K(·,·) is a Gaussian kernel function and σ is the hyper-parameter determining the Gaussian kernel;
the sparse vectors s_i represent the similarities between node x_i and all nodes; these vectors form a sparse matrix S = [s_1, s_2, ..., s_n], S ∈ R^{n×n}; wherein the matrix S is a symmetric matrix whose diagonal elements are 1;
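The windowed Gaussian similarity above can be sketched as follows; a minimal illustration, assuming a dense matrix for readability, with the helper name `window_similarity` and the defaults `w=3`, `sigma=1.0` being illustrative choices rather than values from the patent:

```python
import numpy as np

def window_similarity(Q, coords, w=3, sigma=1.0):
    """Gaussian-kernel similarity restricted to a (w x w) spatial window.

    Q      : (n, d) node matrix, one pixel spectrum per row
    coords : (n, 2) integer (row, col) position of each node
    Nodes outside each other's window get similarity 0, as in the claim.
    """
    n = Q.shape[0]
    S = np.zeros((n, n))
    r = (w - 1) // 2
    for i in range(n):
        # neighborhood: nodes whose coordinates fall inside the window of x_i
        inside = np.all(np.abs(coords - coords[i]) <= r, axis=1)
        d2 = np.sum((Q[inside] - Q[i]) ** 2, axis=1)
        S[i, inside] = np.exp(-d2 / (2.0 * sigma ** 2))
    return S
```

Because window membership and the kernel are both symmetric, the resulting matrix is symmetric with unit diagonal, matching the claim.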
the node matrix Q is updated by a weighted summation over the sparse matrix S;
the updating of the node matrix Q by the weighted summation over the sparse matrix S comprises:
Q ← (S Q) / (S J)
wherein / denotes the element-wise division of the corresponding entries of the two matrices and J denotes the all-ones matrix;
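A minimal sketch of the update Q ← (S Q) / (S J) above; the fixed iteration count `n_iter` is a hypothetical parameter standing in for the patent's learnable iterative filter:

```python
import numpy as np

def iterative_filter(Q, S, n_iter=3):
    """Repeatedly replace each node by the similarity-weighted mean of its
    window: Q <- (S Q) / (S J), with J the all-ones matrix.

    Since every column of S J equals the row sums of S, the element-wise
    division reduces to dividing each row of S Q by that row's sum.
    """
    row_sum = S.sum(axis=1, keepdims=True)  # equals any column of S J
    for _ in range(n_iter):
        Q = (S @ Q) / row_sum
    return Q
```

With S equal to the identity the filter is a no-op; with a uniform S it converges to the global mean in one step, which makes the smoothing behavior easy to check.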
the objective function of the manifold geometric aggregation mechanism in step S4 is:
max_B tr(B^T E B) s.t. B^T B = I
wherein tr(B^T E B) is the objective function, E = D^{-1/2} W D^{-1/2} is the regularized similarity matrix, D is the degree matrix of W, and the matrix B consists of the eigenvectors corresponding to the first c largest eigenvalues of the matrix E;
the connection weight between the nodes is calculated by a linear kernel function, and the process can be expressed as:
W = Q Q^T
wherein W ∈ R^{n×n} is the connection matrix representing the weights, and Q^T is the transpose of Q; at this time, computing E is equivalent to performing the singular value decomposition:
D^{-1/2} Q = U Σ V^T
wherein Σ is the singular value matrix whose diagonal consists of non-negative real numbers, and U and V are two orthogonal matrices, U ∈ R^{n×n}, V ∈ R^{d×d};
then E can be expressed as E = U Λ U^T, wherein Λ = Σ Σ^T is a diagonal matrix whose leading entries λ_1 ≥ λ_2 ≥ ... ≥ λ_c are the c largest eigenvalues of E;
at this time, the cut of the undirected graph is completed, and the columns of U are the eigenvectors of E;
then the first c largest eigenvalues of Λ are extracted, and the corresponding eigenvectors are extracted from U to form the matrix B ∈ R^{n×c}, which is the data after preliminary dimension reduction;
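The SVD route above (extracting the top-c eigenvectors of E = D^{-1/2} Q Q^T D^{-1/2} without ever forming the n×n matrix E) can be sketched as follows; a minimal illustration assuming strictly positive spectra so all degrees are nonzero, with `top_c_embedding` an illustrative name:

```python
import numpy as np

def top_c_embedding(Q, c):
    """Top-c eigenvectors of E = D^{-1/2} Q Q^T D^{-1/2} via SVD of M = D^{-1/2} Q.

    The left singular vectors of M are the eigenvectors of E = M M^T, and the
    squared singular values are its eigenvalues, so the decomposition costs
    O(n d^2) instead of O(n^3).
    """
    deg = Q @ Q.sum(axis=0)              # degrees: row sums of W = Q Q^T
    M = Q / np.sqrt(deg)[:, None]        # D^{-1/2} Q
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :c], s[:c] ** 2          # B (n x c) and the c largest eigenvalues
```

Note that `np.linalg.svd` returns singular values in descending order, so the first c columns of U are exactly the eigenvectors sought by the claim.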
in step S4, before the connection weights between the nodes are calculated by the linear kernel function, k nodes are randomly selected to construct anchor points, the value of k satisfying:
c < k ≪ n
a link matrix Z from the nodes to the anchor points is obtained by the linear kernel function, and Z can be expressed as:
Z = Q A^T
wherein A denotes the anchor matrix, each row vector of which represents one anchor node, i.e., A = [a_1, a_2, ..., a_k]^T, a_j denoting an anchor point; Z ∈ R^{n×k};
Z is used in place of Q, and the expression of W becomes W = Z Z^T;
D^{-1/2} Z is used in place of D^{-1/2} Q, wherein Z^T is the transpose of Z;
the same singular value decomposition as performed on D^{-1/2} Q is performed on D^{-1/2} Z, finally obtaining the matrix B;
In step S4, all elements of the matrix B are normalized to [0,1] by the following formula:
b'_m = (b_m - b_min) / (b_max - b_min)
wherein b'_m is the m-th element of the normalized matrix B', b_m is the m-th element of B, and b_min and b_max are respectively the minimum and maximum of B.
2. A method of fast dimension reduction for hyperspectral images as claimed in claim 1 wherein step S1 comprises:
converting the hyperspectral image into network structure data according to the adjacent relation between pixels in the hyperspectral image;
wherein nodes in the network structure data correspond to pixels in the hyperspectral image.
3. A fast dimension reduction method for hyperspectral images as claimed in claim 2, wherein in step S1, all elements of the pixels are normalized to the range [0,1] to ensure reasonable spectral reflectance values.
4. A method of fast dimension reduction for hyperspectral images as claimed in claim 1 wherein step S3 comprises:
taking the nodes with local consistency as the vertices of the undirected graph to complete the construction of the undirected graph.
CN202211432621.8A 2022-11-16 2022-11-16 Rapid dimension reduction method for hyperspectral image Active CN115861683B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211432621.8A CN115861683B (en) 2022-11-16 2022-11-16 Rapid dimension reduction method for hyperspectral image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211432621.8A CN115861683B (en) 2022-11-16 2022-11-16 Rapid dimension reduction method for hyperspectral image

Publications (2)

Publication Number Publication Date
CN115861683A CN115861683A (en) 2023-03-28
CN115861683B true CN115861683B (en) 2024-01-16

Family

ID=85663659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211432621.8A Active CN115861683B (en) 2022-11-16 2022-11-16 Rapid dimension reduction method for hyperspectral image

Country Status (1)

Country Link
CN (1) CN115861683B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778885A (en) * 2016-12-26 2017-05-31 重庆大学 Hyperspectral image classification method based on local manifolds insertion
CN108520281A (en) * 2018-04-13 2018-09-11 上海海洋大学 A kind of semi-supervised dimension reduction method of high spectrum image kept based on overall situation and partial situation
CN110298414A (en) * 2019-07-09 2019-10-01 西安电子科技大学 Hyperspectral image classification method based on denoising combination dimensionality reduction and guiding filtering
CN111860612A (en) * 2020-06-29 2020-10-30 西南电子技术研究所(中国电子科技集团公司第十研究所) Unsupervised hyperspectral image hidden low-rank projection learning feature extraction method
CN112529865A (en) * 2020-12-08 2021-03-19 西安科技大学 Mixed pixel bilinear deep layer de-mixing method, system, application and storage medium
CN113920345A (en) * 2021-09-09 2022-01-11 中国地质大学(武汉) Hyperspectral image dimension reduction method based on clustering multi-manifold measure learning
WO2022178977A1 (en) * 2021-02-26 2022-09-01 西北工业大学 Unsupervised data dimensionality reduction method based on adaptive nearest neighbor graph embedding

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7961957B2 (en) * 2007-01-30 2011-06-14 Alon Schclar Diffusion bases methods for segmentation and clustering

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778885A (en) * 2016-12-26 2017-05-31 重庆大学 Hyperspectral image classification method based on local manifolds insertion
CN108520281A (en) * 2018-04-13 2018-09-11 上海海洋大学 A kind of semi-supervised dimension reduction method of high spectrum image kept based on overall situation and partial situation
CN110298414A (en) * 2019-07-09 2019-10-01 西安电子科技大学 Hyperspectral image classification method based on denoising combination dimensionality reduction and guiding filtering
CN111860612A (en) * 2020-06-29 2020-10-30 西南电子技术研究所(中国电子科技集团公司第十研究所) Unsupervised hyperspectral image hidden low-rank projection learning feature extraction method
WO2022001159A1 (en) * 2020-06-29 2022-01-06 西南电子技术研究所(中国电子科技集团公司第十研究所) Latent low-rank projection learning based unsupervised feature extraction method for hyperspectral image
CN112529865A (en) * 2020-12-08 2021-03-19 西安科技大学 Mixed pixel bilinear deep layer de-mixing method, system, application and storage medium
WO2022178977A1 (en) * 2021-02-26 2022-09-01 西北工业大学 Unsupervised data dimensionality reduction method based on adaptive nearest neighbor graph embedding
CN113920345A (en) * 2021-09-09 2022-01-11 中国地质大学(武汉) Hyperspectral image dimension reduction method based on clustering multi-manifold measure learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Graph-Cut-Based Node Embedding for Dimensionality Reduction and Classification of Hyperspectral Remote Sensing Images;Yuanchao Su, et al.;IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium;1720-1723 *
A new dimensionality reduction algorithm for hyperspectral images based on manifold learning; Pu Hanye, Wang Bin, Zhang Liming; Infrared and Laser Engineering (01); 238-243 *
Feature extraction of hyperspectral data via spatially consistent neighborhood-preserving embedding; Wei Feng, He Mingyi, Mei Shaohui; Infrared and Laser Engineering (05); 143-148 *
Spatially weighted isolation forest for anomaly target detection in hyperspectral imagery; Su Yuanchao, et al.; Science of Surveying and Mapping; 92-98 *

Also Published As

Publication number Publication date
CN115861683A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
Fu et al. Hyperspectral anomaly detection via deep plug-and-play denoising CNN regularization
Sun et al. Low-rank and sparse matrix decomposition-based anomaly detection for hyperspectral imagery
US9600860B2 (en) Method and device for performing super-resolution on an input image
WO2022178977A1 (en) Unsupervised data dimensionality reduction method based on adaptive nearest neighbor graph embedding
Fang et al. Infrared small UAV target detection based on residual image prediction via global and local dilated residual networks
JP2019528500A (en) System and method for processing an input point cloud having points
Wei et al. An overview on linear unmixing of hyperspectral data
CN109598750B (en) Large-scale difference image feature point matching method based on deformation space pyramid
CN113344103B (en) Hyperspectral remote sensing image ground object classification method based on hypergraph convolution neural network
Guo et al. Dual graph U-Nets for hyperspectral image classification
Li et al. Hyperspectral anomaly detection via image super-resolution processing and spatial correlation
Racah et al. Semi-supervised detection of extreme weather events in large climate datasets
Li et al. DLPNet: A deep manifold network for feature extraction of hyperspectral imagery
Fu et al. Tensor Singular Spectral Analysis for 3D feature extraction in hyperspectral images
Tu et al. Ensemble entropy metric for hyperspectral anomaly detection
Han et al. Deep low-rank graph convolutional subspace clustering for hyperspectral image
CN109815440A (en) The Dimensionality Reduction method of the optimization of joint figure and projection study
Qu et al. Feature Mutual Representation Based Graph Domain Adaptive Network for Unsupervised Hyperspectral Change Detection
CN115861683B (en) Rapid dimension reduction method for hyperspectral image
CN109460772B (en) Spectral band selection method based on information entropy and improved determinant point process
Ziemann et al. Hyperspectral target detection using graph theory models and manifold geometry via an adaptive implementation of locally linear embedding
Sun et al. NF-3DLogTNN: An effective hyperspectral and multispectral image fusion method based on nonlocal low-fibered-rank regularization
Singh et al. A Pre-processing framework for spectral classification of hyperspectral images
Ye et al. Bayesian Nonlocal Patch Tensor Factorization for Hyperspectral Image Super-Resolution
Yang et al. Multi-sensor data fusion of remotely-sensed images with sparse and logarithmic low-rank regularization for shadow removal and denoising

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant