CN113344103B - Hyperspectral remote sensing image ground object classification method based on hypergraph convolution neural network - Google Patents
- Publication number: CN113344103B
- Application number: CN202110697409.3A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/24: Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Classification techniques
- G06N3/045: Physics; Computing arrangements based on specific computational models; Computing arrangements based on biological models; Neural networks; Architecture; Combinations of networks
- G06N3/08: Physics; Computing arrangements based on specific computational models; Computing arrangements based on biological models; Neural networks; Learning methods
- Y02A40/10: Technologies or applications for mitigation or adaptation against climate change; Technologies for adaptation to climate change; Adaptation technologies in agriculture
Abstract
The invention discloses a hyperspectral remote sensing image ground object classification method based on a hypergraph convolutional neural network, comprising the following steps: extracting multi-modal features of the hyperspectral remote sensing image; constructing a hypergraph from the different modal features; and inputting the hypergraph and the hyperspectral remote sensing image into a hypergraph convolutional neural network to extract features, optimizing a loss function by a full gradient descent method to obtain the ground object classification result. The invention achieves high classification accuracy and speed and improves classification efficiency.
Description
Technical Field
The invention relates to the technical field of pattern recognition and machine learning, in particular to a hyperspectral remote sensing image ground object classification method based on a hypergraph convolutional neural network.
Background
Remote sensing technology has developed rapidly and attracted wide attention since the launch of the first remote sensing satellite, Landsat-1, in the 1970s. Advances in spectral sensors and spectral imaging technology have made it possible to image a target area in tens to hundreds of contiguous wave bands, giving rise to hyperspectral remote sensing. Unlike visible-light and multispectral images, hyperspectral remote sensing images truly combine spectral information with imagery for the first time. Because of the sharp increase in the number of spectral channels, a hyperspectral image is also called a hyperspectral cube: it is typically represented as a three-dimensional data block rather than the two-dimensional data of a conventional image. Hyperspectral images are widely applied in fields such as food safety inspection, medical auxiliary diagnosis, and land resource management. One of the challenging tasks is ground object classification, also known as hyperspectral image classification.
Remote sensing image classification refers to classifying a remote sensing image pixel by pixel, and is similar to the semantic segmentation task in computer vision. Hyperspectral image classification is a special case of remote sensing image classification; the main differences lie in the number of samples, the richness of categories, and the classification accuracy. Thanks to richer spectral information, hyperspectral images enable accurate multi-class classification of objects even with few samples. At the same time, the large amount of spectral information introduces information redundancy, and the insufficient number of samples causes problems such as overfitting; these issues increase the difficulty of hyperspectral classification research.
Therefore, how to provide a high-accuracy hyperspectral remote sensing image ground object classification method is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a hyperspectral remote sensing image ground object classification method based on a hypergraph convolutional neural network, which achieves high classification accuracy and speed and improves classification efficiency.
In order to achieve the purpose, the invention adopts the following technical scheme:
the hyperspectral remote sensing image ground feature classification method based on the hypergraph convolutional neural network comprises the following steps:
extracting multi-modal characteristics of the hyperspectral remote sensing image;
constructing a hypergraph based on different modal characteristics;
and inputting the hypergraph and the hyperspectral remote sensing image into a hypergraph convolution neural network to extract features, and optimizing a loss function by using a full gradient descent method to obtain a ground object classification result.
Preferably, the multi-modal features comprise spectral features and spatial features.
Preferably, the spectral feature X_spectral of the hyperspectral remote sensing image is extracted by a principal component analysis method;

the spatial feature X_spatial of the hyperspectral remote sensing image is extracted by a spatial position coding method, the calculation formula being:

X_spatial[i] = [x(i), y(i)]

wherein x(i) and y(i) respectively denote the horizontal and vertical coordinates of pixel i.
Preferably, the construction of the hypergraph based on different modal characteristics specifically comprises:
treating each pixel in the hyperspectral remote sensing image as a sample, taking the spectral features and the spatial features as the features of the samples, and calculating the similarity between samples with a metric function;

generating the probability that a hyperedge exists between samples according to the similarity between samples, generating the incidence matrix of the hypergraph, and completing the construction of the hypergraph.
Preferably, the incidence matrix H is calculated by the following formula:

H(i, j) = exp(−||x_i − x_j||² / mean²) if x_i ∈ N_k(x_j), and H(i, j) = 0 otherwise

wherein x_i denotes the features of the i-th sample, x_j denotes the features of the j-th sample, N_k(x_j) denotes the k nearest neighbors of x_j, and mean denotes the mean of the Euclidean distances between sample x_j and its k neighbors.
Preferably, the hypergraph convolutional neural network represents the hypergraph data by the incidence matrix H;

the convolution operation of the hypergraph is:

Y = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ

wherein Θ is a trainable parameter, W is the hyperedge weight matrix, Y is the output of the convolution operation, H is the incidence matrix, and D_v and D_e are the diagonal matrices of vertex degrees and hyperedge degrees, respectively; each vertex degree is defined as d(v) = Σ_{e∈E} ω(e)h(v, e) and each hyperedge degree as δ(e) = Σ_{v∈V} h(v, e), wherein V denotes the vertex set of the hypergraph, E denotes the hyperedge set of the hypergraph, v denotes a hypergraph vertex, and e denotes a hyperedge of the hypergraph;

the convolution layer of the hypergraph convolutional neural network is obtained from the hypergraph convolution operation and a nonlinear activation function, the calculation formula being:

X^{(l+1)} = σ(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X^{(l)} Θ^{(l)})

wherein X^{(l+1)} is the output of the l-th layer, σ is the ReLU function for nonlinear activation, Θ^{(l)} is a trainable parameter, and W is a trainable hyperedge weight matrix.
According to the above technical solution, compared with the prior art, the invention discloses a hyperspectral remote sensing image ground object classification method based on a hypergraph convolutional neural network. The method constructs the hypergraph structure from spectral and spatial features, and a hypergraph built from both can express long-distance relationships. By effectively combining the long- and short-distance dependency relationships with the strong feature representation capability of hypergraph convolution, the method achieves high classification accuracy and speed and improves classification efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a schematic flow chart of a hyperspectral remote sensing image ground object classification method based on a hypergraph convolutional neural network provided by the invention.
Fig. 2 is an overall framework diagram of the algorithm provided by the invention.
Fig. 3 is a schematic diagram comparing a graph structure and a hypergraph structure provided by the present invention, wherein fig. 3 (a) is the graph structure, and fig. 3 (b) is the hypergraph structure.
FIG. 4 compares the classification performance of the algorithm of the present invention on the Indian Pines data set, where FIG. 4(a) shows the ground truth and FIG. 4(b) shows the classification result of the algorithm.

FIG. 5 compares the classification performance of the algorithm of the present invention on the Botswana data set, where FIG. 5(a) shows the ground truth and FIG. 5(b) shows the classification result of the algorithm.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings. It is obvious that the described embodiments are only a part, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments herein without creative effort shall fall within the protection scope of the present invention.
As shown in FIG. 1, the embodiment of the invention discloses a hyperspectral remote sensing image ground object classification method based on a hypergraph convolutional neural network, which comprises the following steps:
s1: extracting multi-modal characteristics of the hyperspectral remote sensing image;
a. Acquisition of spectral features
The hyperspectral image has high-dimensional spectral features, and the redundancy among these features can cause the Hughes phenomenon, i.e., classification performance degrades as the feature dimension increases. Therefore, principal component analysis is chosen for feature dimension reduction to obtain the spectral feature X_spectral.
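As an illustrative sketch of this step (not part of the original disclosure; the function name, the toy data, and the choice of 3 principal components are assumptions), PCA-based spectral dimension reduction can be written with NumPy as follows:

```python
import numpy as np

def pca_spectral_features(cube, n_components=3):
    """Reduce the spectral dimension of a hyperspectral cube with PCA.

    cube: (H, W, B) array with B spectral bands.
    Returns X_spectral with shape (H, W, n_components).
    """
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    flat = flat - flat.mean(axis=0)            # center each band
    cov = np.cov(flat, rowvar=False)           # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]   # leading principal axes
    return (flat @ top).reshape(h, w, n_components)

rng = np.random.default_rng(0)
cube = rng.random((5, 5, 20))                  # toy 5x5 image with 20 bands
X_spectral = pca_spectral_features(cube, n_components=3)
print(X_spectral.shape)  # (5, 5, 3)
```

Projecting onto the leading eigenvectors keeps the directions of largest spectral variance, which is what mitigates the Hughes phenomenon described above.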
b. Acquisition of spatial features
The spatial features refer to the features of a pixel's neighborhood. The algorithm extracts the spatial features of the image by a spatial position coding method:

X_spatial[i] = [x(i), y(i)]   (1)

where x(i) and y(i) respectively denote the horizontal and vertical coordinates of pixel i.
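Equation (1) amounts to recording each pixel's coordinates; a minimal sketch (illustrative only; the row-major enumeration of pixels is an assumption):

```python
import numpy as np

def spatial_features(height, width):
    """X_spatial[i] = [x(i), y(i)]: column and row coordinates of pixel i,
    with pixels enumerated in row-major order."""
    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    return np.stack([xs.ravel(), ys.ravel()], axis=1)

coords = spatial_features(2, 3)
print(coords.tolist())  # [[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]]
```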
S2: constructing a hypergraph based on different modal characteristics;
Extracting the hypergraph structure from the multi-modal features requires two steps: first, determine the similarity between samples with a specific metric function; second, generate the probability that a hyperedge exists between samples according to that similarity.

For the hyperspectral data, each pixel is treated as a sample, and the extracted spectral and spatial features are taken as the sample's features.
The Euclidean distance is adopted as the metric function, and the incidence matrix H of the hypergraph is generated by the following formula:

H(i, j) = exp(−||x_i − x_j||² / mean²) if x_i ∈ N_k(x_j), and H(i, j) = 0 otherwise   (2)

where x_i denotes the features of the i-th sample, x_j denotes the features of the j-th sample, N_k(x_j) denotes the k nearest neighbors of x_j, and mean denotes the mean of the Euclidean distances between sample x_j and its k neighbors.
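A minimal sketch of this k-NN hypergraph construction (illustrative, not the patent's implementation; the Gaussian form of the hyperedge-membership probability, normalized by the mean neighbor distance, is an assumption based on common practice, as are the toy samples and k = 2):

```python
import numpy as np

def build_incidence(X, k=3):
    """k-NN hypergraph incidence matrix H (n samples -> n hyperedges).

    Hyperedge j groups sample x_j with its k nearest neighbours; the entry
    H[i, j] = exp(-(d(x_i, x_j) / mean_j)**2) is the soft probability that
    x_i belongs to hyperedge j, where mean_j is the mean Euclidean distance
    from x_j to its k neighbours.
    """
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    H = np.zeros((n, n))
    for j in range(n):
        nbrs = np.argsort(d[:, j])[1:k + 1]    # k nearest neighbours of x_j
        mean_j = d[nbrs, j].mean()
        H[nbrs, j] = np.exp(-(d[nbrs, j] / mean_j) ** 2)
        H[j, j] = 1.0                          # x_j belongs to its own hyperedge
    return H

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0]])
H = build_incidence(X, k=2)
print(H.shape)  # (5, 5)
```

In this toy example the two spatial clusters produce hyperedges that never cross clusters: the far-away sample x_3 gets zero membership in the hyperedge centered at x_0.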
S3: inputting the hypergraph and hyperspectral remote sensing images into a hypergraph convolution neural network to extract features, and optimizing a loss function by using a full gradient descent method to obtain a ground object classification result.
Hypergraph data is a more general form of graph data. The main difference is that the hyperedges of a hypergraph have no degree restriction, whereas the degree of every edge in an ordinary graph structure must be 2, i.e., each edge connects exactly two nodes, as shown in fig. 3.
The hypergraph convolutional neural network offers several improvements over a conventional graph network. First, it is not limited to computations on the adjacency matrix, which makes feature fusion more convenient. In addition, it can dynamically update the hyperedge weights through a learnable weight matrix, which is more flexible than the fixed graph structure in a graph network. The hypergraph convolutional neural network is described in detail below.
a. Definition of hypergraph:

A simple undirected graph can be represented by the vertex set V and edge set E as G = (V, E). Unlike the simple undirected graph structure, the hyperedges of a hypergraph are not strictly constrained, meaning that a hyperedge can connect more than two vertices. In addition, each hyperedge e carries a hyperedge weight parameter w(e). Thus, a hypergraph can be defined as G = (V, E, W), where V denotes the vertex set, E the hyperedge set, and W the set of hyperedge weights. The hypergraph convolutional network represents the hypergraph data by an incidence matrix H of size |V| × |E|, whose rows and columns correspond to vertices and hyperedges, respectively:

h(v, e) = 1 if v ∈ e, and h(v, e) = 0 otherwise   (3)

Given the H matrix, the Laplacian matrix of the hypergraph can be computed as:

L = I − D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}   (4)

where D_v and D_e are the diagonal matrices of vertex degrees and hyperedge degrees, respectively; each vertex degree is defined as d(v) = Σ_{e∈E} ω(e)h(v, e) and each hyperedge degree as δ(e) = Σ_{v∈V} h(v, e). The role of D_v and D_e can be summarized simply as normalizing the incidence matrix H, and W here denotes the diagonal matrix of hyperedge weights.
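Using the degree definitions d(v) and δ(e) above, the normalized hypergraph Laplacian can be sketched as follows (illustrative; variable names and the toy hypergraph are assumptions):

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} for incidence matrix H.

    dv holds the vertex degrees d(v) = sum_e w(e) h(v, e) and de holds the
    hyperedge degrees delta(e) = sum_v h(v, e); w defaults to unit weights.
    """
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    dv = H @ w                                # vertex degrees
    de = H.sum(axis=0)                        # hyperedge degrees
    s = 1.0 / np.sqrt(dv)
    P = (s[:, None] * H) * (w / de) @ (H.T * s[None, :])
    return np.eye(n) - P

H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)           # 4 vertices, 2 hyperedges
L = hypergraph_laplacian(H)
print(np.allclose(L, L.T))  # True: the Laplacian is symmetric
```

The resulting matrix is symmetric and positive semi-definite, which is what makes the spectral (Fourier) analysis in the next subsection well defined.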
b. Hypergraph convolution:
graph convolution is based on spectrogram theory. In short, spectrogram theory exploits the eigenvalues and eigenvectors of the graph laplacian matrix to study the properties of the graph and derive therefrom the convolution of the graph. Hypergraph convolution is improved from graph convolution. Given a hypergraph G = (V, E, W), the fourier transform of the signal (vertex) x is defined as:
where Φ can be calculated by diagonalizing the positive semi-definite matrix L:
L = Φ Λ Φ^T   (6)
where Φ = (φ_1, …, φ_n) contains the orthonormal eigenvectors and Λ = diag(λ_1, …, λ_n) is the diagonal matrix of eigenvalues. The hypergraph convolution operation of the signal x with a filter g can be written as:

g ★ x = Φ((Φ^T g) ⊙ (Φ^T x)) = Φ g(Λ) Φ^T x   (7)

where g(Λ) = diag(g(λ_1), …, g(λ_n)) holds the Fourier coefficients, which can also be viewed as the convolution kernel, and ⊙ denotes the Hadamard product. To reduce the computational complexity of finding the eigenvectors, g(Λ) is fitted with Chebyshev polynomials:

g(Λ) ≈ Σ_{k=0}^{K} θ_k T_k(Λ̃)   (8)

where Λ̃ = 2Λ/λ_max − I is the rescaled Λ, which ensures the input to the Chebyshev polynomials lies in [−1, 1]; T_k is the Chebyshev polynomial of order k, calculated as:
T_k(x) = cos(k · arccos(x))   (9)
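The closed form (9) can be checked against the standard three-term Chebyshev recurrence T_k(x) = 2x T_{k-1}(x) − T_{k-2}(x) (an illustrative sanity check, not part of the disclosure):

```python
import numpy as np

def cheb(k, x):
    """k-th order Chebyshev polynomial T_k(x) = cos(k * arccos(x)), |x| <= 1."""
    return np.cos(k * np.arccos(x))

x = 0.3
t0, t1 = 1.0, x                        # T_0 = 1, T_1 = x
for k in range(2, 6):
    t0, t1 = t1, 2 * x * t1 - t0       # T_k = 2x T_{k-1} - T_{k-2}
    assert abs(cheb(k, x) - t1) < 1e-12
print(round(float(cheb(2, x)), 2))  # -0.82, i.e. 2*0.3**2 - 1
```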
substituting equations 8 and 9 into equation 7 can yield:
whereinIs the scaled L of the image to be displayed, device for selecting or keeping>θ k Are trainable parameters. After reducing the computational complexity, further setting k =1, λ max And is approximately equal to 2. Therefore, the convolution operation of the hypergraph can be further simplified as:
wherein theta is 0 And theta 1 One parameter θ can be substituted to avoid overfitting, defined as:
then, the convolution operation of the hypergraph is further derived:
where W is the weight matrix of the excess edge, usually calculated in advance or directly initialized to the identity matrix.
Given hypergraph data X ∈ R^{n×c_1} with n vertices and c_1 feature channels, the convolution operation on the hypergraph can be defined as:

Y = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ   (14)

where W = diag(w_1, w_2, …, w_n) represents the set of hyperedge weights and is trainable, Θ is a trainable parameter, and Y is the output of the convolution operation.
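The matrix-form hypergraph convolution Y = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ can be sketched as follows (illustrative; the toy hypergraph, shapes, and random inputs are assumptions):

```python
import numpy as np

def hypergraph_conv(X, H, w, Theta):
    """One hypergraph convolution: Y = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta.

    X: (n, c1) vertex features, H: (n, m) incidence matrix,
    w: (m,) hyperedge weights, Theta: (c1, c2) trainable parameters.
    """
    s = 1.0 / np.sqrt(H @ w)                   # Dv^{-1/2} as a vector
    de_inv = 1.0 / H.sum(axis=0)               # De^{-1} as a vector
    P = (s[:, None] * H) * (w * de_inv) @ (H.T * s[None, :])
    return P @ X @ Theta

rng = np.random.default_rng(0)
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # 4 vertices, 2 hyperedges
X = rng.standard_normal((4, 3))
Theta = rng.standard_normal((3, 2))
w = np.ones(2)
Y = hypergraph_conv(X, H, w, Theta)
print(Y.shape)  # (4, 2)
```

Precomputing the normalized propagation matrix with diagonal scalings as vectors avoids building the D_v and D_e matrices explicitly.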
c. Hypergraph convolutional neural network:
the complete hypergraph convolution layer is obtained by adding a nonlinear activation function to the hypergraph convolution operation, and the expression is as follows:
wherein X (l+1) Is the output of the l-th layer, σ is the RELU function for nonlinear activation, θ (l) Are trainable parameters. W is a weight matrix of the super-edge, training can be carried out, an H matrix is a hypergraph structure extracted from the multi-modal characteristics, and other parameters can be calculated by the H matrix.
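Stacking two such layers with ReLU in between and a per-vertex softmax head yields a minimal forward pass for classification (an illustrative sketch, not the patent's implementation; layer widths, the softmax head, and the random initialization are assumptions):

```python
import numpy as np

def propagation(H, w):
    """Precompute G = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}, fixed per hypergraph."""
    s = 1.0 / np.sqrt(H @ w)
    de_inv = 1.0 / H.sum(axis=0)
    return (s[:, None] * H) * (w * de_inv) @ (H.T * s[None, :])

def hgnn_forward(X, G, Theta1, Theta2):
    """X^{(l+1)} = sigma(G X^{(l)} Theta^{(l)}): two layers, ReLU, softmax."""
    h = np.maximum(G @ X @ Theta1, 0.0)            # layer 1 + ReLU
    logits = G @ h @ Theta2                        # layer 2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)        # per-vertex class probabilities

rng = np.random.default_rng(1)
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
G = propagation(H, np.ones(2))
probs = hgnn_forward(rng.standard_normal((4, 3)), G,
                     rng.standard_normal((3, 8)),
                     rng.standard_normal((8, 5)))
print(probs.shape)  # (4, 5)
```

Training would then minimize a cross-entropy loss over the labeled pixels by full gradient descent, as stated in step S3.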
Experimental validation section:
two common hyperspectral remote sensing classification datasets, the Indian Pines and Botswana datasets, are used herein. Wherein:
indian Pines: the spatial resolution is 20m, the spectral range is 0.4-2.5 μm, the number of wave bands is 220, the types of ground objects are 16, and the image size is 145 multiplied by 145.
Botswana: the spatial resolution is 30m, the spectral range is 0.4-2.5 μm, the number of wave bands is 242, the types of ground objects are 14, and the image size is 1476 x 256.
The experimental results are shown in the figures. FIG. 4 shows the classification performance of the algorithm on the Indian Pines data set, where the classification accuracy reaches 92.41%. FIG. 5 shows the classification performance of the algorithm on the Botswana data set, where the classification accuracy reaches 98.43%.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (2)
1. A hyperspectral remote sensing image ground object classification method based on a hypergraph convolutional neural network, characterized by comprising the following steps:
extracting multi-modal characteristics of the hyperspectral remote sensing image;
constructing a hypergraph based on different modal characteristics;
inputting the hypergraph and the hyperspectral remote sensing image into a hypergraph convolutional neural network to extract features, and optimizing a loss function by a full gradient descent method to obtain a ground object classification result,

wherein the multi-modal features comprise spectral features and spatial features,

the spectral feature X_spectral of the hyperspectral remote sensing image is extracted by a principal component analysis method;

the spatial feature X_spatial of the hyperspectral remote sensing image is extracted by a spatial position coding method, the calculation formula being:

X_spatial[i] = [x(i), y(i)]

wherein x(i) and y(i) respectively denote the horizontal and vertical coordinates of pixel i,

constructing the hypergraph based on different modal features specifically comprises:

treating each pixel in the hyperspectral remote sensing image as a sample, taking the spectral features and the spatial features as the features of the samples, and calculating the similarity between samples with a metric function;

generating the probability that a hyperedge exists between samples according to the similarity between samples, generating the incidence matrix of the hypergraph, and completing the construction of the hypergraph,

the incidence matrix H being calculated by the formula:

H(i, j) = exp(−||x_i − x_j||² / mean²) if x_i ∈ N_k(x_j), and H(i, j) = 0 otherwise

wherein x_i denotes the features of the i-th sample, x_j denotes the features of the j-th sample, N_k(x_j) denotes the k nearest neighbors of x_j, and mean denotes the mean of the Euclidean distances between sample x_j and its k neighbors.
2. The hyperspectral remote sensing image ground object classification method based on the hypergraph convolutional neural network according to claim 1, characterized in that the hypergraph convolutional neural network represents the hypergraph data by the incidence matrix H;

the convolution operation of the hypergraph is:

Y = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ

wherein Θ is a trainable parameter, W is the hyperedge weight matrix, Y is the output of the convolution operation, H is the incidence matrix, and D_v and D_e are the diagonal matrices of vertex degrees and hyperedge degrees, respectively, each vertex degree being defined as d(v) = Σ_{e∈E} ω(e)h(v, e) and each hyperedge degree as δ(e) = Σ_{v∈V} h(v, e), wherein V denotes the vertex set of the hypergraph, E denotes the hyperedge set of the hypergraph, v denotes a hypergraph vertex, and e denotes a hyperedge of the hypergraph;

the convolution layer of the hypergraph convolutional neural network is obtained from the hypergraph convolution operation and a nonlinear activation function, the calculation formula being:

X^{(l+1)} = σ(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X^{(l)} Θ^{(l)})

wherein X^{(l+1)} is the output of the l-th layer, σ is the ReLU function for nonlinear activation, Θ^{(l)} is a trainable parameter, and W is a trainable hyperedge weight matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110697409.3A CN113344103B (en) | 2021-06-23 | 2021-06-23 | Hyperspectral remote sensing image ground object classification method based on hypergraph convolution neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113344103A CN113344103A (en) | 2021-09-03 |
CN113344103B (en) | 2023-03-24
Family
ID=77478159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110697409.3A Active CN113344103B (en) | 2021-06-23 | 2021-06-23 | Hyperspectral remote sensing image ground object classification method based on hypergraph convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344103B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113808138B (en) * | 2021-11-22 | 2022-02-18 | 山东鹰联光电科技股份有限公司 | Artificial intelligence-based wire and cable surface defect detection method |
CN116883692A (en) * | 2023-06-06 | 2023-10-13 | 中国地质大学(武汉) | Spectrum feature extraction method, device and storage medium of multispectral remote sensing image |
CN117315381B (en) * | 2023-11-30 | 2024-02-09 | 昆明理工大学 | Hyperspectral image classification method based on second-order biased random walk |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368896A (en) * | 2020-02-28 | 2020-07-03 | 南京信息工程大学 | Hyperspectral remote sensing image classification method based on dense residual three-dimensional convolutional neural network |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105787516B (en) * | 2016-03-09 | 2019-07-16 | 南京信息工程大学 | A kind of hyperspectral image classification method based on empty spectrum locality low-rank hypergraph study |
CN109492691A (en) * | 2018-11-07 | 2019-03-19 | 南京信息工程大学 | A kind of hypergraph convolutional network model and its semisupervised classification method |
CN110363236B (en) * | 2019-06-29 | 2020-06-19 | 河南大学 | Hyperspectral image extreme learning machine clustering method for embedding space-spectrum combined hypergraph |
- 2021-06-23: application CN202110697409.3A filed; granted as CN113344103B (status: active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368896A (en) * | 2020-02-28 | 2020-07-03 | 南京信息工程大学 | Hyperspectral remote sensing image classification method based on dense residual three-dimensional convolutional neural network |
Non-Patent Citations (1)
Title |
---|
Three-dimensional convolutional neural network model combined with conditional random field optimization for hyperspectral remote sensing image classification; Li Zhuqiang et al.; Acta Optica Sinica; 2018-04-03 (No. 08); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113344103A (en) | 2021-09-03 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |