CN117934975A - Unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution - Google Patents
Unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution
- Publication number: CN117934975A (application CN202410328183.3A)
- Authority: CN
- Country: China
- Prior art keywords: module, graph, convolution, total variation, size
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/764 — Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
- G06N3/042 — Knowledge-based neural networks; logical representations of neural networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06N3/088 — Non-supervised learning, e.g. competitive learning
- G06V10/30 — Image preprocessing; noise filtering
- G06V10/763 — Clustering; non-hierarchical techniques, e.g. based on statistics of modelling distributions
- G06V10/82 — Image or video recognition using neural networks
- G06V20/194 — Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
- Y02A40/10 — Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
Description
Technical Field

The present invention relates to the technical field of hyperspectral remote sensing image processing, and in particular to an unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution.

Background

In recent years, with the rapid development of hyperspectral imaging technology and the continuous expansion of its application fields, hyperspectral images have been widely applied in agriculture, environmental monitoring, geological exploration, and other fields. Their distinguishing feature is the information carried in many narrow spectral bands, which provides extremely rich detail and spectral signatures of object surfaces and brings unprecedented information depth to remote sensing image analysis. However, although hyperspectral images show great potential in many fields, their processing and analysis still face a series of challenges, one of the most prominent being unsupervised classification.

In the field of hyperspectral image classification, traditional methods rely mainly on supervised learning, which requires a large number of labeled samples to train classifiers. Acquiring large-scale labeled samples is time-consuming and laborious, however, and in some scenarios sufficient label information is simply unavailable. To cope with this difficulty, unsupervised learning has gradually become an important route to hyperspectral image classification. Unsupervised methods do not depend on pre-labeled samples; instead, they classify images by mining the intrinsic structure and features of the data itself, and therefore adapt better to different scenarios and application requirements.

However, current unsupervised methods still have pressing problems in hyperspectral image classification. First, because hyperspectral images are high-dimensional with complex spectral characteristics, traditional feature extraction methods struggle to fully exploit the latent information in the image. Second, noise, illumination changes, and the similarity between different classes make accurate unsupervised classification difficult. To overcome these problems, a noise-insensitive hyperspectral classification method that adapts to complex scenes is urgently needed to better meet practical application requirements. This calls for advanced techniques such as deep learning and feature embedding, as well as more detailed and precise analysis of large-scale hyperspectral image data.
Summary of the Invention

The purpose of the present invention is to overcome the limitations of existing methods, improve the accuracy and robustness of unsupervised hyperspectral image classification, and provide an unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution.

To achieve the above object, the technical solution of the present invention is as follows:

An unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution comprises the following steps:

11) Hyperspectral image denoising preprocessing based on relative total variation: obtain the hyperspectral remote sensing image of the area to be classified and apply relative total variation denoising to it;

12) Construct a joint model of the total-variation-regularized guided graph convolution module and the spatial-spectral autoencoder module: the unsupervised classification model comprises three parts: the first is a spatial-spectral autoencoder module for local context extraction and feature-dimension compression; the second is a total-variation-guided graph convolution module for global context extraction and feature fusion; the third is a K-means module, used only at test time to obtain the classification results;

13) Train the joint model of the total-variation-regularized guided graph convolution module and the spatial-spectral autoencoder module: split the hyperspectral image into image blocks and feed them to the joint model, which first extracts spatial and spectral context and then models global context with a graph neural network; model training is completed by back-propagation;

14) Obtain the unsupervised classification result: apply the model to the entire hyperspectral image to obtain the unsupervised hyperspectral image classification result.
The acquisition and preprocessing of the hyperspectral image comprise the following steps:

21) Obtain the hyperspectral remote sensing satellite image of the area to be classified;

22) Normalize the pixel values of the image;

23) Apply relative total variation denoising to the image;

24) Crop a 7×7 image block centered on each pixel, zero-filling the border regions;

25) Export the processed image in .GIF format;

26) Divide all image blocks into a training set and a validation set at a ratio of 7:3.
Constructing the joint model of the total-variation-regularized guided graph convolution module and the spatial-spectral autoencoder module comprises the following steps:

31) Set up the unsupervised hyperspectral image classification model, comprising a spatial-spectral autoencoder module for feature-dimension compression and a total-variation-guided graph convolution module. The features output by the encoder of the autoencoder serve as the node features from which each sample builds the graph structure; after graph embedding, the node features are fed to the graph convolution module for feature extraction; finally, at test time, once feature extraction is complete for every sample, the K-means algorithm is applied to obtain the final classification result;

32) Set the network structure of the spatial-spectral autoencoder module, comprising an encoder and a corresponding decoder;

The encoder comprises:

- a first 2D convolution with kernel size 3×3, stride 2, padding 0;
- a second 2D convolution with kernel size 3×3, stride 2, padding 0;
- a third 2D convolution with kernel size 1×1, stride 1, padding 0.

The decoder comprises:

- a first 1D convolution with kernel size 1×3, stride 1, padding 1;
- a second 1D convolution with kernel size 1×3, stride 1, padding 1;
- a third 2D convolution with kernel size 1×1, stride 1, padding 0.
33) Set the structure of the total-variation-guided graph convolution network; the specific steps are:

331) Set the graph embedding module structure:

Let $\mathbb{R}$ denote the real field, $N$ the number of graph nodes, and $B$ the output channel dimension of the encoder of the spatial-spectral autoencoder.

The input of the graph embedding module is $X \in \mathbb{R}^{N \times B}$, and the value of the adjacency matrix $A \in \mathbb{R}^{N \times N}$ is computed as

$$A_{ij} = \exp\!\left(-\frac{\left\|X_i - X_j\right\|_2^2}{\sigma}\right),$$

where $\sigma$ denotes the smoothing coefficient, set to 0.5, and $\|\cdot\|_2$ denotes the L2 norm.
332) Set the graph convolution module network structure:

The graph convolution module accepts an undirected graph $G=(V,E)$, where $V$ denotes the vertex set and $E$ the edge set; the output of the graph embedding module serves as the input of the first graph convolution unit, denoted $X$, with $A$ the adjacency matrix.

The output $H$ of the first graph convolution unit is

$$H = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}XW_1 + b_1\right),$$

where $\sigma(\cdot)$ is the activation function, $b_1$ the bias, $\tilde{A} = A + I_N$ with $I_N$ the identity matrix, $W_1$ a learnable parameter of the network, and $\tilde{D}$ is computed as

$$\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}.$$

The total-variation-guided graph convolution comprises two graph convolution units, so the output $Z$ of the whole module is

$$Z = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}HW_2 + b_2\right),$$

where $W_2$ and $b_2$ play the same roles as $W_1$ and $b_1$, and the remaining symbols are as defined above.
After one forward pass through the graph convolution module, the graph total variation regularization loss $\mathcal{L}_{TV}$ is computed as

$$\mathcal{L}_{TV} = \lambda_{TV}\sum_{i=1}^{N-1}\left\|Z_{i+1,:} - Z_{i,:}\right\|_1,$$

where $\lambda_{TV}$ is the total variation loss weight, set to 0.01 by default, $Z_{i,:}$ denotes a slicing operation taking the $i$-th row of the tensor $Z$, and the remaining symbols are as defined above.
34) Set the K-means algorithm parameters; the specific steps are:

341) Randomly select K cluster centers, denoted $\{\mu_1, \mu_2, \dots, \mu_K\}$; the input of the K-means algorithm is the output of the total-variation-guided graph convolution module, denoted $Z = \{z_1, z_2, \dots, z_N\}$;

342) During the iterations, whether classification is complete is judged by the convergence of the loss function

$$J = \sum_{i=1}^{N}\left\|z_i - \mu_{c_i}\right\|_2^2,$$

where the module output $Z$ stays unchanged during the iterations and $c_i$ denotes the cluster to which the $i$-th sample currently belongs;

343) Let $t = 1, 2, \dots, T$ be the iteration index, where $T$ is the maximum number of iterations, set to 1000;

344) During every iteration, each sample $z_i$ is assigned to its nearest cluster center:

$$c_i = \arg\min_{k \in \{1,\dots,K\}}\left\|z_i - \mu_k\right\|_2^2;$$

345) During every iteration, each cluster center $\mu_k$ is updated from the samples assigned to it:

$$\mu_k = \frac{1}{\left|C_k\right|}\sum_{z_i \in C_k} z_i,$$

where $C_k$ denotes the set of samples currently assigned to center $k$.
Training the joint model of the total-variation-regularized guided graph convolution module and the spatial-spectral autoencoder module comprises the following steps:

41) During training, only the spatial-spectral autoencoder module and the total-variation-guided graph convolution module take part in gradient propagation and parameter updates; after the network completes one forward pass, the total loss is computed;

42) Back-propagating the total loss determines the gradient direction of the parameter update; the parameters are then updated, completing one training iteration;

43) When the number of training rounds reaches the preset value, training stops and the model parameters are saved.
Obtaining the unsupervised classification result of the hyperspectral image comprises the following steps:

51) For a preprocessed hyperspectral image to be classified, feed all of its image blocks into the model;

52) Define the spatial-spectral autoencoder module and the total-variation-guided graph convolution module, load the trained network parameters, and freeze parameter updates;

53) The number of nodes in the graph embedding module is fixed and equal to the forward-pass batch size used during training, so forward passes at test time use the same batch size; if the total number of image blocks is not divisible by the batch size, an additional forward pass is run on some samples and the two extracted feature sets are averaged;

54) After the features of all samples in the area to be classified are obtained, cluster the sample features iteratively with K-means;

55) Obtain the K-class classification result for all samples in the area to be classified.
Beneficial Effects

The present invention relates to an unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution.

Compared with the prior art, first, the relative total variation denoising operation effectively suppresses image noise and improves classification accuracy. Second, jointly modeling the hyperspectral image with a graph convolutional network and a spatial-spectral autoencoder better captures the spatial and spectral information of the image and improves feature extraction. Finally, the method is unsupervised: it avoids the laborious process of collecting large numbers of labeled samples and adapts flexibly to different scenarios and datasets. The invention therefore fills a gap among current unsupervised hyperspectral image classification methods and has high practical value and broad application prospects.

Because hyperspectral images are noisy, their scenes complex, and their spectral information redundant, the proposed method maintains spatial smoothness and consistency during learning, is insensitive to high noise and high dimensionality, requires no label annotation, and offers higher classification accuracy, robustness, and generality. It is expected to provide more reliable hyperspectral image classification solutions for agriculture, environmental monitoring, geological exploration, and other fields, and to promote the development and application of hyperspectral image processing technology.
Brief Description of the Drawings

Figure 1 is a flow chart of the unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution;

Figure 2 is the overall structure diagram of the method;

Figure 3 is a framework diagram of the autoencoder module of the present invention;

Figure 4 is a framework diagram of the total-variation-guided graph convolution module of the present invention.
Detailed Description of the Embodiments

To provide a further understanding of the structural features and effects of the present invention, preferred embodiments are described in detail below with reference to the accompanying drawings:

As shown in Figure 1, the unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution of the present invention comprises the following steps.

Step 1: hyperspectral image denoising preprocessing based on relative total variation.

Obtain the hyperspectral remote sensing image of the area to be classified and normalize it so that the model converges quickly and stably and classification accuracy improves; apply relative total variation denoising; crop an image block of the same size centered on every pixel and divide the blocks into a training set and a validation set, so that each sample can draw on local spatial context during feature extraction. The specific steps are as follows:

(1) Apply relative total variation denoising to the hyperspectral image of the area to be classified; the specific steps are:
(1-1) Centered on pixel $p$ of the image, compute the windowed total variation in the horizontal direction, $\mathcal{D}_x(p)$, and in the vertical direction, $\mathcal{D}_y(p)$:

$$\mathcal{D}_x(p) = \sum_{q \in R(p)} g_{p,q}\left|(\partial_x S)_q\right|,$$

$$\mathcal{D}_y(p) = \sum_{q \in R(p)} g_{p,q}\left|(\partial_y S)_q\right|,$$

where $R(p)$ denotes the set of pixels in the window centered on $p$, with a window size of 7×7; $q$ denotes a pixel in the window; $g_{p,q}$ denotes the weight coefficient; $S$ denotes the denoised image; $(\partial_x S)_q$ and $(\partial_y S)_q$ denote the horizontal and vertical gradients of the denoised image $S$ at $q$; and $|\cdot|$ denotes the absolute value.
The horizontal gradient $(\partial_x S)_q$ and the vertical gradient $(\partial_y S)_q$ are computed as

$$(\partial_x S)_q = S(x_q + 1, y_q) - S(x_q, y_q),$$

$$(\partial_y S)_q = S(x_q, y_q + 1) - S(x_q, y_q),$$

where $S(x, y)$ denotes the pixel value at position $(x, y)$ of the denoised image and the unit offsets are taken in the horizontal and vertical directions relative to $q$, respectively.
The weight coefficient $g_{p,q}$ is computed as

$$g_{p,q} \propto \exp\!\left(-\frac{(x_p - x_q)^2 + (y_p - y_q)^2}{2\sigma^2}\right),$$

where $\exp(\cdot)$ denotes the exponential; $x_p - x_q$ and $y_p - y_q$ denote the horizontal and vertical offsets of $q$ relative to $p$; and $\sigma$ denotes the smoothing coefficient, set to 0.5.
(1-2) Centered on pixel $p$ of the image, compute the windowed inherent variation in the horizontal direction, $\mathcal{L}_x(p)$, and in the vertical direction, $\mathcal{L}_y(p)$:

$$\mathcal{L}_x(p) = \Bigl|\sum_{q \in R(p)} g_{p,q}\,(\partial_x S)_q\Bigr|,$$

$$\mathcal{L}_y(p) = \Bigl|\sum_{q \in R(p)} g_{p,q}\,(\partial_y S)_q\Bigr|,$$

where all symbols have the same meanings as in the windowed total variation.
(1-3) Output the denoised image by minimizing

$$S = \arg\min_S \sum_p \left((S_p - I_p)^2 + \lambda\left(\frac{\mathcal{D}_x(p)}{\mathcal{L}_x(p) + \varepsilon} + \frac{\mathcal{D}_y(p)}{\mathcal{L}_y(p) + \varepsilon}\right)\right),$$

where $I$ denotes the original hyperspectral image and $S$ the denoised image; $\lambda$ denotes the smoothing coefficient, set to 0.01; $\varepsilon$ is a constant preventing division by zero, set to 0.001; and the remaining symbols are as defined above.
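For illustration only, the two windowed quantities above reduce to Gaussian filtering of the gradient fields, since the weights $g_{p,q}$ form a Gaussian window. The following minimal NumPy/SciPy sketch computes the per-pixel RTV penalty for a single band under that assumption (the function and parameter names are illustrative, not part of the claimed method; a complete denoiser would additionally iterate the reweighted linear system that minimizes the objective above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rtv_penalty(S, sigma=0.5, eps=0.001):
    """Per-pixel relative total variation penalty for one image band S.

    The Gaussian-weighted window sums reduce to Gaussian filtering of
    |gradient| (windowed total variation D) and of the signed gradient
    (windowed inherent variation L).
    """
    # Forward differences: horizontal and vertical gradients.
    gx = np.diff(S, axis=1, append=S[:, -1:])
    gy = np.diff(S, axis=0, append=S[-1:, :])

    # Windowed total variation: weighted sum of gradient magnitudes.
    Dx = gaussian_filter(np.abs(gx), sigma)
    Dy = gaussian_filter(np.abs(gy), sigma)

    # Windowed inherent variation: magnitude of the weighted gradient sum.
    Lx = np.abs(gaussian_filter(gx, sigma))
    Ly = np.abs(gaussian_filter(gy, sigma))

    # RTV penalty used inside the smoothing objective of step (1-3).
    return Dx / (Lx + eps) + Dy / (Ly + eps)
```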
(2) Normalize the pixel values of the hyperspectral remote sensing satellite image;

(3) Crop a 7×7 image block centered on each pixel of the image, zero-filling the border regions;

(4) Export the processed image in .GIF format;

(5) Divide all image blocks into a training set and a validation set at a ratio of 7:3; steps (2), (3), and (5) are sketched below.
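A short sketch of steps (2), (3), and (5) is given below (the `hsi` array of shape H×W×C and the helper name are illustrative assumptions; the 7×7 window, zero border fill, and 7:3 split follow the text):

```python
import numpy as np

def make_patches(hsi, patch=7, train_ratio=0.7, seed=0):
    """Normalize, zero-pad, crop one block per pixel, and split 7:3."""
    hsi = (hsi - hsi.min()) / (hsi.max() - hsi.min())    # pixel-value normalization
    r = patch // 2
    padded = np.pad(hsi, ((r, r), (r, r), (0, 0)))       # border regions filled with 0
    H, W, _ = hsi.shape
    blocks = np.stack([padded[i:i + patch, j:j + patch]  # 7x7xC block per pixel
                       for i in range(H) for j in range(W)])
    idx = np.random.default_rng(seed).permutation(len(blocks))
    n_train = int(train_ratio * len(blocks))
    return blocks[idx[:n_train]], blocks[idx[n_train:]]  # training, validation sets
```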
Step 2: construct the joint model of the total-variation-regularized guided graph convolution module and the spatial-spectral autoencoder module.

Set up the unsupervised hyperspectral image classification model to comprise the spatial-spectral autoencoder module network structure used for feature-dimension compression and the total-variation-guided graph convolution module network structure. The features output by the encoder of the autoencoder serve as the node features from which each sample builds the graph structure; after graph embedding, the node features are fed to the total-variation-guided graph convolution module for feature extraction; finally, at test time, once feature extraction is complete for every sample, the K-means algorithm is applied to obtain the final classification result. In addition, as shown in Figure 2, the spatial-spectral autoencoder module effectively extracts the local spatial context of each sample and compresses its spectral dimension, markedly reducing the influence of noise and preventing the drop in classification accuracy caused by the same-material/different-spectra phenomenon that accompanies the curse of dimensionality. As shown in Figure 3, the total-variation-guided graph convolution module builds global contextual links among all samples, and the total variation regularization term further suppresses noise, making inter-class features sparse and intra-class features compact, so that the final feature of each sample is extracted effectively.

The specific steps are as follows:
(1) Set the network structure of the spatial-spectral autoencoder module, comprising an encoder and a corresponding decoder (a sketch follows the layer list);

The encoder comprises:

- a first 2D convolution with kernel size 3×3, stride 2, padding 0;
- a second 2D convolution with kernel size 3×3, stride 2, padding 0;
- a third 2D convolution with kernel size 1×1, stride 1, padding 0.

The decoder comprises:

- a first 1D convolution with kernel size 1×3, stride 1, padding 1;
- a second 1D convolution with kernel size 1×3, stride 1, padding 1;
- a third 2D convolution with kernel size 1×1, stride 1, padding 0.
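A minimal PyTorch sketch of this autoencoder follows (the intermediate channel width of 64, the ReLU activations, and the compressed band count `B = 32` are assumptions not fixed by the text; the kernel sizes, strides, and paddings follow the layer list above):

```python
import torch
import torch.nn as nn

class SpatialSpectralAE(nn.Module):
    """Spatial-spectral autoencoder: 7x7xC block -> 1x1xB code -> 1x1xC spectrum."""
    def __init__(self, C, B=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(C, 64, 3, stride=2, padding=0), nn.ReLU(),   # 7x7 -> 3x3
            nn.Conv2d(64, 64, 3, stride=2, padding=0), nn.ReLU(),  # 3x3 -> 1x1
            nn.Conv2d(64, B, 1, stride=1, padding=0),              # 1x1xB code
        )
        # Two 1D convolutions over the spectral axis (kernel 1x3, padding 1).
        self.spectral = nn.Sequential(
            nn.Conv1d(1, 1, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(1, 1, 3, stride=1, padding=1), nn.ReLU(),
        )
        # 1x1 2D convolution with C kernels: spectral upsampling B -> C.
        self.up = nn.Conv2d(B, C, 1, stride=1, padding=0)

    def forward(self, x):                       # x: (N, C, 7, 7)
        code = self.encoder(x)                  # (N, B, 1, 1)
        z = code.flatten(1)                     # (N, B) graph node features
        s = self.spectral(z.unsqueeze(1))       # (N, 1, B) spectral local context
        recon = self.up(s.transpose(1, 2).unsqueeze(-1))  # (N, C, 1, 1)
        return z, recon.flatten(1)              # code, reconstructed spectrum
```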
(2) Set the structure of the total-variation-guided graph convolution network; the specific steps are:

(2-1) Set the graph embedding module structure:

The graph embedding module accepts the features of a batch of samples and completes the graph topology embedding by computing an adjacency matrix from the pairwise feature similarities within the batch; the feature of each node is the output of the encoder in the spatial-spectral autoencoder module.
Let $\mathbb{R}$ denote the real field, $N$ the number of graph nodes, and $B$ the output channel dimension of the encoder of the spatial-spectral autoencoder.

The input of the graph embedding module is $X \in \mathbb{R}^{N \times B}$, and the value of the adjacency matrix $A \in \mathbb{R}^{N \times N}$ is computed as

$$A_{ij} = \exp\!\left(-\frac{\left\|X_i - X_j\right\|_2^2}{\sigma}\right),$$

where $\sigma$ denotes the smoothing coefficient, set to 0.5, and $\|\cdot\|_2$ denotes the L2 norm; a sketch of this computation follows.
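A sketch of the graph embedding computation (with $\sigma = 0.5$ as above, batched over the $N$ node features):

```python
import torch

def build_adjacency(X, sigma=0.5):
    """A_ij = exp(-||X_i - X_j||_2^2 / sigma) for node features X of shape (N, B)."""
    d2 = torch.cdist(X, X, p=2) ** 2   # (N, N) squared pairwise L2 distances
    return torch.exp(-d2 / sigma)
```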
(2-2) Set the graph convolution module network structure:

The graph convolution module accepts an undirected graph $G=(V,E)$, where $V$ denotes the vertex set and $E$ the edge set; the output of the graph embedding module serves as the input of the first graph convolution unit, denoted $X$, with $A$ the adjacency matrix.

The output $H$ of the first graph convolution unit is

$$H = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}XW_1 + b_1\right),$$

where $\sigma(\cdot)$ is the activation function, $b_1$ the bias, $\tilde{A} = A + I_N$ with $I_N$ the identity matrix, $W_1$ a learnable parameter of the network, and $\tilde{D}$ is computed as

$$\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}.$$

The total-variation-guided graph convolution comprises two graph convolution units, so the output $Z$ of the whole module is

$$Z = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}HW_2 + b_2\right),$$

where $W_2$ and $b_2$ play the same roles as $W_1$ and $b_1$, and the remaining symbols are as defined above.
After one forward pass through the graph convolution module, the graph total variation regularization loss $\mathcal{L}_{TV}$ is computed as

$$\mathcal{L}_{TV} = \lambda_{TV}\sum_{i=1}^{N-1}\left\|Z_{i+1,:} - Z_{i,:}\right\|_1,$$

where $\lambda_{TV}$ is the total variation loss weight, set to 0.01 by default, $Z_{i,:}$ denotes a slicing operation taking the $i$-th row of the tensor $Z$, and the remaining symbols are as defined above. A sketch of this module follows.
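A sketch of the two-unit graph convolution and its regularizer (`nn.Linear` supplies each unit's weight and bias; the hidden and output widths are assumptions, and the adjacent-row slicing in `tv_loss` is one plausible reading of the slicing operation described above):

```python
import torch
import torch.nn as nn

class TVGuidedGCN(nn.Module):
    """Two graph convolution units with symmetric adjacency normalization."""
    def __init__(self, B, hidden=64, out=32):
        super().__init__()
        self.W1 = nn.Linear(B, hidden)     # learnable weight and bias of unit 1
        self.W2 = nn.Linear(hidden, out)   # learnable weight and bias of unit 2

    def forward(self, X, A):
        A_t = A + torch.eye(A.size(0), device=A.device)   # A~ = A + I
        d = A_t.sum(dim=1)                                # D~_ii = sum_j A~_ij
        norm = A_t * (d.rsqrt().unsqueeze(1) * d.rsqrt().unsqueeze(0))  # D~^-1/2 A~ D~^-1/2
        H = torch.relu(self.W1(norm @ X))                 # first unit output H
        return torch.relu(self.W2(norm @ H))              # module output Z

def tv_loss(Z, weight=0.01):
    """Graph TV regularizer: L1 norm of differences between adjacent node slices."""
    return weight * (Z[1:, :] - Z[:-1, :]).abs().sum()
```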
(3) Set the K-means algorithm parameters:

Divide the given sample features into K clusters, continuously updating the center of each cluster as the division proceeds; K is set to the number of classes.

(3-1) Randomly select K cluster centers, denoted $\{\mu_1, \mu_2, \dots, \mu_K\}$; the input of the K-means algorithm is the output of the total-variation-guided graph convolution module, denoted $Z = \{z_1, z_2, \dots, z_N\}$;
(3-2) During the iterations, whether classification is complete is judged by the convergence of the loss function

$$J = \sum_{i=1}^{N}\left\|z_i - \mu_{c_i}\right\|_2^2,$$

where $Z$ stays unchanged during the iterations and $c_i$ denotes the cluster to which the $i$-th sample currently belongs;
(3-3) Let $t = 1, 2, \dots, T$ be the iteration index, where $T$ is the maximum number of iterations, set to 1000;

(3-4) During every iteration, each sample $z_i$ is assigned to its nearest cluster center:

$$c_i = \arg\min_{k \in \{1,\dots,K\}}\left\|z_i - \mu_k\right\|_2^2;$$
(3-5) During every iteration, each cluster center $\mu_k$ is updated from the samples assigned to it:

$$\mu_k = \frac{1}{\left|C_k\right|}\sum_{z_i \in C_k} z_i,$$

where $C_k$ denotes the set of samples currently assigned to center $k$. A sketch of this clustering stage follows.
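A short sketch of the clustering stage (Lloyd's iterations; the tolerance used for the convergence check is an assumption):

```python
import torch

def kmeans(Z, K, iters=1000, tol=1e-6, seed=0):
    """Cluster GCN output features Z of shape (N, D) into K clusters."""
    g = torch.Generator().manual_seed(seed)
    mu = Z[torch.randperm(Z.size(0), generator=g)[:K]].clone()  # random initial centers
    prev = float('inf')
    for _ in range(iters):
        d2 = torch.cdist(Z, mu) ** 2                  # (N, K) squared distances
        c = d2.argmin(dim=1)                          # nearest-center assignment
        loss = d2.gather(1, c.unsqueeze(1)).sum()     # clustering objective J
        for k in range(K):                            # center update from members
            if (c == k).any():
                mu[k] = Z[c == k].mean(dim=0)
        if abs(prev - loss.item()) < tol:             # stop once J has converged
            break
        prev = loss.item()
    return c, mu
```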
Step 3: train the joint model of the total-variation-regularized guided graph convolution module and the spatial-spectral autoencoder module.

Take the segmented hyperspectral image blocks and feed them into the spatial-spectral autoencoder module and the total-variation-guided graph convolution module for model training.

The specific steps are as follows:

(1) Randomly sample points from the processed hyperspectral image, divide them into image blocks of size 7×7×C, and feed the training and validation sets into the network, where C is the number of spectral bands of the hyperspectral image;

(2) Feed a batch of image blocks into the spatial-spectral autoencoder module; the tensor output by the encoder has dimension 1×1×B and serves as the input both to the decoder of the autoencoder and to the graph embedding module of the total-variation-guided graph convolution module, where B is the number of bands after encoder compression;

(3) The decoder receives the encoder output as input, upsamples the 1×1×B tensor to 1×1×C, and computes the L1 loss against the spectrum of the center pixel of the sample image block;

(4) The 1×1×B feature vector first passes through two 1D convolutions with stride 1 and kernel size greater than 1, which extract local context along the spectral dimension; a 2D convolution with kernel size 1×1 and C kernels then upsamples the spectral dimension to complete the spectral reconstruction;
(5) The reconstructed spectrum and the spectral vector of the sample's center pixel update the spatial-spectral autoencoder parameters through an L1 loss; the spectral reconstruction loss $\mathcal{L}_{rec}$ is

$$\mathcal{L}_{rec} = \frac{1}{N}\sum_{i=1}^{N}\left\|\hat{y}_i - y_i\right\|_1,$$

where $\hat{y}_i$ denotes the reconstructed spectrum of the $i$-th sample, $y_i$ the spectrum of its center pixel, $N$ equals the number of graph nodes, and $\|\cdot\|_1$ denotes the L1 norm;
(6) The graph embedding module accepts the encoder output $X$ as input; after the compressed features complete the topology-graph construction, they are fed into the graph convolutional network for feature extraction, and the output is used to compute the graph total variation regularization loss $\mathcal{L}_{TV}$;
(7) During training, only the spatial-spectral autoencoder module and the total-variation-guided graph convolution module take part in gradient propagation and parameter updates; after the network completes one forward pass, the total loss $\mathcal{L}$ is

$$\mathcal{L} = \mathcal{L}_{rec} + \mathcal{L}_{TV};$$
The spatial-spectral autoencoder module and the total-variation-guided graph convolution module can be trained simultaneously: the spectral reconstruction loss $\mathcal{L}_{rec}$ updates only the autoencoder parameters, and the graph total variation regularization loss $\mathcal{L}_{TV}$ updates only the graph convolution parameters,

$$\theta_{AE} \leftarrow \theta_{AE} - \eta\,\nabla_{\theta_{AE}}\mathcal{L}_{rec},$$

where $\theta_{AE}$ denotes the spatial-spectral autoencoder module parameters, and

$$\theta_{GCN} \leftarrow \theta_{GCN} - \eta\,\nabla_{\theta_{GCN}}\mathcal{L}_{TV},$$

where $\theta_{GCN}$ denotes the total-variation-guided graph convolution module parameters and $\eta$ denotes the learning rate;
(8) Back-propagating the total loss $\mathcal{L}$ determines the gradient direction of the parameter update; the parameters are then updated, completing one training iteration;

(9) When the number of training rounds reaches the preset value, training stops and the model parameters are saved. A sketch of one full training iteration follows.
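Bringing the pieces together, one training iteration might look as follows (a sketch reusing the illustrative classes above; `loader`, the Adam optimizer, and the learning rate are assumptions, and the `detach` call reflects the rule that $\mathcal{L}_{rec}$ updates only the autoencoder while $\mathcal{L}_{TV}$ updates only the graph convolution module):

```python
import torch

ae = SpatialSpectralAE(C=200)    # C: band count of the scene (assumed value)
gcn = TVGuidedGCN(B=32)
opt = torch.optim.Adam(list(ae.parameters()) + list(gcn.parameters()), lr=1e-3)

for patches in loader:                        # loader: assumed iterator of (N, C, 7, 7) blocks
    z, recon = ae(patches)                    # codes (N, B), reconstructed spectra (N, C)
    center = patches[:, :, 3, 3]              # spectrum of each 7x7 block's center pixel
    rec_loss = (recon - center).abs().mean()  # L1 spectral reconstruction loss L_rec
    zg = z.detach()                           # keep L_TV gradients out of the autoencoder
    Z = gcn(zg, build_adjacency(zg))          # graph embedding + two graph conv units
    loss = rec_loss + tv_loss(Z)              # total loss L = L_rec + L_TV
    opt.zero_grad()
    loss.backward()                           # L_rec updates the AE, L_TV the GCN
    opt.step()
```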
Step 4: obtain the unsupervised classification result of the hyperspectral image.

Obtain the hyperspectral remote sensing satellite image of the area to be classified, feed all of its image blocks into the trained unsupervised classification model based on total-variation-regularized guided graph convolution for a forward pass, and obtain the classification map.

The specific steps are as follows:

(1) For a preprocessed hyperspectral image of the area to be classified, all sample points are divided into image blocks of size 7×7×C;

(2) Define the spatial-spectral autoencoder module and the total-variation-guided graph convolution module, load the trained network parameters, and freeze parameter updates;
(3) The number of nodes in the graph embedding module is fixed and equal to the forward-pass batch size used during training, so forward passes at test time use the same batch size; if the total number of image blocks is not divisible by the batch size, an additional forward pass is run on some samples and the two extracted feature sets are averaged;
(4) After the features of all samples in the area to be classified are obtained, cluster the sample features iteratively with K-means;

(5) Obtain the K-class classification result for all samples in the area to be classified; a test-time sketch follows.
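A test-time sketch following steps (1)–(5) (the remainder handling mirrors step (3): samples covered by two forward passes have their feature vectors averaged; names reuse the illustrative sketches above, and `batch` is assumed to equal the training batch size, with at least `batch` blocks in total):

```python
import torch

@torch.no_grad()
def classify(patches, ae, gcn, K, batch):
    """Extract features at the training batch size, then cluster with K-means."""
    ae.eval(); gcn.eval()                        # parameters are frozen at test time
    feats = torch.empty(len(patches), gcn.W2.out_features)
    for s in range(0, len(patches) - batch + 1, batch):
        z, _ = ae(patches[s:s + batch])
        feats[s:s + batch] = gcn(z, build_adjacency(z))
    rem = len(patches) % batch                   # blocks not covered by full batches
    if rem:                                      # re-run one full-size trailing window
        z, _ = ae(patches[-batch:])
        Z = gcn(z, build_adjacency(z))
        feats[-batch:-rem] = 0.5 * (feats[-batch:-rem] + Z[:batch - rem])  # average twice-seen
        feats[-rem:] = Z[-rem:]
    labels, _ = kmeans(feats, K)                 # K-class result for all samples
    return labels
```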
The spatial-spectral autoencoder module and the total-variation-guided graph convolution module proposed herein markedly reduce the influence of noise, prevent the drop in classification accuracy caused by the same-material/different-spectra phenomenon that accompanies the curse of dimensionality, and integrate global context for more effective feature extraction. Compared with existing unsupervised hyperspectral classification techniques, the present invention solves the classification difficulties caused by heavy noise, complex scenes, and dimensional redundancy in hyperspectral remote sensing images.

The foregoing shows and describes the basic principles, main features, and advantages of the present invention. Those skilled in the art should understand that the invention is not limited to the above embodiments, which, together with the description, merely illustrate its principles; various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention, which is defined by the appended claims and their equivalents.
Claims (5)
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410328183.3A (CN117934975B) | 2024-03-21 | 2024-03-21 | Unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution |

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410328183.3A (CN117934975B) | 2024-03-21 | 2024-03-21 | Unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution |

Publications (2)

Publication Number | Publication Date |
---|---|
CN117934975A (en) | 2024-04-26 |
CN117934975B (en) | 2024-06-07 |
Family ID: 90764987

Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410328183.3A (CN117934975B, active) | Unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution | 2024-03-21 | 2024-03-21 |

Country Status (1)

Country | Link |
---|---|
CN (1) | CN117934975B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985357A (en) * | 2018-06-29 | 2018-12-11 | 湖南理工学院 | The hyperspectral image classification method of set empirical mode decomposition based on characteristics of image |
CN110084159A (en) * | 2019-04-15 | 2019-08-02 | 西安电子科技大学 | Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint |
CN111161199A (en) * | 2019-12-13 | 2020-05-15 | 中国地质大学(武汉) | A low-rank sparse decomposition method for mixed pixels of hyperspectral images based on spatial spectrum fusion |
CN111368691A (en) * | 2020-02-28 | 2020-07-03 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Unsupervised hyperspectral remote sensing image space spectrum feature extraction method |
CN111754593A (en) * | 2020-06-28 | 2020-10-09 | 西安航空学院 | A Compressed Sensing Reconstruction Method for Multi-hypothesis Prediction of Hyperspectral Images Based on Spatial Spectrum Joint |
CN113343942A (en) * | 2021-07-21 | 2021-09-03 | 西安电子科技大学 | Remote sensing image defect detection method |
CN113743429A (en) * | 2020-05-28 | 2021-12-03 | 中国人民解放军战略支援部队信息工程大学 | Hyperspectral image classification method and device |
CN115331105A (en) * | 2022-08-19 | 2022-11-11 | 西安石油大学 | Hyperspectral image classification method and system |
CN115565071A (en) * | 2022-10-26 | 2023-01-03 | 深圳大学 | Hyperspectral Image Transformer Network Training and Classification Method |
CN115731135A (en) * | 2022-11-24 | 2023-03-03 | 电子科技大学长三角研究院(湖州) | Hyperspectral image denoising method and system based on low-rank tensor decomposition and adaptive graph total variation |
US20230114877A1 (en) * | 2020-06-29 | 2023-04-13 | Southwest Electronics Technology Research Institute ( China Electronics Technology Group Corporation | Unsupervised Latent Low-Rank Projection Learning Method for Feature Extraction of Hyperspectral Images |
CN116310459A (en) * | 2023-03-28 | 2023-06-23 | 中国地质大学(武汉) | Hyperspectral image subspace clustering method based on multi-view spatial spectrum combination |
WO2023125456A1 (en) * | 2021-12-28 | 2023-07-06 | 苏州大学 | Multi-level variational autoencoder-based hyperspectral image feature extraction method |
CN116403046A (en) * | 2023-04-13 | 2023-07-07 | 中国人民解放军海军航空大学 | Hyperspectral image classification device and method |
US20230252644A1 (en) * | 2022-02-08 | 2023-08-10 | Ping An Technology (Shenzhen) Co., Ltd. | System and method for unsupervised superpixel-driven instance segmentation of remote sensing image |
Non-Patent Citations (3)

Title |
---|
QICHAO LIU et al.: "CNN-Enhanced Graph Convolutional Network With Pixel- and Superpixel-Level Feature Fusion for Hyperspectral Image Classification", IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 10, 24 November 2020, pages 8657, XP011879869, DOI: 10.1109/TGRS.2020.3037361 |
ZHI GONG et al.: "Superpixel Spectral-Spatial Feature Fusion Graph Convolution Network for Hyperspectral Image Classification", IEEE Transactions on Geoscience and Remote Sensing, vol. 60, 16 August 2022, pages 1-16, XP011919617, DOI: 10.1109/TGRS.2022.3198931 |
WANG Tingting et al.: "Hyperspectral image classification based on Gabor filtering and cascaded GCN and CNN", 应用科技 (Applied Science and Technology), vol. 50, no. 2, 6 June 2023, pages 79-85 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119131438A (en) * | 2024-11-11 | 2024-12-13 | 中国人民解放军火箭军工程大学 | A hyperspectral image clustering method, system, device, medium and product |
CN119131438B (en) * | 2024-11-11 | 2025-02-21 | 中国人民解放军火箭军工程大学 | A hyperspectral image clustering method, system, device, medium and product |
Also Published As
Publication number | Publication date |
---|---|
CN117934975B (en) | 2024-06-07 |
Similar Documents

Publication | Title |
---|---|
CN108830855B (en) | Full convolution network semantic segmentation method based on multi-scale low-level feature fusion |
CN111738329A (en) | A land use classification method for time series remote sensing images |
CN111368691B (en) | Unsupervised hyperspectral remote sensing image space spectrum feature extraction method |
CN111738363B (en) | Alzheimer disease classification method based on improved 3D CNN network |
CN111860612A (en) | Unsupervised hyperspectral image hidden low-rank projection learning feature extraction method |
CN113192076B (en) | MRI Brain Tumor Image Segmentation Using Combined Classification Prediction and Multiscale Feature Extraction |
CN108171122A (en) | The sorting technique of high-spectrum remote sensing based on full convolutional network |
CN110059768A (en) | The semantic segmentation method and system of the merging point and provincial characteristics that understand for streetscape |
CN110287777A (en) | A Body Segmentation Algorithm for Golden Monkey in Natural Scenes |
CN114862871A (en) | Remote sensing image wheat planting area extraction method based on SE-UNet deep learning network |
CN104732551A (en) | Level set image segmentation method based on superpixel and graph-cut optimizing |
CN108898269A (en) | Electric power image-context impact evaluation method based on measurement |
CN111696043A (en) | Hyperspectral image super-resolution reconstruction algorithm of three-dimensional FSRCNN |
WO2024222610A1 (en) | Remote-sensing image change detection method based on deep convolutional network |
CN113887656B (en) | Hyperspectral image classification method combining deep learning and sparse representation |
CN117934975B (en) | Unsupervised hyperspectral image classification method based on total-variation-regularized guided graph convolution |
CN118967449A (en) | A super-resolution method for pathological slice images based on diffusion model |
CN111639697B (en) | Hyperspectral image classification method based on non-repeated sampling and prototype network |
Yuan et al. | ROBUST PCANet for hyperspectral image change detection |
CN114463340A (en) | Edge information guided agile remote sensing image semantic segmentation method |
CN117975002A (en) | Weak supervision image segmentation method based on multi-scale pseudo tag fusion |
CN118015483A (en) | A lightweight remote sensing image cloud detection method, system, device and medium guided by geoscience prior knowledge |
CN118470479A (en) | Remote sensing image space-time fusion method and system based on parallel interaction of Swin Transformer and CNN |
CN112949771A (en) | Hyperspectral remote sensing image classification method based on multi-depth multi-scale hierarchical attention fusion mechanism |
CN109002771A (en) | A kind of Classifying Method in Remote Sensing Image based on recurrent neural network |
Legal Events

Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |