CN111695467B - Spatial-Spectral Fully Convolutional Hyperspectral Image Classification Method Based on Superpixel Sample Expansion
- Publication number: CN111695467B
- Application number: CN202010485713.7A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V20/13 — Satellite images
- G06F18/2415 — Classification techniques based on parametric or probabilistic models
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06V10/464 — Salient features using a plurality of salient features, e.g. bag-of-words [BoW] representations
- G06V20/194 — Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
- Y02A40/10 — Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
Description
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a spatial-spectral fully convolutional hyperspectral image classification method based on superpixel sample expansion.
Background Art
With the advancement of science and technology, hyperspectral remote sensing has developed enormously. Hyperspectral data can be represented as a hyperspectral data cube, a three-dimensional data structure: it can be regarded as a three-dimensional image that adds a spectral dimension to an ordinary two-dimensional image. The spatial image describes the two-dimensional spatial characteristics of the earth's surface, while the spectral dimension reveals the spectral curve of each pixel, thus organically fusing the spatial and spectral information of the remote sensing data. Hyperspectral remote sensing images contain rich spectral information, provide both spatial-domain and spectral-domain information, and have the characteristic of "unifying image and spectrum", enabling accurate discrimination of ground objects and extraction of detail, which provides favorable conditions for understanding the objective world. Owing to these unique characteristics, hyperspectral remote sensing has been widely applied in different fields. In the civilian domain, hyperspectral remote sensing images have been used for urban environmental monitoring, surface soil monitoring, geological exploration, disaster assessment, agricultural yield estimation, crop analysis, and so on, and the technology has entered people's daily life. Therefore, designing practical and efficient hyperspectral image classification methods has become an indispensable scientific and technological requirement of modern society.
At present, researchers have proposed many classic methods for hyperspectral image classification; representative ones are the support vector machine (SVM) and the convolutional neural network (CNN). By maximizing the margin between classes, SVM achieves good results in small-sample classification. F. Melgani and L. Bruzzone introduced SVM to hyperspectral image classification in "Classification of hyperspectral remote sensing images with support vector machines" and obtained the best classification results at the time; however, the choice of the SVM kernel function relies entirely on empirical judgment, and an unsuitable kernel leads to poor classification performance. Meanwhile, with the rise of deep learning, convolutional neural networks have also been applied to hyperspectral image classification. But training a convolutional neural network requires a large number of labeled samples, and labeling hyperspectral images is very expensive, so how to solve the small-sample problem is currently a hot research direction.
Summary of the Invention
The technical problem to be solved by the present invention, in view of the above deficiencies of the prior art, is to provide a spatial-spectral fully convolutional hyperspectral image classification method based on superpixel sample expansion that uses segmentation results as prior information to improve the classification performance on hyperspectral images.
The present invention adopts the following technical solution:
A spatial-spectral fully convolutional hyperspectral image classification method based on superpixel sample expansion, comprising the following steps:
S1. Input the hyperspectral image PaviaU and obtain training samples X_t and test samples X_e from it;
S2. Normalize the hyperspectral data set; for each training sample, take from the segmentation label matrix n pixels that lie in its neighborhood and share its segmentation label, and add them to the training set as pseudo-label samples;
S3. Construct a spectral feature extraction module and a spatial-spectral feature extraction module, construct a weighted fusion module for the spectral feature map and the spatial-spectral feature map, and pass the combined spatial-spectral features through two convolutional layers, thereby building a spatial-spectral fully convolutional neural network for hyperspectral classification;
S4. Construct the loss function of the fully convolutional neural network of step S3 and train the network;
S5. Obtain the final classification map by voting over multiple training runs, realizing the image classification.
Specifically, step S1 is as follows:
S101. Denote the three-dimensional hyperspectral image PaviaU as X ∈ R^{U×V×C}, where U, V and C are the spatial length, spatial width and number of spectral channels of the hyperspectral image, respectively. The image contains N pixels, each with C spectral bands, where N = U×V;
S102. Randomly select 30 samples of each of the class labels 1 to 9 in X to form the initial training sample set X_t; the remainder serve as test samples X_e.
Specifically, step S2 is as follows:
S201. Apply PCA dimensionality reduction to the three-dimensional hyperspectral image; the reduced image has one channel;
S202. Apply entropy-rate superpixel segmentation to the PCA-reduced image, partitioning it into 50 blocks and obtaining the segmentation label matrix S ∈ R^{U×V};
S203. Let the ground-truth label matrix be Y ∈ R^{U×V} and the segmentation label matrix be S ∈ R^{U×V}. The true label of the training sample at (x_0, y_0) is Y(x_0, y_0) and its segmentation label is S(x_0, y_0). In the 7×7 window centered at the training sample, select any n samples (x, y) satisfying S(x, y) = S(x_0, y_0) and assign them the pseudo-label Y(x, y) = Y(x_0, y_0). The training set is expanded with the pseudo-label samples meeting this criterion, so the number of training samples becomes n+1 times the original, while the test samples remain unchanged.
Specifically, step S3 is as follows:
S301. Construct the spectral feature extraction module, which consists of three convolutional layers and a merging layer; each convolutional layer is followed by a ReLU activation function and batch normalization;
S302. Construct the spatial-spectral feature extraction module, which comprises a 1×1 convolution, ReLU activation, batch normalization, a multi-scale spatial feature fusion layer, a 3×3 dilated convolution, ReLU activation, batch normalization, 2×2 average pooling and a merging layer;
S303. Construct the weighted fusion module for the spectral feature map and the spatial-spectral feature map;
S304. Pass the combined spatial-spectral features through two convolutional layers;
S305. Reduce the convolved feature map to 5 dimensions with PCA, for use in the subsequent CRF processing;
S306. Apply a Softmax operation to the convolved feature map to output a classification probability matrix, and output the dimension with the largest value in the classification probability matrix as the predicted class label to obtain the classification result.
Further, in step S301, the batch normalization parameter is momentum = 0.8; all convolution kernels have size 1 and stride 1, and the number of channels after every convolution is 64. After three successive convolutions, the convolution results are summed to obtain the spectral feature map.
Further, in step S302, the first convolutional layer uses a 1×1 convolution with stride 1; the 3×3 dilated convolution has a dilation rate of 2 and stride 1; all convolution outputs have 64 channels; all batch normalization parameters are momentum = 0.8; the merging layer adds the feature maps of the three convolutional layers, keeping 64 channels.
Further, in step S303, the spectral feature map and the spatial-spectral feature map are weighted and summed as follows:

C_unite = λ_spectral · C_spectral + λ_spatial · C_spatial

where C_unite is the weighted feature map, λ_spectral and λ_spatial are the weight coefficients of the trainable spectral and spatial features in the network, and C_spectral and C_spatial are the spectral and spatial-spectral feature maps, respectively.
Specifically, in step S4, the loss function is:

L = L_1 + L_2

where L is the final loss, L_1 and L_2 are the cross-entropy losses over the labeled samples and the pseudo-label samples in the training set, respectively, y_i^(j) and ŷ_i^(j) denote the label and the predicted label of the i-th training sample, and j = 1 or 2 indicates whether the sample is an original sample or a pseudo-label sample.
Specifically, step S5 is as follows:
S501. Feed into a conditional random field the 5-dimensional PCA-reduced convolved feature map of step S3 and the classification result obtained by feeding the normalized hyperspectral data of step S4 into the network;
S502. Obtain the classification result of one training run through the conditional random field;
S503. Repeat the above pseudo-sample expansion and network training m times on the same training samples to obtain m classification results, and for each pixel output the predicted class label that occurs most often as the final prediction.
Further, in step S501, the energy function of the conditional random field is:

E(y) = Σ_i ψ_u(y_i) + Σ_{i<j} ψ_p(y_i, y_j)

where ψ_u(y_i) and ψ_p(y_i, y_j) are the unary potential term and the pairwise potential term, respectively.
Compared with the prior art, the present invention has at least the following beneficial effects:
The spatial-spectral fully convolutional hyperspectral image classification method based on superpixel sample expansion of the present invention uses the result of hyperspectral image segmentation to guide the generation of pseudo-label samples, effectively exploiting the prior information of the hyperspectral image to expand the training set, so that good classification accuracy is maintained even with few samples. Feature extraction combining space and spectrum extracts the spectral and spatial-spectral characteristics of the hyperspectral image more fully, improving classification accuracy. In the spatial feature extraction module, dilated convolutions with different dilation rates realize multi-scale feature fusion, extracting spatial features of the hyperspectral image at multiple scales. A voter applied before the final classification result strengthens the robustness of the whole structure, making the classification results more stable and reliable. The fully convolutional neural network introduces no fully connected layer and can therefore perform end-to-end classification of hyperspectral images of arbitrary size; its input is preprocessed hyperspectral data with the same size as the original image, avoiding the highly redundant training data caused by the per-pixel patch inputs common in existing methods.
Further, obtaining segmentation labels with entropy-rate superpixels effectively exploits the prior information of the hyperspectral image: samples similar to the training samples are added to the training set without their class labels being known, effectively alleviating the scarcity of labeled training samples in hyperspectral imagery.
Further, the fully convolutional neural network used to build the spatial-spectral network for hyperspectral classification has no fully connected layer, so it conveniently accepts hyperspectral images of any size as input. The spatial-spectral combination merges the spectral and spatial information of the hyperspectral image into new features that outperform either spatial or spectral features alone.
Further, the spectral module extracts spectral features with successive 1×1 convolutional layers, affecting the spatial information as little as possible, and its residual structure preserves gradient information so that the model converges better.
Further, the spatial-spectral module uses dilated convolutions with different dilation rates, which enlarge the receptive field while extracting multi-scale spatial information.
Further, the loss function of the fully convolutional network is adapted to the proposed pseudo-label sample expansion: adding the cross-entropy of the pseudo-label samples to the loss makes the network converge better.
Further, to counter the instability that the limitations of the pseudo-label expansion can introduce, voting over multiple training runs to obtain the final classification map effectively increases the robustness of the model.
In summary, the spatial-spectral fully convolutional hyperspectral image classification method based on superpixel sample expansion proposed by the present invention effectively uses the prior information of the hyperspectral image to realize pseudo-sample expansion, solving the scarcity of labeled hyperspectral samples; at the same time, the spatial-spectral fully convolutional classification network fully exploits multi-scale spatial and spectral features to achieve high classification accuracy.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
Fig. 1 is a flowchart of the implementation of the present invention;
Fig. 2 is a flowchart of the pseudo-label sample expansion of the present invention;
Fig. 3 shows the multi-scale spatial feature fusion module of the present invention.
Detailed Description of Embodiments
The present invention provides a spatial-spectral fully convolutional hyperspectral image classification method based on superpixel sample expansion: input the hyperspectral image; obtain the training set and test set; apply principal component analysis to the hyperspectral image and reduce its dimensionality; apply entropy-rate superpixel segmentation to the reduced image; generate pseudo-label samples; update the training set; preprocess the hyperspectral data; feed it into the fully convolutional neural network; train the network and classify the hyperspectral image; repeat the above operations and vote; output the hyperspectral classification result. The invention uses the entropy-rate superpixel segmentation result to expand pseudo-label samples, making full use of the spatial prior information of the hyperspectral image, increasing the number of samples, alleviating network overfitting, and effectively improving the accuracy, efficiency and performance of hyperspectral image classification under small-sample conditions.
Referring to Fig. 1, the spatial-spectral fully convolutional hyperspectral image classification method based on superpixel sample expansion of the present invention comprises the following steps:
S1. Input the hyperspectral image PaviaU, the three-dimensional hyperspectral image used in the experiments of the present invention, and obtain training samples X_t and test samples X_e from it;
S101. Denote the three-dimensional hyperspectral image PaviaU as X ∈ R^{U×V×C}, where U, V and C are the spatial length, spatial width and number of spectral channels of the hyperspectral image, respectively. The image contains N pixels, each with C spectral bands, where N = U×V. For the PaviaU data set, N = 207,400, U = 610, V = 340 and C = 103; there are 42,776 samples with class labels 1 to 9. X is normalized so that its values lie in [0, 1]:

x' = (x − x_min) / (x_max − x_min);
S102. Randomly select 30 samples of each of the class labels 1 to 9 in X to form the initial training sample set X_t; the remainder serve as test samples X_e.
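As an illustrative sketch (not part of the patent text), steps S101 and S102 can be expressed in NumPy as follows; the function names, the random seed and the toy array shapes are assumptions, while the [0, 1] min-max normalization and the 30-per-class split follow the description above.

```python
import numpy as np

def normalize(X):
    """Min-max normalize a hyperspectral cube to [0, 1] (step S101)."""
    return (X - X.min()) / (X.max() - X.min())

def split_train_test(labels, per_class=30, num_classes=9, seed=0):
    """Step S102: randomly pick `per_class` pixels of each class 1..num_classes
    as training positions; all remaining labeled pixels become test positions.
    `labels` is an (H, W) map with 0 meaning unlabeled."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in range(1, num_classes + 1):
        pos = rng.permutation(np.flatnonzero(labels.ravel() == c))
        train_idx.extend(pos[:per_class])
        test_idx.extend(pos[per_class:])
    return np.array(train_idx), np.array(test_idx)

# Toy usage (PaviaU itself is 610 x 340 x 103 with 30 samples per class):
X = np.random.default_rng(0).random((10, 10, 5))
Xn = normalize(X)
labels = np.repeat(np.arange(10), 10).reshape(10, 10)  # 10 pixels per class; 0 = unlabeled
tr, te = split_train_test(labels, per_class=3)
```

With 10 pixels per class and 3 training pixels per class, 9 × 3 = 27 positions go to training and 9 × 7 = 63 to testing; unlabeled pixels (class 0) appear in neither set.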
S2. After normalizing the hyperspectral data set, use entropy-rate superpixels to generate segmentation labels, from which pseudo-label samples are generated;
S201. Apply PCA dimensionality reduction to the three-dimensional hyperspectral image; the reduced image has one channel;
S202. Apply entropy-rate superpixel segmentation to the PCA-reduced image, partitioning it into 50 blocks and obtaining the segmentation label matrix S ∈ R^{U×V};
S203. Let the ground-truth label matrix be Y ∈ R^{U×V} and the segmentation label matrix be S ∈ R^{U×V}. The true label of the training sample at (x_0, y_0) is Y(x_0, y_0) and its segmentation label is S(x_0, y_0). In the 7×7 window centered at the training sample, select any n samples (x, y) satisfying S(x, y) = S(x_0, y_0) and assign them the pseudo-label Y(x, y) = Y(x_0, y_0). The training set is expanded with the pseudo-label samples meeting this criterion, so the number of training samples becomes n+1 times the original, while the test samples remain unchanged, as shown in Fig. 2.
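The pseudo-label expansion of step S203 can be sketched as follows. This is an illustrative assumption of one possible implementation: the entropy-rate superpixel segmentation itself is not reproduced here, and a precomputed segmentation map `seg` is taken as given; the function name and seed are invented for the example.

```python
import numpy as np

def expand_pseudo_labels(y_true, seg, train_pos, n=4, half=3, seed=0):
    """For each training pixel (x0, y0), pick up to n pixels inside the
    (2*half+1) x (2*half+1) window (7x7 for half=3) that share its superpixel
    label seg[x0, y0], and give them the pseudo-label y_true[x0, y0]."""
    rng = np.random.default_rng(seed)
    U, V = seg.shape
    pseudo = {}
    for (x0, y0) in train_pos:
        cands = [(x, y)
                 for x in range(max(0, x0 - half), min(U, x0 + half + 1))
                 for y in range(max(0, y0 - half), min(V, y0 + half + 1))
                 if (x, y) != (x0, y0) and seg[x, y] == seg[x0, y0]]
        for i in rng.permutation(len(cands))[:n]:
            x, y = cands[i]
            pseudo[(x, y)] = y_true[x0, y0]
    return pseudo

# Toy usage: two superpixels split at column 4, one training pixel of class 3.
seg = np.zeros((8, 8), dtype=int)
seg[:, 4:] = 1
y_true = np.zeros((8, 8), dtype=int)
y_true[2, 2] = 3
pseudo = expand_pseudo_labels(y_true, seg, [(2, 2)], n=4)
```

Every generated pseudo-label carries the training pixel's class (3 here) and lies inside the training pixel's superpixel, which is exactly the selection criterion of S203.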
S3. Construct a spatial-spectral fully convolutional neural network for hyperspectral classification;
S301. The spectral feature extraction module consists of three convolutional layers, each followed by a ReLU activation function and batch normalization. The batch normalization parameter is momentum = 0.8; all convolution kernels have size 1 and stride 1, and the number of channels after every convolution is 64. After three successive convolutions, the convolution results are summed to obtain the spectral feature map.
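A minimal NumPy sketch of the spectral module in S301, under stated simplifications: a 1×1 convolution is just a per-pixel linear map across channels, batch normalization is omitted for brevity, and the channel counts are shrunk from the 64 used in the text; all names are illustrative.

```python
import numpy as np

def conv1x1_relu(F, W, b):
    """1x1 convolution = per-pixel linear map across channels, then ReLU.
    F: (H, W, C_in), W: (C_in, C_out), b: (C_out,)."""
    return np.maximum(F @ W + b, 0.0)

def spectral_module(F, params):
    """Three successive 1x1 conv + ReLU layers; the three layer outputs are
    summed into the spectral feature map (batch normalization omitted)."""
    outs, x = [], F
    for W, b in params:
        x = conv1x1_relu(x, W, b)
        outs.append(x)
    return sum(outs)

rng = np.random.default_rng(0)
c_in, c = 5, 8  # the text uses 64 output channels; smaller here for the demo
params = [(0.1 * rng.standard_normal((c_in if i == 0 else c, c)), np.zeros(c))
          for i in range(3)]
feat = spectral_module(rng.standard_normal((4, 4, c_in)), params)
```

Because each layer is a 1×1 convolution, the spatial dimensions are untouched and only the channel dimension changes, which matches the module's goal of extracting spectral features with minimal effect on spatial information.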
S302. The spatial-spectral feature extraction module consists of three convolutional layers. The first convolutional layer has kernel size 1 and stride 1. The second layer realizes multi-scale feature fusion: as shown in Fig. 3, it is the sum of three dilated convolutions with kernel size 3, dilation rates 2, 3 and 4, and stride 1. The third convolutional layer is a dilated convolution with kernel size 3, dilation rate 2 and stride 1, followed by a 2×2 average pooling layer. Each convolution is followed by a ReLU activation function and batch normalization.
All batch normalization parameters are momentum = 0.8, and all convolution outputs have 64 channels. After the three successive convolutions, the convolution results are summed to obtain the spatial-spectral feature map.
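The multi-scale fusion layer of S302 (Fig. 3) can be sketched single-channel as below. This is an illustrative assumption, not the patent's implementation: one channel instead of 64, no ReLU or batch normalization, and invented function names; the sum over dilation rates 2, 3 and 4 follows the text.

```python
import numpy as np

def dilated_conv3x3(img, kernel, rate):
    """'Same'-padded 3x3 dilated convolution on a single-channel image;
    the effective receptive field is (2*rate + 1) x (2*rate + 1)."""
    H, W = img.shape
    p = rate
    padded = np.pad(img, p)
    out = np.zeros((H, W))
    for ki in range(3):
        for kj in range(3):
            di, dj = (ki - 1) * rate, (kj - 1) * rate
            out += kernel[ki, kj] * padded[p + di:p + di + H, p + dj:p + dj + W]
    return out

def multiscale_fusion(img, kernels, rates=(2, 3, 4)):
    """Multi-scale spatial feature fusion (Fig. 3): sum the outputs of three
    3x3 dilated convolutions with dilation rates 2, 3 and 4."""
    return sum(dilated_conv3x3(img, k, r) for k, r in zip(kernels, rates))

img = np.arange(16.0).reshape(4, 4)
center = np.zeros((3, 3))
center[1, 1] = 1.0  # identity kernel: each branch returns the input unchanged
fused = multiscale_fusion(img, [center, center, center])
```

With the identity kernel in all three branches the fused output is simply 3 × img, which makes the additive fusion of the three dilation branches easy to verify.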
S303. The spectral feature map and the spatial-spectral feature map are weighted and summed:

C_unite = λ_spectral · C_spectral + λ_spatial · C_spatial

where C_unite is the weighted feature map, still with 64 channels, λ_spectral and λ_spatial are the weight coefficients of the trainable spectral and spatial features in the network, and C_spectral and C_spatial are the spectral and spatial-spectral feature maps, respectively;
S304. The combined spatial-spectral features pass through two 1×1 convolutional layers, each followed by ReLU activation; both kernels have size 1 and stride 1, the first convolution outputs 64 channels and the second 128;
S305. The convolved feature map is reduced to 5 dimensions by PCA for use in the subsequent CRF processing;
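The PCA step of S305 keeps the top five principal components of the per-pixel features; a sketch via SVD of the mean-centered pixel-feature matrix:

```python
import numpy as np

def pca_reduce(features, n_components=5):
    """Project an (H, W, C) feature map onto its top principal components."""
    H, W, C = features.shape
    X = features.reshape(-1, C)
    X = X - X.mean(axis=0)  # center each channel
    # right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:n_components].T).reshape(H, W, n_components)

reduced = pca_reduce(np.random.rand(20, 15, 64), 5)  # 64-channel map -> 5 components
```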
S306. A Softmax operation is applied to the convolved feature map to produce a 610×340×9 classification probability matrix; for each pixel, the index of the largest of the 9 values is output as the predicted class label, yielding a 610×340 classification result.
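The per-pixel classification of S306 can be sketched as a softmax over the class dimension followed by an argmax (9 classes; image size reduced for illustration):

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

logits = np.random.randn(61, 34, 9)  # stand-in for the final 9-channel conv output
probs = softmax(logits)              # per-pixel classification probabilities
labels = probs.argmax(axis=-1)       # predicted class label map
```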
S4. Construct the loss function of the fully convolutional neural network and train the network;
S401. The loss function is the cross entropy between the predicted labels and the labels of the expanded training samples, as shown below:
L = L_1 + L_2
where L is the final loss, and L_1 and L_2 are the cross-entropy terms over the labeled samples and the pseudo-labeled samples in the training set, respectively; y_i^(j) and ŷ_i^(j) denote the label and the predicted label of the i-th training sample, where j = 1 marks an original sample and j = 2 a pseudo-labeled sample;
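The two-term loss of S401 can be sketched as cross entropy evaluated separately over the original labeled samples and the superpixel-expanded pseudo-labeled samples (toy probabilities below are illustrative):

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true class; probs: (N, K), labels: (N,)."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

probs = np.full((6, 9), 0.1)  # toy predictions over 9 classes
probs[:, 0] = 0.2             # rows sum to 1.0
labels = np.zeros(6, dtype=int)
is_pseudo = np.array([0, 0, 0, 1, 1, 1], dtype=bool)  # which samples were expanded

L1 = cross_entropy(probs[~is_pseudo], labels[~is_pseudo])  # labeled samples
L2 = cross_entropy(probs[is_pseudo], labels[is_pseudo])    # pseudo-labeled samples
L = L1 + L2
```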
S402. Input the normalized hyperspectral data into the network and iterate 1000 times to generate a predicted label map.
S5. Vote over multiple training runs to obtain the final classification result map.
S501. Feed the outputs of steps S305 and S402 into a conditional random field, whose energy function is E(y) = Σ_i ψ_u(y_i) + Σ_{i<j} ψ_p(y_i, y_j),
where ψ_u(y_i) and ψ_p(y_i, y_j) are the unary term and the pairwise term, respectively.
In the present invention, the unary term is computed as ψ_u(y_i) = −log P(y_i), where P(y_i) is the label assignment probability of pixel i given by the proposed fully convolutional network.
The pairwise term is defined as

ψ_p(y_i, y_j) = μ(y_i, y_j) Σ_m ω_m k_m(f_i, f_j)

where μ(y_i, y_j) = 1 if y_i = y_j and zero otherwise; k_m is a Gaussian kernel; f_i and f_j are the feature vectors of pixels i and j in an arbitrary feature space; and ω_m is the corresponding weight. To fully exploit the deep spectral-spatial features, the first five principal components of C_unite from S305 are used as the feature of each pixel. The full form of the Gaussian kernels is

k(f_i, f_j) = ω_1 exp(−‖p_i − p_j‖²/(2θ_α²) − ‖I_i − I_j‖²/(2θ_β²)) + ω_2 exp(−‖p_i − p_j‖²/(2θ_γ²))

where p_i and p_j are pixel positions, I_i and I_j are the per-pixel PCA features, and θ_α, θ_β, θ_γ are kernel bandwidths.
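A sketch of one common form of the dense-CRF Gaussian kernels: an appearance kernel over pixel position plus the 5-dimensional PCA features, and a smoothness kernel over position alone. The bandwidths and weights below are hypothetical placeholders:

```python
import numpy as np

def pairwise_kernel(p_i, p_j, f_i, f_j, w1=1.0, w2=1.0,
                    theta_a=3.0, theta_b=1.0, theta_g=3.0):
    """k(f_i, f_j): appearance kernel (position + PCA features) plus smoothness kernel."""
    d_pos = np.sum((p_i - p_j) ** 2)
    d_feat = np.sum((f_i - f_j) ** 2)
    appearance = w1 * np.exp(-d_pos / (2 * theta_a**2) - d_feat / (2 * theta_b**2))
    smoothness = w2 * np.exp(-d_pos / (2 * theta_g**2))
    return appearance + smoothness

# identical pixels get the maximal kernel value w1 + w2
k_same = pairwise_kernel(np.zeros(2), np.zeros(2), np.zeros(5), np.zeros(5))
k_far = pairwise_kernel(np.zeros(2), np.array([100.0, 100.0]),
                        np.zeros(5), 10.0 * np.ones(5))
```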
S502. Obtain the classification result of one training run through the conditional random field;
S503. Repeat the above operations m times on the same training samples to obtain m classification results; for each pixel, output the predicted class label that occurs most often as the final prediction.
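The per-pixel majority vote of S503 over m label maps can be sketched as:

```python
import numpy as np

def majority_vote(label_maps):
    """label_maps: sequence of (H, W) integer predictions; returns the per-pixel mode."""
    maps = np.stack(label_maps)          # (m, H, W)
    n_classes = maps.max() + 1
    # count votes per class at each pixel, then pick the most frequent class
    counts = np.zeros((n_classes,) + maps.shape[1:], dtype=int)
    for cls in range(n_classes):
        counts[cls] = (maps == cls).sum(axis=0)
    return counts.argmax(axis=0)

runs = [np.array([[0, 1], [2, 2]]),
        np.array([[0, 1], [1, 2]]),
        np.array([[0, 0], [1, 2]])]
final = majority_vote(runs)
```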
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments generally described and illustrated in the drawings herein may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The three evaluation indicators of the simulation experiment are as follows:
The overall accuracy (OA) is the proportion of correctly classified samples among all samples; the average accuracy (AA) is the mean of the per-class classification accuracies; the Kappa coefficient weighs the entries of the confusion matrix to measure agreement beyond chance. For all three indicators, larger values indicate better classification.
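All three indicators can be computed from the confusion matrix; a minimal sketch:

```python
import numpy as np

def evaluate(y_true, y_pred, n_classes):
    """Overall accuracy (OA), average per-class accuracy (AA), and Kappa coefficient."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                       # correct / total
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))  # mean of per-class accuracies
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

oa, aa, kappa = evaluate([0, 0, 1, 1], [0, 0, 1, 0], 2)
```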
The prior-art classification method compared against in the present invention is as follows:
The hyperspectral image classification method proposed by W. Song et al. in "Hyperspectral Image Classification With Deep Feature Fusion Network," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 6, pp. 3173-3184, June 2018, referred to as the deep-feature-fusion DFFN method.
Table 1 is a quantitative analysis of the classification results of the present invention (using the PaviaU dataset, with 30 labeled samples per class as the training set):
In summary, the present invention provides a spatial-spectral fully convolutional hyperspectral image classification method based on superpixel sample expansion. It effectively uses the prior information of hyperspectral images to perform pseudo-sample expansion, alleviating the scarcity of labeled samples in hyperspectral imagery, while the spatial-spectral fully convolutional classification network fully exploits multi-scale spatial and spectral features to achieve high classification accuracy.
The above content merely illustrates the technical ideas of the present invention and does not limit its protection scope; any modification made on the basis of the technical solutions according to the technical ideas proposed by the present invention falls within the protection scope of the claims of the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010485713.7A CN111695467B (en) | 2020-06-01 | 2020-06-01 | Spatial Spectral Fully Convolutional Hyperspectral Image Classification Method Based on Superpixel Sample Expansion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111695467A CN111695467A (en) | 2020-09-22 |
CN111695467B true CN111695467B (en) | 2023-05-30 |
Family
ID=72479042
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112232137A (en) * | 2020-09-24 | 2021-01-15 | 北京航空航天大学 | Hyperspectral image processing method and device |
CN112699756B (en) * | 2020-12-24 | 2023-08-25 | 中国农业科学院农业信息研究所 | Hyperspectral image-based tea origin identification method and system |
CN112733769B (en) * | 2021-01-18 | 2023-04-07 | 西安电子科技大学 | Hyperspectral image classification method based on multiband entropy rate superpixel segmentation |
CN113052014B (en) * | 2021-03-09 | 2022-12-23 | 西北工业大学深圳研究院 | Hyperspectral Image Classification Method Based on Two-layer Spatial Manifold Representation |
CN112949592B (en) * | 2021-03-31 | 2022-07-22 | 云南大学 | Hyperspectral image classification method and device and electronic equipment |
CN113222867B (en) * | 2021-04-16 | 2022-05-20 | 山东师范大学 | Image data enhancement method and system based on multi-template image |
CN113239755B (en) * | 2021-04-28 | 2022-06-21 | 湖南大学 | A medical hyperspectral image classification method based on deep learning of spatial spectrum fusion |
CN113327231B (en) * | 2021-05-28 | 2022-10-14 | 北京理工大学重庆创新中心 | A method and system for detecting abnormal hyperspectral targets based on space-spectrum combination |
WO2023000160A1 (en) * | 2021-07-20 | 2023-01-26 | 海南长光卫星信息技术有限公司 | Hyperspectral remote sensing image semi-supervised classification method, apparatus, and device, and storage medium |
CN113516194B (en) * | 2021-07-20 | 2023-08-08 | 海南长光卫星信息技术有限公司 | Semi-supervised classification method, device, equipment and storage medium for hyperspectral remote sensing images |
CN113642655B (en) * | 2021-08-18 | 2024-02-13 | 杭州电子科技大学 | Small sample image classification method based on support vector machine and convolutional neural network |
CN113723255B (en) * | 2021-08-24 | 2023-09-01 | 中国地质大学(武汉) | Hyperspectral image classification method and storage medium |
CN113673607A (en) * | 2021-08-24 | 2021-11-19 | 支付宝(杭州)信息技术有限公司 | Method and device for training image annotation model and image annotation |
CN113822209B (en) * | 2021-09-27 | 2023-11-14 | 海南长光卫星信息技术有限公司 | Hyperspectral image recognition method and device, electronic equipment and readable storage medium |
CN113902013A (en) * | 2021-10-09 | 2022-01-07 | 黑龙江雨谷科技有限公司 | Hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation |
CN114049567B (en) * | 2021-11-22 | 2024-02-23 | 齐鲁工业大学 | Adaptive soft label generation method and application in hyperspectral image classification |
CN114332534B (en) * | 2021-12-29 | 2024-03-29 | 山东省科学院海洋仪器仪表研究所 | Hyperspectral image small sample classification method |
CN114972889A (en) * | 2022-06-29 | 2022-08-30 | 江南大学 | Wheat seed classification method based on data enhancement and attention mechanism |
CN118781432B (en) * | 2024-07-23 | 2025-02-11 | 中国兵工物资集团有限公司 | A hyperspectral image classification method based on refined spatial-spectral joint feature extraction |
CN118823486A (en) * | 2024-09-14 | 2024-10-22 | 北京观微科技有限公司 | Hyperspectral image classification method, device and electronic equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106097355A (en) * | 2016-06-14 | 2016-11-09 | 山东大学 | The micro-Hyperspectral imagery processing method of gastroenteric tumor based on convolutional neural networks |
CN109948693B (en) * | 2019-03-18 | 2021-09-28 | 西安电子科技大学 | Hyperspectral image classification method based on superpixel sample expansion and generation countermeasure network |
CN110321963B (en) * | 2019-07-09 | 2022-03-04 | 西安电子科技大学 | Hyperspectral image classification method based on fusion of multi-scale and multi-dimensional spatial spectral features |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111695467B (en) | Spatial Spectral Fully Convolutional Hyperspectral Image Classification Method Based on Superpixel Sample Expansion | |
CN111368896B (en) | Hyperspectral Remote Sensing Image Classification Method Based on Dense Residual 3D Convolutional Neural Network | |
CN111461258B (en) | Remote sensing image scene classification method of coupling convolution neural network and graph convolution network | |
US20220382553A1 (en) | Fine-grained image recognition method and apparatus using graph structure represented high-order relation discovery | |
CN111680176B (en) | Remote sensing image retrieval method and system based on attention and bidirectional feature fusion | |
CN107316013B (en) | Hyperspectral image classification method based on NSCT (non-subsampled Contourlet transform) and DCNN (data-to-neural network) | |
CN111652038A (en) | Remote sensing sea ice image classification method based on convolutional neural network | |
CN109063719B (en) | Image classification method combining structure similarity and class information | |
CN113705526A (en) | Hyperspectral remote sensing image classification method | |
CN112200090B (en) | Hyperspectral image classification method based on cross-grouping space-spectral feature enhancement network | |
CN111259828A (en) | High-resolution remote sensing image multi-feature-based identification method | |
CN112232151B (en) | Iterative polymerization neural network high-resolution remote sensing scene classification method embedded with attention mechanism | |
CN111310666A (en) | High-resolution image ground feature identification and segmentation method based on texture features | |
CN107590515A (en) | The hyperspectral image classification method of self-encoding encoder based on entropy rate super-pixel segmentation | |
CN115457311B (en) | Hyperspectral remote sensing image band selection method based on self-expression transfer learning | |
CN113420775A (en) | Image classification method under extremely small quantity of training samples based on adaptive subdomain field adaptation of non-linearity | |
CN113642445A (en) | Hyperspectral image classification method based on full convolution neural network | |
CN105046272A (en) | Image classification method based on concise unsupervised convolutional network | |
CN118537727A (en) | Hyperspectral image classification method based on multi-scale cavity convolution and attention mechanism | |
CN111626267A (en) | Hyperspectral remote sensing image classification method using void convolution | |
CN109002771B (en) | Remote sensing image classification method based on recurrent neural network | |
CN115512096A (en) | CNN and Transformer-based low-resolution image classification method and system | |
CN112949771A (en) | Hyperspectral remote sensing image classification method based on multi-depth multi-scale hierarchical attention fusion mechanism | |
CN109034213A (en) | Hyperspectral image classification method and system based on joint entropy principle | |
CN116152556A (en) | Hyperspectral image classification method, hyperspectral image classification system, hyperspectral image classification equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||