CN109359623B - Hyperspectral image transfer classification method based on deep joint distribution adaptation network - Google Patents
Hyperspectral image transfer classification method based on deep joint distribution adaptation network
- Publication number
- CN109359623B (grant) · CN201811347067A (application)
- Authority
- CN
- China
- Prior art keywords
- probability distribution
- layer
- domain
- network
- hyperspectral
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
- 238000009826 distribution Methods 0.000 title claims abstract description 75
- 230000006978 adaptation Effects 0.000 title claims abstract description 65
- 238000000034 method Methods 0.000 title claims abstract description 27
- 238000012546 transfer Methods 0.000 title abstract description 9
- 238000012549 training Methods 0.000 claims abstract description 28
- 238000013508 migration Methods 0.000 claims abstract description 14
- 230000005012 migration Effects 0.000 claims abstract description 14
- 230000003595 spectral effect Effects 0.000 claims description 25
- 238000012360 testing method Methods 0.000 claims description 11
- 239000011159 matrix material Substances 0.000 claims description 8
- 238000010606 normalization Methods 0.000 claims description 3
- 230000006870 function Effects 0.000 claims description 2
- 238000013507 mapping Methods 0.000 claims description 2
- 230000000694 effects Effects 0.000 description 6
- 239000010426 asphalt Substances 0.000 description 4
- 238000004088 simulation Methods 0.000 description 4
- 244000025254 Cannabis sativa Species 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 2
- 239000011449 brick Substances 0.000 description 2
- 239000002131 composite material Substances 0.000 description 2
- 238000002474 experimental method Methods 0.000 description 2
- 238000013526 transfer learning Methods 0.000 description 2
- 238000013145 classification model Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 229910052500 inorganic mineral Inorganic materials 0.000 description 1
- land Substances 0.000 description 1
- 239000003550 marker Substances 0.000 description 1
- 239000002184 metal Substances 0.000 description 1
- 239000011707 mineral Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 238000010187 selection method Methods 0.000 description 1
- water Substances 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/194—Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
A hyperspectral image transfer classification method based on a deep joint distribution adaptation network comprises the steps of: inputting the hyperspectral images of a source domain and a target domain and performing feature normalization and dimension unification; combining the features of the source-domain and target-domain hyperspectral images; constructing a marginal probability distribution adaptation network to adapt the marginal probability distributions of the source-domain and target-domain hyperspectral images; selecting training samples from the source domain and a small number from the target domain according to the one-vs-rest classification principle; constructing conditional probability distribution adaptation networks to adapt the conditional probability distributions of the source-domain and target-domain hyperspectral images; and performing one-vs-rest classification on the target-domain hyperspectral image. The invention proposes a deep joint distribution adaptation network that achieves feature adaptation between the source-domain and target-domain hyperspectral images and reduces the difference between their joint probability distributions; at the same time, a one-vs-rest classification model improves intra- and inter-class discrimination and thereby the accuracy of hyperspectral image transfer classification.
Description
Technical Field
The invention belongs to the technical field of remote sensing image processing, and in particular relates to a hyperspectral image transfer classification method based on a deep joint distribution adaptation network.
Background Art
Hyperspectral images offer high spectral resolution, wide band coverage, and rich spectral detail of ground objects, which facilitates fine-grained land-cover analysis. Hyperspectral image classification is an important part of hyperspectral image interpretation and is widely used in mineral exploration, vegetation survey, agricultural monitoring, and other fields. Because hyperspectral data are voluminous and redundant, the same target exhibits spectral differences across different datasets, which degrades classification performance.
Hyperspectral image classification assigns class labels based on the acquired spectral signatures and other features, and mainly relies on supervised classification. Supervised classification models require a large number of training samples to achieve good results. Training samples are usually obtained by manual annotation, which is costly, and in practice labeled samples for newly acquired remote sensing data are often hard to obtain. To enable automatic classification, new remote sensing data should be classified with classifiers trained on other data. However, in hyperspectral image classification, differences in sensors, band coverage, and other factors limit a classifier's ability to transfer across datasets: a classifier trained on one dataset achieves high accuracy on that dataset but rarely achieves the same performance on other data.
To address the limited transferability in hyperspectral image classification, domain adaptation models based on transfer learning have been proposed to reduce the influence of differences between sensor data. For example, Persello C. et al., "Kernel-based domain-invariant feature selection in hyperspectral images for transfer learning", IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, 2016, proposed a kernel-based domain-invariant feature selection method that measures the stability of data transfer by computing the distance between the conditional probability distributions of the source and target domains in a reproducing kernel Hilbert space; experiments demonstrated the effectiveness of its feature selection. Zhou X. et al., "Deep feature alignment neural networks for domain adaptation of hyperspectral data", IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 10, 2018, proposed deep convolutional recurrent neural networks for feature learning and domain adaptation of source- and target-domain hyperspectral images; experiments showed that source-domain information can be exploited to improve target-domain classification accuracy. These methods, however, do not analyze the joint probability distribution of the source-domain and target-domain hyperspectral images and therefore do not exploit it to further improve transfer classification.
Summary of the Invention
The invention aims to provide a model that achieves better transfer classification performance, and proposes a hyperspectral image transfer classification method based on a deep joint distribution adaptation network.
Technical solution of the invention:
A hyperspectral image transfer classification method based on a deep joint distribution adaptation network comprises the following steps:
(1) Input the hyperspectral images of the source domain and the target domain, and perform feature normalization and dimension unification:
(1a) Normalize the spectral features of the two hyperspectral images of the source domain and the target domain, respectively, so that they lie between 0 and 1;
(1b) If the spectral dimensions of the two hyperspectral images differ, zero-pad the lower-dimensional image so that its dimension matches that of the higher-dimensional image;
(2) Combine the features of the source-domain and target-domain hyperspectral images:
(2a) Vectorize the spectral features of the source-domain and target-domain hyperspectral images, respectively;
(2b) Combine the spectral features of the source domain and the target domain into one vector set;
(3) Construct a three-layer marginal probability distribution adaptation network and adapt the marginal probability distributions of the source-domain and target-domain hyperspectral images:
(3a) Use one linear denoising encoder and one nonlinear encoder to construct the first layer of the marginal probability distribution adaptation network, solve for the weight of the linear denoising encoder, and adapt the source-domain and target-domain spectral features;
(3b) Use one linear denoising encoder and one nonlinear encoder to construct the second layer of the marginal probability distribution adaptation network, solve for the weight of the linear denoising encoder, and adapt the source-domain and target-domain spectral features;
(3c) Use one linear denoising encoder and one nonlinear encoder to construct the third layer of the marginal probability distribution adaptation network, solve for the weight of the linear denoising encoder, and adapt the source-domain and target-domain spectral features;
(4) Select training samples of the source-domain and target-domain hyperspectral images according to the one-vs-rest classification principle:
For a C-class problem, select source-domain hyperspectral image samples and p% of the target-domain hyperspectral image samples, and form C training sample sets according to the one-vs-rest principle; in each training set the sample label indicates whether the sample belongs to the corresponding class or not, and the sample features are the outputs of the marginal probability distribution adaptation network;
(5) Construct C three-layer conditional probability distribution adaptation networks and adapt the conditional probability distributions of the source-domain and target-domain hyperspectral images:
(5a) Initialize the weights and biases of the C three-layer conditional probability distribution adaptation networks, and pre-train the C networks layer by layer using the respective training sample sets;
(5b) After pre-training, take the hidden output of the third conditional probability distribution adaptation layer as the optimized sample features;
(5c) In each network, connect a softmax classifier after the third conditional probability distribution adaptation layer, feed the optimized features and class labels of the training samples into the classifier, and optimize the weights and biases of the softmax classifier;
(5d) Fine-tune the C conditional probability distribution adaptation networks from the top layer to the bottom layer to further optimize the parameters of each network;
(6) Perform one-vs-rest classification on the target-domain hyperspectral image:
(6a) Feed the test sample set of the target-domain hyperspectral image into the C conditional probability distribution adaptation networks; the optimized features are the hidden outputs of the third conditional probability distribution adaptation layer;
(6b) Classify the test samples with the C trained softmax classifiers to obtain the predicted probabilities of belonging and of not belonging to each class;
(6c) Compare the predicted probabilities of belonging to the respective classes and take the class with the maximum probability as the predicted label of each sample;
(6d) Output the classification result map of the target-domain hyperspectral image according to the predicted label vector and the spatial positions of the test samples.
Compared with the prior art, the invention mainly has the following advantages:
First, the invention proposes a deep joint distribution adaptation network that uses a marginal probability distribution adaptation network and conditional probability distribution adaptation networks to achieve feature adaptation between the source-domain and target-domain hyperspectral images and to reduce the difference between their joint probability distributions.
Second, the invention trains the classifiers with source-domain samples and only a small number of target-domain samples, reducing the number of target-domain training samples required; at the same time, by adopting a one-vs-rest classification model and learning multiple binary classifiers, it improves intra- and inter-class discrimination and thus the accuracy of transfer classification.
Brief Description of the Drawings
Fig. 1 is a flowchart of the hyperspectral image transfer classification method;
Fig. 2 shows the source-domain Pavia University hyperspectral data;
Fig. 3 shows the target-domain Pavia Center hyperspectral data;
Fig. 4 shows the classification result obtained with one-vs-rest softmax classifiers without transfer;
Fig. 5 shows the classification result obtained with the method of the invention.
Detailed Description of the Embodiments
The invention is described in detail below with reference to a specific example and the accompanying drawings.
As shown in Fig. 1, the hyperspectral image transfer classification method based on a deep joint distribution adaptation network comprises the following steps:
(1) Read the hyperspectral images of the source domain and the target domain, and perform feature normalization and dimension unification:
(1a) Linearly normalize the spectral features of the source-domain and target-domain hyperspectral images, respectively, so that they lie between 0 and 1;
(1b) If the spectral dimensions of the source-domain and target-domain hyperspectral images differ, zero-pad the lower-dimensional image so that its dimension matches that of the higher-dimensional image;
(2) Combine the features of the source-domain and target-domain hyperspectral images:
(2a) Vectorize the spectral features of the source-domain hyperspectral image as X_S and those of the target-domain hyperspectral image as X_T;
(2b) Combine the spectral features of the source domain and the target domain into one vector set X = [X_S X_T];
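As an illustrative sketch of steps (1)–(2), and not part of the claimed method, the following Python/NumPy code performs min–max normalization, zero-padding to a common spectral dimension, and the concatenation X = [X_S X_T]; the array names, the tiny dummy cube sizes, and the per-band scaling are assumptions.

```python
import numpy as np

def normalize_01(cube):
    """Linearly scale each spectral band of an (H, W, B) cube to [0, 1]."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
    lo, hi = flat.min(axis=0), flat.max(axis=0)
    return (flat - lo) / np.maximum(hi - lo, 1e-12)      # (H*W, B), one spectrum per row

def unify_dims(xs, xt):
    """Zero-pad the lower-dimensional spectra so both domains share one dimension."""
    b = max(xs.shape[1], xt.shape[1])
    pad = lambda x: np.pad(x, ((0, 0), (0, b - x.shape[1])))
    return pad(xs), pad(xt)

# tiny stand-ins for the Pavia University (103-band) and Pavia Center (102-band) cubes
source_cube = np.random.rand(8, 8, 103)
target_cube = np.random.rand(9, 9, 102)

xs = normalize_01(source_cube)            # X_S
xt = normalize_01(target_cube)            # X_T
xs, xt = unify_dims(xs, xt)               # 102 -> 103 bands by zero-padding
X = np.vstack([xs, xt]).T                 # X = [X_S  X_T], features x samples
n_s, n_t = xs.shape[0], xt.shape[0]       # sample counts N_S and N_T
```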
(3) Construct a three-layer marginal probability distribution adaptation network and adapt the marginal probability distributions of the source-domain and target-domain hyperspectral images:
(3a) Construct the first layer of the marginal probability distribution adaptation network using one linear denoising encoder and one nonlinear encoder:
For a sample x_i, M perturbed versions are generated by randomly setting individual feature dimensions to zero, the m-th perturbed version being x'_{i,m}; the linear denoising encoder maps x'_{i,m} linearly back to the original sample. The weight W is optimized with an objective whose first term is the average reconstruction error and whose second term is a marginal-distribution penalty weighted by a balance factor λ, where N_S and N_T denote the numbers of source-domain and target-domain hyperspectral samples and N = N_S + N_T is the total number of samples. Letting X'_m = [x'_{1,m}, x'_{2,m}, ..., x'_{N,m}] denote the m-th perturbed sample matrix, letting the original sample matrix be repeated M times, and letting X' = [X'_1, X'_2, ..., X'_M] denote the concatenation of the M perturbed sample matrices, the objective can be rewritten in matrix form in terms of a difference index matrix D, and the weight W of the linear denoising encoder then admits a closed-form solution.
A nonlinear encoder is then applied to map these features nonlinearly; its output is the output of the first layer of the marginal probability distribution adaptation network;
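The objective function, the difference index matrix, and the closed-form solution above are given as formulas in the original document and are not reproduced here; the sketch below implements one common formulation of such a layer — a marginalized linear denoising encoder with an MMD-style marginal penalty built from domain-membership weights, followed by a tanh nonlinear encoder — so the exact penalty form, the corruption probability, M, λ, and the tanh choice should all be read as assumptions.

```python
import numpy as np

def marginal_adaptation_layer(X, n_s, n_t, M=5, p=0.3, lam=1.0, seed=0):
    """One marginal-adaptation layer: closed-form linear denoising encoder W
    followed by a nonlinear (tanh) encoder.  X is (features, samples); the first
    n_s columns are source-domain samples, the remaining n_t are target-domain."""
    rng = np.random.default_rng(seed)
    d, N = X.shape
    # assumed difference index matrix D built from domain membership (+1/N_S, -1/N_T)
    delta = np.concatenate([np.full(n_s, 1.0 / n_s), np.full(n_t, -1.0 / n_t)])
    D = np.outer(delta, delta)
    Xbar = np.tile(X, (1, M))                                    # originals repeated M times
    Xm_list = [X * (rng.random(X.shape) > p) for _ in range(M)]  # M corrupted versions
    Xc = np.hstack(Xm_list)                                      # X' = [X'_1, ..., X'_M]
    P = Xbar @ Xc.T                                              # reconstruction cross-term
    penalty = sum(Xm @ D @ Xm.T for Xm in Xm_list)               # MMD-style marginal penalty
    Q = Xc @ Xc.T + lam * penalty
    W = P @ np.linalg.inv(Q + 1e-6 * np.eye(d))                  # closed-form encoder weight
    H = np.tanh(W @ X)                                           # nonlinear encoder output
    return W, H

# three stacked layers as in steps (3a)-(3c); X, n_s, n_t as in the sketch after step (2)
_, H1 = marginal_adaptation_layer(X, n_s, n_t)
_, H2 = marginal_adaptation_layer(H1, n_s, n_t)
_, H3 = marginal_adaptation_layer(H2, n_s, n_t)
```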
(3b) Following step (3a), use one linear denoising encoder and one nonlinear encoder to construct the second layer of the marginal probability distribution adaptation network, solve for the weight of the linear denoising encoder, adapt the source-domain and target-domain spectral features, and obtain the output of the second layer;
(3c) Following step (3a), use one linear denoising encoder and one nonlinear encoder to construct the third layer of the marginal probability distribution adaptation network, solve for the weight of the linear denoising encoder, adapt the source-domain and target-domain spectral features, and obtain the output of the third layer;
(4) Select training samples of the source-domain and target-domain hyperspectral images according to the one-vs-rest classification principle:
For the 9-class problem, select 500 samples per class from the source-domain hyperspectral image and 1% of the samples per class from the target-domain hyperspectral image, and form 9 training sample sets according to the one-vs-rest principle; in each training set the sample label indicates whether the sample belongs to the corresponding class (labeled 1) or not (labeled 0), and the sample features are the outputs of the third layer of the marginal probability distribution adaptation network;
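One way to realize this one-vs-rest sampling is sketched below, assuming class labels 1–9 and third-layer features with samples as columns; the helper name and the random selection strategy are illustrative, not taken from the document.

```python
import numpy as np

def build_one_vs_rest_sets(H_src, y_src, H_tgt, y_tgt, n_src=500, tgt_frac=0.01, C=9, seed=0):
    """Pool 500 source samples per class plus 1% of labeled target samples per class,
    then derive C binary label vectors (1 = belongs to class c, 0 = does not)."""
    rng = np.random.default_rng(seed)
    cols, labels = [], []
    for c in range(1, C + 1):
        s_pool = np.where(y_src == c)[0]
        s_idx = rng.choice(s_pool, min(n_src, len(s_pool)), replace=False)
        t_pool = np.where(y_tgt == c)[0]
        t_idx = rng.choice(t_pool, max(1, int(tgt_frac * len(t_pool))), replace=False)
        cols.append(np.hstack([H_src[:, s_idx], H_tgt[:, t_idx]]))
        labels.append(np.full(len(s_idx) + len(t_idx), c))
    X_train = np.hstack(cols)                                 # selected sample features
    y_train = np.concatenate(labels)                          # multi-class labels 1..C
    binary_labels = [(y_train == c).astype(int) for c in range(1, C + 1)]
    return X_train, binary_labels
```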
(5) Construct 9 three-layer conditional probability distribution adaptation networks and adapt the conditional probability distributions of the source-domain and target-domain hyperspectral images:
(5a) First initialize the weights and biases of the three-layer conditional probability distribution adaptation network, then pre-train each layer in turn with the c-th training sample set. In the k-th conditional probability distribution adaptation layer, the input is encoded into a hidden representation by a nonlinear encoding function f(·) with its encoding weight and bias, and decoded back into an output by a nonlinear decoding function g(·) with its decoding weight and bias, where the domain index denotes S (source domain) or T (target domain);
The weights and biases are pre-trained by minimizing an objective whose first term is the average reconstruction error on the training samples and whose second term is a weight penalty that prevents the weights from becoming too large, where N_train denotes the number of training samples and λ' the weight-penalty factor; this objective is solved with the back-propagation algorithm;
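A minimal sketch of the layer-wise pre-training in step (5a), assuming sigmoid encoding/decoding functions and plain gradient descent on the average reconstruction error plus the weight penalty λ'; the hidden size, learning rate, and epoch count are assumptions, not values from the document.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, n_hidden, lam=1e-4, lr=0.1, epochs=200, seed=0):
    """Pre-train one conditional-adaptation layer as an autoencoder:
    minimize mean reconstruction error + lam * ||W||^2 by gradient descent."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W1 = 0.01 * rng.standard_normal((n_hidden, d)); b1 = np.zeros((n_hidden, 1))
    W2 = 0.01 * rng.standard_normal((d, n_hidden)); b2 = np.zeros((d, 1))
    for _ in range(epochs):
        H = sigmoid(W1 @ X + b1)                       # encoding f(.)
        Xr = sigmoid(W2 @ H + b2)                      # decoding g(.)
        dZ2 = (Xr - X) * Xr * (1 - Xr) / n             # gradient of the mean reconstruction error
        dZ1 = (W2.T @ dZ2) * H * (1 - H)
        W2 -= lr * (dZ2 @ H.T + lam * W2); b2 -= lr * dZ2.sum(axis=1, keepdims=True)
        W1 -= lr * (dZ1 @ X.T + lam * W1); b1 -= lr * dZ1.sum(axis=1, keepdims=True)
    return W1, b1, sigmoid(W1 @ X + b1)                # weights, bias, hidden output for next layer
```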
(5b) After pre-training, the hidden output of the third conditional probability distribution adaptation layer is taken as the optimized sample features;
(5c) Connect a softmax classifier after the third conditional probability distribution adaptation layer, feed the optimized features and class labels of the training samples into the classifier, and optimize the weights and biases of the softmax classifier;
(5d) Fine-tune the conditional probability distribution adaptation network from the top layer to the bottom layer to further optimize the parameters of each layer. The objective for back-propagation fine-tuning of the network weights and biases has a first term equal to the total reconstruction error of the encoding layers and a second term equal to a conditional penalty that reduces the difference between the class-c conditional probability distributions of the source-domain and target-domain samples, normalized by the numbers of class-c source-domain and target-domain samples; this objective is likewise solved with the back-propagation algorithm, yielding the fine-tuned parameters of each conditional probability distribution adaptation layer and of the softmax classifier;
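The conditional penalty in the fine-tuning objective is given as a formula in the original document; a common surrogate for it — the squared distance between the class-c hidden-feature means of the two domains, computed from the class-c sample counts — can be sketched as follows and should be read as an assumed form.

```python
import numpy as np

def conditional_gap(H_src, y_src, H_tgt, y_tgt, c):
    """MMD-style surrogate for the class-c conditional-distribution difference:
    squared distance between the class-c mean hidden features of the two domains."""
    mu_s = H_src[:, y_src == c].mean(axis=1)   # mean over the class-c source samples
    mu_t = H_tgt[:, y_tgt == c].mean(axis=1)   # mean over the class-c target samples
    return float(np.sum((mu_s - mu_t) ** 2))

# during fine-tuning this term is added to the total reconstruction error of the
# encoding layers and the sum is minimized by back-propagation, as in step (5d)
```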
(5e) Following steps (5a)–(5d), train 9 three-layer conditional probability distribution adaptation networks with the 9 training sample sets, respectively, to achieve conditional probability distribution adaptation of the source-domain and target-domain hyperspectral images;
(6) Perform one-vs-rest classification on the target-domain hyperspectral image:
(6a) Feed the test sample set of the target-domain hyperspectral image into the c-th conditional probability distribution adaptation network; the optimized features are the hidden output of the third conditional probability distribution adaptation layer;
(6b) Classify the test samples with the trained softmax classifier to obtain the predicted probability of belonging to class c, computed from the corresponding weights and bias of the softmax classifier;
(6c) Following steps (6a)–(6b), feed the test sample set of the target-domain hyperspectral image into the 9 conditional probability distribution adaptation networks and softmax classifiers, respectively, to obtain the probability that each sample belongs to each class;
(6d) Compare the predicted probabilities of belonging to the respective classes and take the class with the maximum probability as the predicted label of each sample;
(6e) Output the classification result map of the target-domain hyperspectral image according to the predicted label vector and the spatial positions of the test samples.
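Steps (6a)–(6e) reduce to evaluating the nine binary softmax outputs and taking the class with the highest probability; a sketch follows, assuming two-row softmax parameters (row 0 = does not belong, row 1 = belongs) and pre-computed third-layer test features per network — these names stand in for the trained parameters of step (5).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def one_vs_rest_classify(test_feats, softmax_W, softmax_b, positions, map_shape):
    """test_feats[c]: third-layer hidden output of network c for the test samples.
    softmax_W[c], softmax_b[c]: trained parameters of the c-th softmax classifier.
    positions: (row, col) of each test sample; map_shape: size of the output map."""
    C = len(test_feats)
    prob = np.stack([softmax(softmax_W[c] @ test_feats[c] + softmax_b[c])[1]
                     for c in range(C)])                 # P(sample belongs to class c+1)
    labels = prob.argmax(axis=0) + 1                     # class with maximum probability
    result_map = np.zeros(map_shape, dtype=int)
    result_map[positions[:, 0], positions[:, 1]] = labels  # classification result map
    return labels, result_map
```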
The technical effect of the invention is illustrated below by simulation experiments:
1. Simulation conditions and content
The experimental data are the Pavia University hyperspectral data and the Pavia Center hyperspectral data, both acquired by the ROSIS sensor. As shown in Fig. 2, the source-domain Pavia University data are 610×340 pixels with 103 bands; Fig. 2(a) is the composite image of bands 57, 34, and 3, and Fig. 2(b) is the corresponding ground-truth map with 9 classes: asphalt, meadows, gravel, trees, metal sheets, bare soil, bitumen, bricks, and shadows. As shown in Fig. 3, the target-domain Pavia Center data are 1096×715 pixels with 102 bands; Fig. 3(a) is the composite image of bands 57, 34, and 3, and Fig. 3(b) is the corresponding ground-truth map with 9 classes: water, trees, meadows, bricks, bare soil, asphalt, bitumen, tiles, and shadows. Fig. 4 shows the classification result of the Pavia Center hyperspectral image obtained with one-vs-rest softmax classifiers without transfer, and Fig. 5 shows the transfer classification result of the Pavia Center hyperspectral image obtained with the method of the invention; Table 1 lists the correspondence between source-domain and target-domain training samples and compares the classification accuracies. In the simulation experiments, both the invention and the comparison method were implemented in Matlab R2017a.
2. Analysis of simulation results
Table 1. Comparison of classification accuracy
As can be seen from Table 1, transfer classification with the deep joint distribution adaptation network of the invention achieves higher accuracy than classification without transfer, which demonstrates the effectiveness of the method in transfer classification. Comparing Fig. 4 and Fig. 5, the classification map produced by the method of the invention contains fewer misclassified pixels and is closer to the ground-truth map. In summary, the method of the invention can effectively improve hyperspectral image transfer classification.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811347067.7A CN109359623B (en) | 2018-11-13 | 2018-11-13 | Hyperspectral image transfer classification method based on deep joint distribution adaptation network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811347067.7A CN109359623B (en) | 2018-11-13 | 2018-11-13 | Hyperspectral image transfer classification method based on deep joint distribution adaptation network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109359623A CN109359623A (en) | 2019-02-19 |
CN109359623B true CN109359623B (en) | 2021-05-11 |
Family
ID=65344913
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811347067.7A Expired - Fee Related CN109359623B (en) | 2018-11-13 | 2018-11-13 | Hyperspectral image transfer classification method based on deep joint distribution adaptation network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109359623B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109961096B (en) * | 2019-03-19 | 2021-01-05 | 大连理工大学 | Multimode hyperspectral image migration classification method |
CN110033045A (en) * | 2019-04-17 | 2019-07-19 | 内蒙古工业大学 | A kind of method and apparatus of trained identification image atomization |
CN110321941A (en) * | 2019-06-24 | 2019-10-11 | 西北工业大学 | The Compression of hyperspectral images and classification method of identifiable feature learning |
CN110598636B (en) * | 2019-09-09 | 2023-01-17 | 哈尔滨工业大学 | A Ship Target Recognition Method Based on Feature Migration |
CN111898635B (en) * | 2020-06-24 | 2024-12-24 | 华为技术有限公司 | Neural network training method, data acquisition method and device |
CN112016392B (en) * | 2020-07-17 | 2024-05-28 | 浙江理工大学 | A small sample detection method for soybean pest severity based on hyperspectral images |
CN112331313B (en) * | 2020-11-25 | 2022-07-01 | 电子科技大学 | Automatic grading method for sugar net image lesions based on label coding |
CN113030197B (en) * | 2021-03-26 | 2022-11-04 | 哈尔滨工业大学 | A kind of gas sensor drift compensation method |
CN113505856B (en) * | 2021-08-05 | 2024-04-09 | 大连海事大学 | Non-supervision self-adaptive classification method for hyperspectral images |
CN113947126B (en) * | 2021-09-07 | 2025-01-28 | 广东工业大学 | Ceramic tile color classification method and device based on transfer learning |
CN113947725B (en) * | 2021-10-26 | 2024-06-14 | 中国矿业大学 | Hyperspectral image classification method based on convolution width migration network |
CN114494762B (en) * | 2021-12-10 | 2024-09-24 | 南通大学 | Hyperspectral image classification method based on depth migration network |
CN115410088B (en) * | 2022-10-10 | 2023-10-31 | 中国矿业大学 | Hyperspectral image field self-adaption method based on virtual classifier |
CN119830774B (en) * | 2025-03-17 | 2025-05-23 | 中国电建集团成都勘测设计研究院有限公司 | Comprehensive monitoring method and system for hydropower projects based on digital twins |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011026167A1 (en) * | 2009-09-03 | 2011-03-10 | National Ict Australia Limited | Illumination spectrum recovery |
CN106530246A (en) * | 2016-10-28 | 2017-03-22 | 大连理工大学 | Image dehazing method and system based on dark channel and non-local prior |
CN107239759A (en) * | 2017-05-27 | 2017-10-10 | 中国科学院遥感与数字地球研究所 | A kind of Hi-spatial resolution remote sensing image transfer learning method based on depth characteristic |
JP2018081404A (en) * | 2016-11-15 | 2018-05-24 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Discriminating method, discriminating apparatus, discriminator generating method, and discriminator generating apparatus |
CN108171232A (en) * | 2017-11-15 | 2018-06-15 | 中山大学 | The sorting technique of bacillary and viral children Streptococcus based on deep learning algorithm |
CN108280396A (en) * | 2017-12-25 | 2018-07-13 | 西安电子科技大学 | Hyperspectral image classification method based on depth multiple features active migration network |
-
2018
- 2018-11-13 CN CN201811347067.7A patent/CN109359623B/en not_active Expired - Fee Related
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011026167A1 (en) * | 2009-09-03 | 2011-03-10 | National Ict Australia Limited | Illumination spectrum recovery |
CN106530246A (en) * | 2016-10-28 | 2017-03-22 | 大连理工大学 | Image dehazing method and system based on dark channel and non-local prior |
JP2018081404A (en) * | 2016-11-15 | 2018-05-24 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Discriminating method, discriminating apparatus, discriminator generating method, and discriminator generating apparatus |
CN107239759A (en) * | 2017-05-27 | 2017-10-10 | 中国科学院遥感与数字地球研究所 | A kind of Hi-spatial resolution remote sensing image transfer learning method based on depth characteristic |
CN108171232A (en) * | 2017-11-15 | 2018-06-15 | 中山大学 | The sorting technique of bacillary and viral children Streptococcus based on deep learning algorithm |
CN108280396A (en) * | 2017-12-25 | 2018-07-13 | 西安电子科技大学 | Hyperspectral image classification method based on depth multiple features active migration network |
Non-Patent Citations (2)
Title |
---|
Reduced Encoding Diffusion Spectrum Imaging Implemented With a Bi-Gaussian Model; Chun-Hung Yeh; IEEE Transactions on Medical Imaging; 2008-07-22; pp. 1415-1424 *
Hyperspectral remote sensing image classification method based on classifier ensembles; Fan Liheng; Acta Optica Sinica; 2014-09-30; pp. 99-109 *
Also Published As
Publication number | Publication date |
---|---|
CN109359623A (en) | 2019-02-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109359623B (en) | Hyperspectral image transfer classification method based on deep joint distribution adaptation network | |
CN106408030B (en) | SAR image classification method based on middle layer semantic attribute and convolutional neural networks | |
CN103886342B (en) | Hyperspectral image classification method based on spectrums and neighbourhood information dictionary learning | |
CN111401426B (en) | A Few-Sample Hyperspectral Image Classification Method Based on Pseudo-Label Learning | |
CN113723255A (en) | Hyperspectral image classification method and storage medium | |
CN103714354A (en) | Hyperspectral image wave band selection method based on quantum-behaved particle swarm optimization algorithm | |
CN111339827A (en) | SAR image change detection method based on multi-region convolutional neural network | |
CN109145832A (en) | Polarimetric SAR image semisupervised classification method based on DSFNN Yu non local decision | |
CN114399674A (en) | Hyperspectral image technology-based shellfish toxin nondestructive rapid detection method and system | |
CN106056627B (en) | A kind of robust method for tracking target based on local distinctive rarefaction representation | |
CN108460400A (en) | A kind of hyperspectral image classification method of combination various features information | |
CN117315381B (en) | Hyperspectral image classification method based on second-order biased random walk | |
CN105160351A (en) | Semi-monitoring high-spectral classification method based on anchor point sparse graph | |
Xia et al. | Land resource use classification using deep learning in ecological remote sensing images | |
CN114937173A (en) | Hyperspectral image rapid classification method based on dynamic graph convolution network | |
Jayapal et al. | Enhanced Disease Identification Model for Tea Plant Using Deep Learning. | |
CN116912550A (en) | Land utilization parallel classification method for heterogeneous convolution network remote sensing images based on ground object dependency relationship | |
CN108460326A (en) | A kind of high spectrum image semisupervised classification method based on sparse expression figure | |
CN104794483A (en) | Image division method based on inter-class maximized PCM (Pulse Code Modulation) clustering technology | |
CN105787045A (en) | Precision enhancing method for visual media semantic indexing | |
Ji et al. | CLGAN: A GAN-based video prediction model for precipitation nowcasting | |
CN116129280B (en) | Method for detecting snow in remote sensing image | |
Li et al. | Change detection in synthetic aperture radar images based on log-mean operator and stacked auto-encoder | |
CN117611908A (en) | Accurate crop classification using sky-ground integrated large-scene hyperspectral remote sensing images | |
CN108805029B (en) | A ground-based cloud image recognition method based on significant dual activation coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20210511 Termination date: 20211113 |
|
CF01 | Termination of patent right due to non-payment of annual fee |