CN116152666A - An Adaptive Learning Method for Cross-Domain Remote Sensing Images Considering the Heterogeneity of Surface Object and Phenology - Google Patents
- Publication number
- CN116152666A (application number CN202310258590.7A)
- Authority
- CN
- China
- Prior art keywords
- domain
- style
- samples
- seg
- category
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
Description
Technical Field
The present invention belongs to the technical field of domain-adaptive learning, and in particular relates to a cross-domain adaptive learning method for remote sensing images that takes the heterogeneity of ground-object phenology into account.
Background Art
Owing to its wide coverage, high spatiotemporal resolution, and rich information content, remote sensing has become an important means of studying land-cover classification. In the context of global economic integration, land-cover classification is no longer limited to a single country but has expanded to regional or global scales; traditional manual interpretation and classification methods are labor-intensive and slow to update, and fall far short of the needs of modern map making. In recent years, advances in artificial intelligence technologies such as deep learning have driven the development of intelligent remote sensing image interpretation, and a large number of data-driven, deep-learning-based interpretation methods have emerged.
However, the good performance of existing data-driven deep learning models for remote sensing imagery requires that the test data and training data satisfy the independent-and-identically-distributed assumption. In real multi-temporal remote sensing image classification scenarios, the labeled training data (source domain) and the unlabeled test data (target domain) often come from different data distributions and exhibit significant differences in visual style, so a source-domain model performs poorly on the target domain. To address this problem, much recent research has used deep neural networks to learn mappings between image domains: the style of remote sensing images in one domain is transferred to another specified domain so that the generated transfer samples are closer to the specified domain in visual style, thereby supporting cross-temporal domain-adaptive learning. Yang et al., in the non-patent document "FG-GAN: A Fine-Grained Generative Adversarial Network for Unsupervised SAR-to-Optical Image Translation," IEEE Trans. Geosci. Remote Sensing, vol. 60, pp. 1–11, 2022, doi:10.1109/TGRS.2022.3165371, propose integrating densely connected modules and residual modules into the generator and using a multi-scale discriminator to enhance the style transfer model's ability to represent the radiometric characteristics of remote sensing images. Tasar et al., in "DAugNet: Unsupervised, Multisource, Multitarget, and Life-Long Domain Adaptation for Semantic Segmentation of Satellite Images," IEEE Trans. Geosci. Remote Sensing, vol. 59, no. 2, pp. 1067–1081, Feb. 2021, doi:10.1109/TGRS.2020.3006161, describe image style with the per-channel statistics of image features and match the style of a target remote sensing image by simply adjusting the per-channel mean and variance of the input features through adaptive instance normalization. Zhang et al., in "Remote Sensing Image Translation via Style-Based Recalibration Module and Improved Style Discriminator," IEEE Geosci. Remote Sensing Lett., vol. 19, pp. 1–5, 2022, doi:10.1109/LGRS.2021.3068558, introduce a style-based feature recalibration module that assigns a learning weight to each feature channel according to the importance of its statistics to style transfer, so that the style transfer network can more quickly capture the style information in remote sensing images that matters most.
However, current techniques that use style-transfer samples for cross-temporal domain-adaptive learning implicitly assume that the visual-style changes of all objects in a remote sensing scene occur in the same direction: they focus mainly on reducing the radiometric differences introduced by the external imaging process and ignore the phenological differences among objects within the scene. Phenological differences affect the visual style of remote sensing imagery in two main ways. On the one hand, compared with phenology-insensitive objects such as artificial surfaces, phenology-sensitive objects are particularly subject to seasonal cycles and undergo morphological changes such as plant germination, leaf expansion, leaf coloring, and leaf fall. On the other hand, the phenological patterns of different objects are heterogeneous [3]: forests have a relatively long growing season and generally go through only one phenological cycle per year, whereas farmland crops have short growing seasons, and fields with two or even three harvests a year may go through multiple phenological cycles. Consequently, when these techniques are applied to target imagery of geographical areas with severe spectral mixing and relatively fragmented ecological landscapes, the results struggle to delineate the boundaries of phenology-sensitive objects and are especially prone to confusing phenology-sensitive objects (such as farmland) with phenology-insensitive objects (such as artificial surfaces).
Against this background, two main problems remain for techniques that use style-transfer samples for cross-temporal domain-adaptive learning: (1) the visual-style changes of a remote sensing scene are affected both by the radiometric differences arising from the external imaging process and by the phenological differences of objects within the scene, yet the style-transfer samples generated by existing techniques cannot simulate the heterogeneity of object phenology, which limits the classification performance of domain adaptation; (2) the style transfer network does not interact with the semantic segmentation network during feature learning, so the style-transfer samples convey insufficient information to the domain-adaptive learning process, limiting the model's domain-adaptive learning ability.
Summary of the Invention
In view of this, the framework proposed by the present invention includes a style transfer network M_st and a semantic segmentation network M_seg. Given a source-temporal dataset X_A with segmentation labels Y_A and an unlabeled target-temporal dataset X_B, the goal is to train M_st to generate style-transfer samples that take the heterogeneity of ground-object phenology into account, to use these samples to train M_seg to reduce the difference in category-feature distribution between the source and target domains, and at the same time to construct a bidirectional optimization learning mechanism between M_st and M_seg that improves cross-temporal domain-adaptive learning and completes the semantic segmentation of X_B.
The present invention discloses a cross-domain adaptive learning method for remote sensing images that takes the heterogeneity of ground-object phenology into account. The method is applied to a style transfer network M_st and a semantic segmentation network M_seg and comprises the following steps:
given a source-temporal dataset X_A with segmentation labels Y_A and an unlabeled target-temporal dataset X_B, training M_st to generate style-transfer samples that take the heterogeneity of ground-object phenology into account;
using the style-transfer samples to train M_seg to reduce the difference in category-feature distribution between the source and target domains, while constructing a bidirectional optimization learning mechanism between M_st and M_seg to improve cross-temporal domain-adaptive learning and complete the semantic segmentation of X_B.
Furthermore, given an image feature map F and a ground-object segmentation map of the corresponding scale, a category feature map F_k is obtained for each ground-object category k; the category style of the image is represented by μ_k(F) and σ_k(F), the mean and variance of F_k taken along the channel dimension over the pixels belonging to category k.
The category-style parameter sets of the different image domains are denoted by {(β_k, γ_k)}, where μ_k denotes the mean, σ_k the variance, and β_k, γ_k the mathematical expectations of μ_k, σ_k. Assuming that the image domain contains N_k sampled pixels in total for category k, β_k and γ_k are estimated over those pixels by formula (2).
A subset of samples is first selected at random to initialize β_k, γ_k with formula (2); during model training, β_k, γ_k are then updated gradually with a moving average:
β_k ← λβ_k + (1-λ)μ_k(F)
γ_k ← λγ_k + (1-λ)σ_k(F)
where λ is the momentum coefficient, set to 0.9999;
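As an illustrative sketch (not part of the claimed method), the moving-average update above can be written in Python as follows; the function name and the (β_k, γ_k) tuple layout are assumptions of this sketch:

```python
# Hedged sketch: exponential moving average of the per-class style
# parameters, following beta_k <- lambda*beta_k + (1 - lambda)*mu_k(F)
# and gamma_k <- lambda*gamma_k + (1 - lambda)*sigma_k(F).

def ema_update(style, mu_k, sigma_k, momentum=0.9999):
    """style: (beta_k, gamma_k) running estimates for one class k."""
    beta_k, gamma_k = style
    beta_k = momentum * beta_k + (1.0 - momentum) * mu_k
    gamma_k = momentum * gamma_k + (1.0 - momentum) * sigma_k
    return beta_k, gamma_k
```

With momentum λ = 0.9999, each batch nudges the running style statistics only slightly, matching the gradual-update intent stated above.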
The embedded features are given a class-by-class regularization constraint through adaptive segmented instance normalization, defined as follows:
where the set of category-style parameters and the phenology-sensitivity factor w_k are used: w_k = 1 if category k is a phenology-sensitive ground object, otherwise w_k = 0; that is, style regularization is applied only to the category features of phenology-sensitive objects.
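Since formula (4) is not reproduced in the text, the following sketch assumes the standard adaptive-instance-normalization form, applied per category and gated by w_k; all function and variable names are illustrative, not the patent's:

```python
import numpy as np

# Hedged sketch of class-wise adaptive segmented instance normalization:
# phenology-sensitive classes (w_k = 1) are re-styled with the target
# domain's (beta_k, gamma_k); insensitive classes (w_k = 0) pass through.

def adasin(feat, seg, target_style, sensitive, eps=1e-5):
    """feat: (C, H, W) features; seg: (H, W) class ids;
    target_style: {k: (beta_k, gamma_k)}; sensitive: {k: 0 or 1} (w_k)."""
    out = feat.copy()
    for k, (beta_k, gamma_k) in target_style.items():
        mask = seg == k
        if not mask.any() or sensitive.get(k, 0) == 0:
            continue  # w_k = 0: leave phenology-insensitive classes alone
        region = feat[:, mask]                      # (C, N_k) class pixels
        mu = region.mean(axis=1, keepdims=True)     # channel-wise mean
        sigma = region.std(axis=1, keepdims=True)   # channel-wise std
        out[:, mask] = gamma_k * (region - mu) / (sigma + eps) + beta_k
    return out
```

In this sketch the re-styled region takes on the target domain's per-class statistics, while masked-out classes keep their source appearance, which is the behavior described for w_k above.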
Furthermore, the style transfer process that takes the heterogeneity of ground-object phenology into account includes:
samples X_A and X_B from the two domains are passed through the domain encoder to obtain embedded features F_A and F_B; F_A is combined with its segmentation map Y_A and the category-style parameters, and F_B is combined with its pseudo segmentation map and the corresponding style parameters;
style regularization is applied to the phenology-sensitive category features through the class-by-class regularization constraint to obtain F_AB and F_BA, which are then passed through the domain decoder to obtain the style-transfer samples X_AB and X_BA;
the style transfer network is trained with adversarial learning: G_A and G_B are defined as the generators and D_A and D_B as the discriminators of image domains X_A and X_B; each discriminator adopts a semantic segmentation network structure, which both distinguishes real samples from style-transfer samples and correctly classifies the ground objects in the real samples.
Furthermore, the adversarial loss function of the discriminator is defined as:
where y_A denotes the true segmentation map of a source-domain sample x_A, and the pseudo segmentation map of a target-domain sample x_B is predicted by M_seg (initially the source-domain model);
correspondingly, the adversarial loss function of the generator is defined as:
cross-style-transfer samples X_ABA and X_BAB are generated by translating X_AB back to domain A and X_BA back to domain B together with the corresponding segmentation maps; the cross-reconstruction consistency loss is therefore minimized:
meanwhile, the reconstructed samples X_AA and X_BB, generated by translating X_A and X_B within their own domains together with the corresponding segmentation maps, should remain consistent with the original samples, minimizing the self-reconstruction consistency loss:
the final objective loss function of the generator is defined as:
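As an illustration of how these terms combine (the weighting of the final objective is not spelled out in the text, so equal weights and an L1 reconstruction penalty, a common choice for cycle consistency, are assumed here):

```python
import numpy as np

# Hedged sketch of the generator objective: adversarial term plus the
# cross-reconstruction (X_A ~ X_ABA, X_B ~ X_BAB) and self-reconstruction
# (X_A ~ X_AA, X_B ~ X_BB) consistency terms. Equal weights are assumed.

def l1(a, b):
    return float(np.abs(a - b).mean())

def generator_objective(x_a, x_b, x_aba, x_bab, x_aa, x_bb, adv_loss):
    cross_recon = l1(x_a, x_aba) + l1(x_b, x_bab)  # cross-reconstruction
    self_recon = l1(x_a, x_aa) + l1(x_b, x_bb)     # self-reconstruction
    return adv_loss + cross_recon + self_recon
```

In practice each term would typically carry a tunable weight; the sketch keeps them equal only for brevity.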
Furthermore, the bidirectional model optimization in the bidirectional optimization learning mechanism includes two directions, (M_seg → M_st) and (M_st → M_seg); m denotes the m-th round of bidirectional optimization, the initial model is the source-domain model, and the target-domain pseudo segmentation maps are generated by it;
the (M_seg → M_st) direction trains the style transfer network, which takes the heterogeneity of ground-object phenology into account, with the target-domain pseudo labels predicted by the semantic segmentation network;
given the prediction p_B = M_seg(x_B) of M_seg on the target domain, a confidence threshold d is set to screen p_B, and high-confidence predictions are selected as pseudo labels for training M_st;
for a target pixel, its pseudo label is expressed as:
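The selection rule can be sketched as follows (the ignore index −1 for low-confidence pixels is an assumed convention of this sketch, not stated in the text):

```python
import numpy as np

# Hedged sketch of pseudo-label selection: a pixel keeps the argmax class
# of p_B as its pseudo label only when its confidence exceeds threshold d;
# low-confidence pixels are excluded from training (marked with -1 here).

def pseudo_labels(p_b, d=0.9, ignore_index=-1):
    """p_b: (K, H, W) per-class probabilities predicted by M_seg."""
    confidence = p_b.max(axis=0)       # max class score per pixel
    labels = p_b.argmax(axis=0)        # candidate pseudo label per pixel
    labels[confidence <= d] = ignore_index
    return labels
```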
the (M_st → M_seg) direction optimizes the semantic segmentation network with the style-transfer samples; M_seg is first trained with the source-domain data and its true segmentation maps:
then, given the style-transfer results of the target-domain samples produced by the trained M_st, p_B = M_seg(x_B) and p_BA = M_seg(x_BA) are obtained;
the predictions of M_seg for a target-domain sample and for its style-transfer result should be consistent, so the prediction-consistency loss function is minimized:
meanwhile, for the high-confidence region max(p_B) > d of p_B, the mutual-learning loss function of the transfer samples is minimized:
the objective loss function of semantic-segmentation domain adaptation is therefore defined as:
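A minimal sketch of the two target-domain terms; since the text does not give the formulas, an L1 consistency penalty and a cross-entropy mutual-learning term against the high-confidence pseudo labels are assumed here, and all names are illustrative:

```python
import numpy as np

# Hedged sketch of the target-domain terms of the segmentation objective.

def consistency_loss(p_b, p_ba):
    # predictions for x_B and its style-transfer result x_BA should agree
    return float(np.abs(p_b - p_ba).mean())

def mutual_learning_loss(p_b, p_ba, d=0.9, eps=1e-8):
    # cross-entropy of p_BA against the argmax pseudo labels of p_B,
    # restricted to pixels where max(p_B) > d
    mask = p_b.max(axis=0) > d
    if not mask.any():
        return 0.0
    labels = p_b.argmax(axis=0)[mask]                       # (N,)
    probs = p_ba[:, mask][labels, np.arange(labels.size)]   # picked probs
    return float(-np.log(probs + eps).mean())
```

The full objective would combine these with the supervised source-domain loss; the combination weights are not given in the text.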
The beneficial effects of the present invention are as follows:
(1) Starting from the heterogeneity of ground-object phenology, the method of the present invention designs an adaptive segmented instance normalization module (AdaSIN) that imposes class-by-class regularization constraints on the embedded features, so that the style transfer network can generate style-transfer samples that take this heterogeneity into account. Compared with style-transfer samples generated by traditional methods, these samples are more effective at reducing the differences in category-feature distribution between temporal domains, which improves the model's domain-adaptive learning and can be extended to other tasks with multi-temporal data, such as cross-domain scene classification, cross-domain semantic segmentation, and change detection.
(2) Owing to the design of the bidirectional optimization learning mechanism between the style transfer network and the semantic segmentation network, the method of the present invention strengthens the information interaction between the style transfer and semantic segmentation processes and further improves the model's domain-adaptive learning ability.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the style transfer process that takes the heterogeneity of ground-object phenology into account, in which rectangles and triangles represent phenology-sensitive categories and circles represent phenology-insensitive categories;
Fig. 2 is the pseudo-code of the bidirectional optimization algorithm for the style transfer network and the semantic segmentation network;
Fig. 3 shows the semantic segmentation maps of different domain-adaptive learning methods;
Fig. 4 is a sensitivity analysis of the confidence-threshold parameter.
Detailed Description
The present invention is further described below with reference to the accompanying drawings, without limiting it in any way; any transformation or substitution based on the teachings of the present invention falls within its scope of protection.
The framework proposed by the present invention includes a style transfer network M_st and a semantic segmentation network M_seg. Given a source-temporal dataset X_A with segmentation labels Y_A and an unlabeled target-temporal dataset X_B, the goal of the present invention is to train M_st to generate style-transfer samples that take the heterogeneity of ground-object phenology into account, to use these samples to train M_seg to reduce the difference in category-feature distribution between the source and target domains, and to construct a bidirectional optimization learning mechanism between M_st and M_seg that improves cross-temporal domain-adaptive learning and completes the semantic segmentation of X_B. The following first explains how the style-transfer samples are generated, and then introduces the bidirectional optimization learning mechanism of M_st and M_seg in detail.
(1) Style transfer that takes the heterogeneity of ground-object phenology into account
Given an image feature map F and a ground-object segmentation map of the corresponding scale, and starting from the heterogeneity of ground-object phenology, different ground-object categories in an image scene should have different style characteristics. Therefore, for category k, a category feature map F_k can be obtained; the present invention represents the category style of the image by μ_k(F) and σ_k(F), the mean and variance of F_k along the channel dimension.
The present invention denotes the category-style parameter sets of the different image domains by {(β_k, γ_k)}, where β_k, γ_k are the mathematical expectations of μ_k, σ_k. Assuming that the image domain contains N_k sampled pixels in total for category k, β_k and γ_k are estimated over those pixels by formula (2).
However, such a computation consumes too many computing resources and is not conducive to model training. Therefore, the present invention first selects a subset of samples at random to initialize β_k, γ_k with formula (2), and gradually updates β_k, γ_k with a moving average during training:
β_k ← λβ_k + (1-λ)μ_k(F)
γ_k ← λγ_k + (1-λ)σ_k(F)  (3)
where λ is the momentum coefficient, set to 0.9999. The present invention can therefore impose class-by-class regularization constraints on the embedded features through Adaptive Segmented Instance Normalization (AdaSIN), defined as follows:
where the set of category-style parameters and the phenology-sensitivity factor w_k are used: w_k = 1 if category k is a phenology-sensitive ground object, otherwise w_k = 0; that is, style regularization is applied only to the category features of phenology-sensitive objects.
The style transfer process that takes the heterogeneity of ground-object phenology into account is described as follows. As shown in Fig. 1, samples X_A and X_B from the two domains are passed through the domain encoder to obtain embedded features F_A and F_B; F_A is combined with its segmentation map Y_A and the category-style parameters, and F_B with its pseudo segmentation map and the corresponding style parameters; style regularization is then applied to the phenology-sensitive category features through formula (4) to obtain F_AB and F_BA, which are passed through the domain decoder to obtain the style-transfer samples X_AB and X_BA. To keep the category-feature distributions of X_A and the style-transfer sample X_BA, and of X_B and the style-transfer sample X_AB, as close as possible, the present invention trains the style transfer network with adversarial learning. G_A and G_B are defined as the generators and D_A and D_B as the discriminators of image domains X_A and X_B. Each discriminator adopts a semantic segmentation network structure and must both distinguish real samples from style-transfer samples and correctly classify the ground objects in the real samples. The adversarial loss function of the discriminator is therefore defined as:
where y_A is the true segmentation map of source-domain sample x_A, and the pseudo-segmentation map is predicted for target-domain sample x_B by M_seg (initialized as the source-domain model). Correspondingly, the adversarial loss function of the generator is defined as:
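The discriminator and generator loss formulas were likewise rendered as figures and are absent here. One common way to realize a segmentation-structured discriminator of the kind described, one that classifies real pixels into their ground-object classes and marks every pixel of a transferred sample as an extra "fake" class, is sketched below. The K+1-class design and all names are assumptions, not taken from the patent.

```python
import numpy as np

def pixel_ce(logits, labels):
    """Mean per-pixel cross-entropy: logits (K, H, W), labels (H, W) in [0, K)."""
    z = logits - logits.max(axis=0, keepdims=True)           # stabilized log-softmax
    logp = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    h, w = labels.shape
    return -logp[labels, np.arange(h)[:, None], np.arange(w)].mean()

def d_loss(d_real_logits, y_real, d_fake_logits, fake_class):
    """Discriminator: classify real pixels into their ground-object classes
    and label every pixel of a style-transfer sample as the 'fake' class."""
    fake_labels = np.full(d_fake_logits.shape[1:], fake_class)
    return pixel_ce(d_real_logits, y_real) + pixel_ce(d_fake_logits, fake_labels)

def g_loss(d_fake_logits, y_fake):
    """Generator: push the discriminator to assign transferred pixels their
    semantic labels instead of the 'fake' class."""
    return pixel_ce(d_fake_logits, y_fake)
```

Under this reading, minimizing `g_loss` drives the transferred samples' class-wise appearance toward what the discriminator accepts as real for each class, which is the distribution-matching goal stated above.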
To ensure that X_A and X_AB, and X_B and X_BA, remain semantically consistent after style transfer, cross-style-transfer samples X_ABA and X_BAB are first generated by transferring X_AB and X_BA back with the corresponding segmentation maps, and the cross-reconstruction consistency loss is minimized:
At the same time, the reconstructed samples X_AA and X_BB, generated from X_A and X_B with their own segmentation maps and style parameters, should also match the original samples, so the self-reconstruction consistency loss is minimized:
Therefore, the final objective loss function of the generator is defined as:
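With the consistency formulas also missing from the extracted text, the two reconstruction terms can be sketched as simple per-pixel L1 means, combined with the adversarial term into a generator objective. The weights below are illustrative placeholders, since the patent's weighting is not visible here.

```python
import numpy as np

def l1(a, b):
    """Mean absolute difference between two images."""
    return np.abs(a - b).mean()

def reconstruction_losses(x_a, x_aba, x_aa, x_b, x_bab, x_bb):
    """Cross-reconstruction consistency (X_A vs X_ABA, X_B vs X_BAB) and
    self-reconstruction consistency (X_A vs X_AA, X_B vs X_BB)."""
    l_cross = l1(x_a, x_aba) + l1(x_b, x_bab)
    l_self = l1(x_a, x_aa) + l1(x_b, x_bb)
    return l_cross, l_self

def generator_objective(l_adv, l_cross, l_self, lam_cross=10.0, lam_self=10.0):
    """Weighted sum of the generator's adversarial and consistency terms;
    the weights are illustrative, not from the patent."""
    return l_adv + lam_cross * l_cross + lam_self * l_self
```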
(2). Bidirectional optimization learning mechanism
Bidirectional model optimization runs in two directions: (M_seg → M_st) and (M_st → M_seg). The m-th round of bidirectional optimization is detailed in Figure 2; the initial model is the source-domain model, which generates the pseudo-segmentation maps for the target domain.
The (M_seg → M_st) direction trains the phenology-aware style-transfer network with target-domain pseudo-labels predicted by the semantic segmentation network. Given the prediction p_B = M_seg(x_B) on the target domain, a confidence threshold d is set to filter p_B, and only high-confidence predictions are selected as pseudo-labels for training M_st. For each target-domain pixel, its pseudo-label is expressed as:
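The pseudo-label formula above was a figure in the original. The thresholding it describes can be sketched as follows, assuming a softmax probability map and the common convention of an "ignore" value for pixels that fail the threshold; the names and the 255 convention are assumptions.

```python
import numpy as np

IGNORE = 255  # conventional "ignore" label: such pixels contribute no gradient

def pseudo_labels(prob, d=0.7):
    """prob: (K, H, W) softmax output of M_seg on a target-domain image.
    Keep the argmax class only where the top probability exceeds d."""
    conf = prob.max(axis=0)
    labels = prob.argmax(axis=0)
    labels[conf <= d] = IGNORE
    return labels
```

Raising d trades label coverage for label purity, which is exactly the tension examined in the threshold experiments reported later.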
The (M_st → M_seg) direction optimizes the semantic segmentation network with style-transfer samples. M_seg is first trained on the source-domain data and its true segmentation maps:
Then, given the style-transfer result x_BA produced by the trained M_st for a target-domain sample, both p_B = M_seg(x_B) and p_BA = M_seg(x_BA) are obtained. M_seg should predict consistently on a target-domain sample and on its style-transfer result, so the prediction-consistency loss function is minimized:
At the same time, on the high-confidence region of p_B (where max(p_B) > d), the mutual-learning loss function of the transfer samples is minimized:
Therefore, the objective loss function for semantic-segmentation domain adaptation is defined as:
To verify the effectiveness of the proposed method, comparative cross-temporal semantic segmentation experiments were conducted on the same dataset against two typical domain-adaptive learning techniques: a self-training-based method (CBST) and a method using style-transfer samples (DAugNet). Overall accuracy (OA), the Kappa coefficient (Kappa), and frequency-weighted intersection over union (FWIoU) serve as overall evaluation metrics, and per-class intersection over union (IoU) as the per-class metric. Quantitatively (see Table 1), CBST yields the smallest improvement over the baseline for cross-temporal segmentation, while the proposed method yields the largest. Compared with DAugNet, which uses conventional style-transfer samples, the proposed method performs domain-adaptive learning with style-transfer samples that account for object-phenology heterogeneity and improves OA, Kappa, and FWIoU by 3.58%, 5.35%, and 5.71%, respectively. Qualitatively (see Figure 3), the conventional domain-adaptive methods more readily confuse cultivated land with water bodies, whereas the proposed method uses phenology-aware style-transfer samples to reduce the class-wise feature-distribution differences between temporal domains and builds a bidirectional optimization mechanism between the style-transfer and semantic-segmentation networks, significantly reducing this confusion. This demonstrates a clear advantage in distinguishing phenology-sensitive objects (cultivated land) from phenology-insensitive objects (water bodies).
Table 1. Comparative semantic segmentation results of different domain-adaptive learning methods (%)
To verify the feasibility of the proposed method, experiments were conducted on a set of multi-temporal remote sensing images of Xiangtan, Hunan Province, China, acquired by the GF-2 sensor at 2 m resolution; the source-domain dataset was sampled in 2018 and the target-domain dataset in 2019. The source and target domains each contain 4232 remote sensing images (512×512 pixels) with six land-cover labels: no-data, cultivated land, forest, grassland, water body, and artificial surface. Source-domain images and labels together with target-domain images were used to train the domain-adaptive model; target-domain labels were used only for testing it. The proposed method was compared with the two typical domain-adaptive learning methods; as shown in Figure 3 and Table 1, it exhibits markedly stronger domain-adaptive learning ability. Three questions are further discussed: (1) the contribution of phenology-aware style-transfer samples to cross-temporal semantic segmentation of remote sensing images; (2) the role of the bidirectional optimization mechanism between the style-transfer and segmentation networks; and (3) the influence of the confidence threshold in target pseudo-label generation on model optimization.
Table 2. Ablation results for each component of the proposed method (%), where ST denotes pseudo-label self-training, AdaIN denotes adaptive instance normalization, AdaSIN denotes the proposed adaptive segmented instance normalization, and m is the number of bidirectional optimization rounds
First, the ablation results (see Table 2) show that introducing style-transfer samples into the domain-adaptive learning process yields a clearer improvement than pseudo-label self-training (ST) alone. However, the style-transfer samples generated by conventional methods with Adaptive Instance Normalization (AdaIN) only reduce radiometric differences between images while ignoring object-phenology heterogeneity, which limits the model's domain-adaptive learning ability. Starting from the regularities of that heterogeneity, the proposed Adaptive Segmented Instance Normalization (AdaSIN) imposes class-wise regularization constraints on the embedded features, so that the style-transfer network can generate phenology-aware style-transfer samples. Compared with AdaIN, AdaSIN improves the model's domain-adaptive learning ability and produces clear gains on all metrics.
Second, the proposed method feeds the output of the semantic segmentation network into the style-transfer network as pseudo-label information, forming a bidirectional optimization learning mechanism between the two networks. As Table 2 shows, when the number of bidirectional rounds m > 1, each additional round further improves the results, indicating that the mechanism strengthens the information exchange between the style-transfer process and the domain-adaptive learning process and further improves the model's adaptation ability.
Finally, to study the influence of the confidence threshold d in pseudo-label generation on model optimization, a series of experiments varied d over the range [0.4, 0.8]. As shown in Figure 4, the model performs best at d = 0.7. When d is too small (e.g., below 0.5), the pseudo-labels contain more erroneous information and model performance is weaker; between 0.6 and 0.7 the effect on optimization is not pronounced; when d is too large (e.g., above 0.7), performance drops slightly, likely because fewer pseudo-labels are generated, which limits retraining of the model.
The beneficial effects of the present invention are as follows:
(1) Starting from the regularities of object-phenology heterogeneity, the proposed adaptive segmented instance normalization module (AdaSIN) imposes class-wise regularization constraints on the embedded features, so that the style-transfer network can generate phenology-aware style-transfer samples. Compared with samples generated by conventional methods, these samples better reduce the class-wise feature-distribution differences between temporal domains, which benefits domain-adaptive learning and can be extended to other tasks with multi-temporal data, such as cross-domain scene classification, cross-domain semantic segmentation, and change detection.
(2) Through the design of the bidirectional optimization learning mechanism between the style-transfer and semantic-segmentation networks, the proposed method strengthens the information exchange between the style-transfer and segmentation processes and further improves the model's domain-adaptive learning ability.
As used herein, the word "preferred" means serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as more advantageous than other aspects or designs; rather, the word "preferred" is intended to present concepts in a concrete way. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless otherwise specified or clear from context, "X uses A or B" means any of the natural inclusive permutations: if X uses A, X uses B, or X uses both A and B, then "X uses A or B" is satisfied in any of these cases.
Moreover, although the present disclosure has been shown and described with respect to one or more implementations, equivalent variations and modifications will occur to those skilled in the art upon reading and understanding this specification and the accompanying drawings. The present disclosure includes all such modifications and variations and is limited only by the scope of the appended claims. In particular, with respect to the various functions performed by the above components (e.g., elements), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component that performs the specified function of the described component (i.e., that is functionally equivalent), even if not structurally equivalent to the disclosed structure that performs the function in the exemplary implementations shown herein. In addition, although a particular feature of the present disclosure may have been disclosed with respect to only one of several implementations, such a feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes", "having", "contains", or variants thereof are used in the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising".
The functional units in the embodiments of the present invention may be integrated into one processing module, may exist physically as separate units, or may be integrated into one module in groups of two or more. The integrated module may be implemented in the form of hardware or as a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc. Each of the above devices or systems may execute the storage method of the corresponding method embodiment.
In summary, the above embodiment is one implementation of the present invention, but the implementations of the present invention are not limited by it; any other changes, modifications, substitutions, combinations, or simplifications that do not depart from the spirit and principles of the present invention are equivalent replacements and fall within the protection scope of the present invention.
Publication: CN116152666A (published 2023-05-23)