CN113657455A - Semi-supervised learning method based on triple network and labeling consistency regularization


Info

Publication number
CN113657455A
Authority
CN
China
Prior art keywords
data
network
label
labeled
unlabeled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110837568.9A
Other languages
Chinese (zh)
Other versions
CN113657455B (en)
Inventor
蒋雯
苗旺
耿杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110837568.9A
Publication of CN113657455A
Application granted
Publication of CN113657455B
Active legal status (current)
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a semi-supervised learning method based on a triple network and label consistency regularization, comprising the following steps: Step 1, input an image data set and its corresponding label set; Step 2, preprocess the labeled and unlabeled data sets; Step 3, build and train a deep network with an adaptive vision mechanism to extract the deep features of the image data set; Step 4, build a Siamese network and use the deep features of the labeled and unlabeled data to obtain forward-propagation results and pseudo-labels; Step 5, use the forward-propagation results and pseudo-labels to construct loss functions for the labeled and unlabeled data and train the Siamese network in a semi-supervised manner. The invention constructs a triple network to train on data sets with insufficient labeled data. First, a generative adversarial network with an adaptive vision mechanism is built and trained without supervision on the image data set for more effective feature extraction, eliminating the discrepancy in feature extraction between labeled and unlabeled data of different classes. Then, a Siamese network is built and trained on the principle of label consistency, eliminating the discrepancy in feature discrimination between labeled and unlabeled data of the same class, reducing the number of network training parameters, and making more effective use of unlabeled data for semi-supervised learning.

Description

A Semi-Supervised Learning Method Based on a Triple Network and Label Consistency Regularization

Technical Field

The invention belongs to the technical field of deep learning, and in particular relates to a semi-supervised learning method based on a triple network and label consistency regularization.

Background Art

In recent years, deep learning has driven rapid progress in artificial intelligence and represents a paradigm-level breakthrough for informatization in fields such as intelligence, neural computation, and cognition. Deep learning is a branch of machine learning: a family of algorithms that use artificial neural networks as their architecture to learn representations of data. Applications of deep learning typically rely on large amounts of labeled data to learn fully supervised models and have achieved good results. However, such fully supervised learning is expensive and time-consuming, since the data must be labeled manually by researchers with relevant domain expertise. Moreover, because some image data sets exhibit high intra-class diversity and high inter-class similarity, they are difficult to label accurately.

Therefore, across the deep learning tasks that arise in practice, the diversity of data sources means that usually only a subset of the training set carries labels while the remaining data is unlabeled. This occurs in many kinds of tasks, and especially in image multi-class classification. When labeled supervision is insufficient, the model cannot fit adequately, which leads to discrepancies between the features extracted from labeled and unlabeled data; the correlations among the data cannot be fully exploited, and a model with strong generalization ability cannot be obtained.

Data labeling has long been a central research topic in computer vision and artificial intelligence. To improve the efficiency of deep learning models, the feature-extraction discrepancies introduced by insufficiently fitted models must be eliminated, which motivates research on label-consistent semi-supervised multi-class classification.

Existing techniques extract features from labeled and unlabeled data inconsistently. A label-consistent semi-supervised learning method is therefore urgently needed that can fully learn from the unlabeled data in a data set, facilitating subsequent deep learning multi-class classification tasks in practical scenarios with insufficient labeled data.

Summary of the Invention

The technical problem to be solved by the present invention is to address the deficiencies of the prior art by providing a semi-supervised learning method based on a triple network and label consistency regularization that is simple in structure and reasonable in design.

Considering that practical scenarios are characterized by incomplete information, acquired data commonly lacks annotations, resulting in severely insufficient supervision. This causes discrepancies between the features extracted from labeled and unlabeled data and limits the training effect and generalization ability of deep learning classification networks. To solve the above technical problems, the present invention adopts the following technical solution: a semi-supervised learning method based on a triple network and label consistency regularization, characterized by comprising the following steps:

Step 1. Input the image data set and its corresponding label set:

Step 101. Input an image data set $V = \{v_1, \dots, v_i, \dots, v_l\}$, divided into labeled data $X = \{x_1, \dots, x_f, \dots, x_n\}$ and unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$, where $v_i$ denotes the $i$-th image sample, $1 \le i \le l$, $1 \le f \le n$, $1 \le j \le m$, $l = m + n$, and $n$, $m$ and $l$ are all positive integers;

Step 102. Input the label set corresponding to the image set $V$: the labeled data $X = \{x_1, \dots, x_f, \dots, x_n\}$ carries the labels $p = \{p_1, \dots, p_f, \dots, p_n\}$, while the unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$ has no labels.

Step 2. Preprocess the labeled and unlabeled data sets:

Step 201. Apply data augmentation to the labeled image data $X$ and the unlabeled data $U$: the labeled data is augmented once to obtain the augmented data $X'$, and the unlabeled data is randomly augmented $K$ times to obtain the augmented data $U'$;

Step 202. Mix the data $X'$ and $U'$ and arrange them randomly to obtain the data combination $W$, where each augmented sample keeps the label of its source sample.

Step 3. Build and train a deep network with an adaptive vision mechanism to extract the deep features of the image data set:

Step 301. Build a generative adversarial network $G$, consisting of a data generator and a discriminator.

Step 302. Use self-convolution layers in the generative adversarial network and set up an adaptive convolution kernel generating function based on the principles of spatial specificity and frequency-domain independence. According to the input image features, the function outputs a convolution kernel of the same size as the feature map, controls the scaling ratio to adjust the parameter count, and scales the feature-map channels.

Step 303. Delete the labels of the labeled data in the image set $V$ and perform unsupervised learning of the generative adversarial network $G$ on the entire unlabeled image set $V$, so that the pseudo-data features produced by the generator approach real image features; the self-convolution layers enhance the feature representation capability of the discriminator.

Step 304. Use the discriminator $G_d$ of the trained generative adversarial network $G$ as the feature extractor $F_d$ for extracting the deep features $x_{labeled} = F_d(x_f)$ and $x_{unlabeled} = F_d(u_j)$ of the labeled target-image data $X = \{x_1, \dots, x_f, \dots, x_n\}$ and the unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$.

Step 4. Build a Siamese network and use the deep features of the labeled and unlabeled data to obtain forward-propagation results and pseudo-labels:

Step 401. Build two shallow classification networks $Net_1$ and $Net_2$ as a Siamese network and feed in the data combination $W$;

Step 402. For the labeled data, input the augmented data $X'$ and the corresponding labels $p$ to obtain the deep features $x_{labeled} = F_d(X')$, and predict with the Siamese network; the forward-propagation result is

$$p_d = w\, p_{d1} + (1 - w)\, p_{d2}$$

where $p_{d1}$ and $p_{d2}$ are the predictions of $Net_1$ and $Net_2$ that are combined, and $w$ is a hyperparameter;

Step 403. For the unlabeled data, input the augmented data $U'$ to obtain the deep features $x_{unlabeled} = F_d(U')$, predict with the Siamese network, and take the weighted average of the outputs as the forward-propagation result $p_n$:

$$p_n = \frac{1}{2}\left( P_{model}^{Net_1}(U';\theta) + P_{model}^{Net_2}(U';\theta) \right)$$

where $P_{model}^{Net_1}(U';\theta)$ and $P_{model}^{Net_2}(U';\theta)$ are the Siamese network's predictions on the unlabeled data, and $\theta$ denotes the network training parameters.

Step 404. Sharpen the predictions on the unlabeled data to obtain the pseudo-label $q$. The sharpening operation is

$$q = \mathrm{Sharpen}(\bar{p}, T), \qquad \mathrm{Sharpen}(\bar{p}, T)_c = \frac{\bar{p}_c^{\,1/T}}{\sum_{c'} \bar{p}_{c'}^{\,1/T}}, \qquad \bar{p} = \frac{1}{K}\sum_{k=1}^{K} P(u_k;\theta)$$

where $T$ is the sharpening parameter, $K$ is the number of augmentations, and $P(U;\theta)$ is the network's predicted probability for each class.

Step 405. Fuse the pseudo-labels $q$ predicted by the two branches of the Siamese network. Specifically, the fused pseudo-label is

$$q = \lambda\, q^{Net_1} + (1 - \lambda)\, q^{Net_2}$$

where $q^{Net_1}$ is the sharpened pseudo-label of network $Net_1$, $q^{Net_2}$ is the sharpened pseudo-label of network $Net_2$, and $\lambda$ follows a probability distribution set according to the actual data set.

Step 5. Use the forward-propagation results and pseudo-labels to construct loss functions for training on the labeled and unlabeled data, and train the Siamese network in a semi-supervised manner:

Step 501. Establish a semi-supervised label-consistency regularization loss function that computes, for each class, a regularization term for the difference between the labeled and unlabeled data, eliminating the discrepancy between labeled and unlabeled data of the same class, as follows:

$$Loss_{semi\text{-}supervised} = \frac{1}{num}\sum_{k=1}^{num}\left\| \overline{x_{labeled}^{\,class\text{-}k}} - \overline{x_{unlabeled}^{\,class\text{-}k}} \right\|_2^2$$

where $num$ is the number of classes, $x_{labeled}$ and $x_{unlabeled}$ are the deep features of the labeled and unlabeled image data, the overline denotes the per-class mean feature, and $class\text{-}k$ is the $k$-th class;

Step 502. For the augmented labeled data $X'$, establish the following loss function:

$$L_X = \frac{1}{|X'|}\sum_{(x,p)\in X'} H\big(p,\; P_{model}(y \mid x;\theta)\big)$$

Step 503. For the augmented unlabeled data $U'$, establish the following loss function:

$$L_U = \frac{1}{|U'|}\sum_{(u,q)\in U'} H\big(q,\; P_{model}(y \mid u;\theta)\big)$$

where $|X'|$ equals the number of samples per batch, $|U'|$ equals $K$ times the number of samples per batch, $H(\cdot,\cdot)$ is the cross-entropy function, $x, p$ are the augmented labeled data and their labels, and $u, q$ are the augmented unlabeled data and their pseudo-labels.

Step 504. The overall loss function $L$ is a weighted sum of the three terms:

$$L = L_X + \lambda_U L_U + \beta_U\, Loss_{semi\text{-}supervised}$$

where $\lambda_U$ and $\beta_U$ are hyperparameters. The Siamese network model is trained by iterating continuously on the overall loss function $L$ and is then used for classification testing.

Compared with the prior art, the present invention has the following advantages:

1. The present invention is simple in structure, reasonable in design, and convenient to implement and operate.

2. The present invention uses a generative adversarial network with an adaptive vision mechanism to learn from the data set without supervision and uses the trained model to extract deep features of the data set. This effectively eliminates the discrepancy in feature extraction between labeled and unlabeled data of different classes, makes feature extraction and selection more robust, preserves the integrity of the image information, and improves semi-supervised multi-class classification performance.

3. Based on the idea of label consistency, the present invention uses a Siamese network for semi-supervised learning, which effectively eliminates the discrepancy in feature discrimination between labeled and unlabeled data of the same class and avoids divergent classification results caused by feature differences, while the number of training parameters remains relatively small, giving higher effectiveness and accuracy.

In summary, the present invention is simple in structure and reasonable in design. It constructs a triple network to train on data sets with insufficient labeled data: first, a generative adversarial network with an adaptive vision mechanism is built and trained without supervision on the image data set for more effective feature extraction, eliminating the discrepancy in feature extraction between labeled and unlabeled data of different classes; then, a Siamese network is built and trained on the principle of label consistency, eliminating the discrepancy in feature discrimination between labeled and unlabeled data of the same class, reducing the number of network training parameters, and making more effective use of unlabeled data for semi-supervised learning.

The technical solution of the present invention is described in further detail below with reference to the accompanying drawing and embodiments.

Brief Description of the Drawings

FIG. 1 is a flow chart of the method of the present invention.

Detailed Description of the Embodiments

The method of the present invention is described in further detail below with reference to the accompanying drawing and the embodiments of the present invention.

It should be noted that, where no conflict arises, the embodiments of the present application and the features of the embodiments may be combined with one another. The present invention is described in detail below with reference to the accompanying drawing and in conjunction with the embodiments.

It should be noted that the terminology used herein serves only to describe specific embodiments and is not intended to limit the exemplary embodiments of the present application. As used herein, unless the context clearly indicates otherwise, singular forms are intended to include plural forms as well. Furthermore, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

It should be noted that the terms "first", "second", and the like in the description and claims of the present application and in the above drawing are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.

For ease of description, spatially relative terms such as "on", "above", "on the upper surface of", and "upper" may be used herein to describe the spatial relationship between one device or feature and other devices or features as shown in the figures. It should be understood that spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "under" the other devices or structures. Thus, the exemplary term "above" can encompass both the "above" and "below" orientations. The device may also be oriented in other ways (rotated 90 degrees or in other orientations), and the spatially relative descriptions used herein should be interpreted accordingly.

As shown in FIG. 1, the present invention comprises the following steps:

Step 1. Input the image data set and its corresponding label set:

Step 101. Input an image data set $V = \{v_1, \dots, v_i, \dots, v_l\}$, divided into labeled data $X = \{x_1, \dots, x_f, \dots, x_n\}$ and unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$, where $v_i$ denotes the $i$-th image sample, $1 \le i \le l$, $1 \le f \le n$, $1 \le j \le m$, $l = m + n$, and $n$, $m$ and $l$ are all positive integers;

Step 102. Input the label set corresponding to the image set $V$: the labeled data $X = \{x_1, \dots, x_f, \dots, x_n\}$ carries the labels $p = \{p_1, \dots, p_f, \dots, p_n\}$, while the unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$ has no labels.

Step 2. Preprocess the labeled and unlabeled data sets:

Step 201. Apply data augmentation to the labeled image data $X$ and the unlabeled data $U$: the labeled data is augmented once to obtain the augmented data $X'$, and the unlabeled data is randomly augmented $K$ times to obtain the augmented data $U'$;

Step 202. Mix the data $X'$ and $U'$ and arrange them randomly to obtain the data combination $W$, where each augmented sample keeps the label of its source sample.
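To make Steps 201 and 202 concrete, a minimal PyTorch-style sketch of the preprocessing follows. The flip-and-crop transform, the default K = 2, and all helper names are illustrative assumptions, not specifics fixed by the method.

```python
import random
import torchvision.transforms as T

# Hypothetical weak augmentation; the method does not prescribe a specific transform.
augment = T.Compose([T.RandomHorizontalFlip(), T.RandomCrop(32, padding=4)])

def preprocess(X, p, U, K=2):
    """Steps 201-202: augment labeled data once and unlabeled data K times, then mix."""
    X_aug = [(augment(x), label) for x, label in zip(X, p)]   # X': one augmentation per labeled sample
    U_aug = [augment(u) for u in U for _ in range(K)]         # U': K random augmentations per unlabeled sample
    W = X_aug + [(u, None) for u in U_aug]                    # combine; unlabeled samples carry no label yet
    random.shuffle(W)                                         # random arrangement gives the combination W
    return X_aug, U_aug, W
```

Each augmented labeled sample keeps its source label, matching Step 202.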

Step 3. Build and train a deep network with an adaptive vision mechanism to extract the deep features of the image data set:

Step 301. Build a generative adversarial network $G$, consisting of a data generator and a discriminator; here the generative adversarial network uses DCGAN, specifically adopting ResNet-18.

Step 302. Use self-convolution layers in the generative adversarial network and set up an adaptive convolution kernel generating function based on the principles of spatial specificity and frequency-domain independence. According to the input image features, the function outputs a convolution kernel of the same size as the feature map and controls the scaling ratio to adjust the parameter count; a 1×1 convolution kernel scales the feature-map channels to obtain the kernel feature map, whose number of output channels is (Z × Z × Gs), where Z is the size of the subsequent self-convolution kernel and Gs denotes the number of groups of the self-convolution operation.
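The self-convolution layer described in Step 302 behaves like an involution-style operator: a kernel of spatial size Z × Z is generated from the features at each position and shared across the channels of each of the Gs groups. Below is a minimal PyTorch sketch of one plausible realization; the module name, the reduction ratio r of the kernel-generating 1×1 convolutions, and the unit stride are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfConvolution(nn.Module):
    """Adaptive kernel generation: spatially specific, shared across channels within a group."""
    def __init__(self, channels, Z=3, Gs=4, r=4):
        super().__init__()
        self.Z, self.Gs = Z, Gs
        # 1x1 convolutions map the input features to a (Z*Z*Gs)-channel kernel map;
        # r controls the bottleneck width and hence the parameter count.
        self.kernel_gen = nn.Sequential(
            nn.Conv2d(channels, channels // r, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, Z * Z * Gs, 1),
        )

    def forward(self, x):
        B, C, H, W = x.shape
        kernels = self.kernel_gen(x).view(B, self.Gs, 1, self.Z * self.Z, H, W)
        # Unfold Z x Z neighborhoods so every spatial position sees its local patch.
        patches = F.unfold(x, self.Z, padding=self.Z // 2)        # (B, C*Z*Z, H*W)
        patches = patches.view(B, self.Gs, C // self.Gs, self.Z * self.Z, H, W)
        # Apply the position-specific kernel per group and sum over the window.
        out = (kernels * patches).sum(dim=3)                      # (B, Gs, C/Gs, H, W)
        return out.view(B, C, H, W)
```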

Step 303. Delete the labels of the labeled data in the image set $V$ and perform unsupervised learning of the generative adversarial network $G$ on the entire unlabeled image set $V$, so that the pseudo-data features produced by the generator approach real image features; the self-convolution layers enhance the feature representation capability of the discriminator.

Step 304. Remove the fully connected layer from the discriminator $G_d$ of the trained generative adversarial network $G$ and keep the convolutional layers as the feature extractor $F_d$, used to extract the deep features $x_{labeled} = F_d(x_f)$ and $x_{unlabeled} = F_d(u_j)$ of the labeled target-image data $X = \{x_1, \dots, x_f, \dots, x_n\}$ and the unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$.
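Reusing the trained discriminator as the feature extractor F_d can be sketched as follows, assuming a torchvision ResNet-18 stands in for the discriminator backbone; dropping the final fully connected layer leaves the convolutional trunk, whose pooled output serves as the deep feature.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

discriminator = resnet18(num_classes=1)   # stand-in for the trained GAN discriminator G_d
# ... unsupervised GAN training of the discriminator is assumed to have happened here ...

# Keep everything except the final fully connected layer as the feature extractor F_d.
feature_extractor = nn.Sequential(*list(discriminator.children())[:-1]).eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)                   # a batch of input images
    deep_features = feature_extractor(images).flatten(1)   # x_labeled / x_unlabeled features
```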

Step 4. Build a Siamese network and use the deep features of the labeled and unlabeled data to obtain forward-propagation results and pseudo-labels:

Step 401. Build two shallow classification networks $Net_1$ and $Net_2$ as a Siamese network and feed in the data combination $W$; the shallow classification networks use VGG-11;

Step 402. For the labeled data, input the augmented data $X'$ and the corresponding labels $p$ to obtain the deep features $x_{labeled} = F_d(X')$, and predict with the Siamese network; the forward-propagation result is

$$p_d = w\, p_{d1} + (1 - w)\, p_{d2}$$

where $p_{d1}$ and $p_{d2}$ are the predictions of $Net_1$ and $Net_2$ that are combined, and $w$ is a hyperparameter;

Step 403. For the unlabeled data, input the augmented data $U'$ to obtain the deep features $x_{unlabeled} = F_d(U')$, predict with the Siamese network, and take the weighted average of the outputs as the forward-propagation result $p_n$:

$$p_n = \frac{1}{2}\left( P_{model}^{Net_1}(U';\theta) + P_{model}^{Net_2}(U';\theta) \right)$$

where $P_{model}^{Net_1}(U';\theta)$ and $P_{model}^{Net_2}(U';\theta)$ are the Siamese network's predictions on the unlabeled data, and $\theta$ denotes the network training parameters.

Step 404. Sharpen the predictions on the unlabeled data to obtain the pseudo-label $q$. The sharpening operation is

$$q = \mathrm{Sharpen}(\bar{p}, T), \qquad \mathrm{Sharpen}(\bar{p}, T)_c = \frac{\bar{p}_c^{\,1/T}}{\sum_{c'} \bar{p}_{c'}^{\,1/T}}, \qquad \bar{p} = \frac{1}{K}\sum_{k=1}^{K} P(u_k;\theta)$$

where $T$ is the sharpening parameter, $K$ is the number of augmentations, and $P(U;\theta)$ is the network's predicted probability for each class.

Step 405. Fuse the pseudo-labels $q$ predicted by the two branches of the Siamese network. Specifically, the fused pseudo-label is

$$q = \lambda\, q^{Net_1} + (1 - \lambda)\, q^{Net_2}$$

where $q^{Net_1}$ is the sharpened pseudo-label of network $Net_1$, $q^{Net_2}$ is the sharpened pseudo-label of network $Net_2$, $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$, and $\alpha$ is set according to the actual data set.

Step 5. Use the forward-propagation results and pseudo-labels to construct loss functions for training on the labeled and unlabeled data, and train the Siamese network in a semi-supervised manner:

Step 501. Establish a semi-supervised label-consistency regularization loss function that computes, for each class, a regularization term for the difference between the labeled and unlabeled data, eliminating the discrepancy between labeled and unlabeled data of the same class, as follows:

$$Loss_{semi\text{-}supervised} = \frac{1}{num}\sum_{k=1}^{num}\left\| \overline{x_{labeled}^{\,class\text{-}k}} - \overline{x_{unlabeled}^{\,class\text{-}k}} \right\|_2^2$$

where $num$ is the number of classes, $x_{labeled}$ and $x_{unlabeled}$ are the deep features of the labeled and unlabeled image data, the overline denotes the per-class mean feature, and $class\text{-}k$ is the $k$-th class;
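One plausible reading of the Step 501 regularizer, penalizing per class the squared distance between the mean deep feature of labeled samples and that of unlabeled samples assigned to the class by their pseudo-labels, is sketched below; the use of class means, the squared L2 distance, and the pseudo-label assignment are all assumptions.

```python
import torch

def label_consistency_loss(feat_labeled, y_labeled, feat_unlabeled, y_pseudo, num_classes):
    """Per-class regularizer between labeled and unlabeled deep features (Step 501)."""
    loss = feat_labeled.new_zeros(())
    for k in range(num_classes):
        fl = feat_labeled[y_labeled == k]      # labeled features of class k
        fu = feat_unlabeled[y_pseudo == k]     # unlabeled features pseudo-assigned to class k
        if len(fl) and len(fu):
            # Squared L2 distance between the class-k mean features.
            loss = loss + (fl.mean(0) - fu.mean(0)).pow(2).sum()
    return loss / num_classes
```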

Step 502. For the augmented labeled data $X'$, establish the following loss function:

$$L_X = \frac{1}{|X'|}\sum_{(x,p)\in X'} H\big(p,\; P_{model}(y \mid x;\theta)\big)$$

Step 503. For the augmented unlabeled data $U'$, establish the following loss function:

$$L_U = \frac{1}{|U'|}\sum_{(u,q)\in U'} H\big(q,\; P_{model}(y \mid u;\theta)\big)$$

where $|X'|$ equals the number of samples per batch, $|U'|$ equals $K$ times the number of samples per batch, $H(\cdot,\cdot)$ is the cross-entropy function, $x, p$ are the augmented labeled data and their labels, and $u, q$ are the augmented unlabeled data and their pseudo-labels.

Step 504. The overall loss function $L$ is a weighted sum of the three terms:

$$L = L_X + \lambda_U L_U + \beta_U\, Loss_{semi\text{-}supervised}$$

where $\lambda_U$ and $\beta_U$ are hyperparameters. The Siamese network model is trained by iterating continuously on the overall loss function $L$ and is then used for classification testing.
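Assembling Steps 502 to 504, and assuming soft-label cross-entropy for both L_X and L_U as the text states, the overall objective can be sketched as follows; the default values of λ_U and β_U are placeholders to be tuned per data set.

```python
import torch
import torch.nn.functional as F

def total_loss(logits_x, targets_p, logits_u, pseudo_q, loss_semi,
               lambda_u=100.0, beta_u=1.0):
    """L = L_X + lambda_U * L_U + beta_U * Loss_semi-supervised (Step 504).

    targets_p and pseudo_q are soft label distributions, so each cross-entropy
    term is computed against log-probabilities directly.
    """
    L_X = -(targets_p * F.log_softmax(logits_x, dim=-1)).sum(-1).mean()  # Step 502
    L_U = -(pseudo_q * F.log_softmax(logits_u, dim=-1)).sum(-1).mean()   # Step 503
    return L_X + lambda_u * L_U + beta_u * loss_semi
```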

The above is merely an embodiment of the present invention and does not limit the present invention in any way. Any simple modification, change, or equivalent structural variation made to the above embodiment according to the technical essence of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (1)

1. A semi-supervised learning method based on a triple network and labeling consistency regularization is characterized by comprising the following steps:
step one, inputting an image data set and a corresponding label set:
step 101, inputting an image data set $V = \{v_1, \dots, v_i, \dots, v_l\}$, divided into labeled data $X = \{x_1, \dots, x_f, \dots, x_n\}$ and unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$, wherein $v_i$ represents the $i$-th image sample data, $1 \le i \le l$, $1 \le f \le n$, $1 \le j \le m$, $l = m + n$, and $n$, $m$ and $l$ are all positive integers;
step 102, inputting a label set corresponding to the image set $V$, wherein the labeled data $X = \{x_1, \dots, x_f, \dots, x_n\}$ carries labels $p = \{p_1, \dots, p_f, \dots, p_n\}$ and the unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$ has no labels.
Step two, preprocessing the labeled and unlabeled data sets:
step 201, performing data enhancement on the labeled image data X and the unlabeled data U, wherein the labeled data is subjected to a single enhancement to obtain enhanced data X', and the unlabeled data is subjected to K random enhancements to obtain enhanced data U';
step 202, mixing the data X' and U' and randomly arranging them to obtain a data combination W, wherein the label of each enhanced sample is consistent with the original label.
Step three, constructing and training a deep network with an adaptive vision mechanism, and extracting deep features of the image data set:
step 301, constructing a generative adversarial network G, divided into a data generator and a discriminator;
step 302, using a self-convolution layer in the generative adversarial network, setting an adaptive convolution kernel generating function based on the principles of spatial specificity and frequency-domain independence, outputting a convolution kernel with the same size as the feature map according to the input image features, controlling the scaling ratio to adjust the parameter quantity, and scaling the feature-map channels;
step 303, deleting the labels of the labeled data in the image set V, performing unsupervised learning on the generative adversarial network G using the entire unlabeled image set V so that the pseudo-data features generated by the generator approach real image features, and enhancing the feature representation capability of the discriminator using the self-convolution layer;
step 304, using the discriminator $G_d$ of the trained generative adversarial network $G$ as a feature extractor $F_d$ for extracting the deep features $x_{labeled} = F_d(x_f)$ and $x_{unlabeled} = F_d(u_j)$ of the target image labeled data $X = \{x_1, \dots, x_f, \dots, x_n\}$ and the unlabeled data $U = \{u_1, \dots, u_j, \dots, u_m\}$.
Step four, constructing a Siamese network, and acquiring forward-propagation results and pseudo-labels using the deep features of the labeled and unlabeled data:
step 401, constructing two shallow classification networks $Net_1$ and $Net_2$ as a Siamese network, and inputting the data combination W;
step 402, for the labeled data, inputting the enhanced data $X'$ and the corresponding labels $p$ to obtain the deep features $x_{labeled} = F_d(X')$, and predicting with the Siamese network, the forward-propagation result being $p_d = w\,p_{d1} + (1 - w)\,p_{d2}$, wherein $p_{d1}$ and $p_{d2}$ are the predictions of $Net_1$ and $Net_2$ that are combined, and $w$ is a hyperparameter;
step 403, for the unlabeled data, inputting the enhanced data $U'$ to obtain the deep features $x_{unlabeled} = F_d(U')$, predicting with the Siamese network, and taking the weighted average of the outputs as the forward-propagation result $p_n = \frac{1}{2}\left(P_{model}^{Net_1}(U';\theta) + P_{model}^{Net_2}(U';\theta)\right)$, wherein $P_{model}^{Net_1}(U';\theta)$ and $P_{model}^{Net_2}(U';\theta)$ are the predictions of the Siamese network on the unlabeled data, and $\theta$ is a network training parameter.
step 404, sharpening the predictions on the unlabeled data to obtain a pseudo-label $q$, wherein the sharpening operation is $q = \mathrm{Sharpen}(\bar{p}, T)$ with $\mathrm{Sharpen}(\bar{p}, T)_c = \bar{p}_c^{\,1/T} / \sum_{c'} \bar{p}_{c'}^{\,1/T}$ and $\bar{p} = \frac{1}{K}\sum_{k=1}^{K} P(u_k;\theta)$, wherein $T$ is the sharpening parameter, $K$ is the number of enhancements, and $P(U;\theta)$ is the prediction probability of the network for each class.
step 405, performing label fusion on the pseudo-labels $q$ predicted by the Siamese network, the fused pseudo-label being $q = \lambda\,q^{Net_1} + (1 - \lambda)\,q^{Net_2}$, wherein $q^{Net_1}$ is the pseudo-label sharpened by network $Net_1$, $q^{Net_2}$ is the pseudo-label sharpened by network $Net_2$, and $\lambda$ obeys a probability distribution set from the actual data set.
Step five, constructing loss functions for training the labeled and unlabeled data using the forward-propagation results and the pseudo-labels, and performing semi-supervised training on the Siamese network:
step 501, establishing a semi-supervised label-consistency regularization loss function, calculating for each class a regularization term for the difference between the labeled and unlabeled data, and eliminating the difference between labeled and unlabeled data of the same class, as follows:
$$Loss_{semi\text{-}supervised} = \frac{1}{num}\sum_{k=1}^{num}\left\| \overline{x_{labeled}^{\,class\text{-}k}} - \overline{x_{unlabeled}^{\,class\text{-}k}} \right\|_2^2$$
wherein $num$ is the number of classes, $x_{labeled}$ and $x_{unlabeled}$ are the deep features of the labeled and unlabeled image data, the overline denotes the per-class mean feature, and $class\text{-}k$ is the $k$-th class;
step 502, for the enhanced labeled data $X'$, establishing the following loss function:
$$L_X = \frac{1}{|X'|}\sum_{(x,p)\in X'} H\big(p,\; P_{model}(y \mid x;\theta)\big)$$
step 503, for the enhanced unlabeled data $U'$, establishing the following loss function:
$$L_U = \frac{1}{|U'|}\sum_{(u,q)\in U'} H\big(q,\; P_{model}(y \mid u;\theta)\big)$$
wherein $|X'|$ equals the number of samples in each batch, $|U'|$ equals $K$ times the number of samples in each batch, $H(\cdot,\cdot)$ is the cross-entropy function, $x, p$ are the enhanced labeled data and labels, and $u, q$ are the enhanced unlabeled data and pseudo-labels.
step 504, the overall loss function $L$ being a weighted sum of the three terms:
$$L = L_X + \lambda_U L_U + \beta_U\, Loss_{semi\text{-}supervised}$$
wherein $\lambda_U$ and $\beta_U$ are hyperparameters; the Siamese network model is trained through continuous iteration on the overall loss function $L$ and then subjected to classification testing.
CN202110837568.9A 2021-07-23 2021-07-23 Semi-supervised learning method based on triple network and labeling consistency regularization Active CN113657455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110837568.9A CN113657455B (en) 2021-07-23 2021-07-23 Semi-supervised learning method based on triple network and labeling consistency regularization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110837568.9A CN113657455B (en) 2021-07-23 2021-07-23 Semi-supervised learning method based on triple network and labeling consistency regularization

Publications (2)

Publication Number Publication Date
CN113657455A (en) 2021-11-16
CN113657455B CN113657455B (en) 2024-02-09

Family

ID=78477710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110837568.9A Active 2021-07-23 2021-07-23 Semi-supervised learning method based on triple network and labeling consistency regularization

Country Status (1)

Country Link
CN (1) CN113657455B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220129735A1 (en) * 2019-05-20 2022-04-28 Institute of intelligent manufacturing, Guangdong Academy of Sciences Semi-supervised Hyperspectral Data Quantitative Analysis Method Based on Generative Adversarial Network
KR20210071378A (en) * 2019-12-06 2021-06-16 인하대학교 산학협력단 Hierarchical object detection method for extended categories
CN112598053A (en) * 2020-12-21 2021-04-02 西北工业大学 Active significance target detection method based on semi-supervised learning
CN112837338A (en) * 2021-01-12 2021-05-25 浙江大学 A Generative Adversarial Network-Based Approach for Semi-Supervised Medical Image Segmentation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WANG MIAO et al.: "A Semi-Supervised Siamese Network with Label Fusion for Remote Sensing Image Scene Classification", 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)
XU ZHE et al.: "Semi-supervised classification method with co-training generative adversarial networks" [联合训练生成对抗网络的半监督分类方法], Optics and Precision Engineering (光学精密工程), vol. 29, no. 5
GENG YANLEI; ZOU ZHENGRONG; HE SHUAISHUAI: "Semantic segmentation of remote sensing imagery based on semi-supervised generative adversarial networks" [基于半监督生成对抗网络的遥感影像地物语义分割], Geomatics & Spatial Information Technology (测绘与空间地理信息), no. 04

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114048536A (en) * 2021-11-18 2022-02-15 重庆邮电大学 A road structure prediction and target detection method based on multi-task neural network
CN114155398A (en) * 2021-11-29 2022-03-08 杭州涿溪脑与智能研究所 A method and device for active learning image target detection with self-adaptive annotation type
CN114331971A (en) * 2021-12-08 2022-04-12 之江实验室 Ultrasonic endoscope target detection method based on semi-supervised self-training
CN114742119A (en) * 2021-12-30 2022-07-12 浙江大华技术股份有限公司 Cross-supervised model training method, image segmentation method and related equipment
CN114445789A (en) * 2022-01-24 2022-05-06 上海宏景智驾信息科技有限公司 Automatic driving scene mining method based on semi-supervised transform detection
CN114494973A (en) * 2022-02-14 2022-05-13 中国科学技术大学 Training method, system, equipment and storage medium of video semantic segmentation network
CN114494973B (en) * 2022-02-14 2024-03-29 中国科学技术大学 Training methods, systems, equipment and storage media for video semantic segmentation networks
CN114612685A (en) * 2022-03-22 2022-06-10 中国科学院空天信息创新研究院 A Self-Supervised Information Extraction Method Combining Deep Features and Contrastive Learning
CN114781526A (en) * 2022-04-26 2022-07-22 西安理工大学 Depth semi-supervised image classification method based on discriminant feature learning and entropy
CN114648077B (en) * 2022-05-18 2022-09-06 合肥高斯智能科技有限公司 Method and device for multi-point industrial data defect detection
CN114648077A (en) * 2022-05-18 2022-06-21 合肥高斯智能科技有限公司 Method and device for multi-point industrial data defect detection
CN115792807A (en) * 2023-02-13 2023-03-14 北京理工大学 Semi-supervised learning underwater sound source positioning method based on twin network
CN116403074A (en) * 2023-04-03 2023-07-07 上海锡鼎智能科技有限公司 Semi-automatic image labeling method and device based on active labeling
CN116403074B (en) * 2023-04-03 2024-05-14 上海锡鼎智能科技有限公司 Semi-automatic image labeling method and device based on active labeling
CN117649528A (en) * 2024-01-29 2024-03-05 山东建筑大学 Semi-supervised image segmentation method, system, electronic equipment and storage medium
CN117649528B (en) * 2024-01-29 2024-05-31 山东建筑大学 Semi-supervised image segmentation method, system, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113657455B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN113657455A (en) Semi-supervised learning method based on triple network and labeling consistency regularization
Li et al. Deepsaliency: Multi-task deep neural network model for salient object detection
Eslami et al. Attend, infer, repeat: Fast scene understanding with generative models
CN112883839B (en) Remote sensing image interpretation method based on adaptive sample set construction and deep learning
CN114926746A (en) SAR image change detection method based on multi-scale differential feature attention mechanism
CN103258210B (en) A kind of high-definition image classification method based on dictionary learning
CN112434628B (en) Small sample image classification method based on active learning and collaborative representation
CN106096652A (en) Based on sparse coding and the Classification of Polarimetric SAR Image method of small echo own coding device
CN114611617B (en) Deep Domain Adaptive Image Classification Method Based on Prototype Network
CN112232395B (en) Semi-supervised image classification method for generating countermeasure network based on joint training
CN114972904B (en) A zero-shot knowledge distillation method and system based on adversarial triplet loss
CN106127240A (en) A kind of classifying identification method of plant image collection based on nonlinear reconstruction model
CN117315381B (en) Hyperspectral image classification method based on second-order biased random walk
CN113408651B (en) Unsupervised three-dimensional object classification method based on local discriminant enhancement
CN116895016A (en) SAR image ship target generation and classification method
Lv et al. Simulation-aided SAR target classification via dual-branch reconstruction and subdomain alignment
CN112465836A (en) Thermal infrared semantic segmentation unsupervised field self-adaption method based on contour information
Tan et al. Wide Residual Network for Vision-based Static Hand Gesture Recognition.
CN115035302A (en) A fine-grained image classification method based on deep semi-supervised model
Zhao et al. Adversarial learning and interpolation consistency for unsupervised domain adaptation
Dolgikh Sparsity Constraint in Unsupervised Concept Learning.
CN117237715A (en) Image multi-classification method based on multi-branch mixed quantum classical neural network
Zhou Lip print recognition algorithm based on convolutional network
Li et al. Image decomposition with multilabel context: Algorithms and applications
CN115393713A (en) A Scene Understanding Method Based on Lot-Aware Dynamic Memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant