CN113537307A - Self-supervision domain adaptation method based on meta-learning - Google Patents

Self-supervision domain adaptation method based on meta-learning Download PDF

Info

Publication number
CN113537307A
Authority
CN
China
Prior art keywords
network
domain
meta
target domain
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110727430.3A
Other languages
Chinese (zh)
Other versions
CN113537307B (en)
Inventor
路统宇
颜成钢
孙垚棋
张继勇
李宗鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202110727430.3A priority Critical patent/CN113537307B/en
Publication of CN113537307A publication Critical patent/CN113537307A/en
Application granted granted Critical
Publication of CN113537307B publication Critical patent/CN113537307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a self-supervised domain adaptation method based on meta-learning. Target domain image reconstruction is treated as the self-supervised task in the domain adaptation process; the supervision information is the target domain image itself, so no additional target domain image annotation is required, saving a large amount of manual annotation cost. In addition, the reconstruction process of the target domain image enables the network to learn richer high-level semantic information from the target domain image, so that the network can exploit the intrinsic characteristics of the target domain data to help transfer the knowledge learned from the source domain data to the target domain, thereby improving the performance of the domain adaptation method. By introducing the meta-learning strategy into self-supervised domain adaptation, the directions in which the target domain self-supervised task and specific tasks such as source domain classification update the network parameters tend to become consistent, so the network can better extract domain-invariant features; the negative transfer problem caused by inconsistent parameter update directions between the domain adaptation task and the specific task is reduced, and the domain adaptation performance is improved.

Description

Self-supervision domain adaptation method based on meta-learning
Technical Field
The invention belongs to the field of computer vision and image processing, and in particular relates to an unsupervised domain adaptation method based on meta-learning and image reconstruction. The method drives the source domain image classification task and the target domain image reconstruction task to update the parameters of the feature extraction network in consistent directions; that is, the direction in which the source domain image category information updates the feature extraction network is steered towards the direction induced by information intrinsic to the target domain images, such as the spatial relationships and illumination of objects. As a result, the features produced by the feature extraction network are domain-invariant, and domain adaptation between the source domain and the target domain is achieved.
Background
In recent years, supervised learning based on deep neural networks has been widely applied in image classification, object detection, semantic segmentation, natural language processing and other fields, greatly promoting the integration of artificial intelligence technology into everyday life. However, supervised learning methods usually assume that the training samples and the test samples follow the same probability distribution, and to achieve good generalization and avoid overfitting they typically require a large number of labeled training samples. With the arrival of the big data era and the ever-growing scale of data, problems such as differences in statistical properties between datasets and the high labor cost of data annotation have become increasingly apparent. Unsupervised domain adaptation methods have therefore been widely studied. Unsupervised domain adaptation resolves the distribution difference between the source domain and the target domain by learning generalizable knowledge from labeled source domain data and applying it to the task on unlabeled target domain data, thereby improving the performance of the network on the target domain.
At present, most unsupervised domain adaptation methods focus on aligning the feature probability distributions of the source domain and target domain data, typically through distance metrics or adversarial learning, so as to reduce the inter-domain discrepancy and let the network learn a domain-invariant feature space. However, these methods only align the distributions of the source and target domain data as a whole and do not consider the influence of the intrinsic characteristics of the target domain data on knowledge transfer during domain adaptation. Self-supervised domain adaptation methods based on image reconstruction and similar tasks therefore train the network jointly on labeled source domain data and on self-supervised tasks defined on the unlabeled target domain data, such as image reconstruction, image rotation-angle prediction and image restoration; by mining the intrinsic characteristics of the target domain data, they assist the network in transferring the knowledge learned from the source domain data to the target domain. In addition, because reconstruction-based methods preserve the integrity of the data while extracting transferable features, the information in the target domain data that benefits the specific task is not destroyed, and the original distribution of the target domain data is preserved as much as possible during knowledge transfer, so the knowledge learned from the source domain data can be better applied to the target domain task. However, many existing unsupervised domain adaptation methods simply let the self-supervised task and specific tasks such as image classification update the parameters of the feature extraction network separately, without considering whether the two types of task update the parameters in consistent directions; the self-supervised task may then harm the feature learning of the specific task. Through meta-learning, the self-supervised task and the specific task such as image classification can serve as the trainer and the tester, respectively, and the parameters of the tester network are updated through the loss function of the trainer, so that the directions in which the two types of task update the network parameters tend to become consistent.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a self-supervised domain adaptation method based on meta-learning.
A self-supervision domain adaptation method based on meta-learning comprises the following steps:
step 1, setting a trainer and a tester:
The reconstruction process of the target domain samples is used as the trainer in meta-learning, and the classification process of the source domain samples is used as the tester in meta-learning.
Step 2, performing an image reconstruction task by using the target domain sample and calculating reconstruction loss:
The unlabeled target domain samples are input into a feature extraction network to obtain target domain sample features; the features are then input into an image reconstruction network to reconstruct the images, and the reconstruction loss is calculated.
Step 3, updating the parameters of the feature extraction network:
The reconstruction loss in the trainer is used to update the parameters of the feature extraction network in the trainer; because the weights are shared, the parameters of the feature extraction network in the tester are updated together with those of the feature extraction network in the trainer, i.e. the parameter update direction of the network in the tester is driven towards the parameter update direction of the network in the trainer.
Step 4, performing a classification task by using the source domain samples and calculating a classification loss:
and inputting the source domain data with the labels into the feature extraction network after the parameters are updated to obtain the source domain data features, and then inputting the source domain data features into a classification network to perform an image classification task and calculate the classification loss.
Step 5, calculating a total loss function and updating parameters of all networks:
and calculating a total loss function, and updating parameters of the feature extraction network, the reconstruction network and the classification network in the trainer and the tester.
The invention has the following beneficial effects:
(1) Target domain image reconstruction is used as the self-supervised task in the domain adaptation process, the supervision information is the target domain image itself, and no additional target domain image annotation is needed, which saves a large amount of manual annotation cost. In addition, the reconstruction process of the target domain image enables the network to learn richer high-level semantic information from the target domain image, so that the network can exploit the intrinsic characteristics of the target domain data to help transfer the knowledge learned from the source domain data to the target domain, improving the performance of the domain adaptation method.
(2) By introducing the meta-learning strategy into self-supervised domain adaptation, the directions in which the target domain self-supervised task and specific tasks such as source domain classification update the network parameters tend to become consistent, so the network can better extract domain-invariant features; the negative transfer problem caused by inconsistent parameter update directions between the domain adaptation task and the specific task is alleviated, and the domain adaptation performance is improved.
Drawings
Fig. 1 is a flowchart of the self-supervised domain adaptation method based on meta-learning according to the present invention.
Fig. 2 is a network structure diagram of the self-supervised domain adaptation method based on meta-learning according to the present invention.
Fig. 3 is a schematic diagram of basic units of a ResNet network.
Fig. 4 is a schematic view of a fully connected layer structure.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, a self-supervised domain adaptation method based on meta-learning includes the following steps:
Step 1, setting a trainer and a tester: as shown in fig. 2, the reconstruction process of the target domain samples x_t is used as the meta-trainer (Meta-train) in meta-learning, and the classification process of the source domain samples x_s is used as the meta-tester (Meta-test) in meta-learning. The labeled source domain is denoted as S = {X_s, Y_s}, where x_s ∈ X_s and y_s ∈ Y_s respectively denote source domain samples and the corresponding labels; the unlabeled target domain is denoted as T = {X_t}, where x_t ∈ X_t denotes a target domain sample.
Step 2, performing the image reconstruction task with the target domain samples and calculating the reconstruction loss:
The unlabeled target domain sample x_t is input into the feature extraction network G to obtain the target domain sample feature f_t; f_t is then input into the image reconstruction network D, which reconstructs the image to obtain the target domain reconstructed sample x̂_t, and the reconstruction loss L_r is calculated. The feature extraction network G adopts the ResNet-50 structure; the basic unit of ResNet is shown in fig. 3, in which the output of the previous layer is added to the output computed by the current layer through a skip connection, and the sum is fed into the activation function as the output of the current layer. Through the ResNet-50 feature extraction process, the target domain sample feature f_t = G(x_t) is obtained. The image reconstruction network D adopts a decoder structure and restores the target domain sample feature f_t to the original image size through a series of upsampling operations, i.e. x̂_t = D(f_t). The reconstruction loss L_r is computed between the reconstructed samples x̂_t and the original target domain samples x_t over all target domain samples, where N_t is the number of target domain samples and j indexes the j-th target domain sample.
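The following PyTorch sketch illustrates the Step 2 components under stated assumptions: the patent specifies a ResNet-50 feature extractor G and an upsampling decoder D, but the exact decoder layer layout, the 224x224 image size and the use of mean-squared error for the reconstruction loss L_r are illustrative choices made here, not details reproduced from the patent.

import torch
import torch.nn as nn
from torchvision.models import resnet50


class FeatureExtractor(nn.Module):
    """G: ResNet-50 trunk, kept up to the last convolutional stage."""

    def __init__(self):
        super().__init__()
        trunk = resnet50()                        # basic units use skip connections
        self.body = nn.Sequential(*list(trunk.children())[:-2])  # drop avgpool + fc

    def forward(self, x):                         # x: (B, 3, 224, 224)
        return self.body(x)                       # f: (B, 2048, 7, 7)


class Reconstructor(nn.Module):
    """D: decoder that upsamples f_t back to the original image size."""

    def __init__(self):
        super().__init__()
        chans = [2048, 512, 256, 128, 64, 3]      # assumed channel schedule
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
        layers[-1] = nn.Sigmoid()                 # last block maps to pixel range
        self.decode = nn.Sequential(*layers)

    def forward(self, f):                         # f: (B, 2048, 7, 7)
        return self.decode(f)                     # x_hat: (B, 3, 224, 224)


def reconstruction_loss(x_hat, x_t):
    """L_r over a batch of target samples (mean-squared error, assumed)."""
    return nn.functional.mse_loss(x_hat, x_t)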
Step 3, updating the parameters of the feature extraction network:
The reconstruction loss L_r in the trainer is used to update the parameters of the feature extraction network G, namely:
θ_G^(t+1) = θ_G^(t) − α ∇_{θ_G^(t)} L_r(θ_G^(t), θ_D),
where θ_G^(t) are the parameters of the current feature extraction network, θ_G^(t+1) are the updated parameters of the feature extraction network, α is the learning rate, θ_D are the parameters of the decoder D, and ∇_{θ_G^(t)} denotes the gradient with respect to the parameters θ_G^(t); gradient descent is performed with the stochastic gradient descent algorithm. Because of weight sharing, the parameters θ_G of the feature extraction network G in the tester are updated together with the parameters θ_G of the feature extraction network G in the trainer, i.e. the direction in which a specific task such as image classification in the tester updates the network parameters is forced to follow the direction in which the self-supervised image reconstruction task in the trainer updates them.
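A minimal sketch of the Step 3 inner (meta-train) update, continuing the sketch above: only the parameters of G are touched here, using plain stochastic gradient descent with learning rate α as the patent describes. This is a first-order approximation; a full meta-gradient that differentiates through the inner step is not shown, and the learning-rate value is a placeholder.

G = FeatureExtractor()
D = Reconstructor()
alpha = 1e-3                                       # placeholder value for alpha
inner_opt = torch.optim.SGD(G.parameters(), lr=alpha)   # stochastic gradient descent


def meta_train_step(x_t):
    """One reconstruction (meta-train) step on a batch of target images x_t."""
    inner_opt.zero_grad()
    f_t = G(x_t)
    x_hat = D(f_t)
    loss_r = reconstruction_loss(x_hat, x_t)
    loss_r.backward()           # gradients w.r.t. theta_G (and theta_D, unused here)
    inner_opt.step()            # updates theta_G only; theta_D is left for Step 5
    return loss_r.detach()

Because G is shared between the meta-trainer and the meta-tester, the classification branch in Step 4 automatically sees the updated weights θ_G^(t+1).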
Step 4, performing the classification task with the source domain samples and calculating the classification loss:
The labeled source domain data x_s are input into the feature extraction network G with updated parameters to obtain the source domain data feature f_s; f_s is then input into the classification network C to perform the image classification task and calculate the classification loss. The classification network C adopts a structure of multiple fully connected layers followed by a softmax layer; a schematic diagram of the fully connected layer structure is shown in fig. 4. Each node of the classification network C is connected to all nodes of the previous layer, which integrates the extracted features, and the softmax layer outputs the predicted image label ŷ_s, i.e. ŷ_s = C(f_s). The classification loss L_c is computed between the predicted labels ŷ_s and the ground-truth labels y_s, where N_s is the number of source domain samples and k indexes the k-th source domain sample.
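A sketch of the Step 4 classification branch, continuing the sketches above: the patent states that C consists of multiple fully connected layers followed by a softmax layer, but the pooling step, the hidden width and the use of softmax cross-entropy for the classification loss are assumptions made for illustration (the softmax is folded into the loss).

class Classifier(nn.Module):
    """C: fully connected layers; softmax is applied inside the loss."""

    def __init__(self, num_classes, feat_dim=2048, hidden=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # (B, 2048, 7, 7) -> (B, 2048, 1, 1)
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, f_s):
        return self.fc(self.pool(f_s))             # class logits y_hat


def classification_loss(logits, y_s):
    """L_c over a batch of labeled source samples (softmax cross-entropy, assumed)."""
    return nn.functional.cross_entropy(logits, y_s)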
Step 5, calculating the total loss function and updating the parameters of all networks:
The total loss function L is calculated, and the parameters of the feature extraction network G, the reconstruction network D and the classification network C in the trainer and the tester are updated, namely:
{θ_G, θ_D, θ_C}^(t+1) = {θ_G, θ_D, θ_C}^(t) − β ∇ L,
where β is the learning rate and {θ_G, θ_D, θ_C}^(t) are the parameters of the networks at the current moment. The total loss function is:
L = L_c(θ_G^(t+1), θ_C) + λ L_r(θ_G, θ_D),
where θ_C denotes the parameters of the classification network, λ is a hyper-parameter that controls the relative influence of the image reconstruction task and the image classification task on the network parameter update, L_r(θ_G, θ_D) denotes the image reconstruction loss computed with network parameters θ_G and θ_D, and L_c(θ_G^(t+1), θ_C) denotes the image classification loss computed with network parameters θ_G^(t+1) and θ_C.

Claims (6)

1. A self-supervised domain adaptation method based on meta-learning, characterized by comprising the following steps:

Step 1, setting a trainer and a tester: the reconstruction process of the target domain samples is used as the trainer in meta-learning, and the classification process of the source domain samples is used as the tester in meta-learning;

Step 2, performing the image reconstruction task with the target domain samples and calculating the reconstruction loss: the unlabeled target domain samples are input into the feature extraction network to obtain target domain sample features, then the target domain sample features are input into the image reconstruction network for image reconstruction and the reconstruction loss is calculated;

Step 3, updating the parameters of the feature extraction network: the reconstruction loss in the trainer is used to update the parameters of the feature extraction network in the trainer; because the weights are shared, the parameters of the feature extraction network in the tester are updated together with those of the feature extraction network in the trainer, i.e. the parameter update direction of the network in the tester is driven towards the parameter update direction of the network in the trainer;

Step 4, performing the classification task with the source domain samples and calculating the classification loss: the labeled source domain data are input into the feature extraction network with updated parameters to obtain the source domain data features, then the source domain data features are input into the classification network for the image classification task and the classification loss is calculated;

Step 5, calculating the total loss function and updating the parameters of all networks: the total loss function is calculated, and the parameters of the feature extraction network, the reconstruction network and the classification network in the trainer and the tester are updated.

2. The self-supervised domain adaptation method based on meta-learning according to claim 1, characterized in that the specific method of step 1 is as follows:

the reconstruction process of the target domain sample x_t is used as the meta-trainer (Meta-train) in meta-learning, and the classification process of the source domain sample x_s is used as the meta-tester (Meta-test) in meta-learning; the labeled source domain is denoted as S = {X_s, Y_s}, where x_s ∈ X_s and y_s ∈ Y_s respectively denote source domain samples and the corresponding labels, and the unlabeled target domain is denoted as T = {X_t}, where x_t ∈ X_t denotes a target domain sample.

3. The self-supervised domain adaptation method based on meta-learning according to claim 2, characterized in that the specific method of step 2 is as follows:

the unlabeled target domain sample x_t is input into the feature extraction network G to obtain the target domain sample feature f_t, then f_t is input into the image reconstruction network D for image reconstruction to obtain the target domain reconstructed sample x̂_t, and the reconstruction loss L_r is calculated; the feature extraction network G adopts the ResNet-50 structure, whose basic unit adds the output of the previous layer to the output computed by the current layer through a skip connection and feeds the sum into the activation function as the output of the current layer; through the ResNet-50 feature extraction process, the target domain sample feature f_t = G(x_t) is obtained; the image reconstruction network D adopts a decoder structure and restores the target domain sample feature f_t to the original image size through a series of upsampling operations, i.e. x̂_t = D(f_t); the reconstruction loss L_r is computed between the reconstructed samples x̂_t and the original target domain samples x_t, where N_t is the number of target domain samples and j indexes the j-th target domain sample.

4. The self-supervised domain adaptation method based on meta-learning according to claim 3, characterized in that the specific method of step 3 is as follows:

the reconstruction loss L_r in the trainer is used to update the parameters of the feature extraction network G, namely:

θ_G^(t+1) = θ_G^(t) − α ∇_{θ_G^(t)} L_r(θ_G^(t), θ_D),

where θ_G^(t) are the parameters of the current feature extraction network, θ_G^(t+1) are the updated parameters of the feature extraction network, α is the learning rate, θ_D are the parameters of the decoder D, and ∇_{θ_G^(t)} denotes the gradient with respect to θ_G^(t); gradient descent is performed with the stochastic gradient descent algorithm; because of weight sharing, the parameters θ_G of the feature extraction network G in the tester are updated together with the parameters θ_G of the feature extraction network G in the trainer, i.e. the direction in which specific tasks such as image classification in the tester update the network parameters is forced to follow the direction in which the self-supervised image reconstruction task in the trainer updates them.

5. The self-supervised domain adaptation method based on meta-learning according to claim 4, characterized in that the specific method of step 4 is as follows:

the labeled source domain data x_s are input into the feature extraction network G with updated parameters to obtain the source domain data feature f_s, then f_s is input into the classification network C to perform the image classification task and calculate the classification loss; the classification network C adopts a structure of multiple fully connected layers followed by a softmax layer, in which each node of a fully connected layer is connected to all nodes of the previous layer so as to integrate the extracted features, and the softmax layer outputs the predicted image label ŷ_s, i.e. ŷ_s = C(f_s); the classification loss L_c is computed between the predicted labels ŷ_s and the ground-truth labels y_s, where N_s is the number of source domain samples and k indexes the k-th source domain sample.

6. The self-supervised domain adaptation method based on meta-learning according to claim 5, characterized in that the specific method of step 5 is as follows:

the total loss function L is calculated, and the parameters of the feature extraction network G, the reconstruction network D and the classification network C in the trainer and the tester are updated, namely:

{θ_G, θ_D, θ_C}^(t+1) = {θ_G, θ_D, θ_C}^(t) − β ∇ L,

where β is the learning rate and {θ_G, θ_D, θ_C}^(t) are the parameters of the networks at the current moment; the total loss function is:

L = L_c(θ_G^(t+1), θ_C) + λ L_r(θ_G, θ_D),

where θ_C denotes the parameters of the classification network, λ is a hyper-parameter that controls the relative influence of the image reconstruction task and the image classification task on the network parameter update, L_r(θ_G, θ_D) denotes the image reconstruction loss computed with network parameters θ_G and θ_D, and L_c(θ_G^(t+1), θ_C) denotes the image classification loss computed with network parameters θ_G^(t+1) and θ_C.
CN202110727430.3A 2021-06-29 2021-06-29 Self-supervision domain adaptation method based on meta learning Active CN113537307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110727430.3A CN113537307B (en) 2021-06-29 2021-06-29 Self-supervision domain adaptation method based on meta learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110727430.3A CN113537307B (en) 2021-06-29 2021-06-29 Self-supervision domain adaptation method based on meta learning

Publications (2)

Publication Number Publication Date
CN113537307A true CN113537307A (en) 2021-10-22
CN113537307B CN113537307B (en) 2024-04-05

Family

ID=78097130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110727430.3A Active CN113537307B (en) 2021-06-29 2021-06-29 Self-supervision domain adaptation method based on meta learning

Country Status (1)

Country Link
CN (1) CN113537307B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210182618A1 (en) * 2018-10-29 2021-06-17 Hrl Laboratories, Llc Process to learn new image classes without labels
CN111275092A (en) * 2020-01-17 2020-06-12 电子科技大学 An Image Classification Method Based on Unsupervised Domain Adaptation
CN112784790A (en) * 2021-01-29 2021-05-11 厦门大学 Generalization false face detection method based on meta-learning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113971746A (en) * 2021-12-24 2022-01-25 季华实验室 Garbage sorting method, device and sorting intelligent system based on single manual teaching
CN114724145A (en) * 2022-04-12 2022-07-08 济南博观智能科技有限公司 A character image recognition method, device, equipment and medium

Also Published As

Publication number Publication date
CN113537307B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN111814854B (en) An object re-identification method with unsupervised domain adaptation
CN110427875B (en) Infrared image target detection method based on deep transfer learning and extreme learning machine
CN112232416B (en) Semi-supervised learning method based on pseudo label weighting
CN114912612A (en) Bird identification method, device, computer equipment and storage medium
CN111079847A (en) Remote sensing image automatic labeling method based on deep learning
CN112288013A (en) A small sample remote sensing scene classification method based on metametric learning
CN113469186A (en) Cross-domain migration image segmentation method based on small amount of point labels
CN112819065A (en) Unsupervised pedestrian sample mining method and unsupervised pedestrian sample mining system based on multi-clustering information
CN112052818A (en) Unsupervised domain adaptive pedestrian detection method, unsupervised domain adaptive pedestrian detection system and storage medium
CN108491766A (en) A kind of people counting method end to end based on depth decision forest
CN113537307A (en) Self-supervision domain adaptation method based on meta-learning
CN114398992A (en) An Intelligent Fault Diagnosis Method Based on Unsupervised Domain Adaptation
CN108595558A (en) A kind of image labeling method of data balancing strategy and multiple features fusion
CN117035013A (en) Method for predicting dynamic network link by adopting impulse neural network
CN116310647A (en) Labor insurance object target detection method and system based on incremental learning
CN112668633A (en) Adaptive graph migration learning method based on fine granularity field
CN114972920B (en) Multi-level non-supervision field self-adaptive target detection and identification method
CN119152315B (en) ElectroTrackNet electric selection track recognition method and system
Li et al. Semi-supervised crack detection using segment anything model and deep transfer learning
CN115439715A (en) Semi-supervised few-sample image classification learning method and system based on anti-label learning
CN114492581A (en) Method for classifying small sample pictures based on transfer learning and attention mechanism element learning application
Li et al. Self-paced convolutional neural network for computer aided detection in medical imaging analysis
CN117292119B (en) A method and system for multi-scale target detection in power transmission
CN115035330B (en) An unsupervised transfer learning image classification method for environmental changes
CN118014048A (en) Low-illumination face detection model construction method, device and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant