CN114220003A - Multi-target unsupervised domain self-adaption method for large-range ground object segmentation - Google Patents

Multi-target unsupervised domain self-adaption method for large-range ground object segmentation

Info

Publication number
CN114220003A
Authority
CN
China
Prior art keywords
domain
target
image
model
branch
Prior art date
Legal status
Granted
Application number
CN202111423886.7A
Other languages
Chinese (zh)
Other versions
CN114220003B (en)
Inventor
任东 (Ren Dong)
刘明 (Liu Ming)
何雨岩 (He Yuyan)
向杰 (Xiang Jie)
安毅 (An Yi)
Current Assignee
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202111423886.7A priority Critical patent/CN114220003B/en
Publication of CN114220003A publication Critical patent/CN114220003A/en
Application granted granted Critical
Publication of CN114220003B publication Critical patent/CN114220003B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Abstract

A multi-target unsupervised domain adaptation method for large-range ground object segmentation comprises the following steps. Step 1: select the data of one of a plurality of domains from the obtained large-range remote sensing image, label it and crop it to produce a source domain data set, and directly crop the images of the other domains to produce a plurality of target domain data sets. Step 2: feed the source domain data set and the target domain data sets into a multi-branch unsupervised domain adaptation model for training to obtain a segmentation model. Step 3: feed the plurality of target domain data sets into the segmentation model, and build a pseudo source domain data set from the target domain images with high-confidence pseudo labels through entropy-minimization ranking. Step 4: feed the pseudo source domain data set and the plurality of target domain data sets into the multi-branch unsupervised domain adaptation model for training to obtain the final segmentation model. Step 5: feed all the remote sensing images into the model to obtain the segmentation result.

Description

Multi-target unsupervised domain self-adaption method for large-range ground object segmentation
Technical Field
The invention relates to the technical field of semantic segmentation of remote sensing images, and in particular to a multi-target unsupervised domain adaptation method for large-range ground object segmentation.
Background
Land use is closely tied to human activity, and accurately acquiring land-use information is of great significance for environmental protection, the development of regionally dominant industries, disaster prevention and other applications. With the development of remote sensing technology and deep learning, segmenting ground objects in remote sensing images with semantic segmentation algorithms is regarded as an efficient and convenient solution.
A semantic segmentation algorithm can learn strong feature extraction under the supervision of a large amount of data, but the trained model is not always effective elsewhere. Remote sensing image acquisition is disturbed by many factors, so the feature distributions of different images differ, and this difference prevents a semantic segmentation model from obtaining the expected results on images it was not trained on. In this setting, the labeled images used for training are called source domain images, and the unlabeled images are called target domain images. Because the objects and scenes captured in remote sensing images vary drastically, they are difficult to segment with a single model trained on data sets collected at a few specific sites, which makes large-range ground object segmentation of remote sensing images a difficult task.
To solve this problem, traditional unsupervised domain adaptation methods align the features of a source domain and a target domain so that the model obtains better segmentation results on data from different domains. However, these methods all assume a single source domain and a single target domain, which remains problematic in practice: a large-range remote sensing image may contain multiple domains, and if all target domains are merged into a single target domain for domain-adaptive learning, the resulting segmentation model performs far worse than one obtained by single-source, single-target adaptation. Aligning each of the multiple target domains with the source domain separately is, in turn, costly and time-consuming. Designing a method that can align multiple target domains without degrading model performance is therefore an important prerequisite for ground object segmentation of large-range remote sensing images.
The present method directly aligns the source domain with multiple target domains by constructing a multi-branch unsupervised domain adaptation model, and further aligns features by building a pseudo source domain data set through entropy minimization. In this way the segmentation model learns the characteristics of multiple domains at low cost, the domain shift between the source domain and the multiple target domains is well alleviated, and the performance of the segmentation model on large-range remote sensing images is further improved.
Disclosure of Invention
The invention aims to realize ground object segmentation of large-range remote sensing images, and provides a model that can directly align a source domain with a plurality of target domains, so that the segmentation model obtains better segmentation results on data from multiple domains at lower cost.
A multi-target unsupervised domain adaptation method for large-range ground object segmentation comprises the following steps:
Step 1: select the data of any one of a plurality of domains, label it and crop it to produce a source domain data set, and directly crop the images of the domains other than the selected one to produce a plurality of target domain data sets;
Step 2: feed the source domain data set and the target domain data sets into a multi-branch unsupervised domain adaptation model for training to obtain a segmentation model;
Step 3: feed the plurality of target domain data sets into the segmentation model, and build a pseudo source domain data set from the target domain images with high-confidence pseudo labels through entropy-minimization ranking;
Step 4: feed the pseudo source domain data set and the plurality of target domain data sets into the multi-branch unsupervised domain adaptation model for training to obtain the final segmentation model;
Step 5: feed all the remote sensing images into the model to obtain the result.
In step 1, when the data of the selected domain is labeled, the number of ground object categories recognized by the model is the same as the number of labeled categories.
In step 2, training the multi-branch unsupervised domain adaptation model to obtain a segmentation model comprises the following steps:
1) construct a multi-branch unsupervised domain adaptation model;
2) take the source domain data set and the plurality of target domain data sets as input, and train until the model performance is optimal.
In step 1), the multi-branch unsupervised domain adaptation model consists of a feature extractor, an invariant feature classifier, K target classifiers, K branch modules and K fine-grained discriminators, where K equals the number of target domains. The K target classifiers and the K branch modules are connected after the feature extractor, the K branch modules are connected to the K corresponding fine-grained discriminators respectively, and the outputs of the K branch modules are also connected to the invariant feature classifier.
In step 2), each module in the multi-branch unsupervised domain adaptation model is trained under the supervision of different data: the feature extractor and the invariant feature classifier are trained on all the data, while the k-th target classifier, the k-th branch module and the k-th fine-grained discriminator are trained on the source domain data set and the k-th target domain data set (k = 1, 2, …, K).
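As an illustration, the routing of a target-domain sample through the multi-branch model described above can be sketched as follows. This is a minimal numpy mock-up under stated assumptions, not the patented network: every module is stood in for by a random linear map, and all names and dimensions (`FEAT`, `INV`, `forward_target`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
K, C = 3, 5            # number of target domains / ground-object classes (illustrative)
FEAT, INV = 16, 8      # widths of the feature and invariant-feature spaces (hypothetical)

# Stand-ins for the trained modules: each is mocked as a random linear map.
extractor = rng.standard_normal((4, FEAT))                            # shared feature extractor
branches = [rng.standard_normal((FEAT, INV)) for _ in range(K)]       # K branch modules
inv_classifier = rng.standard_normal((INV, C))                        # shared invariant-feature classifier
tgt_classifiers = [rng.standard_normal((FEAT, C)) for _ in range(K)]  # K target classifiers

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward_target(x, k):
    """Route one target-domain sample through the k-th branch, per the data flow above."""
    f = x @ extractor                        # shared feature extractor
    inv = f @ branches[k]                    # k-th branch module -> invariant features
    pseudo = softmax(inv @ inv_classifier)   # invariant classifier -> pseudo label
    cls = softmax(f @ tgt_classifiers[k])    # k-th target classifier -> class prediction
    return pseudo, cls

pseudo, cls = forward_target(rng.standard_normal(4), k=1)
print(pseudo.shape, cls.shape)  # (5,) (5,)
```

Because the source-domain flow passes through all K branches while each target domain uses only its own branch, the shared extractor and invariant classifier see every domain, matching the supervision scheme described in the paragraph above.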
The loss function used to train the feature extractor is defined as follows:

L_{seg} = -\frac{1}{n_s} \sum_{X_s} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} y_s^{(h,w,c)} \log P_s^{(h,w,c)}

L_{adv}^{k} = -\frac{1}{n_t^k} \sum_{X_t^k} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} d_t^{(h,w,c)} \log D_k(F_t^k)^{(h,w,c)}

L_{cls}^{k} = -\frac{1}{n_t^k} \sum_{X_t^k} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} \hat{y}_t^{(h,w,c)} \log P_t^{k,(h,w,c)}

where L_{seg} is the segmentation loss from the source domain data, and L_{adv}^{k} and L_{cls}^{k} are the adversarial loss and classification loss generated by the target domain data. H, W and C are the height, width and number of categories of the input image. y_s^{(h,w,c)} is the source domain label and P_s^{(h,w,c)} the corresponding prediction. d_t^{(h,w,c)} is the domain label of the target domain image. D_k(F_t^k)^{(h,w,c)} is the prediction probability generated for the target domain image after it passes through the discriminator. \hat{y}_t^{(h,w,c)} is the pseudo label produced by the invariant feature classifier. P_t^{k,(h,w,c)} is the prediction probability of the target domain image on the k-th target classifier. n_s and n_t^k are the numbers of samples in the source domain and the k-th target domain. \lambda_{adv}^{k} and \lambda_{cls}^{k} are weighting factors that control the contribution of the adversarial loss and the classification loss to the overall objective L = L_{seg} + \sum_{k=1}^{K} (\lambda_{adv}^{k} L_{adv}^{k} + \lambda_{cls}^{k} L_{cls}^{k}).
The dimensionality of the domain label is the same as the number of ground object categories, and its value represents the probability predicted by the model that a pixel belongs to a certain category.
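Assuming the losses are standard cross-entropies as described above, the weighted combination of segmentation, adversarial and classification terms can be sketched numerically. The function names and the toy loss values below are illustrative, not taken from the patent.

```python
import numpy as np

def cross_entropy(target_prob, pred_prob, eps=1e-12):
    """Mean per-pixel cross-entropy between two (H, W, C) probability maps."""
    return float(-(target_prob * np.log(pred_prob + eps)).sum(axis=-1).mean())

def total_loss(l_seg, l_adv, l_cls, lam_adv, lam_cls):
    """L = L_seg + sum_k (lambda_adv^k * L_adv^k + lambda_cls^k * L_cls^k)."""
    return l_seg + sum(la * a + lc * c
                       for la, a, lc, c in zip(lam_adv, l_adv, lam_cls, l_cls))

# Toy 1x1 "maps" over 3 classes, with K = 2 target domains.
y_src = np.array([[[1.0, 0.0, 0.0]]])   # one-hot source label
p_src = np.array([[[0.7, 0.2, 0.1]]])   # source prediction
l_seg = cross_entropy(y_src, p_src)
loss = total_loss(l_seg, l_adv=[0.5, 0.6], l_cls=[0.3, 0.4],
                  lam_adv=[0.01, 0.01], lam_cls=[0.1, 0.1])
```

Per-domain weighting lists (`lam_adv`, `lam_cls`) mirror the per-domain factors \lambda_{adv}^{k} and \lambda_{cls}^{k} described in the text; actual values would be tuned per experiment.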
The data flow of the source domain data set in the multi-branch unsupervised domain adaptation model comprises the following steps:
1) generate a feature map from the source domain image through the feature extractor;
2) extract invariant features from the source domain feature map through all branch modules, feed the generated invariant features into the invariant feature classifier to produce a prediction result, and compute the segmentation loss;
3) feed the source domain invariant features into each discriminator, and train the discriminators.
The data flow of the target domain data sets in the multi-branch unsupervised domain adaptation model comprises the following steps:
1) generate a feature map from the target domain image through the feature extractor;
2) extract invariant features from the feature map of the k-th target domain image through the k-th branch module, feed the generated invariant features into the invariant feature classifier to produce a pseudo label, and feed the invariant features into the k-th discriminator to compute the adversarial loss;
3) feed the feature map of the k-th target domain image into the k-th target classifier, and compute the classification loss;
4) feed the invariant features of the k-th target domain into the k-th discriminator, and train the k-th discriminator.
in step 3, the target domain image is input into the model to obtain a prediction result, the image segmentation conditions are sequenced by calculating the entropy of the prediction result, and the image with high confidence coefficient is extracted.
In step 4, the high-confidence target domain label maps and the corresponding target domain images are used as a pseudo source domain, and the pseudo source domain and the target domain images are fed into the multi-branch unsupervised domain adaptation model for training.
In step 5, all the remote sensing images are cropped into small tiles the model can process and fed into the model to obtain the segmentation result.
Compared with the prior art, the invention has the following technical effects:
(1) The invention is a multi-target unsupervised domain adaptation model: only the data of one domain needs to be labeled, and the knowledge of the source domain can be transferred to the other domains at once. This makes the segmentation model easier to generalize and reduces the labor cost of ground object segmentation in large-range remote sensing images.
(2) By designing a multi-branch structure, the invention first aligns the source domain with the different target domains on separate branches, so the target domains are not aligned with each other directly, avoiding the poor results caused by excessively large gaps between target domains. Second, the branch modules in the multi-branch structure separate domain-specific features from domain-invariant features: the retained domain-specific features enable more effective segmentation, while the domain-invariant features let the model align the features of different domains.
(3) The method obtains high-confidence pseudo labels through entropy minimization and aligns the pseudo-labeled target domain images with the other target domain images, so the interior of each domain is also aligned, further improving the model performance after intra-domain alignment.
drawings
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of source domain data flow of the present invention;
FIG. 3 is a flow chart of a target domain data flow in the present invention;
FIG. 4 is a diagram of a network architecture according to the present invention;
Detailed Description
The invention aims to realize ground object segmentation of large-range remote sensing images, and provides a model that can directly align a source domain with a plurality of target domains, so that the segmentation model obtains better segmentation results on data from multiple domains at lower cost.
A multi-target unsupervised domain adaptation method for segmenting ground objects in a large-range remote sensing image comprises the following steps:
Step 1: select the data of one of a plurality of domains from the obtained large-range remote sensing image, label it and crop it to produce a source domain data set, and directly crop the images of the other domains to produce a plurality of target domain data sets;
Step 2: feed the source domain data set and the target domain data sets into a multi-branch unsupervised domain adaptation model for training to obtain a segmentation model;
Step 3: feed the plurality of target domain data sets into the segmentation model, and build a pseudo source domain data set from the target domain images with high-confidence pseudo labels through entropy-minimization ranking;
Step 4: feed the pseudo source domain data set and the plurality of target domain data sets into the multi-branch unsupervised domain adaptation model for training to obtain the final segmentation model;
Step 5: feed all the remote sensing images into the model to obtain the result.
in step 2, when training the multi-branch unsupervised domain adaptive model to obtain a segmentation model, the method includes the following steps:
1) constructing a multi-branch unsupervised domain self-adaptive model;
2) taking the source domain data set and the plurality of target domain data sets as input, and training until the model performance reaches the optimum;
in step 1), the multi-branch unsupervised domain adaptive model is composed of a feature extractor, an invariant feature classifier, K target classifiers, K branch modules and K fine-grained discriminators (the value of K is equal to the number of target domains). The system comprises a feature extractor, a K target classifiers and K branch modules, wherein the K target classifiers and the K branch modules are connected behind the feature extractor, the K branch modules are respectively connected with K corresponding fine-grained discriminators, and meanwhile, the outputs of the K branch modules are also connected with the invariant feature classifiers.
In step 2), training of each module in the multi-branch unsupervised domain adaptive model is affected by different data. Wherein the training of the feature extractor and invariant feature classifier is affected by all data; the kth target classifier, the kth branch module and the kth fine-grain discriminator training are influenced by the source domain dataset and the kth target domain dataset (K ═ 1,2, ·, K).
The loss function used to train the feature extractor is defined as follows:

L_{seg} = -\frac{1}{n_s} \sum_{X_s} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} y_s^{(h,w,c)} \log P_s^{(h,w,c)}

L_{adv}^{k} = -\frac{1}{n_t^k} \sum_{X_t^k} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} d_t^{(h,w,c)} \log D_k(F_t^k)^{(h,w,c)}

L_{cls}^{k} = -\frac{1}{n_t^k} \sum_{X_t^k} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} \hat{y}_t^{(h,w,c)} \log P_t^{k,(h,w,c)}

where L_{seg} is the segmentation loss from the source domain data, and L_{adv}^{k} and L_{cls}^{k} are the adversarial loss and classification loss generated by the target domain data. H, W and C are the height, width and number of categories of the input image. y_s^{(h,w,c)} is the source domain label and P_s^{(h,w,c)} the corresponding prediction. d_t^{(h,w,c)} is the domain label of the target domain image. D_k(F_t^k)^{(h,w,c)} is the prediction probability generated for the target domain image after it passes through the discriminator. \hat{y}_t^{(h,w,c)} is the pseudo label produced by the invariant feature classifier. P_t^{k,(h,w,c)} is the prediction probability of the target domain image on the k-th target classifier. n_s and n_t^k are the numbers of samples in the source domain and the k-th target domain. \lambda_{adv}^{k} and \lambda_{cls}^{k} are weighting factors that control the contribution of the adversarial loss and the classification loss to the overall objective L = L_{seg} + \sum_{k=1}^{K} (\lambda_{adv}^{k} L_{adv}^{k} + \lambda_{cls}^{k} L_{cls}^{k}).
The data flow of the source domain data set in the multi-branch unsupervised domain adaptation model comprises the following steps:
1) generate a feature map from the source domain image through the feature extractor;
2) extract invariant features from the source domain feature map through all branch modules, feed the generated invariant features into the invariant feature classifier to produce a prediction result, and compute the segmentation loss;
3) feed the source domain invariant features into each discriminator, and train the discriminators.
The data flow of the target domain data sets in the multi-branch unsupervised domain adaptation model comprises the following steps:
1) generate a feature map from the target domain image through the feature extractor;
2) extract invariant features from the feature map of the k-th target domain image through the k-th branch module, feed the generated invariant features into the invariant feature classifier to produce a pseudo label, and feed the invariant features into the k-th discriminator to compute the adversarial loss;
3) feed the feature map of the k-th target domain image into the k-th target classifier, and compute the classification loss;
4) feed the invariant features of the k-th target domain into the k-th discriminator, and train the k-th discriminator.
in step 3, the target domain image is input into the model to obtain a prediction result, the image segmentation conditions are sequenced by calculating the entropy of the prediction result, and the image with high confidence coefficient is extracted.
In step 4, the target domain label graph with high confidence coefficient and the corresponding target domain image are used as a pseudo source domain, and the pseudo source domain and the target domain image are input into a multi-branch unsupervised domain adaptive model for training.
In step 5, all remote sensing images are cut into small pictures which can be identified by the model, and the model is input to obtain a segmentation result.
Example:
the invention provides a method for segmenting ground objects in a large-range remote sensing image on the remote sensing image, which is carried out according to the following modes:
step 1: acquiring data of one of a plurality of fields from a large-range remote sensing image, and marking different ground object categories through ENVI; cutting the obtained image and the corresponding label to prepare a source domain data set, and directly cutting the images in other fields to prepare a plurality of target domain data sets; the cropped image size is 512 × 512 pixels.
Step 2: construct a multi-branch unsupervised domain adaptation model consisting of a feature extractor, an invariant feature classifier, K target classifiers, K branch modules and K fine-grained discriminators, where K equals the number of target domains.
Step 3: define the loss functions of the model. The loss function used to train the feature extractor is defined as follows:

L_{seg} = -\frac{1}{n_s} \sum_{X_s} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} y_s^{(h,w,c)} \log P_s^{(h,w,c)}

L_{adv}^{k} = -\frac{1}{n_t^k} \sum_{X_t^k} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} d_t^{(h,w,c)} \log D_k(F_t^k)^{(h,w,c)}

L_{cls}^{k} = -\frac{1}{n_t^k} \sum_{X_t^k} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} \hat{y}_t^{(h,w,c)} \log P_t^{k,(h,w,c)}

where L_{seg} is the segmentation loss from the source domain data, and L_{adv}^{k} and L_{cls}^{k} are the adversarial loss and classification loss generated by the target domain data. H, W and C are the height, width and number of categories of the input image. y_s^{(h,w,c)} is the source domain label and P_s^{(h,w,c)} the corresponding prediction. d_t^{(h,w,c)} is the domain label of the target domain image. D_k(F_t^k)^{(h,w,c)} is the prediction probability generated for the target domain image after it passes through the discriminator. \hat{y}_t^{(h,w,c)} is the pseudo label produced by the invariant feature classifier. P_t^{k,(h,w,c)} is the prediction probability of the target domain image on the k-th target classifier. n_s and n_t^k are the numbers of samples in the source domain and the k-th target domain. \lambda_{adv}^{k} and \lambda_{cls}^{k} are weighting factors that control the contribution of the adversarial loss and the classification loss to the overall objective L = L_{seg} + \sum_{k=1}^{K} (\lambda_{adv}^{k} L_{adv}^{k} + \lambda_{cls}^{k} L_{cls}^{k}).
Step 4: train the model using the source domain data set and the plurality of target domain data sets as input. The data flow of the source domain data set in the multi-branch unsupervised domain adaptation model is: generate a feature map from the source domain image through the feature extractor; extract invariant features from the source domain feature map through all branch modules, feed the generated invariant features into the invariant feature classifier to produce a prediction result, and compute the segmentation loss; feed the source domain invariant features into each discriminator and train the discriminators.
The data flow of the target domain data sets in the multi-branch unsupervised domain adaptation model is: generate a feature map from the target domain image through the feature extractor; extract invariant features from the feature map of the k-th target domain image through the k-th branch module, feed the generated invariant features into the invariant feature classifier to produce a pseudo label, and feed the invariant features into the k-th discriminator to compute the adversarial loss; feed the feature map of the k-th target domain image into the k-th target classifier and compute the classification loss; feed the invariant features of the k-th target domain into the k-th discriminator and train it.
and 5: the image of the target domain is input into the model to obtain pseudo labels, the confidence degrees of the image pseudo labels of the target domain are sequenced through entropy minimization, the target domain images with high confidence degrees are selected to be made into a pseudo source domain data set, the pseudo source domain data set and the images of the target domain are input into the multi-branch unsupervised domain self-adaptive model to be trained again, and a final segmentation model is obtained.
Step 6: crop all the remote sensing images into small tiles the model can process, and feed them into the model to obtain the segmentation result.

Claims (9)

1. A multi-target unsupervised domain adaptation method for large-range ground object segmentation, comprising the following steps:
step 1: selecting the data of any one of a plurality of domains from the obtained large-range remote sensing image, labeling and cropping the data to produce a source domain data set, and directly cropping the images of the domains other than the selected one to produce a plurality of target domain data sets;
step 2: feeding the source domain data set and the target domain data sets into a multi-branch unsupervised domain adaptation model for training to obtain a segmentation model;
step 3: feeding the plurality of target domain data sets into the segmentation model, and building a pseudo source domain data set from the target domain images with high-confidence pseudo labels through entropy-minimization ranking;
step 4: feeding the pseudo source domain data set and the plurality of target domain data sets into the multi-branch unsupervised domain adaptation model for training to obtain a final segmentation model;
step 5: feeding all the remote sensing images into the model to obtain a segmentation result.
2. The method according to claim 1, wherein in step 2, training the multi-branch unsupervised domain adaptation model to obtain the segmentation model comprises the following steps:
1) constructing a multi-branch unsupervised domain adaptation model;
2) taking the source domain data set and the plurality of target domain data sets as input, and training until the model performance is optimal.
3. The method according to claim 2, wherein in step 1), the multi-branch unsupervised domain adaptation model consists of a feature extractor, an invariant feature classifier, K target classifiers, K branch modules and K fine-grained discriminators; the feature extractor is connected to the K target classifiers and the K branch modules, and the K branch modules are connected to the K corresponding fine-grained discriminators respectively.
4. The method according to claim 2, wherein in step 2), each module in the multi-branch unsupervised domain adaptation model is trained under the supervision of different data, wherein the feature extractor and the invariant feature classifier are trained on all the data, and the k-th target classifier, the k-th branch module and the k-th fine-grained discriminator are trained on the source domain data set and the k-th target domain data set (k = 1, 2, …, K).
5. The method of claim 4, wherein the loss function used to train the feature extractor is defined as follows:

L_{seg} = -\frac{1}{n_s} \sum_{X_s} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} y_s^{(h,w,c)} \log P_s^{(h,w,c)}

L_{adv}^{k} = -\frac{1}{n_t^k} \sum_{X_t^k} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} d_t^{(h,w,c)} \log D_k(F_t^k)^{(h,w,c)}

L_{cls}^{k} = -\frac{1}{n_t^k} \sum_{X_t^k} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} \hat{y}_t^{(h,w,c)} \log P_t^{k,(h,w,c)}

wherein L_{seg} is the segmentation loss from the source domain data, L_{adv}^{k} and L_{cls}^{k} are the adversarial loss and classification loss produced by the target domain data, H, W, C are the height, width and number of categories of the input image, y_s^{(h,w,c)} is the source domain label and P_s^{(h,w,c)} the corresponding prediction, d_t^{(h,w,c)} is the domain label of the target domain image, D_k(F_t^k)^{(h,w,c)} is the prediction probability generated for the target domain image after it passes through the discriminator, \hat{y}_t^{(h,w,c)} is the pseudo label produced by the invariant feature classifier, P_t^{k,(h,w,c)} is the prediction probability of the target domain image on the k-th target classifier, n_t^k is the number of samples of the k-th target domain, and \lambda_{adv}^{k} and \lambda_{cls}^{k} are weighting factors that control the influence of the adversarial loss and the classification loss.
6. The method according to one of claims 2 to 5, wherein the data flow of the source domain data set in the multi-branch unsupervised domain adaptation model comprises the following steps:
1) generating a feature map from the source domain image through the feature extractor;
2) extracting invariant features from the source domain feature map through all branch modules, feeding the generated invariant features into the invariant feature classifier to produce a prediction result, and computing the segmentation loss;
3) feeding the source domain invariant features into each discriminator and training the discriminators.
7. The method according to one of claims 2 to 5, wherein the data flow of the plurality of target domain data sets in the multi-branch unsupervised domain adaptation model comprises the following steps:
1) generating a feature map from the target domain image through the feature extractor;
2) extracting invariant features from the feature map of the k-th target domain image through the k-th branch module, feeding the generated invariant features into the invariant feature classifier to produce a pseudo label, and feeding the invariant features into the k-th discriminator to compute the adversarial loss;
3) feeding the feature map of the k-th target domain image into the k-th target classifier, and computing the classification loss;
4) feeding the invariant features of the k-th target domain into the k-th discriminator, and training the k-th discriminator.
8. The method according to claim 1, wherein in step 3, the target domain images are fed into the model to obtain prediction results, the segmentation quality of the images is ranked by computing the entropy of the prediction results, and the images with high confidence are extracted.
9. The method according to claim 1, wherein in step 4, the high-confidence target domain label maps and the corresponding target domain images are used as a pseudo source domain, and the pseudo source domain and the target domain images are fed into the multi-branch unsupervised domain adaptation model for training.
CN202111423886.7A 2021-11-26 2021-11-26 Multi-target unsupervised domain self-adaption method for large-range ground object segmentation Active CN114220003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111423886.7A CN114220003B (en) 2021-11-26 2021-11-26 Multi-target unsupervised domain self-adaption method for large-range ground object segmentation


Publications (2)

Publication Number Publication Date
CN114220003A true CN114220003A (en) 2022-03-22
CN114220003B CN114220003B (en) 2022-10-21

Family

ID=80698523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111423886.7A Active CN114220003B (en) 2021-11-26 2021-11-26 Multi-target unsupervised domain self-adaption method for large-range ground object segmentation

Country Status (1)

Country Link
CN (1) CN114220003B (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062753A (en) * 2017-12-29 2018-05-22 重庆理工大学 Unsupervised domain-adaptive brain tumor semantic segmentation method based on deep adversarial learning
CN109948648A (en) * 2019-01-31 2019-06-28 中山大学 Multi-target-domain adaptive transfer method and system based on meta adversarial learning
US20190354801A1 (en) * 2018-05-16 2019-11-21 Nec Laboratories America, Inc. Unsupervised cross-domain distance metric adaptation with feature transfer network
US20200082221A1 (en) * 2018-09-06 2020-03-12 Nec Laboratories America, Inc. Domain adaptation for instance detection and segmentation
US20200089966A1 (en) * 2018-09-13 2020-03-19 Nec Laboratories America, Inc. Recognizing fine-grained objects in surveillance camera images
CN111291705A (en) * 2020-02-24 2020-06-16 北京交通大学 Cross-multi-target-domain pedestrian re-identification method
CN111382871A (en) * 2020-03-11 2020-07-07 中国人民解放军军事科学院国防科技创新研究院 Domain generalization and domain adaptation learning method based on data-augmentation consistency
CN112991353A (en) * 2021-03-12 2021-06-18 北京航空航天大学 Unsupervised semantic segmentation method for cross-domain remote sensing image
CN113096137A (en) * 2021-04-08 2021-07-09 济南大学 Adaptive segmentation method and system for OCT (optical coherence tomography) retinal image field
CN113128411A (en) * 2021-04-22 2021-07-16 深圳市格灵精睿视觉有限公司 Cross-domain capture identification method and device, electronic equipment and storage medium
CN113255823A (en) * 2021-06-15 2021-08-13 中国人民解放军国防科技大学 Unsupervised domain adaptation method and unsupervised domain adaptation device
CN113486827A (en) * 2021-07-13 2021-10-08 上海中科辰新卫星技术有限公司 Multi-source remote sensing image transfer learning method based on domain adversarial training and self-supervision
CN113536972A (en) * 2021-06-28 2021-10-22 华东师范大学 Self-supervised cross-domain crowd counting method based on target-domain pseudo labels


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANTOINE SAPORTA ET AL: "Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation", arXiv *
TAKASHI ISOBE ET AL: "Multi-Target Domain Adaptation with Collaborative Consistency Learning", arXiv *
DING YIPENG ET AL: "Weakly supervised domain adaptation algorithm for semantic segmentation of remote sensing images", Computer Engineering and Applications *
LI JINGJING ET AL: "A survey of domain adaptation research", Computer Engineering *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082725A (en) * 2022-05-17 2022-09-20 西北工业大学 Multi-source domain self-adaption method based on reliable sample selection and double-branch dynamic network
CN115082725B (en) * 2022-05-17 2024-02-23 西北工业大学 Multi-source domain self-adaption method based on reliable sample selection and double-branch dynamic network

Also Published As

Publication number Publication date
CN114220003B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
Farabet et al. Scene parsing with multiscale feature learning, purity trees, and optimal covers
CN102968637B (en) Complicated background image and character division method
CN106897681B (en) Remote sensing image contrast analysis method and system
CN113936217A (en) Priori semantic knowledge guided high-resolution remote sensing image weakly supervised building change detection method
CN111639564B (en) Video pedestrian re-identification method based on multi-attention heterogeneous network
CN112819065B (en) Unsupervised pedestrian sample mining method and unsupervised pedestrian sample mining system based on multi-clustering information
CN112906606B (en) Domain self-adaptive pedestrian re-identification method based on mutual divergence learning
CN112364721A (en) Road surface foreign matter detection method
CN113221848B (en) Hyperspectral open set field self-adaption method based on multi-classifier domain confrontation network
CN108805102A (en) Video caption detection and recognition method and system based on deep learning
CN108537816A (en) Salient object segmentation method based on superpixels and a background-connectivity prior
CN112488229A (en) Domain self-adaptive unsupervised target detection method based on feature separation and alignment
Zheng et al. Active discriminative dictionary learning for weather recognition
CN107609509A (en) Action recognition method based on motion salient region detection
Han et al. A method based on multi-convolution layers joint and generative adversarial networks for vehicle detection
CN104834891A (en) Method and system for filtering Chinese character image type spam
CN114220003B (en) Multi-target unsupervised domain self-adaption method for large-range ground object segmentation
CN104573701A (en) Automatic detection method of corn tassel traits
Floros et al. Multi-Class Image Labeling with Top-Down Segmentation and Generalized Robust P^N Potentials.
Nong et al. Boundary-aware dual-stream network for VHR remote sensing images semantic segmentation
Deepan et al. Road recognition from remote sensing imagery using machine learning
Qin et al. Application of video scene semantic recognition technology in smart video
CN117132804B (en) Hyperspectral image classification method based on causal cross-domain small sample learning
Han et al. Segmentation Is Not the End of Road Extraction: An All-Visible Denoising Auto-Encoder for Connected and Smooth Road Reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant