CN114220003B - Multi-target unsupervised domain self-adaption method for large-range ground object segmentation - Google Patents

Multi-target unsupervised domain self-adaption method for large-range ground object segmentation Download PDF

Info

Publication number
CN114220003B
CN114220003B CN202111423886.7A CN202111423886A CN114220003B
Authority
CN
China
Prior art keywords
domain
target
branch
image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111423886.7A
Other languages
Chinese (zh)
Other versions
CN114220003A (en)
Inventor
任东
刘明
何雨岩
向杰
安毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202111423886.7A priority Critical patent/CN114220003B/en
Publication of CN114220003A publication Critical patent/CN114220003A/en
Application granted granted Critical
Publication of CN114220003B publication Critical patent/CN114220003B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

A multi-target unsupervised domain self-adaptive method for large-range ground object segmentation comprises the following steps: step 1: selecting the data of one of the multiple domains contained in the acquired large-range remote sensing imagery, annotating and cropping it to build a source domain data set, and directly cropping the images of the other domains to build a plurality of target domain data sets; step 2: putting the source domain data set and the target domain data sets into a multi-branch unsupervised domain self-adaptive model for training to obtain a segmentation model; step 3: putting the target domain data sets into the segmentation model, and building a pseudo source domain data set from the target domain images whose pseudo labels have high confidence, ranked by entropy minimization; step 4: putting the pseudo source domain data set and the target domain data sets into the multi-branch unsupervised domain self-adaptive model for training to obtain the final segmentation model; step 5: putting all the remote sensing images into the model to obtain the segmentation result.

Description

Multi-target unsupervised domain self-adaption method for large-range ground object segmentation
Technical Field
The invention relates to the technical field of semantic segmentation in remote sensing images, in particular to a multi-target unsupervised domain self-adaptive method for large-range ground object segmentation.
Background
Land resource utilization is closely related to human activities, and accurately acquiring land use information is of great significance for environmental protection, the development of dominant economic industries, disaster prevention and the like. With the development of remote sensing technology and deep learning, segmenting ground features in remote sensing images with semantic segmentation algorithms is regarded as an efficient and convenient approach.
A semantic segmentation algorithm can learn strong feature extraction capability under the supervision of a large amount of data, but the trained model is not always effective elsewhere. The acquisition of remote sensing images is affected by many factors, so the distributions of image characteristics differ, and this difference prevents the semantic segmentation model from obtaining the expected results on images it was not trained on. In such a scenario, the labeled images used for training are referred to as source domain images, and the unlabeled images are referred to as target domain images. Segmenting the ground objects in remote sensing images over a large area therefore becomes difficult, since the drastic changes of objects and scenes captured in such images are hard to segment with a single model trained on data sets collected at a few specific locations.
To address this problem, traditional unsupervised domain adaptation methods align the features of a source domain and a target domain so that the model obtains better segmentation results on data from different domains. However, these domain adaptation methods all assume a single source domain and a single target domain, which is still problematic in practice: a large-range remote sensing image may contain multiple domains, and if all target domains are merged into one target domain for domain-adaptive learning, the resulting segmentation model performs far worse than one obtained by single-source, single-target adaptation. Aligning each target domain with the source domain separately is, in turn, costly and time-consuming. Therefore, designing a method that can align multiple target domains without degrading model performance is an important prerequisite for ground feature segmentation of large-range remote sensing images.
In this method, the source domain and the multiple target domains are aligned directly by constructing a multi-branch unsupervised domain self-adaptive model, and the features are further aligned by constructing a pseudo source domain data set through entropy minimization. In this way the segmentation model learns the characteristics of multiple domains at low cost and the domain shift between the source domain and the multiple target domains is well alleviated, so the performance of the segmentation model on large-range remote sensing images is further improved.
Disclosure of Invention
The invention aims to realize ground feature segmentation of large-range remote sensing images, and provides a model that can directly align a source domain with a plurality of target domains, so that the segmentation model obtains better segmentation results on data from multiple domains at lower cost.
A multi-target unsupervised domain self-adaptive method for large-range ground object segmentation comprises the following steps:
step 1: selecting the data of any one of a plurality of domains, annotating and cropping it to build a source domain data set, and directly cropping the images of the other domains to build a plurality of target domain data sets;
step 2: putting the source domain data set and the target domain data sets into a multi-branch unsupervised domain self-adaptive model for training to obtain a segmentation model;
step 3: inputting the plurality of target domain data sets into the segmentation model, and building a pseudo source domain data set from the target domain images with high-confidence pseudo labels, ranked by entropy minimization;
step 4: putting the pseudo source domain data set and the plurality of target domain data sets into the multi-branch unsupervised domain self-adaptive model for training to obtain a final segmentation model;
step 5: putting all the remote sensing images into the model to obtain the segmentation result;
in step 1, when the data of the selected region is marked, the feature type recognized by the model is the same as the number of marked types.
In step 2, when training the multi-branch unsupervised domain adaptive model to obtain a segmentation model, the method includes the following steps:
1) Constructing a multi-branch unsupervised domain self-adaptive model;
2) Taking the source domain data set and the plurality of target domain data sets as input, and training until the model performance reaches the optimum;
in step 1), the multi-branch unsupervised domain adaptive model is composed of a feature extractor, an invariant feature classifier, K target classifiers, K branch modules and K fine-grained discriminators (the value of K is equal to the number of target domains). The system comprises a feature extractor, a K target classifiers and K branch modules, wherein the K target classifiers and the K branch modules are connected behind the feature extractor, the K branch modules are respectively connected with K corresponding fine-grained discriminators, and meanwhile, the outputs of the K branch modules are also connected with the invariant feature classifiers.
In step 2), training of each module in the multi-branch unsupervised domain self-adaptive model is supervised by different data: the feature extractor and the invariant feature classifier are trained on all data, while the kth target classifier, the kth branch module and the kth fine-grained discriminator are trained on the source domain data set and the kth target domain data set (k = 1, 2, ..., K).
The loss function used to train the feature extractor combines a segmentation loss L_seg, an adversarial loss and a classification loss; the defining formulas are given as images in the original publication. L_seg is the segmentation loss produced by the source domain data, while the adversarial loss and the classification loss are produced by the target domain data. The adversarial loss of the kth target domain is computed from the domain label of the target domain image and the prediction probability output for that image by the corresponding fine-grained discriminator; the classification loss is computed from the prediction probability output for that image by the kth target classifier. n_s and the number of samples of the kth target domain are used to average the source domain and target domain terms, and weighting factors control the influence of the adversarial loss and the classification loss.
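A compact reconstruction of the overall objective, in standard notation, is sketched below; the symbols L_adv^k, L_cls^k, λ_adv^k, λ_cls^k and the exact form of the sum are assumptions made for illustration, since the patent's own formulas are only published as images.

```latex
% Assumed reconstruction of the feature-extractor objective (not the patent's verbatim formulas).
% L_seg: source-domain segmentation loss; L_adv^k, L_cls^k: adversarial and classification
% losses of the k-th target domain; lambda_adv^k, lambda_cls^k: weighting factors.
\mathcal{L}_{F} \;=\; \mathcal{L}_{seg}
  \;+\; \sum_{k=1}^{K}\left(\lambda_{adv}^{k}\,\mathcal{L}_{adv}^{k}
  \;+\; \lambda_{cls}^{k}\,\mathcal{L}_{cls}^{k}\right)
```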
The dimensionality of the domain label is the same as the number of ground object classes, and each value of the domain label represents the predicted probability that a pixel belongs to the corresponding class.
The data flow of the source domain data set in the multi-branch unsupervised domain adaptive model comprises the following steps:
s 1) generating a feature map by a feature extractor from the source domain image;
s 2) extracting invariant features from the source domain feature map through all branch modules, inputting the generated invariant features into an invariant feature classifier to generate a prediction result, and calculating segmentation loss;
s 3) inputting the source domain invariant features into the discriminator and training the discriminator;
the data flow of the multiple target domain data sets in the multi-branch unsupervised domain adaptive model comprises the following steps:
s-1) generating a feature map from the target domain image through a feature extractor;
s-2) extracting invariant features from a feature map generated by the image of the kth target domain through a kth branch module, inputting the generated invariant features into an invariant feature classifier to generate a pseudo label, and inputting the invariant features into a kth discriminator to calculate the adversarial loss;
s-3) inputting a feature map generated by the image of the kth target domain into a kth target classifier, and calculating classification loss;
s-4) inputting the invariant features of the kth target domain into the kth discriminator to train the kth discriminator;
in step 3, the target domain image is input into the model to obtain a prediction result, the image segmentation conditions are sequenced by calculating the entropy of the prediction result, and the image with high confidence coefficient is extracted.
In step 4, the high-confidence target domain label maps and the corresponding target domain images are used as a pseudo source domain, and the pseudo source domain and the target domain images are input into the multi-branch unsupervised domain self-adaptive model for training.
In step 5, all the remote sensing images are cropped into small tiles that the model can process, and the tiles are input into the model to obtain the segmentation result.
Compared with the prior art, the invention has the following technical effects:
(1) The invention is a multi-target unsupervised domain self-adaptive model that requires annotating the data of only one domain, so the knowledge of the source domain can be migrated to the other domains in a single pass. The segmentation model therefore generalizes better, while the labor cost of ground object segmentation in large-range remote sensing images is reduced;
(2) By designing a multi-branch structure, the invention first aligns the source domain with the different target domains on different branches, so the target domains are not aligned with each other directly and poor results caused by excessive gaps between target domains are avoided; second, the branch modules in the multi-branch structure separate the domain-specific features from the domain-invariant features, so the domain-specific features are retained for more effective segmentation, while the domain-invariant features allow the model to align the features of different domains;
(3) The method obtains high-confidence pseudo labels through entropy minimization and aligns the pseudo-labeled target domain images with the other target domain images, so that the domains are also aligned internally and the performance of the model is further improved after intra-domain alignment;
drawings
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of source domain data flow of the present invention;
FIG. 3 is a flow chart of a target domain data flow in the present invention;
FIG. 4 is a diagram of a network architecture according to the present invention;
Detailed Description
The invention aims to realize the segmentation of the ground features in a wide-range remote sensing image, and provides a model which can directly align a source and a plurality of target domains, so that the segmentation model can obtain a better segmentation result on data of a plurality of domains with lower cost.
A multi-target unsupervised domain self-adaptive method for segmenting ground features in a large-range remote sensing image comprises the following steps:
step 1: selecting the data of one of a plurality of domains from the obtained large-range remote sensing image, annotating and cropping it to build a source domain data set, and directly cropping the images of the other domains to build a plurality of target domain data sets;
step 2: putting the source domain data set and the target domain data sets into a multi-branch unsupervised domain self-adaptive model for training to obtain a segmentation model;
step 3: inputting the plurality of target domain data sets into the segmentation model, and building a pseudo source domain data set from the target domain images with high-confidence pseudo labels, ranked by entropy minimization;
step 4: putting the pseudo source domain data set and the plurality of target domain data sets into the multi-branch unsupervised domain self-adaptive model for training to obtain a final segmentation model;
step 5: putting all the remote sensing images into the model to obtain the segmentation result;
in step 2, when training the multi-branch unsupervised domain adaptive model to obtain a segmentation model, the method includes the following steps:
1) Constructing a multi-branch unsupervised domain self-adaptive model;
2) Taking the source domain data set and the plurality of target domain data sets as input, and training until the model performance reaches the optimum;
in step 1), the multi-branch unsupervised domain adaptive model is composed of a feature extractor, an invariant feature classifier, K target classifiers, K branch modules and K fine-grained discriminators (the value of K is equal to the number of target domains). The system comprises a feature extractor, a K target classifiers and K branch modules, wherein the K target classifiers and the K branch modules are connected behind the feature extractor, the K branch modules are respectively connected with K corresponding fine-grained discriminators, and meanwhile, the outputs of the K branch modules are also connected with the invariant feature classifiers.
In step 2), training of each module in the multi-branch unsupervised domain adaptive model is supervised by different data: the feature extractor and the invariant feature classifier are trained on all data, while the kth target classifier, the kth branch module and the kth fine-grained discriminator are trained on the source domain data set and the kth target domain data set (k = 1, 2, ..., K).
The loss function used to train the feature extractor combines a segmentation loss L_seg, an adversarial loss and a classification loss; the defining formulas are given as images in the original publication. L_seg is the segmentation loss produced by the source domain data, while the adversarial loss and the classification loss are produced by the target domain data. The adversarial loss of the kth target domain is computed from the domain label of the target domain image and the prediction probability output for that image by the corresponding fine-grained discriminator; the classification loss is computed from the prediction probability output for that image by the kth target classifier. n_s and the number of samples of the kth target domain are used to average the source domain and target domain terms, and weighting factors control the influence of the adversarial loss and the classification loss.
The data flow of the source domain data set in the multi-branch unsupervised domain adaptive model comprises the following steps:
s 1) generating a feature map by a feature extractor from the source domain image;
s 2) extracting invariant features from the source domain feature map through all the branch modules, inputting the generated invariant features into an invariant feature classifier to generate a prediction result, and calculating the segmentation loss;
s 3) inputting the source domain invariant features into the discriminator and training the discriminator;
the data flow of the multiple target domain data sets in the multi-branch unsupervised domain adaptive model comprises the following steps:
s-1) generating a feature map by the target domain image through a feature extractor;
s-2) extracting invariant features from a feature map generated by the image of the kth target domain through a kth branch module, inputting the generated invariant features into an invariant feature classifier to generate a pseudo label, and inputting the invariant features into a kth discriminator to calculate the adversarial loss;
s-3) inputting a feature map generated by the image of the kth target domain into the kth target classifier, and calculating classification loss;
s-4) inputting the invariant features of the kth target domain into the kth discriminator to train the kth discriminator;
in step 3, the target domain image is input into the model to obtain a prediction result, the image segmentation conditions are sequenced by calculating the entropy of the prediction result, and the image with high confidence coefficient is extracted.
In step 4, the high-confidence target domain label maps and the corresponding target domain images are used as a pseudo source domain, and the pseudo source domain and the target domain images are input into the multi-branch unsupervised domain self-adaptive model for training.
In step 5, all the remote sensing images are cropped into small tiles that the model can process, and the tiles are input into the model to obtain the segmentation result.
Example:
the invention provides a method for segmenting ground objects in a large-range remote sensing image on the remote sensing image, which is carried out according to the following modes:
step 1: acquiring data of one of a plurality of fields from a large-range remote sensing image, and marking different ground object categories through ENVI; cutting the obtained image and the corresponding label to prepare a source domain data set, and directly cutting the images in other fields to prepare a plurality of target domain data sets; the cropped image size is 512 × 512 pixels.
Step 2: Constructing a multi-branch unsupervised domain self-adaptive model, which consists of a feature extractor, an invariant feature classifier, K target classifiers, K branch modules and K fine-grained discriminators (the value of K is equal to the number of target domains).
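A minimal PyTorch sketch of such a multi-branch model is shown below; the backbone, channel sizes and all class and attribute names are illustrative assumptions rather than the patent's exact network configuration.

```python
# Illustrative sketch only: module names, channel sizes and the simple convolutional
# backbone are assumptions; the patent does not publish its exact network configuration.
import torch
import torch.nn as nn

class MultiBranchUDAModel(nn.Module):
    def __init__(self, num_classes: int, num_targets: int, feat_ch: int = 256):
        super().__init__()
        # Shared feature extractor (stand-in for any segmentation backbone).
        self.extractor = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # K branch modules: extract domain-invariant features, one branch per target domain.
        self.branches = nn.ModuleList(
            [nn.Conv2d(feat_ch, feat_ch, 1) for _ in range(num_targets)]
        )
        # One invariant-feature classifier shared by all branches.
        self.invariant_classifier = nn.Conv2d(feat_ch, num_classes, 1)
        # K target classifiers operating on the raw feature map.
        self.target_classifiers = nn.ModuleList(
            [nn.Conv2d(feat_ch, num_classes, 1) for _ in range(num_targets)]
        )
        # K fine-grained discriminators (one per target domain), per-pixel domain scores.
        self.discriminators = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(feat_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
                           nn.Conv2d(64, num_classes, 1)) for _ in range(num_targets)]
        )

    def forward(self, x: torch.Tensor, branch: int):
        feat = self.extractor(x)                      # shared feature map
        inv = self.branches[branch](feat)             # domain-invariant features of branch k
        seg = self.invariant_classifier(inv)          # segmentation logits
        tgt = self.target_classifiers[branch](feat)   # target-classifier logits
        return feat, inv, seg, tgt
```

For example, a setting with 6 ground object classes and 3 target domains would be instantiated as MultiBranchUDAModel(num_classes=6, num_targets=3); both numbers are only illustrative here.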
Step 3: A loss function in the model is defined. The loss function used to train the feature extractor combines a segmentation loss L_seg, an adversarial loss and a classification loss; the defining formulas are given as images in the original publication. L_seg is the segmentation loss produced by the source domain data, while the adversarial loss and the classification loss are produced by the target domain data. The adversarial loss of the kth target domain is computed from the domain label of the target domain image and the prediction probability output for that image by the corresponding fine-grained discriminator; the classification loss is computed from the prediction probability output for that image by the kth target classifier. n_s and the number of samples of the kth target domain are used to average the source domain and target domain terms, and weighting factors control the influence of the adversarial loss and the classification loss.
Step 4: The model is trained using the source domain data set and the plurality of target domain data sets as inputs. The data flow of the source domain data set in the multi-branch unsupervised domain adaptive model is as follows: generating a feature map from the source domain image through the feature extractor; extracting invariant features from the source domain feature map through all branch modules, inputting the generated invariant features into the invariant feature classifier to generate a prediction result, and calculating the segmentation loss; and inputting the source domain invariant features into the discriminator and training the discriminator.
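Using the assumed MultiBranchUDAModel above, the source-domain pass can be sketched as follows; the binary domain labels and the omission of the fine-grained class weighting are simplifying assumptions, and optimizer steps are left out.

```python
# Sketch of the source-domain data flow (assumed names; simplified losses).
import torch
import torch.nn.functional as F

def source_step(model, x_s, y_s, num_targets):
    """x_s: source images (B,3,H,W); y_s: integer label maps (B,H,W), dtype long."""
    feat = model.extractor(x_s)
    seg_loss = 0.0
    disc_losses = []
    for k in range(num_targets):
        inv = model.branches[k](feat)                       # invariant features on every branch
        logits = model.invariant_classifier(inv)            # prediction via invariant classifier
        seg_loss = seg_loss + F.cross_entropy(logits, y_s)  # segmentation loss
        # Train the k-th discriminator to recognise source features (domain label 0 assumed);
        # the class-wise weighting of a fine-grained discriminator is omitted for brevity.
        d_out = model.discriminators[k](inv.detach())
        disc_losses.append(
            F.binary_cross_entropy_with_logits(d_out, torch.zeros_like(d_out))
        )
    return seg_loss / num_targets, disc_losses
```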
The data flow of the multiple target domain data sets in the multi-branch unsupervised domain adaptive model is as follows: generating a feature map from the target domain image through the feature extractor; extracting invariant features from the feature map generated for the image of the kth target domain through the kth branch module, inputting the generated invariant features into the invariant feature classifier to generate a pseudo label, and inputting the invariant features into the kth discriminator to calculate the adversarial loss; inputting the feature map generated for the image of the kth target domain into the kth target classifier and calculating the classification loss; and inputting the invariant features of the kth target domain into the kth discriminator to train the kth discriminator.
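The corresponding sketch of the kth target-domain pass, under the same assumptions, computes the pseudo label, the classification loss, the adversarial loss and the discriminator loss:

```python
# Sketch of the k-th target-domain data flow (assumed names; simplified losses).
import torch
import torch.nn.functional as F

def target_step(model, x_t, k):
    feat = model.extractor(x_t)
    inv = model.branches[k](feat)
    # Pseudo label from the invariant-feature classifier.
    pseudo = model.invariant_classifier(inv).argmax(dim=1).detach()
    # Classification loss of the k-th target classifier against the pseudo label.
    cls_logits = model.target_classifiers[k](feat)
    cls_loss = F.cross_entropy(cls_logits, pseudo)
    # Adversarial loss: push target features to look like source (assumed domain label 0).
    d_out = model.discriminators[k](inv)
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.zeros_like(d_out))
    # Discriminator update: recognise target features as target (assumed domain label 1).
    d_out_det = model.discriminators[k](inv.detach())
    disc_loss = F.binary_cross_entropy_with_logits(d_out_det, torch.ones_like(d_out_det))
    return adv_loss, cls_loss, disc_loss
```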
and 5: the image of the target domain is input into the model to obtain pseudo labels, the confidence degrees of the image pseudo labels of the target domain are sequenced through entropy minimization, the target domain images with high confidence degrees are selected to be made into a pseudo source domain data set, the pseudo source domain data set and the images of the target domain are input into the multi-branch unsupervised domain self-adaptive model to be trained again, and a final segmentation model is obtained.
Step 6: All the remote sensing images are cropped into small tiles that the model can process, and the tiles are input into the model to obtain the segmentation result.
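Finally, whole-scene inference can be sketched as non-overlapping 512 × 512 tiling followed by stitching of the per-tile label maps; the reflect padding at the borders is an assumption.

```python
# Sketch: segment a large scene by tiling it into 512x512 crops, predicting each crop
# with the trained model, and stitching the label maps back together.
import numpy as np
import torch

@torch.no_grad()
def segment_scene(model, scene: np.ndarray, branch: int, tile: int = 512) -> np.ndarray:
    """scene: (H, W, 3) uint8 array; returns an (H, W) label map."""
    h, w, _ = scene.shape
    pad_h, pad_w = (-h) % tile, (-w) % tile
    padded = np.pad(scene, ((0, pad_h), (0, pad_w), (0, 0)), mode="reflect")
    out = np.zeros(padded.shape[:2], dtype=np.int64)
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            crop = padded[y:y + tile, x:x + tile]
            inp = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            _, _, seg, _ = model(inp, branch)
            out[y:y + tile, x:x + tile] = seg.argmax(dim=1).squeeze(0).numpy()
    return out[:h, :w]
```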

Claims (3)

1. A multi-target unsupervised domain self-adaptive method for large-range ground object segmentation comprises the following steps:
step 1: selecting the data of any one of a plurality of domains from the obtained large-range remote sensing image, cropping the data after annotation to produce a source domain data set, and directly cropping the images of the other domains to produce a plurality of target domain data sets;
step 2: putting the source domain data set and the plurality of target domain data sets into a multi-branch unsupervised domain self-adaptive model for training to obtain a segmentation model;
step 3: inputting the plurality of target domain data sets into the segmentation model, and making the target domain images with high-confidence pseudo labels into a pseudo source domain data set through entropy-minimization ranking;
step 4: putting the pseudo source domain data set and the plurality of target domain data sets into the multi-branch unsupervised domain self-adaptive model for training to obtain a final segmentation model;
step 5: putting all the remote sensing images into the model to obtain a segmentation result;
in step 2, when the multi-branch unsupervised domain adaptive model is trained to obtain a segmentation model, the method comprises the following steps:
1) Constructing a multi-branch unsupervised domain self-adaptive model;
2) Taking the source domain data set and the plurality of target domain data sets as input, and training until the model performance reaches the optimum;
in the step 1), the multi-branch unsupervised domain self-adaptive model consists of a feature extractor, an invariant feature classifier, K target classifiers, K branch modules and K fine-grained discriminators; the K target classifiers and the K branch modules are connected behind the feature extractor, each of the K branch modules is connected to its corresponding fine-grained discriminator, and the outputs of the K branch modules are also connected to the invariant feature classifier;
in step 2), training each module in the multi-branch unsupervised domain self-adaptive model is supervised by different data, wherein the feature extractor and the invariant feature classifier are trained by all data; the kth target classifier, the kth branch module and the kth fine-grained discriminator are trained through a source domain data set and a kth target domain data set;
the loss function used to train the feature extractor is defined as follows:
the defining formulas are given as images in the original publication; the loss combines a segmentation loss L_seg, an adversarial loss and a classification loss, wherein L_seg is the segmentation loss produced by the source domain data, the adversarial loss and the classification loss are produced by the target domain data, the adversarial loss is computed from the domain label of the target domain image and the prediction probability generated after the target domain image passes through the discriminator, the classification loss is computed from the prediction probability of the target domain image on the kth target classifier, the number of samples of the kth target domain is used to average the target domain terms, and weighting factors control the influence of the adversarial loss and the classification loss;
the data flow of the source domain data set in the multi-branch unsupervised domain adaptive model comprises the following steps:
s 1) generating a feature map by a feature extractor from the source domain image;
s 2) extracting invariant features from the source domain feature map through all the branch modules, inputting the generated invariant features into an invariant feature classifier to generate a prediction result, and calculating the segmentation loss;
s 3) inputting the source domain invariant features into a discriminator and training the discriminator;
the data flow of the multiple target domain data sets in the multi-branch unsupervised domain adaptive model comprises the following steps:
s-1) generating a feature map by the target domain image through a feature extractor;
s-2) extracting invariant features from a feature map generated by the image of the kth target domain through a kth branch module, inputting the generated invariant features into an invariant feature classifier to generate a pseudo label, and inputting the invariant features into a kth discriminator to calculate the adversarial loss;
s-3) inputting a feature map generated by the image of the kth target domain into the kth target classifier, and calculating classification loss;
s-4) inputting the invariant feature of the kth target domain into the kth discriminator, and training the kth discriminator.
2. The method according to claim 1, wherein in the step 3, the target domain images are input into the model to obtain prediction results, and the images are ranked by calculating the entropy of the prediction results so as to extract the images with high confidence.
3. The method according to claim 1, wherein in step 4, the target domain label map with high confidence and the corresponding target domain image are used as a pseudo source domain, and the pseudo source domain and the target domain image are input into a multi-branch unsupervised domain adaptive model for training.
CN202111423886.7A 2021-11-26 2021-11-26 Multi-target unsupervised domain self-adaption method for large-range ground object segmentation Active CN114220003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111423886.7A CN114220003B (en) 2021-11-26 2021-11-26 Multi-target unsupervised domain self-adaption method for large-range ground object segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111423886.7A CN114220003B (en) 2021-11-26 2021-11-26 Multi-target unsupervised domain self-adaption method for large-range ground object segmentation

Publications (2)

Publication Number Publication Date
CN114220003A CN114220003A (en) 2022-03-22
CN114220003B true CN114220003B (en) 2022-10-21

Family

ID=80698523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111423886.7A Active CN114220003B (en) 2021-11-26 2021-11-26 Multi-target unsupervised domain self-adaption method for large-range ground object segmentation

Country Status (1)

Country Link
CN (1) CN114220003B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082725B (en) * 2022-05-17 2024-02-23 西北工业大学 Multi-source domain self-adaption method based on reliable sample selection and double-branch dynamic network

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948648A (en) * 2019-01-31 2019-06-28 中山大学 A kind of multiple target domain adaptive migration method and system based on member confrontation study
CN111291705A (en) * 2020-02-24 2020-06-16 北京交通大学 Cross-multi-target-domain pedestrian re-identification method
CN111382871A (en) * 2020-03-11 2020-07-07 中国人民解放军军事科学院国防科技创新研究院 Domain generalization and domain self-adaptive learning method based on data expansion consistency
CN113096137A (en) * 2021-04-08 2021-07-09 济南大学 Adaptive segmentation method and system for OCT (optical coherence tomography) retinal image field
CN113128411A (en) * 2021-04-22 2021-07-16 深圳市格灵精睿视觉有限公司 Cross-domain capture identification method and device, electronic equipment and storage medium
CN113255823A (en) * 2021-06-15 2021-08-13 中国人民解放军国防科技大学 Unsupervised domain adaptation method and unsupervised domain adaptation device
CN113486827A (en) * 2021-07-13 2021-10-08 上海中科辰新卫星技术有限公司 Multi-source remote sensing image transfer learning method based on domain confrontation and self-supervision
CN113536972A (en) * 2021-06-28 2021-10-22 华东师范大学 Self-supervision cross-domain crowd counting method based on target domain pseudo label

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062753B (en) * 2017-12-29 2020-04-17 重庆理工大学 Unsupervised domain self-adaptive brain tumor semantic segmentation method based on deep counterstudy
US10885383B2 (en) * 2018-05-16 2021-01-05 Nec Corporation Unsupervised cross-domain distance metric adaptation with feature transfer network
US10915792B2 (en) * 2018-09-06 2021-02-09 Nec Corporation Domain adaptation for instance detection and segmentation
US11087142B2 (en) * 2018-09-13 2021-08-10 Nec Corporation Recognizing fine-grained objects in surveillance camera images
CN112991353B (en) * 2021-03-12 2022-10-18 北京航空航天大学 Unsupervised semantic segmentation method for cross-domain remote sensing image

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948648A (en) * 2019-01-31 2019-06-28 中山大学 A kind of multiple target domain adaptive migration method and system based on member confrontation study
CN111291705A (en) * 2020-02-24 2020-06-16 北京交通大学 Cross-multi-target-domain pedestrian re-identification method
CN111382871A (en) * 2020-03-11 2020-07-07 中国人民解放军军事科学院国防科技创新研究院 Domain generalization and domain self-adaptive learning method based on data expansion consistency
CN113096137A (en) * 2021-04-08 2021-07-09 济南大学 Adaptive segmentation method and system for OCT (optical coherence tomography) retinal image field
CN113128411A (en) * 2021-04-22 2021-07-16 深圳市格灵精睿视觉有限公司 Cross-domain capture identification method and device, electronic equipment and storage medium
CN113255823A (en) * 2021-06-15 2021-08-13 中国人民解放军国防科技大学 Unsupervised domain adaptation method and unsupervised domain adaptation device
CN113536972A (en) * 2021-06-28 2021-10-22 华东师范大学 Self-supervision cross-domain crowd counting method based on target domain pseudo label
CN113486827A (en) * 2021-07-13 2021-10-08 上海中科辰新卫星技术有限公司 Multi-source remote sensing image transfer learning method based on domain confrontation and self-supervision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation; Antoine Saporta et al; arXiv; 20210916; Sections 3.2-3.3, Figs. 2-3 *
Multi-Target Domain Adaptation with Collaborative Consistency Learning; Takashi Isobe et al; arXiv; 20210608; 1-10 *
Weakly supervised domain adaptation algorithm for semantic segmentation of remote sensing images; 丁一鹏 et al; Computer Engineering and Applications; 20210825; 1-10 *
A survey of domain adaptation research; 李晶晶 et al; Computer Engineering; 20210630; Vol. 47, No. 6; 1-13 *

Also Published As

Publication number Publication date
CN114220003A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
Saha et al. Unsupervised deep change vector analysis for multiple-change detection in VHR images
Wu et al. Rapid target detection in high resolution remote sensing images using YOLO model
WO2020163455A1 (en) Automatic optimization of machine learning algorithms in the presence of target datasets
CN108875816A (en) Merge the Active Learning samples selection strategy of Reliability Code and diversity criterion
CN105574505A (en) Human body target re-identification method and system among multiple cameras
CN112906606B (en) Domain self-adaptive pedestrian re-identification method based on mutual divergence learning
CN112819065B (en) Unsupervised pedestrian sample mining method and unsupervised pedestrian sample mining system based on multi-clustering information
Tung et al. Collageparsing: Nonparametric scene parsing by adaptive overlapping windows
CN108537816A (en) A kind of obvious object dividing method connecting priori with background based on super-pixel
CN111783521A (en) Pedestrian re-identification method based on low-rank prior guidance and based on domain invariant information separation
CN112364721A (en) Road surface foreign matter detection method
CN102750712A (en) Moving object segmenting method based on local space-time manifold learning
CN108805102A (en) A kind of video caption detection and recognition methods and system based on deep learning
CN114220003B (en) Multi-target unsupervised domain self-adaption method for large-range ground object segmentation
CN104063701B (en) Fast electric television stations TV station symbol recognition system and its implementation based on SURF words trees and template matches
CN107609509A (en) A kind of action identification method based on motion salient region detection
CN109214430A (en) A kind of recognition methods again of the pedestrian based on feature space topology distribution
Zhang et al. Few-shot object detection with self-adaptive global similarity and two-way foreground stimulator in remote sensing images
Xu et al. Hierarchical online domain adaptation of deformable part-based models
CN114782752A (en) Small sample image grouping classification method and device based on self-training
CN114329031A (en) Fine-grained bird image retrieval method based on graph neural network and deep hash
Casagrande et al. Abnormal motion analysis for tracking-based approaches using region-based method with mobile grid
CN104517127A (en) Self-learning pedestrian counting method and apparatus based on Bag-of-features model
CN113627443A (en) Domain self-adaptive semantic segmentation method for enhancing feature space counterstudy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant