CN114220016A - Unmanned aerial vehicle aerial image domain adaptive identification method oriented to open scene - Google Patents

Unmanned aerial vehicle aerial image domain adaptive identification method oriented to open scene

Info

Publication number
CN114220016A
CN114220016A (application CN202210159053.2A)
Authority
CN
China
Prior art keywords: image, domain, model, representing, source domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210159053.2A
Other languages
Chinese (zh)
Other versions
CN114220016B (en)
Inventor
高文飞
王瑞雪
王磊
王辉
郭丽丽
Current Assignee
Shandong Rongling Technology Group Co ltd
Original Assignee
Shandong Rongling Technology Group Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Rongling Technology Group Co ltd filed Critical Shandong Rongling Technology Group Co ltd
Priority to CN202210159053.2A priority Critical patent/CN114220016B/en
Publication of CN114220016A publication Critical patent/CN114220016A/en
Application granted granted Critical
Publication of CN114220016B publication Critical patent/CN114220016B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention belongs to the technical field of unmanned aerial vehicle image processing, and relates to a domain adaptive identification method for unmanned aerial vehicle aerial images in open scenes. The method comprises the following steps: inputting a labeled source domain image and an unlabeled target domain image taken aerially by an unmanned aerial vehicle; obtaining a source domain model; calibrating the source domain model; calculating, from the sample output features of each image, an information entropy value, an energy value, a confidence value and a distance to the cluster center, and computing a weight for each image; designing a threshold; obtaining the target model; and updating the weights to finally obtain a model with effective identification. The method can effectively reduce the labeling required for target image recognition and save manpower and material resources. In addition, the adversarial loss is weighted on the target domain data during optimization, which reduces the uncertainty of the optimization process.

Description

Unmanned aerial vehicle aerial image domain adaptive identification method oriented to open scene
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle image processing, and relates to a domain adaptive identification method for unmanned aerial vehicle aerial images in open scenes.
Background
The domain adaptive identification technology for unmanned aerial vehicle aerial images in open scenes refers to a system or method that fully automatically identifies both known images and new images when distribution shifts and new image categories appear during aerial shooting. Nowadays, unmanned aerial vehicle aerial photography, as a convenient means of capturing images, has been applied in specific fields such as water conservancy construction, geological exploration, and film and television shooting. The biggest difficulty in image recognition in open scenes is that the captured images are highly dynamic, complex and diverse: various unexpected situations are difficult to cover fully in the training data.
In practical applications, the identical-distribution assumption and relatively closed, static, controllable conditions are often not satisfied. In unmanned aerial vehicle aerial image identification in particular, objective factors such as changes in illumination, weather, shooting angle (front and side views) and shooting area (land, air and water) cause the training samples and test samples to deviate substantially from the identical-distribution assumption, so the generalization performance of a standard aerial image identification system cannot achieve the expected effect. Moreover, the static (closed-set) condition is difficult to satisfy: especially in an open dynamic environment, new categories may appear in the test samples that the system cannot label correctly, and when the aerial photography system identifies such images, the newly appearing categories need to be marked. In designing a domain adaptive identification technology for unmanned aerial vehicle aerial images in open scenes, how to learn effectively when the probability distributions of the training set (source domain) and the test set (target domain) are inconsistent and new classes are present is a serious challenge.
In recent years, no prior work has addressed this systematic problem of practical unmanned aerial vehicle aerial image recognition, although there is some related work on domain adaptive learning in open environments. For the domain adaptation problem in which the test set contains classes absent from the training set, existing work mainly focuses on selecting new classes via information entropy or confidence, and some work selects reliable new classes in an ensemble manner.
Although the prior art methods have met with some success, they suffer from the following disadvantages:
(1) The criteria for new-class selection (information entropy, confidence) are limited. Too few criteria for measuring sample uncertainty result in low accuracy of the selected new classes.
(2) The information entropy, confidence and energy are all computed from the output values of the network. Previous research computes them directly, but because the network is over-confident, the computed entropy, confidence and energy values are biased. Ignoring this over-confidence problem is another important reason for the low accuracy of new-class selection.
(3) Some new classes may remain among the selected target domain images. Most work ignores this problem and uses all target domain images for domain adaptation, so the new classes may be partially or fully aligned, which can reduce the generalization of the model; the uncertainty of the target domain is not measured.
Disclosure of Invention
Aiming at the problem that the generalization performance of traditional unmanned aerial vehicle aerial image identification systems cannot achieve the expected effect, the invention provides a novel domain adaptive identification method for unmanned aerial vehicle aerial images in open scenes.
In order to achieve the purpose, the invention is realized by adopting the following technical scheme:
a domain self-adaptive identification method for unmanned aerial vehicle aerial images in open scenes comprises the following steps:
(1) inputting a labeled source domain image and an unlabeled target domain image taken aerially by the unmanned aerial vehicle;
(2) obtaining a source domain model from the labeled source domain images via cross-entropy loss;
(3) initializing a target domain model with the source domain model and calibrating the initialized model: a validation set of 10% is selected from each class of source domain images to calibrate the source domain model;
(4) inputting the target domain data into the calibrated source domain model, calculating, from the sample output features of each image, an information entropy value, an energy value, a confidence value and a distance to the cluster center, and computing a weight for each image;
(5) designing a threshold, treating image samples whose weight does not meet the set threshold as new classes, and giving the remaining images the weight calculated in step (4);
(6) aligning the remaining trusted, weighted target domain images with all source domain images using a common domain adaptation method to fine-tune the source domain model and obtain the target model; calculating the confidence of each class, selecting the samples with the top 10% confidence among the target domain images pseudo-labeled as each class as a standby validation set, and using it to calibrate the target domain model during iteration so as to avoid over-confident network outputs;
repeating steps (4) to (6) in continuous iteration and continuously updating the weights, to finally obtain a model that accurately selects the new classes and effectively identifies the remaining target domain images.
Further, the source domain model in step (2) is obtained with the cross-entropy loss:
$$ L_{CE}(S) = -\,\mathbb{E}_{(x_s,y_s)\sim S}\sum_{k=1}^{K} \mathbb{1}[k=y_s]\,\log \sigma_k\!\left(G(F(x_s))\right) $$
where S denotes all source domain images, L_{CE}(S) the cross-entropy loss of all source domain images, E the expectation, x_s a source domain image, y_s the label class of the source domain image, 1 the indicator function, σ the softmax function, and log the logarithm; F denotes the feature extractor module of the depth model and G the classifier module of the depth model.
Preferably, the step (3) is specifically operated as follows:
10% of the data of each class of source domain data is selected as a validation set, and the initialized target domain model is calibrated as follows:
$$ \min_{T_1}\; -\,\mathbb{E}_{(x_v,y_v)\sim S_v}\sum_{k=1}^{K} \mathbb{1}[k=y_v]\,\log \sigma_k\!\left(G(F(x_v))/T_1\right) $$
where the optimization target is T_1 (the temperature parameter), x_v denotes an image of the selected source domain validation set, y_v the label information of the selected source domain validation set, and E the expectation; the other symbols are as above.
Preferably, the information entropy calculation formula in step (4) is as follows:
$$ H(x_t) = -\sum_{k=1}^{K} p_k(x_t)\,\log p_k(x_t) $$
where x_t denotes a target domain image and p(x_t) denotes the probability vector, output by the softmax function from the last layer of the model, that the target domain image belongs to each class;
the energy value calculation formula is as follows:
$$ E(x_t) = -\,T\,\log\sum_{k=1}^{K}\exp\!\left(f_k(x_t)/T\right) $$
where f_k denotes the logit vector output by the model for the sample, T denotes the temperature parameter, k denotes the class index, and K denotes the total number of classes.
The confidence calculation formula is as follows:
$$ C(x_t) = \max_k\, p_k(x_t) $$
where p(x_t) denotes the probability vector, output by the softmax function from the last layer of the model, that the target domain image belongs to each class.
The calculation formula of the distance is as follows:
$$ c_k = \frac{\sum_t C_k(x_t)\,F(x_t)}{\sum_t C_k(x_t)}, \qquad d(x_t) = 1 - \cos\!\left(F(x_t),\, c_{\hat{k}}\right) $$
where F(x_t) denotes the feature of the target domain image output by the feature extractor, C_k(x_t) denotes the output confidence of belonging to the k-th class, and \hat{k} denotes the predicted class.
The weight w(x_t) compared against the threshold in step (5) is calculated as follows:
[Equation image not reproduced in the source text: w(x_t) combines the calibrated entropy, energy, confidence and distance values of the image into a single weight.]
the calculation formula of the step (6) is as follows:
$$ L_{adv} = -\,\mathbb{E}_{x_i\sim S\cup T}\; w(x_i)\left[d_i\,\log D(F(x_i)) + (1-d_i)\,\log\left(1 - D(F(x_i))\right)\right] $$
where d_i denotes the binary domain label of the i-th image and D the domain discriminator; w = 1 for source domain image samples, and w equals the weight value calculated in step (4) for target domain images.
The trained target domain model is then calibrated. A validation set is selected based on the confidence formula in step (4), taking the 10% of each class with maximum confidence as a standby validation set; the calibration formula is:
$$ \min_{T_2}\; -\,\mathbb{E}_{(x_v,y_v)\sim T_v}\sum_{k=1}^{K} \mathbb{1}[k=y_v]\,\log \sigma_k\!\left(G(F(x_v))/T_2\right) $$
where the optimization target is T_2 (the temperature parameter), x_v denotes an image of the selected target domain validation set, y_v the pseudo-label information of the selected target domain validation set, and E the expectation; the other symbols are as above.
Compared with the prior art, the invention has the advantages and positive effects that:
(1) The method provided by the invention can effectively reduce the labeling required for target image identification in open scenes, greatly saving manpower and material resources.
(2) The invention provides a framework for identifying new classes based on multiple criteria; multiple reasonable criteria identify new classes more effectively and alleviate the severe new-class problem in open scenes.
(3) Model calibration effectively alleviates the over-confidence of the network outputs, so that the entropy, energy and confidence values subsequently computed from the network outputs are more reasonable and new images are identified more accurately.
(4) An adversarial loss weighted on the target domain data is used during optimization, which further reduces the uncertainty of the optimization process.
Drawings
FIG. 1 is a flow chart of training a weighted domain adaptive image model to select a new class based on a plurality of calibrated criteria.
Detailed Description
In order that the above objects, features and advantages of the present invention may be more clearly understood, the present invention will be further described with reference to specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and thus the present invention is not limited to the specific embodiments of the present disclosure.
Example 1
As shown in fig. 1, the present embodiment provides specific steps of a domain adaptive identification method for an aerial image of an unmanned aerial vehicle in an open scene:
(1) input source domain image S (x)s,ys) And a target domain image T (x)t) And the source domain image is provided with a label, and the target domain image is not provided with the label. The purpose of identification is to label all target domain data (y)t) And pick out the new class.
S denotes a set of given source domain images, T denotes a set of given target domain images, xsRepresenting a source domain image, ysLabel, x, representing a source domain imagetRepresenting the target field image, ytA label representing target domain data that needs to be predicted.
(2) A source domain model is obtained via cross-entropy loss using all labeled source domain data:
$$ L_{CE}(S) = -\,\mathbb{E}_{(x_s,y_s)\sim S}\sum_{k=1}^{K} \mathbb{1}[k=y_s]\,\log \sigma_k\!\left(G(F(x_s))\right) $$
where S denotes all source domain images, L_{CE}(S) the cross-entropy loss of all source domain images, E the expectation, x_s a source domain image, y_s the label class of the source domain image, 1 the indicator function, σ the softmax function, and log the logarithm. F denotes the feature extractor module of the depth model and G the classifier module of the depth model.
(3) Model calibration.
The target domain model is initialized from the source domain model, and the initialized target domain model is calibrated.
10% of the source domain data is selected as a validation set S_v(x_v, y_v), and the simplest calibration mode is chosen to find a suitable T_1:
$$ \min_{T_1}\; -\,\mathbb{E}_{(x_v,y_v)\sim S_v}\sum_{k=1}^{K} \mathbb{1}[k=y_v]\,\log \sigma_k\!\left(G(F(x_v))/T_1\right) $$
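The temperature search can be sketched as a simple grid search minimizing the validation negative log-likelihood; the patent only says "the simplest calibration mode", so the grid and its range here are assumptions:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def calibrate_temperature(val_logits, val_labels, grid=None):
    """Pick the temperature that minimizes NLL on the validation set S_v."""
    if grid is None:
        grid = np.linspace(0.5, 5.0, 46)  # assumed search range, step 0.1
    n = val_logits.shape[0]
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        p = softmax(val_logits, T=T)
        nll = -np.log(p[np.arange(n), val_labels] + 1e-12).mean()
        if nll < best_nll:
            best_T, best_nll = float(T), float(nll)
    return best_T
```

When the network is over-confident but often wrong on validation data, the search picks a large T (softening the outputs); when predictions are reliable, it picks a small T.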
(4) The target domain data is input into the target domain model, and for each image the information entropy value, energy value, confidence value and distance to the cluster center are calculated from the sample output features, from which a weight is computed for each image.
The calculation formula of the information entropy value is as follows:
$$ H(x_t) = -\sum_{k=1}^{K} p_k(x_t)\,\log p_k(x_t) $$
x_t denotes the target domain image; p(x_t) denotes the probability vector, output by the softmax function from the last layer of the model, that the image belongs to each class (before output, the softmax input is divided by the temperature parameter T_1 or T_2 obtained in the preceding calibration step; the same applies to the calculations below).
In information theory, a smaller entropy value indicates lower uncertainty of a sample. We can therefore use this property to select samples with higher entropy as new-class samples: the model cannot identify them accurately, which results in higher uncertainty.
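A minimal sketch of the entropy criterion applied to a calibrated probability vector p(x_t) (illustrative NumPy, not the authors' code):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """H(x_t) = -sum_k p_k log p_k; higher values mean higher uncertainty."""
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())
```

A uniform two-class vector gives ln 2 ≈ 0.693, a one-hot vector gives ~0, so high-entropy target images become new-class candidates.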
The energy value is calculated as follows:
$$ E(x_t) = -\,T\,\log\sum_{k=1}^{K}\exp\!\left(f_k(x_t)/T\right) $$
where f_k denotes the logit vector output by the model for the sample, T denotes the temperature parameter, k denotes the class index, and K denotes the total number of classes (K does not include the new classes). The energy is normalized in the subsequent calculation process.
From an analysis of its physical properties and published work on energy scores, the energy value can be used to distinguish in-distribution samples from out-of-distribution samples. Here we regard the new classes as out-of-distribution samples, so energy is a strong criterion for discriminating new classes. The more negative the energy, the more likely the sample is in-distribution; the less negative the energy, the more likely it is out-of-distribution. We can therefore use this property to select samples whose energy has a small negative magnitude as new classes.
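The energy score can be sketched with a numerically stable log-sum-exp (illustrative only; `T` mirrors the temperature parameter above):

```python
import numpy as np

def energy_score(logits, T=1.0):
    """E(x_t) = -T * log sum_k exp(f_k / T), computed stably via log-sum-exp."""
    z = np.asarray(logits, dtype=float) / T
    m = z.max()
    return float(-T * (m + np.log(np.exp(z - m).sum())))
```

Confident logits such as [10, 0] give an energy near -10, while ambiguous logits [0, 0] give -ln 2 ≈ -0.69; less-negative energies therefore flag out-of-distribution (new-class) candidates.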
The confidence is calculated as follows:
$$ C(x_t) = \max_k\, p_k(x_t) $$
p(x_t) denotes the probability vector, output by the softmax function from the last layer of the model, that the target domain image belongs to each class. The confidence here is the probability value of the predicted class, i.e. the maximum of the output probability vector.
A higher confidence indicates that the model is more certain about the sample; to some extent, only the samples the model is less certain about can be regarded as out-of-distribution, i.e. samples with lower confidence may be considered out-of-distribution samples.
The calculation formula of the distance is as follows.
First, simple clustering (similar to weighted k-means) is performed on the output features of the model to obtain the cluster center of each class:
$$ c_k = \frac{\sum_t C_k(x_t)\,F(x_t)}{\sum_t C_k(x_t)} $$
F(x_t) denotes the feature of the target domain image output by the feature extractor, and C_k(x_t) denotes the output confidence of belonging to the k-th class. As with the confidence criterion just computed, the confidence here serves as the weight of a feature.
Then the cosine distance between each sample and the cluster center of its predicted class is calculated:
$$ d(x_t) = 1 - \cos\!\left(F(x_t),\, c_{\hat{k}}\right) $$
Intuitively, the farther a sample is from the cluster center, the more likely it lies at the cluster boundary; such samples can be regarded as out-of-distribution, i.e. new classes.
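The confidence-weighted clustering and cosine-distance criterion can be sketched as follows (illustrative NumPy; `features` stands for F(x_t) and `probs` for the calibrated softmax outputs):

```python
import numpy as np

def cluster_centers(features, probs):
    """Confidence-weighted class centers: c_k = sum_t C_k(x_t) F(x_t) / sum_t C_k(x_t)."""
    probs = np.asarray(probs, dtype=float)       # (N, K) soft class assignments
    features = np.asarray(features, dtype=float)  # (N, D) extracted features
    return (probs.T @ features) / probs.sum(axis=0)[:, None]

def distance_to_center(features, probs):
    """Cosine distance of each sample to the center of its predicted class."""
    centers = cluster_centers(features, probs)
    pred = np.asarray(probs).argmax(axis=1)
    c = centers[pred]
    f = np.asarray(features, dtype=float)
    cos = (f * c).sum(axis=1) / (np.linalg.norm(f, axis=1) * np.linalg.norm(c, axis=1) + 1e-12)
    return 1.0 - cos
```

Samples sitting inside a tight cluster get a distance near zero, while a feature pointing between clusters gets a clearly larger distance, marking it as a boundary (new-class) candidate.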
(5) Designing a threshold value, and picking out a new class.
The final w is calculated as follows:
[Equation image not reproduced in the source text: w combines the calibrated entropy, energy, confidence and distance values of the image into a single weight.]
By comparing w with a set threshold w_0, samples whose weight does not meet the threshold (w < w_0) can be regarded as new classes and picked out.
All remaining target domain images are given their weight w.
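Since the equation image for w is not reproduced in the source text, the following aggregation is only an assumption: each criterion is min-max normalized over the batch so that larger means "more in-distribution", and the four are averaged. The patent's exact combination may differ:

```python
import numpy as np

def _min_max(v):
    """Scale a batch of values into [0, 1]; constant input maps to zeros."""
    v = np.asarray(v, dtype=float)
    rng = v.max() - v.min()
    return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

def sample_weights(entropies, energies, confidences, distances):
    """ASSUMED aggregation: high weight = likely known class.
    Low entropy, very negative energy, high confidence and a small
    distance to the cluster center all push the weight up."""
    score = ((1 - _min_max(entropies)) + (1 - _min_max(energies))
             + _min_max(confidences) + (1 - _min_max(distances)))
    return score / 4.0
```

Under this assumed scheme, samples with w below the threshold w_0 would be treated as new classes, while the rest keep their w as the weight in the adversarial alignment of step (6).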
(6) The remaining trusted, weighted target domain images are aligned with all source domain images using a common domain adaptation method to fine-tune the source domain model, yielding the target model.
Here we use a common adversarial approach to align all source domain images and target domain images, except that each of our target domain images carries a weight, obtained through step (4), that estimates its uncertainty.
The adversarial loss:
$$ L_{adv} = -\,\mathbb{E}_{x_i\sim S\cup T}\; w(x_i)\left[d_i\,\log D(F(x_i)) + (1-d_i)\,\log\left(1 - D(F(x_i))\right)\right] $$
d_i denotes the binary label of the i-th image, indicating whether the sample belongs to the source domain or the target domain, and D denotes the domain discriminator. w = 1 for source domain samples, and w equals the weight value calculated in step (4) for target domain images, which mitigates their uncertainty.
The source domain data continues to be used for further training to optimize the source domain model:
$$ L_{CE}(S) = -\,\mathbb{E}_{(x_s,y_s)\sim S}\sum_{k=1}^{K} \mathbb{1}[k=y_s]\,\log \sigma_k\!\left(G(F(x_s))\right) $$
The final optimization loss is:
$$ L = L_{CE}(S) + \lambda\, L_{adv} $$
where λ is a trade-off parameter.
The obtained target domain model is then calibrated.
A portion of the data is selected from the target domain data used in training as a validation set: the samples with the top 10% confidence of each class are selected as the validation set T_v(x_v, y_v), and a suitable T_2 is found:
$$ \min_{T_2}\; -\,\mathbb{E}_{(x_v,y_v)\sim T_v}\sum_{k=1}^{K} \mathbb{1}[k=y_v]\,\log \sigma_k\!\left(G(F(x_v))/T_2\right) $$
Steps (4) to (6) are iterated and the model is continuously optimized, finally yielding a model that accurately selects the new classes and effectively identifies the remaining target domain images.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention to other forms. Any person skilled in the art may, using the technical content disclosed above, modify it into equivalent embodiments with equivalent changes; any simple modification, equivalent change or variation made to the above embodiments according to the technical spirit of the present invention still falls within the protection scope of the present invention.

Claims (4)

1. A domain adaptive identification method for unmanned aerial vehicle aerial images in open scenes, characterized by comprising the following steps:
(1) inputting a labeled source domain image and an unlabeled target domain image taken aerially by the unmanned aerial vehicle;
(2) obtaining a source domain model from the labeled source domain images via cross-entropy loss;
(3) initializing a target domain model with the source domain model and calibrating the initialized target domain model: a validation set of 10% is selected from the source domain images to calibrate the source domain model;
(4) inputting the target domain data into the calibrated target domain model, calculating, from the sample output features of each image, an information entropy value, an energy value, a confidence value and a distance to the cluster center, and computing a weight for each image;
(5) designing a threshold, treating image samples whose weight does not meet the set threshold as new classes, and giving the remaining images the weight calculated in step (4);
(6) aligning the remaining trusted, weighted target domain images with all source domain images using a common domain adaptation method to fine-tune the source domain model and obtain the target domain model; calculating the confidence of the samples predicted as each class, selecting the samples with the top 10% confidence from the target domain images as a standby validation set, and using it to calibrate the target domain model during iteration;
repeating steps (4) to (6) in continuous iteration and continuously updating the weights, to finally obtain a model that accurately selects the new classes and effectively identifies the remaining target domain images.
2. The method for domain adaptive identification of unmanned aerial vehicle aerial images in open scenes according to claim 1, characterized in that the formula of the source domain model in step (2) is as follows:
$$ L_{CE}(S) = -\,\mathbb{E}_{(x_s,y_s)\sim S}\sum_{k=1}^{K} \mathbb{1}[k=y_s]\,\log \sigma_k\!\left(G(F(x_s))\right) $$
where S denotes all source domain images, L_{CE}(S) the cross-entropy loss of all source domain images, E the expectation, x_s a source domain image, y_s the label class of the source domain image, 1 the indicator function, σ the softmax function, and log the logarithm; F denotes the feature extractor module of the depth model and G the classifier module of the depth model.
3. The method for domain adaptive identification of unmanned aerial vehicle aerial images in open scenes according to claim 1, characterized in that step (3) specifically operates as follows:
a part of the data is selected from the source domain data as a validation set, and the source domain model is calibrated according to the following formula:
$$ \min_{T_1}\; -\,\mathbb{E}_{(x_v,y_v)\sim S_v}\sum_{k=1}^{K} \mathbb{1}[k=y_v]\,\log \sigma_k\!\left(G(F(x_v))/T_1\right) $$
where the optimized target T_1 is a temperature parameter, x_v denotes an image of the selected source domain validation set, y_v the label information of the selected source domain validation set, and E the expectation.
4. The method for domain adaptive identification of unmanned aerial vehicle aerial images in open scenes according to claim 1, characterized in that the information entropy calculation formula in step (4) is as follows:
$$ H(x_t) = -\sum_{k=1}^{K} p_k(x_t)\,\log p_k(x_t) $$
where x_t denotes the target domain image and p(x_t) denotes the probability vector, output by the softmax function from the last layer of the model, that the target domain image belongs to each class;
the energy value calculation formula is as follows:
$$ E(x_t) = -\,T\,\log\sum_{k=1}^{K}\exp\!\left(f_k(x_t)/T\right) $$
where f_k denotes the logit vector output by the model for the sample, T denotes the temperature parameter, k denotes the class index, and K denotes the total number of classes;
the confidence calculation formula is as follows:
$$ C(x_t) = \max_k\, p_k(x_t) $$
where p(x_t) denotes the probability vector, output by the softmax function from the last layer of the model, that the target domain image belongs to each class;
the calculation formula of the distance is as follows:
$$ c_k = \frac{\sum_t C_k(x_t)\,F(x_t)}{\sum_t C_k(x_t)}, \qquad d(x_t) = 1 - \cos\!\left(F(x_t),\, c_{\hat{k}}\right) $$
where F(x_t) denotes the feature of the target domain image output by the feature extractor, C_k(x_t) denotes the output confidence of belonging to the k-th class, and \hat{k} denotes the predicted class;
the weight w(x_t) compared against the threshold in step (5) is calculated as follows:
[Equation image not reproduced in the source text: w(x_t) combines the calibrated entropy, energy, confidence and distance values of the image into a single weight;]
the calculation formula of step (6) is as follows:
$$ L_{adv} = -\,\mathbb{E}_{x_i\sim S\cup T}\; w(x_i)\left[d_i\,\log D(F(x_i)) + (1-d_i)\,\log\left(1 - D(F(x_i))\right)\right] $$
where d_i denotes the binary domain label of the i-th image and D the domain discriminator; w = 1 for source domain image samples, and w equals the weight value calculated in step (4) for target domain images;
the trained target domain model is calibrated: a validation set is selected based on the confidence formula in step (4), taking the 10% of each class with maximum confidence as a standby validation set, and the calibration formula is:
$$ \min_{T_2}\; -\,\mathbb{E}_{(x_v,y_v)\sim T_v}\sum_{k=1}^{K} \mathbb{1}[k=y_v]\,\log \sigma_k\!\left(G(F(x_v))/T_2\right) $$
where the optimization target is T_2, x_v denotes an image of the selected target domain validation set, y_v the pseudo-label information of the selected target domain validation set, and E the expectation.
CN202210159053.2A 2022-02-22 2022-02-22 Unmanned aerial vehicle aerial image domain adaptive identification method oriented to open scene Active CN114220016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210159053.2A CN114220016B (en) 2022-02-22 2022-02-22 Unmanned aerial vehicle aerial image domain adaptive identification method oriented to open scene

Publications (2)

Publication Number Publication Date
CN114220016A 2022-03-22
CN114220016B 2022-06-03

Family

ID=80709106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210159053.2A Active CN114220016B (en) 2022-02-22 2022-02-22 Unmanned aerial vehicle aerial image domain adaptive identification method oriented to open scene

Country Status (1)

Country Link
CN (1) CN114220016B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427920A (en) * 2018-02-26 2018-08-21 杭州电子科技大学 A kind of land and sea border defense object detection method based on deep learning
CN109766921A (en) * 2018-12-19 2019-05-17 合肥工业大学 A kind of vibration data Fault Classification based on depth domain-adaptive
US20200134442A1 (en) * 2018-10-29 2020-04-30 Microsoft Technology Licensing, Llc Task detection in communications using domain adaptation
CN111160553A (en) * 2019-12-23 2020-05-15 中国人民解放军军事科学院国防科技创新研究院 Novel field self-adaptive learning method
CN112183788A (en) * 2020-11-30 2021-01-05 华南理工大学 Domain adaptive equipment operation detection system and method
CN112348084A (en) * 2020-11-08 2021-02-09 大连大学 Unknown protocol data frame classification method for improving k-means
CN113111979A (en) * 2021-06-16 2021-07-13 上海齐感电子信息科技有限公司 Model training method, image detection method and detection device
CN113139594A (en) * 2021-04-19 2021-07-20 北京理工大学 Airborne image unmanned aerial vehicle target self-adaptive detection method
CN113435546A (en) * 2021-08-26 2021-09-24 广东众聚人工智能科技有限公司 Migratable image recognition method and system based on differentiation confidence level
CN113771029A (en) * 2021-09-03 2021-12-10 山东融瓴科技集团有限公司 Robot operating system and method based on video incremental learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PENG XIONG et al.: "Multi-block domain adaptation with central moment discrepancy for fault diagnosis", Measurement *
XU Ge et al.: "Zero-shot image classification based on visual error and semantic attributes", Journal of Computer Applications *
ZHENG Xiongfeng et al.: "Multi-source domain adaptive learning algorithm based on parameter dictionary", Computer Technology and Development *

Also Published As

Publication number Publication date
CN114220016B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN111191732B (en) Target detection method based on full-automatic learning
CN111814584B (en) Vehicle re-identification method based on multi-center measurement loss under multi-view environment
CN110321830B (en) Chinese character string picture OCR recognition method based on neural network
CN109711366B (en) Pedestrian re-identification method based on group information loss function
CN112800876B (en) Super-spherical feature embedding method and system for re-identification
CN110097060B (en) Open set identification method for trunk image
CN108399378B (en) Natural scene image identification method based on VGG deep convolution network
CN111079847B (en) Remote sensing image automatic labeling method based on deep learning
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
CN110942094B (en) Norm-based antagonistic sample detection and classification method
CN113139594B (en) Self-adaptive detection method for airborne image unmanned aerial vehicle target
CN110942091A (en) Semi-supervised few-sample image classification method for searching reliable abnormal data center
CN111126134A (en) Radar radiation source deep learning identification method based on non-fingerprint signal eliminator
CN106951915A (en) A kind of one-dimensional range profile multiple Classifiers Combination method of identification based on classification confidence level
CN111241987B (en) Multi-target model visual tracking method based on cost-sensitive three-branch decision
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN111444816A (en) Multi-scale dense pedestrian detection method based on fast RCNN
CN114266321A (en) Weak supervision fuzzy clustering algorithm based on unconstrained prior information mode
CN113076969B (en) Image target detection method based on Gaussian mixture loss function
CN116910571B (en) Open-domain adaptation method and system based on prototype comparison learning
CN110569764B (en) Mobile phone model identification method based on convolutional neural network
CN114220016B (en) Unmanned aerial vehicle aerial image domain adaptive identification method oriented to open scene
CN117152606A (en) Confidence dynamic learning-based remote sensing image cross-domain small sample classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant