CN114187452A - Robust depth image classification model training method based on active labeling - Google Patents

Robust depth image classification model training method based on active labeling

Publication number: CN114187452A
Authority: CN (China)
Prior art keywords: image, model, labeled, training, classification model
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210135383.8A
Other languages: Chinese (zh)
Inventors: Huang Shengjun (黄圣君), Zhou Hui (周慧)
Current and original assignee: Nanjing University of Aeronautics and Astronautics (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202210135383.8A
Publication of CN114187452A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 — Classification techniques

Abstract

The invention discloses a robust depth image classification model training method based on active labeling. The method comprises the following steps: (1) collect a large unlabeled image set and a small labeled training image set; (2) add noise perturbation to each image in the labeled set to obtain a noisy labeled image set; (3) use the noisy labeled set as the training set and initialize an image classification model; (4) perturb each image in the unlabeled set multiple times and compute a value score S for each unlabeled image; (5) rank the scores S and query the user for labels, obtaining the corresponding feedback; (6) update the labeled image set L and the unlabeled image set, and update the prediction model; (7) return to step (4), or terminate and output the prediction model f. By automatically selecting high-utility images for annotation through active learning, the invention improves model robustness while minimizing the user's labeling cost.

Description

Robust depth image classification model training method based on active labeling
Technical Field
The invention belongs to the technical field of automatic digital image labeling, and particularly relates to a robust depth image classification model training method based on active labeling.
Background
Deep models currently achieve high accuracy in image classification. In real application scenarios, however, models are often disturbed by noise, causing severe performance degradation. For example, in autonomous driving, image and video recognition models are routinely disturbed by fog, frost, snow, or sandstorms and then struggle to recognize road signs accurately. Improving model robustness has therefore become an important task in machine learning. Recent research shows that training on images with added noise perturbations can effectively improve the robustness of deep models. However, this training process typically requires a large number of labeled images, and in many practical applications accurately labeling every image is costly and difficult, especially in domains requiring expert knowledge. Active learning is a principal method for reducing annotation cost: by actively selecting the most valuable images for annotation, it improves model performance while minimizing the labeling expense. Traditional active labeling methods, however, consider only an image's potential utility for improving model accuracy — for instance, using the classification model's uncertainty on an unlabeled image as a utility estimate — which makes it hard to improve model robustness directly. Designing an effective active annotation strategy for improving model robustness is therefore an urgent problem of significant practical importance.
Disclosure of Invention
Purpose of the invention: to address the difficulty of obtaining target-domain data in real tasks and the difficulty of improving model robustness, the invention provides a robust depth image classification model training method based on active labeling.
Technical scheme: to achieve this purpose, the invention adopts the following technical scheme:
a robust depth image classification model training method based on active labeling comprises the following steps:
step 1, collecting a large number of unmarked image sets
Figure 346309DEST_PATH_IMAGE001
And a small number of labeled training image data sets
Figure 823558DEST_PATH_IMAGE002
Step 2, carrying out annotation on the image set
Figure 673702DEST_PATH_IMAGE002
Adding noise disturbance to each image to obtain a noise-containing labeled image set
Figure 247903DEST_PATH_IMAGE003
Step 3, marking image sets with noises
Figure 384355DEST_PATH_IMAGE003
As a training set, initializing an image classification model f;
step 4, carrying out annotation on the image set which is not marked
Figure 473534DEST_PATH_IMAGE001
Carrying out multiple disturbance on each image, and calculating the value score of each unmarked image based on the prediction result of each unmarked image and multiple disturbed versions of each unmarked image by the model f
Figure 737156DEST_PATH_IMAGE004
Step 5, scoring obtained in step 4
Figure 165863DEST_PATH_IMAGE004
Sequencing, namely querying the marking information of the image for the user according to the sequence of the scores from large to small within the marking budget to obtain corresponding user feedback;
step 6, updating the labeled image set according to the user feedback result of the image category obtained in the step 5
Figure 348583DEST_PATH_IMAGE002
And unlabeled image set
Figure 816736DEST_PATH_IMAGE001
And obtaining a noise-containing label set according to the method in the step 2
Figure 946366DEST_PATH_IMAGE003
To update the prediction model
Figure 26317DEST_PATH_IMAGE005
And 7, returning to the step 4 or ending and outputting the prediction model f.
Further, the specific method by which step 2 obtains the noisy labeled image set L̃ is as follows:
for each image x_i in L, add a perturbation value δ drawn at random from a Gaussian distribution N(0, σ²) to obtain the corresponding noisy image x̃_i. Specifically:
x̃_i = x_i + δ, δ ~ N(0, σ²).
The noisy labeled image set is then L̃ = {(x̃_i, y_i)}, i = 1, …, n_l, where n_l is the number of labeled images.
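As an illustration of step 2, the Gaussian perturbation can be sketched in a few lines of NumPy. The noise level σ and the image shape below are placeholder values; the patent does not fix the noise variance or image size.

```python
import numpy as np

def perturb(images, sigma=0.1, rng=None):
    """Add i.i.d. Gaussian noise drawn from N(0, sigma^2) to a batch of
    images, giving the noisy copies x~_i = x_i + delta.  sigma is an
    illustrative value, not specified by the patent."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.normal(0.0, sigma, size=images.shape)
    return images + delta

# Build the noisy labeled set from (placeholder) labeled images:
labeled_images = np.zeros((4, 32, 32, 3))   # stand-in for real image data
labels = np.array([0, 1, 2, 1])
noisy_images = perturb(labeled_images)      # pairs (noisy_images[i], labels[i]) form L~
```

The same labels y_i are kept for the perturbed copies, since the perturbation is small enough not to change the image's class.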
Further, the specific method for initializing the image classification model f in step 3 is as follows:
use the prediction model f(·; θ) to predict the categories of the images in the noisy labeled image set L̃, where θ denotes the parameters of the prediction model. Let f(x̃_i) denote the output of the i-th image x̃_i on model f, where f_k(x̃_i) represents the probability that image x̃_i is predicted to belong to the k-th class, and K represents the total number of image classes. Let y_i denote the true label of the i-th image x̃_i, in one-hot form. The loss value of the model on each noisy image is computed according to the formula:
ℓ_i = ℓ(f(x̃_i; θ), y_i).
The model is optimized by minimizing its loss over the noisy labeled image set L̃, specifically:
min_θ Σ_{i=1}^{n_l} ℓ(f(x̃_i; θ), y_i),
where ℓ is a loss function.
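For concreteness, the per-image loss on the noisy labeled set can be sketched as follows. Cross-entropy is used here as one common choice of ℓ — an assumption, since the patent only specifies a generic loss function.

```python
import numpy as np

def cross_entropy(probs, one_hot):
    """Per-image loss l(f(x~_i), y_i) on the noisy labeled set;
    cross-entropy is used as an illustrative choice of l."""
    eps = 1e-12                       # guard against log(0)
    return -np.sum(one_hot * np.log(probs + eps), axis=-1)

# probs: model outputs f(x~_i) over K = 3 classes; each row sums to 1
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
one_hot = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
losses = cross_entropy(probs, one_hot)
total = losses.sum()   # the objective minimized over the noisy set
```

Minimizing `total` over the model parameters θ corresponds to the optimization formula above.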
Further, the specific method by which step 4 computes the value score S of each unlabeled image is as follows:
for each unlabeled image x_i, add m perturbations to obtain the corresponding perturbed image set {x̃_i^(1), …, x̃_i^(m)}, where x̃_i^(j) = x_i + δ_j and δ_j is a perturbation value drawn at random from the Gaussian distribution N(0, σ²); the number of perturbations m is a hyper-parameter.
Compute the probability that the model's prediction on a perturbed copy x̃_i^(j) disagrees with its prediction on the clean image x_i, according to the formula:
p_i = (1/m) Σ_{j=1}^{m} I[ŷ(x̃_i^(j)) ≠ ŷ(x_i)],
where ŷ(x) = argmax_k f_k(x) and I[·] is the indicator function, which outputs 1 when its input is true and 0 when it is false.
The value score of each image x_i in the set U without user feedback, with respect to the classification model f, is then computed as:
S_i = p_i.
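A minimal sketch of the inconsistency score of step 4: `f` is any function mapping a batch of images to rows of class probabilities, and the values of `m` and `sigma` are illustrative hyper-parameters, not fixed by the patent.

```python
import numpy as np

def value_score(f, x, m=10, sigma=0.1, rng=None):
    """Fraction of m Gaussian-perturbed copies of x whose predicted
    class differs from the prediction on the clean x."""
    rng = np.random.default_rng() if rng is None else rng
    clean_label = np.argmax(f(x[None]), axis=-1)[0]
    perturbed = x[None] + rng.normal(0.0, sigma, size=(m,) + x.shape)
    labels = np.argmax(f(perturbed), axis=-1)
    return float(np.mean(labels != clean_label))

# A perfectly stable (constant) classifier gets score 0: its prediction
# never flips under perturbation, so the image has no query value.
constant_f = lambda batch: np.tile(np.array([0.9, 0.1]), (len(batch), 1))
score = value_score(constant_f, np.zeros(8), m=5, rng=np.random.default_rng(0))
```

Images whose predictions flip often under perturbation get scores near 1 and are queried first.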
Further, the specific method by which step 6 updates the labeled image set L and the unlabeled image set U according to the user feedback is as follows:
the user provides category label information for each queried image, and the image is moved from the unlabeled data set U to the labeled image data set L.
Beneficial effects: the invention provides a robust depth image classification model training method based on active labeling. It applies active learning to the training of a robust deep model, effectively improving the robustness of a deep image classification model at minimal labeling cost by actively selecting the most valuable images. Specifically, each round queries a batch of the images most helpful for improving model robustness, so that the user can supply their category information. In general, the predictions of a robust model are stable: when small perturbations are added to an input image, the model's output should remain consistent. Under the same degree of perturbation, however, prediction stability differs across images — for some images the model's prediction becomes very unstable under noise, and adding such images to the labeled set for training effectively improves robustness. The invention therefore proposes an inconsistency-based active labeling method: it measures each unlabeled image's potential utility for improving model robustness by generating a series of perturbed copies and using the disagreement among their predictions, and it selects the images with the largest inconsistency value for training the deep model. During training, the method adds noise perturbations to the training images, so that the model's robustness to noise gradually improves as it fits the noisy images.
Drawings
FIG. 1 is a flow chart of the mechanism of the present invention;
FIG. 2 is a flow chart of calculating an example score;
FIG. 3 is a flow diagram of updating an annotation model.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
Examples
Fig. 1 shows the flow chart of the mechanism of the present invention. Assume that initially there is a data set L consisting of a small number of annotated images and a data set U consisting of a large number of unlabeled images. First, noise perturbation is added to each image in the labeled set L to construct a noisy labeled image set L̃, and a basic prediction model is trained on L̃. The model then predicts on the unlabeled image data set U, yielding a prediction for each unlabeled image, from which each image's utility score is computed from the model output. The images are sorted by utility score and their labels are queried from the user from high to low. Next, the user supplies label information for these images, which are added to the training set L. As before, noise perturbation is added to each image in the updated labeled set L to obtain the noisy labeled image set L̃. Finally, the model is updated using L̃. This query process loops until the labeling expense reaches the budget.
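The loop of Fig. 1 can be sketched end to end as below. `retrain` and `oracle` stand in for model training and for the user's labeling feedback; these names, and all hyper-parameter values, are illustrative assumptions, not from the patent.

```python
import numpy as np

def inconsistency(f, x, m, sigma, rng):
    # Fraction of perturbed copies whose predicted class flips.
    clean = np.argmax(f(x[None]), axis=-1)[0]
    noisy = x[None] + rng.normal(0.0, sigma, size=(m,) + x.shape)
    return float(np.mean(np.argmax(f(noisy), axis=-1) != clean))

def active_loop(f, retrain, oracle, U, L, budget, batch=2, m=10,
                sigma=0.1, seed=0):
    """Query the highest-inconsistency images until the budget runs out."""
    rng = np.random.default_rng(seed)
    while budget > 0 and U:
        scores = np.array([inconsistency(f, x, m, sigma, rng) for x in U])
        picked = list(np.argsort(scores)[::-1][:min(batch, budget)])
        L.extend((U[i], oracle(U[i])) for i in picked)   # user feedback
        U = [x for i, x in enumerate(U) if i not in set(picked)]
        budget -= len(picked)
        f = retrain(L, sigma)   # re-perturb L and update the model
    return f
```

In practice `retrain` would re-run the noisy-set optimization of step 3; here any callable returning a model works, which keeps the control flow of Fig. 1 visible.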
Fig. 2 shows the flow chart for computing an example utility score. First, m perturbations are added to each unlabeled image x_i to obtain the corresponding perturbed image set {x̃_i^(1), …, x̃_i^(m)}, where x̃_i^(j) = x_i + δ_j and δ_j is a perturbation value drawn at random from the Gaussian distribution N(0, σ²). Then the probability that the model's prediction on a perturbed copy disagrees with its prediction on the clean image x_i is computed according to the formula:
p_i = (1/m) Σ_{j=1}^{m} I[ŷ(x̃_i^(j)) ≠ ŷ(x_i)].
Finally, the value score of each image x_i in the set U without user feedback, with respect to the annotation model f, is computed as S_i = p_i.
Fig. 3 shows the flow chart for updating the annotation model. In each training round, the user-labeled images are added to the training set L. Then, for each image x_i in L, a perturbation value δ drawn at random from the Gaussian distribution N(0, σ²) is added:
x̃_i = x_i + δ.
The noisy labeled image set is then L̃ = {(x̃_i, y_i)}, i = 1, …, n_l, where n_l is the number of labeled images. Subsequently, the prediction model f is used to predict the categories of the images in L̃. Here f(x̃_i) denotes the output of the i-th image x̃_i on model f, where f_k(x̃_i) represents the probability that image x̃_i is predicted to belong to the k-th class, K represents the total number of image classes, and y_i denotes the true label of the i-th image x̃_i, in one-hot form. The loss value of the model on each noisy image is computed according to the formula:
ℓ_i = ℓ(f(x̃_i; θ), y_i).
The model is then trained by minimizing its loss over the noisy labeled image set L̃, specifically:
min_θ Σ_{i=1}^{n_l} ℓ(f(x̃_i; θ), y_i).
Finally, the model parameters are updated by a gradient descent algorithm. This training procedure is executed in a loop until the model converges or the maximum number of iterations is reached.

Claims (5)

1. A robust depth image classification model training method based on active labeling, characterized by comprising the following steps:
step 1, collecting a large unlabeled image set U and a small labeled training image data set L;
step 2, adding noise perturbation to each image in the labeled image set L to obtain a noisy labeled image set L̃;
step 3, using the noisy labeled image set L̃ as the training set, initializing an image classification model f;
step 4, perturbing each image in the unlabeled image set U multiple times, and computing each unlabeled image's value score S based on the model f's predictions on the image and on its multiple perturbed versions;
step 5, sorting the scores S obtained in step 4 and, within the labeling budget, querying the user for the images' label information in descending score order to obtain the corresponding user feedback;
step 6, updating the labeled image set L and the unlabeled image set U according to the user feedback on image categories obtained in step 5, obtaining the noisy labeled set L̃ according to the method in step 2, and updating the prediction model f;
step 7, returning to step 4, or terminating and outputting the prediction model f.
2. The robust depth image classification model training method based on active labeling according to claim 1, characterized in that the specific method by which step 2 obtains the noisy labeled image set L̃ is: for each image x_i in L, adding a perturbation value δ drawn at random from a Gaussian distribution N(0, σ²), specifically:
x̃_i = x_i + δ;
the noisy labeled image set is then L̃ = {(x̃_i, y_i)}, i = 1, …, n_l, where n_l is the number of labeled images.
3. The robust depth image classification model training method based on active labeling according to claim 1, characterized in that the specific method for initializing the image classification model f in step 3 is:
step 3.1: using the prediction model f(·; θ) to predict the categories of the images in the noisy labeled image set L̃, where θ are the parameters of the prediction model; letting f(x̃_i) denote the output of the i-th image x̃_i on model f, where f_k(x̃_i) represents the probability that image x̃_i is predicted to belong to the k-th class and K represents the total number of image classes; letting y_i denote the true label of the i-th image x̃_i, in one-hot form; and computing the loss value of the model on each noisy image according to the formula:
ℓ_i = ℓ(f(x̃_i; θ), y_i);
step 3.2: optimizing the model by minimizing its loss over the noisy labeled image set L̃, specifically:
min_θ Σ_{i=1}^{n_l} ℓ(f(x̃_i; θ), y_i),
where ℓ is a loss function.
4. The robust depth image classification model training method based on active labeling according to claim 1, characterized in that the specific method by which step 4 computes the value score S of each unlabeled image is:
step 4.1: for each unlabeled image x_i, adding m perturbations to obtain the corresponding perturbed image set {x̃_i^(1), …, x̃_i^(m)}, where x̃_i^(j) = x_i + δ_j and δ_j is a perturbation value drawn at random from the Gaussian distribution N(0, σ²); the number of perturbations m is a hyper-parameter;
step 4.2: computing the probability that the model's prediction on a perturbed copy x̃_i^(j) disagrees with its prediction on the clean image x_i according to the formula:
p_i = (1/m) Σ_{j=1}^{m} I[ŷ(x̃_i^(j)) ≠ ŷ(x_i)],
where ŷ(x) = argmax_k f_k(x) and I[·] is the indicator function, which outputs 1 when its input is true and 0 when it is false;
step 4.3: computing the value score of each image x_i in the set U without user feedback, with respect to the classification model f, according to the formula:
S_i = p_i.
5. The robust depth image classification model training method based on active labeling according to claim 1, characterized in that the specific method by which step 6 updates the labeled image set L and the unlabeled image set U according to the user feedback is: the user provides category label information for each queried image, and the image is moved from the unlabeled data set U to the labeled image data set L.
CN202210135383.8A 2022-02-15 2022-02-15 Robust depth image classification model training method based on active labeling Pending CN114187452A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210135383.8A | 2022-02-15 | 2022-02-15 | Robust depth image classification model training method based on active labeling

Publications (1)

Publication Number | Publication Date
CN114187452A | 2022-03-15

Family: ID=80545908

Country Status (1): CN (1) CN114187452A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN113313166A * | 2021-05-28 | 2021-08-27 | South China University of Technology (华南理工大学) | Ship target automatic labeling method based on feature consistency learning
CN113313178A * | 2021-06-03 | 2021-08-27 | Nanjing University of Aeronautics and Astronautics (南京航空航天大学) | Cross-domain image example-level active labeling method


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 2022-03-15)