CN115564960B - Network image label denoising method combining sample selection and label correction - Google Patents

Network image label denoising method combining sample selection and label correction

Info

Publication number
CN115564960B
CN115564960B
Authority
CN
China
Prior art keywords
sample
samples
reusable
network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211408454.3A
Other languages
Chinese (zh)
Other versions
CN115564960A (en)
Inventor
姚亚洲
黄丹
沈复民
孙泽人
申恒涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Code Geek Technology Co ltd
Original Assignee
Nanjing Code Geek Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Code Geek Technology Co ltd filed Critical Nanjing Code Geek Technology Co ltd
Priority to CN202211408454.3A
Publication of CN115564960A
Application granted
Publication of CN115564960B
Active legal status: Current
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/30 - Noise filtering
    • G06V 10/24 - Aligning, centring, orientation detection or correction of the image
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a network image label denoising method combining sample selection and label correction, which comprises the following steps: S1, selecting clean samples according to the cosine similarity between each sample and its category center; S2, dynamically selecting reusable samples from the remaining samples according to sample uncertainty and correcting their labels; S3, updating the network with the clean samples and the corrected reusable samples. By selecting clean samples via cosine similarity to the category centers, dynamically selecting and correcting reusable samples from the remaining samples via sample uncertainty, and finally updating the network with both the clean samples and the corrected reusable samples, the method improves sample utilization while also improving fine-grained classification performance.

Description

Network image label denoising method combining sample selection and label correction
Technical Field
The invention relates to the technical field of network label denoising, in particular to a network image label denoising method combining sample selection and label correction.
Background
For the noise problem, besides improving the accuracy of sample selection by reducing the overlap between classes, another idea is to further reduce the influence of noisy labels on the neural network by combining noisy-sample selection with loss correction. Sample-selection-based methods pick out clean samples for subsequent training by some criterion; part of the noise samples they discard are in-distribution noise, meaning that their true labels still belong to the label set of the data set, and such samples are called reusable samples. Reusing these samples can therefore effectively improve sample utilization, which is an urgent problem for fine-grained image classification, where data sets are scarce.
Disclosure of Invention
The invention aims to provide a network image label denoising method combining sample selection and label correction, so as to solve the problems in the background technology.
In order to achieve the purpose, the invention provides the following technical scheme: a network image label denoising method combining sample selection and label correction comprises the following steps:
S1, firstly, selecting clean samples according to the cosine similarity between each sample and its category center;
S2, dynamically selecting reusable samples from the remaining samples according to sample uncertainty and correcting their labels;
and S3, finally, updating the network by using the clean samples and the corrected reusable samples.
Further, in S1, the features of the picture are normalized at the Softmax layer, and the output of the Softmax layer can be expressed as:
p(j|x_i) = exp(w_j·x_i) / Σ_k exp(w_k·x_i) (6.1)
w_j·x_i = ||w_j|| ||x_i|| cosθ_j = cosθ_j (6.2)
After normalization, a hyper-parameter s is used to scale the cosine values, and the Softmax output under the L2 constraint on the normalized features is computed as:
p(j|x_i) = exp(s·cosθ_j) / Σ_k exp(s·cosθ_k) (6.3);
where x_i and y_i denote the i-th sample and its label.
Further, after normalization the features are distributed by angle on a hypersphere. The parameters w_j of the last fully-connected layer are the class centers generated by pre-training, so the output of the fully-connected layer is the cosine distance cosθ_j between the picture feature and each class center. The cosine similarity of each picture with its corresponding class center is recorded as:
H = {h_1, h_2, ..., h_N}, h_i = cosθ_{y_i} (6.4)
where h_i is the cosine distance between the i-th sample and its class center y_i. H is sorted, and in each training batch the instances with large cosine similarity are fed into the peer network for the next round of training; the selection formula is as follows:
D_c = argmax_{D'⊆D, |D'|≥(1-τ)|D|} Σ_{x_i∈D'} h_i (6.5)
where τ is the correctable drop rate, D is the sample set, and Dr is the reusable sample set.
Further, after the clean samples Dc are selected in S1, the remaining samples can be divided into two types, namely the reusable samples Dr and the noise set Dn, the latter of which needs to be discarded in subsequent training;
when the prediction uncertainty f(x_i) of a sample x_i satisfies the following condition, the sample belongs to the reusable sample set Dr:
f(x_i) ≤ midf(x_i), x_i ∈ (Dr ∪ Dn) (6.6)
where f(x_i) is the uncertainty of sample x_i and midf(x_i) denotes the median of the uncertainty of the samples in (Dr ∪ Dn); the uncertainty of each sample is measured by the cross entropy:
f(x_i) = -log p_i (6.7)
with p_i as defined in equation (6.9) below.
Further, the last 10 predictions Pre_i of each sample x_i are recorded, and the predictions are updated as training progresses:
Pre_i = {prec_1, prec_2, ..., prec_n} (6.8)
According to Pre_i, the category j that sample x_i is predicted as most often and its count m are recorded, and p_i is the probability that sample x_i is predicted as j:
p_i = m/n (6.9)
The prediction uncertainty is smallest when the n predictions are all the same, in which case p_i = 1 and f(x_i) = 0; the uncertainty is largest when the n predictions are all different, in which case p_i = 1/n and f(x_i) = -log(1/n). Here n is taken as 10.
Further, in S3, during the first n training epochs the output of the Softmax layer is smoothed, and back-propagation is performed with the following loss:
ỹ_i = (1-α)·y_i + α/C (6.10)
L_c = -(1/|Dc|) Σ_{x_i∈Dc} ỹ_i·log p(x_i) (6.11);
where α is the label smoothing factor of the data set and C is the number of classes.
Further, after n training epochs, the reusable samples Dr are selected using equation (6.6), and the network is updated with L_CSSLC:
L_r = -(1/|Dr|) Σ_{x_i∈Dr} ŷ_i·log p(x_i) (6.12)
ŷ_i = j (6.13)
L_CSSLC = L_c + L_r (6.14)
where j is the category predicted most often over the n consecutive predictions.
Compared with the prior art, the invention has the beneficial effects that: according to the method, after the clean samples are selected according to the cosine similarity between the samples and the category centers, the reusable samples are dynamically selected from the rest samples according to the uncertainty of the samples and are corrected, and finally the clean samples and the corrected reusable samples are used together to update the network, so that the utilization rate of the samples is improved, and meanwhile, the fine-grained classification performance is improved.
Drawings
FIG. 1 is a schematic diagram of the front half of the CSSLC framework of the present invention;
FIG. 2 is a schematic diagram of the rear half of the CSSLC framework of the present invention;
FIG. 3 is a diagram of the steps of the CSSLC method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, fig. 2 and fig. 3, the invention is a network image label denoising method Combining Sample Selection with Loss Correction, CSSLC for short. Unlike pure sample selection methods and pure loss correction methods, this method performs loss correction on a portion of reusable samples on the basis of sample selection, which can greatly improve sample utilization and image classification performance;
Firstly, the sample set D = {(x_i, y_i)}, i = 1, ..., N, is divided into three sets: the clean sample set Dc, the reusable sample set Dr and the noise set Dn, with D = Dc ∪ Dr ∪ Dn, where x_i is the i-th training sample and y_i is its label. For the reusable sample set Dr, y_i is not the true label of sample x_i; the true label is recorded as ŷ_i. In the next step, the clean sample set Dc, the reusable sample set Dr and the noise set Dn are distinguished, and the reusable sample set Dr is sent into the network for training after loss correction;
On the premise of choosing clean samples by sample selection, reusable samples are dynamically chosen again, through the uncertainty of the samples, from the noise samples that would otherwise be discarded, and loss correction is performed on them: the higher the uncertainty of a sample, the more likely it is a noise sample, and the lower the uncertainty, the more likely it is a reusable sample.
In this embodiment, unlike the conventional sample selection method, which first calculates the loss of every sample and then selects the samples with small loss, samples are selected according to the cosine similarity between the sample and the class center before the loss is calculated, so that the available samples are chosen first and the loss is computed only on those samples.
This is based on a simple observation: the network fits simple clean samples first, and the cosine similarity between a simple clean sample and its class center is higher than that of a noise sample, so the clean samples are selected directly according to the cosine similarity between the sample and the class center.
The goal of Softmax is to maximize the probability of the correct class as far as possible, so it ignores pictures that are difficult to distinguish, i.e. low-quality pictures, and preferentially fits high-quality pictures. In order to increase the utilization of the pictures, the features of the picture are normalized at the Softmax layer, so that hard examples gain more attention from the network; the final output of the Softmax layer can be expressed as:
p(j|x_i) = exp(w_j·x_i) / Σ_k exp(w_k·x_i) (6.1)
w_j·x_i = ||w_j|| ||x_i|| cosθ_j = cosθ_j (6.2)
After normalization, a hyper-parameter s is used to scale the cosine values, and the Softmax output under the L2 constraint on the normalized features is computed as:
p(j|x_i) = exp(s·cosθ_j) / Σ_k exp(s·cosθ_k) (6.3)
where x_i and y_i denote the i-th sample and its label. After normalization, the features are distributed by angle on a hypersphere; the parameters w_j of the last fully-connected layer are the class centers generated by pre-training, so the output of the fully-connected layer is the cosine distance cosθ_j between the picture feature and each class center. The cosine similarity of each picture with its corresponding class center is recorded as:
H = {h_1, h_2, ..., h_N}, h_i = cosθ_{y_i} (6.4)
where h_i is the cosine distance between the i-th sample and its class center y_i. H is sorted, and in each training batch the instances with large cosine similarity are sent into the peer network for the next round of training; the selection formula is as follows:
D_c = argmax_{D'⊆D, |D'|≥(1-τ)|D|} Σ_{x_i∈D'} h_i (6.5)
where τ is the correctable drop rate, D is the sample set, and Dr is the reusable sample set; the selected pictures are sent into the peer network to update it.
In this embodiment, after the clean samples Dc are selected, the remaining samples can be divided into two types. For the first type, the true label is in the label set of the data set; through training, the network can predict the correct label of such a sample, and by correcting its label the network can still continue to learn from it. Such samples are called reusable samples Dr. For the second type, the true label is not in the label set of the data set; these samples form the noise set Dn and need to be discarded in the subsequent training.
When a reusable sample is fed into the network, the network will tend to give a definite prediction after training (one that is inconsistent with the label given by the data set), whereas when a noise sample is fed into the network, the network gives an uncertain prediction. Entropy is therefore used to measure the uncertainty of a sample and to select the reusable samples.
When the prediction uncertainty f(x_i) of a sample x_i satisfies the following condition, the sample belongs to the reusable sample set Dr:
f(x_i) ≤ midf(x_i), x_i ∈ (Dr ∪ Dn) (6.6)
where f(x_i) is the uncertainty of sample x_i and midf(x_i) denotes the median of the uncertainty of the samples in (Dr ∪ Dn); the cross entropy is used to measure the uncertainty of each sample:
f(x_i) = -log p_i (6.7)
The last 10 predictions Pre_i of each sample x_i are recorded, and the predictions are updated as training progresses:
Pre_i = {prec_1, prec_2, ..., prec_n} (6.8)
According to Pre_i, the category j that sample x_i is predicted as most often and its count m are recorded, and p_i is the probability that sample x_i is predicted as j:
p_i = m/n (6.9)
The prediction uncertainty is smallest when the n predictions are all the same, in which case p_i = 1 and f(x_i) = 0; the uncertainty is largest when the n predictions are all different, in which case p_i = 1/n and f(x_i) = -log(1/n). Here n is taken as 10.
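The uncertainty bookkeeping of equations (6.6)-(6.9) could look like the following sketch. Two assumptions are made: a rolling window of the last n = 10 predicted classes per sample, and the -log p_i form of (6.7), which matches the stated extremes p_i = 1, f(x_i) = 0 and p_i = 1/n, f(x_i) = -log(1/n).

import math
from collections import Counter, deque

N_HIST = 10  # n, the length of the prediction history

history = {}  # sample index -> deque of the last n predicted classes

def record_prediction(i, pred_class):
    # (6.8): keep only the last n predictions, silently dropping the oldest
    history.setdefault(i, deque(maxlen=N_HIST)).append(pred_class)

def uncertainty(i):
    preds = history[i]
    m = Counter(preds).most_common(1)[0][1]  # count m of the most frequent class
    p = m / len(preds)                       # p_i = m / n  (6.9)
    return -math.log(p)                      # 0 if all n agree, -log(1/n) if all differ

def select_reusable(remaining_ids):
    # (6.6): a sample is reusable if its uncertainty is at most the median over Dr ∪ Dn
    fs = sorted(uncertainty(i) for i in remaining_ids)
    if not fs:
        return []
    median = fs[len(fs) // 2]
    return [i for i in remaining_ids if uncertainty(i) <= median]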
In this embodiment, a BCNN network is used for training. In training, the clean samples Dc are selected first; label smoothing helps the network learn on noisy data, so during the first n training epochs the output of the Softmax layer is smoothed and back-propagation is performed with the following loss:
ỹ_i = (1-α)·y_i + α/C (6.10)
L_c = -(1/|Dc|) Σ_{x_i∈Dc} ỹ_i·log p(x_i) (6.11);
where α is the label smoothing factor of the data set and C is the number of classes.
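A label-smoothing loss in the spirit of equations (6.10)-(6.11) might be written as follows; the uniform α/C smoothing, the batch mean, and the default value of alpha are assumptions.

import torch
import torch.nn.functional as F

def smoothed_ce(logits, labels, alpha=0.1):
    # (6.10): soften the one-hot target with the smoothing factor alpha
    C = logits.size(1)
    y = F.one_hot(labels, C).float()
    y_smooth = (1.0 - alpha) * y + alpha / C
    # (6.11): cross entropy between the smoothed target and the prediction
    log_p = F.log_softmax(logits, dim=1)
    return -(y_smooth * log_p).sum(dim=1).mean()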
After n training epochs, the reusable samples Dr are sorted out using equation (6.6), and the network is updated with L_CSSLC:
L_r = -(1/|Dr|) Σ_{x_i∈Dr} ŷ_i·log p(x_i) (6.12)
ŷ_i = j (6.13)
L_CSSLC = L_c + L_r (6.14)
where j is the category predicted most often over the n consecutive predictions.
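The correction-and-update step of equations (6.12)-(6.14) could then be sketched as below. The plain sum of the clean and reusable losses in (6.14) is an assumption, and correct_labels reuses the hypothetical history dictionary from the earlier sketch.

import torch
import torch.nn.functional as F
from collections import Counter

def correct_labels(reusable_ids, history):
    # (6.13): replace the label of each reusable sample with the class j
    # it was predicted as most often over the last n predictions
    return {i: Counter(history[i]).most_common(1)[0][0] for i in reusable_ids}

def csslc_loss(loss_clean, logits_r, corrected, ids_r):
    # (6.12): cross entropy of the reusable samples against their corrected labels
    targets = torch.tensor([corrected[i] for i in ids_r])
    loss_reusable = F.cross_entropy(logits_r, targets)
    # (6.14): total loss over the clean set and the corrected reusable set
    return loss_clean + loss_reusable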
The algorithm flow of the invention is as follows (a code sketch of the whole loop follows the flow):
Input:
training set D
mini-batch training set Dm
clean sample set Dc
reusable sample set Dr
total number of training epochs Tmax
number of pre-training epochs Tk
drop rate τ
number of iterations Nmax
Output: the updated network h
Randomly initialize the network parameters; Dc = D, Dr = D
for T = 1, 2, ..., Tmax:
    for N = 1, 2, ..., Nmax:
        record the predicted label prec of each sample x_i according to equation (6.8)
        if |Pre_i| < n: add the predicted label prec to Pre_i
        else: replace the oldest record in Pre_i with prec
        if T ≤ Tk:
            select the clean samples Dc according to equation (6.5)
            update the network h according to equation (6.11)
        else:
            select the clean samples Dc according to equation (6.5)
            select the reusable samples Dr according to equation (6.6)
            for each sample x_i in Dr:
                correct its label to the true label according to equation (6.13)
            update the network h according to equation (6.14)
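Putting the flow together, an epoch loop might look like the following sketch. Assumptions throughout: the helpers select_clean, record_prediction, select_reusable, smoothed_ce, correct_labels, csslc_loss and the history dictionary from the earlier fragments; a model whose features method returns the backbone embedding and whose fc.weight holds the pre-trained class centers; a loader that also yields global sample indices; and, for brevity, the median threshold of (6.6) applied per batch rather than over all of Dr ∪ Dn.

def train_csslc(model, loader, optimizer, Tmax=100, Tk=10, tau=0.3, alpha=0.1):
    for T in range(Tmax):
        for images, labels, idx in loader:   # idx: global indices of the samples in D
            feats = model.features(images)   # assumed backbone feature extractor
            clean, logits = select_clean(feats, labels, model.fc.weight, tau=tau)
            for i, p in zip(idx.tolist(), logits.argmax(1).tolist()):
                record_prediction(i, p)      # maintain Pre_i  (6.8)
            loss = smoothed_ce(logits[clean], labels[clean], alpha)  # (6.11)
            if T >= Tk:                      # joint phase after pre-training
                kept = set(clean.tolist())
                rest = [b for b in range(len(idx)) if b not in kept]
                reusable = set(select_reusable([idx[b].item() for b in rest]))  # (6.6)
                pos = [b for b in rest if idx[b].item() in reusable]
                if pos:
                    corrected = correct_labels(reusable, history)    # (6.13)
                    ids = [idx[b].item() for b in pos]
                    loss = csslc_loss(loss, logits[pos], corrected, ids)  # (6.12), (6.14)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()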
Generally speaking, the invention mainly combines sample selection with loss correction and proposes a new way of selecting reusable samples, thereby improving sample utilization and fine-grained classification performance.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of their elements.

Claims (5)

1. A network image label denoising method combining sample selection and label correction is characterized by comprising the following steps:
s1, firstly, selecting a clean sample according to cosine similarity between the sample and a category center;
s2, selecting a reusable sample from the rest samples through sample uncertainty dynamic state and correcting the reusable sample;
s3, finally, updating the network by using the clean sample and the corrected reusable sample;
in S1, the features of the picture are normalized at the Softmax layer, and the output of the Softmax layer can be expressed as:
p(j|x_i) = exp(w_j·x_i) / Σ_k exp(w_k·x_i) (6.1)
w_j·x_i = ||w_j|| ||x_i|| cosθ_j = cosθ_j (6.2)
after normalization, a hyper-parameter s is used to scale the cosine values, and the Softmax output under the L2 constraint on the normalized features is computed as:
p(j|x_i) = exp(s·cosθ_j) / Σ_k exp(s·cosθ_k) (6.3)
wherein x_i and y_i represent the i-th sample and its label;
after normalization, the features are distributed by angle on the hypersphere, and the parameters w_j of the last fully-connected layer are the class centers generated by pre-training, so the output of the fully-connected layer is the cosine distance cosθ_j between the picture feature and each class center; the cosine similarity of each picture with its corresponding class center is recorded as:
H = {h_1, h_2, ..., h_N}, h_i = cosθ_{y_i} (6.4)
where h_i is the cosine distance between the i-th sample and its class center y_i; H is sorted, and in each batch of training the instances with high cosine similarity are sent into a peer network for the next round of training; the selection formula is as follows:
D_c = argmax_{D'⊆D, |D'|≥(1-τ)|D|} Σ_{x_i∈D'} h_i (6.5)
where τ is the correctable discard rate, D is the set of samples, and Dr is the reusable sample set.
2. The method for denoising network image labels combining sample selection and label correction as claimed in claim 1, wherein after the clean samples Dc are selected in S1, the remaining samples can be divided into two types, namely the reusable samples Dr and the noise set Dn, the latter needing to be discarded in the subsequent training;
when the prediction uncertainty f(x_i) of a sample x_i satisfies the following condition, the sample belongs to the reusable sample set Dr:
f(x_i) ≤ midf(x_i), x_i ∈ (Dr ∪ Dn) (6.6)
wherein f(x_i) is the uncertainty of sample x_i, and midf(x_i) denotes the median of the uncertainty of the samples in (Dr ∪ Dn); the cross entropy is used to measure the uncertainty of each sample:
f(x_i) = -log p_i (6.7)
3. The method for denoising network image labels combining sample selection and label correction as claimed in claim 2, wherein the last 10 predictions Pre_i of each sample x_i are recorded, and the predictions are updated as training progresses:
Pre_i = {prec_1, prec_2, ..., prec_n} (6.8)
according to Pre_i, the category j that sample x_i is predicted as most often and its count m are recorded, and p_i is the probability that sample x_i is predicted as j:
p_i = m/n (6.9)
the uncertainty is smallest when all n predictions are the same, in which case p_i = 1 and f(x_i) = 0; the uncertainty is greatest when the n predictions are all different, in which case p_i = 1/n and f(x_i) = -log(1/n); n is taken as 10.
4. The method for denoising network image labels combining sample selection and label correction as claimed in claim 3, wherein in S3, the output of the Softmax layer is smoothed during the first n training epochs, and back-propagation is performed with the following loss:
ỹ_i = (1-α)·y_i + α/C (6.10)
L_c = -(1/|Dc|) Σ_{x_i∈Dc} ỹ_i·log p(x_i) (6.11)
where α is the label smoothing factor of the data set and C is the number of classes.
5. The method as claimed in claim 4, wherein after n training epochs, the reusable samples Dr are selected using formula (6.6), and the network is updated with L_CSSLC:
L_r = -(1/|Dr|) Σ_{x_i∈Dr} ŷ_i·log p(x_i) (6.12)
ŷ_i = j (6.13)
L_CSSLC = L_c + L_r (6.14)
where j is the category predicted most often over the n consecutive predictions.
CN202211408454.3A 2022-11-10 2022-11-10 Network image label denoising method combining sample selection and label correction Active CN115564960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211408454.3A CN115564960B (en) 2022-11-10 2022-11-10 Network image label denoising method combining sample selection and label correction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211408454.3A CN115564960B (en) 2022-11-10 2022-11-10 Network image label denoising method combining sample selection and label correction

Publications (2)

Publication Number Publication Date
CN115564960A CN115564960A (en) 2023-01-03
CN115564960B (en) 2023-03-03

Family

ID=84769821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211408454.3A Active CN115564960B (en) 2022-11-10 2022-11-10 Network image label denoising method combining sample selection and label correction

Country Status (1)

Country Link
CN (1) CN115564960B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414942A (en) * 2020-03-06 2020-07-14 重庆邮电大学 Remote sensing image classification method based on active learning and convolutional neural network
CN113657449A (en) * 2021-07-15 2021-11-16 北京工业大学 Traditional Chinese medicine tongue picture greasy classification method containing noise labeling data
CN113657561A (en) * 2021-10-20 2021-11-16 之江实验室 Semi-supervised night image classification method based on multi-task decoupling learning
CN114169442A (en) * 2021-12-08 2022-03-11 中国电子科技集团公司第五十四研究所 Remote sensing image small sample scene classification method based on double prototype network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114492574A (en) * 2021-12-22 2022-05-13 中国矿业大学 Pseudo label loss unsupervised countermeasure domain adaptive picture classification method based on Gaussian uniform mixing model
CN114897049A (en) * 2022-04-06 2022-08-12 济南融瓴科技发展有限公司 Label noise monitoring method based on meta-learning
CN115170813A (en) * 2022-06-30 2022-10-11 南京理工大学 Network supervision fine-grained image identification method based on partial label learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414942A (en) * 2020-03-06 2020-07-14 重庆邮电大学 Remote sensing image classification method based on active learning and convolutional neural network
CN113657449A (en) * 2021-07-15 2021-11-16 北京工业大学 Traditional Chinese medicine tongue picture greasy classification method containing noise labeling data
CN113657561A (en) * 2021-10-20 2021-11-16 之江实验室 Semi-supervised night image classification method based on multi-task decoupling learning
CN114169442A (en) * 2021-12-08 2022-03-11 中国电子科技集团公司第五十四研究所 Remote sensing image small sample scene classification method based on double prototype network

Also Published As

Publication number Publication date
CN115564960A (en) 2023-01-03

Similar Documents

Publication Publication Date Title
CN110532880B (en) Sample screening and expression recognition method, neural network, device and storage medium
CN109214353B (en) Training method and device for rapid detection of face image based on pruning model
CN111506773B (en) Video duplicate removal method based on unsupervised depth twin network
CN113688949B (en) Network image data set denoising method based on dual-network joint label correction
CN113627479B (en) Graph data anomaly detection method based on semi-supervised learning
CN114743037A (en) Deep medical image clustering method based on multi-scale structure learning
CN107229945A (en) A kind of depth clustering method based on competition learning
CN114330598A (en) Multi-source heterogeneous data fusion method and system based on fuzzy C-means clustering algorithm
CN116258978A (en) Target detection method for weak annotation of remote sensing image in natural protection area
CN114881125A (en) Label noisy image classification method based on graph consistency and semi-supervised model
CN115051929A (en) Network fault prediction method and device based on self-supervision target perception neural network
CN115564960B (en) Network image label denoising method combining sample selection and label correction
CN114842371A (en) Unsupervised video anomaly detection method
CN113192627A (en) Patient and disease bipartite graph-based readmission prediction method and system
CN116662832A (en) Training sample selection method based on clustering and active learning
CN111008940A (en) Image enhancement method and device
CN115578568A (en) Noise correction algorithm driven by small-scale reliable data set
CN115984682A (en) Fish quantity estimation method based on Unet and BP neural network
Cai et al. SSS-Net: A shadowed-sets-based semi-supervised sample selection network for classification on noise labeled images
CN115017988A (en) Competitive clustering method for state anomaly diagnosis
CN114677535A (en) Training method of domain-adaptive image classification network, image classification method and device
CN110188219B (en) Depth-enhanced redundancy-removing hash method for image retrieval
CN116778968B (en) Heart sound classifying method based on depth separable convolution and attention mechanism
Pei et al. Evidential Multi-Source-Free Unsupervised Domain Adaptation
CN111291602A (en) Video detection method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant