CN114332539A - Network training method for class unbalanced data set - Google Patents

Network training method for class unbalanced data set

Info

Publication number
CN114332539A
Authority
CN
China
Prior art keywords
samples
sample
class
data set
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111671005.3A
Other languages
Chinese (zh)
Inventor
任维
姚鹏
徐亮
陈明潇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yousheng Biotechnology Co ltd
University of Science and Technology of China USTC
Original Assignee
Shenzhen Yousheng Biotechnology Co ltd
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yousheng Biotechnology Co ltd, University of Science and Technology of China USTC filed Critical Shenzhen Yousheng Biotechnology Co ltd
Priority to CN202111671005.3A
Publication of CN114332539A
Legal status: Pending

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a network training method for class-imbalanced data sets, comprising the following steps: acquire a target image data set and determine the number of classes and the number of samples in each class; calculate a weight for each class from its sample count and, combined with the set hyper-parameters, construct a limited-error loss function; train a neural network model with the target image data set, substituting each sample's prediction and true label into the limited-error loss function for error calculation, and continually update the model parameters by back-propagation until the network converges to the expected target. The constructed limited-error loss function is weighted by class size and, through a hyper-parameterized LDAM term that regularizes tail-class generalization, biases training attention toward the smaller tail classes, preventing the network from under-fitting them. The method can be applied to imbalanced image data sets and significantly improves the network's recognition accuracy on them.

Description

Network training method for class unbalanced data set
Technical Field
The invention relates to the technical field of deep learning, and in particular to a network training method for class-imbalanced data sets.
Background
With the development of artificial intelligence, deep learning has achieved remarkable results and is now widely applied in many fields. With the use of high-quality, large-scale datasets (such as ImageNet ILSVRC 2012, MS COCO, and the like), the field of computer vision has also made significant breakthroughs. However, compared with these manually curated datasets with uniformly distributed labels, datasets in real-world scenarios tend to exhibit severe class imbalance.
This kind of imbalance falls into two categories. First, the class-size distribution is imbalanced: a small number of classes (head classes) account for most of the sample data, while the many remaining classes (tail classes) each contain only a few samples. Second, the difficulty distribution is imbalanced: owing to acquisition conditions, human factors, and the like, some classes differ greatly from the majority, for example through poor resolution, unclear imaging, or large differences between foreground and background, and these classes are typically harder to train than the others.
Data sets with such extreme imbalance in class size and difficulty rarely achieve good image-recognition accuracy under conventional deep learning methods. To adapt a deep learning network to such imbalanced data sets, performance can be improved from two directions.
In terms of class-size imbalance: it is often necessary to focus on the characteristics of the tail-class samples so that the network finds an optimal trade-off between head and tail classes. For this problem, the Label-Distribution-Aware Margin loss (LDAM Loss) applies stronger regularization to tail classes than to head classes, improving the generalization error of the tail classes while preserving accuracy on the sample-rich head classes.
In terms of difficulty imbalance: in real samples, different classes may look similar while samples within one class vary greatly, so some classes are easy to separate during classification while others are easily confused and hard to distinguish. For this type of imbalance, the Focal Loss function reduces the loss weight of easily classified samples so that training focuses more on the hard samples, mitigating the imbalance between easy and hard samples and improving network performance.
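For illustration, a minimal Python/PyTorch sketch of the standard Focal Loss described above (the function name and the default γ = 2.0 are illustrative choices, not taken from the invention):

import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    # p_y: predicted probability of the true class for each sample
    log_p = F.log_softmax(logits, dim=1)
    log_py = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
    py = log_py.exp()
    # easy samples (p_y close to 1) are down-weighted by (1 - p_y)^gamma,
    # so training concentrates on the hard, easily confused samples
    return (-(1.0 - py) ** gamma * log_py).mean()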
On the other hand, random cropping is widely used as a data-augmentation method in deep network training and greatly improves a model's spatial robustness. However, when an image's foreground region is small relative to its background, random cropping often produces images that contain very little or even no foreground, as shown in Fig. 1 (left: sample before cropping; right: sample after cropping). Such samples, together with samples whose labels are wrong, are referred to here as "abnormal samples" (outliers). These abnormal samples can retain large loss values even during the convergence phase of model training. Forcing the model to classify them better tends to significantly reduce classification accuracy on the far more numerous ordinary samples.
At present, no method simultaneously handles all three problems: imbalanced sample counts, imbalanced sample difficulty, and the presence of "abnormal samples" in the data set. It is therefore necessary to construct a method that improves deep network performance when an imbalanced data set contains some abnormal samples, thereby improving the classification performance of the network model.
Disclosure of Invention
The invention aims to provide a network training method for class-imbalanced data sets that can be widely applied to the imbalance problems found in real scenarios, significantly improving training accuracy and the image-classification performance of the network model.
The purpose of the invention is realized by the following technical scheme:
a method of network training for a class imbalance dataset, comprising:
acquiring a target image data set, and determining the number of categories and the amount of each type of sample;
calculating the weight of each class by using the sample amount of each class, and constructing a limit error loss function by combining the set hyper-parameters;
and training a neural network model by using the target image data set, substituting the prediction result of the sample and the real label into the error loss limiting function to carry out error calculation, and continuously updating the parameters of the neural network model by using back propagation until the network convergence reaches the expected target.
According to the technical scheme provided by the invention, the constructed limited-error loss function is weighted by class size, and the hyper-parameterized LDAM term that regularizes tail-class generalization biases training attention toward the smaller tail classes, preventing the network from under-fitting them. Network training based on the limited-error loss function can be widely applied to all kinds of imbalanced image data sets and significantly improves the network's recognition accuracy on them.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an abnormal sample caused by random cropping according to the background art of the present invention;
fig. 2 is a flowchart of a network training method for a class imbalance data set according to an embodiment of the present invention;
fig. 3 is a flowchart of the calculation of the limiting error loss function according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The terms that may be used herein are first described as follows:
the terms "comprising," "including," "containing," "having," or other similar terms of meaning should be construed as non-exclusive inclusions. For example: including a feature (e.g., material, component, ingredient, carrier, formulation, material, dimension, part, component, mechanism, device, process, procedure, method, reaction condition, processing condition, parameter, algorithm, signal, data, product, or article of manufacture), is to be construed as including not only the particular feature explicitly listed but also other features not explicitly listed as such which are known in the art.
The network training method for class-imbalanced data sets according to the invention is described in detail below. Details not described in the embodiments belong to the prior art known to those skilled in the art. Anything not specifically mentioned in the embodiments was carried out under conventional conditions in the art or as suggested by the manufacturer.
The embodiment of the invention provides a network training method for class-imbalanced data sets. The method applies to the training process of a deep learning model, and its principle is as follows. First, a target image data set is acquired, and the number of classes C and the per-class sample counts N_i are determined from it. The hyper-parameters γ, T, and S are set; the weight of each class is calculated from its sample count; and, in combination with the set hyper-parameters, the limited-error loss function LE_loss(z, y) is constructed. The neural network model is then trained iteratively on the target image data set: each sample's prediction and true label are substituted into the limited-error loss function for error calculation, and the model parameters are continually updated by back-propagation until the network converges to the expected target, at which point training ends. The limited-error loss function simultaneously handles the imbalance in sample counts across classes and the imbalance in classification difficulty, and further reduces the influence of abnormal samples on the training process; it can therefore be applied to class-imbalanced data sets to effectively mitigate the imbalance problem. As shown in Fig. 2, the main steps of the scheme are as follows:
step 1: acquiring a target image data set, performing common data augmentation (random cutting, random overturning and the like) on the target image data set, and determining the class number C and the number N of each class of samples according to the target image data seti,NiIs the total number of class i samples.
In an embodiment of the present invention, the target image data set is class-imbalanced: some classes have many samples while others have few. Samples within the same class are further divided into difficult samples and simple samples: a difficult sample is one whose prediction error against the true label exceeds a set maximum threshold, and a simple sample is one whose prediction error does not exceed a set minimum threshold. The target image data set also contains abnormal samples, which include samples whose foreground was lost through data augmentation, i.e., samples whose retained foreground area is below a set value, and samples with wrong labels.
In the embodiment of the invention, the specific values of the various settings, such as the maximum and minimum thresholds, can be chosen by the user according to the actual situation or experience; the invention does not limit them.
Step 2: set the hyper-parameters γ, T, and S of the limited-error loss function.
In the embodiment of the invention, γ, T, and S are hyper-parameters: γ adjusts the rate at which the weight of (simple) samples is reduced, T is the threshold for judging whether a sample is abnormal, and S adjusts how far the classification boundary between head and tail classes is shifted toward the head class.
In the embodiment of the invention, the hyper-parameter γ is greater than 0; the larger its value, the faster the weight of simple samples decays relative to that of difficult samples, and the more the training process focuses on classifying difficult samples. The hyper-parameter T lies between 0 and 0.5; the larger its value, the more samples are treated as abnormal.
In the embodiment of the invention, the hyper-parameters γ and T are set in advance using the target image data set and are further searched and tuned numerically during training.
Step 3: calculate the weight of each class from its sample count, and construct the limited-error loss function in combination with the set hyper-parameters.
In the embodiment of the present invention, the limited-error loss function is expressed as:

LE_loss(z, y) = w_y · Loss(z, y)

where z = [z_1, …, z_C], z_j is the predicted value of the sample for class j, C is the number of classes, and y is the true label; N_y is the number of samples of the class given by the true label, from which the weight w_y of that class is computed; and Loss(z, y) is the loss function calculated in conjunction with the hyper-parameters.
In an embodiment of the invention, the weight w_y handles the imbalance in sample counts across classes. When a class has few samples, its weight w_y is correspondingly larger, which increases the loss computed by the loss function, so that the small classes receive more attention during neural network training and the effect of the count imbalance across classes is mitigated.
In the embodiment of the present invention, the loss function Loss(z, y) is calculated as:

Loss(z, y) = σ, if p_y ≤ T;  Loss(z, y) = −(1 − p_y)^γ · log(p_y), if p_y > T

where σ is a constant determined by the hyper-parameter T, expressed as σ = (1 − T)^γ · log(T), and p_y is the probability with which the network predicts the sample as its true label.
From the formula for Loss(z, y): when p_y is no greater than the threshold T, the prediction is far from the true label and the sample is treated as abnormal, so the value of Loss(z, y) is capped at the constant σ. The loss computed for such a sample is correspondingly reduced, the network pays less attention to abnormal samples during training, and their influence on the training process shrinks. Meanwhile, let w* = (1 − p_y)^γ: when p_y is larger, the predicted output is closer to the true label, the sample is a simple sample, and w* is smaller, so the loss computed for it is smaller. Training therefore pays less attention to simple samples and focuses more on difficult ones, mitigating the imbalance between simple and difficult samples.
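A minimal scalar sketch of this piecewise rule in Python (assumptions: the natural logarithm is used, and σ is taken as the positive value of the focal term at p_y = T so that the loss is continuous at the threshold; the text prints σ without the leading minus sign):

import math

def limited_error_term(py, gamma=1.5, T=0.5):
    # sigma: the focal term -(1 - p)^gamma * log(p) evaluated at p = T
    sigma = -((1.0 - T) ** gamma) * math.log(T)
    if py <= T:
        # prediction far from the true label: abnormal sample, loss capped
        return sigma
    # normal sample: focal term, small when py is close to 1 (simple sample)
    return -((1.0 - py) ** gamma) * math.log(py)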
In the embodiment of the invention, the parameter p_y is the probability, obtained by normalizing the prediction vector z, that the predicted sample is its true label. To improve classification accuracy, it is calculated with the LDAM method, expressed as:

p_y = e^(z_y − Δ_y) / (e^(z_y − Δ_y) + Σ_{j≠y} e^(z_j)), with Δ_y = S / N_y^(1/4)

where S is a hyper-parameter that adjusts how far the classification boundary between head and tail classes is shifted toward the head class, and e is the natural constant.
In the embodiment of the invention, calculating p_y with the LDAM method instead of the conventional cross-entropy normalization further sharpens the classification boundaries between different classes and improves classification accuracy, particularly for the minority classes.
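Putting the class weighting, the LDAM-style computation of p_y, and the capped focal term together, a PyTorch sketch of the limited-error loss follows. Two details are assumptions rather than statements of the patent: the exact form of the class weight w_y (taken here as 1/N_y, normalized to mean 1, since the original formula is not recoverable from the text) and the default value of S.

import math
import torch
import torch.nn as nn

class LELoss(nn.Module):
    def __init__(self, class_counts, gamma=1.5, T=0.5, S=0.5):
        super().__init__()
        n = torch.as_tensor(class_counts, dtype=torch.float32)
        # class weight w_y: larger for rarer classes (assumed form: 1/N_y,
        # normalized so the weights average to 1)
        w = 1.0 / n
        self.register_buffer("w", w * n.numel() / w.sum())
        # LDAM margin: Delta_y = S / N_y^(1/4), larger for tail classes
        self.register_buffer("delta", S / n.pow(0.25))
        self.gamma, self.T = gamma, T
        # sigma: focal term at p_y = T (assumed positive, continuous value)
        self.sigma = -((1.0 - T) ** gamma) * math.log(T)

    def forward(self, logits, target):
        # subtract the class-dependent margin from the true-class logit only
        margin = torch.zeros_like(logits)
        margin.scatter_(1, target.unsqueeze(1), self.delta[target].unsqueeze(1))
        p = torch.softmax(logits - margin, dim=1)
        py = p.gather(1, target.unsqueeze(1)).squeeze(1)
        focal = -((1.0 - py) ** self.gamma) * torch.log(py.clamp_min(1e-12))
        # cap the loss of abnormal samples (p_y <= T) at the constant sigma
        loss = torch.where(py <= self.T, torch.full_like(focal, self.sigma), focal)
        # weight by the true-label class weight w_y and average over the batch
        return (self.w[target] * loss).mean()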
Fig. 3 shows the calculation flow of the limited-error loss function.
Step 4: use the limited-error loss function in the back-propagation phase of neural network training. Computing the loss of samples from different classes and of different difficulty with this function mitigates the effects of class-count imbalance and classification-difficulty imbalance until the network converges, finally achieving the goal of network training.
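A minimal training-loop sketch using the LELoss module above; train_loader, the ResNet-18 stand-in for the ResNet-32 of the experiments below, and the optimizer settings are placeholder assumptions:

import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10).cuda()
criterion = LELoss(class_counts=[5000, 2997, 1796, 1077, 645, 387, 232, 139, 83, 50],
                   gamma=1.5, T=0.5).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4)

for epoch in range(200):
    for images, labels in train_loader:  # train_loader: loader over the imbalanced set
        images, labels = images.cuda(), labels.cuda()
        loss = criterion(model(images), labels)  # limited-error loss
        optimizer.zero_grad()
        loss.backward()  # back-propagate and update the model parameters
        optimizer.step()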
In the embodiment of the present invention, the network-parameter update process can be implemented with conventional techniques and is not detailed here; the neural network model may be an image-classification network of any current architecture.
The scheme of the embodiment of the invention mainly has the following beneficial effects:
1. By introducing LDAM, which regularizes tail-class generalization, and weighting the loss function by class size, training attention is biased toward the smaller tail classes, preventing the network from under-fitting them.
2. Reducing the weight of simple samples amplifies the contribution of difficult samples to the loss function, so network training focuses more on learning the features of difficult samples.
3. The negative influence on neural network training of outliers caused by preprocessing (such as cropping) or label errors is effectively mitigated.
4. The method can be widely applied to various imbalanced data sets and significantly improves the network's recognition accuracy on them.
To verify the effectiveness of the scheme, a relevant experiment was carried out, taking the classification of images in a real scene as an example.
The selected data set is the public CIFAR-10 data set. The balanced, ten-class original data set was converted into an imbalanced one by the common approach of decaying the per-class sample counts exponentially, as shown in Table 1 (a short sketch reproducing these counts follows the table).
Category: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck
Count: 5000, 2997, 1796, 1077, 645, 387, 232, 139, 83, 50
Table 1 Imbalanced sample data distribution
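The counts in Table 1 follow the usual exponential-decay construction with imbalance ratio N_max/N_min = 100; a short Python sketch reproduces them exactly:

n_max, n_min, C = 5000, 50, 10
counts = [int(n_max * (n_min / n_max) ** (i / (C - 1))) for i in range(C)]
print(counts)  # [5000, 2997, 1796, 1077, 645, 387, 232, 139, 83, 50]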
To compare the practical effect of different loss functions, a ResNet-32 neural network model was selected and the hyper-parameters were set to T = 0.5 and γ = 1.5; on this basis, comparison experiments were run against the existing common loss functions (Cross-Entropy Loss (CE), LDAM Loss, and Focal Loss). The experimental results are shown in Table 2.
Loss function Average accuracy (%)
CE 70.54
LDAM Loss 73.35
Focal Loss 70.38
LE Loss (the limited-error loss function of the invention) 75.84
Table 2 Experimental results
It can be observed that, when the imbalanced long-tail data set is used as the training set, training with LE Loss, the limited-error loss calculation method for imbalanced data, achieves higher accuracy than the existing CE Loss, LDAM Loss, and Focal Loss functions.
Through the above description of the embodiments, it is clear to those skilled in the art that the above embodiments can be implemented by software, and can also be implemented by software plus a necessary general hardware platform. With this understanding, the technical solutions of the embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods according to the embodiments of the present invention.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A network training method for a class-imbalanced data set, characterized by comprising:
acquiring a target image data set, and determining the number of classes and the number of samples in each class;
calculating a weight for each class from its sample count, and constructing a limited-error loss function in combination with the set hyper-parameters;
and training a neural network model with the target image data set, substituting each sample's prediction and true label into the limited-error loss function for error calculation, and continually updating the parameters of the neural network model by back-propagation until the network converges to the expected target.
2. The network training method for a class-imbalanced data set according to claim 1, wherein the limited-error loss function is expressed as:
LE_loss(z, y) = w_y · Loss(z, y)
where z = [z_1, …, z_C], z_j is the predicted value of the sample for class j, C is the number of classes, and y is the true label; N_y is the number of samples of the class given by the true label, from which the weight w_y of that class is computed; and Loss(z, y) is the loss function calculated in conjunction with the hyper-parameters.
3. The network training method for a class-imbalanced data set according to claim 2, wherein the loss function Loss(z, y) is calculated as:
Loss(z, y) = σ, if p_y ≤ T;  Loss(z, y) = −(1 − p_y)^γ · log(p_y), if p_y > T
where γ and T are both hyper-parameters, γ adjusting the rate at which sample weights are reduced and T being the threshold for judging whether a sample is abnormal; σ is a constant determined by the hyper-parameter T; and p_y is the probability with which the network predicts the sample as its true label.
4. The network training method for a class-imbalanced data set according to claim 3, wherein the parameter p_y is calculated by the LDAM method, expressed as:
p_y = e^(z_y − Δ_y) / (e^(z_y − Δ_y) + Σ_{j≠y} e^(z_j)), with Δ_y = S / N_y^(1/4)
where S is a hyper-parameter and e is the natural constant.
5. The network training method for a class-imbalanced data set according to claim 3, wherein the constant σ is calculated as:
σ = (1 − T)^γ · log(T).
6. The network training method for a class-imbalanced data set according to claim 1, wherein the target image data set is class-imbalanced, and samples within the same class are divided into difficult samples and simple samples: a difficult sample is one whose prediction error against the true label exceeds a set maximum threshold, and a simple sample is one whose prediction error does not exceed a set minimum threshold; the target image data set further contains abnormal samples, which include samples whose foreground was lost through data augmentation, i.e., samples whose retained foreground area is below a set value, and samples with wrong labels.
7. The network training method for a class-imbalanced data set according to claim 3, wherein the hyper-parameter γ is greater than 0; the larger its value, the faster the weight of simple samples decays relative to that of difficult samples, and the more the training process focuses on classifying difficult samples; and the larger the value of the hyper-parameter T, the more samples are judged to be abnormal;
wherein a difficult sample is one whose prediction error against the true label exceeds a set maximum threshold, and a simple sample is one whose prediction error does not exceed a set minimum threshold; the abnormal samples include samples whose foreground was lost through data augmentation, i.e., samples whose retained foreground area is below a set value, and samples with wrong labels.
8. The network training method for a class-imbalanced data set according to claim 1, 2, or 3, wherein the hyper-parameters are set in advance using the target image data set and are numerically optimized during training.
CN202111671005.3A 2021-12-31 2021-12-31 Network training method for class unbalanced data set Pending CN114332539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111671005.3A CN114332539A (en) 2021-12-31 2021-12-31 Network training method for class unbalanced data set

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111671005.3A CN114332539A (en) 2021-12-31 2021-12-31 Network training method for class unbalanced data set

Publications (1)

Publication Number Publication Date
CN114332539A 2022-04-12

Family

ID=81020599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111671005.3A Pending CN114332539A (en) 2021-12-31 2021-12-31 Network training method for class unbalanced data set

Country Status (1)

Country Link
CN (1) CN114332539A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863193A (en) * 2022-07-07 2022-08-05 之江实验室 Long-tail learning image classification and training method and device based on mixed batch normalization
CN115677346A (en) * 2022-11-07 2023-02-03 北京赛乐米克材料科技有限公司 Preparation method of color zirconium gem ceramic nose pad
CN115677346B (en) * 2022-11-07 2023-09-12 北京赛乐米克材料科技有限公司 Preparation method of colored zirconium precious stone ceramic nose pad
CN116660389A (en) * 2023-07-21 2023-08-29 山东大禹水务建设集团有限公司 River sediment detection and repair system based on artificial intelligence
CN116660389B (en) * 2023-07-21 2023-10-13 山东大禹水务建设集团有限公司 River sediment detection and repair system based on artificial intelligence
CN117743857A (en) * 2023-12-29 2024-03-22 北京海泰方圆科技股份有限公司 Text correction model training, text correction method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN114332539A (en) Network training method for class unbalanced data set
CN108038859B (en) PCNN graph segmentation method and device based on PSO and comprehensive evaluation criterion
CN109242878B (en) Image multi-threshold segmentation method based on self-adaptive cuckoo optimization method
Xie et al. Image de-noising algorithm based on Gaussian mixture model and adaptive threshold modeling
Chou et al. Turbulent-PSO-based fuzzy image filter with no-reference measures for high-density impulse noise
KR101113006B1 (en) Apparatus and method for clustering using mutual information between clusters
Salmona et al. Can Push-forward Generative Models Fit Multimodal Distributions?
CN109871855B (en) Self-adaptive deep multi-core learning method
CN114283307B (en) Network training method based on resampling strategy
CN111696046A (en) Watermark removing method and device based on generating type countermeasure network
Yuan et al. Neighborloss: a loss function considering spatial correlation for semantic segmentation of remote sensing image
CN112434213A (en) Network model training method, information pushing method and related device
CN110991554B (en) Improved PCA (principal component analysis) -based deep network image classification method
Yap et al. A recursive soft-decision approach to blind image deconvolution
Wu et al. Semi-supervised semantic segmentation via entropy minimization
Chen et al. Patch selection denoiser: An effective approach defending against one-pixel attacks
CN106952287A (en) A kind of video multi-target dividing method expressed based on low-rank sparse
Zhao et al. Underwater fish detection in sonar image based on an improved Faster RCNN
CN116629376A (en) Federal learning aggregation method and system based on no data distillation
CN113344935B (en) Image segmentation method and system based on multi-scale difficulty perception
CN116246126A (en) Iterative unsupervised domain self-adaption method and device
CN115294424A (en) Sample data enhancement method based on generation countermeasure network
Turan ANN Based Removal for Salt and Pepper Noise
CN110675344A (en) Low-rank denoising method and device based on real color image self-similarity
Li et al. Aggressive growing mixup: A faster and better semi-supervised learning approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20220412