WO2022042002A1 - Training method for semi-supervised learning model, image processing method, and device - Google Patents
Training method for semi-supervised learning model, image processing method, and device
- Publication number
- WO2022042002A1 (PCT/CN2021/102726)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- training
- semi-supervised learning
- learning model
- sample set
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0895—Weakly supervised learning, e.g. semi-supervised or self-supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/771—Feature selection, e.g. selecting representative features from a multi-dimensional feature space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7753—Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Definitions
- the present application relates to the field of machine learning, and in particular, to a training method, image processing method and device for a semi-supervised learning model.
- the embodiments of the present application provide a training method for a semi-supervised learning model, an image processing method, and a device, which are used to predict the classification categories (that is, the labels) of a part of the unlabeled samples through the trained first semi-supervised learning model in the current training stage; if a prediction is correct, the correct label of the sample is obtained, and otherwise an incorrect label of the sample can be excluded. After that, in the next training stage, the above information is used to rebuild the training set (ie, the first training set) and update the initial semi-supervised learning model, so as to improve the prediction accuracy of the model.
- the embodiments of the present application first provide a training method for a semi-supervised learning model, which can be used in the field of artificial intelligence.
- the method may include: first, the training device trains an initial semi-supervised learning model (which can be referred to as the initial model) according to the obtained initial training set, so as to obtain the trained first semi-supervised learning model (which can be referred to as the trained first model).
- In the initial training set, one part consists of labeled samples and the other part consists of unlabeled samples; the labeled samples are called the first labeled sample set, and the unlabeled samples are called the first unlabeled sample set.
- the training device then selects an initial subset from the first unlabeled sample set in the initial training set, and the unlabeled samples in the initial subset constitute test data used to test the trained first model: each unlabeled sample in the selected initial subset is predicted by the trained first model, so as to obtain the predicted label corresponding to each selected unlabeled sample (the trained first model outputs a probability prediction for each selected unlabeled sample over all classification categories, and the classification category with the highest probability is usually selected as the model's predicted label for that sample), and the predicted labels constitute the first predicted label set.
- the method of one-bit labeling is as follows: the annotator answers a "yes" or "no" question for the predicted label corresponding to each predicted sample. If the predicted label is a correctly predicted classification category, the positive label (also called the correct label) of the unlabeled sample is obtained; for example, if the predicted label is "dog" and the true label of the unlabeled sample is also "dog", the prediction is correct and the unlabeled sample gets the positive label "dog". If the predicted label is a wrongly predicted classification category, the negative label of the unlabeled sample is obtained, and an erroneous label of the unlabeled sample can be excluded accordingly; for example, if the predicted label is "cat" while the true label of the unlabeled sample is actually "dog", the prediction is wrong and the unlabeled sample gets the negative label "not a cat".
- the initial subset is thus divided into a first subset and a second subset, where the first subset is the set of samples corresponding to correctly predicted classification categories (ie, positive labels), and the second subset is the set of samples corresponding to wrongly predicted classification categories (ie, negative labels).
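- As a minimal illustrative sketch (not taken from the application), the one-bit annotation step can be written as follows; `annotator_says_yes` is a hypothetical callback standing in for the annotator, and plain Python lists are assumed.

```python
# Minimal sketch of one-bit annotation: the annotator only answers "yes"/"no"
# for each predicted label. `annotator_says_yes` is a hypothetical callback
# (a human annotator or a device that knows the true labels).
def one_bit_annotate(initial_subset, predicted_labels, annotator_says_yes):
    first_subset = []   # correctly predicted samples -> positive labels, e.g. "dog"
    second_subset = []  # wrongly predicted samples  -> negative labels, e.g. "not a cat"
    for sample, predicted in zip(initial_subset, predicted_labels):
        if annotator_says_yes(sample, predicted):
            first_subset.append((sample, predicted))
        else:
            second_subset.append((sample, predicted))
    return first_subset, second_subset
```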
- the training device reconstructs the training set.
- the reconstructed training set can be called the first training set.
- the positive-label samples (ie the first subset) are put together with the existing labeled samples as the labeled samples of this stage, which can also be called the second labeled sample set; the negative-label samples (ie the second subset) constitute the negative label sample set of this stage; the remaining unlabeled samples in the first unlabeled sample set constitute the second unlabeled sample set of this stage.
- These three types of samples together constitute the first training set.
- the initial model is retrained according to the first training set to obtain a second semi-supervised learning model with stronger capability (which can be referred to as the trained second model).
- the trained first semi-supervised learning model is used to predict the classification categories of a part of the unlabeled samples to obtain predicted labels, and it is determined whether each predicted label is correct: if the prediction is correct, the correct label (that is, the positive label) of the sample is obtained; otherwise, an incorrect label (that is, the negative label) of the sample can be excluded.
- the training device uses the above information to reconstruct the training set (ie the first training set) and retrains the initial semi-supervised learning model according to the first training set, so as to improve the prediction accuracy of the model. Moreover, because the annotator only needs to answer "yes" or "no" to the predicted labels, this labeling method can alleviate much of the manual labeling pressure of providing correctly labeled data for machine learning.
- the network structure of the initial semi-supervised learning model may have multiple specific representations and may, for example, include any one of the following models: Π-model, VAT, LPDSSL, TNAR, pseudo-label, DCT, mean teacher model.
- This describes the semi-supervised learning models to which the training method provided in the embodiments of the present application can be applied; the method is generally applicable, and the listed models are optional examples.
- the initial semi-supervised learning model may be a learning model with only one loss function, such as any one of the Π-model, VAT, LPDSSL, TNAR, pseudo-label, and DCT.
- In this case, the training device trains the initial model according to the first training set to obtain the trained second model, which may specifically be as follows: for the second labeled sample set and the second unlabeled sample set, the training device uses the first loss function to train the initial semi-supervised learning model, where the first loss function is the original loss function loss1 of the initial semi-supervised learning model; for the negative label sample set, the training device uses the second loss function to train the initial semi-supervised learning model, where the second loss function (which may be called loss2) is the difference between the predicted value output by the model and a modified value, the modified value being obtained by setting the dimension of the predicted value corresponding to the wrongly predicted classification category to zero. The second loss function loss2 is the above-mentioned new loss function constructed for the unlabeled sample set.
- In other words, when the loss function of the initial semi-supervised learning model is a single one, a new loss function can be constructed for the negative label sample set; that is, different loss functions are adopted for different types of sample sets in the training set, so that training the initial semi-supervised learning model based on the total loss function is more targeted.
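- As an illustration only, one possible reading of the second loss function is sketched below in Python (PyTorch); measuring the "difference" between the predicted value and the modified value with a squared error is an assumption, as are all names used here.

```python
# Sketch of the second loss function (loss2) for a negative-label sample:
# zero out the dimension of the wrongly predicted category and penalise the
# difference between the predicted value and this modified value.
import torch

def negative_label_loss(pred: torch.Tensor, negative_index: int) -> torch.Tensor:
    modified = pred.clone()
    modified[negative_index] = 0.0                # modified value
    return torch.sum((pred - modified) ** 2)      # reduces to pred[negative_index] ** 2
```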
- the training method of the embodiments of the present application can be used not only to train the above-mentioned semi-supervised learning models with only one loss function, but also to train semi-supervised learning models with two or more loss functions; the training processes of the two kinds of semi-supervised learning models are similar.
- For example, the initial semi-supervised learning model can be the mean teacher model, whose training strategy assumes that the training samples are a labeled sample (x1, y1) and an unlabeled sample x2, where y1 is the label of x1.
- In this case, the training device trains the initial semi-supervised learning model according to the first training set to obtain the trained second semi-supervised learning model, which may specifically be as follows: for the second labeled sample set, the training device uses the third loss function to train the mean teacher model, where the third loss function is the aforementioned loss function 1 (ie loss11); the training device also uses the fourth loss function to train the mean teacher model according to the second labeled sample set and the second unlabeled sample set, where the fourth loss function is the aforementioned loss function 2 (ie loss12), and the third loss function loss11 and the fourth loss function loss12 are both original loss functions of the mean teacher model; in addition, for the negative label sample set, the training device also uses the fifth loss function to train the mean teacher model, where the fifth loss function (which may be called loss13) is the difference between the predicted value output by the model and a modified value, the modified value being obtained by setting the dimension of the predicted value corresponding to the wrongly predicted classification category to zero; the fifth loss function loss13 is the above-mentioned new loss function constructed for the unlabeled sample set.
- loss is the output value of the total loss function of the entire mean teacher model, and the training process aims to make this total loss as small as possible.
- Thus, a new loss function can also be constructed for the negative label sample set in this case; that is, different loss functions are adopted for different types of sample sets in the training set, so that training the initial semi-supervised learning model based on the total loss function is more targeted.
- the third loss function may be a cross entropy loss function; and/or the fourth loss function may be a mean square error loss function.
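- A minimal sketch of these two choices, assuming PyTorch-style tensors (the variable names are illustrative, not from the application):

```python
# Third loss function: cross-entropy on the second labeled sample set.
# Fourth loss function: mean squared error between student and teacher predictions.
import torch.nn.functional as F

def third_loss(student_logits, labels):
    return F.cross_entropy(student_logits, labels)

def fourth_loss(student_probs, teacher_probs):
    return F.mse_loss(student_probs, teacher_probs)
```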
- the second unlabeled sample set is used as the new first unlabeled sample set
- the second semi-supervised learning model is used as the new first semi-supervised learning model
- As this procedure is repeated, the accuracy of the model improves. Therefore, the most direct method is to divide the training process into multiple stages: in each stage, some samples are selected from the first unlabeled sample set for prediction, the training set is reconstructed based on the predicted labels, and the reconstructed training set is then used to update the model, so that the generalization ability and prediction accuracy of the trained second semi-supervised learning model obtained in each stage are stronger than those of the second semi-supervised learning model obtained in the previous stage.
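- The multi-stage procedure can be sketched as the loop below; the helper callables (`train_fn`, `predict_fn`, `annotate_fn`, `rebuild_fn`, `select_fn`) are placeholders for the steps described above, not names from the application.

```python
# Sketch of the staged training loop: predict on part of the unlabeled set,
# perform one-bit annotation, rebuild the training set, retrain the initial model.
def staged_training(initial_model, labeled_set, unlabeled_set, num_stages,
                    train_fn, predict_fn, annotate_fn, rebuild_fn, select_fn):
    negative_set = []
    model = train_fn(initial_model, labeled_set, unlabeled_set, negative_set)
    for _ in range(num_stages):
        subset = select_fn(unlabeled_set)                     # initial subset
        predicted = predict_fn(model, subset)                 # first predicted label set
        positive, negative = annotate_fn(subset, predicted)   # one-bit annotation
        labeled_set, negative_set, unlabeled_set = rebuild_fn(
            labeled_set, unlabeled_set, subset, positive, negative)
        # the initial model is retrained on the reconstructed training set each stage
        model = train_fn(initial_model, labeled_set, unlabeled_set, negative_set)
    return model
```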
- the above method further includes: the trained second semi-supervised learning model is deployed on a target device, the target device is used to acquire a target image, and the trained second semi-supervised learning model is used to predict the label of the target image.
- the specific use of the trained second semi-supervised learning model is described, that is, it is deployed on the target device to predict the label of the target image, that is, it is used to predict the category of the image.
- Compared with a semi-supervised learning model trained by existing training methods, the trained second semi-supervised learning model provided by the embodiments of the present application improves the accuracy of target image recognition.
- the selection of the initial subset by the training device from the first unlabeled sample set may specifically be: randomly selecting a preset number of unlabeled samples from the first unlabeled sample set to form the initial subset .
- a second aspect of the embodiments of the present application further provides an image processing method, which specifically includes: first, an execution device acquires a target image; then, the execution device uses the target image as the input of the trained semi-supervised learning model and outputs the prediction result for the target image, where the trained semi-supervised learning model is the second semi-supervised learning model obtained in the first aspect or any possible implementation manner of the first aspect.
- an application method of the second semi-supervised learning model after training is described, that is, it is used to perform category prediction on images.
- the semi-supervised learning model provided by the embodiments of the present application improves the accuracy of target image recognition.
- a third aspect of the embodiments of the present application provides a training device, where the training device has a function of implementing the method of the first aspect or any possible implementation manner of the first aspect.
- This function can be implemented by hardware or by executing corresponding software by hardware.
- the hardware or software includes one or more modules corresponding to the above functions.
- a fourth aspect of the embodiments of the present application provides an execution device, where the execution device has a function of implementing the method of the second aspect.
- This function can be implemented by hardware or by executing corresponding software by hardware.
- the hardware or software includes one or more modules corresponding to the above functions.
- a fifth aspect of the embodiments of the present application provides a training device, which may include a memory, a processor, and a bus system, where the memory is used to store a program, and the processor is used to call the program stored in the memory to execute the method of the first aspect of the embodiments of the present application or of any possible implementation manner of the first aspect.
- a sixth aspect of the embodiments of the present application provides a training device, which may include a memory, a processor, and a bus system, where the memory is used to store a program, and the processor is used to call the program stored in the memory to execute the method of the second aspect of the embodiments of the present application.
- a seventh aspect of the embodiments of the present application provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, the computer is enabled to execute the method of the first aspect or of any possible implementation manner of the first aspect.
- An eighth aspect of the embodiments of the present application provides a computer program or computer program product which, when running on a computer, enables the computer to execute the method of the first aspect or of any possible implementation manner of the first aspect, or enables the computer to perform the method of the second aspect.
- FIG. 1 is a schematic diagram of a process of training and reasoning of a semi-supervised learning model provided by an embodiment of the present application
- FIG. 2 is a schematic diagram of a process of active learning model training and reasoning provided by an embodiment of the present application
- Fig. 3 is a schematic diagram of the mean teacher model
- FIG. 4 is a schematic structural diagram of an artificial intelligence main body framework provided by an embodiment of the present application.
- Fig. 5 is an overall flow chart of the training method of the semi-supervised learning model provided by the embodiment of the present application.
- FIG. 6 is a schematic diagram of a system architecture of a task processing system provided by an embodiment of the present application.
- FIG. 7 is a schematic flowchart of a training method for a semi-supervised learning model provided by an embodiment of the present application.
- FIG. 8 is a schematic flowchart of a training method for a semi-supervised learning model provided by an embodiment of the present application.
- Fig. 9 is a schematic diagram of the mean teacher model training process provided by the embodiment of the application.
- FIG. 10 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
- FIG. 11 is a schematic diagram of an application scenario provided by an embodiment of the present application.
- FIG. 12 is another schematic diagram of an application scenario provided by an embodiment of the present application.
- FIG. 13 is another schematic diagram of an application scenario provided by an embodiment of the present application.
- FIG. 14 is a schematic diagram of a training device provided by an embodiment of the present application.
- FIG. 15 is a schematic diagram of an execution device provided by an embodiment of the present application.
- FIG. 16 is another schematic diagram of the training device provided by the embodiment of the application.
- FIG. 17 is another schematic diagram of an execution device provided by an embodiment of the present application.
- FIG. 18 is a schematic structural diagram of a chip provided by an embodiment of the present application.
- the embodiments of the present application provide a training method for a semi-supervised learning model, an image processing method, and a device, which are used to predict the classification categories (that is, the labels) of a part of the unlabeled samples through the trained first semi-supervised learning model in the current training stage; if a prediction is correct, the correct label of the sample is obtained, and otherwise an incorrect label of the sample can be excluded. After that, in the next training stage, the above information is used to rebuild the training set (ie, the first training set) and update the initial semi-supervised learning model, so as to improve the prediction accuracy of the model.
- the embodiments of the present application involve a lot of related knowledge about semi-supervised learning, learning models, etc.
- related terms and concepts that may be involved in the embodiments of the present application are first introduced below. It should be understood that the interpretation of related terms and concepts may be constrained by the specific circumstances of the embodiments of the present application, but this does not mean that the present application is limited to those specific circumstances; the interpretations may also differ between embodiments, and no specific limitation is imposed here.
- a neural network can be composed of neural units and can be understood as a network with an input layer, a hidden layer, and an output layer. Generally speaking, the first layer is the input layer, the last layer is the output layer, and the layers in between are all hidden layers. A neural network with many hidden layers is called a deep neural network (DNN).
- the work of each layer in a neural network can be described mathematically. From a physical point of view, the work of each layer can be understood as completing the transformation from the input space (the set of input vectors) to the output space (that is, from the row space of a matrix to its column space) through five operations on the input space: 1. dimension raising/lowering; 2. enlarging/reducing; 3. rotation; 4. translation; 5. "bending". Operations 1, 2, and 3 are completed by W·x, operation 4 is completed by +b, and operation 5 is implemented by the activation function a().
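- Stated as a formula (a standard formulation included here only as an illustration, with W the weight matrix, b the offset vector, and a(·) the activation function):

```latex
\vec{y} = a\left(W \cdot \vec{x} + b\right)
```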
- learning models (also referred to as learners, models, etc.) are used for tasks based on machine learning (eg, active learning, supervised learning, unsupervised learning, semi-supervised learning, etc.).
- the error back propagation (BP) algorithm can be used to correct the parameters in the initial neural network model so that the reconstruction error loss of the neural network model becomes smaller and smaller. Specifically, the input signal is forward-propagated until the output produces an error loss, and the parameters in the initial neural network model are updated by back-propagating the error loss information, so that the error loss converges.
- the back-propagation algorithm is a back-propagation movement dominated by error loss, aiming to obtain the parameters of the optimal neural network model, such as the weight matrix.
- a feature refers to the input variable, the x variable in simple linear regression, a simple machine learning task may use a single feature, while a more complex machine learning task may use millions of features.
- the label is the y variable in simple linear regression, the label can be the future price of wheat, the species of animal/plant shown in the picture, the meaning of the audio clip, or anything.
- the label refers to the classification category of a picture. For example, for a picture of a cat, a person knows that it is a cat, but the computing device does not; a label is therefore attached to the picture to indicate to the computing device that the information contained in the picture is "cat", so that the computing device knows it is a cat and learns about all cats based on this label: all cats can be recognized through this one cat. Labeling the data therefore tells the computing device what the multiple features of the input variable describe (ie, y), and y can be called the label or the target (ie, the target value).
- the task of machine learning is often to learn the potential patterns in a d-dimensional input training sample set (which can be referred to as the training set for short).
- the learning models (also referred to as learners, models, etc.) used for tasks based on machine learning (eg, active learning, supervised learning, unsupervised learning, semi-supervised learning, etc.) are essentially neural networks.
- the model defines the relationship between features and labels.
- the application of the model generally includes two stages: training and inference.
- the training stage is used to train the model according to the training set to obtain the trained model;
- the inference stage is used for the model to make label predictions on real unlabeled instances, and the prediction accuracy is one of the important indicators for measuring how well a model has been trained.
- Supervised learning refers to the learning tasks in which the training samples contain labeled information (that is, the data has labels), such as: common classification and regression algorithms;
- Unsupervised learning is a learning task in which the training samples do not contain labeled information (that is, the data is unlabeled), such as clustering algorithms, anomaly detection algorithms, etc.
- In practice, the situation encountered is often a compromise between the two, that is, only part of the samples are labeled and the other part is unlabeled. If only the labeled samples or only the unlabeled samples are used, on the one hand some samples are wasted, and on the other hand the effect of the trained model is not very good.
- users need to mark the webpages they are interested in, but few users are willing to take the time to provide marks.
- If the unlabeled sample set is directly discarded and a traditional supervised learning method is used, the training samples are often insufficient, so the model's ability to describe the overall distribution is weakened, which affects the generalization performance of the model.
- Semi-supervised learning is a learning method that combines supervised learning and unsupervised learning.
- the corresponding model used can be called a semi-supervised learning model, as shown in Figure 1.
- Figure 1 shows the process of training and inference of the semi-supervised learning model.
- the training set used by the model consists of a part of labeled samples (a small part) and another part of unlabeled samples (most of them).
- the basic idea of semi-supervised learning is to establish a model based on assumptions about the data distribution and use it to label the unlabeled samples, so that the model does not rely on external interaction and automatically uses the unlabeled samples to improve learning performance.
- the training set used in active learning is similar to the training set used in semi-supervised learning, as shown in Figure 2, which illustrates the process of active learning model training and inference.
- Its training set is likewise composed of a part of labeled samples (a small part) and another part of unlabeled samples (the majority), but it differs from the semi-supervised learning model. The basic idea of active learning is to first train the active learning model with only the labeled samples in the training set, then predict the unlabeled samples based on the active learning model, select samples with high uncertainty or low classification confidence (such as the unlabeled sample a queried in Figure 2), and consult labeling experts about them; for example, an expert manually identifies the selected unlabeled sample as "horse", the unlabeled sample is then given the label "horse", the samples labeled with true labels by the experts are moved into the labeled portion of the training set, and the active learning model is retrained using the augmented labeled samples to improve its accuracy.
- the problem with active learning is that it requires experts to provide true labels through interaction, which is labor-intensive.
- the mean teacher model also known as the teacher-student model, is a semi-supervised learning model.
- the relevant structure of the model is shown in Figure 3.
- the model includes two sub-models: one is the student model and the other is the teacher model. That is to say, the mean teacher model acts as both student and teacher: as a teacher, it generates the learning targets of the student model through the teacher model; as a student, it uses the targets generated by the teacher model to learn.
- the network parameters of the teacher model are obtained by the weighted average of the network parameters of the student model in history (the first few steps).
- the network structure of the two sub-models in the mean teacher model is the same.
- the network parameters of the student model are updated according to the loss function gradient descent method; the network parameters of the teacher model are obtained iteratively through the network parameters of the student model.
- the mean teacher model is a type of semi-supervised learning model, part of the training set it uses is labeled samples, and the other part is unlabeled samples.
- the training samples are labeled samples (x1, y1) and unlabeled samples x2, where y1 is the label of x1.
- Ideally, the predicted label label1 and the predicted label label2 should be the same, that is, the model can resist the perturbation applied to the unlabeled sample x2; in other words, it is hoped that the predicted labels of the student model and the teacher model are as equal as possible, so the output value loss12 of loss function 2 is obtained according to label1 and label2.
- the student model is updated according to loss = loss11 + λ*loss12, where λ represents the balance coefficient, an adjustable parameter obtained through training; loss is the output value of the total loss function of the entire mean teacher model, and the training process aims to make this total loss as small as possible.
- λ represents the balance coefficient, which is an adjustable parameter obtained through training
- loss is the output value of the total loss function of the entire mean teacher model
- the training process is Make this total loss as small as possible.
- the network parameter θ' in the teacher model in Figure 3 is obtained by updating the network parameter θ in the student model, and the update method is an exponential moving average.
- the η' in the teacher model in Figure 3 is a parameter that applies perturbation processing to the input.
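- As an illustration only, one training step of such a mean teacher model is commonly implemented roughly as follows (PyTorch-style; names such as `consistency_weight` for the balance coefficient λ and `ema_decay` are assumptions, and the Gaussian input noise is just one possible perturbation):

```python
# Minimal sketch of one mean-teacher training step with PyTorch-style models
# `student` and `teacher` that share the same architecture.
import torch
import torch.nn.functional as F

def mean_teacher_step(student, teacher, optimizer, x1, y1, x2,
                      consistency_weight=1.0, ema_decay=0.99):
    # loss11: supervised loss of the student on the labeled sample (x1, y1)
    loss11 = F.cross_entropy(student(x1), y1)

    # loss12: consistency loss between student and teacher predictions
    # on the (perturbed) unlabeled sample x2
    student_pred = F.softmax(student(x2 + 0.1 * torch.randn_like(x2)), dim=1)
    with torch.no_grad():
        teacher_pred = F.softmax(teacher(x2 + 0.1 * torch.randn_like(x2)), dim=1)
    loss12 = F.mse_loss(student_pred, teacher_pred)

    # total loss = loss11 + lambda * loss12; the student is updated by gradient descent
    loss = loss11 + consistency_weight * loss12
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # teacher parameters: exponential moving average of the student parameters
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.data.mul_(ema_decay).add_(s_param.data, alpha=1.0 - ema_decay)
    return loss.item()
```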
- Figure 4 shows a schematic structural diagram of the main framework of artificial intelligence, which is explained below along two dimensions: the "intelligent information chain" (horizontal axis) and the "IT value chain" (vertical axis).
- the "intelligent information chain” reflects a series of processes from data acquisition to processing. For example, it can be the general process of intelligent information perception, intelligent information representation and formation, intelligent reasoning, intelligent decision-making, intelligent execution and output. In this process, data has gone through the process of "data-information-knowledge-wisdom".
- the "IT value chain" reflects the value that artificial intelligence brings to the information technology industry, from the underlying infrastructure of artificial intelligence and information (provision and processing technology implementation) up to the industrial ecology of the system.
- the infrastructure provides computing power support for the artificial intelligence system, realizes communication with the outside world, and provides support through the basic platform. It communicates with the outside world through sensors; computing power is provided by smart chips (hardware acceleration chips such as CPU, NPU, GPU, ASIC, and FPGA); the basic platform includes distributed computing frameworks, networks, and related platform guarantees and support, which can include cloud storage and computing, interconnection networks, and so on. For example, sensors communicate with the outside to obtain data, and these data are provided to the smart chips in the distributed computing system provided by the basic platform for calculation.
- smart chips hardware acceleration chips such as CPU, NPU, GPU, ASIC, FPGA
- the basic platform includes distributed computing framework and network-related platform guarantee and support, which can include cloud storage and computing, interconnection networks, etc. For example, sensors communicate with external parties to obtain data, and these data are provided to the intelligent chips in the distributed computing system provided by the basic platform for calculation.
- the data on the upper layer of the infrastructure is used to represent the data sources in the field of artificial intelligence.
- the data involves graphics, images, voice, and text, as well as IoT data from traditional devices, including business data from existing systems and sensory data such as force, displacement, liquid level, temperature, and humidity.
- Data processing usually includes data training, machine learning, deep learning, search, reasoning, decision-making, etc.
- machine learning and deep learning can perform symbolic and formalized intelligent information modeling, extraction, preprocessing, training, etc. on data.
- Reasoning refers to the process of simulating human's intelligent reasoning method in a computer or intelligent system, using formalized information to carry out machine thinking and solving problems according to the reasoning control strategy, and the typical function is search and matching.
- Decision-making refers to the process of making decisions after intelligent information is reasoned, usually providing functions such as classification, sorting, and prediction.
- some general capabilities can be formed based on the results of data processing, such as algorithms or a general system, such as translation, text analysis, computer vision processing, speech recognition, image identification, etc.
- Intelligent products and industry applications refer to the products and applications of artificial intelligence systems in various fields. They are the encapsulation of the overall artificial intelligence solution, the productization of intelligent information decision-making, and the realization of landing applications. Its application areas mainly include: intelligent terminals, intelligent manufacturing, Smart transportation, smart home, smart healthcare, autonomous driving, smart city, etc.
- the embodiments of the present application can be applied to the optimization of the training methods of various learning models in machine learning, and the learning models obtained by training the training methods of the present application can be specifically applied to various sub-fields in the field of artificial intelligence, such as computer vision fields, image processing fields, etc.
- the data in the data set acquired by the infrastructure in this embodiment of the present application may be multiple pieces of data of different types acquired by sensors such as cameras and radars (also referred to as training data or training samples, where multiple training data constitute a training set), or multiple image data or multiple video data, as long as the training set can be used for iterative training of the learning model; the data type in the training set is not specifically limited here.
- the training set used in the embodiment of the present application includes a part of labeled samples (a small part) and another part of unlabeled samples (most of them). This part of the labeled samples can be manually labeled by an annotator in advance.
- the one-bit labeling method is as follows: the labeler answers a "yes" or "no" question for the predicted label corresponding to each predicted sample. If the predicted label is the correct classification category, the positive label (also called the correct label) of the unlabeled sample is obtained; for example, if the predicted label is "dog" and the real label of the unlabeled sample is also "dog", the prediction is correct, and the unlabeled sample gets the positive label "dog". If the predicted label is a wrongly predicted classification category, the negative label of the unlabeled sample is obtained, and an erroneous label of the unlabeled sample can be excluded accordingly.
- the predicted label is "cat”
- the true label of the unlabeled sample is indeed "dog”
- the prediction is wrong
- the unlabeled sample gets the negative label "not a cat”.
- the corresponding number of positive labels and negative labels are obtained, and the positive label samples and the existing labeled samples are put together as the labeled samples at this stage.
- the negative-label samples are merged with the previously obtained negative-label samples, and together with the remaining unlabeled samples, these constitute all three types of samples at this stage.
- the initial model is then trained with this training set to obtain a second semi-supervised learning model with stronger capability (which may be referred to as the trained second model); finally, the predicted labels of the trained second model are used to perform one-bit labeling again, which yields more positive labels, and this process is repeated to obtain models with ever stronger capability.
- FIG. 6 is a system architecture diagram of the task processing system provided by the embodiment of the application.
- the task processing system 200 includes an execution device 210, a training device 220, a database 230, a client device 240, a data storage system 250, and a data acquisition device 260, and the execution device 210 includes a computing module 211.
- the data collection device 260 is used to obtain the open-source large-scale data set required by the user (ie the initial training set shown in FIG.), and the training sets of each stage, such as the first training set, are stored in the database 230; the training device 220 trains the target model/rule 201 (that is, the initial model of each stage described above) based on the training sets of each stage maintained in the database 230, and the trained model obtained by training (eg, the above-mentioned second model) is then used on the execution device 210.
- the execution device 210 can call data, codes, etc. in the data storage system 250 , and can also store data, instructions, etc. in the data storage system 250 .
- the data storage system 250 may be placed in the execution device 210 , or the data storage system 250 may be an external memory relative to the execution device 210 .
- the second model trained by the training device 220 may be applied to different systems or devices (ie, the execution device 210 ), and may specifically be an edge device or an end-side device, such as a mobile phone, tablet, laptop, camera, and so on.
- the execution device 210 is configured with an I/O interface 212 for data interaction with external devices, and a “user” can input data to the I/O interface 212 through the client device 240 .
- the client device 240 may be a camera device of a monitoring system, and the target image captured by the camera device is input into the calculation module 211 of the execution device 210 as input data; the calculation module 211 detects the input target image to obtain the detection result (that is, the predicted label), and then outputs the detection result to the camera device or directly displays it on the display interface (if any) of the execution device 210;
- the execution device 210 may also obtain the target task directly; for example, when the execution device 210 is a mobile phone, the target task can be obtained directly through the mobile phone (for example, the target image can be captured by the camera of the mobile phone, or the target voice can be recorded by the recording module of the mobile phone; the target task is not limited here), or the target task sent by another device (such as another mobile phone) can be received, and then the calculation module 211 in the mobile phone detects the target task, obtains the detection result, and directly presents the detection result on the display interface of the mobile phone.
- the product form of the execution device 210 and the client device 240 is not limited here.
- FIG. 6 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the positional relationship among the devices, devices, modules, etc. shown in the figure does not constitute any limitation.
- for example, in FIG. 6 the data storage system 250 is an external memory relative to the execution device 210; in other cases,
- the data storage system 250 can also be placed in the execution device 210;
- the client device 240 is an external device relative to the execution device 210.
- Client device 240 may also be integrated in execution device 210 .
- the training of the initial model described in the above embodiments may be implemented on the cloud side, for example, by the training device 220 on the cloud side (the training device 220 may be set on one or more servers or virtual machines), which obtains the training set and trains the initial model according to the training samples in the training set to obtain the trained second model; the trained second model is then sent to the execution device 210 for application, for example, sent to the execution device 210 to perform label prediction.
- the training device 220 on the cloud side the training device 220 may be set on one or more servers or virtual machines.
- in the above implementation, the training device 220 performs the overall training of the initial model, and the trained second model is then sent to the execution device 210 for use. The training of the initial model can also be implemented on the terminal side, that is, the training device 220 can be located on the terminal side; for example, a terminal device (such as an automatic driving vehicle, an assisted driving vehicle, etc.) can obtain the training set and train the initial model according to the training samples in the training set to obtain the trained second model, and the trained second model can be used directly on the terminal device or sent by the terminal device to other devices for use.
- this embodiment of the present application does not limit which device (cloud side or terminal side) the second model is trained or applied on.
- FIG. 7 is a schematic flowchart of the training method of the semi-supervised learning model provided by the embodiment of the present application, which may specifically include the following steps :
- the training device trains an initial semi-supervised learning model (which may be referred to as an initial model for short) according to the obtained initial training set, thereby obtaining a trained first semi-supervised learning model (which may be referred to as a first trained model for short).
- an initial semi-supervised learning model which may be referred to as an initial model for short
- a trained first semi-supervised learning model which may be referred to as a first trained model for short.
- the training set one part is labeled samples, and the other part is unlabeled samples.
- this part of labeled samples is called the first labeled sample set
- this part of unlabeled samples is called the first unlabeled sample set.
- After the training device obtains the trained first model, it selects an initial subset from the first unlabeled sample set in the initial training set, and the unlabeled samples in the initial subset constitute test data used to test the trained first model: each unlabeled sample in the selected initial subset is predicted by the trained first model, so as to obtain the predicted label corresponding to each selected unlabeled sample (the trained first model outputs a probability prediction for each selected unlabeled sample over all classification categories, and the classification category with the highest probability is usually selected as the model's predicted label for that sample), and the predicted labels constitute the first predicted label set.
- For example, it is assumed that the initial training set includes 330 training samples, of which 30 are labeled samples and 300 are unlabeled samples; then these 30 labeled samples constitute the above-mentioned first labeled sample set, and these 300 unlabeled samples constitute the above-mentioned first unlabeled sample set.
- First, the initial model is trained according to the 330 training samples in the initial training set to obtain the trained first model. After that, some unlabeled samples are selected from the 300 unlabeled samples to form the initial subset, say 100 unlabeled samples; these 100 unlabeled samples are then input into the trained first model in turn for prediction, and the corresponding 100 predicted labels are obtained, which constitute the above-mentioned first predicted label set.
- the manner in which the training device selects the initial subset from the first unlabeled sample set includes, but is not limited to, the following: randomly selecting a preset number of unlabeled samples from the first unlabeled sample set to constitute the initial subset. For example, assuming that the first unlabeled sample set includes 300 unlabeled samples, a preset number (e.g., 100, 150, etc.) of unlabeled samples can be randomly selected from them to constitute the initial subset.
- a preset number e.g., 100, 150, etc.
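- A minimal sketch of this random selection, assuming the first unlabeled sample set is a plain Python list and using the example preset number of 100:

```python
# Randomly select a preset number of unlabeled samples to form the initial subset.
import random

def select_initial_subset(first_unlabeled_set, preset_number=100):
    return random.sample(first_unlabeled_set, preset_number)
```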
- the method of one-bit labeling is as follows: the annotator answers a "yes" or "no" question for the predicted label corresponding to each predicted sample. If the predicted label is a correctly predicted classification category, the positive label (also called the correct label) of the unlabeled sample is obtained; for example, if the predicted label is "dog" and the true label of the unlabeled sample is also "dog", the prediction is correct and the unlabeled sample gets the positive label "dog". If the predicted label is a wrongly predicted classification category, the negative label of the unlabeled sample is obtained, and an erroneous label of the unlabeled sample can be excluded accordingly; for example, if the predicted label is "cat" while the true label of the unlabeled sample is actually "dog", the prediction is wrong and the unlabeled sample gets the negative label "not a cat".
- the initial subset is thus divided into a first subset and a second subset, where the first subset is the set of samples corresponding to correctly predicted classification categories (ie, positive labels), and the second subset is the set of samples corresponding to wrongly predicted classification categories (ie, negative labels).
- the annotator may be a manual annotator in the field; that is, the manual annotator determines whether the predicted labels are correct: after observing a sample, the annotator answers whether it belongs to the predicted category, and if the prediction is correct, the correct label (ie, positive label) of the sample is obtained, while if the prediction is wrong, the wrong label (ie, negative label) of the sample is obtained.
- the annotator may alternatively be a computing device that knows the true label of each unlabeled sample; the computing device compares the true label of an unlabeled sample with its predicted label to determine whether the sample belongs to the predicted category, and if the prediction is correct, the correct label (ie, positive label) of the sample is obtained, while if the prediction is wrong, the wrong label (ie, negative label) of the sample is obtained.
- the embodiment of the present application does not limit the specific expression form of the annotator.
- After the training device obtains the one-bit labeling result, it obtains the corresponding number of positive labels and negative labels. After that, the training device reconstructs the training set.
- the reconstructed training set can be called the first training set, and it can be constructed as follows: the positive-label samples (ie the first subset) are put together with the existing labeled samples as the labeled samples of this stage, which can also be called the second labeled sample set; the negative-label samples (ie the second subset) constitute the negative label sample set of this stage; and the remaining unlabeled samples in the first unlabeled sample set constitute the second unlabeled sample set of this stage. These three types of samples together constitute the first training set.
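- A minimal sketch of this reconstruction step, assuming plain Python lists; all names here are illustrative placeholders:

```python
# Rebuild the first training set from the three types of samples:
# second labeled sample set, negative label sample set, second unlabeled sample set.
def rebuild_training_set(first_labeled_set, first_unlabeled_set,
                         initial_subset, first_subset, second_subset):
    # positive-label samples join the existing labeled samples
    second_labeled_set = list(first_labeled_set) + list(first_subset)
    # negative-label samples form the negative label sample set of this stage
    negative_label_set = list(second_subset)
    # unlabeled samples that were not selected remain unlabeled
    selected = {id(sample) for sample in initial_subset}
    second_unlabeled_set = [s for s in first_unlabeled_set if id(s) not in selected]
    return second_labeled_set, negative_label_set, second_unlabeled_set
```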
- After the first training set is constructed, the training device retrains the initial model according to the first training set to obtain a second semi-supervised learning model with stronger capability (which may be referred to as the trained second model for short).
- the original method is still used for the labeled sample set and the unlabeled sample set.
- a new loss function is constructed, and the newly constructed loss function is used to train the negative label sample set.
- the newly constructed loss function is defined as the difference between the predicted value output by the model and a modified value, where the modified value is obtained by setting the dimension of the predicted value corresponding to the wrongly predicted classification category to zero.
- the model predicts a certain input sample (assuming the sample is a picture), and the output predicted value is an n-dimensional vector.
- the n dimensions represent n classification categories, and the values of these n dimensions represent the predicted probabilities of the corresponding classification categories.
- the classification category with the largest predicted probability is selected as the predicted label of the model for the sample.
- Generally, the predicted probability is the normalized predicted probability, that is, the sum of the predicted probabilities over all classification categories is 1.
- For example, assume the predicted value is [0.05, 0.04, 0.01, 0.5, 0.32, 0.08] and the predicted label for the sample is judged to be a negative label, that is, the negative label of the sample is "not a horse"; the value of the dimension corresponding to "horse" is therefore modified to zero, that is, 0.5 is modified to 0, and the modified value is then [0.05, 0.04, 0.01, 0, 0.32, 0.08]. The difference between the above predicted value and this modified value then defines the loss function constructed for the negative label sample set.
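- Reproducing that numeric example (values taken from the text; the squared-error reading of the "difference" is an assumption carried over from the earlier sketch):

```python
import torch

pred = torch.tensor([0.05, 0.04, 0.01, 0.50, 0.32, 0.08])  # predicted value
negative_index = 3                                           # dimension of "horse"

modified = pred.clone()
modified[negative_index] = 0.0   # modified value: [0.05, 0.04, 0.01, 0, 0.32, 0.08]
loss2 = torch.sum((pred - modified) ** 2)   # 0.5 ** 2 = 0.25
```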
- depending on the adopted semi-supervised learning model (may be referred to as the model for short), the training process according to the first training set will also differ somewhat, which is explained separately below:
- if the initial semi-supervised learning model has only one loss function, the training device trains the initial model according to the first training set to obtain the trained second model specifically as follows: for the second labeled sample set and the second unlabeled sample set, the training device uses the first loss function to train the initial semi-supervised learning model, where the first loss function is the original loss function loss1 of the initial semi-supervised learning model; for the negative-label sample set, the training device uses the second loss function to train the initial semi-supervised learning model, where the second loss function (may be called loss2) is the difference between the predicted value output by the model and the modified value, and the modified value is obtained by setting the dimension corresponding to the wrongly predicted classification category on the predicted value to zero. The second loss function loss2 is the newly constructed loss function described above; details are not repeated here.
- finally, the initial model is updated according to loss = loss1 + σ*loss2, where σ is a balance coefficient, an adjustable parameter obtained through training, and loss is the output value of the total loss function; the training process makes this total loss as small as possible.
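- the following PyTorch-style sketch shows how the total objective loss = loss1 + σ·loss2 for a single-loss-function model could be assembled; original_semi_supervised_loss, the batch layout of negative_batch, and the use of a mean squared error for the "difference" are assumptions made for illustration only.

```python
import torch


def training_step(model, labeled_batch, unlabeled_batch, negative_batch, sigma):
    # loss1: the model's own (original) semi-supervised objective
    loss1 = original_semi_supervised_loss(model, labeled_batch, unlabeled_batch)

    # loss2: negative-label term, i.e. difference between prediction and modified value
    x_neg, wrong_idx = negative_batch                 # samples and their wrongly predicted classes
    pred = torch.softmax(model(x_neg), dim=1)
    modified = pred.detach().clone()
    modified[torch.arange(len(wrong_idx)), wrong_idx] = 0.0
    loss2 = torch.mean((pred - modified) ** 2)

    loss = loss1 + sigma * loss2                      # sigma: adjustable balance coefficient
    loss.backward()
    return loss
```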
- the initial semi-supervised learning model includes any one of the following models: Π-model, VAT, LPDSSL, TNAR, pseudo-label, and DCT.
- the original loss function of each of these models is a single, known loss function.
- the training method of the embodiments of the present application can be used to train a semi-supervised learning model with only one loss function as described above, and can also be used to train a semi-supervised learning model with two or more loss functions; the training process is similar.
- the following takes the mean teacher model as an example to illustrate the case where the initial semi-supervised learning model has multiple loss functions.
- assume the training samples are a labeled sample (x1, y1) and an unlabeled sample x2, where y1 is the label of x1. The labeled sample (x1, y1) is input into the student model to compute the output value loss11 of loss function 1; the unlabeled sample x2 is input into the student model to obtain the predicted label label1, and x2 is also input into the teacher model after some data processing (generally a perturbation such as added noise) to obtain the predicted label label2.
- if the mean teacher model is sufficiently stable, the predicted label label1 and the predicted label label2 should be the same, that is, the teacher model can resist the perturbation of the unlabeled sample x2; in other words, the predicted labels of the student model and the teacher model are expected to be as equal as possible, so the output value loss12 of loss function 2 is obtained from label1 and label2.
- finally, the student model is updated according to loss = loss11 + λ*loss12, where λ is a balance coefficient, an adjustable parameter obtained through training, and loss is the output value of the total loss function of the entire mean teacher model; the training process makes this total loss as small as possible.
- therefore, the training device trains the initial semi-supervised learning model according to the first training set to obtain the trained second semi-supervised learning model specifically as follows: for the second labeled sample set, the training device uses the third loss function to train the mean teacher model, where the third loss function is loss function 1 above (i.e., loss11); the training device also uses the fourth loss function to train the mean teacher model according to the second labeled sample set and the second unlabeled sample set, where the fourth loss function is loss function 2 above (i.e., loss12); the third loss function loss11 and the fourth loss function loss12 are the original loss functions of the mean teacher model. In addition, for the negative-label samples, the training device uses the fifth loss function to train the mean teacher model according to the negative-label sample set, where the fifth loss function (may be called loss13) is the difference between the predicted value output by the model and the modified value, the modified value being obtained by setting the dimension corresponding to the wrongly predicted classification category on the predicted value to zero; the fifth loss function loss13 is the newly constructed loss function described above. Finally, the training device updates the initial mean teacher model according to loss = loss11 + λ*loss12 + γ*loss13, where λ and γ are balance coefficients, adjustable parameters obtained through training, and loss is the output value of the total loss function of the entire mean teacher model; the training process makes this total loss as small as possible.
- the third loss function loss11 may be a cross-entropy loss function.
- the fourth loss function loss12 may be a mean square error loss function.
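- a PyTorch-style sketch of one mean teacher update with the three terms loss11, loss12, and loss13 described above; perturb() (the noise added to x2 before it enters the teacher) and the batch layouts are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def mean_teacher_loss(student, teacher, labeled, unlabeled, negative, lam, gamma):
    x1, y1 = labeled                     # labeled sample (x1, y1)
    x2 = unlabeled                       # unlabeled sample x2
    x_neg, wrong_idx = negative          # negative-label samples and their excluded classes

    # loss11: cross-entropy of the student on labeled samples (third loss function)
    loss11 = F.cross_entropy(student(x1), y1)

    # loss12: consistency between student and teacher on (perturbed) unlabeled samples (fourth loss)
    label1 = torch.softmax(student(x2), dim=1)
    label2 = torch.softmax(teacher(perturb(x2)), dim=1)
    loss12 = F.mse_loss(label1, label2.detach())

    # loss13: negative-label suppression term (fifth loss function)
    pred = torch.softmax(student(x_neg), dim=1)
    modified = pred.detach().clone()
    modified[torch.arange(len(wrong_idx)), wrong_idx] = 0.0
    loss13 = F.mse_loss(pred, modified)

    return loss11 + lam * loss12 + gamma * loss13     # total loss to be minimized
```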
- the above process from step 702 to step 705 is one stage (may be referred to as stage 1) of obtaining the trained second model.
- usually, when more training samples with correct labels are obtained, the accuracy of the model improves.
- the most direct way is therefore to divide the training process into multiple stages. In some embodiments of the present application, in order to obtain a second model with stronger generalization ability, training generally goes through multiple stages; that is, the predicted labels produced by the trained second model are annotated with one bit again so that more positive labels can be obtained, and by repeating this process continuously, models with ever stronger capability are obtained.
- specifically, the training device takes the second unlabeled sample set as the new first unlabeled sample set and the second semi-supervised learning model as the new first semi-supervised learning model, and repeats steps 702 to 705 until the second unlabeled sample set is empty.
- step 706 may also not be performed, that is, only one stage of training is carried out to obtain a trained second model; compared with existing methods, the generalization ability of this second model is already improved.
- in that case, step 706 is likewise not included.
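- a sketch of the multi-stage procedure of step 706 (repeating steps 702 to 705 until the second unlabeled sample set is empty); train_semi_supervised, predict, and annotate are stand-ins, and whether negative-label samples accumulate across stages is an assumption of this sketch.

```python
def multi_stage_training(initial_model, labeled_set, unlabeled_set, annotate, k=100):
    # stage 0: train the initial model on the initial training set
    model = train_semi_supervised(initial_model, labeled_set, unlabeled_set, negative_set=[])
    negative_set = []

    while unlabeled_set:                                   # until no unlabeled samples remain
        subset = unlabeled_set[:k]                         # select an initial subset (step 702)
        preds = [predict(model, x) for x in subset]        # predicted label set (step 703)

        answers = [annotate(x, p) for x, p in zip(subset, preds)]   # one-bit labeling (step 704)
        positive = [(x, p) for (x, p), ok in zip(zip(subset, preds), answers) if ok]
        negative = [(x, p) for (x, p), ok in zip(zip(subset, preds), answers) if not ok]

        labeled_set = labeled_set + positive               # second labeled sample set
        negative_set = negative_set + negative             # negative-label sample set
        unlabeled_set = unlabeled_set[k:]                  # second unlabeled sample set

        # retrain the initial model on the reconstructed training set (step 705)
        model = train_semi_supervised(initial_model, labeled_set, unlabeled_set, negative_set)
    return model
```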
- the second model can be deployed on the target device for application.
- the target device may specifically be a mobile device, for example an edge device such as a camera or a smart home device, or an end-side device such as a mobile phone, a personal computer, a computer workstation, a tablet computer, a smart wearable device (such as a smart watch, a smart bracelet, or smart headphones), a game console, a set-top box, or a media consumption device.
- the type of target device is not limited here.
- the training of the semi-supervised learning model described in the above embodiments may be implemented on the cloud side.
- the training device 220 (which may be set on one or more servers or virtual machines) obtains a training set, trains an initial semi-supervised learning model according to the training samples in the training set to obtain a trained semi-supervised learning model (e.g., the trained second model), and then sends the trained second model to the execution device 210 for application, for example for label prediction.
- the training device 220 trains the model at each stage, and the trained second model is sent to the execution device 210 for use. The training of the initial semi-supervised learning model described in the above embodiments may also be implemented on the terminal side, that is, the training device 220 may be located on the terminal side; for example, the training set may be obtained by terminal devices (e.g., mobile phones, smart watches, etc.) or wheeled mobile devices (e.g., autonomous driving vehicles, assisted driving vehicles, etc.), and the initial model is trained according to the training samples in the training set to obtain a trained semi-supervised learning model (e.g., the trained first model, the trained second model); the trained second model can be used directly on the terminal device or sent by the terminal device to other devices for use. This embodiment of the present application does not limit on which device (cloud side or terminal side) the second model is trained or applied.
- in summary, the training device predicts the classification categories of a part of the unlabeled samples through the trained first semi-supervised learning model to obtain predicted labels, and judges whether each predicted label is correct by means of one-bit labeling.
- FIG. 8 is a schematic flowchart of the training method of the semi-supervised learning model provided by the embodiment of the application.
- the initial training set includes 330 training samples, of which 30 are labeled samples (as shown by the black bottom triangle in Figure 8), and 300 are unlabeled samples (as shown by the gray bottom circle in Figure 8)
- the 30 labeled samples constitute the above-mentioned first labeled sample set
- the 300 unlabeled samples constitute the above-mentioned first unlabeled sample set.
- the training device trains the initial model according to the 330 training samples in the initial training set (that is, the initialization in Figure 8, corresponding to stage 0) to obtain the trained first model, and then selects some of the 300 unlabeled samples to constitute the initial subset.
- assuming that 100 unlabeled samples are randomly selected to constitute the initial subset, these 100 unlabeled samples are input into the trained first model one by one for prediction, and the corresponding 100 predicted labels are obtained; these 100 predicted labels constitute the above-mentioned first predicted label set.
- based on the one-bit labeling, the training device divides the 100 selected unlabeled samples into positive-label samples (shown as white triangles in Figure 8) and negative-label samples (shown as white circles in Figure 8). Assuming there are 40 positive-label samples and 60 negative-label samples, these 40 positive-label samples are merged with the original 30 labeled samples to form the second labeled sample set. After removing the selected 100 unlabeled samples from the original 300 unlabeled samples, 200 unlabeled samples remain, and these remaining 200 unlabeled samples constitute the second unlabeled sample set.
- the second labeled sample set, the second unlabeled sample set, and the negative-label sample set constitute the first training set of the first stage.
- according to this first training set, through the method described in step 705, the trained second model (i.e., model M1 in FIG. 8) is obtained; obtaining the trained second model for the first time is the first stage (i.e., stage 1 in FIG. 8).
- then the training device selects some unlabeled samples from the second unlabeled sample set to form the initial subset of the second stage (i.e., stage 2 in Figure 8). Assuming that 100 unlabeled samples are randomly selected to form the initial subset of the second stage (other numbers are also possible, which is not limited here), these 100 unlabeled samples are input into the trained second model one by one for prediction, and the corresponding 100 predicted labels are obtained; these 100 predicted labels constitute the first predicted label set of the second stage (which may also be referred to as the second predicted label set).
- based on the one-bit labeling, the training device divides the 100 selected unlabeled samples into positive-label samples and negative-label samples. Assuming there are 65 positive-label samples and 35 negative-label samples, the 65 positive-label samples are merged with the existing 70 labeled samples to form the second labeled sample set of the second stage; after removing the 100 samples selected in the second stage from the 200 unlabeled samples remaining after the first stage, the rest constitute the second unlabeled sample set of the second stage, which together with the negative-label sample set of the second stage forms the first training set of the second stage (which may be called the second training set). According to the first training set of the second stage, through the method described in step 705 again, the second model trained in the second stage (which may be called the third model, namely model M2 in FIG. 8) is obtained; obtaining the trained second model for the second time is the second stage (i.e., stage 2 in FIG. 8). And so on, according to the above method, until the second unlabeled sample set is empty.
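- the sample bookkeeping of the worked example in FIG. 8 can be checked with a few lines of arithmetic (the per-stage counts are taken directly from the text):

```python
labeled, unlabeled = 30, 300            # initial training set: 330 samples in total

# stage 1: 100 samples selected, 40 positive / 60 negative after one-bit labeling
labeled += 40                           # 30 + 40 = 70   (second labeled sample set)
unlabeled -= 100                        # 300 - 100 = 200 (second unlabeled sample set)

# stage 2: another 100 selected, 65 positive / 35 negative
labeled += 65                           # 70 + 65 = 135
unlabeled -= 100                        # 200 - 100 = 100

print(labeled, unlabeled)               # 135 100
```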
- the mean teacher model is taken as an example below to illustrate its entire training process (with image training samples as an example); please refer to Figure 9 for details.
- FIG. 9 is a schematic diagram of the training process of the mean teacher model. The traditional semi-supervised learning setting is based on a training set $\mathcal{D}=\{x_n\}_{n=1}^{N}$, where N is the number of all training samples and $x_n$ is the n-th sample of the image data; $y_n$ denotes the true label of sample $x_n$, which is unknown to the training algorithm; in particular, only a small subset containing L labeled samples is provided (i.e., the first labeled sample set described above in this application), and in general L is less than N.
- the training method of this application divides the training set into three parts: the labeled sample set, the sample set selected for one-bit annotation, and the remaining unlabeled samples; the number of samples selected for one-bit annotation is not limited.
- for one-bit annotation, the annotator is provided with the corresponding image and the model's predicted label for that image.
- the annotator's job is to determine whether the image belongs to the classification category given by the predicted label. If the prediction is correct, the image is assigned a positive label; otherwise it is assigned a negative label. From an information-theoretic perspective, the annotator provides the system with 1 bit of supervision information by answering a yes-or-no question.
- in contrast, when a sample is annotated with its exact classification category, the obtained supervision information is log2(C) bits, where C is the total number of classification categories.
- for example, with C = 100 categories, accurately labeling 5K samples provides about 33.2K bits of information, and under the same budget the remaining supervision can instead be completed by answering yes-or-no questions. Since the annotator only needs to answer yes or no, each annotation provides one bit of information, so the cost of annotating a picture is reduced, and under the same cost, more one-bit annotation information can be obtained.
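- a quick check of the information budget mentioned above, assuming C = 100 classes (e.g., CIFAR100); the point illustrated is the equivalence of the two annotation schemes under the same bit budget.

```python
import math

C = 100                                     # total number of classification categories
bits_per_exact_label = math.log2(C)         # ≈ 6.64 bits for one accurately labeled sample
full_labels = 5_000

budget_bits = full_labels * bits_per_exact_label     # ≈ 33.2K bits
one_bit_answers = budget_bits / 1.0                  # each yes/no answer carries 1 bit

print(round(budget_bits))      # 33219, i.e. about 33.2K bits
print(round(one_bit_answers))  # the same budget buys about 33.2K yes/no answers
```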
- the most straightforward approach is to divide the training process into multiple stages; in each stage, part of the samples are predicted, the training set is reconstructed according to the predicted labels, and the model is updated with the reconstructed training set so as to strengthen the model.
- the initial model is trained through a semi-supervised learning process, with the labeled subset used as labeled data and the rest used as unlabeled data.
- the mean teacher model is used for illustration, and it is assumed that the subsequent training is divided into T stages (the termination condition of training is that all unlabeled samples have been selected).
- the embodiments of the present application design a label suppression method based on the mean teacher model, that is, a loss function corresponding to negative-label samples is designed.
- the mean teacher model consists of a teacher model and a student model. Given a training image, if it has a correct label, the corresponding cross-entropy loss function is calculated; regardless of whether the training image has a correct label, the distance between the outputs of the teacher model and the student model is calculated as an additional loss function, and this additional loss function is the mean square error loss function.
- let f(x; θ) be the expression function of the student model, where θ represents the network parameters of the student model, and let the teacher model be denoted f(x; θ′), where θ′ represents the network parameters of the teacher model.
- the corresponding loss function (i.e., the total loss) is defined as the weighted sum of the cross-entropy term on labeled data, the consistency (mean square error) term between the student output and the teacher output, and the negative-label suppression term, consistent with loss = loss11 + λ*loss12 + γ*loss13 as described above.
- λ and γ both represent balance coefficients, which are adjustable parameters obtained through training.
- CE represents the cross-entropy loss function.
- MSE represents the mean square error loss function.
- the output of the model is thus constrained by both the cross-entropy term and the consistency term.
- on the basis of the original loss function, this method adds a new loss term for negative-label samples: the value at the position of the teacher model output f(x; θ′) corresponding to the negative label is modified, so that the probability score of the category corresponding to the negative label is suppressed to 0.
- for example, the predicted value output by the mean teacher model is a 100-dimensional vector in which each dimension represents the predicted probability of the corresponding classification category. Assuming a picture carries the negative label "not a dog" and the second dimension is the probability of "dog", then for this picture the second dimension can be set to 0, and the difference between the predicted value before modification and the modified predicted value (that is, the modified value) is the third term of the loss function described in this application.
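- a minimal sketch of the suppression just described for the 100-class example, assuming the "dog" probability sits in the second dimension (index 1); it mirrors the earlier negative-label loss sketch, applied here to the 100-dimensional output.

```python
import torch

pred = torch.softmax(torch.randn(1, 100), dim=1)   # 100-dimensional predicted value for one image
dog_dim = 1                                         # second dimension: probability of "dog"

modified = pred.clone()
modified[0, dog_dim] = 0.0                          # probability of the negative-label class set to 0

loss13 = torch.mean((pred - modified) ** 2)         # third term: difference before vs. after modification
```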
- the technical effects brought by the embodiments of the present application are further compared below.
- the training method is experimented on three popular image classification datasets, namely CIFAR100, Mini-Imagenet and Imagenet.
- for CIFAR100, this application uses a 26-layer shake-shake regularized deep residual network.
- for Mini-Imagenet and Imagenet, this application uses a 50-layer residual network.
- this application trains a total of 180 epochs (an epoch is one pass of training over all the training samples) on CIFAR100 and Mini-Imagenet, and 60 epochs on Imagenet.
- the mean squared error loss function is used as the consistency loss function on all three datasets.
- the weight parameter of the consistency loss function takes 1000 on CIFAR100 and 100 on Mini-Imagenet and Imagenet.
- the relevant batch size is adjusted according to hardware conditions.
- the rest of the parameter settings refer to the original settings of the mean teacher model.
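- the reported settings can be summarized in a small configuration table; assigning the 180-epoch schedule to both CIFAR100 and Mini-Imagenet and leaving the batch size unspecified are assumptions consistent with the text, not additional reported details.

```python
experiment_config = {
    "CIFAR100":      {"backbone": "26-layer shake-shake residual network",
                      "epochs": 180, "consistency_loss": "MSE", "consistency_weight": 1000},
    "Mini-Imagenet": {"backbone": "50-layer residual network",
                      "epochs": 180, "consistency_loss": "MSE", "consistency_weight": 100},
    "Imagenet":      {"backbone": "50-layer residual network",
                      "epochs": 60,  "consistency_loss": "MSE", "consistency_weight": 100},
}
# batch size: adjusted to hardware; remaining settings follow the original mean teacher setup
```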
- the experiments prove that, under supervision information of the same number of bits, the performance of the training method provided by the embodiments of the present application exceeds that of other semi-supervised training methods; Table 1 shows the experimental results (i.e., prediction accuracy) on the three data sets, demonstrating the effectiveness of the method in the embodiments of the present application.
- FIG. 10 is a schematic flowchart of the image processing method provided by the embodiment of the present application, which may specifically include the following steps:
- the second semi-supervised learning model obtained after the above training can be deployed on an execution device for application. Specifically, the execution device first obtains a target image.
- the execution device takes the target image as the input of the trained semi-supervised learning model and outputs the prediction result for the target image, where the trained semi-supervised learning model is the trained second semi-supervised learning model described in the above embodiments.
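- a sketch of the execution-device flow in FIG. 10: the target image is fed to the deployed model and a prediction result is returned; preprocess() and class_names are assumptions made for illustration.

```python
import torch


def predict_target_image(trained_model, target_image, class_names):
    trained_model.eval()
    with torch.no_grad():
        x = preprocess(target_image).unsqueeze(0)       # 1 x C x H x W input tensor
        probs = torch.softmax(trained_model(x), dim=1)  # predicted probabilities per category
        idx = int(probs.argmax(dim=1))
    return class_names[idx], float(probs[0, idx])       # prediction result for the target image
```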
- this describes an application of the trained second semi-supervised learning model, namely performing category prediction on images.
- compared with semi-supervised learning models obtained by existing training methods, the semi-supervised learning model provided by the embodiments of the present application improves the accuracy of target image recognition.
- the semi-supervised learning model (that is, the trained second model) trained in the embodiments of the present application can be used in fields such as smart cities and intelligent terminals to perform image classification.
- the semi-supervised learning model trained in the present application can be applied to various scenarios and problems in computer vision and other fields, such as common tasks including face recognition, image classification, and target detection. Each of these scenarios involves the trained semi-supervised learning model provided by the embodiments of the present application. Several application scenarios landed in products are introduced below.
- for example, the pictures in an album may be input into the trained semi-supervised learning model for feature extraction; based on the extracted features, the predicted label of each picture (i.e., its predicted classification category) is obtained, and the pictures in the album are then classified according to their categories, yielding an album arranged by classification category.
- the pictures belonging to the same category may be arranged in one row or one column; for example, in the final album, the pictures in the first row all belong to airplanes, and the pictures in the second row all belong to cars.
- when a user takes a photo, the semi-supervised learning model trained in the embodiments of the present application can be used to process the captured photo and automatically identify the category of the photographed object, for example the type of flower or animal.
- for example, for a photographed shared bicycle, the semi-supervised learning model trained in the embodiments of the present application can recognize that the object is a bicycle, and relevant information about the bicycle can further be displayed, as shown in FIG. 12.
- the semi-supervised learning model trained in the embodiments of the present application can also be used to find pictures containing a target object.
- for example, the semi-supervised learning model trained in the embodiments of the present application can be used to process a photographed street view and determine whether a target object is present in the street-view picture, such as the face model on the left in FIG. 13.
- in the field of intelligent driving, the semi-supervised learning model trained in the embodiments of the present application can be used to process images, or images in video data, captured by sensors (e.g., cameras) installed on a vehicle, so as to automatically identify the types of obstacles on the road during driving; for example, it can automatically identify whether there are obstacles on the road ahead of the vehicle and what kinds of obstacles they are (critical obstacles such as oncoming trucks, pedestrians, and cyclists, or non-critical obstacles such as bushes, trees, and buildings on the roadside).
- the album classification, object recognition by photographing, object search, and obstacle recognition for intelligent driving introduced above are only a few specific scenarios to which the image classification method of the embodiments of the present application can be applied; the application of the trained semi-supervised learning model of the embodiments of the present application is not limited to the above scenarios. It can be applied to any scene requiring image classification or image recognition, and in any such field or device the trained semi-supervised learning model provided by the embodiments of the present application can be used, which is not enumerated here.
- FIG. 14 is a schematic structural diagram of a training device provided by an embodiment of the application.
- the training device 1400 includes: a selection module 1401, a guessing module 1402, a construction module 1403, and a training module 1404. The selection module 1401 is used to select an initial subset from the first unlabeled sample set and predict the initial subset through the trained first semi-supervised learning model to obtain the first predicted label set, where the first semi-supervised learning model is obtained by training an initial semi-supervised learning model with an initial training set, and the initial training set includes a first labeled sample set and the first unlabeled sample set. The guessing module 1402 is used to divide, according to the first predicted label set, the initial subset into a first subset and a second subset, where the first subset is the sample set corresponding to correctly predicted classification categories and the second subset is the sample set corresponding to wrongly predicted classification categories. The construction module 1403 is used to construct a first training set, where the first training set includes a second labeled sample set, a second unlabeled sample set, and a negative-label sample set: the second labeled sample set is composed of the first labeled sample set and the first subset whose classification categories are correct; the second unlabeled sample set is composed of the unlabeled samples in the first unlabeled sample set other than the initial subset; and the negative-label sample set includes the second subset, whose classification categories are wrong. The training module 1404 is used to train the initial semi-supervised learning model according to the first training set to obtain the trained second semi-supervised learning model.
- in the above embodiment of the present application, the trained first semi-supervised learning model is used to predict the classification categories of a part of the unlabeled samples to obtain predicted labels, and the guessing module 1402 judges whether each predicted label is correct; if the prediction is correct, the correct label (i.e., the positive label) of the sample is obtained, otherwise a wrong label (i.e., the negative label) of the sample can be excluded.
- afterwards, the construction module 1403 uses the above information to reconstruct the training set (i.e., the first training set), and the training module 1404 retrains the initial semi-supervised learning model according to the first training set, thereby improving the prediction accuracy of the model. Since the guessing module 1402 only needs to answer "yes" or "no" for each predicted label, this labeling method can relieve the pressure of manual labeling in machine learning, which otherwise requires a large amount of correctly labeled data.
- the network structure of the initial semi-supervised learning model can have multiple specific forms; for example, it can include any one of the following models: Π-model, VAT, LPDSSL, TNAR, pseudo-label, DCT, and the mean teacher model.
- this describes which semi-supervised learning models the training method provided in the embodiments of the present application can be applied to, showing its universality and the range of choices available.
- the training module 1404 is specifically used to: for the second labeled sample set and the second unlabeled sample set, use the first loss function to train the initial semi-supervised learning model according to the second labeled sample set and the second unlabeled sample set, where the first loss function is the original loss function loss1 of the initial semi-supervised learning model; for the negative-label sample set, use the second loss function to train the initial semi-supervised learning model according to the negative-label sample set, where the second loss function (which can be called loss2) is the difference between the predicted value output by the model and the modified value, and the modified value is obtained by setting the dimension corresponding to the wrongly predicted classification category on the predicted value to zero; the second loss function loss2 is the newly constructed loss function described above.
- this describes the case where the initial semi-supervised learning model has a single loss function: a new loss function can be constructed for the negative-label sample set, that is, different loss functions are adopted for different types of sample sets in the training set, and the initial semi-supervised learning model is then trained based on the total loss function, which is more targeted.
- the training method of the embodiments of the present application can not only be used to train the above-mentioned semi-supervised learning model with only one loss function, but can also be used to train a semi-supervised learning model with two or more loss functions; the training process is similar.
- specifically, the initial semi-supervised learning model can be a mean teacher model, whose training strategy assumes that the training samples are a labeled sample (x1, y1) and an unlabeled sample x2, where y1 is the label of x1.
- the training module 1404 is also specifically used to: for the second labeled sample set, use the third loss function to train the mean teacher model according to the second labeled sample set, where the third loss function is loss function 1 above (i.e., loss11); use the fourth loss function to train the mean teacher model according to the second labeled sample set and the second unlabeled sample set, where the fourth loss function is loss function 2 above (i.e., loss12), the third loss function loss11 and the fourth loss function loss12 being the original loss functions of the mean teacher model; and, for the negative-label sample set, use the fifth loss function to train the mean teacher model according to the negative-label sample set, where the fifth loss function (which can be called loss13) is the difference between the predicted value output by the model and the modified value, the modified value is obtained by setting the dimension corresponding to the wrongly predicted classification category on the predicted value to zero, and the fifth loss function loss13 is the newly constructed loss function described above.
- this describes the case where the initial semi-supervised learning model is the mean teacher model: a new loss function can also be constructed for the negative-label sample set, that is, different loss functions are adopted for different types of sample sets in the training set, and the initial semi-supervised learning model is then trained based on the total loss function, which is more targeted.
- the third loss function may be a cross entropy loss function; and/or the fourth loss function may be a mean square error loss function.
- in one design, the training device 1400 may further include a triggering module 1405, where the triggering module 1405 is configured to take the second unlabeled sample set as the new first unlabeled sample set and the second semi-supervised learning model as the new first semi-supervised learning model, and to trigger the selection module 1401, the guessing module 1402, the construction module 1403, and the training module 1404 to repeat the corresponding steps until the second unlabeled sample set is empty.
- in the above embodiment of the present application, the most direct method is to divide the training process into multiple stages; in each stage, some samples are selected from the first unlabeled sample set for prediction, the training set is reconstructed according to the predicted labels, and the reconstructed training set is then used to update the model. Therefore, the generalization ability and prediction accuracy of the trained second semi-supervised learning model obtained in each stage are stronger than those of the second semi-supervised learning model obtained in the previous stage.
- in one design, the triggering module 1405 may further be used to deploy the trained second semi-supervised learning model on a target device, where the target device is used to obtain a target image and the trained second semi-supervised learning model is used to predict the label of the target image.
- this describes the specific use of the trained second semi-supervised learning model, that is, it is deployed on a target device to predict the label of a target image, i.e., to predict the category of the image.
- compared with semi-supervised learning models trained by existing training methods, the trained second semi-supervised learning model provided by the embodiments of the present application improves the accuracy of target image recognition.
- the selecting module 1401 is specifically configured to: randomly select a preset number of unlabeled samples from the first unlabeled sample set to form an initial subset.
- FIG. 15 is a schematic structural diagram of the execution device provided by the embodiment of the present application.
- the execution device 1500 includes: an acquisition module 1501 and an identification module 1502, where the acquisition module 1501 is used to obtain a target image, and the identification module 1502 is used to take the target image as the input of the trained semi-supervised learning model and output the prediction result for the target image, the trained semi-supervised learning model being the trained second semi-supervised learning model described in the above embodiments.
- an application method of the second semi-supervised learning model after training is described, that is, it is used to perform category prediction on images.
- the semi-supervised learning model provided by the embodiments of the present application improves the accuracy of target image recognition.
- FIG. 16 is a schematic structural diagram of the training device provided by the embodiment of the present application.
- the training device 1400 described above is used to implement the functions of the training device in the embodiments corresponding to FIGS. 5-9.
- specifically, the training device 1600 is implemented by one or more servers, and the training device 1600 may vary considerably due to different configurations or performance; it may include one or more central processing units (CPUs) 1622 (e.g., one or more processors), memory 1632, and one or more storage media 1630 (such as one or more mass storage devices).
- the memory 1632 and the storage medium 1630 may be ephemeral storage or persistent storage.
- the program stored in the storage medium 1630 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the training device 1600 . Further, the central processing unit 1622 may be configured to communicate with the storage medium 1630 to execute a series of instruction operations in the storage medium 1630 on the training device 1600 .
- Training device 1600 may also include one or more power supplies 1626, one or more wired or wireless network interfaces 1650, one or more input and output interfaces 1658, and/or, one or more operating systems 1641, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.
- the central processing unit 1622 is configured to execute the training method of the semi-supervised learning model executed by the training device in the embodiment corresponding to FIG. 7 .
- specifically, the central processing unit 1622 is configured to: first, train an initial semi-supervised learning model (which may be referred to as the initial model for short) according to the acquired initial training set, so as to obtain a trained first semi-supervised learning model (which may be referred to as the trained first model for short).
- in the initial training set, one part consists of labeled samples and the other part consists of unlabeled samples; the labeled samples are called the first labeled sample set, and the unlabeled samples are called the first unlabeled sample set.
- an initial subset is selected from the first unlabeled sample set in the initial training set, and each unlabeled sample in the initial subset constitutes test data for testing the trained first model.
- the trained first model predicts each unlabeled sample in the selected initial subset, so as to obtain the predicted label corresponding to each selected unlabeled sample (the trained first model outputs, for each selected unlabeled sample, a probability prediction over the classification categories, and the classification category with the highest probability is usually selected as the model's predicted label for that sample); the predicted labels constitute the first predicted label set.
- the method of one-bit labeling is as follows: the annotator answers a "yes" or "no" question for the predicted label corresponding to each predicted sample. If the predicted label is a correctly predicted classification category, the positive label (also called the correct label) of the unlabeled sample is obtained; for example, if the predicted label is "dog" and the true label of the unlabeled sample is also "dog", the prediction is correct and the unlabeled sample obtains the positive label "dog". If the predicted label is a wrongly predicted classification category, the negative label of the unlabeled sample is obtained, and a wrong label of the unlabeled sample can thereby be excluded; for example, if the predicted label is "cat" but the true label of the unlabeled sample is actually "dog", the prediction is wrong and the unlabeled sample obtains the negative label "not a cat".
- the initial subset is divided into a first subset and a second subset, where the first subset is the sample set corresponding to the correctly predicted classification category (ie, positive label), and the second subset is The set of samples corresponding to mispredicted classification categories (i.e. negative labels).
- afterwards, the training set is reconstructed; the reconstructed training set can be called the first training set. Specifically, the positive-label samples (that is, the first subset) are put together with the existing labeled samples as the labeled samples of this stage, which can also be called the second labeled sample set; the negative-label samples (that is, the second subset) constitute the negative-label sample set of this stage; and the remaining unlabeled samples in the first unlabeled sample set constitute the second unlabeled sample set of this stage. These three types of samples together constitute the first training set.
- after the first training set is constructed, the initial model is retrained according to the first training set to obtain a second semi-supervised learning model with stronger capability (may be referred to as the trained second model).
- FIG. 17 is a schematic structural diagram of an execution device provided by an embodiment of the present application.
- the execution device 1700 may specifically be represented as various terminal devices, such as virtual reality (VR) devices, mobile phones, tablets, laptops, smart wearable devices, monitoring data processing devices, or radar data processing devices, which is not limited here.
- the execution device 1500 described in the embodiment corresponding to FIG. 15 may be deployed on the execution device 1700 to implement the functions of the execution device in the embodiment corresponding to FIG. 10 .
- the execution device 1700 includes: a receiver 1701, a transmitter 1702, a processor 1703, and a memory 1704 (wherein the number of processors 1703 in the execution device 1700 may be one or more, and one processor is taken as an example in FIG. 17 ) , wherein the processor 1703 may include an application processor 17031 and a communication processor 17032 .
- the receiver 1701, the transmitter 1702, the processor 1703, and the memory 1704 may be connected by a bus or otherwise.
- Memory 1704 may include read-only memory and random access memory, and provides instructions and data to processor 1703 .
- a portion of memory 1704 may also include non-volatile random access memory (NVRAM).
- NVRAM non-volatile random access memory
- the memory 1704 stores operating instructions, executable modules or data structures, or a subset thereof, or an extended set thereof, where the operating instructions may include various operating instructions for implementing various operations.
- the processor 1703 controls the operation of the execution device 1700 .
- various components of the execution device 1700 are coupled together through a bus system, where the bus system may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus.
- the various buses are referred to as bus systems in the figures.
- the method disclosed in the above-mentioned embodiment corresponding to FIG. 4 of the present application may be applied to the processor 1703 or implemented by the processor 1703 .
- the processor 1703 may be an integrated circuit chip, which has signal processing capability. In the implementation process, each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in the processor 1703 or an instruction in the form of software.
- the above-mentioned processor 1703 may be a general-purpose processor, a digital signal processor (DSP), a microprocessor or a microcontroller, and may further include an application specific integrated circuit (ASIC), a field programmable Field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
- DSP digital signal processor
- ASIC application specific integrated circuit
- FPGA field programmable Field-programmable gate array
- the processor 1703 may implement or execute the methods, steps and logic block diagrams disclosed in the embodiment corresponding to FIG. 10 of the present application.
- a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
- the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
- the storage medium is located in the memory 1704, and the processor 1703 reads the information in the memory 1704, and completes the steps of the above method in combination with its hardware.
- the receiver 1701 can be used to receive input numeric or character information and to generate signal input related to the relevant settings and function control of the execution device 1700.
- the transmitter 1702 can be used to output digital or character information through the first interface; the transmitter 1702 can also be used to send instructions to the disk group through the first interface to modify the data in the disk group; the transmitter 1702 can also include a display device such as a display screen .
- Embodiments of the present application further provide a computer-readable storage medium, where a program for performing signal processing is stored in the computer-readable storage medium, and when the computer-readable storage medium runs on a computer, it causes the computer to execute the programs described in the foregoing embodiments. Perform the steps performed by the device.
- the training device, execution device, etc. provided by the embodiments of the present application may be specifically a chip, and the chip includes: a processing unit and a communication unit, the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin or circuit etc.
- the processing unit can execute the computer execution instructions stored in the storage unit, so that the chip in the training device executes the training method of the semi-supervised learning model described in the embodiments shown in FIGS. 5-9, or the chip in the execution device executes the above The image processing method described in the embodiment shown in FIG. 10 .
- the storage unit is a storage unit in the chip, such as a register, a cache, etc.
- the storage unit may also be a storage unit located outside the chip in the wireless access device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM).
- FIG. 18 is a schematic structural diagram of a chip provided by an embodiment of the application.
- the chip can be represented as a neural network processor NPU 200; the NPU 200 is mounted as a co-processor on the host CPU (Host CPU), and tasks are allocated by the host CPU.
- the core part of the NPU is the arithmetic circuit 2003, which is controlled by the controller 2004 to extract the matrix data in the memory and perform multiplication operations.
- the arithmetic circuit 2003 includes multiple processing units (Process Engine, PE). In some implementations, the arithmetic circuit 2003 is a two-dimensional systolic array. The arithmetic circuit 2003 may also be a one-dimensional systolic array or other electronic circuitry capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 2003 is a general-purpose matrix processor.
- the arithmetic circuit fetches the data corresponding to the matrix B from the weight memory 2002 and buffers it on each PE in the arithmetic circuit.
- the arithmetic circuit fetches the data of matrix A and matrix B from the input memory 2001 to perform matrix operation, and stores the partial result or final result of the matrix in an accumulator 2008 .
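- purely as a software analogue (not the NPU design itself), the dataflow just described, in which matrix B is held by the processing elements, tiles of matrix A are streamed through, and partial results are summed in the accumulator, corresponds to a tiled matrix multiplication:

```python
import numpy as np


def tiled_matmul(A, B, tile=16):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    acc = np.zeros((M, N), dtype=np.float32)                 # plays the role of accumulator 2008
    for k0 in range(0, K, tile):                             # stream matrix A tile by tile
        acc += A[:, k0:k0 + tile] @ B[k0:k0 + tile, :]       # partial results are accumulated
    return acc
```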
- Unified memory 2006 is used to store input data and output data.
- the weight data is transferred directly to the weight memory 2002 through the direct memory access controller (DMAC) 2005.
- Input data is also transferred to unified memory 2006 via the DMAC.
- the BIU is the Bus Interface Unit, that is, the bus interface unit 2010, which is used for the interaction between the AXI bus and the DMAC and the instruction fetch buffer (Instruction Fetch Buffer, IFB) 2009.
- the bus interface unit 2010 (Bus Interface Unit, BIU for short) is used for the instruction fetch memory 2009 to obtain instructions from the external memory, and also for the storage unit access controller 2005 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
- the DMAC is mainly used to transfer the input data in the external memory DDR to the unified memory 2006 , the weight data to the weight memory 2002 , or the input data to the input memory 2001 .
- the vector calculation unit 2007 includes a plurality of operation processing units, and further processes the output of the operation circuit, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison and so on, if necessary. It is mainly used for non-convolutional/fully connected layer network computation in neural networks, such as Batch Normalization, pixel-level summation, and upsampling of feature planes.
- the vector computation unit 2007 can store the processed output vectors to the unified memory 2006 .
- the vector calculation unit 2007 may apply a linear function and/or a nonlinear function to the output of the operation circuit 2003, such as linear interpolation of the feature plane extracted by the convolutional layer, such as a vector of accumulated values, to generate activation values.
- the vector computation unit 2007 generates normalized values, pixel-level summed values, or both.
- the vector of processed outputs can be used as activation input to the arithmetic circuit 2003, eg, for use in subsequent layers in a neural network.
- the instruction fetch memory (instruction fetch buffer) 2009 connected to the controller 2004 is used to store the instructions used by the controller 2004;
- Unified memory 2006, input memory 2001, weight memory 2002 and instruction fetch memory 2009 are all On-Chip memories. External memory is private to the NPU hardware architecture.
- the processor mentioned in any one of the above may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the program of the method in the first aspect.
- the device embodiments described above are only schematic, where the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, which can be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- the connection relationship between the modules indicates that there is a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines.
- a readable storage medium, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc, stores instructions that enable a computer device (which can be a personal computer, training equipment, or network equipment, etc.) to execute the methods described in the various embodiments of the present application.
- the computer program product includes one or more computer instructions.
- the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, training device, or data center to another website, computer, training device, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
- the computer-readable storage medium can be any available medium that a computer can store, or a data storage device such as a training device or a data center integrating one or more available media.
- the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., digital video discs (DVDs)), or semiconductor media (e.g., solid state drives (SSDs)), and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Mathematical Physics (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose a training method for a semi-supervised learning model, an image processing method, and a device, which can be applied to the computer vision field within the field of artificial intelligence. The method includes: first predicting the classification categories of a part of the unlabeled samples through a trained first semi-supervised learning model to obtain predicted labels, and judging whether each predicted label is correct by means of one-bit labeling; if the prediction is correct, the correct label (i.e., the positive label) of the sample is obtained, otherwise one wrong label (i.e., the negative label) of the sample can be excluded; afterwards, in the next training stage, the above information is used to reconstruct the training set (i.e., the first training set), and the initial semi-supervised learning model is retrained according to the first training set, thereby improving the prediction accuracy of the model. Since one-bit labeling only requires the annotator to answer "yes" or "no" for a predicted label, this labeling method can relieve the pressure of manual labeling in machine learning, which otherwise requires a large amount of correctly labeled data.
Description
This application claims priority to the Chinese patent application No. 202010899716.5, entitled "Training method for a semi-supervised learning model, image processing method, and device", filed with the Chinese Patent Office on August 31, 2020, the entire contents of which are incorporated herein by reference.
This application relates to the field of machine learning, and in particular to a training method for a semi-supervised learning model, an image processing method, and a device.
Traditional machine learning tasks are divided into unsupervised learning (unlabeled data, e.g., clustering, anomaly detection) and supervised learning (labeled data, e.g., classification, regression), while semi-supervised learning (SSL) is a key research topic in pattern recognition and machine learning and is a learning method that combines supervised learning with unsupervised learning. Semi-supervised learning uses a large amount of unlabeled data together with a portion of labeled data for pattern recognition.
In real-world scenarios, obtaining labels for data is often very expensive; however, existing semi-supervised learning models have certain requirements on the amount of labeled data: only when the labeled data reaches a certain amount can the generalization ability of the semi-supervised learning model be significantly enhanced, and there is still considerable room to improve the prediction accuracy of semi-supervised learning models.
On this basis, a method for training a semi-supervised learning model with higher prediction accuracy using a small amount of labeled data is urgently needed.
Summary of the Invention
The embodiments of the present application provide a training method for a semi-supervised learning model, an image processing method, and a device, which are used, in the current training stage, to predict the classification categories (i.e., labels) of a part of the unlabeled samples through a trained first semi-supervised learning model; if the prediction is correct, the correct label of the sample is obtained, otherwise one wrong label of the sample can be excluded; afterwards, in the next training stage, the above information is used to reconstruct the training set (i.e., the first training set) and update the initial semi-supervised learning model, thereby improving the prediction accuracy of the model.
On this basis, the embodiments of the present application provide the following technical solutions:
第一方面,本申请实施例首先提供一种半监督学习模型的训练方法,可用于人工智能领域中,该方法可以包括:首先,训练设备根据获取到的初始训练集训练初始半监督学习模型(可简称为初始模型),从而得到训练后的第一半监督学习模型(可简称为训练后的第一模型),该初始训练集中,一部分为有标签样本,另一部分为无标签样本,其中,这部分有标签样本称为第一有标签样本集,这部分无标签样本称为第一无标签样本集。得到训练后的第一模型后,训练设备再从初始训练集中的第一无标签样本集选取初始子集,初始子集中的各个无标签样本构成测试数据,用于对训练后的第一模型进行测试,通过该训练后的第一模型对选出的初始子集中的各个无标签样本进行预测,从而得到选取出的每个无标签样本对应的预测标签(训练后的第一模型会输出选取出来的每个无标签样本的在各个分类类别上的概率预测,通常选择概率最大的一个分类类别作为模型对该样本的预测标签), 各个预测标签就构成第一预测标签集。得到第一预测标签集之后,训练设备将根据该第一预测标签集对初始子集进行一比特标注,由于这种做法提供了log
22=1比特(bit)的信息量(即“是”或“否”),因此称为一比特标注。如上述所述,一比特标注的方式具体为:标注者针对每个预测样本对应的预测标签回答一个“是”或“否”的问题,若预测标签是预测正确的分类类别,则获得该无标签样本的正标签(也可称为正确标签),比如,预测标签为“狗”,对于该无标签样本的真实标签也是“狗”,那么则预测正确,该无标签样本获得正标签“狗”;若预测标签是预测错误的分类类别,则获得该无标签样本的负标签,据此可排除掉该无标签样本的一个错误标签,比如,预测标签为“猫”,对于该无标签样本的真实标签确是“狗”,那么则预测错误,该无标签样本获得负标签“不是猫”。经过一比特标注后,初始子集就被分为了第一子集和第二子集,其中,第一子集为预测正确的分类类别(即正标签)对应的样本集合,第二子集为预测错误的分类类别(即负标签)对应的样本集合。在获得一比特标注的结果后,即获得了相应数量的正标签和负标签,之后,训练设备重新构建训练集,重新构建的训练集可称为第一训练集,构建的具体方式可以是将正标签样本(即第一子集)与已有的有标签样本放到一起,作为本阶段的有标签样本,也可称为第二有标签样本集;各个负标签样本(即第二子集)就构成本阶段的负标签样本集;第一无标签样本集中余下的无标签样本就构成本阶段的第二无标签样本集。这三类样本共同构成第一训练集。构建好第一训练集后,再根据该第一训练集重新训练初始模型,得到训练后的能力更强的第二半监督学习模型(可简称为训练后的第二模型)。
在本申请上述实施方式中,首先,通过训练后的第一半监督学习模型对一部分无标签样本的分类类别进行预测,得到预测标签,并判断各个预测标签是否正确,如果预测正确则获得该样本的正确标签(即正标签),否则可排除掉该样本的一个错误标签(即负标签),之后,在下一训练阶段,训练设备再利用上述信息重新构建训练集(即第一训练集),并根据该第一训练集重新训练初始半监督学习模型,从而提高模型的预测准确率,并且,由于只需对预测标签回答“是”或“否”,这种标注方式能够缓解机器学习中需要大量有正确标签数据的人工标注压力。
在第一方面的一种可能的实现方式中,初始半监督学习模型的网络结构具体可以有多种表现形式,例如,可以包括如下模型中的任意一种:Π-model、VAT、LPDSSL、TNAR、pseudo-label、DCT、mean teacher模型。
在本申请上述实施方式中,阐述了可应用本申请实施例提供的训练方法的半监督学习模型可以有哪些,具备普适性和可选择性。
在第一方面的一种可能的实现方式中,若初始半监督学习模型是只具备一个损失函数的学习模型,如,Π-model、VAT、LPDSSL、TNAR、pseudo-label、DCT中的任意一种,那么训练设备根据第一训练集训练初始模型,得到训练后的第二模型具体可以是:针对第二有标签样本集和第二无标签样本集,训练设备根据第二有标签样本集和第二无标签样本集,利用第一损失函数对初始半监督学习模型进行训练,该第一损失函数就为初始半监督学习模型原有的损失函数loss1;针对负标签样本集,训练设备则根据该负标签样本集,利用第二损失函数对初始半监督学习模型进行训练,第二损失函数(可称为loss2)就为模型 输出的预测值与修改值之间的差值,所述修改值为将预测错误的分类类别在预测值上的对应维度置为零的值,该第二损失函数loss2就为上述所述的针对无标签样本集构建的新的损失函数。最后,根据loss=loss1+σ*loss2更新该初始模型,其中,σ表示平衡系数,是通过训练得到的一个可调节的参数,loss是整个半监督学习模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。
在本申请上述实施方式中,阐述了当初始半监督学习模型的损失函数为一个时,可针对负标签样本集再构建一个新的损失函数,即针对训练集中的不同类型的样本集,对应采用不同的损失函数,再基于总的损失函数对该初始半监督学习模型进行训练,更具有针对性。
在第一方面的一种可能的实现方式中,本申请实施例的训练方法除了可以用于训练上述原本只有一个损失函数的半监督学习模型,还可以用于训练损失函数有两个或多于两个的半监督学习模型,过程是类似的,具体地,该初始半监督学习模型可以是mean teacher模型。由于mean teacher模型的训练策略是:假设训练样本为有标签样本(x1,y1)以及无标签样本x2,其中,y1为x1的标签。将有标签样本(x1,y1)输入学生模型,从而计算损失函数1的输出值loss11;将无标签样本x2输入学生模型,从而计算得到预测标签label1,将无标签样本x2进行一些数据处理(一般为增加噪声的扰动处理)后输入教师模型,从而计算得到预测标签label2。如果mean teacher模型足够稳定,那么预测标签label1和预测标签label2应该是一样的,即教师模型能抵抗无标签样本x2的扰动,也就是说,希望学生模型和教师模型的预测标签尽量相等,因此根据lable1和label2得到损失函数2的输出值loss12。最后,根据loss=loss11+λ*loss12更新学生模型,其中,λ表示平衡系数,是通过训练得到的一个可调节的参数,loss是整个mean teacher模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。因此,训练设备根据第一训练集训练初始半监督学习模型,得到训练后的第二半监督学习模型具体还可以是:针对第二有标签样本集,训练设备根据第二有标签样本集,利用第三损失函数对mean teacher模型进行训练,该第三损失函数就为上述所述的损失函数1(即loss11);训练设备还将根据第二有标签样本集和第二无标签样本集,利用第四损失函数对mean teacher模型进行训练,该第四损失函数就为上述所述的损失函数2(即loss12),该第三损失函数loss11和第四损失函数loss12均为mean teacher模型原有的损失函数;此外,针对负标签样本,训练设备还会根据负标签样本集,利用第五损失函数对mean teacher模型进行训练,该第五损失函数(可称为loss13)就为模型输出的预测值与修改值之间的差值,所述修改值为将预测错误的分类类别在所述预测值上的对应维度置为零的值,该第五损失函数loss13就为上述所述的针对无标签样本集构建的新的损失函数。最后,训练设备再根据loss=loss11+λ*loss12+γ*loss13更新该初始的mean teacher模型,其中,λ和γ均表示平衡系数,是通过训练得到的可调节的参数,loss是整个mean teacher模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。
在本申请上述实施方式中,阐述了当初始半监督学习模型是mean teacher模型时,也可针对负标签样本集再构建一个新的损失函数,即针对训练集中的不同类型的样本集,对 应采用不同的损失函数,再基于总的损失函数对该初始半监督学习模型进行训练,更具有针对性。
在第一方面的一种可能的实现方式中,该第三损失函数可以为交叉熵损失函数;和/或,该第四损失函数可以为均方误差损失函数。
在本申请上述实施方式中,阐述了在mean teacher模型中,第三损失函数和第四损失函数的具体形式,具备可实现性。
在第一方面的一种可能的实现方式中,将第二无标签样本集作为新的第一无标签样本集、第二半监督学习模型作为新的第一半监督学习模型,重复执行上述步骤,直至第二无标签样本集为空。
在本申请上述实施方式中,通常当能够获得更多的正确标签的训练样本,模型的精度就会得到提升。因此最直接的方法是将训练过程分为多个阶段,每个阶段从第一无标签样本集中选出部分样本进行预测,针对预测标签重新构建训练集,再利用重新构建的训练集更新模型,从而每个阶段得到的训练后的第二半监督学习模型的泛化能力和预测准确率都比上一阶段得到的第二半监督学习模型更强。
在第一方面的一种可能的实现方式中,在根据第一训练集训练初始半监督学习模型,得到训练后的第二半监督学习模型之后,上述方法还包括:将训练后的第二半监督学习模型部署在目标设备上,该目标设备就用于获取目标图像,该训练后的第二半监督学习模型就用于对目标图像进行标签预测。
在本申请上述实施方式中,阐述了训练好的第二半监督学习模型的具体用处,即部署在目标设备上用于对目标图像进行标签预测,即用来对图像进行类别预测,相比于已有的训练方法训练得到的半监督学习模型,本申请实施例提供的训练后的第二半监督学习模型提高了目标图像识别的准确率。
在第一方面的一种可能的实现方式中,训练设备从第一无标签样本集选取初始子集具体可以是:从第一无标签样本集中随机选取预设数量的无标签样本构成初始子集。
在本申请上述实施方式中,阐述了如何从第一无标签样本集中选取无标签样本来构成初始子集的一种实现方式,随机选择的方式保证了选取样本的均衡。
本申请实施例第二方面还提供一种图像处理方法,该方法具体包括:首先,执行设备获取目标图像;之后,执行设备再将所述目标图像作为训练后的半监督学习模型的输入,输出对所述目标图像的预测结果,所述训练后的半监督学习模型为上述第一方面或第一方面任意一种可能实现方式中得到的第二半监督学习模型。
在本申请上述实施方式中,阐述了训练后的第二半监督学习模型的一种应用方式,即用来对图像进行类别预测,相比于已有的训练方法训练得到的半监督学习模型,本申请实施例提供的半监督学习模型提高了目标图像识别的准确率。
本申请实施例第三方面提供一种训练设备,该训练设备具有实现上述第一方面或第一方面任意一种可能实现方式的方法的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
本申请实施例第四方面提供一种执行设备,该执行设备具有实现上述第二方面的方法 的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
本申请实施例第五方面提供一种训练设备,可以包括存储器、处理器以及总线系统,其中,存储器用于存储程序,处理器用于调用该存储器中存储的程序以执行本申请实施例第一方面或第一方面任意一种可能实现方式的方法。
本申请实施例第六方面提供一种训练设备,可以包括存储器、处理器以及总线系统,其中,存储器用于存储程序,处理器用于调用该存储器中存储的程序以执行本申请实施例第二方面的方法。
本申请实施例第七方面提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机可以执行上述第一方面或第一方面任意一种可能实现方式的方法,或,使得计算机可以执行上述第二方面的方法。
本申请实施例第八方面提供了一种计算机程序或计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面或第一方面任意一种可能实现方式的方法,或,使得计算机可以执行上述第二方面的方法。
图1为本申请实施例提供的半监督学习模型训练和推理的一个过程示意图;
图2为本申请实施例提供的主动学习模型训练和推理的一个过程示意图;
图3为mean teacher模型的一个示意图;
图4为本申请实施例提供的人工智能主体框架的一种结构示意图;
图5为本申请实施例提供的半监督学习模型的训练方法的一个总体流程图;
图6为本申请实施例提供的任务处理系统的一种系统架构的示意图;
图7为本申请实施例提供的半监督学习模型的训练方法的一个流程示意图;
图8为本申请实施例提供的半监督学习模型的训练方法一个流程示意图;
图9为本申请实施例提供的mean teacher模型训练过程的一个示意图;
图10为本申请实施例提供的图像处理方法的一种流程示意图;
图11为本申请实施例提供的应用场景的一个示意图;
图12为本申请实施例提供的应用场景的另一示意图;
图13为本申请实施例提供的应用场景的另一示意图;
图14为本申请实施例提供的训练设备的一个示意图;
图15为本申请实施例提供的执行设备的一个示意图;
图16为本申请实施例提供的训练设备的另一示意图;
图17为本申请实施例提供的执行设备的另一示意图;
图18为本申请实施例提供的芯片的一种结构示意图。
本申请实施例提供了一种半监督学习模型的训练方法、图像处理方法及设备,用于在 当前训练阶段,通过训练后的第一半监督学习模型预测一部分无标签样本的分类类别(即标签),如果预测正确则获得该样本的正确标签,否则可排除掉该样本的一个错误标签,之后,在下一训练阶段利用上述信息重新构建训练集(即第一训练集)更新初始半监督学习模型,从而提高模型的预测准确率。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
本申请实施例涉及了许多关于半监督学习、学习模型等相关知识,为了更好地理解本申请实施例的方案,下面先对本申请实施例可能涉及的相关术语和概念进行介绍。应理解的是,相关的术语和概念解释可能会因为本申请实施例的具体情况有所限制,但并不代表本申请仅能局限于该具体情况,在不同实施例的具体情况可能也会存在差异,具体此处不做限定。
(1)神经网络
神经网络可以是由神经单元组成的,具体可以理解为具有输入层、隐含层、输出层的神经网络,一般来说第一层是输入层,最后一层是输出层,中间的层数都是隐含层。其中,具有很多层隐含层的神经网络则称为深度神经网络(deep neural network,DNN)。神经网络中的每一层的工作可以用数学表达式 y=a(W·x+b) 来描述,从物理层面,神经网络中的每一层的工作可以理解为通过五种对输入空间(输入向量的集合)的操作,完成输入空间到输出空间的变换(即矩阵的行空间到列空间),这五种操作包括:1、升维/降维;2、放大/缩小;3、旋转;4、平移;5、“弯曲”。其中1、2、3的操作由“W·x”完成,4的操作由“+b”完成,5的操作则由“a()”来实现。这里之所以用“空间”二字来表述是因为被分类的对象并不是单个事物,而是一类事物,空间是指这类事物所有个体的集合,其中,W是神经网络各层的权重矩阵,该矩阵中的每一个值表示该层的一个神经元的权重值。该矩阵W决定着上文所述的输入空间到输出空间的空间变换,即神经网络每一层的W控制着如何变换空间。训练神经网络的目的,也就是最终得到训练好的神经网络的所有层的权重矩阵。因此,神经网络的训练过程本质上就是学习控制空间变换的方式,更具体的就是学习权重矩阵。
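作为一个最简单的示意,下面给出单层变换 y=a(W·x+b) 的一段示例代码(维度与数值均为随意选取的示例,激活函数以ReLU为例):

```python
import torch

x = torch.randn(4)         # 输入向量(输入空间中的一个点)
W = torch.randn(3, 4)      # 该层的权重矩阵,完成升/降维、放大/缩小、旋转
b = torch.randn(3)         # 偏置,对应“+b”的平移操作
y = torch.relu(W @ x + b)  # a()以ReLU为例,实现“弯曲”
```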
需要注意的是,在本申请实施例中,基于机器学习(如,主动学习、监督学习、无监督学习、半监督学习等)任务所采用的学习模型(也可称为学习器、模型等),本质都是神经网络。
(2)损失函数(loss function)
在训练神经网络的过程中,因为希望神经网络的输出尽可能的接近真正想要预测的值,可以通过比较当前网络的预测值和真正想要的目标值,再根据两者之间的差异情况来更新每一层神经网络的权重矩阵(当然,在第一次更新之前通常会有初始化的过程,即为神经 网络中的各层预先配置参数),比如,如果网络的预测值高了,就调整权重矩阵让它预测低一些,不断的调整,直到神经网络能够预测出真正想要的目标值。因此,就需要预先定义“如何比较预测值和目标值之间的差异”,这便是损失函数(loss function)或目标函数(objective function),它们是用于衡量预测值和目标值的差异的重要方程。其中,以损失函数举例,损失函数的输出值(loss)越高表示差异越大,那么神经网络的训练就变成了尽可能缩小这个loss的过程。
在神经网络的训练过程中,可以采用误差反向传播(back propagation,BP)算法修正初始的神经网络模型中参数的大小,使得神经网络模型的重建误差损失越来越小。具体地,前向传递输入信号直至输出会产生误差损失,通过反向传播误差损失信息来更新初始的神经网络模型中的参数,从而使误差损失收敛。反向传播算法是以误差损失为主导的反向传播运动,旨在得到最优的神经网络模型的参数,例如权重矩阵。
(3)特征
特征是指输入变量,即简单线性回归中的x变量,简单的机器学习任务可能会使用单个特征,而比较复杂的机器学习任务可能会使用数百万个特征。
(4)标签
标签是简单线性回归中的y变量,标签可以是小麦未来的价格、图片中显示的动/植物品种、音频剪辑的含义或任何事物。在本申请实施例中,标签是指图片的分类类别。比如说有一张猫的图片,人们都知道它是只猫,但是计算设备不知道它是只猫,怎么办呢?那么给这张图片打上一个标签,该标签就用于向计算设备指示该图片蕴含的信息是“猫”,然后计算设备就知道它是只猫,计算设备根据这个标签对所有的猫进行学习就能通过这一只猫认识所有的猫。因此,给数据打标签,就是告诉计算设备,输入变量的多个特征描述的是什么(即y),y可以称之为label,也可以称之为target(即目标值)。
(5)样本
样本是指数据的特定实例,一个样本x代表的是一个对象,样本x通常用一个特征向量 x=(x₁,x₂,…,x_d)∈R^d 表示,其中,d代表样本x的维度(即特征个数),样本分为有标签样本和无标签样本,有标签样本同时包含特征和标签,无标签样本包含特征但不包含标签,机器学习的任务往往就是学习输入的d维训练样本集(可简称为训练集)中潜在的模式。
(6)模型
在本申请实施例中,基于机器学习(如,主动学习、监督学习、无监督学习、半监督学习等)任务所采用的学习模型(也可称为学习器、模型等),本质都是神经网络。模型定义了特征与标签之间的关系,模型的应用一般包括训练和推理两个阶段,训练阶段用于根据训练集对模型进行训练,以得到训练后的模型;推理阶段用于将训练后的模型对真实的无标签实例进行标签预测,而预测准确率是衡量一个模型训练的好坏的重要指标之一。
(7)半监督学习(semi-supervised learning,SSL)
根据训练样本是否有标签,传统的机器学习任务分为监督学习和无监督学习,监督学习指的是训练样本包含标记信息(即数据有标签)的学习任务,例如:常见的分类与回归 算法;无监督学习则是训练样本不包含标记信息(即数据无标签)的学习任务,例如:聚类算法、异常检测算法等。在实际生活中,遇到的情况往往是两者的折衷,即仅有部分样本是带标签的,另一部分样本是不带标签的,如果仅使用带标签或不带标签的样本,则一方面会造成部分样本的浪费,另一方面由于所使用的样本量较小,训练得到的模型效果并不是很好。例如:做网页推荐时需要让用户标记出感兴趣的网页,但是少有用户愿意花时间来提供标记,若直接丢弃掉无标签样本集,使用传统的监督学习方法,常常会由于训练样本的不充足,使得模型刻画总体分布的能力减弱,从而影响了模型的泛化性能。
基于此,半监督学习应运而生,半监督学习是属于监督学习与无监督学习相结合的一种学习方法,对应所使用的模型可称为半监督学习模型,如图1所示,图1示意的是半监督学习模型训练和推理的过程,该模型所使用的训练集由一部分有标签样本(少部分)和另一部分无标签样本(大部分)构成,半监督学习的基本思想是利用数据分布上的模型假设建立模型对无标签样例进行标签,让模型不依赖外界交互、自动地利用未标记样本来提升学习性能。
(8)主动学习(active learning)
当使用一些传统的监督学习方法做分类时,往往是训练样本规模越大,分类效果就越好。但在现实生活的很多场景中,有标签样本的获取比较困难,这需要领域内的专家(即标注者)来进行人工标注出正确标签,所花费的时间成本和经济成本都很大。而且,如果训练样本的规模过于庞大,训练的时间花费也会比较多。那有没有办法能够使用较少的有标签样本来获得性能较好的模型呢?主动学习提供了这种可能。
主动学习所使用的训练集与半监督学习所使用的训练集类似,如图2所示,图2示意的是主动学习模型训练和推理的过程,该模型所使用的训练集也是由一部分有标签样本(少部分)和另一部分无标签样本(大部分)构成,但与半监督学习模型不同的地方在于:主动学习的基本思想是首先仅用训练集中这部分有标签样本训练主动学习模型,再基于该主动学习模型对无标签样本进行预测,从中挑选出不确定性高或分类置信度低的样本(如,图2中查询到的无标签样本a)来咨询专家并进行标记,如,专家人工识别出该选出来的无标签样本为“马”,那么就给该无标签样本标记标签“马”,之后,将专家标记上真实标签的样本归为训练集中有标签样本那一类,再使用扩充后的有标签样本重新训练该主动学习模型,以提高模型的精确度。主动学习的问题是:需要专家参与样本的正确标记(即标记真实标签)。
(9)mean teacher模型
mean teacher模型也可称为教师学生模型(teacher-student model),是一种半监督学习模型,该模型的相关结构如图3所示,该模型包括两个子模型,一个是学生模型,另一个是教师模型。也就是说,mean teacher模型既充当学生,又充当老师,作为老师,通过教师模型产生学生模型学习时的目标;作为学生模型,则利用教师模型产生的目标来进行学习。而教师模型的网络参数是由历史上(前几个step)学生模型的网络参数经过加权平均得到。mean teacher模型中的两个子模型的网络结构是一样的,在训练过程中,学生模型的网络参数根据损失函数梯度下降法更新得到;教师模型的网络参数通过学生模型的网络参数迭代得到。
由于mean teacher模型属于半监督学习模型的一种,因此其使用的训练集也是一部分为有标签样本,另一部分为无标签样本。下面介绍一下mean teacher模型的训练策略:假设训练样本为有标签样本(x1,y1)以及无标签样本x2,其中,y1为x1的标签。将有标签样本(x1,y1)输入学生模型,从而计算损失函数1的输出值loss11;将无标签样本x2输入学生模型,从而计算得到预测标签label1,将无标签样本x2进行一些数据处理(一般为增加噪声的扰动处理)后输入教师模型,从而计算得到预测标签label2。如果mean teacher模型足够稳定,那么预测标签label1和预测标签label2应该是一样的,即教师模型能抵抗无标签样本x2的扰动,也就是说,希望学生模型和教师模型的预测标签尽量相等,因此根据lable1和label2得到损失函数2的输出值loss12。最后,根据loss=loss11+λ*loss12更新学生模型,其中,λ表示平衡系数,是通过训练得到的一个可调节的参数,loss是整个mean teacher模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。在每个step中,更新学生模型的网络参数后,再利用学生模型的网络参数更新教师模型的网络参数。其中,图3中教师模型中的网络参数θ’是由学生模型中的网络参数θ更新得到,更新的方式是通过滑动平均(exponential moving average)更新,图3中的教师模型中的η’为对输入增加扰动处理的参数。
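为便于理解教师模型网络参数θ′由学生模型网络参数θ经滑动平均(EMA)更新的过程,下面给出一段示意性的代码草图(函数名与平滑系数的取值均为示例性假设,并非mean teacher模型的既定实现):

```python
import torch

@torch.no_grad()
def update_teacher_by_ema(student, teacher, alpha=0.99):
    """θ' ← α·θ' + (1-α)·θ:用学生模型参数对教师模型参数做滑动平均更新。"""
    for p_teacher, p_student in zip(teacher.parameters(), student.parameters()):
        p_teacher.data.mul_(alpha).add_(p_student.data, alpha=1.0 - alpha)
```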
下面结合附图,对本申请的实施例进行描述。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
首先,对人工智能系统总体工作流程进行描述,请参见图4,图4示出的为人工智能主体框架的一种结构示意图,下面从“智能信息链”(水平轴)和“IT价值链”(垂直轴)两个维度对上述人工智能主体框架进行阐述。其中,“智能信息链”反映从数据的获取到处理的一系列过程。举例来说,可以是智能信息感知、智能信息表示与形成、智能推理、智能决策、智能执行与输出的一般过程。在这个过程中,数据经历了“数据—信息—知识—智慧”的凝练过程。“IT价值链”从人工智能的底层基础设施、信息(提供和处理技术实现)到系统的产业生态过程,反映人工智能为信息技术产业带来的价值。
(1)基础设施
基础设施为人工智能系统提供计算能力支持,实现与外部世界的沟通,并通过基础平台实现支撑。通过传感器与外部沟通;计算能力由智能芯片(CPU、NPU、GPU、ASIC、FPGA等硬件加速芯片)提供;基础平台包括分布式计算框架及网络等相关的平台保障和支持,可以包括云存储和计算、互联互通网络等。举例来说,传感器和外部沟通获取数据,这些数据提供给基础平台提供的分布式计算系统中的智能芯片进行计算。
(2)数据
基础设施的上一层的数据用于表示人工智能领域的数据来源。数据涉及到图形、图像、语音、文本,还涉及到传统设备的物联网数据,包括已有系统的业务数据以及力、位移、液位、温度、湿度等感知数据。
(3)数据处理
数据处理通常包括数据训练、机器学习、深度学习、搜索、推理、决策等方式。
其中,机器学习和深度学习可以对数据进行符号化和形式化的智能信息建模、抽取、预处理、训练等。
推理是指在计算机或智能系统中,模拟人类的智能推理方式,依据推理控制策略,利用形式化的信息进行机器思维和求解问题的过程,典型的功能是搜索与匹配。
决策是指智能信息经过推理后进行决策的过程,通常提供分类、排序、预测等功能。
(4)通用能力
对数据经过上面提到的数据处理后,进一步基于数据处理的结果可以形成一些通用的能力,比如可以是算法或者一个通用系统,例如,翻译,文本的分析,计算机视觉的处理,语音识别,图像的识别等等。
(5)智能产品及行业应用
智能产品及行业应用指人工智能系统在各领域的产品和应用,是对人工智能整体解决方案的封装,将智能信息决策产品化、实现落地应用,其应用领域主要包括:智能终端、智能制造、智能交通、智能家居、智能医疗、自动驾驶、智慧城市等。
本申请实施例可以应用在机器学习中各种学习模型的训练方法优化上,而通过本申请的训练方法训练得到的学习模型具体可以应用在人工智能领域的各个细分领域中,如,计算机视觉领域、图像处理领域等,具体的,结合图4来讲,本申请实施例中基础设施获取的数据集中的数据可以是通过摄像头、雷达等传感器获取到的不同类型的多个数据(也可称为训练数据或训练样本,多个训练数据就构成训练集),也可以是多个图像数据或多个视频数据,只要该训练集满足用于对学习模型进行迭代训练的功能即可,具体此处对训练集内的数据类型不限定。为便于理解,以下本申请实施例均以训练集为图像数据为例进行示意。这里需要注意的是,本申请实施例所用的训练集包括一部分有标签样本(少部分)和另一部分无标签样本(大部分),这部分有标签样本可事先由标注者人工标记出真实标签。
本申请实施例半监督学习模型的训练方法的总体流程如图5所示,首先,根据初始训练集的有标签样本和无标签样本对初始半监督学习模型(可简称为初始模型)进行训练,得到训练后的第一半监督学习模型(可简称为训练后的第一模型),之后,从初始训练集内所有无标签样本中选取一部分无标签样本作为训练后的第一模型的输入,由训练后的第一模型对这部分选取出来的无标签样本进行预测,得到选取出来的每个无标签样本的预测标签,再对各个预测标签进行一比特标注,由于这种做法提供了log
₂2=1比特(bit)的信息量(即“是”或“否”),因此称为一比特标注。一比特标注的方式具体为:标注者针对每个预测样本对应的预测标签回答一个“是”或“否”的问题,若预测标签是预测正确的分类类别,则获得该无标签样本的正标签(也可称为正确标签),比如,预测标签为“狗”,对于该无标签样本的真实标签也是“狗”,那么则预测正确,该无标签样本获得正标签“狗”;若预测标签是预测错误的分类类别,则获得该无标签样本的负标签,据此可排除掉该无标签样本的一个错误标签,比如,预测标签为“猫”,对于该无标签样本的真实标签确是“狗”,那么则预测错误,该无标签样本获得负标签“不是猫”。在每一阶段获得一比特标注的结果后,即获得了相应数量的正标签和负标签,将正标签样本与已有的有标签样本放到一起,作为本阶段的有标签样本,对于负标签样本,与之前拥有的负标签样本合并到一起,此外还有余下的无标签样本,这构成了本阶段的所有的三类样本,这三类样本共同构成第一训练集,之后根据该第一训练集训练初始模型,得到训练后的能力更强的第二半监督学习模型(可简称为训练后的第二模型),最后利用训练后的第二模型的预测标签再次进行一比特标注,就可以得到更多的正标签,不断重复这一过程就可以得到能力越来越强的新模型。
需要说明的是,图5所述的应用流程可部署在训练设备上,请参阅图6,图6为本申请实施例提供的任务处理系统的一种系统架构图,在图6中,任务处理系统200包括执行设备210、训练设备220、数据库230、客户设备240、数据存储系统250和数据采集设备260,执行设备210中包括计算模块211。其中,数据采集设备260用于获取用户需要的开源的大规模数据集(即图4所示的初始训练集),并将初始训练集以及后续基于初始训练集构建的第一训练集、第二训练集等各个阶段的训练集存入数据库230中,训练设备220基于数据库230中的维护的各个阶段的训练集对目标模型/规则201(即上述所述的各个阶段的初始模型)进行训练,训练得到的训练后的模型(如,上述所述的第二模型)再在执行设备210上进行运用。执行设备210可以调用数据存储系统250中的数据、代码等,也可以将数据、指令等存入数据存储系统250中。数据存储系统250可以置于执行设备210中,也可以为数据存储系统250相对执行设备210是外部存储器。
经由训练设备220训练后的第二模型可以应用于不同的系统或设备(即执行设备210)中,具体可以是边缘设备或端侧设备,例如,手机、平板、笔记本电脑、摄像头等等。在图6中,执行设备210配置有I/O接口212,与外部设备进行数据交互,“用户”可以通过客户设备240向I/O接口212输入数据。如,客户设备240可以是监控系统的摄像设备,通过该摄像设备拍摄的目标图像作为输入数据输入至执行设备210的计算模块211,由计算模块211对输入的该目标图像进行检测后得出检测结果(即预测标签),再将该检测结果输出至摄像设备或直接在执行设备210的显示界面(若有)进行显示;此外,在本申请的一些实施方式中,客户设备240也可以集成在执行设备210中,如,当执行设备210为手机时,则可以直接通过该手机获取到目标任务(如,可以通过该手机的摄像头拍摄到目标图像,或,通过该手机的录音模块录取到的目标语音等,此处对目标任务不做限定)或者接收其他设备(如,另一个手机)发送的目标任务,再由该手机内的计算模块211对该目标任务进行检测后得出检测结果,并直接将该检测结果呈现在手机的显示界面。此处对执行设备210与客户设备240的产品形态不做限定。
值得注意的,图6仅是本申请实施例提供的一种系统架构的示意图,图中所示设备、器件、模块等之间的位置关系不构成任何限制,例如,在图6中,数据存储系统250相对执行设备210是外部存储器,在其它情况下,也可以将数据存储系统250置于执行设备210中;在图6中,客户设备240相对执行设备210是外部设备,在其他情况下,客户设备240也可以集成在执行设备210中。
还需要说明的是,上述实施例所述的初始模型的训练可以是均在云侧实现,例如,可以由云侧的训练设备220(该训练设备220可设置在一个或多个服务器或者虚拟机上)获取训练集,并根据训练集内的训练样本对初始模型进行训练,得到训练后的第二模型,之后,该训练后的第二模型再发送给执行设备210进行应用,例如,发送给执行设备210进 行标签预测,示例性地,图6对应的系统架构中所述,就是由训练设备220对初始模型进行整体训练,训练后的第二模型再发送给执行设备210进行使用;上述实施例所述的初始模型的训练也可以是均在终端侧实现,即训练设备220可以是位于终端侧,例如,可以由终端设备(如,手机、智能手表等)、轮式移动设备(如,自动驾驶车辆、辅助驾驶车辆等)等获取训练集,并根据训练集内的训练样本对初始模型进行训练,得到训练后的第二模型,该训练后的第二模型就可以直接在该终端设备使用,也可以由该终端设备发送给其他的设备进行使用。具体本申请实施例对第二模型在哪个设备(云侧或终端侧)上进行训练或应用不做限定。
接下来介绍本申请实施例所提供的半监督学习模型的训练方法,请参阅图7,图7为本申请实施例提供的半监督学习模型的训练方法的一种流程示意图,具体可以包括如下步骤:
701、根据初始训练集训练初始半监督学习模型,得到训练后的第一半监督学习模型,初始训练集包括第一有标签样本集和第一无标签样本集。
首先,训练设备根据获取到的初始训练集训练初始半监督学习模型(可简称为初始模型),从而得到训练后的第一半监督学习模型(可简称为训练后的第一模型),该初始训练集中,一部分为有标签样本,另一部分为无标签样本,其中,这部分有标签样本称为第一有标签样本集,这部分无标签样本称为第一无标签样本集。
这里需要说明的是,根据初始训练集如何训练一个已知网络结构的初始半监督学习模型是已知的,具体此处不予赘述。
702、从第一无标签样本集选取初始子集,并通过训练后的第一半监督学习模型对所述初始子集进行预测,得到第一预测标签集。
训练设备得到训练后的第一模型后,再从初始训练集中的第一无标签样本集选取初始子集,初始子集中的各个无标签样本构成测试数据,用于对训练后的第一模型进行测试,通过该训练后的第一模型对选出的初始子集中的各个无标签样本进行预测,从而得到选取出的每个无标签样本对应的预测标签(训练后的第一模型会输出选取出来的每个无标签样本的在各个分类类别上的概率预测,通常选择概率最大的一个分类类别作为模型对该样本的预测标签),各个预测标签就构成第一预测标签集。
为便于理解,下面举例进行示意:假设初始训练集中包括330个训练样本,其中,30个为有标签样本,300个为无标签样本,那么这30个有标签样本就构成上述所述的第一有标签样本集,这300个无标签样本就构成上述所述的第一无标签样本集。首先,根据初始训练集中的330个训练样本对初始模型进行训练,得到训练后的第一模型,之后,再从这300个无标签样本中选取部分无标签样本构成初始子集,假设选出的是100个无标签样本,那么这100个无标签样本就依次输入训练后的第一模型进行预测,分别得到对应的100个预测标签,这100个预测标签就构成上述所述的第一预测标签集。
需要说明的是,在本申请的一些实施方式中,训练设备从第一无标签样本集中选取初始子集的方式包括但不限于如下方式:从第一无标签样本集中随机选取预设数量的无标签样本构成初始子集。例如,假设第一无标签样本集包括300个无标签样本,可以从中随机 选取预设数量(如,100、150等)的无标签样本构成该初始子集。
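作为示意,下面给出一段从第一无标签样本集中随机选取预设数量无标签样本构成初始子集的示例代码(函数名与预设数量的取值均为示例性假设):

```python
import random

def select_initial_subset(unlabeled_samples, preset_num=100):
    """从第一无标签样本集中随机选取预设数量的无标签样本构成初始子集。"""
    preset_num = min(preset_num, len(unlabeled_samples))
    return random.sample(unlabeled_samples, preset_num)
```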
703、根据第一预测标签集将初始子集分为第一子集和第二子集,第一子集为预测正确的分类类别对应的样本集合,第二子集为预测错误的分类类别对应的样本集合。
训练设备得到第一预测标签集之后,将根据该第一预测标签集对初始子集进行一比特标注,由于这种做法提供了log
₂2=1比特(bit)的信息量(即“是”或“否”),因此称为一比特标注。如上述所述,一比特标注的方式具体为:标注者针对每个预测样本对应的预测标签回答一个“是”或“否”的问题,若预测标签是预测正确的分类类别,则获得该无标签样本的正标签(也可称为正确标签),比如,预测标签为“狗”,对于该无标签样本的真实标签也是“狗”,那么则预测正确,该无标签样本获得正标签“狗”;若预测标签是预测错误的分类类别,则获得该无标签样本的负标签,据此可排除掉该无标签样本的一个错误标签,比如,预测标签为“猫”,对于该无标签样本的真实标签确是“狗”,那么则预测错误,该无标签样本获得负标签“不是猫”。经过一比特标注后,初始子集就被分为了第一子集和第二子集,其中,第一子集为预测正确的分类类别(即正标签)对应的样本集合,第二子集为预测错误的分类类别(即负标签)对应的样本集合。
需要注意的是,在本申请的一些实施方式中,标注者可以是本领域的人工标注者,即通过人工标注者判断这些预测标签是否正确,即需要标注者在观察该样本后,回答该样本是否属于预测的那个类别,预测正确则得到该样本的正确标签(即正标签),预测错误则得到该样本的错误标签(即负标签)。在本申请的另一些实施方式中,标注者可以是计算设备,该计算设备已知各个无标签样本的真实标签,计算设备将同一个无标签样本的真实标签与预测标签进行比对,从而判断该样本是否属于预测的那个类别,预测正确则得到该样本的正确标签(即正标签),预测错误则得到该样本的错误标签(即负标签)。具体本申请实施例对标注者的具体表现形式不做限定。
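为便于理解一比特标注将初始子集划分为第一子集与第二子集的过程,下面给出一段示意性的代码草图(此处以标注者为已知真实标签的计算设备为例,函数名与数据组织方式均为示例性假设):

```python
def one_bit_annotate(samples, predicted_labels, true_labels):
    """对每个预测标签回答“是/否”:预测正确的样本进入第一子集(正标签),
    预测错误的样本进入第二子集(负标签,记录被排除的错误类别)。"""
    first_subset, second_subset = [], []
    for x, pred, truth in zip(samples, predicted_labels, true_labels):
        if pred == truth:
            first_subset.append((x, pred))      # 正标签样本,如(图片, “狗”)
        else:
            second_subset.append((x, pred))     # 负标签样本,记录被排除的错误标签,如“不是猫”中的“猫”
    return first_subset, second_subset
```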
704、构建第一训练集,所述第一训练集包括第二有标签样本集、第二无标签样本集和负标签样本集。
训练设备在获得一比特标注的结果后,即获得了相应数量的正标签和负标签,之后,训练设备将重新构建训练集,重新构建的训练集可称为第一训练集,构建的具体方式可以是:将正标签样本(即第一子集)与已有的有标签样本放到一起,作为本阶段的有标签样本,也可称为第二有标签样本集;各个负标签样本(即第二子集)就构成本阶段的负标签样本集;第一无标签样本集中余下的无标签样本就构成本阶段的第二无标签样本集。这三类样本共同构成第一训练集。
705、根据第一训练集训练初始半监督学习模型,得到训练后的第二半监督学习模型。
构建好第一训练集后,训练设备再根据该第一训练集重新训练初始模型,得到训练后的能力更强的第二半监督学习模型(可简称为训练后的第二模型)。
需要说明的是,由于本申请构建的第一训练集相对于初始训练集不同的地方在于:多了一个负标签样本集,针对有标签样本集和无标签样本集,依然采用原有的方式进行训练,针对负标签样本集,则构建一个新的损失函数,利用新构建的损失函数对负标签样本集进行训练,该新构建的损失函数定义为模型输出的预测值与修改值之间的差值,所述修改值 为将预测错误的分类类别在预测值上的对应维度置为零的值。
为便于针对负标签样本构建的对应损失函数,下面举例进行示意:一般来说,模型对输入的某个样本(假设样本为图片)进行预测,输出的预测值为一个n维向量,这n个维度表示的就是n个分类类别,这n个维度上的取值就表示每个对应的分类类别的预测概率,通常选择预测概率最大的一个分类类别作为模型对该样本的预测标签,该预测概率一般为归一化后的预测概率,即所有的分类类别的预测概率相加和为1。假设有6个分类类别,分别为“猪”、“狗”、“猫”、“马”、“羊”、“鹅”,将样本输入模型后,模型会输出样本在上述各个分类类别上的概率预测,假设模型输出的预测值为[0.05,0.04,0.01,0.5,0.32,0.08],且预测值各个维度的分类类别依次对应“猪”、“狗”、“猫”、“马”、“羊”、“鹅”这6个分类类别,那么由输出的预测值可知,对应“马”这个分类类别的预测概率最大,因此模型对该样本的预测标签为“马”,假设通过一比特标注后发现针对该样本的预测标签是个负标签,即该样本的负标签为“不是马”,因此将对应“马”的那个维度的取值修改为零,即将0.5修改成0,那么修改值则为[0.05,0.04,0.01,0,0.32,0.08],然后将上述预测值与该修改值之间的差值定义针对负标签样本集构建的损失函数。
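沿用上述6个分类类别的示例,下面给出一段计算修改值及其与预测值之间差值的示意代码(以均方误差作为“差值”的度量仅为示例性假设):

```python
import torch
import torch.nn.functional as F

# 6个分类类别依次为“猪”、“狗”、“猫”、“马”、“羊”、“鹅”
pred = torch.tensor([0.05, 0.04, 0.01, 0.5, 0.32, 0.08])   # 模型输出的预测值
neg_index = 3                                               # 一比特标注得到负标签“不是马”
modified = pred.clone()
modified[neg_index] = 0.0                                   # 修改值:[0.05, 0.04, 0.01, 0, 0.32, 0.08]
loss_neg = F.mse_loss(pred, modified)                       # 预测值与修改值之间的差值
```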
需要说明的是,在本申请的一些实施方式中,所采用的半监督学习模型(可简称为模型)不同,那么根据第一训练集进行训练的过程也会有些不同,下面分别进行阐述:
(1)初始半监督学习模型的损失函数为一个的情况。
在本申请的一些实施方式中,训练设备根据第一训练集训练初始模型,得到训练后的第二模型具体可以是:针对第二有标签样本集和第二无标签样本集,训练设备根据第二有标签样本集和第二无标签样本集,利用第一损失函数对初始半监督学习模型进行训练,该第一损失函数就为初始半监督学习模型原有的损失函数loss1;针对负标签样本集,训练设备则根据该负标签样本集,利用第二损失函数对初始半监督学习模型进行训练,第二损失函数(可称为loss2)就为模型输出的预测值与修改值之间的差值,所述修改值为将预测错误的分类类别在预测值上的对应维度置为零的值,该第二损失函数loss2就为上述所述的针对无标签样本集构建的新的损失函数,具体此处不予赘述。
最后,根据loss=loss1+σ*loss2更新该初始模型,其中,σ表示平衡系数,是通过训练得到的一个可调节的参数,loss是整个半监督学习模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。
需要说明的是,在本申请的一些实施方式中,该初始半监督学习模型包括如下模型中的任意一种:Π-model、VAT、LPDSSL、TNAR、pseudo-label、DCT。这些模型原有的损失函数均为一个且均已知。
(2)初始半监督学习模型的损失函数为多个的情况。
在本申请的一些实施方式中,本申请实施例的训练方法除了可以用于训练上述原本只有一个损失函数的半监督学习模型,还可以用于训练损失函数有两个或多于两个的半监督学习模型,过程是类似的,下面以半监督学习模型为mean teacher模型为例,对初始半监督学习模型的损失函数为多个的情况进行示意。
下面先介绍一下mean teacher模型的训练策略:假设训练样本为有标签样本(x1,y1) 以及无标签样本x2,其中,y1为x1的标签。将有标签样本(x1,y1)输入学生模型,从而计算损失函数1的输出值loss11;将无标签样本x2输入学生模型,从而计算得到预测标签label1,将无标签样本x2进行一些数据处理(一般为增加噪声的扰动处理)后输入教师模型,从而计算得到预测标签label2。如果mean teacher模型足够稳定,那么预测标签label1和预测标签label2应该是一样的,即教师模型能抵抗无标签样本x2的扰动,也就是说,希望学生模型和教师模型的预测标签尽量相等,因此根据lable1和label2得到损失函数2的输出值loss12。最后,根据loss=loss11+λ*loss12更新学生模型,其中,λ表示平衡系数,是通过训练得到的一个可调节的参数,loss是整个mean teacher模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。
基于该mean teacher模型,针对第二有标签样本集,训练设备根据第二有标签样本集,利用第三损失函数对mean teacher模型进行训练,该第三损失函数就为上述所述的损失函数1(即loss11);训练设备还将根据第二有标签样本集和第二无标签样本集,利用第四损失函数对mean teacher模型进行训练,该第四损失函数就为上述所述的损失函数2(即loss12),该第三损失函数loss11和第四损失函数loss12均为mean teacher模型原有的损失函数;此外,针对负标签样本,训练设备还会根据负标签样本集,利用第五损失函数对mean teacher模型进行训练,该第五损失函数(可称为loss13)就为模型输出的预测值与修改值之间的差值,所述修改值为将预测错误的分类类别在所述预测值上的对应维度置为零的值,该第五损失函数loss13就为上述所述的针对无标签样本集构建的新的损失函数,具体此处不予赘述。
需要说明的是,在本申请的一些实施方式中,第三损失函数loss11可以是交叉熵损失函数,第四损失函数loss12可以是均方误差损失函数。
最后,根据loss=loss11+λ*loss12+γ*loss13更新该初始的mean teacher模型,其中,λ和γ均表示平衡系数,是通过训练得到的可调节的参数,loss是整个mean teacher模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。
706、将第二无标签样本集作为新的第一无标签样本集、第二半监督学习模型作为新的第一半监督学习模型,重复执行步骤702至步骤705,直至第二无标签样本集为空。
上述步骤702至步骤705的过程是得到训练后的第二模型的一个阶段(可称为阶段1),通常当能够获得更多的正确标签的训练样本,模型的精度会得到提升。最直接的方法是将训练过程分为多个阶段,因此,在本申请的一些实施方式中,为使得泛化能力更强的第二模型,一般会经过多个阶段的训练,即利用训练后的第二模型的预测标签再次进行一比特标注,就可以得到更多的正标签,不断重复这一过程就可以得到能力越来越强的新模型。具体地,训练设备就是将第二无标签样本集作为新的第一无标签样本集,并将第二半监督学习模型作为新的第一半监督学习模型,重复执行步骤702至步骤705,直至第二无标签样本集为空。
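下面给出一段与步骤702至步骤706的多阶段训练流程相对应的示意性代码草图(其中train_ssl、select_subset、one_bit_annotate_fn均为示例性假设的外部函数,分别完成半监督训练、初始子集选取和一比特标注,并非本申请限定的实现):

```python
def staged_training(initial_model, labeled_set, unlabeled_set,
                    train_ssl, select_subset, one_bit_annotate_fn):
    """多阶段训练:每个阶段重复步骤702~705,直至第二无标签样本集为空。"""
    negative_set = []
    model = train_ssl(initial_model, labeled_set, unlabeled_set, negative_set)   # 得到训练后的第一模型
    while unlabeled_set:
        subset = select_subset(unlabeled_set)                     # 步骤702:选取初始子集
        preds = [model(x) for x in subset]                        # 得到第一预测标签集
        positive, negative = one_bit_annotate_fn(subset, preds)   # 步骤703:一比特标注划分子集
        labeled_set = labeled_set + positive                      # 步骤704:第二有标签样本集
        negative_set = negative_set + negative                    # 负标签样本集
        unlabeled_set = [x for x in unlabeled_set if x not in subset]  # 第二无标签样本集
        model = train_ssl(initial_model, labeled_set, unlabeled_set, negative_set)  # 步骤705
    return model
```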
需要说明的是,在本申请的一些实施方式中,也可以不包括步骤706,即只进行一个阶段的训练,得到一个训练阶段的训练后的第二模型,该第二模型相对已有的训练方法,其泛化能力也是提高了的。
还需要说明的是,在本申请的一些实施方式中,假设构成初始子集的无标签样本就是第一无标签样本中的所有样本,那么在该第一训练阶段,第二无标签样本就为空,那么这种情况下也不包括步骤706。
还需要说明的是,得到了训练后的第二模型,就可将该第二模型部署在目标设备上进行应用。在本申请实施例中,目标设备具体可以是移动端的设备,如摄像头、智能家居等边缘设备,也可以是如手机、个人计算机、计算机工作站、平板电脑、智能可穿戴设备(如,智能手表、智能手环、智能耳机等)、游戏机、机顶盒、媒体消费设备等端侧设备,具体此处对目标设备的类型不做限定。
还需要说明的是,上述实施例所述的半监督学习模型的训练可以是均在云侧实现,例如,基于图6所示的任务处理系统的架构图,可以由云侧的训练设备220(该训练设备220可设置在一个或多个服务器或者虚拟机上)获取训练集,并根据训练集内的训练样本对初始半监督学习模型进行训练,得到训练后的半监督学习模型(如,训练后的第一模型、第二模型),之后,该训练后的第二模型再发送给执行设备210进行应用,例如,发送给执行设备210进行标签预测,示例性地,图6对应的系统架构中所述,就是由训练设备220对各个阶段的模型进行训练,训练后的第二模型再发送给执行设备210进行使用;上述实施例所述的初始半监督学习模型的训练也可以是均在终端侧实现,即训练设备220可以是位于终端侧,例如,可以由终端设备(如,手机、智能手表等)、轮式移动设备(如,自动驾驶车辆、辅助驾驶车辆等)等获取训练集,并根据训练集内的训练样本对其进行训练,得到训练后的半监督学习模型(如,训练后的第一模型、第二模型),该训练后的第二模型就可以直接在该终端设备使用,也可以由该终端设备发送给其他的设备进行使用。具体本申请实施例对第二模型在哪个设备(云侧或终端侧)上进行训练或应用不做限定。
在本申请上述实施方式中,首先,训练设备通过训练后的第一半监督学习模型对一部分无标签样本的分类类别进行预测,得到预测标签,并通过一比特标注的方式来猜测各个预测标签是否正确,如果预测正确则获得该样本的正确标签(即正标签),否则可排除掉该样本的一个错误标签(即负标签),之后,在下一训练阶段,训练设备利用上述信息重新构建训练集(即第一训练集),并根据该第一训练集重新训练初始半监督学习模型,从而提高模型的预测准确率,并且,由于一比特标注只需标注者针对预测标签回答“是”或“否”,这种标注方式能够缓解机器学习中需要大量有正确标签数据的人工标注压力。
为便于理解上述图7对应的本申请所述的训练方法,下面举例对整个训练的过程进行示意:请参阅图8,图8为本申请实施例提供的半监督学习模型的训练方法一个流程示意图,假设初始训练集中包括330个训练样本,其中,30个为有标签样本(如图8中黑色底三角形所示意),300个为无标签样本(如图8中灰色底圆点所示意),那么这30个有标签样本就构成上述所述的第一有标签样本集,这300个无标签样本就构成上述所述的第一无标签样本集。首先,根据初始训练集中的330个训练样本对初始模型进行训练(即图8中的初始化,对应阶段0),得到训练后的第一模型,之后,再从这300个无标签样本中选取部分无标签样本构成初始子集,假设随机选取了100个无标签样本构成初始子集,那么这100个无标签样本就依次输入训练后的第一模型进行预测,分别得到对应的100个预测标 签,这100个预测标签就构成上述所述的第一预测标签集。之后,训练设备基于一比特标注的方式,将这100个选出来的无标签样本分为正标签样本(如图8中的白色底三角形所示意)和负标签样本(如图8中白色底圆形所示意),假设分出的正标签样本为40个,负标签样本为60个,那么就将这40个正标签样本与原来的30个有标签样本整合到一起,构成第二有标签样本集,原来的300个无标签样本除去选出来的100个无标签样本后,还剩下200个无标签样本,那么剩下的这200个无标签样本就构成第二无标签样本,根据一比特标注得到的60个负标签样本就构成负标签样本集,因此,第二有标签样本集、第二无标签样本集和负标签样本集就构成第一阶段的第一训练集,根据该第一训练集,通过步骤705所述的方式,得到训练后的第二模型(即图8中的模型M1),第一次得到训练后的第二模型就为第一阶段(即图8中的阶段1)。之后,训练设备又从第二无标签样本集中选取部分无标签样本构成第二阶段(即图8中的阶段2)的初始子集,假设又随机选取了100个无标签样本构成第二阶段的初始子集(也可以是其他数量,此处不限定),那么这100个无标签样本就依次输入训练后的第二模型进行预测,分别得到对应的100个预测标签,这100个预测标签就构成第二阶段的第一预测标签集(也可称为第二预测标签集)。之后,训练设备基于一比特标注的方式,将这100个选出来的无标签样本分为正标签样本和负标签样本,假设分出的正标签样本为65个,负标签样本为35个,那么就将这65个正标签样本与已有的70个有标签样本整合到一起,构成第二阶段的第二有标签样本集,第一阶段剩下的200个无标签样本除去第二阶段选出来的100个无标签样本后,还剩下100个无标签样本,那么剩下的这100个无标签样本就构成第二阶段的第二无标签样本,根据一比特标注得到的35个负标签样本与之前第一阶段得到的60个负标签样本就构成第二阶段的负标签样本集,因此,第二阶段的第二有标签样本集、第二无标签样本集和负标签样本集就构成第二阶段的第一训练集(可称为第二训练集),根据该第二阶段的第一训练集,再次通过步骤705所述的方式,得到第二阶段训练后的第二模型(可称为第三模型,即图8中的模型M2),第二次得到训练后的第二模型就为第二阶段(即图8中的阶段2)。依次类推,按照上述方式,直至第二无标签样本集为空。
此外,为进一步理解半监督学习模型的更多实施细节,下面以mean teacher模型为例,对mean teacher模型的整个训练的过程(训练样本以图片为例)进行示意:具体请参阅图9,图9为mean teacher模型训练过程的一个示意图。传统的半监督学习方法以数据集 D={x_1,x_2,…,x_N} 为训练集,其中,N是所有训练样本的数目,x_n是图片数据中第n个样本。另以 y_n 表示样本 x_n 的真实标签,在设定中 y_n 对于训练算法是未知的,特别地,只有一小部分包含L个样本的集合的相关真实标签被提供(即本申请上述所述的第一有标签样本集),一般来说,L小于N。那就是说,D 被分为两个子集 D_L 和 D_U,分别代表有标签样本集和无标签样本集。本申请的训练方法则是将训练集分为三个部分 D=D_L∪D_O∪D_U,其中,D_O 是用于一比特标注的样本集合,D_O 中的样本数量不做限定。对于 D_O 中的每个样例,标注者被提供以对应图片和模型对该图片的预测标签,标注者的工作就是判断该预测标签是否为这张图片本身所属的真实分类类别,如果预测正确,则该图片被分配以正标签 y⁺,否则它被分配以一个负标签,表示为 y⁻。从信息论的角度来说,标注者通过回答一个是或否的问题,为系统提供了1比特的监督信息。对于需要获得样本的正确标签的标注方法来说,所获得的监督信息为 log₂C 比特,C为分类类别的总数,例如,对于一个100类的数据集,可以选择标注10K个正确标签,这提供了 10K×log₂100≈66.4K 比特的信息量;或是准确地标注5K个样本,这提供33.2K比特的信息量,然后将剩下的33.2K信息量通过回答是或否的问题来完成。由于标注者只需回答是或否的问题,这种做法提供一比特的信息,所以标注一张图片的损耗减少了,在相同损耗下,可以得到更多的一比特标注信息。
通常当能够获得更多的正确标签的训练样本,模型的精度会得到提升。因此最直接的方法是将训练过程分为多个阶段,每个阶段对无标签样本集中的部分样本进行预测,针对预测标签重新构建训练集,再利用重新构建的训练集更新模型,以此来加强模型。初始模型 M_0 是在 D_L 作为有标签数据,D_U 作为无标签数据的条件下,通过一个半监督学习过程训练得到的。在本申请实施例中,则利用Mean Teacher模型进行示意,假设接下来的训练分为T个阶段(训练的终止条件是无标签样本都被选完)。另以 U_{t-1} 表示第t-1阶段的无准确标签样本集(就是指负标签样本和无标签样本的集合),这里使用 P 和 N 分别表示预测正确的和预测错误的样本集,这两个集合被初始化为空集。在第t阶段,首先从无标签样本集当中随机选出固定数量的子集 S_t,然后利用上一阶段得到的模型 M_{t-1} 为 S_t 中的样本预测标签。通过检查正确标签,预测正确的样本被加入正标签集合 P,预测错误的样本被加入负标签集合 N,所以整个训练集就被分为了三个部分:有准确标签的集合 D_L∪P、有负标签的集合 N 和余下的无标签样本集 U_t。最终在由这三部分构成的训练集上重新训练,然后将 M_{t-1} 更新为 M_t,得到更强的模型。
之后,本申请实施例基于mean Teacher模型来设计标签抑制方法,即设计负标签样本对应的损失函数。mean Teacher模型包含教师模型和学生模型两部分,给定一个训练图片,如果它拥有正确标签则计算相应的交叉熵损失函数,不论训练图片有无正确标签,都计算一个教师模型和学生模型的输出之间的距离作为额外的损失函数,该额外的损失函数就为均方误差损失函数。另设f(x;θ)为学生模型的表达函数,其中θ表示学习模型相应的网络参数,教师模型被表示为f(x;θ′),其中θ′为教师模型相应的网络参数。相应的损耗函数(即总的loss)被定义为如下形式:

loss = E[CE(f(x;θ), y⁺)] + λ·E[MSE(f(x;θ), f(x;θ′))] + γ·E[d(f(x;θ′), f̃(x;θ′))]

其中,λ和γ均表示平衡系数,是通过训练得到的可调节的参数。式中,E[·]表示对某个阶段的所有样本输出的预测值取均值,CE表示交叉熵损失函数,MSE表示均方误差损失函数,y⁺表示正标签,f̃(x;θ′)表示将负标签对应维度置为零后得到的修改值,d(·,·)表示预测值与修改值之间的差值。对于拥有正确标签的样本,模型的输出同时受交叉熵项和一致性项的约束,对于拥有负标签的样本,本方法通过增加一个新的损失函数,该新的损失函数基于修改损耗函数当中第二项教师模型的输出f(x;θ′)的相关位置的值,使得负标签对应类别的概率得分被抑制为0,例如,假设有100个分类类别,那么mean teacher模型输出的预测值就是一个100维向量,每个维度表示的是输出的预测值对应分类类别所占的预测概率,假设某个图片为负标签“不是狗”,第2个维度是“狗”的概率,那么对应该图片,就可以将第2个维度置为0,修改前的预测值和修改后的预测值(即修改值)之间的差值,就是本申请所述的损耗函数的第三项。
为了对本申请实施例所带来的有益效果有更为直观的认识,以下对本申请实施例所带来的技术效果作进一步的对比,基于mean teacher模型,本申请实施例所提供的半监督学习的训练方法在三个流行的图片分类数据集上进行实验,分别是CIFAR100、Mini-Imagenet和Imagenet。对于CIFAR100,本申请使用一个26层的shake-shake正则深度残差网络。对于Mini-Imagenet和Imagenet,本申请使用一个50层的残差网络。在CIFAR100和Mini-Imagenet上,本申请共训练180个epoch(即将所述训练样本训练一次的过程),在Imagenet上训练60个epoch。在三个数据集上都使用均方误差损失函数作为一致性损失函数。一致性损失函数的权重参数在CIFAR100上取1000,在Mini-Imagenet和Imagenet取100。相关的的每批样本的大小(batch size)根据硬件条件做了调整。其余的参数设置参照mean teacher模型的原始设置。由表1可知,实验证明了在相同比特数的监督信息下,本申请实施例所提供的训练方法的表现超过其他的半监督的训练方法,在三个数据集上的实验结果(即预测准确率)论证了本申请实施例方法的有效性。
表1:测试结果对比
本申请实施例还提供一种图像处理方法,请参阅图10,图10为本申请实施例提供的图像处理方法的一种流程示意图,具体可以包括如下步骤:
1001、获取目标图像。
上述训练后的第二半监督学习模型就可部署在执行设备上进行应用,具体地,执行设备先获取到目标图像。
1002、将目标图像作为训练后的半监督学习模型的输入,输出对所述目标图像的预测结果。
之后,执行设备将目标图像作为训练后的半监督学习模型的输入,并输出对目标图像的预测结果,该训练后的半监督学习模型为上述实施例所述的训练后的第二半监督学习模型。
在本申请上述实施方式中,阐述了训练后的第二半监督学习模型的一种应用方式,即用来对图像进行类别预测,相比于已有的训练方法训练得到的半监督学习模型,本申请实施例提供的半监督学习模型提高了目标图像识别的准确率。
由于智慧城市、智能终端等领域中都可以用到本申请实施例训练好的半监督学习模型(即训练后的第二模型)来进行图像分类处理,例如,本申请训练好的半监督学习模型可应用于计算机视觉等领域的各种场景和问题,比如常见的一些任务:人脸识别、图像分类、目标检测等。其中每类场景中都会涉及很多可用本申请实施例所提供的训练后的半监督学习模型,下面将对多个落地到产品的多个应用场景进行介绍。
(1)相册分类
用户在手机和云盘上存储了大量图片,按照类别对相册进行分类管理能提高用户的使用体验。利用本申请实施例的训练后的半监督学习模型对相册中的图片进行分类,能够得到按照类别进行排列或者存储的相册,可以方便用户对不同的物体类别进行分类管理,从而方便用户的查找,能够节省用户的管理时间,提高相册管理的效率。
具体地,如图11所示,在采用本申请实施例训练后的半监督学习模型进行相册分类时,可以先将相册中图片输入该训练后的半监督学习模型进行特征提取,根据提取到的特征得到图片的预测标签(即预测的分类类别),接下来再根据图片的分类类别对相册中的图片进行归类,得到按照分类类别进行排列的相册。其中,在根据分类类别对相册中的图片进行排列时,可以将属于同一类的图片排列在一行或者一列。例如,在最终得到的相册中,第一行的图片都属于飞机,第二行的图片都属于汽车。
(2)拍照识物
用户在拍照时,可以利用本申请实施例训练后的半监督学习模型对拍到的照片进行处理,能够自动识别出被拍物体的类别,例如,可以自动识别出被拍物体是何种花卉、动物等。例如,利用本申请实施例训练后的半监督学习模型对拍照得到的共享单车进行识别,能够识别出该物体属于自行车,进一步的,还可以显示该自行车的相关信息,具体可如图12所示。
(3)目标识别
针对得到的图片,还可以利用本申请实施例训练后的半监督学习模型寻找包含有目标对象的图片,例如,如图13所示,可以利用本申请实施例训练后的半监督学习模型对拍照得到的街景寻找该街景图片中是否有目标对象,如是否存在图13中左边的人脸模型。
(4)智能驾驶的物体识别
在自动驾驶的应用场景中,可以利用本申请实施例训练后的半监督学习模型对安装在车辆上的传感器(如,摄像头)拍摄到的图像数据或视频数据中的图像进行处理,从而能够自动识别出在行驶过程中路面上的各种障碍物的类别,例如,可以自动识别出自车的前方行驶路面是否有障碍物以及何种障碍物(如,迎面驶来的卡车、行人、骑行者等关键障碍物,或,路边的灌木丛、树木、建筑物等非关键障碍物等)。
应理解,上文介绍的相册分类、拍照识物、目标识别、智能驾驶的物体识别等只是本申请实施例的图像分类方法所应用的几个具体场景,本申请实施例训练后的半监督学习模型在应用时并不限于上述场景,其能够应用到任何需要进行图像分类或者图像识别的场景中,只要能使用半监督学习模型的领域和设备,都可应用本申请实施例提供的训练好的半监督学习模型,此处不再举例示意。
在上述所对应的实施例的基础上,为了更好的实施本申请实施例的上述方案,下面还提供用于实施上述方案的相关设备。具体参阅图14,图14为本申请实施例提供的训练设备的一种结构示意图,训练设备1400包括:选取模块1401、猜测模块1402、构建模块1403和训练模块1404,其中,选取模块1401,用于从第一无标签样本集选取初始子集,并通过训练后的第一半监督学习模型对所述初始子集进行预测,得到第一预测标签集,所述第一 半监督学习模型由初始半监督学习模型通过初始训练集训练得到,所述初始训练集包括第一有标签样本集和所述第一无标签样本集;猜测模块1402,用于根据所述第一预测标签集将所述初始子集分为第一子集和第二子集,所述第一子集为预测正确的分类类别对应的样本集合,所述第二子集为预测错误的分类类别对应的样本集合;构建模块1403,用于构建第一训练集,所述第一训练集包括第二有标签样本集、第二无标签样本集和负标签样本集,所述第二有标签样本集为包括所述第一有标签样本集和所述第一子集的具有正确分类类别的样本集合,所述第二无标签样本集为所述第一无标签样本集中除所述初始子集之外的无标签样本的集合,所述负标签样本集为包括所述第二子集的具有错误分类类别的样本集合;训练模块1404,用于根据所述第一训练集训练所述初始半监督学习模型,得到训练后的第二半监督学习模型。
在本申请上述实施方式中,首先,通过训练后的第一半监督学习模型对一部分无标签样本的分类类别进行预测,得到预测标签,并通过猜测模块1402判断各个预测标签是否正确,如果预测正确则获得该样本的正确标签(即正标签),否则可排除掉该样本的一个错误标签(即负标签),之后,在下一训练阶段,构建模块1403利用上述信息重新构建训练集(即第一训练集),并通过训练模块1404根据该第一训练集重新训练初始半监督学习模型,从而提高模型的预测准确率,并且,由于猜测模块1402只需对预测标签回答“是”或“否”,这种标注方式能够缓解机器学习中需要大量有正确标签数据的人工标注压力。
在一种可能的设计中,初始半监督学习模型的网络结构具体可以有多种表现形式,例如,可以包括如下模型中的任意一种:Π-model、VAT、LPDSSL、TNAR、pseudo-label、DCT、mean teacher模型。
在本申请上述实施方式中,阐述了可应用本申请实施例提供的训练方法的半监督学习模型可以有哪些,具备普适性和可选择性。
在一种可能的设计中,若初始半监督学习模型是只具备一个损失函数的学习模型,如,Π-model、VAT、LPDSSL、TNAR、pseudo-label、DCT中的任意一种,那么训练模块1404,具体用于:针对第二有标签样本集和第二无标签样本集,根据第二有标签样本集和第二无标签样本集,利用第一损失函数对初始半监督学习模型进行训练,该第一损失函数就为初始半监督学习模型原有的损失函数loss1;针对负标签样本集,则根据该负标签样本集,利用第二损失函数对初始半监督学习模型进行训练,第二损失函数(可称为loss2)就为模型输出的预测值与修改值之间的差值,所述修改值为将预测错误的分类类别在预测值上的对应维度置为零的值,该第二损失函数loss2就为上述所述的针对无标签样本集构建的新的损失函数。最后,再根据loss=loss1+σ*loss2更新该初始模型,其中,σ表示平衡系数,是通过训练得到的一个可调节的参数,loss是整个半监督学习模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。
在本申请上述实施方式中,阐述了当初始半监督学习模型的损失函数为一个时,可针对负标签样本集再构建一个新的损失函数,即针对训练集中的不同类型的样本集,对应采用不同的损失函数,再基于总的损失函数对该初始半监督学习模型进行训练,更具有针对性。
在一种可能的设计中,本申请实施例的训练方法除了可以用于训练上述原本只有一个损失函数的半监督学习模型,还可以用于训练损失函数有两个或多于两个的半监督学习模型,过程是类似的,具体地,该初始半监督学习模型可以是mean teacher模型。由于mean teacher模型的训练策略是:假设训练样本为有标签样本(x1,y1)以及无标签样本x2,其中,y1为x1的标签。将有标签样本(x1,y1)输入学生模型,从而计算损失函数1的输出值loss11;将无标签样本x2输入学生模型,从而计算得到预测标签label1,将无标签样本x2进行一些数据处理(一般为增加噪声的扰动处理)后输入教师模型,从而计算得到预测标签label2。如果mean teacher模型足够稳定,那么预测标签label1和预测标签label2应该是一样的,即教师模型能抵抗无标签样本x2的扰动,也就是说,希望学生模型和教师模型的预测标签尽量相等,因此根据lable1和label2得到损失函数2的输出值loss12。最后,根据loss=loss11+λ*loss12更新学生模型,其中,λ表示平衡系数,是通过训练得到的一个可调节的参数,loss是整个mean teacher模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。因此,所述训练模块1404,具体还用于:针对第二有标签样本集,根据第二有标签样本集,利用第三损失函数对mean teacher模型进行训练,该第三损失函数就为上述所述的损失函数1(即loss11);训练设备还将根据第二有标签样本集和第二无标签样本集,利用第四损失函数对mean teacher模型进行训练,该第四损失函数就为上述所述的损失函数2(即loss12),该第三损失函数loss11和第四损失函数loss12均为mean teacher模型原有的损失函数;此外,针对负标签样本,还会根据负标签样本集,利用第五损失函数对mean teacher模型进行训练,该第五损失函数(可称为loss13)就为模型输出的预测值与修改值之间的差值,所述修改值为将预测错误的分类类别在所述预测值上的对应维度置为零的值,该第五损失函数loss13就为上述所述的针对无标签样本集构建的新的损失函数。最后,再根据loss=loss11+λ*loss12+γ*loss13更新该初始的mean teacher模型,其中,λ和γ均表示平衡系数,是通过训练得到的可调节的参数,loss是整个mean teacher模型总的损失函数的输出值,训练的过程就是使得这个总的loss尽可能的小。
在本申请上述实施方式中,阐述了当初始半监督学习模型是mean teacher模型时,也可针对负标签样本集再构建一个新的损失函数,即针对训练集中的不同类型的样本集,对应采用不同的损失函数,再基于总的损失函数对该初始半监督学习模型进行训练,更具有针对性。
在一种可能的设计中,该第三损失函数可以为交叉熵损失函数;和/或,该第四损失函数可以为均方误差损失函数。
在本申请上述实施方式中,阐述了在mean teacher模型中,第三损失函数和第四损失函数的具体形式,具备可实现性。
在一种可能的设计中,该训练设备1400还可以包括:触发模块1405,触发模块1405用于将所述第二无标签样本集作为新的第一无标签样本集,且将所述第二半监督学习模型作为新的第一半监督学习模型,触发所述选取模块1401、所述猜测模块1402、所述构建模块1403和所述训练模块1404重复执行对应步骤,直至所述第二无标签样本集为空。
在本申请上述实施方式中,通常当能够获得更多的正确标签的训练样本,模型的精度 就会得到提升。因此最直接的方法是将训练过程分为多个阶段,每个阶段从第一无标签样本集中选出部分样本进行预测,针对预测标签重新构建训练集,再利用重新构建的训练集更新模型,从而每个阶段得到的训练后的第二半监督学习模型的泛化能力和预测准确率都比上一阶段得到的第二半监督学习模型更强。
在一种可能的设计中,在根据第一训练集训练初始半监督学习模型,得到训练后的第二半监督学习模型之后,触发模块1405还可用于:将训练后的第二半监督学习模型部署在目标设备上,该目标设备就用于获取目标图像,该训练后的第二半监督学习模型就用于对目标图像进行标签预测。
在本申请上述实施方式中,阐述了训练好的第二半监督学习模型的具体用处,即部署在目标设备上用于对目标图像进行标签预测,即用来对图像进行类别预测,相比于已有的训练方法训练得到的半监督学习模型,本申请实施例提供的训练后的第二半监督学习模型提高了目标图像识别的准确率。
在一种可能的设计中,所述选取模块1401,具体用于:从第一无标签样本集中随机选取预设数量的无标签样本构成初始子集。
在本申请上述实施方式中,阐述了如何从第一无标签样本集中选取无标签样本来构成初始子集的一种实现方式,随机选择的方式保证了选取样本的均衡。
需要说明的是,图14对应实施例所述的训练设备1400中各模块/单元之间的信息交互、执行过程等内容,与本申请中图5-9对应的实施例基于同一构思,具体内容可参见本申请前述所示实施例中的叙述,此处不再赘述。
本申请实施例还提供一种执行设备,具体参阅图15,图15为本申请实施例提供的执行设备的一种结构示意图,执行设备1500包括:获取模块1501和识别模块1502,其中,获取模块1501,用于获取目标图像;识别模块1502,用于将所述目标图像作为训练后的半监督学习模型的输入,输出对所述目标图像的预测结果,所述训练后的半监督学习模型为上述实施例所述的训练后的第二半监督学习模型。
在本申请上述实施方式中,阐述了训练后的第二半监督学习模型的一种应用方式,即用来对图像进行类别预测,相比于已有的训练方法训练得到的半监督学习模型,本申请实施例提供的半监督学习模型提高了目标图像识别的准确率。
需要说明的是,图15对应实施例所述的执行设备1500中各模块/单元之间的信息交互、执行过程等内容,与本申请中图10对应的实施例基于同一构思,具体内容可参见本申请前述所示实施例中的叙述,此处不再赘述。
接下来介绍本申请实施例提供的一种训练设备,请参阅图16,图16为本申请实施例提供的训练设备的一种结构示意图,训练设备1600上可以部署有图14对应实施例中所描述的训练设备1400,用于实现图5-9对应实施例中训练设备的功能,具体的,训练设备1600由一个或多个服务器实现,训练设备1600可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上中央处理器(central processing units,CPU)1622(例如,一个或一个以上中央处理器)和存储器1632,一个或一个以上存储应用程序1642或数据1644的存储介质1630(例如一个或一个以上海量存储设备)。其中,存储器1632和存储介质1630可 以是短暂存储或持久存储。存储在存储介质1630的程序可以包括一个或一个以上模块(图示没标出),每个模块可以包括对训练设备1600中的一系列指令操作。更进一步地,中央处理器1622可以设置为与存储介质1630通信,在训练设备1600上执行存储介质1630中的一系列指令操作。
训练设备1600还可以包括一个或一个以上电源1626,一个或一个以上有线或无线网络接口1650,一个或一个以上输入输出接口1658,和/或,一个或一个以上操作系统1641,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM等等。
本申请实施例中,中央处理器1622,用于执行图7对应实施例中的训练设备执行的半监督学习模型的训练方法。具体地,中央处理器1622用于:首先,根据获取到的初始训练集训练初始半监督学习模型(可简称为初始模型),从而得到训练后的第一半监督学习模型(可简称为训练后的第一模型),该初始训练集中,一部分为有标签样本,另一部分为无标签样本,其中,这部分有标签样本称为第一有标签样本集,这部分无标签样本称为第一无标签样本集。得到训练后的第一模型后,再从初始训练集中的第一无标签样本集选取初始子集,初始子集中的各个无标签样本构成测试数据,用于对训练后的第一模型进行测试,通过该训练后的第一模型对选出的初始子集中的各个无标签样本进行预测,从而得到选取出的每个无标签样本对应的预测标签(训练后的第一模型会输出选取出来的每个无标签样本的在各个分类类别上的概率预测,通常选择概率最大的一个分类类别作为模型对该样本的预测标签),各个预测标签就构成第一预测标签集。得到第一预测标签集之后,将根据该第一预测标签集对初始子集进行一比特标注,由于这种做法提供了log
₂2=1比特(bit)的信息量(即“是”或“否”),因此称为一比特标注。如上述所述,一比特标注的方式具体为:标注者针对每个预测样本对应的预测标签回答一个“是”或“否”的问题,若预测标签是预测正确的分类类别,则获得该无标签样本的正标签(也可称为正确标签),比如,预测标签为“狗”,对于该无标签样本的真实标签也是“狗”,那么则预测正确,该无标签样本获得正标签“狗”;若预测标签是预测错误的分类类别,则获得该无标签样本的负标签,据此可排除掉该无标签样本的一个错误标签,比如,预测标签为“猫”,对于该无标签样本的真实标签确是“狗”,那么则预测错误,该无标签样本获得负标签“不是猫”。经过一比特标注后,初始子集就被分为了第一子集和第二子集,其中,第一子集为预测正确的分类类别(即正标签)对应的样本集合,第二子集为预测错误的分类类别(即负标签)对应的样本集合。在获得一比特标注的结果后,即获得了相应数量的正标签和负标签,之后,重新构建训练集,重新构建的训练集可称为第一训练集,构建的具体方式可以是将正标签样本(即第一子集)与已有的有标签样本放到一起,作为本阶段的有标签样本,也可称为第二有标签样本集;各个负标签样本(即第二子集)就构成本阶段的负标签样本集;第一无标签样本集中余下的无标签样本就构成本阶段的第二无标签样本集。这三类样本共同构成第一训练集。构建好第一训练集后,再根据该第一训练集重新训练初始模型,得到训练后的能力更强的第二半监督学习模型(可简称为训练后的第二模型)。
需要说明的是,中央处理器1622执行上述各个步骤的具体方式,与本申请中图7对应的方法实施例基于同一构思,其带来的技术效果与本申请中图7对应的实施例相同,具体 内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
接下来介绍本申请实施例提供的一种执行设备,请参阅图17,图17为本申请实施例提供的执行设备的一种结构示意图,执行设备1700具体可以表现为各种终端设备,如虚拟现实VR设备、手机、平板、笔记本电脑、智能穿戴设备、监控数据处理设备或者雷达数据处理设备等,此处不做限定。其中,执行设备1700上可以部署有图15对应实施例中所描述的执行设备1500,用于实现图10对应实施例中执行设备的功能。具体的,执行设备1700包括:接收器1701、发射器1702、处理器1703和存储器1704(其中执行设备1700中的处理器1703的数量可以一个或多个,图17中以一个处理器为例),其中,处理器1703可以包括应用处理器17031和通信处理器17032。在本申请的一些实施例中,接收器1701、发射器1702、处理器1703和存储器1704可通过总线或其它方式连接。
存储器1704可以包括只读存储器和随机存取存储器,并向处理器1703提供指令和数据。存储器1704的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。存储器1704存储有处理器和操作指令、可执行模块或者数据结构,或者它们的子集,或者它们的扩展集,其中,操作指令可包括各种操作指令,用于实现各种操作。
处理器1703控制执行设备1700的操作。具体的应用中,执行设备1700的各个组件通过总线系统耦合在一起,其中总线系统除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都称为总线系统。
本申请上述图4对应实施例揭示的方法可以应用于处理器1703中,或者由处理器1703实现。处理器1703可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1703中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1703可以是通用处理器、数字信号处理器(digital signal processing,DSP)、微处理器或微控制器,还可进一步包括专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。该处理器1703可以实现或者执行本申请图10对应的实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1704,处理器1703读取存储器1704中的信息,结合其硬件完成上述方法的步骤。
接收器1701可用于接收输入的数字或字符信息,以及产生与执行设备1700的相关设置以及功能控制有关的信号输入。发射器1702可用于通过第一接口输出数字或字符信息;发射器1702还可用于通过第一接口向磁盘组发送指令,以修改磁盘组中的数据;发射器1702还可以包括显示屏等显示设备。
本申请实施例中还提供一种计算机可读存储介质,该计算机可读存储介质中存储有用于进行信号处理的程序,当其在计算机上运行时,使得计算机执行如前述所示实施例描述 中执行设备所执行的步骤。
本申请实施例提供的训练设备、执行设备等具体可以为芯片,芯片包括:处理单元和通信单元,所述处理单元例如可以是处理器,所述通信单元例如可以是输入/输出接口、管脚或电路等。该处理单元可执行存储单元存储的计算机执行指令,以使训练设备内的芯片执行上述图5-9所示实施例描述的半监督学习模型的训练方法,或以使得执行设备内的芯片执行上述图10所示实施例描述的图像处理方法。可选地,所述存储单元为所述芯片内的存储单元,如寄存器、缓存等,所述存储单元还可以是所述无线接入设备端内的位于所述芯片外部的存储单元,如只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)等。
具体的,请参阅图18,图18为本申请实施例提供的芯片的一种结构示意图,所述芯片可以表现为神经网络处理器NPU 200,NPU 200作为协处理器挂载到主CPU(Host CPU)上,由Host CPU分配任务。NPU的核心部分为运算电路2003,通过控制器2004控制运算电路2003提取存储器中的矩阵数据并进行乘法运算。
在一些实现中,运算电路2003内部包括多个处理单元(Process Engine,PE)。在一些实现中,运算电路2003是二维脉动阵列。运算电路2003还可以是一维脉动阵列或者能够执行例如乘法和加法这样的数学运算的其它电子线路。在一些实现中,运算电路2003是通用的矩阵处理器。
举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路从权重存储器2002中取矩阵B相应的数据,并缓存在运算电路中每一个PE上。运算电路从输入存储器2001中取矩阵A数据与矩阵B进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器(accumulator)2008中。
统一存储器2006用于存放输入数据以及输出数据。权重数据直接通过存储单元访问控制器(Direct Memory Access Controller,DMAC)2005被搬运到权重存储器2002中。输入数据也通过DMAC被搬运到统一存储器2006中。
BIU为Bus Interface Unit,即总线接口单元2010,用于AXI总线与DMAC和取指存储器(Instruction Fetch Buffer,IFB)2009的交互。
总线接口单元2010(Bus Interface Unit,简称BIU),用于取指存储器2009从外部存储器获取指令,还用于存储单元访问控制器2005从外部存储器获取输入矩阵A或者权重矩阵B的原数据。
DMAC主要用于将外部存储器DDR中的输入数据搬运到统一存储器2006,或将权重数据搬运到权重存储器2002中,或将输入数据搬运到输入存储器2001中。
向量计算单元2007包括多个运算处理单元,在需要的情况下,对运算电路的输出做进一步处理,如向量乘,向量加,指数运算,对数运算,大小比较等等。主要用于神经网络中非卷积/全连接层网络计算,如Batch Normalization(批归一化),像素级求和,对特征平面进行上采样等。
在一些实现中,向量计算单元2007能将经处理的输出的向量存储到统一存储器2006。例如,向量计算单元2007可以将线性函数和/或非线性函数应用到运算电路2003的输出, 例如对卷积层提取的特征平面进行线性插值,再例如累加值的向量,用以生成激活值。在一些实现中,向量计算单元2007生成归一化的值、像素级求和的值,或二者均有。在一些实现中,处理过的输出的向量能够用作到运算电路2003的激活输入,例如用于在神经网络中的后续层中的使用。
控制器2004连接的取指存储器(instruction fetch buffer)2009,用于存储控制器2004使用的指令;
统一存储器2006,输入存储器2001,权重存储器2002以及取指存储器2009均为On-Chip存储器。外部存储器私有于该NPU硬件架构。
其中,上述任一处提到的处理器,可以是一个通用中央处理器,微处理器,ASIC,或一个或多个用于控制上述第一方面方法的程序执行的集成电路。
另外需说明的是,以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本申请提供的装置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,训练设备,或者网络设备等)执行本申请各个实施例所述的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、训练设备或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、训练设备或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的训练设备、数据 中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(digital video disc,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
Claims (23)
- 一种半监督学习模型的训练方法,其特征在于,包括:从第一无标签样本集选取初始子集,并通过训练后的第一半监督学习模型对所述初始子集进行预测,得到第一预测标签集,所述第一半监督学习模型由初始半监督学习模型通过初始训练集训练得到,所述初始训练集包括第一有标签样本集和所述第一无标签样本集;根据所述第一预测标签集将所述初始子集分为第一子集和第二子集,所述第一子集为预测正确的分类类别对应的样本集合,所述第二子集为预测错误的分类类别对应的样本集合;构建第一训练集,所述第一训练集包括第二有标签样本集、第二无标签样本集和负标签样本集,所述第二有标签样本集为包括所述第一有标签样本集和所述第一子集的具有正确分类类别的样本集合,所述第二无标签样本集为所述第一无标签样本集中除所述初始子集之外的无标签样本的集合,所述负标签样本集为包括所述第二子集的具有错误分类类别的样本集合;根据所述第一训练集训练所述初始半监督学习模型,得到训练后的第二半监督学习模型。
- 根据权利要求2所述的方法,其特征在于,所述初始半监督学习模型为mean teacher模型,则所述根据所述第一训练集训练所述初始半监督学习模型,得到训练后的第二半监督学习模型包括:根据所述第二有标签样本集,利用第三损失函数对所述mean teacher模型进行训练;根据所述第二有标签样本集和所述第二无标签样本集,利用第四损失函数对所述mean teacher模型进行训练,所述第三损失函数和所述第四损失函数为所述mean teacher模型原有的损失函数;根据所述负标签样本集,利用第五损失函数对所述mean teacher模型进行训练,所述第五损失函数为模型输出的预测值与修改值之间的差值,所述修改值为将预测错误的分类类别在所述预测值上的对应维度置为零的值。
- 根据权利要求4所述的方法,其特征在于,所述第三损失函数为交叉熵损失函数;和/或,所述第四损失函数为均方误差损失函数。
- 根据权利要求1-5中任一项所述的方法,其特征在于,所述方法还包括:将所述第二无标签样本集作为新的第一无标签样本集、所述第二半监督学习模型作为新的第一半监督学习模型,重复执行上述步骤,直至所述第二无标签样本集为空。
- 根据权利要求1-6中任一项所述的方法,其特征在于,所述在根据所述第一训练集训练所述初始半监督学习模型,得到训练后的第二半监督学习模型之后,所述方法还包括:将所述训练后的第二半监督学习模型部署在目标设备上,所述目标设备用于获取目标图像,所述训练后的第二半监督学习模型用于对所述目标图像进行标签预测。
- 根据权利要求1-7中任一项所述的方法,其特征在于,所述从第一无标签样本集选取初始子集包括:从所述第一无标签样本集随机选取预设数量的无标签样本构成所述初始子集。
- 一种图像处理方法,其特征在于,包括:获取目标图像;将所述目标图像作为训练后的半监督学习模型的输入,输出对所述目标图像的预测结果,所述训练后的半监督学习模型为权利要求1-8中任一项所述方法中的第二半监督学习模型。
- 一种训练设备,其特征在于,包括:选取模块,用于从第一无标签样本集选取初始子集,并通过训练后的第一半监督学习模型对所述初始子集进行预测,得到第一预测标签集,所述第一半监督学习模型由初始半监督学习模型通过初始训练集训练得到,所述初始训练集包括第一有标签样本集和所述第一无标签样本集;猜测模块,用于根据所述第一预测标签集将所述初始子集分为第一子集和第二子集,所述第一子集为预测正确的分类类别对应的样本集合,所述第二子集为预测错误的分类类别对应的样本集合;构建模块,用于构建第一训练集,所述第一训练集包括第二有标签样本集、第二无标签样本集和负标签样本集,所述第二有标签样本集为包括所述第一有标签样本集和所述第一子集的具有正确分类类别的样本集合,所述第二无标签样本集为所述第一无标签样本集中除所述初始子集之外的无标签样本的集合,所述负标签样本集为包括所述第二子集的具有错误分类类别的样本集合;训练模块,用于根据所述第一训练集训练所述初始半监督学习模型,得到训练后的第二半监督学习模型。
- 根据权利要求11所述的设备,其特征在于,所述初始半监督学习模型为mean teacher模型,则所述训练模块,具体还用于:根据所述第二有标签样本集,利用第三损失函数对所述mean teacher模型进行训练;根据所述第二有标签样本集和所述第二无标签样本集,利用第四损失函数对所述mean teacher模型进行训练,所述第三损失函数和所述第四损失函数为所述mean teacher模型原有的损失函数;根据所述负标签样本集,利用第五损失函数对所述mean teacher模型进行训练,所述第五损失函数为模型输出的预测值与修改值之间的差值,所述修改值为将预测错误的分类类别在所述预测值上的对应维度置为零的值。
- 根据权利要求13所述的设备,其特征在于,所述第三损失函数为交叉熵损失函数;和/或,所述第四损失函数为均方误差损失函数。
- 根据权利要求10-14中任一项所述的设备,其特征在于,所述设备还包括:触发模块,用于将所述第二无标签样本集作为新的第一无标签样本集、所述第二半监督学习模型作为新的第一半监督学习模型,触发所述选取模块、所述猜测模块、所述构建模块和所述训练模块重复执行对应步骤,直至所述第二无标签样本集为空。
- 根据权利要求10-15中任一项所述的设备,其特征在于,所述触发模块,还用于:将所述训练后的第二半监督学习模型部署在目标设备上,所述目标设备用于获取目标图像,所述训练后的第二半监督学习模型用于对所述目标图像进行标签预测。
- 根据权利要求10-16中任一项所述的设备,其特征在于,所述选取模块,具体用于:从所述第一无标签样本集随机选取预设数量的无标签样本构成所述初始子集。
- 一种执行设备,其特征在于,包括:获取模块,用于获取目标图像;识别模块,用于将所述目标图像作为训练后的半监督学习模型的输入,输出对所述目标图像的预测结果,所述训练后的半监督学习模型为权利要求1-8中任一项所述方法中的第二半监督学习模型。
- 一种训练设备,包括处理器和存储器,其特征在于,所述处理器与所述存储器耦合,其特征在于,所述存储器,用于存储程序;所述处理器,用于执行所述存储器中的程序,使得所述训练设备执行如权利要求1-8中任一项所述的方法。
- 一种执行设备,包括处理器和存储器,其特征在于,所述处理器与所述存储器耦合,其特征在于,所述存储器,用于存储程序;所述处理器,用于执行所述存储器中的程序,使得所述执行设备执行如权利要求9所述的方法。
- 一种计算机可读存储介质,包括程序,其特征在于,当其在计算机上运行时,使得计算机执行如权利要求1-8中任一项所述的方法,或,使得计算机执行如权利要求9所述的方法。
- 一种包含指令的计算机程序产品,其特征在于,当其在计算机上运行时,使得计算机执行如权利要求1-8中任一项所述的方法,或,使得计算机执行如权利要求9所述的方法。
- 一种芯片,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,执行如权利要求1-8中任一项所述的方法,或,执行如权利要求9所述的方法。