WO2022194049A1 - Object processing method and apparatus
- Publication number
- WO2022194049A1 (PCT/CN2022/080397)
- Authority: WIPO (PCT)
- Prior art keywords: label, sample, object processing, target sample, samples
Classifications
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/20—Pattern recognition: Analysing
- G06N20/00—Machine learning
- G06N5/04—Inference or reasoning models
- G06N5/041—Abduction
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
Description
- the present application relates to the technical field of artificial intelligence, and in particular, to an object processing method and device.
- some machine learning algorithms, such as deep neural networks (DNN), have strong nonlinear feature representation ability and can learn the clean samples (i.e., samples with correct labels) in a sample dataset, but they can likewise fit the noise samples (i.e., samples with wrong labels).
- the sample screening method relies on the premise that the loss function distributions of clean samples and noise samples differ, for example forming a bimodal distribution.
- in some training stages, however, the loss values of both clean samples and noise samples are large, so neither fits the assumed loss function distribution (such as a bimodal distribution) well. If sample screening is performed in this manner, a large number of samples may be misjudged: clean samples may be misjudged as noise samples and noise samples as clean samples, which affects the performance of the machine learning model.
- the present application provides an object processing method, which addresses the low noise-sample detection accuracy of the related art.
- the present application also provides corresponding apparatuses, devices, computer-readable storage media, and computer program products.
- the embodiments of the present application provide an object processing method, which can acquire the true label corresponding to each sample in the noise-containing sample set and use the true labels to correct the labels of the samples. Based on this label inference and correction mechanism, the noise samples in the noise-containing sample set can be identified and the labels of all samples improved, which improves the training quality of the object processing network on the noise-containing sample set and the processing performance of the object processing network.
- the embodiments of the present application place no restrictions (such as a particular loss distribution) on the noise-containing sample set, so the method of training the network with the noise-containing sample set has strong generalization ability.
- the object to be processed is input to an object processing network, and the processing result of the object to be processed is output through the object processing network; the object processing network is obtained by training with a noise-containing sample set that includes at least one mislabeled noise sample, and the training includes:
- the initial object processing network is supervised and trained using the target samples with corrected labels, to obtain the object processing network.
- the obtaining the inferred label of the target sample in the noise-containing sample set includes:
- the inferred label of the target sample is determined according to the feature similarity between the target sample and the plurality of reference samples respectively.
- a plurality of reference samples of the target sample are selected using the feature similarity, and the inferred label of the target sample is then determined using the feature similarity.
- the feature similarity can indicate the importance of a reference sample for inferring the true label of the target sample. Therefore, using the feature similarity to determine the reference samples and the inferred label of the target sample yields a more accurate inferred label.
- the feature similarity is determined based on a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.
- the importance of other samples for inferring the true label of the target sample i can thus be measured according to both the intra-class and inter-class relationships of the target sample, with high accuracy.
- the determining the inferred label of the target sample according to the feature similarity between the target sample and the multiple reference samples respectively includes:
- the label probability distribution of the target sample is determined, where the label probability distribution includes the probability that the target sample corresponds to each label, and the labels include the labels in the noise-containing sample set;
- the label with the highest probability in the label probability distribution is used as the inferred label of the target sample.
- the probability distribution of the target sample over all labels can be obtained from the feature similarity, that is, a softened label of the target sample, yielding a more accurate inferred label for the target sample.
- the inferred label of the target sample is determined by using an object processing branch network, and the object processing branch network is also trained based on the set of noisy samples.
- multiple object processing branch networks can be simultaneously trained by using the noise-containing sample set, and the inferred labels determined by different branch networks can be exchanged.
- this method can not only overcome the errors a network branch accumulates during self-iteration, but also integrate the advantages of different network branches in filtering different noises.
- modifying the label of the noise sample according to the inferred label includes:
- the label of the target sample can be jointly corrected by using the inferred label and the prediction result of the object processing network, so as to improve the accuracy of the corrected label.
- the prediction result includes the prediction result of the object processing network for the target sample or for a sample obtained by performing data enhancement on the target sample.
- performing data enhancement on the target sample can enrich the number of samples and reduce the possibility of overfitting of the object processing network.
- modifying the label of the target sample according to the inferred label and the prediction result includes:
- the weighted sum of the inferred label and the prediction result is used as the corrected label of the target sample, and the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- using the weighted sum of the inferred label and the prediction result as the corrected label, with the confidence of the inferred label as its weight, can further improve the accuracy of the corrected label.
- the use of the target sample after label correction to supervise and train an initial object processing network to obtain the object processing network includes:
- the initial object processing network is trained using clean samples and/or noise samples in the set of noisy samples to obtain the object processing network.
- the object processing network can be trained by various selection methods such as clean samples and/or noise samples.
- the training of the initial object processing network by using clean samples and noise samples in the noise-containing sample set to obtain the object processing network includes:
- the initial object processing network is trained by using the fused samples to obtain the object processing network.
- the clean samples may be used as a basis, any other samples may be fused onto the clean samples, and the fused samples may be used to train the object processing network; this enhances the impact of the clean samples on the network while still exploiting the value of the noise samples.
- an embodiment of the present application provides a method for generating an object processing network, where the object processing network is obtained by training with a noise-containing sample set that includes at least one noise sample with an incorrect label, the method including:
- the initial object processing network is supervised and trained using the target samples with corrected labels, to obtain the object processing network.
- the obtaining the inferred label of the target sample in the noise-containing sample set includes:
- the inferred label of the target sample is determined according to the feature similarity between the target sample and the plurality of reference samples respectively.
- the feature similarity is determined based on a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.
- determining the inferred label of the target sample according to the feature similarity between the target sample and the multiple reference samples including:
- the label probability distribution of the target sample is determined, where the label probability distribution includes the probability that the target sample corresponds to each label, and the labels include the labels in the noise-containing sample set;
- the label with the highest probability in the label probability distribution is used as the inferred label of the target sample.
- the inferred label of the target sample is determined by using an object processing branch network, and the object processing branch network is also trained based on the set of noisy samples.
- modifying the label of the noise sample according to the inferred label includes:
- the prediction result includes the prediction result of the object processing network for the target sample or for a sample obtained by performing data enhancement on the target sample.
- modifying the label of the target sample according to the inferred label and the prediction result includes:
- the weighted sum of the inferred label and the prediction result is used as the corrected label of the target sample, and the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- supervising and training an initial object processing network using the target samples with corrected labels to obtain the object processing network includes:
- the initial object processing network is trained using clean samples and/or noise samples in the set of noisy samples to obtain the object processing network.
- the training of the initial object processing network by using clean samples and noise samples in the noise-containing sample set to obtain the object processing network includes:
- the initial object processing network is trained by using the fused samples to obtain the object processing network.
- an embodiment of the present application provides an object processing apparatus, the apparatus comprising:
- an object processing network which is used to output the processing result of the object to be processed;
- the object processing network is obtained by training a noise-containing sample set, and the noise-containing sample set includes at least one noise sample with an incorrect label;
- the label inference module is used to obtain the inferred label of the target sample in the noise-containing sample set
- the label correction module is configured to correct the label of the target sample according to the inferred label; the target sample after the label correction is used to supervise and train an initial object processing network to obtain the object processing network.
- the label inference module is specifically used for:
- the inferred label of the target sample is determined according to the feature similarity between the target sample and the plurality of reference samples respectively.
- the feature similarity is determined based on a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.
- the label inference module is specifically used for:
- the label probability distribution of the target sample is determined, where the label probability distribution includes the probability that the target sample corresponds to each label, and the labels include the labels in the noise-containing sample set;
- the label with the highest probability in the label probability distribution is used as the inferred label of the target sample.
- the inferred label of the target sample is determined by using an object processing branch network, and the object processing branch network is also trained based on the set of noisy samples.
- the label correction module is specifically used for:
- the prediction result includes the prediction result of the object processing network for the target sample or for a sample obtained by performing data enhancement on the target sample.
- the label correction module is specifically used for:
- the weighted sum of the inferred label and the prediction result is used as the corrected label of the target sample, and the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- the object processing network is specifically used for:
- the initial object processing network is trained using clean samples and/or noise samples in the set of noisy samples to obtain the object processing network.
- the object processing network is specifically used for:
- the initial object processing network is trained by using the fused samples to obtain the object processing network.
- an embodiment of the present application provides an apparatus for generating an object processing network, where the object processing network is obtained by training with a noise-containing sample set, and the noise-containing sample set includes at least one noise sample with an incorrect label, the apparatus including:
- the label inference module is used to obtain the inferred label of the target sample in the noise-containing sample set
- the label correction module is configured to correct the label of the target sample according to the inferred label; the target sample after the label correction is used to supervise and train the object processing network until the training termination condition is reached.
- the label inference module is specifically used for:
- the inferred label of the target sample is determined according to the feature similarity between the target sample and the plurality of reference samples respectively.
- the feature similarity is determined based on a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.
- the label inference module is specifically used for:
- the label probability distribution of the target sample is determined, where the label probability distribution includes the probability that the target sample corresponds to each label, and the labels include the labels in the noise-containing sample set;
- the label with the highest probability in the label probability distribution is used as the inferred label of the target sample.
- the inferred label of the target sample is determined by using an object processing branch network, and the object processing branch network is also trained based on the set of noisy samples.
- the label correction module is specifically used for:
- the prediction result includes the prediction result of the object processing network for the target sample or for a sample obtained by performing data enhancement on the target sample.
- the label correction module is specifically used for:
- the weighted sum of the inferred label and the prediction result is used as the corrected label of the target sample, and the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- the object processing network is specifically used for:
- the initial object processing network is trained using clean samples and/or noise samples in the set of noisy samples to obtain the object processing network.
- the object processing network is specifically used for:
- the initial object processing network is trained by using the fused samples to obtain the object processing network.
- an embodiment of the present application provides an object processing apparatus, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured, when executing the instructions, to implement the method of any possible implementation of the above aspects.
- embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the method of any possible implementation of the above aspects is implemented.
- embodiments of the present application provide a computer program product, comprising computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method of any possible implementation of the above aspects.
- an embodiment of the present application provides a chip, where the chip includes at least one processor, and the processor is configured to run a computer program or computer instructions stored in a memory to execute the method of any possible implementation of the foregoing aspects.
- the chip may further include a memory for storing computer programs or computer instructions.
- the chip may further include a communication interface for communicating with other modules other than the chip.
- one or more chips may constitute a chip system.
- FIG. 1 is a distribution diagram of the loss values of clean samples and noise samples in the related art
- FIG. 2 is a schematic structural diagram of a module of an object processing apparatus 100 according to an embodiment of the present application
- FIG. 3 is a schematic flowchart of an object processing method provided by an embodiment of the present application.
- FIG. 4 is a schematic flowchart of a method for inferring a label according to an embodiment of the present application
- FIG. 5 is a schematic flowchart of a method for inferring a label according to a reference sample according to an embodiment of the present application
- FIG. 6 is a schematic diagram of a dual-branch network training provided by an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of a processing device according to an embodiment of the present application.
- Loss adjustment means that in the process of constructing the target loss function for model training, the noise samples are given smaller weights and the clean samples are given larger weights, thereby reducing the impact of noise samples on model training.
- Sample selection refers to the use of clean samples to update network parameters in the training process to directly eliminate the influence of noise samples.
- the small-loss based criterion relies on the different distributions of loss functions for clean samples and noisy samples, as shown in Figure 1, which presents a bimodal distribution. Specifically, a beta mixture model (BMM) or a Gaussian mixture model (GMM) can be used to model the loss function distribution of the samples, and set a threshold to distinguish clean samples from noise samples.
- in the ideal case, the loss functions of clean samples and noisy samples obviously follow a bimodal distribution, but not all sample sets do; for example, the sample loss function of noisy sample sets such as WebVision does not follow a bimodal distribution. Therefore, the training method based on the small-loss criterion has low generalization ability. Moreover, in some training stages, especially at the beginning of training, neither clean samples nor noise samples can be fitted well, so all samples have large loss values, making it very difficult to distinguish clean samples from noise samples by the loss function distribution. If they are distinguished directly by a set threshold, a large number of samples will be misjudged, that is, clean samples may be judged as noise samples and noise samples may be judged as clean samples, which affects the final model performance.
- embodiments of the present application provide an object processing method.
- the method can acquire the true labels corresponding to each sample in the noise-containing sample set, and use the true labels to correct the labels of each sample.
- based on the above label inference and correction mechanism, the noise samples in the noise-containing sample set can be identified and the labels of all samples improved, which improves the training quality of the object processing network on the noise-containing sample set and the processing performance of the object processing network.
- the embodiments of the present application place no restrictions on the noise-containing sample set, so the method of training the network with the noise-containing sample set has strong generalization ability.
- the object processing method provided in this embodiment of the present application may be applied to, but not limited to, the application scenario shown in FIG. 2 .
- the scene includes an object processing apparatus 100 , and the object processing apparatus 100 may include an object processing network 101 , a label inference module 103 and a label correction module 105 .
- the object processing apparatus 100 can be set in a processing device, and the processing device has a central processing unit (Central Processing Unit, CPU) and/or a graphics processing unit (Graphics Processing Unit, GPU), for processing the input object to be processed, to obtain the processing result.
- the objects to be processed include data such as images, texts, and voices.
- the processing methods include image classification, speech recognition, text recognition, and other processing services based on any machine learning model based on supervised learning.
- the processing device may be a physical device or a physical device cluster, such as a terminal, a server, or a server cluster.
- the processing device may also be a virtualized cloud device, such as at least one cloud computing device in a cloud computing cluster.
- the object processing network 101 can be obtained by training based on a noise-containing sample set, and the noise-containing sample set can include at least one noise sample with an incorrect label, such as the three sample examples shown in FIG. 2, where the image of the dog is labeled as wolf; in the noise-containing sample set, the image of the dog is therefore a noise sample.
- the training of the object processing network 101 needs to rely on the label inference module 103 and the label correction module 105 .
- the object processing network 101 can separately extract the feature information of each sample in the noise-containing sample set, and can also determine the object processing result of each sample according to the feature information, such as the probability distribution of each object over all labels; here, all labels refers to all labels involved in the noise-containing sample set, or to a preset label set that includes at least all labels involved in the noise-containing sample set.
- the label inference module 103 is configured to determine the inferred label of each sample in the noise-containing sample set according to the feature information, and determine whether the corresponding sample is a noise sample or a clean sample according to the inferred label.
- the label correction module 105 may be configured to correct the labels of the samples according to the inferred labels of the samples. In one embodiment, the label correction module 105 specifically corrects the label of the sample according to the inferred label and the object processing result determined by the object processing network 101 .
- the label-corrected samples are used to supervise the training of the initial object processing network, and the object processing network 101 is obtained after multiple iterative adjustments.
- the trained object processing network 101 can be used directly.
- the object processing network 101 shown in FIG. 2 can be directly used to classify images and identify the types of objects in each image.
- the training method may include:
- S301 Acquire an inferred label of a target sample in the noise-containing sample set.
- the characteristic information of each sample in the noise-containing sample set may be used to determine the inferred label of the target sample.
- the object processing network 101 can extract feature information of each sample.
- the method for determining the inferred label of the target sample can include:
- S401 Determine feature information of each sample in the noise-containing sample set by using the object processing network.
- S403 Determine a plurality of reference samples of the target sample according to the feature information, and the feature similarity between the target sample and the reference sample satisfies a preset condition.
- the target sample in this embodiment of the present application may refer to any sample in the noise-containing sample set.
- a plurality of reference samples whose feature similarity with the target sample satisfies a preset condition may be selected from the noise-containing sample set.
- the preset condition may include that the similarity between the target sample and the reference sample is greater than a preset threshold, and/or that the reference samples have the highest similarities to the target sample among all samples.
- the feature similarity may be determined based on a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with the first feature distance and the second feature distance.
- the feature similarity can be calculated by using the following expression (1):
- d(f_i, f_j) represents the first feature distance between the target sample i and the reference sample j; the feature distance is the distance between the feature information of the two samples in the feature space, and the smaller the distance, the higher the similarity between the samples
- d(f_j, f_cp) represents the second feature distance between the reference sample j and the class center c_p corresponding to its label
- m and a balancing coefficient are used to balance the relationship between the first feature distance and the second feature distance.
- G_p represents the samples whose original label is p (also the label of the reference sample j) in the noise-containing sample set, and f_n represents the feature information of the n-th sample; the class center feature f_cp is computed from the features of the samples in G_p.
- the feature similarity between the target sample i and other samples j in the noise-containing sample set D can be expressed by the following expression (3):
- the feature similarities in the above feature similarity set may be sorted, and the K samples with the largest feature similarity are used as the reference samples.
- a sample whose feature similarity is greater than a preset threshold may also be used as the reference sample.
- the above two conditions may also be satisfied at the same time, which is not limited in this application.
- the feature similarity S may be used to represent the importance of the reference sample j for inferring the true label of the target sample i.
- if the reference sample j is a noise sample, it will cause large noise interference to the inference of the true label of the target sample i, and its influence needs to be eliminated. When the reference sample j is a noise sample, d(f_j, f_cp) is large, which significantly reduces the feature similarity S and thereby reduces the influence of the reference sample j on the inference process; that is, the inter-class relationship of the target sample i is exploited. To sum up, the importance of other samples to the true label of the target sample i can be measured according to the intra-class and inter-class relationships of the target sample i at the same time, with high accuracy.
- expression (1) is only one example of determining the feature similarity, and the present application does not limit the manner of constructing the feature similarity.
- S405 Determine the inferred label of the target sample according to the feature similarity between the target sample and the multiple reference samples respectively.
- the feature similarity between the reference sample and the target sample may represent the importance of the reference sample for inferring the true label of the target sample. Based on this, the inferred label of the target sample may be determined by using the feature similarity between the target sample and the plurality of reference samples respectively.
- each of the multiple reference samples carries an original label; for example, the reference sample is an image, and the category label of the image is one of cat, dog, boat, and wolf.
- the feature similarity between the target sample and a reference sample expresses the similarity between the two samples, and it may happen that the similarities between the target sample and multiple reference samples with different labels are close to one another at the same time.
- the label probability distribution of the target sample may be determined according to the feature similarities between the target sample and the multiple reference samples, and the label probability distribution includes the probability that the target sample corresponds to each label.
- for example, the label probability distribution expresses the likelihood that the label of the target sample is cat, dog, boat, wolf, flower, bear, and so on.
- the label probability distribution can more accurately express the true label of the target object. Based on this, in an embodiment of the present application, as shown in FIG. 5 , it may specifically include:
- S501 Determine the probability distribution of the target sample on all labels according to the feature similarity between the target sample and the multiple reference samples respectively.
- the multiple reference samples may be divided according to their different labels, and the sum of the feature similarities between the target sample i and the at least one reference sample corresponding to label n can be determined
- n denotes a label, C represents the total number of label categories, and 𝕀 represents the indicator function
- σ_i = {σ_i1, ..., σ_in, ..., σ_iC} (5)
- the feature similarity sum vector σ_i can be normalized, which can specifically include the following expression:
- the normalized result of σ_i can then be sharpened; specifically, in one embodiment, this may include:
- T represents the sharpening temperature coefficient, which is used to represent the intensity of sharpening.
- the result after sharpening may also be used as the probability distribution of the target sample i over all labels.
- S503 Use the label with the highest probability in the probability distribution as the inferred label of the target sample.
- the label corresponding to the maximum probability value can be determined from the probability distribution, and the label is used as the inferred label of the target sample i.
- the inferred label can be expressed as:
- the decision result can be expressed as:
- a value of 1 for the indicator function 𝕀 indicates that the original label y_i of the target sample i is correct, i.e., the target sample i is a clean sample, and a value of 0 indicates that the target sample i is a noise sample.
- the label of the target sample may be corrected.
- the corrected result can include the probability distribution of the target sample over all labels; for example, it can be expressed as:
- S305 Supervise and train an initial object processing network by using the target sample after the corrected label to obtain the object processing network.
- the target sample after the label correction can be used to supervise and train the initial object processing network of the object processing network 101 to obtain the object processing network 101 .
- the training of the object processing network 101 includes a process of multiple iterations of S301 and S303 until the object processing network 101 reaches convergence or reaches a preset number of iterations and other training termination conditions.
- the purpose of training the object processing network 101 is to enable the object processing network 101 to process and obtain more accurate results. Therefore, in the process of continuous training, the performance of the object processing network 101 is also continuously enhanced, and based on this, the prediction result of the object processing network 101 can be used to correct the label of the target sample. That is, the inferred label of the target sample and the prediction result can be used to jointly correct the label of the target sample.
- the weighted sum of the inferred label and the prediction result may be used as the corrected label of the target sample, where the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- the corrected result ỹ_i can be expressed as: ỹ_i = α_i · ŷ_i + (1 − α_i) · p_i (11)
- ŷ_i represents the inferred label of the target sample i, and its weight α_i, also known as the confidence, can be the probability value corresponding to the inferred label; p_i represents the prediction result of the object processing network 101 for the target sample i, and the weight corresponding to the prediction result is (1 − α_i)
- jointly correcting the label of the target sample i by using its inferred label and the prediction result of the object processing network 101 can improve the accuracy of the corrected label.
- data enhancement may be performed on the target sample, and multiple samples after data enhancement of the target sample may be obtained to enrich the number of samples.
- specific data enhancement methods may include operations such as rotation, scaling, color adjustment, cropping, and background replacement of the image, which are not limited in this application.
- the prediction results of the multiple samples can be obtained using the object processing network 101, and the prediction result p_i of the object processing network 101 for the target sample i can be determined from the prediction results of the multiple samples.
- the prediction result can be expressed as: p_i = (1/M) · Σ_{m=1..M} P(x_i,m, θ)
- x_i,m represents the m-th data-enhanced sample of the target sample i
- M represents the total number of enhanced samples of the target sample i
- θ represents the parameters of the object processing network 101
- P(x_i,m, θ) represents the prediction result of the object processing network 101 for x_i,m.
- the object processing network 101 may be trained using the clean samples and/or the noise samples; that is, it can be trained using the clean samples alone, using the noise samples alone, or, of course, using both the clean samples and the noise samples. When the clean samples and the noise samples are both used, they may together serve as the training set for the object processing network 101. In another embodiment of the present application, the impact of the clean samples on the object processing network 101 can also be enhanced.
- data enhancement may be performed on the clean samples; for a target clean sample, a sample may be selected from the clean samples and/or the noise samples and fused with the target clean sample. For images, the fusion method may include, for example, superposition of image pixel information, superposition of the corrected labels, and the like.
- training on the same sample set may produce machine learning models with different performance, and each model has its own advantages.
- multiple different object processing branch networks can be used to process the same batch of noise-containing sample sets respectively, and obtain inferred labels corresponding to each sample in the noise-containing sample set . Then, each object processing branch network can send the determined inferred labels of each sample to other object processing branch networks.
- the object processing network 101 and the object processing network 101 ′ are two different network branches, but are trained based on the same set of noisy samples.
- so that different types of noise samples can be filtered, different initial network parameters may be set for the object processing network 101 and the object processing network 101′, or the two networks may process the samples of the noise-containing sample set in different orders, so that the object processing network 101 and the object processing network 101′ have different performance advantages.
- the object processing network 101 and the object processing network 101 ′ can respectively determine the inferred labels of the respective samples according to the above-mentioned manner of determining the inferred labels of the target samples.
- the object processing network 101 can determine the first feature information of the first target sample in the noise-containing sample set, and the label inference module 103 can determine the inferred label of the first target sample according to the first feature information
- the object processing network 101 ′ can determine the second feature information of the second target sample in the noise-containing sample set, and the label inference module 103 can determine the inference of the second target sample according to the second feature information. Label.
- the object processing network 101 and the object processing network 101 ′ can exchange the inferred labels of the determined target samples.
- the label correction module 105 performs label correction on the second target sample
- the label correction module 105' performs label correction on the first target sample.
- the prediction results of the object processing network 101 and the object processing network 101′ may also be integrated into the processes of correcting the labels of the first target sample and the second target sample, respectively. As shown in FIG. 6, the prediction result of the object processing network 101 for the first target sample can be passed to the label correction module 105 and the label correction module 105′, and the prediction result of the object processing network 101′ for the second target sample can be passed to the label correction module 105′ and the label correction module 105.
- p_i in expression (11) may then include the prediction results of the two networks, which may include:
- x_i represents the first/second target sample i
- θ represents the parameters of the object processing network 101
- θ′ represents the parameters of the object processing network 101′
- P(x_i, θ) represents the prediction result of the object processing network 101 for x_i
- P′(x_i, θ′) represents the prediction result of the object processing network 101′ for x_i.
- the prediction result p_i can also be given by the following expression:
- x_i,m represents the m-th data-enhanced sample of the first/second target sample i
- M represents the total number of data-enhanced samples of the first/second target sample i
- θ represents the parameters of the object processing network 101
- θ′ represents the parameters of the object processing network 101′
- P(x_i,m, θ) represents the prediction result of the object processing network 101 for x_i,m
- P′(x_i,m, θ′) represents the prediction result of the object processing network 101′ for x_i,m.
- FIG. 6 only shows the case where there are two network branches.
- when there are more than two network branches, each network branch can send samples to the other network branches and obtain samples from them; for example, network 1 can send samples to network 2, network 2 can send samples to network 3, and network 3 can send samples to network 1.
- the object to be processed may be input into the multiple object processing branch networks respectively, and each branch network outputs a corresponding processing result; then, the average of the multiple processing results may be used as the final processing result of the object to be processed.
- the apparatus 100 includes:
- the object processing network 101 is used to output the processing result of the object to be processed; the object processing network is obtained by training a noise-containing sample set, and the noise-containing sample set includes at least one noise sample with an incorrect label;
- the label inference module 103 is configured to obtain the inferred label of the target sample in the noise-containing sample set;
- the label correction module 105 is configured to correct the label of the target sample according to the inferred label; the target sample after the corrected label is used to supervise and train an initial object processing network to obtain the object processing network.
- the label inference module is specifically used for:
- the inferred label of the target sample is determined according to the feature similarity between the target sample and the plurality of reference samples respectively.
- the feature similarity is determined based on a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.
- the label inference module is specifically used for:
- the label probability distribution of the target sample is determined, where the label probability distribution includes the probability that the target sample corresponds to each label, and the labels include the labels in the noise-containing sample set;
- the label with the highest probability in the label probability distribution is used as the inferred label of the target sample.
- the inferred label of the target sample is determined by using an object processing branch network, and the object processing branch network is also trained based on the set of noisy samples.
- the label correction module is specifically used for:
- the prediction result includes the prediction result of the object processing network for the target sample or for a sample obtained by performing data enhancement on the target sample.
- the label correction module is specifically used for:
- the weighted sum of the inferred label and the prediction result is used as the corrected label of the target sample, and the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- the object processing network is specifically used for:
- the initial object processing network is trained using clean samples and/or noise samples in the set of noisy samples to obtain the object processing network.
- the object processing network is specifically used for:
- the initial object processing network is trained by using the fused samples to obtain the object processing network.
- the object processing apparatus 100 may correspondingly execute the methods described in the embodiments of the present application, and the above and other operations and/or functions of the respective modules in the object processing apparatus 100 are respectively for implementing the corresponding flows of the methods in FIG. 3, FIG. 4, and FIG. 5; for brevity, they are not repeated here.
- the connection relationship between the modules indicates that there is a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines.
- An embodiment of the present application further provides a device 700 for implementing the functions of the object processing apparatus 100 in the system architecture diagram shown in FIG. 2 above.
- the device 700 may be a physical device or a physical device cluster, or a virtualized cloud device, such as at least one cloud computing device in a cloud computing cluster.
- the present application takes the device 700 being an independent physical device as an example to illustrate the structure of the device 700.
- FIG. 7 provides a schematic structural diagram of a device 700 .
- the device 700 includes a bus 701 , a processor 702 , a communication interface 703 and a memory 704 .
- the processor 702 , the memory 704 and the communication interface 703 communicate through the bus 701 .
- the bus 701 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus or the like.
- the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is used in FIG. 7, but this does not mean that there is only one bus or only one type of bus.
- the communication interface 703 is used for external communication, for example, to acquire image and point cloud data of a target environment.
- the processor 702 may be a central processing unit (central processing unit, CPU).
- Memory 704 may include volatile memory, such as random access memory (RAM).
- Memory 704 may also include non-volatile memory, such as read-only memory (ROM), flash memory, HDD, or SSD.
- Executable code is stored in the memory 704, and the processor 702 executes the executable code to execute the aforementioned object processing method.
- the software or program codes required for the functions of the object processing network 101, the label inference module 103, and the label correction module 105 in FIG. 2 are stored in the memory 704.
- the processor 702 executes program codes corresponding to each module stored in the memory 704, such as program codes corresponding to the object processing network 101, the label inference module 103, and the label correction module 105, to determine the processing result of the object to be processed.
- Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium includes instructions, the instructions instruct the device 700 to execute the above object processing method applied to the object processing apparatus 100 .
- An embodiment of the present application further provides a computer program product, when the computer program product is executed by a computer, the computer executes any one of the foregoing object processing methods.
- the computer program product can be a software installation package, and when any one of the aforementioned object processing methods needs to be used, the computer program product can be downloaded and executed on a computer.
- the computer program product includes one or more computer instructions.
- the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, training device, or data center to another website, computer, training device, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave).
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a training device or a data center, that integrates one or more available media.
- the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), and the like.
- Computer readable program instructions or code described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
- the computer program instructions used to perform the operations of the present application may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or in one or more source or object code written in any combination of programming languages, including object-oriented programming languages such as Smalltalk, C++, etc., and conventional procedural programming languages such as the "C" language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server implement.
- the remote computer may be connected to the user's computer through any kind of network—including a Local Area Network (LAN) or a Wide Area Network (WAN)—or, may be connected to an external computer (eg, use an internet service provider to connect via the internet).
- electronic circuits such as programmable logic circuits, Field-Programmable Gate Arrays (FPGA), or Programmable Logic Arrays (Programmable Logic Arrays), are personalized by utilizing state information of computer-readable program instructions.
- Logic Array, PLA the electronic circuit can execute computer readable program instructions to implement various aspects of the present application.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer or other programmable data processing apparatus to produce a machine that causes the instructions when executed by the processor of the computer or other programmable data processing apparatus , resulting in means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
- These computer readable program instructions can also be stored in a computer readable storage medium, these instructions cause a computer, programmable data processing apparatus and/or other equipment to operate in a specific manner, so that the computer readable medium on which the instructions are stored includes An article of manufacture comprising instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
- Computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other equipment to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process , thereby causing instructions executing on a computer, other programmable data processing apparatus, or other device to implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more functions for implementing the specified logical function(s) executable instructions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented in hardware (eg, circuits or ASICs (Application) that perform the corresponding functions or actions. Specific Integrated Circuit, application-specific integrated circuit)), or can be implemented by a combination of hardware and software, such as firmware.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Mathematical Physics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Computational Linguistics (AREA)
- Image Analysis (AREA)
Abstract
The present application relates to the field of artificial intelligence and discloses an object processing method and apparatus. The method includes: inputting an object to be processed into an object processing network, and outputting a processing result of the object to be processed via the object processing network; wherein the object processing network is obtained by training with a noisy sample set, the noisy sample set includes at least one noise sample with an erroneous label, and the training includes: obtaining an inferred label of a target sample in the noisy sample set; correcting the label of the target sample according to the inferred label; and supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network.
Description
This application claims priority to Chinese Patent Application No. 202110276806.3, entitled "Object Processing Method and Apparatus", filed with the China National Intellectual Property Administration on March 15, 2021, the entire contents of which are incorporated herein by reference.

The present application relates to the technical field of artificial intelligence, and in particular to an object processing method and apparatus.

The development of artificial intelligence is inseparable from machine learning models, and the training of machine learning models depends on the quality of sample data. In supervised learning, sample data is annotated with label information; the more accurate the label information, the higher the quality of the sample data. Annotating high-quality label information with expert experience requires considerable labor cost. To reduce cost, large amounts of low-cost sample data can be obtained through data collection platforms (such as Amazon Mechanical Turk) or web crawlers; such sample data often contains a large number of noise samples, a noise sample being a sample with erroneous label information.

In the related art, some machine learning algorithms (such as deep neural networks (DNN)) have strong nonlinear feature representation capability and can learn the clean samples (i.e., correctly labeled samples) in a sample data set. Specifically, the noise samples (i.e., mislabeled samples) in the sample data can be screened out according to the loss distribution of the machine learning model, for example based on the small-loss criterion, increasing the influence of the clean samples on model training while reducing or eliminating the influence of the noise samples. However, screening samples by the loss distribution rests on the principle that clean samples and noise samples have different loss distributions, e.g., a bimodal distribution. In the actual screening process, especially at the beginning of training, the loss values of both clean samples and noise samples are large, so neither fits a loss distribution such as the bimodal distribution well. Screening samples in the above way may therefore cause a large number of samples to be misjudged: clean samples may be misjudged as noise samples, and noise samples may be misjudged as clean samples, which affects the performance of the machine learning model.

Therefore, the related art urgently needs a way to improve the performance of machine learning models trained with noisy sample sets.
Summary

The present application provides an object processing method, which solves the problem of low detection accuracy in the related art. The present application further provides a corresponding apparatus, device, computer-readable storage medium, and computer program product.
In a first aspect, an embodiment of the present application provides an object processing method. The method can obtain the true label corresponding to each sample in the noisy sample set and use the true labels to correct the labels of the samples. Based on the above label inference and correction mechanism, the noise samples in the noisy sample set can be identified and the labels of all samples improved, which raises the quality of training the object processing network with the noisy sample set and improves the processing performance of the object processing network. On the other hand, the embodiments of the present application place no restriction on the noisy sample set, so that this way of training a network with a noisy sample set has strong generalization ability.

Specifically, an object to be processed is input into an object processing network, and a processing result of the object to be processed is output via the object processing network; wherein the object processing network is obtained by training with a noisy sample set, the noisy sample set includes at least one noise sample with an erroneous label, and the training includes:

obtaining an inferred label of a target sample in the noisy sample set;

correcting the label of the target sample according to the inferred label; and

supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network.
Optionally, in an embodiment of the present application, the obtaining an inferred label of a target sample in the noisy sample set includes:

determining, by using the object processing network, feature information of each sample in the noisy sample set;

determining a plurality of reference samples of the target sample according to the feature information, where the feature similarity between the target sample and each reference sample satisfies a preset condition; and

determining the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.

In this embodiment, the plurality of reference samples of the target sample are determined by feature similarity, and the inferred label of the target sample is determined by the feature similarities. Specifically, the feature similarity can represent how important a reference sample is for inferring the true label of the target sample; therefore, determining the reference samples and the inferred label of the target sample by feature similarity yields a relatively accurate inferred label.

Optionally, in an embodiment of the present application, the feature similarity is determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.

In this embodiment, the importance of other samples to the true label of target sample i can be measured according to both the intra-class and inter-class relationships of the target sample, with high accuracy.

Optionally, in an embodiment of the present application, the determining the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples includes:

determining a label probability distribution of the target sample according to the feature similarities between the target sample and the plurality of reference samples, where the label probability distribution includes the probability of the target sample for each label, and the labels include the labels in the noisy sample set; and

taking the label with the highest probability in the label probability distribution as the inferred label of the target sample.

In this embodiment, the probability distribution of the target sample over all labels, i.e., a softened label of the target sample, can be obtained from the feature similarities, which yields a more accurate inferred label of the target sample.
Optionally, in an embodiment of the present application, the inferred label of the target sample is determined by an object processing branch network, and the object processing branch network is likewise trained on the noisy sample set.

In this embodiment, multiple object processing branch networks can be trained simultaneously with the noisy sample set, and the inferred labels determined by the different branch networks can be exchanged. This not only overcomes the errors a network branch accumulates during self-iteration, but also combines the advantage that different network branches can filter different kinds of noise.

Optionally, in an embodiment of the present application, the correcting the label of the target sample according to the inferred label includes:

determining a prediction result of the target sample by using the object processing network; and

correcting the label of the target sample according to the inferred label and the prediction result.

In this embodiment, the label of the target sample can be corrected jointly by the inferred label and the prediction result of the object processing network, improving the accuracy of the corrected label.

Optionally, in an embodiment of the present application, the prediction result includes a prediction result for the target sample or for samples obtained by data augmentation of the target sample.

In this embodiment, augmenting the target sample enriches the number of samples and reduces the possibility of the object processing network overfitting.
Optionally, in an embodiment of the present application, the correcting the label of the target sample according to the inferred label and the prediction result includes:

taking a weighted sum of the inferred label and the prediction result as the corrected label of the target sample, where the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.

In this embodiment, the weighted sum of the inferred label and the prediction result is taken as the corrected label, and the confidence of the inferred label is used as its weight, which further improves the accuracy of the corrected label.

Optionally, in an embodiment of the present application, the supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network, includes:

determining whether the target sample is a clean sample or a noise sample according to whether the original label of the target sample is the same as the inferred label; and

training the initial object processing network with the clean samples and/or noise samples in the noisy sample set based on the corrected labels, to obtain the object processing network.

In this embodiment, the object processing network can be trained with various selections of samples, such as the clean samples and/or the noise samples.

Optionally, in an embodiment of the present application, the training the initial object processing network with the clean samples and noise samples in the noisy sample set, to obtain the object processing network, includes:

randomly drawing samples from the clean samples and/or the noise samples and fusing them with a clean sample and its corrected label, to obtain fused samples; and

training the initial object processing network with the fused samples, to obtain the object processing network.

In this embodiment, in the process of training the object processing network with the clean samples and/or the noise samples, a clean sample can be used as the basis onto which any other sample is fused, and the fused samples can be used to train the object processing network. This strengthens the influence of the clean samples on the network while still exploiting the value of the noise samples.
In a second aspect, an embodiment of the present application provides a method for generating an object processing network, where the object processing network is obtained by training with a noisy sample set, the noisy sample set including at least one noise sample with an erroneous label, the method including:

obtaining an inferred label of a target sample in the noisy sample set;

correcting the label of the target sample according to the inferred label; and

supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network.
Optionally, in an embodiment of the present application, the obtaining an inferred label of a target sample in the noisy sample set includes:

determining, by using the object processing network, feature information of each sample in the noisy sample set;

determining a plurality of reference samples of the target sample according to the feature information, where the feature similarity between the target sample and each reference sample satisfies a preset condition; and

determining the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.

Optionally, in an embodiment of the present application, the feature similarity is determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.

Optionally, in an embodiment of the present application, the determining the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples includes:

determining a label probability distribution of the target sample according to the feature similarities between the target sample and the plurality of reference samples, where the label probability distribution includes the probability of the target sample for each label, and the labels include the labels in the noisy sample set; and

taking the label with the highest probability in the label probability distribution as the inferred label of the target sample.

Optionally, in an embodiment of the present application, the inferred label of the target sample is determined by an object processing branch network, and the object processing branch network is likewise trained on the noisy sample set.
Optionally, in an embodiment of the present application, the correcting the label of the target sample according to the inferred label includes:

determining a prediction result of the target sample by using the object processing network; and

correcting the label of the target sample according to the inferred label and the prediction result.

Optionally, in an embodiment of the present application, the prediction result includes a prediction result for the target sample or for samples obtained by data augmentation of the target sample.

Optionally, in an embodiment of the present application, the correcting the label of the target sample according to the inferred label and the prediction result includes:

taking a weighted sum of the inferred label and the prediction result as the corrected label of the target sample, where the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.

Optionally, in an embodiment of the present application, the supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network, includes:

determining whether the target sample is a clean sample or a noise sample according to whether the original label of the target sample is the same as the inferred label; and

training the initial object processing network with the clean samples and/or noise samples in the noisy sample set based on the corrected labels, to obtain the object processing network.

Optionally, in an embodiment of the present application, the training the initial object processing network with the clean samples and noise samples in the noisy sample set, to obtain the object processing network, includes:

randomly drawing samples from the clean samples and/or the noise samples and fusing them with a clean sample and its corrected label, to obtain fused samples; and

training the initial object processing network with the fused samples, to obtain the object processing network.
In a third aspect, an embodiment of the present application provides an object processing apparatus, the apparatus including:

an object processing network, configured to output a processing result of an object to be processed, where the object processing network is obtained by training with a noisy sample set, the noisy sample set including at least one noise sample with an erroneous label;

a label inference module, configured to obtain an inferred label of a target sample in the noisy sample set; and

a label correction module, configured to correct the label of the target sample according to the inferred label, where the target sample with the corrected label is used to supervise the training of an initial object processing network, to obtain the object processing network.
Optionally, in an embodiment of the present application, the label inference module is specifically configured to:

determine, by using the object processing network, feature information of each sample in the noisy sample set;

determine a plurality of reference samples of the target sample according to the feature information, where the feature similarity between the target sample and each reference sample satisfies a preset condition; and

determine the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.

Optionally, in an embodiment of the present application, the feature similarity is determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.

Optionally, in an embodiment of the present application, the label inference module is specifically configured to:

determine a label probability distribution of the target sample according to the feature similarities between the target sample and the plurality of reference samples, where the label probability distribution includes the probability of the target sample for each label, and the labels include the labels in the noisy sample set; and

take the label with the highest probability in the label probability distribution as the inferred label of the target sample.

Optionally, in an embodiment of the present application, the inferred label of the target sample is determined by an object processing branch network, and the object processing branch network is likewise trained on the noisy sample set.
Optionally, in an embodiment of the present application, the label correction module is specifically configured to:

determine a prediction result of the target sample by using the object processing network; and

correct the label of the target sample according to the inferred label and the prediction result.

Optionally, in an embodiment of the present application, the prediction result includes a prediction result for the target sample or for samples obtained by data augmentation of the target sample.

Optionally, in an embodiment of the present application, the label correction module is specifically configured to:

take a weighted sum of the inferred label and the prediction result as the corrected label of the target sample, where the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.

Optionally, in an embodiment of the present application, the object processing network is specifically configured to:

determine whether the target sample is a clean sample or a noise sample according to whether the original label of the target sample is the same as the inferred label; and

train the initial object processing network with the clean samples and/or noise samples in the noisy sample set based on the corrected labels, to obtain the object processing network.

Optionally, in an embodiment of the present application, the object processing network is specifically configured to:

randomly draw samples from the clean samples and/or the noise samples and fuse them with a clean sample and its corrected label, to obtain fused samples; and

train the initial object processing network with the fused samples, to obtain the object processing network.
In a fourth aspect, an embodiment of the present application provides an apparatus for generating an object processing network, where the object processing network is obtained by training with a noisy sample set, the noisy sample set including at least one noise sample with an erroneous label, the apparatus including:

a label inference module, configured to obtain an inferred label of a target sample in the noisy sample set; and

a label correction module, configured to correct the label of the target sample according to the inferred label, where the target sample with the corrected label is used to supervise the training of the object processing network until a training termination condition is reached.
Optionally, in an embodiment of the present application, the label inference module is specifically configured to:

determine, by using the object processing network, feature information of each sample in the noisy sample set;

determine a plurality of reference samples of the target sample according to the feature information, where the feature similarity between the target sample and each reference sample satisfies a preset condition; and

determine the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.

Optionally, in an embodiment of the present application, the feature similarity is determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.

Optionally, in an embodiment of the present application, the label inference module is specifically configured to:

determine a label probability distribution of the target sample according to the feature similarities between the target sample and the plurality of reference samples, where the label probability distribution includes the probability of the target sample for each label, and the labels include the labels in the noisy sample set; and

take the label with the highest probability in the label probability distribution as the inferred label of the target sample.

Optionally, in an embodiment of the present application, the inferred label of the target sample is determined by an object processing branch network, and the object processing branch network is likewise trained on the noisy sample set.
Optionally, in an embodiment of the present application, the label correction module is specifically configured to:

determine a prediction result of the target sample by using the object processing network; and

correct the label of the target sample according to the inferred label and the prediction result.

Optionally, in an embodiment of the present application, the prediction result includes a prediction result for the target sample or for samples obtained by data augmentation of the target sample.

Optionally, in an embodiment of the present application, the label correction module is specifically configured to:

take a weighted sum of the inferred label and the prediction result as the corrected label of the target sample, where the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.

Optionally, in an embodiment of the present application, the object processing network is specifically configured to:

determine whether the target sample is a clean sample or a noise sample according to whether the original label of the target sample is the same as the inferred label; and

train the initial object processing network with the clean samples and/or noise samples in the noisy sample set based on the corrected labels, to obtain the object processing network.

Optionally, in an embodiment of the present application, the object processing network is specifically configured to:

randomly draw samples from the clean samples and/or the noise samples and fuse them with a clean sample and its corrected label, to obtain fused samples; and

train the initial object processing network with the fused samples, to obtain the object processing network.
In a fifth aspect, an embodiment of the present application provides an object processing apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement any possible implementation of the methods of the above aspects when executing the instructions.

In a sixth aspect, an embodiment of the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement any possible implementation of the methods of the above aspects.

In a seventh aspect, an embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, wherein, when the computer-readable code runs in an electronic device, a processor in the electronic device performs any possible implementation of the methods of the above aspects.

In an eighth aspect, an embodiment of the present application provides a chip, the chip including at least one processor, the processor being configured to run a computer program or computer instructions stored in a memory, to perform any possible implementation of the methods of the above aspects.

Optionally, the chip may further include a memory for storing the computer program or computer instructions.

Optionally, the chip may further include a communication interface for communicating with modules other than the chip.

Optionally, one or more chips may constitute a chip system.

These and other aspects of the present application will be more readily apparent in the following description of the embodiment(s).
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate, together with the specification, exemplary embodiments, features, and aspects of the present application, and serve to explain the principles of the present application.

FIG. 1 is a loss-value distribution diagram of clean samples and noise samples in the related art;

FIG. 2 is a schematic diagram of the module structure of an object processing apparatus 100 according to an embodiment of the present application;

FIG. 3 is a schematic flowchart of an object processing method according to an embodiment of the present application;

FIG. 4 is a schematic flowchart of a label inference method according to an embodiment of the present application;

FIG. 5 is a schematic flowchart of a method for inferring a label from reference samples according to an embodiment of the present application;

FIG. 6 is a schematic diagram of two-branch network training according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of a processing device according to an embodiment of the present application.
Various exemplary embodiments, features, and aspects of the present application will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.

The word "exemplary" used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" should not necessarily be construed as superior to or better than other embodiments.

In addition, numerous specific details are given in the following detailed description for a better illustration of the present application. Those skilled in the art should understand that the present application can be practiced without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the subject matter of the present application.
Most related-art algorithms for training machine learning models with noisy sample sets are based on the small-loss criterion and mainly comprise two kinds of algorithm: loss adjustment and sample selection. Loss adjustment means that, when constructing the objective loss function for model training, noise samples are given smaller weights and clean samples larger weights, thereby reducing the influence of the noise samples on model training. Sample selection means that, during training, the network parameters are updated with the clean samples and the influence of the noise samples is removed outright. The small-loss criterion relies on the loss distributions of clean samples and noise samples being different, e.g., the bimodal distribution shown in FIG. 1. Specifically, a Beta mixture model (BMM) or a Gaussian mixture model (GMM) can be used to model the loss distribution of the samples, and a threshold is set to distinguish clean samples from noise samples.

On some specific sample sets, such as CIFAR-10, the losses of clean samples and noise samples clearly follow a bimodal distribution, but not all sample sets do; for noisy sample sets such as WebVision, the sample losses do not follow a bimodal distribution. Training based on the small-loss criterion therefore has low generalization ability. Moreover, in some training stages, especially at the beginning of training, neither the clean samples nor the noise samples are fitted well, so the loss values of all samples are large, making it hard to separate clean samples from noisy samples by the loss distribution. Separating them directly with a set threshold causes a large number of samples to be misjudged, i.e., clean samples may be judged as noise samples and noise samples may be judged as clean samples, affecting the final model performance.
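For concreteness, the following is a minimal sketch of the related-art small-loss screening described above, written in Python with scikit-learn. The two-component GMM, the 0.5 posterior threshold, and the synthetic losses are illustrative assumptions rather than part of the present application.

```python
# Related-art baseline: fit a 2-component GMM to per-sample losses and
# threshold the posterior of the small-loss component to split clean/noisy.
import numpy as np
from sklearn.mixture import GaussianMixture

def small_loss_split(losses, clean_prob_threshold=0.5):
    """Split samples into clean/noisy by modeling the loss distribution."""
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))  # small-loss mode
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean > clean_prob_threshold, p_clean

# Early in training both modes overlap, so this split can misjudge samples.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.5, 0.2, 900), rng.normal(2.0, 0.5, 100)])
is_clean, _ = small_loss_split(losses)
print(f"judged clean: {is_clean.sum()} / {len(losses)}")
```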
Based on technical needs similar to the above, an embodiment of the present application provides an object processing method. The method can obtain the true label corresponding to each sample in the noisy sample set and use the true labels to correct the labels of the samples. Based on the above label inference and correction mechanism, the noise samples in the noisy sample set can be identified and the labels of all samples improved, which raises the quality of training the object processing network with the noisy sample set and improves the processing performance of the object processing network. On the other hand, the embodiments of the present application place no restriction on the noisy sample set, so that this way of training a network with a noisy sample set has strong generalization ability.

The object processing method provided by the embodiments of the present application can be applied to, but is not limited to, the application scenario shown in FIG. 2. As shown in FIG. 2, the scenario includes an object processing apparatus 100, which may include an object processing network 101, a label inference module 103, and a label correction module 105. The object processing apparatus 100 may be deployed in a processing device having a central processing unit (CPU) and/or a graphics processing unit (GPU), which processes an input object to be processed so as to obtain a processing result. The object to be processed includes data such as images, text, and speech; correspondingly, the processing includes image classification, speech recognition, text recognition, or any other processing task based on a supervised machine learning model. It should be noted that the processing device may be a physical device or a cluster of physical devices, such as a terminal, a server, or a server cluster. Of course, the processing device may also be a virtualized cloud device, such as at least one cloud computing device in a cloud computing cluster.

In a specific implementation, the object processing network 101 can be trained on a noisy sample set, and the noisy sample set may include at least one noise sample with an erroneous label. Among the three sample examples shown in FIG. 2, an image of a dog is annotated with the label wolf; therefore, in the noisy sample set, the dog image is a noise sample. Training the object processing network 101 relies on the label inference module 103 and the label correction module 105. During training, the object processing network 101 can extract the feature information of each sample in the noisy sample set, and can also determine the object processing result of each sample from the feature information, for example the probability distribution of each object over all labels. Here, all labels means all the labels involved in the noisy sample set, or a preset label set that includes at least all the labels involved in the noisy sample set. The label inference module 103 is configured to determine the inferred label of each sample in the noisy sample set from the feature information, and to determine from the inferred label whether the corresponding sample is a noise sample or a clean sample. For example, if it is determined that the inferred label of the image whose original label is wolf is dog, the image can be determined to be a noise sample and the other images clean samples. Here, the original label is the sample's most original label before training; the original label is not affected by subsequent training. The label correction module 105 can be configured to correct a sample's label according to its inferred label. In one embodiment, the label correction module 105 corrects the sample's label specifically according to the inferred label and the object processing result determined by the object processing network 101. The samples with corrected labels are used to supervise the training of the initial object processing network of the object processing network 101, and after multiple iterations of adjustment, the object processing network 101 is obtained.

The trained object processing network 101 can be used directly; for example, the object processing network 101 shown in FIG. 2 can be used directly to classify images and identify the type of the object in each image.

The object processing method of the present application is described in detail below with reference to the accompanying drawings. Although the present application provides the method operation steps shown in the following embodiments or drawings, more or fewer operation steps may be included in the method based on routine practice or without inventive effort. For steps having no necessary logical causal relationship, the execution order of those steps is not limited to the execution order provided in the embodiments of the present application. In an actual object processing process, or when executed by an apparatus, the method may be executed in the order shown in the embodiments or drawings, or executed in parallel (for example, in a parallel-processor or multi-threaded environment).
The training of the object processing network 101 is described below with reference to FIG. 3. As shown in FIG. 3, the training may include:

S301: Obtain the inferred label of a target sample in the noisy sample set.

In this embodiment of the present application, the inferred label of the target sample can be determined using the feature information of each sample in the noisy sample set. As described above, the object processing network 101 can extract the feature information of each sample. Specifically, as shown in FIG. 4, the method for determining the inferred label of the target sample may include:

S401: Determine, using the object processing network, the feature information of each sample in the noisy sample set.

S403: Determine a plurality of reference samples of the target sample according to the feature information, where the feature similarity between the target sample and each reference sample satisfies a preset condition.
The target sample in this embodiment of the present application may be any sample in the noisy sample set. In this embodiment, a plurality of reference samples whose feature similarity with the target sample satisfies a preset condition can be selected from the noisy sample set. The preset condition may be that the similarity between the target sample and the reference sample is greater than a preset threshold, or that the similarities between the target sample and the reference samples are the several highest among all the similarities. In an embodiment of the present application, the feature similarity may be determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance. In a specific example, the feature similarity can be computed by expression (1):

S(f_i, f_j) = [formula not reproduced in the source; it is negatively correlated with both d(f_i, f_j) and d(f_j, f_{c_p}), balanced by m and ε]    (1)

where d(f_i, f_j) denotes the first feature distance between target sample i and reference sample j; the feature distance is the distance between the feature information of two samples in feature space, and the smaller the distance, the higher the similarity between the samples. d(f_j, f_{c_p}) denotes the second feature distance between reference sample j and the class center c_p corresponding to its label, and m and ε are used to balance the first feature distance against the second. With G_p denoting the samples in the noisy sample set whose original label is p (which is also the label of reference sample j) and f_n denoting the feature information of the n-th sample, the class center can be written as:

f_{c_p} = (1/|G_p|)·Σ_{n∈G_p} f_n    (2)

Based on expressions (1) and (2), the feature similarities between target sample i and the other samples j in the noisy sample set D can be expressed as the set:

{S(f_i, f_j) | j ∈ D}    (3)
In an embodiment of the present application, the feature similarities in the above set of feature similarities can be sorted, and the K samples with the largest feature similarities taken as the reference samples. In another embodiment, the samples whose feature similarity is greater than a preset threshold may also be taken as the reference samples. Of course, both conditions may also be required at the same time; the present application places no limit on this.
In this embodiment of the present application, the feature similarity S can represent how important reference sample j is to the true label of target sample i. The smaller the first feature distance d(f_i, f_j) between target sample i and reference sample j, the greater the influence of reference sample j on the process of inferring the true label of target sample i; this exploits the intra-class relationship of target sample i. On the other hand, if reference sample j is itself a noise sample, it introduces strong noise interference into the true label of target sample i, and its influence must be eliminated. To this end, when reference sample j is a noise sample, d(f_j, f_{c_p}) is large, which significantly lowers the feature similarity S and thereby reduces the influence of reference sample j on the inference process; this exploits the inter-class relationship of target sample i. In summary, the importance of other samples to the true label of target sample i can be measured according to both the intra-class and inter-class relationships of target sample i, with high accuracy.
Of course, expression (1) is only one embodiment of determining the feature similarity; the present application places no limit on the way the feature similarity is constructed. A sketch of this reference-sample selection is given below.
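The following Python sketch implements the selection of S401-S403 under stated assumptions: since expression (1) is not reproduced in the source, the exponential form below is an assumed instantiation that merely satisfies the stated property (similarity negatively correlated with both distances); the class center follows expression (2), and every class is assumed to appear in the set.

```python
import numpy as np

def class_centers(feats, labels, num_classes):
    """f_{c_p}: mean feature of the samples whose original label is p (expr. (2))."""
    return np.stack([feats[labels == p].mean(axis=0) for p in range(num_classes)])

def feature_similarity(feats, labels, centers, m=1.0, eps=1.0):
    """S(f_i, f_j) for all pairs; m and eps balance the two feature distances."""
    d_ij = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)  # first distance
    d_jc = np.linalg.norm(feats - centers[labels], axis=-1)                # second distance
    return np.exp(-d_ij / m) * np.exp(-d_jc / eps)[None, :]  # assumed form of expr. (1)

def reference_samples(sim, i, k=10):
    """Top-K samples most similar to target sample i, excluding i (expr. (3))."""
    s = sim[i].copy()
    s[i] = -np.inf
    return np.argsort(s)[::-1][:k]
```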
S405: Determine the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.

In this embodiment of the present application, the feature similarity between a reference sample and the target sample can represent how important the reference sample is for inferring the true label of the target sample. On this basis, the feature similarities between the target sample and the plurality of reference samples can be used to determine the inferred label of the target sample.

In a practical application scenario, each of the plurality of reference samples corresponds to an original label; for example, the reference samples are images, and the class label of an image is one of cat, dog, boat, and wolf. The feature similarity between the target sample and a reference sample expresses the degree of similarity between the two samples, and it may happen that the similarities between the target sample and reference samples with several different labels are all fairly close. To infer the true label of the target sample more accurately, the label probability distribution of the target sample can be determined according to the feature similarities between the target sample and the plurality of samples, the label probability distribution including the probability of the target sample for each label. In one example, statistics show that all the labels involved in the noisy sample set are {cat, dog, boat, wolf, flower, bear, ...}; the label probability distribution then expresses how likely it is that the label of the target sample is cat, dog, boat, wolf, flower, bear, and so on. For example, the label probability distribution may be expressed as {cat = 0.6, dog = 0.1, boat = 0.02, wolf = 0.05, flower = 0.003, bear = 0.08, ...}. The label probability distribution can express the true label of the target object more accurately. On this basis, in an embodiment of the present application, as shown in FIG. 5, the method may specifically include:

S501: Determine the probability distribution of the target sample over all labels according to the feature similarities between the target sample and the plurality of reference samples.
In an embodiment of the present application, first, the plurality of reference samples can be partitioned by label, and the sum ρ_in of the feature similarities between target sample i and the at least one reference sample corresponding to label n can be determined:

ρ_in = Σ_j S(f_i, f_j)·Ⅱ{y_j = n},  n = 1, 2, ..., C    (4)

where n denotes label n, C denotes the total number of label classes, Ⅱ denotes the indicator function, and Ⅱ{y_j = n} takes the value 1 when the original label y_j of reference sample j is n, and 0 otherwise.

In this way, a vector ρ_i of similarity sums can be obtained:

ρ_i = {ρ_i1, ..., ρ_in, ..., ρ_iC}    (5)
Since probability values usually lie between 0 and 1, the similarity-sum vector ρ_i can be normalized, which may specifically include the following expression:

ρ̄_in = ρ_in / Σ_{c=1}^{C} ρ_ic    (6)

Of course, in another embodiment of the present application, a sharpened version of the normalized result (e.g., raising each probability to a power 1/T and renormalizing, T being a temperature parameter) may also be taken as the probability distribution of target sample i over all labels. In a specific example, for the image labeled wolf in FIG. 2, the distribution may take a form such as {cat = 0.1, boat = 0.06, wolf = 0.16, dog = 0.6, tiger = 0.08, ...}.
S503: Take the label with the highest probability in the probability distribution as the inferred label of the target sample.

After the probability distribution of target sample i over all labels is obtained, the label corresponding to the maximum probability value can be determined from the probability distribution and taken as the inferred label of target sample i. Specifically, the inferred label can be expressed as:

ŷ_i = argmax_n ρ̄_in    (7)

On this basis, if the original label of target sample i is not the same as the inferred label, target sample i can be determined to be a noise sample; otherwise it is a clean sample. Specifically, in one example, the decision result can be expressed as:

Ⅱ{ŷ_i = y_i}    (8)

where a value of 1 of the indicator function Ⅱ indicates that the original label y_i of target sample i is clean, and a value of 0 indicates that the original label y_i of target sample i is noisy.

In a specific example, based on the probability distribution of the image labeled wolf in FIG. 2, the label with the highest probability value in the distribution can be determined to be dog. The inferred label of the image is therefore dog, which differs from the original label wolf, so the image can be determined to be a noise sample and the other images clean samples.
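The following minimal sketch ties S501-S503 together; the array layouts and the guard against an all-zero similarity sum are illustrative assumptions.

```python
import numpy as np

def infer_label(sim_row, ref_idx, labels, num_classes):
    """Return (inferred label, softened distribution) for one target sample."""
    rho = np.zeros(num_classes)
    for j in ref_idx:                      # expr. (4): sum similarities per label n
        rho[labels[j]] += sim_row[j]
    rho_bar = rho / max(rho.sum(), 1e-12)  # expr. (6): normalize to [0, 1]
    return int(np.argmax(rho_bar)), rho_bar  # expr. (7): argmax label

def is_clean(original_label, inferred_label):
    """Expr. (8): the sample is clean iff original and inferred labels match."""
    return original_label == inferred_label
```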
S303: Correct the label of the target sample according to the inferred label.

S305: Supervise the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network.

In this embodiment of the present application, after the label of the target sample has been corrected, the target sample with the corrected label can be used to supervise the training of the initial object processing network of the object processing network 101, to obtain the object processing network 101. It should be noted that training the object processing network 101 includes iterating S301 and S303 multiple times, until the object processing network 101 reaches a training termination condition such as convergence or a preset number of iterations.
In this embodiment of the present application, the purpose of training the object processing network 101 is to enable the object processing network 101 to produce more accurate results. Therefore, as training proceeds, the performance of the object processing network 101 keeps improving; on this basis, the prediction result of the object processing network 101 can be used to correct the label of the target sample. That is, the inferred label of the target sample and the prediction result can be used jointly to correct the label of the target sample. In an embodiment of the present application, the weighted sum of the inferred label and the prediction result can be taken as the corrected label of the target sample, the weights of the inferred label and the prediction result being determined according to the probability corresponding to the inferred label. In one example, with the inferred label ŷ_i written as a one-hot vector, its probability ρ̄_{i,ŷ_i} used as its weight, and p_i denoting the prediction result of the object processing network 101 for target sample i, the corrected result can be expressed as:

ỹ_i = ρ̄_{i,ŷ_i}·ŷ_i + (1 − ρ̄_{i,ŷ_i})·p_i    (11)

In this embodiment of the present application, jointly correcting the label of target sample i with the inferred label of target sample i and the prediction result of the object processing network 101 improves the accuracy of the corrected label.
To reduce the possibility of the object processing network 101 overfitting, the target sample can be augmented to obtain multiple samples derived from the target sample, enriching the number of samples. For images, specific data augmentation may include operations such as rotating, scaling, color adjustment, cropping, and background replacement; the present application places no limit on this. The object processing network 101 can then obtain the prediction results of the multiple samples, and the prediction result p_i of the object processing network 101 for target sample i can be determined from the prediction results of the multiple samples. In a specific example, the prediction result can be expressed as:

p_i = (1/M)·Σ_{m=1}^{M} P(x_{i,m}, θ)    (12)

where x_{i,m} denotes the m-th augmented sample of target sample i, M denotes the total number of augmented samples of target sample i, θ denotes the parameters of the object processing network 101, and P(x_{i,m}, θ) denotes the prediction result of the object processing network 101 for x_{i,m}.
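A short sketch of the correction of S303 follows. The weighted-sum form tracks the text (the probability of the inferred label serves as its weight); since the formula images are not reproduced in the source, this is one consistent reading of expressions (11) and (12), and `model` and `augment` are assumed callables returning/transforming probability vectors and inputs respectively.

```python
import numpy as np

def averaged_prediction(model, x, augment, M=4):
    """Expr. (12): average the model's predictions over M augmented copies of x."""
    return np.mean([model(augment(x)) for _ in range(M)], axis=0)

def corrected_label(rho_bar, inferred_label, p_i, num_classes):
    """Expr. (11): weighted sum of the one-hot inferred label and prediction p_i."""
    w = rho_bar[inferred_label]            # confidence of the inferred label
    one_hot = np.eye(num_classes)[inferred_label]
    return w * one_hot + (1.0 - w) * p_i
```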
In this embodiment of the present application, the clean samples and/or the noise samples can be used to train the object processing network 101. That is, the object processing network 101 can be trained with the clean samples alone or with the noise samples alone; of course, the object processing network 101 can also be trained with both the clean samples and the noise samples. When training the object processing network 101 with the clean samples and the noise samples, the clean samples and the noise samples can be used together as the training set. In another embodiment of the present application, the influence of the clean samples on the object processing network 101 can also be strengthened. Specifically, the clean samples can be augmented in the following way: for a target clean sample, one sample can be drawn from the clean samples and/or the noise samples and fused with the target clean sample. For images, the fusion may include, for example, superimposing image pixel information and superimposing the corrected labels. Training the object processing network with the fused samples strengthens the influence of the clean samples on the network while still exploiting the value of the noise samples, as the sketch below illustrates.
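The following is a mixup-style reading of the fusion just described: pixel information and corrected soft labels are superimposed with a random weight. Keeping the weight at least 0.5 keeps the clean sample dominant; the Beta-distributed weight and that bias are assumptions, since the source does not fix the mixing weights.

```python
import numpy as np

def fuse(clean_x, clean_y, other_x, other_y, alpha=4.0):
    """Fuse a clean sample (image + corrected soft label) with any other sample."""
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)                   # keep the clean sample as the basis
    x = lam * clean_x + (1.0 - lam) * other_x   # superimpose pixel information
    y = lam * clean_y + (1.0 - lam) * other_y   # superimpose corrected labels
    return x, y
```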
In a practical application environment, training on the same batch of data with the same machine learning model may yield two machine learning models with different performance, each model having its own advantages. On this basis, in an embodiment of the present application, multiple different object processing branch networks can each process the same noisy sample set and obtain the inferred label corresponding to each sample in the noisy sample set. Each object processing branch network can then send the inferred labels it has determined to the other object processing branch networks.
The method of the above embodiment is described below with reference to FIG. 6. As shown in FIG. 6, object processing network 101 and object processing network 101' are two different network branches trained on the same noisy sample set. To give the two network branches their own performance advantages, for example the ability to filter different types of noise samples, object processing network 101 and object processing network 101' can be given different initial network parameters, or can process the samples of the noisy sample set in different orders, so that object processing network 101 and object processing network 101' have different performance advantages. Specifically, object processing network 101 and object processing network 101' can each determine the inferred label of each sample in the manner of determining the inferred label of a target sample provided above. As shown in FIG. 6, object processing network 101 can determine first feature information of a first target sample in the noisy sample set, from which label inference module 103 can determine the inferred label of the first target sample; on the other branch, object processing network 101' can determine second feature information of a second target sample in the noisy sample set, from which label inference module 103' can determine the inferred label of the second target sample. According to the method of the above embodiment, object processing network 101 and object processing network 101' can exchange the inferred labels of the target samples they have determined. In this way, label correction module 105 corrects the label of the second target sample, while label correction module 105' corrects the label of the first target sample. For the correction, reference may be made to expressions (11) and (12) above, which are not repeated here. In an embodiment of the present application, the prediction results of object processing network 101 and object processing network 101' for the first target sample and the second target sample can also be incorporated when correcting the labels of the first target sample and the second target sample. As shown in FIG. 6, the prediction result of object processing network 101 for the first target sample can be passed to label correction module 105 and label correction module 105'; on the other branch, the prediction result of object processing network 101' for the second target sample can be passed to label correction module 105' and label correction module 105. In one example, p_i in expression (11) can include the prediction results of the two networks:

p_i = (1/2)·[P(x_i, θ) + P'(x_i, θ')]    (13)

where x_i denotes the first/second target sample i, θ denotes the parameters of object processing network 101, θ' denotes the parameters of object processing network 101', P(x_i, θ) denotes the prediction result of object processing network 101 for x_i, and P'(x_i, θ') denotes the prediction result of object processing network 101' for x_i.
In an embodiment of the present application, data augmentation of the target samples can also be taken into account, so the prediction result p_i may also take the following form:

p_i = (1/2M)·Σ_{m=1}^{M} [P(x_{i,m}, θ) + P'(x_{i,m}, θ')]    (14)

where x_{i,m} denotes the m-th augmented sample of the first/second target sample i, M denotes the total number of augmented samples of the first/second target sample i, θ denotes the parameters of object processing network 101, θ' denotes the parameters of object processing network 101', P(x_{i,m}, θ) denotes the prediction result of object processing network 101 for x_{i,m}, and P'(x_{i,m}, θ') denotes the prediction result of object processing network 101' for x_{i,m}.
It should be noted that FIG. 6 only shows the case of two network branches. In other embodiments with three or more network branches, the sample exchange can follow the rule that each network branch sends its samples to other network branches and obtains samples from other network branches. For example, with three networks, network 1 can send samples to network 2, network 2 can send samples to network 3, and network 3 can send samples to network 1.

In this embodiment of the present application, when multiple object processing branch networks are obtained by simultaneous training, the object to be processed can be input into each of the multiple object processing branch networks, and the multiple object processing branch networks output corresponding processing results. The average of the multiple processing results can then be taken as the final processing result for the object to be processed.
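A high-level sketch of the two-branch training of FIG. 6 follows: each branch infers labels, the branches exchange them, each branch is trained on corrected labels using the averaged predictions of both branches (expressions (13)/(14)), and inference averages the two outputs. The helpers `infer_labels` and `correct_and_train` are assumed wrappers around the steps sketched earlier, and the two branches are assumed to start from different initializations.

```python
def cotrain_epoch(net_a, net_b, noisy_set, infer_labels, correct_and_train):
    labels_a = infer_labels(net_a, noisy_set)   # branch A's inferred labels
    labels_b = infer_labels(net_b, noisy_set)   # branch B's inferred labels
    # Exchange: each branch is supervised with the other branch's inferences.
    correct_and_train(net_a, noisy_set, labels_b, predictors=(net_a, net_b))
    correct_and_train(net_b, noisy_set, labels_a, predictors=(net_a, net_b))

def predict(net_a, net_b, x):
    """Final result: average the processing results of the branch networks."""
    return 0.5 * (net_a(x) + net_b(x))
```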
The object processing method provided by the present application has been described in detail above with reference to FIGS. 1 to 6. The object processing apparatus 100 and the device 700 provided by the present application are described below with reference to the accompanying drawings.

Referring to the schematic structural diagram of the object processing apparatus 100 in the system architecture diagram of FIG. 2, as shown in FIG. 2, the apparatus 100 includes:

an object processing network 101, configured to output a processing result of an object to be processed, where the object processing network is obtained by training with a noisy sample set, the noisy sample set including at least one noise sample with an erroneous label;

a label inference module 103, configured to obtain an inferred label of a target sample in the noisy sample set; and

a label correction module 105, configured to correct the label of the target sample according to the inferred label, where the target sample with the corrected label is used to supervise the training of an initial object processing network, to obtain the object processing network.
Optionally, in an embodiment of the present application, the label inference module is specifically configured to:

determine, by using the object processing network, feature information of each sample in the noisy sample set;

determine a plurality of reference samples of the target sample according to the feature information, where the feature similarity between the target sample and each reference sample satisfies a preset condition; and

determine the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.

Optionally, in an embodiment of the present application, the feature similarity is determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.

Optionally, in an embodiment of the present application, the label inference module is specifically configured to:

determine a label probability distribution of the target sample according to the feature similarities between the target sample and the plurality of reference samples, where the label probability distribution includes the probability of the target sample for each label, and the labels include the labels in the noisy sample set; and

take the label with the highest probability in the label probability distribution as the inferred label of the target sample.

Optionally, in an embodiment of the present application, the inferred label of the target sample is determined by an object processing branch network, and the object processing branch network is likewise trained on the noisy sample set.

Optionally, in an embodiment of the present application, the label correction module is specifically configured to:

determine a prediction result of the target sample by using the object processing network; and

correct the label of the target sample according to the inferred label and the prediction result.

Optionally, in an embodiment of the present application, the prediction result includes a prediction result for the target sample or for samples obtained by data augmentation of the target sample.

Optionally, in an embodiment of the present application, the label correction module is specifically configured to:

take a weighted sum of the inferred label and the prediction result as the corrected label of the target sample, where the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.

Optionally, in an embodiment of the present application, the object processing network is specifically configured to:

determine whether the target sample is a clean sample or a noise sample according to whether the original label of the target sample is the same as the inferred label; and

train the initial object processing network with the clean samples and/or noise samples in the noisy sample set based on the corrected labels, to obtain the object processing network.

Optionally, in an embodiment of the present application, the object processing network is specifically configured to:

randomly draw samples from the clean samples and/or the noise samples and fuse them with a clean sample and its corrected label, to obtain fused samples; and

train the initial object processing network with the fused samples, to obtain the object processing network.
The object processing apparatus 100 according to this embodiment of the present application may correspond to performing the methods described in the embodiments of the present application, and the above and other operations and/or functions of the modules in the object processing apparatus 100 are respectively for implementing the corresponding flows of the methods in FIG. 3, FIG. 4, and FIG. 5; for brevity, details are not repeated here.

It should also be noted that the embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the apparatus embodiments provided by the present application, the connection relationships between modules indicate that they have communication connections between them, which may specifically be implemented as one or more communication buses or signal lines.
An embodiment of the present application further provides a device 700 for implementing the functions of the object processing apparatus 100 in the system architecture diagram shown in FIG. 2. The device 700 may be a physical device or a cluster of physical devices, or a virtualized cloud device, such as at least one cloud computing device in a cloud computing cluster. For ease of understanding, the present application illustrates the structure of the device 700 taking the device 700 as an independent physical device.

FIG. 7 provides a schematic structural diagram of a device 700. As shown in FIG. 7, the device 700 includes a bus 701, a processor 702, a communication interface 703, and a memory 704. The processor 702, the memory 704, and the communication interface 703 communicate via the bus 701. The bus 701 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses can be divided into address buses, data buses, control buses, and so on. For ease of representation, only one thick line is used in FIG. 7, but this does not mean there is only one bus or one type of bus. The communication interface 703 is used for communicating with the outside, for example, obtaining images and point cloud data of a target environment.
The processor 702 may be a central processing unit (CPU). The memory 704 may include volatile memory, such as random access memory (RAM). The memory 704 may also include non-volatile memory, such as read-only memory (ROM), flash memory, an HDD, or an SSD.

Executable code is stored in the memory 704, and the processor 702 executes the executable code to perform the aforementioned object processing method.

Specifically, when the embodiment shown in FIG. 2 is implemented, and the modules of the object processing apparatus 100 described in the embodiment of FIG. 2 are implemented in software, the software or program code required for the functions of the object processing network 101, the label inference module 103, and the label correction module 105 in FIG. 2 is stored in the memory 704. The processor 702 executes the program code corresponding to each module stored in the memory 704, such as the program code corresponding to the object processing network 101, the label inference module 103, and the label correction module 105, to determine the processing result of the object to be processed.

An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium includes instructions that instruct the device 700 to perform the above object processing method applied to the object processing apparatus 100.

An embodiment of the present application further provides a computer program product; when the computer program product is executed by a computer, the computer performs any one of the aforementioned object processing methods. The computer program product may be a software installation package; when any one of the aforementioned object processing methods needs to be used, the computer program product can be downloaded and executed on a computer.
From the above description of the implementations, those skilled in the art can clearly understand that the present application can be implemented by means of software plus the necessary general-purpose hardware, and of course also by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function performed by a computer program can easily be implemented with corresponding hardware, and the specific hardware structures used to implement the same function can be diverse, such as analog circuits, digital circuits, or dedicated circuits. For the present application, however, a software program implementation is in most cases the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer floppy disk, USB flash drive, removable hard disk, ROM, RAM, magnetic disk, or optical disc, and includes several instructions to cause a computer device (which may be a personal computer, training device, network device, or the like) to perform the methods described in the embodiments of the present application.

In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be wholly or partly implemented in the form of a computer program product.

The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are wholly or partly generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, training device, or data center to another website, computer, training device, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can store, or a data storage device, such as a training device or a data center, integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSD)).

The computer-readable program instructions or code described herein can be downloaded from a computer-readable storage medium to the respective computing/processing devices, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.

The computer program instructions used to perform the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider). In some embodiments, electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), are personalized with the state information of the computer-readable program instructions, and the electronic circuits can execute the computer-readable program instructions to implement various aspects of the present application.
Aspects of the present application are described herein with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present application. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other device to operate in a particular manner, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.

The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device, so that a series of operational steps are performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the drawings illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and computer program products according to multiple embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of instructions, which contains one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two successive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved.

It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by hardware that performs the corresponding functions or acts (e.g., circuits or ASICs (application-specific integrated circuits)), or by a combination of hardware and software, such as firmware.
Although the present invention has been described herein in conjunction with the embodiments, those skilled in the art, in practicing the claimed invention, can understand and effect other variations of the disclosed embodiments by studying the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

The embodiments of the present application have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (33)
- An object processing method, characterized by comprising: inputting an object to be processed into an object processing network, and outputting a processing result of the object to be processed via the object processing network; wherein the object processing network is obtained by training with a noisy sample set, the noisy sample set comprises at least one noise sample with an erroneous label, and the training comprises: obtaining an inferred label of a target sample in the noisy sample set; correcting the label of the target sample according to the inferred label; and supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network.
- The method according to claim 1, characterized in that the obtaining an inferred label of a target sample in the noisy sample set comprises: determining, by using the object processing network, feature information of each sample in the noisy sample set; determining a plurality of reference samples of the target sample according to the feature information, wherein the feature similarity between the target sample and each reference sample satisfies a preset condition; and determining the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.
- The method according to claim 2, characterized in that the feature similarity is determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.
- The method according to claim 2 or 3, characterized in that the determining the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples comprises: determining a label probability distribution of the target sample according to the feature similarities between the target sample and the plurality of reference samples, wherein the label probability distribution comprises the probability of the target sample for each label, and the labels comprise the labels in the noisy sample set; and taking the label with the highest probability in the label probability distribution as the inferred label of the target sample.
- The method according to claim 1, characterized in that the inferred label of the target sample is determined by an object processing branch network, and the object processing branch network is likewise trained on the noisy sample set.
- The method according to any one of claims 1-5, characterized in that the correcting the label of the target sample according to the inferred label comprises: determining a prediction result of the target sample by using the object processing network; and correcting the label of the target sample according to the inferred label and the prediction result.
- The method according to claim 6, characterized in that the prediction result comprises a prediction result for the target sample or for samples obtained by data augmentation of the target sample.
- The method according to claim 6 or 7, characterized in that the correcting the label of the target sample according to the inferred label and the prediction result comprises: taking a weighted sum of the inferred label and the prediction result as the corrected label of the target sample, wherein the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- The method according to any one of claims 1-8, characterized in that the supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network, comprises: determining whether the target sample is a clean sample or a noise sample according to whether the original label of the target sample is the same as the inferred label; and training the initial object processing network with the clean samples and/or noise samples in the noisy sample set based on the corrected labels, to obtain the object processing network.
- The method according to claim 9, characterized in that the training the initial object processing network with the clean samples and noise samples in the noisy sample set, to obtain the object processing network, comprises: randomly drawing samples from the clean samples and/or the noise samples and fusing them with a clean sample and its corrected label, to obtain fused samples; and training the initial object processing network with the fused samples, to obtain the object processing network.
- A method for generating an object processing network, characterized in that the object processing network is obtained by training with a noisy sample set, the noisy sample set comprises at least one noise sample with an erroneous label, and the method comprises: obtaining an inferred label of a target sample in the noisy sample set; correcting the label of the target sample according to the inferred label; and supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network.
- The method according to claim 11, characterized in that the obtaining an inferred label of a target sample in the noisy sample set comprises: determining, by using the object processing network, feature information of each sample in the noisy sample set; determining a plurality of reference samples of the target sample according to the feature information, wherein the feature similarity between the target sample and each reference sample satisfies a preset condition; and determining the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.
- The method according to claim 12, characterized in that the feature similarity is determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.
- The method according to claim 12 or 13, characterized in that the determining the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples comprises: determining a label probability distribution of the target sample according to the feature similarities between the target sample and the plurality of reference samples, wherein the label probability distribution comprises the probability of the target sample for each label, and the labels comprise the labels in the noisy sample set; and taking the label with the highest probability in the label probability distribution as the inferred label of the target sample.
- The method according to claim 11, characterized in that the inferred label of the target sample is determined by an object processing branch network, and the object processing branch network is likewise trained on the noisy sample set.
- The method according to any one of claims 11-15, characterized in that the correcting the label of the target sample according to the inferred label comprises: determining a prediction result of the target sample by using the object processing network; and correcting the label of the target sample according to the inferred label and the prediction result.
- The method according to claim 16, characterized in that the prediction result comprises a prediction result for the target sample or for samples obtained by data augmentation of the target sample.
- The method according to claim 16 or 17, characterized in that the correcting the label of the target sample according to the inferred label and the prediction result comprises: taking a weighted sum of the inferred label and the prediction result as the corrected label of the target sample, wherein the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- The method according to any one of claims 11-18, characterized in that the supervising the training of an initial object processing network with the target sample with the corrected label, to obtain the object processing network, comprises: determining whether the target sample is a clean sample or a noise sample according to whether the original label of the target sample is the same as the inferred label; and training the initial object processing network with the clean samples and/or noise samples in the noisy sample set based on the corrected labels, to obtain the object processing network.
- The method according to claim 19, characterized in that the training the initial object processing network with the clean samples and noise samples in the noisy sample set, to obtain the object processing network, comprises: randomly drawing samples from the clean samples and/or the noise samples and fusing them with a clean sample and its corrected label, to obtain fused samples; and training the initial object processing network with the fused samples, to obtain the object processing network.
- An object processing apparatus, characterized by comprising: an object processing network, configured to output a processing result of an object to be processed, wherein the object processing network is obtained by training with a noisy sample set, the noisy sample set comprising at least one noise sample with an erroneous label; a label inference module, configured to obtain an inferred label of a target sample in the noisy sample set; and a label correction module, configured to correct the label of the target sample according to the inferred label, wherein the target sample with the corrected label is used to supervise the training of an initial object processing network, to obtain the object processing network.
- The apparatus according to claim 21, characterized in that the label inference module is specifically configured to: determine, by using the object processing network, feature information of each sample in the noisy sample set; determine a plurality of reference samples of the target sample according to the feature information, wherein the feature similarity between the target sample and each reference sample satisfies a preset condition; and determine the inferred label of the target sample according to the feature similarities between the target sample and the plurality of reference samples.
- The apparatus according to claim 22, characterized in that the feature similarity is determined according to a first feature distance between the target sample and the reference sample and a second feature distance between the reference sample and the class center corresponding to its label, and the feature similarity is negatively correlated with both the first feature distance and the second feature distance.
- The apparatus according to claim 22 or 23, characterized in that the label inference module is specifically configured to: determine a label probability distribution of the target sample according to the feature similarities between the target sample and the plurality of reference samples, wherein the label probability distribution comprises the probability of the target sample for each label, and the labels comprise the labels in the noisy sample set; and take the label with the highest probability in the label probability distribution as the inferred label of the target sample.
- The apparatus according to claim 21, characterized in that the inferred label of the target sample is determined by an object processing branch network, and the object processing branch network is likewise trained on the noisy sample set.
- The apparatus according to any one of claims 21-25, characterized in that the label correction module is specifically configured to: determine a prediction result of the target sample by using the object processing network; and correct the label of the target sample according to the inferred label and the prediction result.
- The apparatus according to claim 26, characterized in that the prediction result comprises a prediction result for the target sample or for samples obtained by data augmentation of the target sample.
- The apparatus according to claim 26 or 27, characterized in that the label correction module is specifically configured to: take a weighted sum of the inferred label and the prediction result as the corrected label of the target sample, wherein the weights of the inferred label and the prediction result are determined according to the probability corresponding to the inferred label.
- The apparatus according to any one of claims 21-28, characterized in that the object processing network is specifically configured to: determine whether the target sample is a clean sample or a noise sample according to whether the original label of the target sample is the same as the inferred label; and train the initial object processing network with the clean samples and/or noise samples in the noisy sample set based on the corrected labels, to obtain the object processing network.
- The apparatus according to claim 29, characterized in that the object processing network is specifically configured to: randomly draw samples from the clean samples and/or the noise samples and fuse them with a clean sample and its corrected label, to obtain fused samples; and train the initial object processing network with the fused samples, to obtain the object processing network.
- An object processing apparatus, characterized by comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement the method of any one of claims 1-10 or claims 11-20 when executing the instructions.
- A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method of any one of claims 1-10 or claims 11-20.
- A computer program product, characterized by comprising computer-readable code, wherein, when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device performs the method of any one of claims 1-10 or claims 11-20.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110276806.3A (published as CN115147670A) | 2021-03-15 | 2021-03-15 | Object processing method and apparatus
CN202110276806.3 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022194049A1 true WO2022194049A1 (zh) | 2022-09-22 |
Family
ID=83321615
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/080397 (WO2022194049A1) | Object processing method and apparatus | 2021-03-15 | 2022-03-11
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115147670A (zh) |
WO (1) | WO2022194049A1 (zh) |
Families Citing this family (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116012569B * | 2023-03-24 | 2023-08-15 | Guangdong University of Technology | Deep-learning-based multi-label image recognition method for noisy data
Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN104657745A (zh) | 2015-01-29 | 2015-05-27 | Institute of Information Engineering, Chinese Academy of Sciences | Method for maintaining labeled samples and interactive classification method with bidirectional learning
CN108898166A (zh) | 2018-06-13 | 2018-11-27 | Beijing Information Science and Technology University | Image annotation method
CN110363228A (zh) | 2019-06-26 | 2019-10-22 | Nanjing University of Science and Technology | Noisy label correction method
CN111414946A (zh) | 2020-03-12 | 2020-07-14 | Tencent Technology (Shenzhen) Co., Ltd. | Artificial-intelligence-based noisy data identification method for medical images and related apparatus
- 2021-03-15: Chinese application CN202110276806.3A filed; published as CN115147670A (status: pending)
- 2022-03-11: PCT application PCT/CN2022/080397 filed; published as WO2022194049A1 (status: application filing)
Also Published As
Publication number | Publication date |
---|---|
CN115147670A (zh) | 2022-10-04 |
Similar Documents

Publication | Publication Date | Title
---|---|---
WO2021136365A1 (zh) | | Machine learning model-based application development method and apparatus, and electronic device
WO2019169688A1 (zh) | | Vehicle damage assessment method and apparatus, electronic device, and storage medium
EP3227836B1 | | Active machine learning
US20190279088A1 | | Training method, apparatus, chip, and system for neural network model
WO2020063314A1 (zh) | | Character segmentation and recognition method and apparatus, electronic device, and storage medium
WO2018166114A1 (zh) | | Picture recognition method and system, electronic device, and medium
CN111523640B (zh) | | Training method and apparatus for neural network model
CN106897746B (zh) | | Data classification model training method and apparatus
WO2020253127A1 (zh) | | Facial feature extraction model training method, facial feature extraction method, apparatus, device, and storage medium
EP3620982B1 | | Sample processing method and device
JP7483005B2 | | Data label verification
JP2017224027A | | Machine learning method, computer, and program for a data labeling model
US11403560B2 | | Training apparatus, image recognition apparatus, training method, and program
JP7480811B2 | | Sample analysis method, electronic apparatus, computer-readable storage medium, and computer program
EP4220555A1 | | Training method and apparatus for image segmentation model, image segmentation method and apparatus, and device
WO2020168754A1 (zh) | | Prediction-model-based performance prediction method, apparatus, and storage medium
CN111652320B (zh) | | Sample classification method and apparatus, electronic device, and storage medium
EP4343616A1 | | Image classification method, model training method, device, storage medium, and computer program
WO2022194049A1 (zh) | | Object processing method and apparatus
CN113011532A (zh) | | Classification model training method and apparatus, computing device, and storage medium
US20200050899A1 | | Automatically filtering out objects based on user preferences
WO2024146266A1 (zh) | | Model training method and apparatus, electronic device, medium, and program product
US20230289597A1 | | Method and a system for generating secondary tasks for neural networks
US11875554B2 | | Method for generating image label, and device
CN115879002A (zh) | | Training sample generation method, model training method, and apparatus
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22770404; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22770404; Country of ref document: EP; Kind code of ref document: A1