CN113869512A - Supplementary label learning method based on self-supervision and self-distillation - Google Patents
Supplementary label learning method based on self-supervision and self-distillation
Info
- Publication number
- CN113869512A (application number CN202111177718.4A)
- Authority
- CN
- China
- Prior art keywords
- network
- self-supervision
- distillation
- data
- Prior art date
- 2021-10-09
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a supplementary label learning method based on self-supervision and self-distillation, comprising the following steps: constructing a self-supervision task, constructing a classification network, training the network, and constructing a knowledge distillation mechanism. The self-supervision task yields better feature representations, so the network learns a better model and the resulting classifier performs better. The self-distillation step mines the information contained in the trained model and uses it to construct a teacher-student network, which further improves model performance and greatly increases accuracy. The deep-learning loss function achieves good performance and allows the whole method to be trained end to end.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a supplementary label learning method based on self-supervision and self-distillation.
Background
Supplementary label learning (also known as complementary label learning) is a classical weakly labeled learning problem. In this task we only know that a given sample does not belong to a certain class, but we do not know the sample's true label; the final goal is to learn a classifier that can output the correct label. Existing deep-learning-based schemes mainly propose different loss functions for this problem so that it can be trained directly in an end-to-end manner;
the existing methods neglect how to obtain more information from the data and the model, and in fact, the original data contains the overall distribution characteristics of the data, and if only the label information is supplemented by the original data, an effective classifier is difficult to obtain.
Therefore, a supplementary label learning method based on self-supervision and self-distillation is proposed to solve the problems in the prior art, further improve the performance of the model, and greatly increase its accuracy.
Disclosure of Invention
The invention aims to provide a supplementary label learning method based on self-supervision and self-distillation, so as to solve the problem, raised in the background art, that an effective classifier is difficult to obtain with existing methods.
In order to achieve the purpose, the invention provides the following technical scheme: the supplementary label learning method based on self-supervision and self-distillation comprises the following steps:
s1, constructing an automatic supervision mode, namely firstly, calculating in a data set by using a data mining method, collecting data used for training a data mining model in the process, namely training data, and then constructing a corresponding automatic supervision mode based on the characteristics of the existing training data;
s2, constructing a classification network, and then constructing a multi-task classification network based on the self-supervision data constructed in the S1 and the original data based on the supplementary tags, wherein the first task is based on the traditional supplementary tags, the second task is based on the constructed self-supervision tasks, the tasks based on the self-supervision adopt the traditional cross entropy loss function, and the data based on the supplementary tags adopts the following loss function:
s3, training the network, then training the network based on the data and the loss function in S2, and training the network in an end-to-end mode based on a multitask mode, wherein the trained loss function is the sum of two task loss functions, and the specific forms of the self-supervision loss and the supplementary tag loss are as follows:
s4, constructing a knowledge distillation mechanism, and finally constructing the knowledge distillation mechanism based on the trained network after finishing the data training based on the self-supervision and supplementary labels, wherein the trained model is used as a teacher network, and a network with the same structure is selected as a student network, and then information is provided for students based on the output of the teacher network, the knowledge distillation is to train the small model by constructing a light-weight small model and utilizing the supervision information of the large model with better performance to achieve better performance and precision, compared with the traditional off-line distillation mode, the self-supervision distillation mode does not need to train a teacher network model in advance, but the training of the student network completes a distillation process, and the distillation mechanism is as follows:
preferably, for the common data in S1, the different classes of data are constructed by using a rotation matrix, which is as follows:
T(x)=Wx+b,
preferably, the cross entropy loss function in S2 can not only measure the effect of the model, but also minimize the output result of the supplementary tag, and in addition, the cross entropy can be used to determine how close the actual output is to the expected output.
Preferably, the loss function in S3 is an indispensable component for training the neural network; the loss function measures the performance of the model numerically and updates the network by taking gradients with respect to the network parameters.
Preferably, in the distillation process in S4, besides learning the knowledge of the teacher network, the student must still satisfy the original supplementary label information, and the self-supervision information must also be satisfied during distillation.
Preferably, in S1, the image is rotated by 0 °, 90 °, 180 ° and 270 ° to construct corresponding data.
Preferably, the multi-task classification network in S2 is constructed by loading the network weights of the seed network into the shared feature extraction network of the multi-task classification network and freezing the weights of the shared feature extraction network.
Preferably, the teacher network in S4 is configured to construct a teacher-student network based on the trained network information, so that the student network learns hidden information of the teacher network, and a better classification performance is finally obtained.
The invention has the technical effects and advantages that:
according to the method, a better characteristic expression is obtained by using a self-supervision mode, so that a model for network learning is better, the performance of a classifier is better finally, information contained in the model is mined out by using a self-distillation mode, and a teacher-student network is constructed by using the information, so that the performance of the model can be further improved, the accuracy of the model is greatly improved, a good performance is obtained to a certain extent based on a loss function of deep learning, and the aim of end-to-end training can be fulfilled.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a supplementary label learning method based on self-supervision and self-distillation, which comprises the following steps:
s1, constructing a self-supervision mode, firstly, calculating in a data set by using a data mining method, and collecting data used for training a data mining model in the process, namely training data, wherein the training data generally has the following requirements: the method includes the steps that data samples are as large as possible, data are diversified, the quality of the data samples is high, then a corresponding self-supervision mode is constructed based on the existing training data characteristics, for image data, an image rotation mode is adopted, corresponding data are constructed by rotating images by 0 degrees, 90 degrees, 180 degrees and 270 degrees respectively, then image data rotated by different angles serve as different categories to construct a supervised learning task, and for common data in S1, different categories of data are constructed in a rotation matrix mode, and the specific mode is as follows:
T(x)=Wx+b,
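The patent text contains no code; purely as an illustration, a minimal sketch of the rotation-based self-supervision construction could look as follows. PyTorch, the function names, and the tensor layout are assumptions, not part of the patent.

```python
import torch

def build_rotation_task(images):
    """Create self-supervision data: each image is rotated by 0, 90, 180 and 270 degrees,
    and the rotation index (0-3) serves as the pseudo-label.
    `images` is assumed to be a float tensor of shape (N, C, H, W)."""
    rotated, pseudo_labels = [], []
    for k in range(4):  # rotate by k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        pseudo_labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(pseudo_labels)

def affine_view(x, W, b):
    """For general (non-image) data, a transformed view T(x) = Wx + b,
    e.g. with W a rotation matrix, plays the same role as image rotation."""
    return x @ W.T + b
```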
the self-supervision learning mode can be regarded as an ideal state of machine learning, a model directly learns by itself from non-label data without marking data, training data is data used for constructing a data mining model in the data mining process, the more important step in the data mining is classification, the classification concept is to learn a classification function or construct a classification model on the basis of the existing data, and the function or the model can map data records in a database to one of given classes, so that the function or the model can be applied to data prediction;
the construction and implementation of the classifier generally goes through the following steps:
selecting samples, and dividing all the samples into a training sample and a test sample;
executing a classifier algorithm on the training samples to generate a classification model;
executing a classification model on the test sample to generate a prediction result;
and calculating necessary evaluation indexes according to the prediction result, and evaluating the performance of the classification model.
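For illustration only, the generic build-and-evaluate workflow above might be realized as follows; scikit-learn and the iris data set are assumptions used just to make the sketch runnable.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# 1) select samples and split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# 2) run the classification algorithm on the training set to build a model
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# 3) apply the model to the test set to generate predictions
y_pred = model.predict(X_test)
# 4) compute an evaluation metric to assess the model
print("accuracy:", accuracy_score(y_test, y_pred))
```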
S2, constructing a classification network: a multi-task classification network is then built from the self-supervision data constructed in S1 and the original supplementary-label data, wherein the first task is based on the traditional supplementary labels and the second task is based on the constructed self-supervision labels; the self-supervision task adopts the traditional cross-entropy loss function, and the supplementary-label data adopts the following loss function:
the cross entropy loss function can not only measure the effect of the model, but also minimize the output result of the supplementary label, in addition, the cross entropy can be used for judging the closeness degree of the actual output and the expected output, the network weight of the seed network is required to be loaded into the shared feature extraction network of the multi-task classification network when the multi-task classification network is constructed, and the network weight in the shared feature extraction network is frozen;
The cross-entropy loss function is a smooth function; in essence it applies the cross-entropy from information theory to classification problems. By definition, minimizing the cross-entropy is equivalent to minimizing the relative entropy between the observed and estimated distributions, so minimizing the cross-entropy is a proxy loss that provides an unbiased estimate. An illustrative sketch of such a shared-encoder, two-head network and its losses is given below;
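The supplementary-label loss itself is defined by the patent's formula, which is not reproduced here. As a hedged illustration, the sketch below builds a shared-encoder, two-head network and substitutes a common loss from the complementary-label literature (penalizing probability on the forbidden class); the architecture, layer sizes, and that substitute loss are all assumptions rather than the patent's definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    """Shared feature extractor with two heads: one for the supplementary-label task
    (num_classes outputs) and one for the rotation self-supervision task (4 outputs).
    The architecture is a stand-in, not the one claimed in the patent."""
    def __init__(self, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(               # shared feature extraction network
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(32, num_classes)  # supplementary-label head
        self.rot_head = nn.Linear(32, 4)            # self-supervision (rotation) head

    def forward(self, x):
        z = self.encoder(x)
        return self.cls_head(z), self.rot_head(z)

def supplementary_label_loss(logits, comp_labels):
    """Stand-in loss from the complementary-label literature (NOT the patent's formula):
    push down the probability assigned to the class the sample is known NOT to belong to."""
    p = F.softmax(logits, dim=1)
    p_bar = p.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_bar + 1e-8).mean()

def self_supervision_loss(rot_logits, rot_labels):
    """Ordinary cross-entropy on the rotation pseudo-labels."""
    return F.cross_entropy(rot_logits, rot_labels)
```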
s3, training the network, then training the network based on the data and the loss function in S2, and training the network in an end-to-end mode based on a multitask mode, wherein the trained loss function is the sum of two task loss functions, and the specific forms of the self-supervision loss and the supplementary tag loss are as follows:
the loss function is an indispensable configuration for training a neural network, the performance of a model is measured by the loss function through numerical values, the updating network is generated by solving gradients for network parameters, the loss function in deep learning has two basic requirements, one is a function for returning a single numerical value, the other is a function for returning a single numerical value, the loss function is almost derivable everywhere, the value of the loss function determines the current training effect, and the gradients generated by the loss function guide the iterative updating of the parameters;
s4, constructing a knowledge distillation mechanism, and finally constructing the knowledge distillation mechanism based on the trained network after finishing the data training based on the self-supervision and supplementary labels, wherein the trained model is used as a teacher network, and a network with the same structure is selected as a student network, and then information is provided for students based on the output of the teacher network, the knowledge distillation is to train the small model by constructing a light-weight small model and utilizing the supervision information of the large model with better performance to achieve better performance and precision, compared with the traditional off-line distillation mode, the self-supervision distillation mode does not need to train a teacher network model in advance, but the training of the student network completes a distillation process, and the distillation mechanism is as follows:
the knowledge distillation refers to transferring the knowledge of a pre-trained teacher model to a student model in a distillation mode, the self-distillation refers to distilling the knowledge to the teacher model, the teacher network is used for constructing a teacher student network based on trained network information, the student network learns hidden information of the teacher network, and finally a better classification performance is obtained;
Knowledge distillation can improve model accuracy, reduce model latency, compress network parameters, perform domain transfer between image labels, and reduce the amount of annotation required. When distillation is used, a student network with fewer parameters is usually chosen; compared with the teacher, such a lightweight network cannot learn the latent relations hidden in the data set as well on its own. The goal of distillation is to let the student learn the generalization ability of the teacher, and in theory the result is better than a student that simply fits the training data. An illustrative distillation sketch follows.
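As a final illustration, the distillation step could be sketched as follows, treating the already-trained multi-task model as the teacher and a same-architecture network as the student, with a temperature-scaled KL term plus the original supplementary-label term; the temperature, weighting, and exact objective are assumptions rather than the patent's formula.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, comp_labels, T=4.0, alpha=0.5):
    """Illustrative self-distillation objective (not the patent's exact formula):
    a temperature-scaled KL term matching the student to the teacher, plus the
    original supplementary-label term that the student must still satisfy."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    cl = supplementary_label_loss(student_logits, comp_labels)
    return alpha * kd + (1.0 - alpha) * cl

def distill(teacher, student, loader, num_epochs=10, lr=1e-3):
    """Train a same-architecture student using the frozen, already-trained teacher's outputs."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(num_epochs):
        for images, comp_labels in loader:
            with torch.no_grad():
                t_logits, _ = teacher(images)
            s_logits, _ = student(images)
            loss = distillation_loss(s_logits, t_logits, comp_labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```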
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.
Claims (8)
1. A supplementary label learning method based on self-supervision and self-distillation, characterized in that the method comprises the following steps:
s1, constructing an automatic supervision mode, namely firstly, calculating in a data set by using a data mining method, collecting data used for training a data mining model in the process, namely training data, and then constructing a corresponding automatic supervision mode based on the characteristics of the existing training data;
s2, constructing a classification network, and then constructing a multi-task classification network based on the self-supervision data constructed in the S1 and the original data based on the supplementary tags, wherein the first task is based on the traditional supplementary tags, the second task is based on the constructed self-supervision tasks, the tasks based on the self-supervision adopt the traditional cross entropy loss function, and the data based on the supplementary tags adopts the following loss function:
s3, training the network, then training the network based on the data and the loss function in S2, and training the network in an end-to-end mode based on a multitask mode, wherein the trained loss function is the sum of two task loss functions, and the specific forms of the self-supervision loss and the supplementary tag loss are as follows:
s4, constructing a knowledge distillation mechanism, and finally constructing the knowledge distillation mechanism based on the trained network after finishing the data training based on the self-supervision and supplementary labels, wherein the trained model is used as a teacher network, and a network with the same structure is selected as a student network, and then information is provided for students based on the output of the teacher network, the knowledge distillation is to train the small model by constructing a light-weight small model and utilizing the supervision information of the large model with better performance to achieve better performance and precision, compared with the traditional off-line distillation mode, the self-supervision distillation mode does not need to train a teacher network model in advance, but the training of the student network completes a distillation process, and the distillation mechanism is as follows:
2. The self-supervision and self-distillation based supplementary label learning method according to claim 1, characterized in that: for general data in S1, data of different classes are constructed by using a rotation matrix, as follows:
T(x)=Wx+b,
3. The self-supervision and self-distillation based supplementary label learning method according to claim 1, characterized in that: the cross-entropy loss function in S2 not only measures the performance of the model but also minimizes the output on the supplementary label; in addition, the cross-entropy can be used to judge how close the actual output is to the expected output.
4. The self-supervision and self-distillation based supplementary label learning method according to claim 1, characterized in that: the loss function in S3 is an indispensable component for training the neural network; the loss function measures the performance of the model numerically and updates the network by taking gradients with respect to the network parameters.
5. The self-supervision and self-distillation based supplementary label learning method according to claim 1, characterized in that: in the distillation process in S4, besides learning the knowledge of the teacher network, the student must still satisfy the original supplementary label information, and the self-supervision information must also be satisfied during distillation.
6. The self-supervision and self-distillation based supplementary label learning method according to claim 1, characterized in that: in S1, the image is rotated by 0°, 90°, 180° and 270°, respectively, to construct corresponding data.
7. The self-supervision and self-distillation based supplementary label learning method according to claim 1, characterized in that: when the multi-task classification network in S2 is constructed, the network weights of the seed network need to be loaded into the shared feature extraction network of the multi-task classification network, and the weights of the shared feature extraction network are frozen.
8. The self-supervision and self-distillation based supplementary label learning method according to claim 1, characterized in that: the teacher network in S4 is used to construct a teacher-student network based on the trained network information, so that the student network learns the hidden information of the teacher network, and better classification performance is finally obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111177718.4A CN113869512B (en) | 2021-10-09 | 2021-10-09 | Self-supervision and self-distillation-based supplementary tag learning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113869512A true CN113869512A (en) | 2021-12-31 |
CN113869512B CN113869512B (en) | 2024-05-21 |
Family
ID=79002330
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111177718.4A Active CN113869512B (en) | 2021-10-09 | 2021-10-09 | Self-supervision and self-distillation-based supplementary tag learning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113869512B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019136946A1 (en) * | 2018-01-15 | 2019-07-18 | 中山大学 | Deep learning-based weakly supervised salient object detection method and system |
WO2021023202A1 (en) * | 2019-08-07 | 2021-02-11 | 交叉信息核心技术研究院(西安)有限公司 | Self-distillation training method and device for convolutional neural network, and scalable dynamic prediction method |
CN113139651A (en) * | 2020-01-20 | 2021-07-20 | 北京三星通信技术研究有限公司 | Training method and device of label proportion learning model based on self-supervision learning |
CN112465111A (en) * | 2020-11-17 | 2021-03-09 | 大连理工大学 | Three-dimensional voxel image segmentation method based on knowledge distillation and countertraining |
CN112418343A (en) * | 2020-12-08 | 2021-02-26 | 中山大学 | Multi-teacher self-adaptive joint knowledge distillation |
CN113378940A (en) * | 2021-06-15 | 2021-09-10 | 北京市商汤科技开发有限公司 | Neural network training method and device, computer equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
葛仕明; 赵胜伟; 刘文瑜; 李晨钰: "Face recognition based on deep feature distillation" (基于深度特征蒸馏的人脸识别), Journal of Beijing Jiaotong University, no. 06, 15 December 2017 (2017-12-15) *
赵胜伟; 葛仕明; 叶奇挺; 罗朝; 李强: "Traffic sign classification based on enhanced supervision knowledge distillation" (基于增强监督知识蒸馏的交通标识分类), China Sciencepaper, no. 20, 23 October 2017 (2017-10-23) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114863248A (en) * | 2022-03-02 | 2022-08-05 | 武汉大学 | Image target detection method based on deep supervision self-distillation |
CN114863248B (en) * | 2022-03-02 | 2024-04-26 | 武汉大学 | Image target detection method based on deep supervision self-distillation |
CN114972839A (en) * | 2022-03-30 | 2022-08-30 | 天津大学 | Generalized continuous classification method based on online contrast distillation network |
CN114972839B (en) * | 2022-03-30 | 2024-06-25 | 天津大学 | Generalized continuous classification method based on online comparison distillation network |
CN116415005A (en) * | 2023-06-12 | 2023-07-11 | 中南大学 | Relationship extraction method for academic network construction of scholars |
CN116415005B (en) * | 2023-06-12 | 2023-08-18 | 中南大学 | Relationship extraction method for academic network construction of scholars |
Also Published As
Publication number | Publication date |
---|---|
CN113869512B (en) | 2024-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113869512A (en) | Supplementary label learning method based on self-supervision and self-distillation | |
CN111724083B (en) | Training method and device for financial risk identification model, computer equipment and medium | |
CN108108808B (en) | Position prediction method and device based on deep belief network | |
Mota-Vargas et al. | Taxonomy and ecological niche modeling: Implications for the conservation of wood partridges (genus Dendrortyx) | |
CN111210111B (en) | Urban environment assessment method and system based on online learning and crowdsourcing data analysis | |
CN105069483B (en) | The method that a kind of pair of categorized data set is tested | |
CN111126576A (en) | Novel training strategy for deep learning | |
CN112052818A (en) | Unsupervised domain adaptive pedestrian detection method, unsupervised domain adaptive pedestrian detection system and storage medium | |
CN108009571A (en) | A kind of semi-supervised data classification method of new direct-push and system | |
CN112288013A (en) | Small sample remote sensing scene classification method based on element metric learning | |
CN115984653B (en) | Construction method of dynamic intelligent container commodity identification model | |
CN111160526B (en) | Online testing method and device for deep learning system based on MAPE-D annular structure | |
CN109993188B (en) | Data tag identification method, behavior identification method and device | |
CN112801162B (en) | Adaptive soft label regularization method based on image attribute prior | |
WO2021258482A1 (en) | Beauty prediction method and device based on migration and weak supervision, and storage medium | |
CN112102015A (en) | Article recommendation method, meta-network processing method, device, storage medium and equipment | |
CN116108195A (en) | Dynamic knowledge graph prediction method and device based on time sequence element learning | |
CN114627085A (en) | Target image identification method and device, storage medium and electronic equipment | |
CN113837220A (en) | Robot target identification method, system and equipment based on online continuous learning | |
Serrano et al. | Inter-task similarity measure for heterogeneous tasks | |
CN110796195B (en) | Image classification method including online small sample excitation | |
CN112182422A (en) | Skill recommendation method, skill recommendation device, electronic identification and medium | |
CN111563413A (en) | Mixed dual-model-based age prediction method | |
Netto et al. | Prediction of environmental conditions for maritime navigation using a network of sensors: A practical application of graph neural networks | |
CN114896479B (en) | Online learning method, system and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |