CN117828356A - Binary collaborative data balance optimization method, system and storage medium - Google Patents

Binary collaborative data balance optimization method, system and storage medium Download PDF

Info

Publication number
CN117828356A
Authority
CN
China
Prior art keywords
classifier
binary
data
model
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410232496.9A
Other languages
Chinese (zh)
Other versions
CN117828356B (en
Inventor
韩婷婷
刘如倩
窦淑伟
张文霞
郎吉豪
李文轩
韩纪星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Normal University
Original Assignee
Tianjin Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Normal University filed Critical Tianjin Normal University
Priority to CN202410232496.9A priority Critical patent/CN117828356B/en
Publication of CN117828356A publication Critical patent/CN117828356A/en
Application granted granted Critical
Publication of CN117828356B publication Critical patent/CN117828356B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of model training and discloses a binary collaborative data balance optimization method, system and storage medium. The method comprises the following steps: constructing a binary data set on the basis of the basic data set, the binary data set comprising a minority class set and a majority class set; training a main classifier and an auxiliary classifier cooperatively, the main classifier being trained with the original sample data and the auxiliary classifier with the binary data set; correcting the probability vector of each class output by the main classifier according to the output of the auxiliary classifier; and calculating the gradient of the loss function with respect to the model parameters from the total loss and updating the model parameters by gradient-descent optimization. The method helps improve the performance of the overall model on multi-classification tasks, especially when the data for the multi-classification task are relatively scarce or class-imbalanced; at the same time, the joint learning of the main classifier and the auxiliary classifier helps the overall model learn shared features and representations, improving its efficiency and performance.

Description

Binary collaborative data balance optimization method, system and storage medium
Technical Field
The invention relates to the technical field of model training, and in particular to a binary collaborative data balance optimization method, system and storage medium.
Background
Data imbalance is a common challenge in machine learning and data mining: in a classification problem, the numbers of samples in different classes differ significantly. The problem is widespread in applications such as natural language processing and image classification. The essence of data imbalance is that some classes have far fewer samples than others, which can cause machine learning models to perform poorly in both training and prediction. Handling data imbalance is therefore an important step in model training. The problem affects different stages of training, mainly the following. Data preparation: the imbalance first manifests itself when the training set is built, where an unequal number of samples per class can cause the model to over-rely on the well-represented classes and learn little about the scarce ones. Feature engineering: because the classes are unevenly distributed, certain features may carry a strong signal in the well-represented classes but a weak one in the scarce classes, creating problems for feature selection and transformation under unbalanced data. Model training: during training, the model tends to predict the well-represented classes because doing so minimizes the training error, which leads to poor recognition of the minority classes, of which the model has learned little.
Defects and deficiencies of the prior art:
in dealing with unbalanced data sets, the strategies commonly employed are: (1) Resampling. Oversampling increases the weight of minority classes in model training by duplicating their samples, but this can lead to overfitting, since repeated samples may make the model too sensitive to certain specific samples. Conversely, undersampling reduces the number of majority-class samples, but this causes information loss, as valuable data points are removed, degrading the overall performance of the model. (2) Threshold adjustment. The sensitivity of the model is adjusted by changing the classification threshold. However, this approach usually ignores the relationships between classes and may cause misclassification; simple thresholding cannot always handle complex class-imbalance conditions. (3) Generating synthetic samples. Synthetic samples may not exactly match the distribution of the real data, so the trained model is inaccurate; this can be particularly severe when the model comes to depend on the synthetic rather than the real data. (4) Ensemble learning. Different ensemble methods adapt to unbalanced data sets to varying degrees, and experience is usually required to select and tune a suitable method, which increases the complexity of model development. (5) Changing the evaluation metric. This only affects how the model is evaluated and does not directly improve its performance, so it does not solve the fundamental problem, namely the model's behaviour on unbalanced data. (6) Weight adjustment. This attempts to balance the data set by giving higher weights to the minority classes, but it is not always sufficient, because the model may still fail to capture the features of the minority classes adequately; this is especially problematic when the minority classes are very rare. (7) Loss-function design. This generally requires domain expertise and is therefore not applicable to all problems; moreover, it may increase the complexity of model development. (8) Multi-label classification. This works well for certain types of unbalanced data sets but not in all cases, and it can increase the complexity of the model, making it difficult to manage.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention aims to provide a binary collaborative data balance optimization method, a binary collaborative data balance optimization system and a binary collaborative data balance optimization storage medium.
In order to achieve the above object, the present invention provides the following technical solutions:
a binary collaborative data balance optimization method comprises the following steps:
constructing a binary data set on the basis of the basic data set, wherein the binary data set comprises a minority class set and a majority class set;
the main classifier and the auxiliary classifier are trained cooperatively, the main classifier is trained by adopting original sample data, and the auxiliary classifier is trained by adopting a binary data set;
correcting the main classifier, and correcting the probability vector of each category output by the main classifier according to the output of the auxiliary classifier;
and updating model parameters, namely calculating the total loss of the main classifier and the auxiliary classifier, calculating the gradient of a loss function on the model parameters according to the total loss, and adopting gradient descent optimization to update the model parameters.
In the present invention, preferably, the minority class set is obtained by merging the classes with few samples in the original sample data, and the majority class set is obtained by merging the classes with more samples in the original sample data.
In the present invention, preferably, the main classifier and the auxiliary classifier have the same structure, and both the main classifier and the auxiliary classifier adopt a neural network model.
In the present invention, preferably, the auxiliary classifier receives training of the binary data set and outputs a first soft label, wherein the soft label is expressed as probability distribution of each category.
In the present invention, preferably, the main classifier receives the training of the original data set and outputs the original category information and the probability vector of each category.
In the present invention, preferably, when the main classifier is corrected, the auxiliary classifier first predicts the same sample data to determine whether it belongs to the minority class or the majority class; the probability values at the corresponding minority-class or majority-class positions of the probability vector obtained by the main classifier for that sample are then increased, and the corrected probability vector is used as the second soft label.
In the present invention, preferably, the total loss includes a loss of the primary classifier, which is a difference value between the second soft tag and the target tag of the multi-classification task, and a loss of the secondary classifier, which is a difference value between the first soft tag and the target tag of the binary classification.
The binary collaborative data balance optimization system comprises a sample processing module, a model building module and a training updating module, wherein the sample processing module builds a binary data set based on original sample data, the model building module builds a composite model comprising a main classifier and an auxiliary classifier based on an original neural network model, and the training updating module is used for training the composite model.
A storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of a binary collaborative data balance optimization method as described above.
Compared with the prior art, the invention has the beneficial effects that:
the method of the invention can improve the performance of the whole model: the auxiliary classifier performs a binary classification task that provides additional supervision signals and helps the main classifier learn features and representations better, which improves the performance of the overall model on multi-classification tasks, especially when the data for the multi-classification task are relatively scarce or class-imbalanced; meanwhile, the joint learning of the main classifier and the auxiliary classifier helps the overall model learn shared features and representations, improving its efficiency and performance;
label noise can be alleviated and generalization improved: if the labels of the multi-classification task contain noise or errors, the binary classification task of the auxiliary classifier helps screen out correctly labelled samples, reducing the negative influence of label noise on the multi-classification task; meanwhile, training one model to solve both the multi-classification task and the related binary classification task forces it to learn the information of both tasks, so that the structure of the data is better understood and the generalization ability improves.
Drawings
FIG. 1 is a flow chart of a binary collaborative data balance optimization method according to the present invention.
FIG. 2 is a schematic diagram of a model structure of a binary collaborative data balance optimization method according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1 to 2, a preferred embodiment of the present invention provides a binary collaborative data balance optimization method, which can provide additional supervisory signals to help a model learn features and representations better to cope with the problem of relatively fewer data or unbalanced categories, and can screen out samples of correct labels to reduce negative effects of label noise on multi-classification tasks, and specifically includes the following steps:
constructing a binary data set on the basis of the basic data set, wherein the binary data set comprises a minority class set and a majority class set;
the main classifier and the auxiliary classifier are trained cooperatively, the main classifier is trained by adopting original sample data so as to enable the main classifier to carry out multi-class classification tasks, and the auxiliary classifier is trained by adopting a binary data set so as to enable the auxiliary classifier to carry out minority class or majority class classification tasks;
correcting the main classifier, and correcting the probability vector of each category output by the main classifier according to the output of the auxiliary classifier;
and updating model parameters: the losses of the main classifier and the auxiliary classifier are calculated respectively, and the combined dual loss of the two classifiers is used to update the model parameters.
Specifically, unbalanced sample data means that the numbers of samples differ greatly across categories: some categories have many samples while others have few. The categories with few samples in the original sample data are merged into a minority class set, and the remaining categories with more samples are merged into a majority class set, so that the original sample data is divided into two sets that together form a binary data set. Classification based on this binary data set yields one of two results: minority class or majority class.
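The re-division described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the sample counts are taken from the embodiment, while the merging rule (a sample-count threshold of 100) and all names are assumptions for this sketch.

```python
from collections import Counter

def build_binary_dataset(labels, minority_classes):
    """Relabel multi-class labels into binary ones:
    0 = minority set (merged scarce classes), 1 = majority set (the rest)."""
    return [0 if y in minority_classes else 1 for y in labels]

# Illustrative class counts, matching the embodiment: 10, 78, 829, 738.
labels = [0] * 10 + [1] * 78 + [2] * 829 + [3] * 738
counts = Counter(labels)
# The threshold of 100 samples is an assumption of this sketch; the patent
# only requires that the scarce classes be merged into the minority set.
minority_classes = {c for c, n in counts.items() if n < 100}
binary_labels = build_binary_dataset(labels, minority_classes)
```

With these counts, the minority set merges classes 0 and 1 (88 samples) and the majority set holds the remaining 1567 samples.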
Specifically, the main classifier and the auxiliary classifier have the same structure and are both neural network models. The main classifier is the original model, trained on unbalanced multi-classification sample data; after such training the original model tends to predict the well-represented classes, its recognition of the minority classes is poor, and its overall classification ability suffers. A model identical to the original one is therefore added as the auxiliary classifier, the original model serves as the main classifier, and together they form a new model. After training they perform the classification task jointly, and the auxiliary classifier helps the model learn features and representations better, improving the performance of the multi-classification task.
Specifically, the auxiliary classifier receives training of a binary data set and outputs a first soft label, the soft label is expressed as probability distribution of each category, the auxiliary classifier performs a classification task, namely the auxiliary classifier is trained, and the output first soft label is the probability distribution of the input data belonging to a few categories or a plurality of categories.
Specifically, the main classifier receives the training of the original data set, outputs the original category information and the probability vector of each category, performs multi-classification tasks, namely, the main classifier is trained, and outputs probability distribution of the input data belonging to each category.
Further, when the main classifier is trained, it first receives the original sample data and, through deep learning, outputs a probability vector over the classes; this probability vector is then corrected with reference to the output of the auxiliary classifier. The correction strategy is as follows: the auxiliary classifier predicts the same sample data to determine whether it belongs to the minority class or the majority class, and the prediction values at the corresponding minority-class or majority-class positions in the main classifier's original prediction for that sample are increased. If the auxiliary classifier predicts that the sample belongs to the minority class, the probability values at the positions corresponding to the minority classes in the probability vector predicted by the main classifier for the same data are increased; if it predicts the majority class, the probability values at the positions corresponding to the majority classes are increased. The probability vector corrected by the auxiliary classifier in this way is the second soft label.
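A minimal sketch of this correction strategy follows. The numeric values, the function name, and the final renormalization (our reading of claim 6, which states that the corrected vector is converted back into a probability vector) are assumptions; the boost value k is left to the caller.

```python
def correct_main_prediction(main_probs, aux_probs, minority_positions, k):
    """Boost the minority-class positions of the main classifier's output
    when the auxiliary classifier predicts 'minority', otherwise boost the
    majority-class positions; then renormalize so the corrected vector is
    again a probability vector (the second soft label)."""
    p_minority, p_majority = aux_probs
    boost_minority = p_minority > p_majority
    boosted = [
        p + k if ((i in minority_positions) == boost_minority) else p
        for i, p in enumerate(main_probs)
    ]
    total = sum(boosted)
    return [p / total for p in boosted]

# Illustrative values (not from the patent):
second_soft_label = correct_main_prediction(
    [0.05, 0.10, 0.50, 0.35],  # main classifier output over 4 classes
    (0.7, 0.3),                # auxiliary output: minority set more likely
    {0, 1},                    # positions of the merged minority classes
    k=0.5,
)
```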
Specifically, the total loss of the model comprises two parts: the loss of the main classifier and the loss of the auxiliary classifier. The loss of the main classifier is the difference between the corrected second soft label output by the main classifier and the target label of the multi-classification task, i.e. the difference between the class probabilities output by the main classifier and the actual class; the loss of the auxiliary classifier is the difference between the first soft label and the binary target label, i.e. the difference between the binary class probability output by the auxiliary classifier and the actual majority or minority class. The total loss is used to compute the gradient of the loss function with respect to the model parameters, and the gradient is used to update them; the dual loss of the main and auxiliary classifiers thus guides the parameter update, usually with an optimization method such as gradient descent.
In a specific embodiment, consider a data set for an unbalanced four-classification task in which the original sample counts of the four categories are 10, 78, 829 and 738; there is a large gap between the sample counts of the first two categories and those of the last two. The original four-class data is re-divided: the samples of the first two categories are merged into a minority class set, the samples of the last two categories are merged into a majority class set, and the two sets form a binary data set. The binary data set is used to train the auxiliary classifier so that it classifies samples as minority or majority, and the original sample data is used to train the main classifier so that it performs the four-classification task.
The auxiliary classifier is trained by deep learning on the binary data set and generates a first soft label; the loss is the difference between the first soft label and the binary target label stating whether the original data belongs to the minority or the majority class, computed with the classical two-class cross-entropy loss and denoted loss1.
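The two-class cross-entropy used for loss1 can be written out directly. This is the standard formula, not code from the patent; the epsilon clamp is a numerical-stability assumption.

```python
import math

def binary_cross_entropy(p_minority, target):
    """loss1: classical two-class cross-entropy between the auxiliary
    classifier's soft label (probability of the minority set) and the
    binary target label (1 = minority, 0 = majority)."""
    eps = 1e-12  # guard against log(0)
    p = min(max(p_minority, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

confident_correct = binary_cross_entropy(0.9, 1)  # small loss
uncertain = binary_cross_entropy(0.5, 1)          # larger loss, equals ln 2
```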
The main classifier is trained by deep learning on the original sample data and outputs a probability vector over the classes; the probability vector of each sample in each batch is then updated according to the classification result of the auxiliary classifier. The specific updating steps are as follows:
dividing the sample data into a plurality of batches and training in sequence, the auxiliary classifier predicts the samples of one batch, the batch size being denoted T, and the prediction results are traversed in a loop. When the auxiliary classifier predicts that the i-th sample of the batch belongs to the minority class, the probability values at the positions corresponding to the minority classes in the main classifier's original prediction for that sample are increased; when the auxiliary classifier predicts that the i-th sample belongs to the majority class, the probability values at the positions corresponding to the majority classes in the main classifier's original prediction are increased. For example, the main classifier performs a four-classification task with classes denoted c1, c2, c3 and c4, and the numbers of original samples are assumed to be 10, 78, 829 and 738 in order. In the binary classification task, class 1 and class 2 are combined into one class, denoted N, meaning the set of minority classes; class 3 and class 4 are combined into one class and resampled so that its sample count is approximately the same as that of N, denoted P, meaning the set of majority classes. For a sample, the original prediction obtained from the main classifier is Y = [y1, y2, y3, y4], and the probability vector predicted by the auxiliary classifier is p = [pN, pP]. If pN > pP, the original prediction of the main classifier is corrected to Y' = [y1 + k, y2 + k, y3, y4], where k is the difference before and after the correction, i.e. the increase value mentioned above. In the invention, k takes the absolute value of the maximum value of the current sample's original prediction, i.e. k = |max(Y)|.
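The per-sample correction in this embodiment can be instantiated numerically. The probability values below are illustrative assumptions; only the rule k = |max(Y)| and the choice of which positions to boost come from the embodiment.

```python
# Main classifier output for one sample over classes c1..c4 (illustrative):
Y = [0.05, 0.10, 0.50, 0.35]
# Auxiliary classifier output [P_N, P_P] for the same sample (illustrative):
P = [0.7, 0.3]

# k is the absolute value of the largest entry of the current sample's
# original prediction, as stated in the embodiment.
k = abs(max(Y))

if P[0] > P[1]:            # minority set N predicted -> boost c1 and c2
    Y_corrected = [Y[0] + k, Y[1] + k, Y[2], Y[3]]
else:                      # majority set P predicted -> boost c3 and c4
    Y_corrected = [Y[0], Y[1], Y[2] + k, Y[3] + k]
```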
For the new prediction result, taken as the second soft label, the difference from the four-classification target label is calculated with the four-class cross-entropy loss and denoted loss2.
Total loss: loss = loss1 + loss2. The gradient of the loss function with respect to the network parameters is calculated from the total loss for back-propagation and parameter updating; both use prior art and are not described in detail.
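A sketch of how the two losses combine for one sample follows. All probability values are illustrative assumptions, and the commented optimizer calls assume a framework such as PyTorch, which the patent does not name.

```python
import math

def cross_entropy(probs, target_idx):
    """Cross-entropy of a probability vector against a hard target index."""
    return -math.log(max(probs[target_idx], 1e-12))

aux_probs = [0.8, 0.2]                        # auxiliary output [minority, majority]
corrected_main = [0.275, 0.30, 0.25, 0.175]   # second soft label after correction
loss1 = cross_entropy(aux_probs, 0)           # true binary label: minority set
loss2 = cross_entropy(corrected_main, 1)      # true multi-class label: class 2
total_loss = loss1 + loss2
# In an autograd framework (an assumption of this sketch) the update would be:
#   optimizer.zero_grad(); total_loss.backward(); optimizer.step()
```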
The invention provides a binary collaborative data balance optimization system, which specifically comprises a sample processing module, a model building module and a training updating module, wherein the sample processing module builds a binary data set based on original sample data, the model building module builds a composite model comprising a main classifier and an auxiliary classifier based on an original neural network model, and the training updating module is used for training the composite model.
In other preferred embodiments of the present invention, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method as described in the above embodiments.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing description is directed to the preferred embodiments of the present invention, but the embodiments are not intended to limit the scope of the invention, and all equivalent changes or modifications made under the technical spirit of the present invention should be construed to fall within the scope of the present invention.

Claims (9)

1. The binary collaborative data balance optimization method is characterized by comprising the following steps:
constructing a binary data set on the basis of the basic data set, wherein the binary data set comprises a minority class set and a majority class set;
the main classifier and the auxiliary classifier are trained cooperatively, the main classifier is trained by adopting original sample data so as to enable the main classifier to carry out multi-class classification tasks, and the auxiliary classifier is trained by adopting a binary data set so as to enable the auxiliary classifier to carry out minority class or majority class classification tasks;
correcting the main classifier, and correcting the probability vector of each category output by the main classifier according to the output of the auxiliary classifier;
and updating model parameters, namely calculating the total loss of the main classifier and the auxiliary classifier, calculating the gradient of a loss function on the model parameters according to the total loss, and adopting gradient descent optimization to update the model parameters.
2. The method according to claim 1, wherein the minority class set is obtained by merging a plurality of classes with a sample data amount smaller than a set value in original sample data, and the majority class set is obtained by merging the remaining sample data in the original sample data.
3. The binary collaborative data balance optimization method according to claim 2, wherein the primary classifier and the secondary classifier are identical in structure and each adopts a neural network model.
4. The method of claim 1, wherein the secondary classifier is trained on a binary data set to output a first soft label, the soft label being a probability distribution belonging to a minority class or a majority class.
5. The method of claim 4, wherein the master classifier receives training of the raw dataset and outputs the raw prediction results for each class.
6. The binary collaborative data balance optimization method according to claim 1, wherein, when the main classifier is corrected, the auxiliary classifier predicts the same sample data to determine whether it belongs to the minority class or the majority class, the prediction values at the corresponding minority-class or majority-class positions of the original prediction result obtained by the main classifier for the same sample data are then increased, and the corrected prediction vector is converted into a probability vector as the second soft label.
7. The method of claim 1, wherein the total loss comprises a loss of the primary classifier and a loss of the secondary classifier, the loss of the primary classifier being a difference value between the second soft label and the target label of the multi-classification task, the loss of the secondary classifier being a difference value between the first soft label and the target label of the binary classification.
8. A system for implementing a binary collaborative data balance optimization method according to any of claims 1-7, comprising a sample processing module, a model building module and a training update module, wherein the sample processing module builds a binary data set based on original sample data, the model building module builds a composite model comprising a primary classifier and a secondary classifier based on an original neural network model, and the training update module is configured to train the composite model.
9. A storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of a binary collaborative data balance optimization method of any one of claims 1-7.
CN202410232496.9A 2024-03-01 2024-03-01 Binary collaborative data balance optimization method, system and storage medium Active CN117828356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410232496.9A CN117828356B (en) 2024-03-01 2024-03-01 Binary collaborative data balance optimization method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410232496.9A CN117828356B (en) 2024-03-01 2024-03-01 Binary collaborative data balance optimization method, system and storage medium

Publications (2)

Publication Number Publication Date
CN117828356A true CN117828356A (en) 2024-04-05
CN117828356B CN117828356B (en) 2024-05-28

Family

ID=90515594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410232496.9A Active CN117828356B (en) 2024-03-01 2024-03-01 Binary collaborative data balance optimization method, system and storage medium

Country Status (1)

Country Link
CN (1) CN117828356B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104462301A (en) * 2014-11-28 2015-03-25 北京奇虎科技有限公司 Network data processing method and device
CN107316061A (en) * 2017-06-22 2017-11-03 华南理工大学 A kind of uneven classification ensemble method of depth migration study
KR20200027834A (en) * 2018-09-05 2020-03-13 성균관대학교산학협력단 Methods and apparatuses for processing data based on representation model for unbalanced data
US20200160175A1 (en) * 2018-11-15 2020-05-21 D-Wave Systems Inc. Systems and methods for semantic segmentation
CN113887655A (en) * 2021-10-21 2022-01-04 深圳前海微众银行股份有限公司 Model chain regression prediction method, device, equipment and computer storage medium
CN114925750A (en) * 2022-04-25 2022-08-19 北京三快在线科技有限公司 Information recommendation method and device, computer readable storage medium and electronic equipment
CN116010875A (en) * 2022-12-01 2023-04-25 国网重庆市电力公司营销服务中心 Method and device for classifying ammeter faults, electronic equipment and computer storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Minho Park et al.: "Imbalanced Classification via Feature Dictionary-Based Minority Oversampling", IEEE Access, 22 March 2022 (2022-03-22) *
Lu Kezhong et al.: "Online classification algorithm for concept-drifting and class-imbalanced data streams", Acta Electronica Sinica, 31 March 2022 (2022-03-31) *

Also Published As

Publication number Publication date
CN117828356B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN114841257B (en) Small sample target detection method based on self-supervision comparison constraint
CN116644755B (en) Multi-task learning-based few-sample named entity recognition method, device and medium
CN113469186A (en) Cross-domain migration image segmentation method based on small amount of point labels
CN112861982A (en) Long-tail target detection method based on gradient average
CN110705640A (en) Method for constructing prediction model based on slime mold algorithm
CN111191685A (en) Method for dynamically weighting loss function
CN116468938A (en) Robust image classification method on label noisy data
CN115270988A (en) Fine adjustment method, device and application of knowledge representation decoupling classification model
CN117828356B (en) Binary collaborative data balance optimization method, system and storage medium
CN116993548A (en) Incremental learning-based education training institution credit assessment method and system for LightGBM-SVM
CN112270334A (en) Few-sample image classification method and system based on abnormal point exposure
CN116756391A (en) Unbalanced graph node neural network classification method based on graph data enhancement
CN117079120A (en) Target recognition model optimization method based on improved GA algorithm
AU2021103316A4 (en) Remote sensing image scene classification method based on automatic machine learning
CN113627538B (en) Method for training asymmetric generation of image generated by countermeasure network and electronic device
US20240020531A1 (en) System and Method for Transforming a Trained Artificial Intelligence Model Into a Trustworthy Artificial Intelligence Model
CN113379037B (en) Partial multi-mark learning method based on complementary mark cooperative training
CN113722439A (en) Cross-domain emotion classification method and system based on antagonism type alignment network
CN114595695A (en) Self-training model construction method for few-sample intention recognition system
CN113177599A (en) Enhanced sample generation method based on GAN
CN112070127A (en) Intelligent analysis-based mass data sample increment analysis method
CN112215849B (en) Color space-based image unsupervised segmentation optimization method
CN116935102B (en) Lightweight model training method, device, equipment and medium
CN115496115B (en) Continuous electromagnetic signal classification method based on vector space separation
US20220261648A1 (en) Gradient pruning for efficient training of machine learning models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant