CN110705641A - Wine classification method based on Bayesian optimization and electronic nose - Google Patents


Info

Publication number
CN110705641A
Authority
CN
China
Prior art keywords
lightgbm
optimization
model
bayesian
tree
Prior art date: 2019-09-30
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910943018.8A
Other languages
Chinese (zh)
Inventor
张磊 (Zhang Lei)
乔淼 (Qiao Miao)
赵策 (Zhao Ce)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2019-09-30
Publication date: 2020-01-17
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN201910943018.8A
Publication of CN110705641A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification

Abstract

The invention relates to a wine classification method based on Bayesian optimization and an electronic nose, which comprises the following steps. S1, the LightGBM algorithm: a leaf-wise tree-building method is adopted in which, each time the tree is grown, the leaf with the maximum splitting gain is found among all current leaves and then split, and this process repeats; LightGBM prunes the tree using a maximum tree depth, avoiding overfitting. S2, the Bayesian optimization algorithm. S3, construction of BO-LightGBM: the hyperparameters of LightGBM are self-tuned with a Bayesian hyperparameter optimization algorithm. Bayesian optimization approximates a complex objective function with a probability model; the prior of the object to be optimized is introduced into the probability model, and the model can effectively reduce unnecessary sampling. The beneficial effects of the invention are as follows: by constructing a probability model of the function to be optimized and using it to determine the next evaluation point, Bayesian optimization achieves state-of-the-art results on some global optimization problems and is a better solution for hyperparameter optimization.

Description

Wine classification method based on Bayesian optimization and electronic nose
Technical Field
The invention relates to the technical field of intelligent wine classification, in particular to a wine classification method based on Bayesian optimization and an electronic nose.
Background
Wine is a beverage obtained by fermenting and aging grapes and is one of the most popular beverages in the world. The classification of wine is vital for maintaining the high economic value of wine products, protecting wine quality, preventing illegal labeling, and guaranteeing the quality of imported and exported wine. However, conventional wine detection methods mainly include chemical component analysis, identification by professional wine tasters, gas chromatography, and the like; because these methods are time-consuming, labor-intensive and inefficient, research into a fast and efficient wine classification method is very important.
As wine grows popular as a healthful drink, its quality detection receives more and more attention. The electronic nose has been applied to wine detection as a portable and rapid detection method. However, the choice of classification model is rarely considered in electronic nose detection.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a wine classification method based on Bayesian optimization and electronic nose.
The wine classification method based on Bayesian optimization and electronic nose provided by the invention is realized by the following technical scheme:
a wine classification method based on Bayesian optimization and electronic nose comprises the following steps:
S1. LightGBM algorithm
A leaf-wise tree-building method is adopted: each time the tree is grown, the leaf with the maximum splitting gain is found among all current leaves and then split, and this process repeats; LightGBM prunes the tree using a maximum tree depth, avoiding overfitting;
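For illustration only, the following is a minimal sketch, not part of the patent, of configuring LightGBM's leaf-wise growth with a maximum-depth cap via the standard lightgbm Python package; the dataset and parameter values are assumptions chosen for demonstration.

```python
import lightgbm as lgb
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split

# Stand-in data; the patent uses electronic-nose sensor readings.
X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# LightGBM grows trees leaf-wise: at each step it splits the current leaf
# with the largest gain. num_leaves bounds the number of leaves, and
# max_depth prunes the tree by depth to limit overfitting, as described above.
clf = lgb.LGBMClassifier(
    boosting_type="gbdt",
    num_leaves=31,   # assumed value
    max_depth=6,     # maximum tree depth used for pruning
    n_estimators=100,
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```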
S2. Bayesian optimization algorithm:
a. Select a prior function that maps the hyperparameter λ to the loss function L; here TPE (Tree-structured Parzen Estimator) is selected. To improve the efficiency of searching for the optimal hyperparameter set, an optimization surrogate model is established and evaluated with the 10-fold cross-validation loss of the real LightGBM model. TPE builds the probability model with Parzen-window density estimators: the initial observations are divided into two groups, and the probability density of each group is obtained by summing the Gaussian probability distributions generated at each point. The points whose loss values fall within the best 20% of the performance index are assigned to the l(λ) group, the loss value at this 20% quantile is taken as the threshold c*, and the remaining points are assigned to the v(λ) group; the formula is as follows:

$$p(\lambda \mid c) = \begin{cases} l(\lambda), & c < c^{*} \\ v(\lambda), & c \ge c^{*} \end{cases} \qquad (1)$$

where c is the loss value and λ is the hyperparameter;
b. Select an acquisition function, which constructs a utility function from the posterior distribution of the model to determine the next sampling set; here EI (Expected Improvement) is selected. Once the surrogate is defined, the next set of promising hyperparameters is chosen by maximizing the acquisition function, which generates candidates using the probability density functions; the Expected Improvement of each sample point is calculated from equation (2), and the candidate is the maximizer in equation (3):

$$EI_{c^{*}}(\lambda) = \int_{-\infty}^{c^{*}} (c^{*} - c)\, p(c \mid \lambda)\, dc \qquad (2)$$

$$\lambda^{*} = \arg\max_{\lambda}\, EI_{c^{*}}(\lambda) \qquad (3)$$

where c* denotes the current threshold (best-quantile) loss value and c(λ) the loss value under the hyperparameter λ;
definition of gamma-p (c < c)*) (4)
From equations (1), (4):
$$p(\lambda) = \int p(\lambda \mid c)\, p(c)\, dc = \gamma\, l(\lambda) + (1 - \gamma)\, v(\lambda) \qquad (5)$$

$$\int_{-\infty}^{c^{*}} (c^{*} - c)\, p(\lambda \mid c)\, p(c)\, dc = \gamma\, c^{*} l(\lambda) - l(\lambda) \int_{-\infty}^{c^{*}} c\, p(c)\, dc \qquad (6)$$
according to the Bayesian formula:
$$p(c \mid \lambda) = \frac{p(\lambda \mid c)\, p(c)}{p(\lambda)} \qquad (7)$$
From equations (2), (5), (6) and (7) it follows that:

$$EI_{c^{*}}(\lambda) = \frac{\gamma\, c^{*} l(\lambda) - l(\lambda) \int_{-\infty}^{c^{*}} c\, p(c)\, dc}{\gamma\, l(\lambda) + (1 - \gamma)\, v(\lambda)} \;\propto\; \left( \gamma + \frac{v(\lambda)}{l(\lambda)} (1 - \gamma) \right)^{-1} \qquad (8)$$

Equation (8) shows that EI is proportional to the ratio l(λ)/v(λ): to maximize the expected improvement, a candidate point should be more probable under l(λ) than under v(λ). New candidates are therefore evaluated by the ratio l(λ)/v(λ), and the set yielding the highest value of l(λ)/v(λ) is returned;
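To make the l(λ)/v(λ) mechanism concrete, the following one-dimensional sketch, an illustrative simplification rather than Hyperopt's internal code, splits a toy observation history at the 20% loss quantile, fits Parzen-window (Gaussian-kernel) densities to the two groups, and picks the candidate that maximizes l(λ)/v(λ):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy history of hyperparameter values and their observed losses.
lambdas = rng.uniform(0.0, 1.0, size=50)
losses = (lambdas - 0.3) ** 2 + rng.normal(0, 0.01, size=50)

# Split at the 20% quantile of the loss (the threshold c*).
c_star = np.quantile(losses, 0.20)
l_group = lambdas[losses < c_star]    # best 20% of points -> l(lambda)
v_group = lambdas[losses >= c_star]   # remaining points   -> v(lambda)

# Parzen-window density estimators: a sum of Gaussians, one per point.
l_pdf = gaussian_kde(l_group)
v_pdf = gaussian_kde(v_group)

# Sample candidates from l(lambda) and keep the one maximizing l/v,
# which by equation (8) maximizes the expected improvement.
candidates = l_pdf.resample(100, seed=1).ravel()
ratio = l_pdf(candidates) / np.maximum(v_pdf(candidates), 1e-12)
print("next hyperparameter to evaluate:", candidates[np.argmax(ratio)])
```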
S3. Construction of BO-LightGBM
The hyperparameters of LightGBM are self-tuned with the Bayesian hyperparameter optimization algorithm. Bayesian optimization approximates a complex objective function with a probability model; the prior of the object to be optimized is introduced into the probability model, and the model can effectively reduce unnecessary sampling. The execution steps are as follows:
a. Generate the initial observation set by random search: H_0 = {(λ_1, c_1), (λ_2, c_2), …, (λ_n, c_n)};
b. While t < T, loop:
c. Divide H_t into two groups according to equation (1);
d. Define l(λ) and v(λ) by summing the likelihood probability distributions of all the points in each group;
e. Maximize equation (3) to obtain the candidate: λ* = argmax_λ EI(λ);
f. Substitute the candidate λ* into the LightGBM model and obtain the 10-fold cross-validation loss value c(λ*);
g. Update the observation set with this hyperparameter setting and its loss value: H_{t+1} = H_t ∪ {(λ*, c(λ*))};
h. End the loop;
i. Return the optimal parameter λ_best that minimizes the loss value in H.
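In practice, steps a-i can be carried out with the Hyperopt library, which implements the TPE loop internally. The sketch below is an assumed minimal setup, with a two-parameter space and a stand-in dataset for brevity, rather than the patent's exact configuration:

```python
import numpy as np
import lightgbm as lgb
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)  # stand-in for the e-nose dataset

def objective(params):
    # c(lambda): the 10-fold cross-validation loss of the real LightGBM model.
    clf = lgb.LGBMClassifier(
        num_leaves=int(params["num_leaves"]),
        learning_rate=params["learning_rate"],
        n_estimators=100,
    )
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
    return {"loss": 1.0 - acc, "status": STATUS_OK}

# Assumed two-parameter space for brevity; the patent tunes 7 hyperparameters.
space = {
    "num_leaves": hp.quniform("num_leaves", 8, 64, 1),
    "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
}

trials = Trials()  # H: the growing observation set of (lambda, c(lambda)) pairs
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)  # T = 50 iterations in this sketch
print("lambda_best:", best)
```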
In step S1, the LightGBM algorithm adopts an improved histogram algorithm: the values of each feature are divided into k intervals (bins), and split points are selected among these k intervals. The histogram algorithm also has a regularization effect and can effectively prevent overfitting.
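The k-interval binning described here corresponds to LightGBM's max_bin parameter; the brief sketch below uses an assumed bin count.

```python
import lightgbm as lgb
from sklearn.datasets import load_wine

X, y = load_wine(return_X_y=True)  # stand-in data

# The histogram algorithm buckets each feature's values into at most max_bin
# intervals (the k of the text) and considers split points only at the bin
# boundaries; the coarse binning also acts as a mild regularizer.
clf = lgb.LGBMClassifier(max_bin=63, num_leaves=31)  # k = 63 is an assumed value
clf.fit(X, y)
```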
The invention has the beneficial effects that:
Bayesian optimization constructs a probability model of the function to be optimized and uses it to determine the next evaluation point; this approach achieves state-of-the-art results on some global optimization problems and is a better solution for hyperparameter optimization.
Seven types of wine data collected by the electronic nose are used as experimental data, and the Bayesian optimization algorithm (BO) is used to optimize the LightGBM model parameters to construct a BO-LightGBM classification model, which is applied to wine classification. Experimental comparison shows that Bayesian optimization yields the best and fastest optimization results relative to parameter search methods such as random search and grid search. In addition, compared with the original LightGBM model and the SVM, RF and Adaboost models, the BO-LightGBM model shows the best classification accuracy.
Drawings
FIG. 1 is a schematic diagram of the histogram algorithm of LightGBM according to the present invention;
FIG. 2 is a schematic diagram of the electronic nose during sample detection;
FIG. 3 is a density distribution diagram of the hyperparameters;
FIG. 4 is a graph of the loss value as a function of the number of iterations;
FIG. 5 is a diagram of the BO-LightGBM classification results.
Detailed Description
The technical solutions of the present invention will be described clearly and completely by the following embodiments, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Definitions used in the text:
LightGBM: Light Gradient Boosting Machine
TPE: Tree-structured Parzen Estimator
EI: Expected Improvement
BO-LightGBM: the LightGBM algorithm optimized by Bayesian optimization
Hyperopt: a Python library for hyperparameter optimization
sklearn: a machine learning library for Python
Adaboost: adaptive boosting algorithm
Referring to FIGS. 1-5, a wine classification method based on Bayesian optimization and an electronic nose comprises the following steps:
S1. LightGBM algorithm
A leaf-wise tree-building method is adopted: each time the tree is grown, the leaf with the maximum splitting gain is found among all current leaves and then split, and this process repeats; LightGBM prunes the tree using a maximum tree depth, avoiding overfitting;
S2. Bayesian optimization algorithm:
a. Select a prior function that maps the hyperparameter λ to the loss function L; here TPE (Tree-structured Parzen Estimator) is selected. To improve the efficiency of searching for the optimal hyperparameter set, an optimization surrogate model is established and evaluated with the 10-fold cross-validation loss of the real LightGBM model. TPE builds the probability model with Parzen-window density estimators: the initial observations are divided into two groups, and the probability density of each group is obtained by summing the Gaussian probability distributions generated at each point. The points whose loss values fall within the best 20% of the performance index are assigned to the l(λ) group, the loss value at this 20% quantile is taken as the threshold c*, and the remaining points are assigned to the v(λ) group; the formula is as follows:

$$p(\lambda \mid c) = \begin{cases} l(\lambda), & c < c^{*} \\ v(\lambda), & c \ge c^{*} \end{cases} \qquad (1)$$

where c is the loss value and λ is the hyperparameter;
b. Select an acquisition function, which constructs a utility function from the posterior distribution of the model to determine the next sampling set; here EI (Expected Improvement) is selected. Once the surrogate is defined, the next set of promising hyperparameters is chosen by maximizing the acquisition function, which generates candidates using the probability density functions; the Expected Improvement of each sample point is calculated from equation (2), and the candidate is the maximizer in equation (3):

$$EI_{c^{*}}(\lambda) = \int_{-\infty}^{c^{*}} (c^{*} - c)\, p(c \mid \lambda)\, dc \qquad (2)$$

$$\lambda^{*} = \arg\max_{\lambda}\, EI_{c^{*}}(\lambda) \qquad (3)$$

where c* denotes the current threshold (best-quantile) loss value and c(λ) the loss value under the hyperparameter λ;
definition of gamma-p (c < c)*) (4)
From equations (1), (4):
$$p(\lambda) = \int p(\lambda \mid c)\, p(c)\, dc = \gamma\, l(\lambda) + (1 - \gamma)\, v(\lambda) \qquad (5)$$

$$\int_{-\infty}^{c^{*}} (c^{*} - c)\, p(\lambda \mid c)\, p(c)\, dc = \gamma\, c^{*} l(\lambda) - l(\lambda) \int_{-\infty}^{c^{*}} c\, p(c)\, dc \qquad (6)$$
according to the Bayesian formula:
$$p(c \mid \lambda) = \frac{p(\lambda \mid c)\, p(c)}{p(\lambda)} \qquad (7)$$
From equations (2), (5), (6) and (7) it follows that:

$$EI_{c^{*}}(\lambda) = \frac{\gamma\, c^{*} l(\lambda) - l(\lambda) \int_{-\infty}^{c^{*}} c\, p(c)\, dc}{\gamma\, l(\lambda) + (1 - \gamma)\, v(\lambda)} \;\propto\; \left( \gamma + \frac{v(\lambda)}{l(\lambda)} (1 - \gamma) \right)^{-1} \qquad (8)$$

Equation (8) shows that EI is proportional to the ratio l(λ)/v(λ): to maximize the expected improvement, a candidate point should be more probable under l(λ) than under v(λ). New candidates are therefore evaluated by the ratio l(λ)/v(λ), and the set yielding the highest value of l(λ)/v(λ) is returned;
S3. Construction of BO-LightGBM
The hyperparameters of LightGBM are self-tuned with the Bayesian hyperparameter optimization algorithm. Bayesian optimization approximates a complex objective function with a probability model; the prior of the object to be optimized is introduced into the probability model, and the model can effectively reduce unnecessary sampling. The execution steps are as follows:
a. Generate the initial observation set by random search: H_0 = {(λ_1, c_1), (λ_2, c_2), …, (λ_n, c_n)};
b. While t < T, loop:
c. Divide H_t into two groups according to equation (1);
d. Define l(λ) and v(λ) by summing the likelihood probability distributions of all the points in each group;
e. Maximize equation (3) to obtain the candidate: λ* = argmax_λ EI(λ);
f. Substitute the candidate λ* into the LightGBM model and obtain the 10-fold cross-validation loss value c(λ*);
g. Update the observation set with this hyperparameter setting and its loss value: H_{t+1} = H_t ∪ {(λ*, c(λ*))};
h. End the loop;
i. Return the optimal parameter λ_best that minimizes the loss value in H.
In step S1, the LightGBM algorithm adopts an improved histogram algorithm: the values of each feature are divided into k intervals (bins), and split points are selected among these k intervals. The histogram algorithm also has a regularization effect and can effectively prevent overfitting.
The Light Gradient Boosting Machine algorithm is a data model based on GBDT. GBDT is an ensemble learning algorithm that combines weak learners into a strong learner, using regression trees as the weak learners: each tree learns from the conclusions and residuals of all previous trees, the residual between each prediction and the target value serves as the target of the next round of learning to fit the current residual regression tree, and the outputs of the many decision trees are summed as the final prediction. Although GBDT has achieved good learning results in many machine learning tasks, there is still room for improvement in its precision and efficiency, and in 2017 Microsoft proposed the LightGBM algorithm on the basis of GBDT and XGBoost.
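The residual-fitting procedure described above can be sketched in a few lines of Python; this is a toy regression illustration of the GBDT idea, not the patent's code:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

pred = np.zeros_like(y)  # the ensemble prediction, built up additively
learning_rate, trees = 0.1, []
for _ in range(100):
    residual = y - pred                       # target for the next weak learner
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
    pred += learning_rate * tree.predict(X)   # add the new tree's output
    trees.append(tree)

print("training MSE:", np.mean((y - pred) ** 2))
```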
Experimental samples:
the wine samples are respectively different varieties of wines such as Cabernet Sauvignon, Matheran, Longzhibao, Mello, Chardonnay, Miscanson, Pinelizhu and the like in 2018 years of China production area.
The following table gives detailed information on the wine samples of the different varieties. [Table not recoverable from the source image.]
In the last column of the table, "+" indicates that the production process of the marked samples is the same, provided by the manufacturer but not disclosed in detail. All experiments were performed in a laboratory at a temperature of 25 ± 1 °C and a relative humidity of 50 ± 2%.
The experimental process comprises the following steps:
The sensor array in the PEN3 electronic nose device places high requirements on the detection environment, and a pre-experiment is required before detection. In the pre-experiment, the parameters of the electronic nose detection process were selected, and the detection conditions chosen through experiment are as follows: a headspace volume of 500 ml, a headspace time of 10 min, and an ambient temperature controlled at 22-25 °C. The specific detection process is shown in FIG. 2: 50 ml of wine is sealed in a vial with preservative film, and the wine and the air in the vial are left to equilibrate for 10 min so that the sample gas can fully volatilize in the closed beaker; the formal experiment is carried out after the gas reaches a saturated, stable state. Before gas collection, clean air treated with activated carbon is drawn in at 500 ml/min to flush the gas chamber and gas path of the electronic nose for 60 s. During detection, the gas inlet needle and the gas supply needle are inserted simultaneously into the beaker sealed with preservative film; the air pump built into the electronic nose starts to work, drawing in sample gas at 300 ml/min for a collection time of 90 s. To avoid accidental errors caused by manual operation and to ensure the accuracy and reliability of the samples, each sample was tested three times. The gas information collected each time is stored on a computer as text for subsequent data analysis and processing.
Classification results of BO-LightGBM model
The 630×10 sample data are divided into two parts, 80% as the training data set and 20% as the test data set, to judge the classification performance of the model. Bayesian hyperparameter optimization is implemented with Hyperopt, and the LightGBM algorithm with the lightgbm library. Seven hyperparameters with a large influence on the algorithm are selected for tuning; their names and value ranges are shown in the following table. sklearn is used to set the parameter ranges, the number of iterations is set to 1000, and the probability density distribution of the hyperparameters over the 1000 iterations is shown in FIG. 3. The following table gives the hyperparameter search space:
[Table: hyperparameter search space (not recoverable from the source image).]
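Since the hyperparameter table itself is not recoverable from the source, the sketch below shows what a seven-parameter Hyperopt search space for LightGBM might look like; the specific parameters and ranges are assumptions, not the patent's values:

```python
import numpy as np
from hyperopt import hp

# An assumed set of 7 influential LightGBM hyperparameters (illustrative only).
space = {
    "num_leaves":        hp.quniform("num_leaves", 8, 128, 1),
    "max_depth":         hp.quniform("max_depth", 3, 12, 1),
    "learning_rate":     hp.loguniform("learning_rate", np.log(0.005), np.log(0.3)),
    "n_estimators":      hp.quniform("n_estimators", 50, 500, 10),
    "min_child_samples": hp.quniform("min_child_samples", 5, 50, 1),
    "subsample":         hp.uniform("subsample", 0.5, 1.0),
    "colsample_bytree":  hp.uniform("colsample_bytree", 0.5, 1.0),
}
```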
FIG. 3 is the density distribution diagram of the hyperparameters: the horizontal axis represents the parameter selection range, the vertical axis the density estimate, and the vertical line the optimal value with the highest accuracy. The optimal values of the parameters all appear in densely distributed regions, which indicates that Bayesian optimization spends more of its evaluations on the most promising hyperparameter values and verifies the aforementioned advantage of Bayesian optimization in choosing parameters according to the historical optimum.
For experimental comparison, random search, grid search and Bayesian optimization are compared; FIG. 4 shows the error rate on the test set as a function of the number of iterations.
It can be seen from FIG. 4 that the error rate of the Bayesian method drops rapidly within the first 100 iterations. Because the LightGBM hyperparameter combinations are numerous, random search and grid search cannot quickly select the optimal parameters: after 100 iterations the error rate of the Bayesian method keeps decreasing, while the error rates of grid search and random search level off. In the end, Bayesian parameter tuning obtains better results than the other two methods.
The following table compares the results of the classification models. [Table not recoverable from the source image.]
The optimal parameter combination from the hyperparameter tuning algorithm is selected, and a support vector machine, a random forest, Adaboost, the original LightGBM model and the BO-LightGBM model are trained on the data set; the table above gives the classification results of the five models. The experiments show that the LightGBM classification model optimized by Bayesian optimization has the highest accuracy, with Adaboost second; the results show that the improved LightGBM is superior to both the single classifiers and the ensemble learning classifiers.
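A comparison of this kind can be reproduced along the following lines; the code is a sketch on stand-in data, and the BO-LightGBM parameters shown are assumed placeholders for the values a TPE search would return:

```python
import lightgbm as lgb
from sklearn.datasets import load_wine
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)  # stand-in for the 630x10 e-nose data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
    "Adaboost": AdaBoostClassifier(random_state=0),
    "LightGBM": lgb.LGBMClassifier(),
    # BO-LightGBM: plug in the best hyperparameters found by the TPE search.
    "BO-LightGBM": lgb.LGBMClassifier(num_leaves=40, learning_rate=0.05),  # assumed
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```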
In FIG. 5, dots represent actual values and stars represent predicted values. As the figure shows, 120 of the 126 test samples are correctly classified, a classification accuracy of 95.238%, which reaches the expected classification standard and verifies the feasibility of the model.
The above examples merely illustrate embodiments of the present invention; although they are described in considerable detail, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and all of these fall within the protection scope of the present invention.

Claims (2)

1. A wine classification method based on Bayesian optimization and electronic nose is characterized in that: the method comprises the following steps:
S1. LightGBM algorithm
A leaf-wise tree-building method is adopted: each time the tree is grown, the leaf with the maximum splitting gain is found among all current leaves and then split, and this process repeats; LightGBM prunes the tree using a maximum tree depth, avoiding overfitting;
S2. Bayesian optimization algorithm:
a. Select a prior function that maps the hyperparameter λ to the loss function L; here TPE (Tree-structured Parzen Estimator) is selected. To improve the efficiency of searching for the optimal hyperparameter set, an optimization surrogate model is established and evaluated with the 10-fold cross-validation loss of the real LightGBM model. TPE builds the probability model with Parzen-window density estimators: the initial observations are divided into two groups, and the probability density of each group is obtained by summing the Gaussian probability distributions generated at each point. The points whose loss values fall within the best 20% of the performance index are assigned to the l(λ) group, the loss value at this 20% quantile is taken as the threshold c*, and the remaining points are assigned to the v(λ) group; the formula is as follows:

$$p(\lambda \mid c) = \begin{cases} l(\lambda), & c < c^{*} \\ v(\lambda), & c \ge c^{*} \end{cases} \qquad (1)$$

where c is the loss value and λ is the hyperparameter;
b. Select an acquisition function, which constructs a utility function from the posterior distribution of the model to determine the next sampling set; here EI (Expected Improvement) is selected. Once the surrogate is defined, the next set of promising hyperparameters is chosen by maximizing the acquisition function, which generates candidates using the probability density functions; the Expected Improvement of each sample point is calculated from equation (2), and the candidate is the maximizer in equation (3):

$$EI_{c^{*}}(\lambda) = \int_{-\infty}^{c^{*}} (c^{*} - c)\, p(c \mid \lambda)\, dc \qquad (2)$$

$$\lambda^{*} = \arg\max_{\lambda}\, EI_{c^{*}}(\lambda) \qquad (3)$$

where c* denotes the current threshold (best-quantile) loss value and c(λ) the loss value under the hyperparameter λ;
definition of gamma-p (c < c)*) (4)
From equations (1), (4):
$$p(\lambda) = \int p(\lambda \mid c)\, p(c)\, dc = \gamma\, l(\lambda) + (1 - \gamma)\, v(\lambda) \qquad (5)$$

$$\int_{-\infty}^{c^{*}} (c^{*} - c)\, p(\lambda \mid c)\, p(c)\, dc = \gamma\, c^{*} l(\lambda) - l(\lambda) \int_{-\infty}^{c^{*}} c\, p(c)\, dc \qquad (6)$$
according to the Bayesian formula:
$$p(c \mid \lambda) = \frac{p(\lambda \mid c)\, p(c)}{p(\lambda)} \qquad (7)$$
From equations (2), (5), (6) and (7) it follows that:

$$EI_{c^{*}}(\lambda) = \frac{\gamma\, c^{*} l(\lambda) - l(\lambda) \int_{-\infty}^{c^{*}} c\, p(c)\, dc}{\gamma\, l(\lambda) + (1 - \gamma)\, v(\lambda)} \;\propto\; \left( \gamma + \frac{v(\lambda)}{l(\lambda)} (1 - \gamma) \right)^{-1} \qquad (8)$$

Equation (8) shows that EI is proportional to the ratio l(λ)/v(λ): to maximize the expected improvement, a candidate point should be more probable under l(λ) than under v(λ). New candidates are therefore evaluated by the ratio l(λ)/v(λ), and the set yielding the highest value of l(λ)/v(λ) is returned;
S3. Construction of BO-LightGBM
The hyperparameters of LightGBM are self-tuned with the Bayesian hyperparameter optimization algorithm. Bayesian optimization approximates a complex objective function with a probability model; the prior of the object to be optimized is introduced into the probability model, and the model can effectively reduce unnecessary sampling. The execution steps are as follows:
a. Generate the initial observation set by random search: H_0 = {(λ_1, c_1), (λ_2, c_2), …, (λ_n, c_n)};
b. While t < T, loop:
c. Divide H_t into two groups according to equation (1);
d. Define l(λ) and v(λ) by summing the likelihood probability distributions of all the points in each group;
e. Maximize equation (3) to obtain the candidate: λ* = argmax_λ EI(λ);
f. Substitute the candidate λ* into the LightGBM model and obtain the 10-fold cross-validation loss value c(λ*);
g. Update the observation set with this hyperparameter setting and its loss value: H_{t+1} = H_t ∪ {(λ*, c(λ*))};
h. End the loop;
i. Return the optimal parameter λ_best that minimizes the loss value in H.
2. The wine classification method based on Bayesian optimization and electronic nose according to claim 1, characterized in that, in step S1, the LightGBM algorithm adopts an improved histogram algorithm: the values of each feature are divided into k intervals, and split points are selected among these k intervals; the histogram algorithm also has a regularization effect and can effectively prevent overfitting.
CN201910943018.8A 2019-09-30 2019-09-30 Wine classification method based on Bayesian optimization and electronic nose Pending CN110705641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910943018.8A CN110705641A (en) 2019-09-30 2019-09-30 Wine classification method based on Bayesian optimization and electronic nose

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910943018.8A CN110705641A (en) 2019-09-30 2019-09-30 Wine classification method based on Bayesian optimization and electronic nose

Publications (1)

Publication Number Publication Date
CN110705641A 2020-01-17

Family

ID=69197047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910943018.8A Pending CN110705641A (en) 2019-09-30 2019-09-30 Wine classification method based on Bayesian optimization and electronic nose

Country Status (1)

Country Link
CN (1) CN110705641A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592060A (en) * 2020-04-30 2021-11-02 华为技术有限公司 Neural network optimization method and device
WO2021218470A1 (en) * 2020-04-30 2021-11-04 华为技术有限公司 Neural network optimization method and device
CN112036619A (en) * 2020-08-17 2020-12-04 中国标准化研究院 Method for judging whether roasted duck exceeds shelf end point by combining electronic nose with Bayesian algorithm
CN112036619B (en) * 2020-08-17 2023-07-11 中国标准化研究院 Method for judging whether roast duck exceeds goods shelf end point by combining electronic nose with Bayes algorithm
CN113128132A (en) * 2021-05-18 2021-07-16 河南工业大学 Grain pile humidity and condensation prediction method based on depth time sequence
CN113433270A (en) * 2021-06-29 2021-09-24 北京中医药大学 Rapid identification method of curcuma traditional Chinese medicine by combining electronic nose with LightGBM


Legal Events

PB01: Publication
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2020-01-17)