CN110245802A - Cigarette loose-ends ratio prediction method and system based on an improved gradient boosting decision tree - Google Patents
- Publication number
- CN110245802A (application number CN201910539106.1A)
- Authority
- CN
- China
- Prior art keywords
- tree model
- gradient
- regression tree
- model
- indicate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/04—Manufacturing
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The present invention discloses a cigarette loose-ends ratio prediction method based on an improved gradient boosting decision tree, comprising the following steps: obtaining, for the same batch of cut tobacco, the process parameters of each primary-processing (throwing) step and the loose-ends ratio data of the final rolling and packing step, and combining the process parameters and loose-ends ratio data into a raw data set; partitioning the raw data set using correlation analysis to obtain a key process parameter set; normalizing the data in the key process parameter set and randomly splitting the normalized data to obtain a training data set and a test data set; building the corresponding regression tree models iteratively from the training samples, thereby constructing an improved gradient boosting decision tree model; and inputting the test data set samples into the improved gradient boosting decision tree model to predict the cigarette loose-ends ratio. By establishing a link between the process parameters of each primary-processing step and the rolling-and-packing loose-ends ratio index, the invention achieves more accurate prediction of the cigarette loose-ends ratio.
Description
Technical field
The present invention relates to the field of cigarette loose-ends ratio prediction, and more particularly to a cigarette loose-ends ratio prediction method and system based on an improved gradient boosting decision tree.
Background technique
The cigarette rolling process is an important link in cigarette manufacturing, and the quality of the process directly affects the quality of the cut tobacco and of the finished product. In actual production, however, loose ends are inevitable, wasting raw and auxiliary materials and reducing efficiency. At present, the loose-ends problem is addressed mainly by improving the cigarette maker itself and by experimental studies of the primary (throwing) process, but these approaches can only be carried out as small-batch trials that yield little data, so they cannot uncover broader regularities or better parameters and standards, and they fail to achieve the desired optimization. A new method is therefore needed to explore and mine the intrinsic relationships between the many process parameters of primary processing and the cigarette loose-ends ratio, predict the loose-ends ratio of the rolling process in advance, and detect its trends as early as possible.
Common prediction methods fall into two categories: those based on statistical regression and those based on machine learning. Statistical-regression methods include exponential smoothing, ARIMA, and the like. Because cigarette production involves many process parameters, these methods take a long time to run and often omit important process parameters while fitting the mathematical model, leading to poor prediction accuracy. Machine-learning methods mainly include neural networks and the k-nearest-neighbors algorithm. Applications of the traditional BP neural network model to prediction have developed considerably, but a neural network is a black-box process: its models are hard to interpret, converge slowly, and generalize poorly, so they cannot deliver satisfactory prediction results.
Summary of the invention
In view of the shortcomings of the prior art, the present invention provides a cigarette loose-ends ratio prediction method and system based on an improved gradient boosting decision tree.
In order to solve the above-mentioned technical problem, the present invention is addressed by following technical proposals:
A cigarette loose-ends ratio prediction method based on an improved gradient boosting decision tree, comprising the following steps:
obtaining, for the same batch of cut tobacco, the process parameters of each primary-processing (throwing) step and the loose-ends ratio data of the final rolling and packing step, and combining the process parameters and loose-ends ratio data into a raw data set;
partitioning the raw data set using correlation analysis to determine the key process parameters and obtain a key process parameter set;
normalizing the data in the key process parameter set and randomly splitting the normalized data to obtain a training data set and a test data set;
building the corresponding regression tree models iteratively from the training samples in the training data set, thereby constructing an improved gradient boosting decision tree model;
inputting the test data set samples into the improved gradient boosting decision tree model to predict the cigarette loose-ends ratio.
In one embodiment, before the step of inputting the test set samples into the improved gradient boosting decision tree model to predict the cigarette loose-ends ratio, the method further comprises an optimization-iteration step, specifically: selecting the model parameters of the improved gradient boosting decision tree model by cross-validation, to obtain an optimized improved gradient boosting decision tree model.
In one embodiment, the data in the key process parameter set are normalized as follows:
z_k = (D_k − D_k,min) / (D_k,max − D_k,min)
where z_k is the normalized datum, D_k is the measured datum before normalization, and D_k,min and D_k,max are the minimum and maximum values of the parameter.
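The min-max normalization above can be sketched in a few lines of Python (a minimal illustration; the function name and sample readings are hypothetical, not taken from the patent's production data):

```python
def min_max_normalize(values):
    """Min-max scaling z_k = (D_k - D_k,min) / (D_k,max - D_k,min),
    mapping each measured process parameter into [0, 1]."""
    d_min, d_max = min(values), max(values)
    if d_max == d_min:
        # A constant parameter has no spread to normalize over; map it to 0.
        return [0.0 for _ in values]
    return [(v - d_min) / (d_max - d_min) for v in values]

# Hypothetical readings of one key process parameter:
print(min_max_normalize([12.1, 12.8, 13.5, 12.4]))
```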
In one embodiment, the corresponding regression tree models are built iteratively from the training samples in the training data set, thereby constructing the improved gradient boosting decision tree model, specifically as follows:
A regression tree model is built on the training data set, denoted {(x_i, y_i)}, i = 1, …, N, where x denotes the input variables and y denotes the corresponding output variable. Assuming each regression tree has J_m leaves, the input space is partitioned into J_m disjoint regions R_jm (j = 1, …, J_m), and a constant output value is determined on each region. Letting b_jm be the constant value on region R_jm, the regression tree model is expressed as:
g_m(x) = Σ_(j=1)^(J_m) b_jm · I(x ∈ R_jm)
where the R_jm denote the J_m disjoint regions, I denotes the indicator function of the region decision, and g_m(x) denotes the regression tree model.
The regression tree model is initialized with the Huber loss function, and the initialized model is then trained. The initialized regression tree model is expressed as:
f_0(x) = argmin_β Σ_(i=1)^(N) L(y_i, β)
where N denotes the number of samples, L denotes the loss function, x denotes the input variable, y denotes the corresponding output variable, and f(x) denotes the fitting function.
Training the initialized regression tree model yields the gradient-descent step length of the regression tree model.
Based on the established gradient-descent step length, the initialized regression tree model is updated; the updated regression tree model is expressed as:
f_m(x) = f_(m−1)(x) + lr · β_m · g_m(x)
where lr denotes the learning rate, x denotes the input variable, β_m denotes the gradient-descent step length, g_m(x) denotes the regression tree model, f_m(x) denotes the updated regression tree model, and f_(m−1)(x) denotes the regression tree model before the update.
The updated regression tree model is updated repeatedly toward the goal of minimizing the loss function, and a stable improved gradient boosting decision tree model is finally output.
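The training loop described above (initialize with a constant, fit each new tree to the residuals, apply the learning rate lr when updating f_m) can be sketched as follows. This is a simplified illustration assuming a squared-error loss and depth-1 trees (stumps) on a single feature, rather than the Huber loss and deeper trees of the patent, with the line-search step folded into the leaf constants:

```python
import numpy as np

def fit_stump(x, residuals):
    """Fit a depth-1 regression tree: one split threshold, two leaf
    constants b_jm (the mean residual on each side)."""
    best = None
    for threshold in np.unique(x):
        left, right = residuals[x <= threshold], residuals[x > threshold]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= threshold, left.mean(), right.mean())
        err = ((residuals - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, threshold, left.mean(), right.mean())
    _, t, bl, br = best
    return lambda q: np.where(q <= t, bl, br)

def gbdt_fit(x, y, n_trees=50, lr=0.1):
    """f_0 is the constant minimizing the loss (the mean, for squared
    error); each round fits a tree g_m to the negative gradient and
    updates f_m(x) = f_(m-1)(x) + lr * g_m(x)."""
    f0 = y.mean()
    trees, pred = [], np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_trees):
        residual = y - pred  # negative gradient of the squared-error loss
        tree = fit_stump(x, residual)
        trees.append(tree)
        pred = pred + lr * tree(x)
    return lambda q: f0 + lr * sum(t(q) for t in trees)
```

With squared error the negative gradient is simply the residual y − f(x), which keeps the sketch short; substituting the Huber gradient recovers the robust variant the patent uses.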
In one embodiment, the initialized regression tree model is trained to obtain the gradient-descent step length of the regression tree model, specifically as follows:
M regression trees are generated iteratively, m ∈ [1, M], where m denotes the m-th tree.
The number of samples in the data set is denoted N, i ∈ [1, N], where i denotes the i-th sample. The negative gradient of the loss function is computed and taken as the residual estimate r_im, expressed as:
r_im = −[∂L(y_i, f(x_i)) / ∂f(x_i)], evaluated at f = f_(m−1)
where f_(m−1)(x_i) denotes the regression tree model of the (m−1)-th tree at the i-th sample, and y_i denotes the output variable of the i-th sample.
A regression tree model g_m(x) is generated from these residuals, partitioning the input space of the m-th tree into J_m disjoint regions R_jm (j = 1, …, J_m), and the gradient-descent step length is computed as:
β_m = argmin_β Σ_(i=1)^(N) L(y_i, f_(m−1)(x_i) + β · g_m(x_i))
where β_m denotes the gradient-descent step length and β is the step length determined by line search.
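For the Huber loss that the patent initializes with, the negative gradient r_im has a closed form: the raw residual inside the δ band, and a clipped value of magnitude δ outside it. A minimal sketch (the threshold δ = 1.0 is an illustrative default, not a value fixed by the patent):

```python
import numpy as np

def huber_negative_gradient(y, f, delta=1.0):
    """Negative gradient r_im of the Huber loss with respect to the
    current prediction f_(m-1)(x_i): the residual y - f where
    |y - f| <= delta, and delta * sign(y - f) on the outlier branch."""
    residual = y - f
    return np.where(np.abs(residual) <= delta,
                    residual,
                    delta * np.sign(residual))
```

The clipping is what makes Huber-based boosting robust: a single extreme loose-ends measurement can pull the pseudo-residual by at most δ.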
A cigarette loose-ends ratio prediction system based on an improved gradient boosting decision tree, comprising a data acquisition module, a preprocessing module, a reprocessing module, a model training module, and a prediction module:
the data acquisition module is configured to obtain, for the same batch of cut tobacco, the process parameters of each primary-processing (throwing) step and the loose-ends ratio data of the final rolling and packing step, and to combine the process parameters and loose-ends ratio data into a raw data set;
the preprocessing module is configured to partition the raw data set using correlation analysis, determine the key process parameters, and obtain a key process parameter set;
the reprocessing module is configured to normalize the data in the key process parameter set and randomly split the normalized data into a training data set and a test data set;
the model training module is configured to build the corresponding regression tree models iteratively from the training samples in the training data set, thereby constructing an improved gradient boosting decision tree model;
the prediction module is configured to input the test data set samples into the improved gradient boosting decision tree model and predict the cigarette loose-ends ratio.
In one embodiment, the system further comprises an optimization module configured to select the model parameters of the improved gradient boosting decision tree model by cross-validation, obtaining an optimized improved gradient boosting decision tree model.
In one embodiment, the reprocessing module normalizes the data as follows:
z_k = (D_k − D_k,min) / (D_k,max − D_k,min)
where z_k is the normalized datum, D_k is the measured datum before normalization, and D_k,min and D_k,max are the minimum and maximum values of the parameter.
In one embodiment, the model training module is configured to:
build a regression tree model on the training data set, denoted {(x_i, y_i)}, i = 1, …, N, where x denotes the input variables and y denotes the corresponding output variable; assuming each regression tree has J_m leaves, the input space is partitioned into J_m disjoint regions R_jm (j = 1, …, J_m), and a constant output value is determined on each region; letting b_jm be the constant value on region R_jm, the regression tree model is expressed as:
g_m(x) = Σ_(j=1)^(J_m) b_jm · I(x ∈ R_jm)
where the R_jm denote the J_m disjoint regions, I denotes the indicator function of the region decision, and g_m(x) denotes the regression tree model;
initialize the regression tree model with the Huber loss function and train the initialized model, the initialized regression tree model being expressed as:
f_0(x) = argmin_β Σ_(i=1)^(N) L(y_i, β)
where N denotes the number of samples, L denotes the loss function, x denotes the input variable, y denotes the corresponding output variable, and f(x) denotes the fitting function;
train the initialized regression tree model to obtain the gradient-descent step length of the regression tree model;
based on the established gradient-descent step length, update the initialized regression tree model, the updated regression tree model being expressed as:
f_m(x) = f_(m−1)(x) + lr · β_m · g_m(x)
where lr denotes the learning rate, x denotes the input variable, β_m denotes the gradient-descent step length, g_m(x) denotes the regression tree model, f_m(x) denotes the updated regression tree model, and f_(m−1)(x) denotes the regression tree model before the update;
update the regression tree model repeatedly toward the goal of minimizing the loss function, and finally output a stable improved gradient boosting decision tree model.
In one embodiment, the model training module is further configured to:
generate M regression trees iteratively, m ∈ [1, M], where m denotes the m-th tree;
denote the number of samples in the data set N, i ∈ [1, N], where i denotes the i-th sample; compute the negative gradient of the loss function and take it as the residual estimate r_im, expressed as:
r_im = −[∂L(y_i, f(x_i)) / ∂f(x_i)], evaluated at f = f_(m−1)
where f_(m−1)(x_i) denotes the regression tree model of the (m−1)-th tree at the i-th sample, and y_i denotes the output variable of the i-th sample;
generate a regression tree model g_m(x) from these residuals, partitioning the input space of the m-th tree into J_m disjoint regions R_jm (j = 1, …, J_m), and compute the gradient-descent step length as:
β_m = argmin_β Σ_(i=1)^(N) L(y_i, f_(m−1)(x_i) + β · g_m(x_i))
where β_m denotes the gradient-descent step length and β is the step length determined by line search.
By adopting the above technical solution, the present invention achieves significant technical effects: by constructing an improved gradient boosting decision tree model, it establishes a link between the process parameters of each primary-processing step and the rolling-and-packing loose-ends ratio index, achieving more accurate prediction of the cigarette loose-ends ratio. The algorithmic model is relatively interpretable and helps users better understand the influence of each variable on the result. The method is simple, with high prediction accuracy, strong real-time performance, high data-processing accuracy, and a robust core algorithm.
Brief description of the drawings
To explain the embodiments of the invention or the prior-art solutions more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic overall flow chart of the invention;
Fig. 2 is a schematic overall structure diagram of the invention;
Fig. 3 shows the training results of the improved gradient boosting decision tree prediction model;
Fig. 4 shows the prediction results of the improved gradient boosting decision tree prediction model.
Specific embodiment
The present invention is described in further detail below with reference to the embodiments; the following embodiments explain the invention, but the invention is not limited to them.
Embodiment 1:
A cigarette loose-ends ratio prediction method based on an improved gradient boosting decision tree, as shown in Fig. 1, comprises the following steps:
S100: obtain, for the same batch of cut tobacco, the process parameters of each primary-processing (throwing) step and the loose-ends ratio data of the final rolling and packing step, and combine the process parameters and loose-ends ratio data into a raw data set;
S200: partition the raw data set using correlation analysis, determine the key process parameters, and obtain a key process parameter set;
S300: normalize the data in the key process parameter set and randomly split the normalized data to obtain a training data set and a test data set;
S400: build the corresponding regression tree models iteratively from the training samples in the training data set, thereby constructing an improved gradient boosting decision tree model;
S500: input the test data set samples into the improved gradient boosting decision tree model and predict the cigarette loose-ends ratio.
The present invention applies the improved gradient boosting decision tree to cigarette loose-ends ratio prediction, overcoming the low accuracy, long computation time, and poor interpretability of common prediction methods. The improved gradient boosting decision tree model is an iterative decision tree algorithm whose base learner is the classification and regression tree (CART). Except for the first decision tree, which is generated from the original prediction index, the goal of each iteration round is to minimize the loss function of the current learner, i.e., the loss function always descends along its gradient direction; through continuous iteration the final residual approaches 0, and summing the results of all trees gives the final prediction.
By constructing the improved gradient boosting decision tree model, the invention establishes a link between the process parameters of each primary-processing step and the rolling-and-packing loose-ends ratio index, achieving more accurate prediction of the cigarette loose-ends ratio.
In general, if the raw data set has length M, step S200 yields k key process parameters, which after processing form a training data set of length N (N ≥ 0.7M); the remaining samples form the test data set.
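A random split of this shape (roughly 70% or more of the M samples for training) can be produced with scikit-learn's train_test_split; the arrays below are random placeholders mirroring the 500-sample example in the embodiment, not real production data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative stand-ins for the k key process parameters (X) and the
# loose-ends ratio (y); shapes only, not real production data.
rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = rng.random(500)

# Random split keeping 380 of the 500 samples (76% >= 0.7M) for training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.76, random_state=42)
print(len(X_train), len(X_test))
```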
Before step S500, that is, before the test set samples are input into the improved gradient boosting decision tree model to predict the cigarette loose-ends ratio, the method further comprises an optimization-iteration step, specifically: selecting the model parameters of the improved gradient boosting decision tree model by cross-validation, to obtain an optimized improved gradient boosting decision tree model.
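Such a cross-validation step is commonly realized with a grid search; below is a sketch using scikit-learn's GridSearchCV over a gradient-boosted regressor with Huber loss. The parameter grid and the random data are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Placeholder data standing in for the key-process-parameter training set.
rng = np.random.default_rng(0)
X, y = rng.random((200, 10)), rng.random(200)

# Hypothetical candidate hyperparameters to be chosen by cross-validation.
param_grid = {
    "n_estimators": [50, 100],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}
search = GridSearchCV(
    GradientBoostingRegressor(loss="huber"),
    param_grid, cv=5, scoring="neg_root_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```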
In step S300, the data in the key process parameter set are normalized as follows:
z_k = (D_k − D_k,min) / (D_k,max − D_k,min)
where z_k is the normalized datum, D_k is the measured datum before normalization, and D_k,min and D_k,max are the minimum and maximum values of the parameter.
In step S400, the corresponding regression tree models are built iteratively from the training samples in the training data set, thereby constructing the improved gradient boosting decision tree model, specifically as follows:
A regression tree model is built on the training data set, denoted {(x_i, y_i)}, i = 1, …, N, where x denotes the input variables and y denotes the corresponding output variable. Assuming each regression tree has J_m leaves, the input space is partitioned into J_m disjoint regions R_jm (j = 1, …, J_m), and a constant output value is determined on each region. Letting b_jm be the constant value on region R_jm, the regression tree model is expressed as:
g_m(x) = Σ_(j=1)^(J_m) b_jm · I(x ∈ R_jm)
where the R_jm denote the J_m disjoint regions, I denotes the indicator function of the region decision, and g_m(x) denotes the regression tree model.
The regression tree model is initialized with the Huber loss function, and the initialized model is then trained. The initialized regression tree model is expressed as:
f_0(x) = argmin_β Σ_(i=1)^(N) L(y_i, β)
where N denotes the number of samples, L denotes the loss function, x denotes the input variable, y denotes the corresponding output variable, and f(x) denotes the fitting function.
Training the initialized regression tree model yields the gradient-descent step length of the regression tree model.
Based on the established gradient-descent step length, the initialized regression tree model is updated; the updated regression tree model is expressed as:
f_m(x) = f_(m−1)(x) + lr · β_m · g_m(x)
where lr denotes the learning rate, x denotes the input variable, β_m denotes the gradient-descent step length, g_m(x) denotes the regression tree model, f_m(x) denotes the updated regression tree model, and f_(m−1)(x) denotes the regression tree model before the update.
The updated regression tree model is updated repeatedly toward the goal of minimizing the loss function, and a stable improved gradient boosting decision tree model is finally output.
Further, the initialized regression tree model is trained to obtain the gradient-descent step length of the regression tree model, specifically as follows:
M regression trees are generated iteratively, m ∈ [1, M], where m denotes the m-th tree.
The number of samples in the data set is denoted N, i ∈ [1, N], where i denotes the i-th sample. The negative gradient of the loss function is computed and taken as the residual estimate r_im, expressed as:
r_im = −[∂L(y_i, f(x_i)) / ∂f(x_i)], evaluated at f = f_(m−1)
where f_(m−1)(x_i) denotes the regression tree model of the (m−1)-th tree at the i-th sample, and y_i denotes the output variable of the i-th sample.
A regression tree model g_m(x) is generated from these residuals, partitioning the input space of the m-th tree into J_m disjoint regions R_jm (j = 1, …, J_m), and the gradient-descent step length is computed as:
β_m = argmin_β Σ_(i=1)^(N) L(y_i, f_(m−1)(x_i) + β · g_m(x_i))
where β_m denotes the gradient-descent step length and β is the step length determined by line search.
Specific example:
Based on the above steps, actual 2018 production data from a tobacco factory's cigarette production line are used to illustrate the difference between the present application and the prior art. First, through discussions with the factory's technical experts, association analysis was performed on the accumulated mass of production data, yielding 10 key process parameters of the cut-tobacco (primary processing) line; based on the association analysis results, the values of these 10 process parameters serve as the model inputs, and the cigarette loose-ends ratio of the final rolling process serves as the model's predicted variable.
Cut-tobacco production data over a period of time were collected and preprocessed, removing redundant process parameters and abnormal records with missing loose-ends ratios. Meanwhile, timestamps were used to match the process parameters with the work orders of the loose-ends ratio, ensuring that the primary-processing and rolling-and-packing data are correlated, yielding 500 groups of data in total. The data were sampled randomly and uniformly: 380 groups were selected as training samples to train the improved gradient boosting decision tree prediction model, and the remaining samples form the test set, used to test the prediction accuracy of the improved gradient boosting decision tree model. The initial model parameters are: Huber loss function, leaf node depth 3, 100 iterations, and learning rate 0.1. The model training and prediction results are shown in Figs. 3 and 4.
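With the stated initial parameters (Huber loss, tree depth 3, 100 iterations, learning rate 0.1), a comparable model can be instantiated with scikit-learn as sketched below; the training data here are a synthetic stand-in, since the production data are not reproduced in this document:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for the 380 training samples with 10 key process
# parameters; the toy target loosely mimics a small loose-ends ratio.
rng = np.random.default_rng(1)
X_train = rng.random((380, 10))
y_train = X_train[:, 0] * 0.05 + rng.normal(0, 0.005, 380)

# Initial parameters as stated in the embodiment.
model = GradientBoostingRegressor(
    loss="huber", max_depth=3, n_estimators=100, learning_rate=0.1)
model.fit(X_train, y_train)
```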
To further probe and compare the performance of the improved gradient boosting decision tree model in cigarette loose-ends ratio prediction, the application runs a comparison against a plain gradient boosting decision tree model: identical training and test samples are chosen, and the two models are trained and compared. The coefficient of determination R² and the root-mean-square error RMSE serve as evaluation indices for model quality: a larger R² indicates a better goodness of fit of the prediction model, and a smaller RMSE indicates more precise prediction results. The prediction evaluation indices of the different prediction models are shown in Table 1.
Table 1: Comparison of results of different prediction models
As can be seen from Table 1, on both the training set and the test set, the improved gradient boosting decision tree prediction model scores better on every evaluation index than the plain gradient boosting decision tree model, showing that it performs better in cigarette loose-ends ratio prediction and validating the effectiveness of this technical solution.
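Both evaluation indices can be computed directly from their definitions; a small sketch with hypothetical prediction values (not the figures from Table 1):

```python
import numpy as np

def r2_score_(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical loose-ends ratios and model predictions:
y_true = np.array([0.10, 0.12, 0.08, 0.11])
y_pred = np.array([0.11, 0.12, 0.09, 0.10])
print(r2_score_(y_true, y_pred), rmse(y_true, y_pred))
```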
Embodiment 2:
A cigarette loose-ends ratio prediction system based on an improved gradient boosting decision tree, as shown in Fig. 2, comprises a data acquisition module 100, a preprocessing module 200, a reprocessing module 300, a model training module 400, and a prediction module 500:
the data acquisition module 100 is configured to obtain, for the same batch of cut tobacco, the process parameters of each primary-processing (throwing) step and the loose-ends ratio data of the final rolling and packing step, and to combine the process parameters and loose-ends ratio data into a raw data set;
the preprocessing module 200 is configured to partition the raw data set using correlation analysis, determine the key process parameters, and obtain a key process parameter set;
the reprocessing module 300 is configured to normalize the data in the key process parameter set and randomly split the normalized data into a training data set and a test data set;
the model training module 400 is configured to build the corresponding regression tree models iteratively from the training samples in the training data set, thereby constructing an improved gradient boosting decision tree model;
the prediction module 500 is configured to input the test data set samples into the improved gradient boosting decision tree model and predict the cigarette loose-ends ratio.
The system further comprises an optimization module 600 configured to select the model parameters of the improved gradient boosting decision tree model by cross-validation, obtaining an optimized improved gradient boosting decision tree model.
The reprocessing module 300 normalizes the data as follows:
z_k = (D_k − D_k,min) / (D_k,max − D_k,min)
where z_k is the normalized datum, D_k is the measured datum before normalization, and D_k,min and D_k,max are the minimum and maximum values of the parameter.
The model training module 400 is configured to:
build a regression tree model on the training data set, denoted {(x_i, y_i)}, i = 1, …, N, where x denotes the input variables and y denotes the corresponding output variable; assuming each regression tree has J_m leaves, the input space is partitioned into J_m disjoint regions R_jm (j = 1, …, J_m), and a constant output value is determined on each region; letting b_jm be the constant value on region R_jm, the regression tree model is expressed as:
g_m(x) = Σ_(j=1)^(J_m) b_jm · I(x ∈ R_jm)
where the R_jm denote the J_m disjoint regions, I denotes the indicator function of the region decision, and g_m(x) denotes the regression tree model;
initialize the regression tree model with the Huber loss function and train the initialized model, the initialized regression tree model being expressed as:
f_0(x) = argmin_β Σ_(i=1)^(N) L(y_i, β)
where N denotes the number of samples, L denotes the loss function, x denotes the input variable, y denotes the corresponding output variable, and f(x) denotes the fitting function;
train the initialized regression tree model to obtain the gradient-descent step length of the regression tree model;
based on the established gradient-descent step length, update the initialized regression tree model, the updated regression tree model being expressed as:
f_m(x) = f_(m−1)(x) + lr · β_m · g_m(x)
where lr denotes the learning rate, x denotes the input variable, β_m denotes the gradient-descent step length, g_m(x) denotes the regression tree model, f_m(x) denotes the updated regression tree model, and f_(m−1)(x) denotes the regression tree model before the update;
update the regression tree model repeatedly toward the goal of minimizing the loss function, and finally output a stable improved gradient boosting decision tree model.
The model training module 400 is further configured to:
iteratively generate M regression trees, m ∈ [1, M], where m denotes the m-th tree;
denote the number of samples in the sample dataset as N, i ∈ [1, N], where i denotes the i-th sample; obtain the negative gradient value of the loss function and take the negative gradient value as the residual estimate r_{im}, the residual estimate being expressed as:
r_{im} = −[∂L(y_i, f(x_i)) / ∂f(x_i)]_{f(x) = f_{m-1}(x)}
where f_{m-1}(x_i) denotes the regression tree model of the (m−1)-th tree at the i-th sample, and y_i denotes the output variable corresponding to the i-th sample;
generate a regression tree model g_m(x) based on the generated residuals, divide the input space of the m-th tree into J_m disjoint regions R_{1m}, R_{2m}, …, R_{J_m m}, and calculate the gradient-descent step length, expressed as:
β_m = argmin_β Σ_{i=1}^{N} L(y_i, f_{m-1}(x_i) + β · g_m(x_i))
where β_m denotes the gradient-descent step length and β is the step length determined by the line search method.
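The negative-gradient residual and the line-search step length just described can be sketched as follows. This is a hedged illustration: the Huber threshold delta and the grid-based approximation of the line search (in place of an exact one-dimensional minimisation) are assumptions.

```python
import numpy as np

def huber_loss(y, f, delta=1.0):
    """Huber loss summed over samples."""
    a = np.abs(y - f)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta)).sum()

def negative_gradient(y, f_prev, delta=1.0):
    """r_im = -[dL(y_i, f(x_i)) / df(x_i)] evaluated at f = f_{m-1}."""
    r = y - f_prev
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def line_search_beta(y, f_prev, g_x, grid=None):
    """beta_m = argmin_beta sum_i L(y_i, f_{m-1}(x_i) + beta * g_m(x_i))."""
    if grid is None:
        grid = np.linspace(0.0, 2.0, 201)   # coarse grid stands in for exact search
    losses = [huber_loss(y, f_prev + b * g_x) for b in grid]
    return float(grid[int(np.argmin(losses))])
```

Note that for residuals larger than delta the Huber gradient is clipped to ±delta, which is what gives the model its robustness to outliers in the void-end rate data.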
The present invention applies the improved gradient boosting decision tree to cigarette void-end rate prediction, overcoming the drawbacks of common prediction methods such as low prediction accuracy, long model computation time, and poor interpretability. The improved gradient boosting decision tree model is an iterative decision tree algorithm based on classification and regression trees (CART): apart from the first decision tree, which is generated from the original prediction indices, the goal of each iteration round is to minimize the current learner's loss function, i.e., the loss function always descends along its gradient direction, so that the final residual approaches 0 through continuous iteration; the final prediction result is obtained by summing the results of all the trees.
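The normalization and random train/test division used to prepare the data for this model (recited in the claims below) can be sketched as follows. The 80/20 split ratio, the fixed seed, and the per-column layout of the process parameters are illustrative assumptions.

```python
import numpy as np

def min_max_normalize(D):
    """z_k = (D_k - D_k,min) / (D_k,max - D_k,min), applied column-wise."""
    mn, mx = D.min(axis=0), D.max(axis=0)
    return (D - mn) / (mx - mn)

def random_split(X, y, train_frac=0.8, seed=0):
    """Randomly divide the normalized data into training and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(train_frac * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]
```

Min–max scaling maps every key process parameter into [0, 1], so parameters with different physical units contribute on a comparable scale during tree construction.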
As for the device embodiment, since it is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiment description.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, such that a series of operational steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be understood that "one embodiment" or "an embodiment" mentioned in this specification means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, the phrases "one embodiment" or "an embodiment" appearing in various places throughout the specification do not necessarily all refer to the same embodiment.
Although preferred embodiments of the present invention have been described, those skilled in the art, once apprised of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
In addition, it should be noted that the specific embodiments described in this specification, including the shapes and names of parts and components, may differ. All equivalent or simple changes made according to the structures, features, and principles described in the present patent application are included within the scope of protection of the present patent. Those skilled in the art may make various modifications or additions to the described specific embodiments, or substitute them in a similar manner, without departing from the structure of the present invention or exceeding the scope defined by the claims, and such variations fall within the scope of protection of the present invention.
Claims (10)
1. A cigarette void-end rate prediction method based on an improved gradient boosting decision tree, characterized by comprising the following steps:
obtaining the process parameters of a same batch of cut tobacco under each primary tobacco-processing step and the void-end rate data under the final rolling-and-packing step, and composing the process parameters and void-end rate data into a raw dataset;
dividing the raw dataset by means of correlation analysis to determine the key process parameters, obtaining a key process parameter set;
normalizing the data in the key process parameter set, and randomly dividing the normalized data to obtain a training dataset and a test dataset;
based on the training samples in the training dataset, establishing corresponding regression tree models through an iterative algorithm, thereby constructing an improved gradient boosting decision tree model;
inputting the test dataset samples into the improved gradient boosting decision tree model to predict the cigarette void-end rate.
2. The cigarette void-end rate prediction method based on an improved gradient boosting decision tree according to claim 1, characterized in that, before the step of inputting the test dataset samples into the improved gradient boosting decision tree model to predict the cigarette void-end rate, the method further comprises an iterative optimization step, specifically:
selecting the model parameters of the improved gradient boosting decision tree model by cross-validation to obtain an optimized improved gradient boosting decision tree model.
3. The cigarette void-end rate prediction method based on an improved gradient boosting decision tree according to claim 1, characterized in that the normalization of the data in the key process parameter set is specifically:
z_k = (D_k − D_{k,min}) / (D_{k,max} − D_{k,min})
where z_k is the normalized data, D_k is the measured data before normalization, D_{k,min} is the minimum value of the parameter, and D_{k,max} is the maximum value of the parameter.
4. The cigarette void-end rate prediction method based on an improved gradient boosting decision tree according to claim 1, characterized in that the establishing of corresponding regression tree models through an iterative algorithm based on the training samples in the training dataset, thereby constructing an improved gradient boosting decision tree model, is specifically:
establishing a regression tree model based on the training dataset, the training dataset being denoted as T = {(x_1, y_1), (x_2, y_2), …, (x_N, y_N)}, where x denotes the input variable and y denotes the corresponding output variable; assuming each regression tree has J_m leaves, dividing the input space into J_m disjoint regions R_{1m}, R_{2m}, …, R_{J_m m}, and determining a constant output value on each region; letting b_{jm} be the constant value on region R_{jm}, the regression tree model is expressed as:
g_m(x) = Σ_{j=1}^{J_m} b_{jm} · I(x ∈ R_{jm})
where R_{1m}, R_{2m}, …, R_{J_m m} denote the J_m disjoint regions, I denotes the indicator function for region membership, and g_m(x) denotes the regression tree model;
initializing the established regression tree model with the Huber loss function to obtain the initialized regression tree model, and training the initialized regression tree model, the initialized regression tree model being expressed as:
f_0(x) = argmin_c Σ_{i=1}^{N} L(y_i, c)
where N denotes the number of samples, L denotes the loss function, x denotes the input variable, y denotes the corresponding output variable, and f(x) denotes a fitting function;
training the initialized regression tree model to obtain the gradient-descent step length of the regression tree model;
updating the initialized regression tree model based on the obtained gradient-descent step length, the updated regression tree model being expressed as:
f_m(x) = f_{m-1}(x) + lr · β_m · g_m(x)
where lr denotes the learning rate, x denotes the input variable, β_m denotes the gradient-descent step length, g_m(x) denotes the regression tree model, f_m(x) denotes the updated regression tree model, and f_{m-1}(x) denotes the regression tree model before the update;
continuously updating the updated regression tree model based on minimizing the expected value of the loss function, and finally outputting a stable improved gradient boosting decision tree model.
5. The cigarette void-end rate prediction method based on an improved gradient boosting decision tree according to claim 4, characterized in that the training of the initialized regression tree model to obtain the gradient-descent step length of the regression tree model is specifically:
iteratively generating M regression trees, m ∈ [1, M], where m denotes the m-th tree;
denoting the number of samples in the sample dataset as N, i ∈ [1, N], where i denotes the i-th sample; obtaining the negative gradient value of the loss function and taking the negative gradient value as the residual estimate r_{im}, the residual estimate being expressed as:
r_{im} = −[∂L(y_i, f(x_i)) / ∂f(x_i)]_{f(x) = f_{m-1}(x)}
where f_{m-1}(x_i) denotes the regression tree model of the (m−1)-th tree at the i-th sample, and y_i denotes the output variable corresponding to the i-th sample;
generating a regression tree model g_m(x) based on the generated residuals, dividing the input space of the m-th tree into J_m disjoint regions R_{1m}, R_{2m}, …, R_{J_m m}, and calculating the gradient-descent step length, expressed as:
β_m = argmin_β Σ_{i=1}^{N} L(y_i, f_{m-1}(x_i) + β · g_m(x_i))
where β_m denotes the gradient-descent step length and β is the step length determined by the line search method.
6. A cigarette void-end rate prediction system based on an improved gradient boosting decision tree, characterized by comprising a data acquisition module, a preprocessing module, a reprocessing module, a model training module, and a prediction module;
the data acquisition module is configured to obtain the process parameters of a same batch of cut tobacco under each primary tobacco-processing step and the void-end rate data under the final rolling-and-packing step, and to compose the process parameters and void-end rate data into a raw dataset;
the preprocessing module is configured to divide the raw dataset by means of correlation analysis, determine the key process parameters, and obtain a key process parameter set;
the reprocessing module is configured to normalize the data in the key process parameter set and randomly divide the normalized data to obtain a training dataset and a test dataset;
the model training module is configured to establish corresponding regression tree models through an iterative algorithm based on the training samples in the training dataset, thereby constructing an improved gradient boosting decision tree model;
the prediction module is configured to input the test dataset samples into the improved gradient boosting decision tree model to predict the cigarette void-end rate.
7. The cigarette void-end rate prediction system based on an improved gradient boosting decision tree according to claim 6, characterized by further comprising an optimization module, wherein the optimization module is configured to select the model parameters of the improved gradient boosting decision tree model by cross-validation to obtain an optimized improved gradient boosting decision tree model.
8. The cigarette void-end rate prediction system based on an improved gradient boosting decision tree according to claim 6, characterized in that the reprocessing module is configured to normalize the data as:
z_k = (D_k − D_{k,min}) / (D_{k,max} − D_{k,min})
where z_k is the normalized data, D_k is the measured data before normalization, D_{k,min} is the minimum value of the parameter, and D_{k,max} is the maximum value of the parameter.
9. The cigarette void-end rate prediction system based on an improved gradient boosting decision tree according to claim 6, characterized in that the model training module is configured to:
establish a regression tree model based on the training dataset, the training dataset being denoted as T = {(x_1, y_1), (x_2, y_2), …, (x_N, y_N)}, where x denotes the input variable and y denotes the corresponding output variable; assuming each regression tree has J_m leaves, divide the input space into J_m disjoint regions R_{1m}, R_{2m}, …, R_{J_m m}, and determine a constant output value on each region; letting b_{jm} be the constant value on region R_{jm}, the regression tree model is expressed as:
g_m(x) = Σ_{j=1}^{J_m} b_{jm} · I(x ∈ R_{jm})
where R_{1m}, R_{2m}, …, R_{J_m m} denote the J_m disjoint regions, I denotes the indicator function for region membership, and g_m(x) denotes the regression tree model;
initialize the established regression tree model with the Huber loss function to obtain the initialized regression tree model, and train the initialized regression tree model, the initialized regression tree model being expressed as:
f_0(x) = argmin_c Σ_{i=1}^{N} L(y_i, c)
where N denotes the number of samples, L denotes the loss function, x denotes the input variable, y denotes the corresponding output variable, and f(x) denotes a fitting function;
train the initialized regression tree model to obtain the gradient-descent step length of the regression tree model;
update the initialized regression tree model based on the obtained gradient-descent step length, the updated regression tree model being expressed as:
f_m(x) = f_{m-1}(x) + lr · β_m · g_m(x)
where lr denotes the learning rate, x denotes the input variable, β_m denotes the gradient-descent step length, g_m(x) denotes the regression tree model, f_m(x) denotes the updated regression tree model, and f_{m-1}(x) denotes the regression tree model before the update;
continuously update the updated regression tree model based on minimizing the expected value of the loss function, and finally output a stable improved gradient boosting decision tree model.
10. The cigarette void-end rate prediction system based on an improved gradient boosting decision tree according to claim 9, characterized in that the model training module is further configured to:
iteratively generate M regression trees, m ∈ [1, M], where m denotes the m-th tree;
denote the number of samples in the sample dataset as N, i ∈ [1, N], where i denotes the i-th sample; obtain the negative gradient value of the loss function and take the negative gradient value as the residual estimate r_{im}, the residual estimate being expressed as:
r_{im} = −[∂L(y_i, f(x_i)) / ∂f(x_i)]_{f(x) = f_{m-1}(x)}
where f_{m-1}(x_i) denotes the regression tree model of the (m−1)-th tree at the i-th sample, and y_i denotes the output variable corresponding to the i-th sample;
generate a regression tree model g_m(x) based on the generated residuals, divide the input space of the m-th tree into J_m disjoint regions R_{1m}, R_{2m}, …, R_{J_m m}, and calculate the gradient-descent step length, expressed as:
β_m = argmin_β Σ_{i=1}^{N} L(y_i, f_{m-1}(x_i) + β · g_m(x_i))
where β_m denotes the gradient-descent step length and β is the step length determined by the line search method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910539106.1A CN110245802B (en) | 2019-06-20 | 2019-06-20 | Cigarette empty-head rate prediction method and system based on improved gradient lifting decision tree |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910539106.1A CN110245802B (en) | 2019-06-20 | 2019-06-20 | Cigarette empty-head rate prediction method and system based on improved gradient lifting decision tree |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110245802A true CN110245802A (en) | 2019-09-17 |
CN110245802B CN110245802B (en) | 2021-08-24 |
Family
ID=67888427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910539106.1A Active CN110245802B (en) | 2019-06-20 | 2019-06-20 | Cigarette empty-head rate prediction method and system based on improved gradient lifting decision tree |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110245802B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110765110A (en) * | 2019-10-24 | 2020-02-07 | 深圳前海微众银行股份有限公司 | Generalization capability processing method, device, equipment and storage medium |
CN110837921A (en) * | 2019-10-29 | 2020-02-25 | 西安建筑科技大学 | Real estate price prediction research method based on gradient lifting decision tree mixed model |
CN110990784A (en) * | 2019-11-19 | 2020-04-10 | 湖北中烟工业有限责任公司 | Cigarette ventilation rate prediction method based on gradient lifting regression tree |
CN111144667A (en) * | 2020-01-16 | 2020-05-12 | 红云红河烟草(集团)有限责任公司 | Tobacco conditioner discharged material water content prediction method based on gradient lifting tree |
CN111191712A (en) * | 2019-12-27 | 2020-05-22 | 浙江工业大学 | Printing and dyeing setting machine energy consumption classification prediction method based on gradient lifting decision tree |
CN111291835A (en) * | 2020-03-27 | 2020-06-16 | 清华大学深圳国际研究生院 | Regression tree prediction method, control device and computer readable storage medium |
CN111429970A (en) * | 2019-12-24 | 2020-07-17 | 大连海事大学 | Method and system for obtaining multi-gene risk scores by performing feature selection based on extreme gradient lifting method |
CN111476274A (en) * | 2020-03-16 | 2020-07-31 | 宜通世纪科技股份有限公司 | Big data prediction analysis method, system, device and storage medium |
CN111475988A (en) * | 2020-04-03 | 2020-07-31 | 浙江工业大学之江学院 | Printing and dyeing setting machine energy consumption optimization method based on gradient lifting decision tree and genetic algorithm |
CN111784084A (en) * | 2020-08-17 | 2020-10-16 | 北京市城市规划设计研究院 | Travel generation prediction method, system and device based on gradient lifting decision tree |
CN111914880A (en) * | 2020-06-18 | 2020-11-10 | 北京百度网讯科技有限公司 | Decision tree generation method and device, electronic equipment and storage medium |
CN111932037A (en) * | 2020-09-23 | 2020-11-13 | 浙江创泰科技有限公司 | Parking space state prediction method and system based on machine learning |
CN112001305A (en) * | 2020-08-21 | 2020-11-27 | 西安交通大学 | Feature optimization SSVEP asynchronous recognition method based on gradient lifting decision tree |
CN112006697A (en) * | 2020-06-02 | 2020-12-01 | 东南大学 | Gradient boosting decision tree depression recognition method based on voice signals |
CN112884215A (en) * | 2021-02-02 | 2021-06-01 | 国网甘肃省电力公司信息通信公司 | Parameter optimization method based on gradient enhancement tree population prediction model |
CN113256021A (en) * | 2021-06-16 | 2021-08-13 | 北京德风新征程科技有限公司 | Product quality alarm method and device based on ensemble learning |
CN113313417A (en) * | 2021-06-23 | 2021-08-27 | 北京鼎泰智源科技有限公司 | Complaint risk signal grading method and device based on decision tree model |
CN113642854A (en) * | 2021-07-23 | 2021-11-12 | 重庆中烟工业有限责任公司 | Cigarette single gram weight prediction method and device and computer readable storage medium |
CN117251814A (en) * | 2023-09-28 | 2023-12-19 | 广东省交通开发有限公司 | Method for analyzing electric quantity loss abnormality of highway charging pile |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107103514A (en) * | 2017-04-25 | 2017-08-29 | 北京京东尚科信息技术有限公司 | Commodity distinguishing label determines method and apparatus |
CN107563542A (en) * | 2017-08-02 | 2018-01-09 | 阿里巴巴集团控股有限公司 | Data predication method and device and electronic equipment |
CN108597227A (en) * | 2018-05-29 | 2018-09-28 | 重庆大学 | Road traffic flow forecasting method under freeway toll station |
CN108732559A (en) * | 2018-03-30 | 2018-11-02 | 北京邮电大学 | A kind of localization method, device, electronic equipment and readable storage medium storing program for executing |
CN108876019A (en) * | 2018-05-31 | 2018-11-23 | 中国电力科学研究院有限公司 | A kind of electro-load forecast method and system based on big data |
CN109222208A (en) * | 2018-10-30 | 2019-01-18 | 杭州安脉盛智能技术有限公司 | Technology for making tobacco threds analysis optimization method and system towards production of cigarettes norm controlling |
CN109300046A (en) * | 2018-08-01 | 2019-02-01 | 平安科技(深圳)有限公司 | Electronic device, the vehicle insurance based on the road conditions factor survey dispatching method and storage medium |
2019
- 2019-06-20 CN CN201910539106.1A patent/CN110245802B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107103514A (en) * | 2017-04-25 | 2017-08-29 | 北京京东尚科信息技术有限公司 | Commodity distinguishing label determines method and apparatus |
CN107563542A (en) * | 2017-08-02 | 2018-01-09 | 阿里巴巴集团控股有限公司 | Data predication method and device and electronic equipment |
CN108732559A (en) * | 2018-03-30 | 2018-11-02 | 北京邮电大学 | A kind of localization method, device, electronic equipment and readable storage medium storing program for executing |
CN108597227A (en) * | 2018-05-29 | 2018-09-28 | 重庆大学 | Road traffic flow forecasting method under freeway toll station |
CN108876019A (en) * | 2018-05-31 | 2018-11-23 | 中国电力科学研究院有限公司 | A kind of electro-load forecast method and system based on big data |
CN109300046A (en) * | 2018-08-01 | 2019-02-01 | 平安科技(深圳)有限公司 | Electronic device, the vehicle insurance based on the road conditions factor survey dispatching method and storage medium |
CN109222208A (en) * | 2018-10-30 | 2019-01-18 | 杭州安脉盛智能技术有限公司 | Technology for making tobacco threds analysis optimization method and system towards production of cigarettes norm controlling |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110765110A (en) * | 2019-10-24 | 2020-02-07 | 深圳前海微众银行股份有限公司 | Generalization capability processing method, device, equipment and storage medium |
CN110837921A (en) * | 2019-10-29 | 2020-02-25 | 西安建筑科技大学 | Real estate price prediction research method based on gradient lifting decision tree mixed model |
CN110990784A (en) * | 2019-11-19 | 2020-04-10 | 湖北中烟工业有限责任公司 | Cigarette ventilation rate prediction method based on gradient lifting regression tree |
CN110990784B (en) * | 2019-11-19 | 2024-01-26 | 湖北中烟工业有限责任公司 | Cigarette ventilation rate prediction method based on gradient lifting regression tree |
CN111429970B (en) * | 2019-12-24 | 2024-03-22 | 大连海事大学 | Method and system for acquiring multiple gene risk scores based on feature selection of extreme gradient lifting method |
CN111429970A (en) * | 2019-12-24 | 2020-07-17 | 大连海事大学 | Method and system for obtaining multi-gene risk scores by performing feature selection based on extreme gradient lifting method |
CN111191712A (en) * | 2019-12-27 | 2020-05-22 | 浙江工业大学 | Printing and dyeing setting machine energy consumption classification prediction method based on gradient lifting decision tree |
CN111191712B (en) * | 2019-12-27 | 2023-06-30 | 浙江工业大学 | Printing and dyeing setting machine energy consumption classification prediction method based on gradient lifting decision tree |
CN111144667A (en) * | 2020-01-16 | 2020-05-12 | 红云红河烟草(集团)有限责任公司 | Tobacco conditioner discharged material water content prediction method based on gradient lifting tree |
CN111476274A (en) * | 2020-03-16 | 2020-07-31 | 宜通世纪科技股份有限公司 | Big data prediction analysis method, system, device and storage medium |
CN111476274B (en) * | 2020-03-16 | 2024-03-08 | 宜通世纪科技股份有限公司 | Big data predictive analysis method, system, device and storage medium |
CN111291835A (en) * | 2020-03-27 | 2020-06-16 | 清华大学深圳国际研究生院 | Regression tree prediction method, control device and computer readable storage medium |
CN111291835B (en) * | 2020-03-27 | 2023-04-07 | 清华大学深圳国际研究生院 | Regression tree prediction method, control device and computer readable storage medium |
CN111475988A (en) * | 2020-04-03 | 2020-07-31 | 浙江工业大学之江学院 | Printing and dyeing setting machine energy consumption optimization method based on gradient lifting decision tree and genetic algorithm |
CN111475988B (en) * | 2020-04-03 | 2024-02-23 | 浙江工业大学之江学院 | Printing and dyeing setting energy consumption optimization method based on gradient lifting decision tree and genetic algorithm |
CN112006697A (en) * | 2020-06-02 | 2020-12-01 | 东南大学 | Gradient boosting decision tree depression recognition method based on voice signals |
CN111914880A (en) * | 2020-06-18 | 2020-11-10 | 北京百度网讯科技有限公司 | Decision tree generation method and device, electronic equipment and storage medium |
CN111784084B (en) * | 2020-08-17 | 2021-12-28 | 北京市城市规划设计研究院 | Travel generation prediction method, system and device based on gradient lifting decision tree |
CN111784084A (en) * | 2020-08-17 | 2020-10-16 | 北京市城市规划设计研究院 | Travel generation prediction method, system and device based on gradient lifting decision tree |
CN112001305A (en) * | 2020-08-21 | 2020-11-27 | 西安交通大学 | Feature optimization SSVEP asynchronous recognition method based on gradient lifting decision tree |
CN111932037A (en) * | 2020-09-23 | 2020-11-13 | 浙江创泰科技有限公司 | Parking space state prediction method and system based on machine learning |
CN112884215A (en) * | 2021-02-02 | 2021-06-01 | 国网甘肃省电力公司信息通信公司 | Parameter optimization method based on gradient enhancement tree population prediction model |
CN113256021A (en) * | 2021-06-16 | 2021-08-13 | 北京德风新征程科技有限公司 | Product quality alarm method and device based on ensemble learning |
CN113313417A (en) * | 2021-06-23 | 2021-08-27 | 北京鼎泰智源科技有限公司 | Complaint risk signal grading method and device based on decision tree model |
CN113313417B (en) * | 2021-06-23 | 2024-01-26 | 北京鼎泰智源科技有限公司 | Method and device for classifying complaint risk signals based on decision tree model |
CN113642854A (en) * | 2021-07-23 | 2021-11-12 | 重庆中烟工业有限责任公司 | Cigarette single gram weight prediction method and device and computer readable storage medium |
CN117251814A (en) * | 2023-09-28 | 2023-12-19 | 广东省交通开发有限公司 | Method for analyzing electric quantity loss abnormality of highway charging pile |
Also Published As
Publication number | Publication date |
---|---|
CN110245802B (en) | 2021-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110245802A (en) | Based on the cigarette void-end rate prediction technique and system for improving gradient promotion decision tree | |
CN108924198A (en) | A kind of data dispatching method based on edge calculations, apparatus and system | |
CN109858758A (en) | A kind of the combination weighting appraisal procedure and system of distribution network electric energy quality | |
CN108780313B (en) | Method, system, and computer-readable medium for performing targeted parameter analysis for an assembly line | |
CN111611085B (en) | Yun Bian collaboration-based man-machine hybrid enhanced intelligent system, method and device | |
CN107015900B (en) | A kind of service performance prediction technique of video website | |
Kaushik et al. | Software cost optimization integrating fuzzy system and COA-Cuckoo optimization algorithm | |
CN108961031A (en) | Realize information processing method, device and the computer readable storage medium of loan examination & approval | |
US20230394110A1 (en) | Data processing method, apparatus, device, and medium | |
CN112184412A (en) | Modeling method, device, medium and electronic equipment of credit rating card model | |
CN112907026A (en) | Comprehensive evaluation method based on editable mesh index system | |
Dini et al. | Water distribution network quality model calibration: a case study–Ahar | |
CN115189416A (en) | Power generation system control method and system based on day-ahead electricity price grading prediction model | |
CN108537322A (en) | Neural network interlayer activation value quantization method and device | |
Miettinen et al. | A new preference handling technique for interactive multiobjective optimization without trading-off | |
JP2012123592A (en) | Optimization program, apparatus and program | |
CN112329969A (en) | Building intelligent engineering investment prediction method based on support vector machine | |
EP3588326A1 (en) | Solving a deterministic global optimisation problem | |
CN113191828B (en) | User electricity price value grade label construction method, device, equipment and medium | |
Mendula et al. | Energy-aware edge federated learning for enhanced reliability and sustainability | |
Chen et al. | An exact calculation method for Gini coefficient and its application in China | |
CN105656029A (en) | Method and device for determining load points outside power supply radius of low-voltage power supply region | |
Behera et al. | A long term load forecasting of an indian grid for power system planning | |
JP2021120833A (en) | Model update device and method and process control system | |
CN110287272A (en) | A kind of configurable real-time feature extraction method, apparatus and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||