CN114881359A - GBDT and XGboost fused road surface IRI prediction method - Google Patents

GBDT and XGboost fused road surface IRI prediction method Download PDF

Info

Publication number
CN114881359A
Authority
CN
China
Prior art keywords
model
prediction
layer
iri
tree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210625570.4A
Other languages
Chinese (zh)
Other versions
CN114881359B (en)
Inventor
徐周聪
骆志元
王慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Merchants Chongqing Communications Research and Design Institute Co Ltd
Original Assignee
China Merchants Chongqing Communications Research and Design Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Merchants Chongqing Communications Research and Design Institute Co Ltd filed Critical China Merchants Chongqing Communications Research and Design Institute Co Ltd
Priority to CN202210625570.4A priority Critical patent/CN114881359B/en
Publication of CN114881359A publication Critical patent/CN114881359A/en
Application granted granted Critical
Publication of CN114881359B publication Critical patent/CN114881359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08Construction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Development Economics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Mathematical Physics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a road surface IRI prediction method fusing GBDT and XGboost, and belongs to the technical field of pavement monitoring. The method comprises the following steps. S1: acquire pavement characteristic data and select a characteristic data set using a random forest algorithm. S2: construct a Stacking fusion model: divide the characteristic data set of step S1 into several sub data sets and input them into each base learner of the first-layer prediction model, each base learner outputting its own prediction result; then take the first-layer model outputs together with the characteristic data set of step S1 as the input of the second layer, train the meta-learner of the second-layer prediction model, and average the second-layer model outputs to obtain the final prediction result. The first-layer prediction models comprise a GBDT model and an XGboost model. The invention improves road IRI prediction accuracy, greatly improves the benefit of maintenance fund planning, and achieves the goal of optimal cost benefit.

Description

GBDT and XGboost fused road surface IRI prediction method
Technical Field
The invention belongs to the technical field of pavement monitoring, and relates to a pavement IRI prediction method fusing GBDT and XGboost.
Background
Existing methods for predicting the pavement evenness evaluation index — the International Roughness Index (IRI) — fall into two main types: prediction methods based on time series, and learning-type prediction methods based on characteristic parameters.
The time-series-based methods mainly predict the decay of performance indexes over the whole life cycle of a road surface and are used for maintenance planning and fund allocation. The data used mainly comprise structural characteristic data, traffic and environment data, and historical detection data, and sufficient historical detection data are often needed for modeling. Such methods do not make full use of the underlying detection data, including distress information, characteristic parameters and the like. Their main problem is poor prediction accuracy, so they can only be applied to network-level road management decisions.
The learning-type methods based on characteristic parameters are mainly divided into two types: one assists the modeling of a time-series model, and the other predicts other performance indexes from certain detection data. Their accuracy is relatively high, but a large amount of data is needed as support, and their portability across different project management modes is poor.
Therefore, a new road surface IRI prediction method is needed to improve the prediction accuracy.
Disclosure of Invention
In view of the above, the invention aims to provide a road surface IRI prediction method fusing GBDT and XGboost, which solves the problem of feature selection of the existing learning type prediction model; and the fusion model is adopted, so that the road surface IRI prediction precision is improved, the maintenance fund planning benefit can be greatly improved, and the goal of optimal cost benefit is realized.
In order to achieve the purpose, the invention provides the following technical scheme:
a road surface IRI prediction method fusing GBDT and XGboost specifically comprises the following steps:
s1: acquiring pavement characteristic data, and selecting a characteristic data set by adopting a random forest algorithm;
s2: constructing a Stacking fusion model: dividing the characteristic data set of the step S1 into a plurality of sub data sets, inputting the sub data sets into each base learner of the first layer prediction model, and outputting a respective prediction result by each base learner; then, the model output of the first layer and the feature data set of step S1 are used as the input of the second layer, the meta-learner of the second layer prediction model is trained, and the model outputs at the second layer are averaged to obtain the final prediction result. The first layer of prediction model comprises a GBDT model and an XGboost model; and the meta-learner of the second layer of prediction model adopts a Bagging model.
Further, in step S1, the random forest algorithm specifically includes: selecting features using the average impurity-decrease index, with classification or regression accuracy as the criterion function, by means of the sequential backward selection method and the generalized sequential backward selection method.
Further, in step S1, the average impurity-decrease index MDI is calculated by the formula:

$$\mathrm{MDI}=\frac{N_t}{N}\left(\mathrm{IMP}-\frac{N_{t_R}}{N_t}\,\mathrm{IMP}_R-\frac{N_{t_L}}{N_t}\,\mathrm{IMP}_L\right)$$

wherein $\mathrm{IMP}$ denotes the impurity of the node as a whole and $\mathrm{IMP}_R$, $\mathrm{IMP}_L$ the impurities of the right and left sides of the bifurcation; $N_t$, $N_{t_R}$, $N_{t_L}$ respectively denote the numbers of samples at the node and at its right and left children; and $N$ is the sample size.
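As a concrete illustration, the weighted impurity decrease that a single split contributes under the standard mean-decrease-in-impurity (MDI) definition can be sketched as follows; the function and variable names are illustrative, not taken from the patent:

```python
def impurity_decrease(imp, imp_left, imp_right, n_t, n_left, n_right, n):
    """Weighted impurity decrease contributed by one split (one term of an MDI score).

    imp, imp_left, imp_right : impurity at the node and at its left/right children
    n_t, n_left, n_right     : sample counts at the node and at its children
    n                        : total number of training samples
    """
    return (n_t / n) * (imp - (n_left / n_t) * imp_left
                            - (n_right / n_t) * imp_right)

# A node holding 100 of 200 samples with variance 4.0 is split into children of
# 60 and 40 samples with variances 1.0 and 2.0:
delta = impurity_decrease(4.0, 1.0, 2.0, 100, 60, 40, 200)  # 1.3
```

Summing such contributions over every node where a feature is used to split, and averaging over the trees of the forest, gives that feature's MDI score.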
Further, in step S2, constructing the GBDT model specifically includes: inputting the training sample set $T=\{(x_1,y_1),\dots,(x_i,y_i),\dots,(x_N,y_N)\}$, $x_i\in\mathcal{X}\subseteq\mathbf{R}^n$, where $\mathcal{X}$ is the input sample space, $x_i$ is the evaluation-index vector of the $i$-th sample, $y_i\in\mathcal{Y}\subseteq\mathbf{R}$ is the performance condition, the loss function is $L(y_i,f(x_i))$, and the output is a regression tree $f(x)$.

The specific training process of the GBDT model is as follows:

1) Initialize the estimation function so as to minimize the loss function:

$$f_0(x)=\arg\min_{c}\sum_{i=1}^{N}L(y_i,c)$$

where $f_0(x)$ is a tree with only one root node, $L(y_i,c)$ is the loss function, and $c$ is the constant that minimizes it.

2) For $m=1,2,\dots,M$, with $M$ the maximum number of iterations:

① For each sample $i=1,2,\dots,N$, calculate the negative gradient of the loss function and use it as the residual estimate:

$$r_{mi}=-\left[\frac{\partial L(y_i,f(x_i))}{\partial f(x_i)}\right]_{f=f_{m-1}}$$

where $r_{mi}$ is the residual estimate of the $i$-th sample.

② Fit a regression tree to the residuals $r_{mi}$ to estimate the leaf-node regions, obtaining the regions $R_{mj}$ of the $m$-th tree, $j=1,2,\dots,J$, where $J$ is the number of leaf nodes.

③ For $j=1,2,\dots,J$, estimate the value of each leaf-node region by a line search that minimizes the loss function:

$$c_{mj}=\arg\min_{c}\sum_{x_i\in R_{mj}}L\big(y_i,f_{m-1}(x_i)+c\big)$$

④ Update the learner $f_m(x)$:

$$f_m(x)=f_{m-1}(x)+\sum_{j=1}^{J}c_{mj}\,I(x\in R_{mj})$$

3) Accumulate all the $c_{mj}$ values over the leaf-node regions to obtain the final regression tree:

$$\hat{f}(x)=f_M(x)=\sum_{m=1}^{M}\sum_{j=1}^{J}c_{mj}\,I(x\in R_{mj})$$
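The training steps above can be sketched as a minimal from-scratch gradient-boosting loop. This is an illustrative sketch only, assuming squared loss (so the negative gradient $r_{mi}$ reduces to the plain residual) and depth-1 trees on a single feature; it is not the patent's implementation:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split (depth-1) regression tree on one feature."""
    best = None
    for thr in np.unique(x):
        left, right = residual[x <= thr], residual[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, thr, left.mean(), right.mean())
    _, thr, c_left, c_right = best
    return lambda q: np.where(q <= thr, c_left, c_right)

def gbdt_fit(x, y, n_trees=200, lr=0.1):
    """Step 1: f0 = mean(y) minimises squared loss; step 2: fit stumps to residuals."""
    f0 = y.mean()
    pred = np.full_like(y, f0, dtype=float)
    trees = []
    for _ in range(n_trees):
        r = y - pred                   # negative gradient of squared loss = residual
        tree = fit_stump(x, r)
        pred = pred + lr * tree(x)     # step 2-④: update the learner f_m
        trees.append(tree)
    # step 3: the final model accumulates all leaf values
    return lambda q: f0 + lr * sum(t(q) for t in trees)

# toy usage: learn y = 2x on a few points
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
model = gbdt_fit(x, 2 * x)
```

The learning rate `lr` plays the role of the shrinkage factor; smaller values need more trees but generalize better.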
further, in step S2, constructing an XGBoost model specifically includes: input training sample set T { (x) 1 ,y 1 ),…,(x i ,y i ),…,(x N ,y N )},
Figure BDA00036771985100000211
Where X is the input sample space, X i Is an index for evaluation of the properties of the specimen,
Figure BDA00036771985100000212
y is the performance condition;
objective function
Figure BDA00036771985100000213
Comprises the following steps:
Figure BDA0003677198510000031
wherein the content of the first and second substances,
Figure BDA0003677198510000032
representing a predicted value;
in the XGboost, each tree needs to be added one by one, so that the effect can be improved;
Figure BDA0003677198510000033
Figure BDA0003677198510000034
wherein t is the number of trees;
if the leaves have too many nodes, the risk of over-fitting the model increases. So at the targetAdding penalty term omega (f) into function t ) Limiting the number of leaf nodes;
Figure BDA0003677198510000035
wherein gamma is punishment, T is the number of leaves, omega is the weight of a leaf node, lambda is an adjustable parameter, and omega is j A leaf node score set is set for each tree;
complete objective function Obj (t) Comprises the following steps:
Figure BDA0003677198510000036
note the book
Figure BDA0003677198510000037
To obtain
Figure BDA0003677198510000038
Solving the optimal solution of the objective function:
Figure BDA0003677198510000039
the above formula can be used as the cotyledon fraction of the tree, and the structure of the tree is excellent along with the increase of the fraction; and once the post-splitting result is less than the maximum resultant value for the given parameter, the algorithm will stop growing the cotyledon depth.
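The closed-form leaf weight and structure score can be computed directly. The sketch below is illustrative, assuming squared loss (so $g_i=\hat{y}_i-y_i$ and $h_i=1$); the function names are not from the patent:

```python
def leaf_weight(G, H, lam):
    """Optimal leaf score w*_j = -G_j / (H_j + lambda) from the second-order objective."""
    return -G / (H + lam)

def structure_score(G, H, lam, gamma):
    """Obj* = -1/2 * sum_j G_j^2 / (H_j + lambda) + gamma * T; lower means a better tree."""
    return -0.5 * sum(g * g / (h + lam) for g, h in zip(G, H)) + gamma * len(G)

def split_gain(GL, HL, GR, HR, lam, gamma):
    """Gain of splitting one leaf into (left, right); growth stops when the gain is too small."""
    s = lambda G, H: G * G / (H + lam)
    return 0.5 * (s(GL, HL) + s(GR, HR) - s(GL + GR, HL + HR)) - gamma

# With squared loss, g_i = y_hat_i - y_i and h_i = 1. A leaf with gradient sums
# G = -3, H = 2 (lambda = 1) gets weight 1.0; splitting a parent into
# (G, H) = (-3, 2) and (3, 1) yields a positive gain, so the split is kept.
w = leaf_weight(-3.0, 2.0, 1.0)                    # 1.0
gain = split_gain(-3.0, 2.0, 3.0, 1.0, 1.0, 0.0)   # 3.75
```

A positive `gamma` shifts every gain down, so it acts as the minimum gain a split must achieve to be accepted.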
The invention has the beneficial effects that:
(1) the invention solves the problem of feature selection of the existing learning type prediction, and the selection of the feature index is inspected by using the support degree based on feature learning instead of using a simple correlation analysis technology (such as a Pearson correlation coefficient). The accuracy can be greatly improved, and the model has strong application scene transplanting capability.
(2) The invention adopts the fusion model, improves the road surface IRI prediction precision compared with a single model, and overcomes the problem that the traditional prediction model only solves the average fitting precision.
(3) The invention can be widely applied to different areas through improving the precision and the transplanting performance. By improving the prediction precision, the maintenance fund planning benefit can be greatly improved, and the aim of optimizing the cost benefit is fulfilled.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a road surface IRI prediction method fusing GBDT and XGboost according to the invention;
FIG. 2 is a schematic diagram of feature importance scoring based on a random forest algorithm;
FIG. 3 is a comparison graph of the prediction effect of the Stacking fusion model of the present invention and the existing multivariate linear regression, SVM, GBDT, XGboost models.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are intended only to illustrate the invention and not to limit it. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced; they do not represent the size of the actual product. It will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, it should be understood that terms indicating an orientation or positional relationship, such as "upper", "lower", "left", "right", "front" and "rear", are based on the orientation or positional relationship shown in the drawings and are used only for convenience and simplification of description; they do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative only and are not to be construed as limiting the present invention; their specific meaning can be understood by those skilled in the art according to the specific situation.
Referring to fig. 1 to 3, a road surface IRI prediction method fusing GBDT and XGBoost mainly includes the following steps:
1) all the characteristic data (according to actual conditions) are obtained, and specific characteristic variables are shown in table 1.
TABLE 1 characterization of characteristic variables
(The body of Table 1 appears only as images in the original publication and cannot be recovered from this text.)
2) Feature selection
Feature selection is performed by machine learning, using a random forest model. Random forest feature selection uses the random forest algorithm to automatically select highly correlated features. It is a wrapper-type feature selection algorithm based on random forest: it takes the random forest algorithm as the basic tool, uses classification or regression accuracy as the criterion function, and selects features with the sequential backward selection method and the generalized sequential backward selection method.
Feature selection uses the average impurity-decrease index:

$$\mathrm{MDI}=\frac{N_t}{N}\left(\mathrm{IMP}-\frac{N_{t_R}}{N_t}\,\mathrm{IMP}_R-\frac{N_{t_L}}{N_t}\,\mathrm{IMP}_L\right)$$

where $\mathrm{IMP}$, $\mathrm{IMP}_R$ and $\mathrm{IMP}_L$ are the impurities of the node and of its right and left children after the split, $N_t$, $N_{t_R}$ and $N_{t_L}$ are the corresponding numbers of samples, and $N$ is the sample size.

The input port receives the data set passed from the preceding node, and the output port outputs the data set with the discretized fields added. The feature-index results with no more than 20 features and an impurity-reduction gradient greater than 0.02 are selected, as shown in fig. 2.
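The two selection thresholds described above (at most 20 features, impurity-reduction above 0.02) can be sketched as a simple filter. The feature names and scores below are hypothetical, not taken from fig. 2:

```python
def select_features(importances, max_features=20, min_importance=0.02):
    """Keep features whose importance score exceeds min_importance,
    capped at max_features, ranked from most to least important."""
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    kept = [name for name, score in ranked if score > min_importance]
    return kept[:max_features]

# Hypothetical importance scores (e.g. normalised MDI values from a random forest).
scores = {
    "rut_depth": 0.31,
    "crack_length": 0.24,
    "traffic_volume": 0.18,
    "texture": 0.09,
    "open_to_traffic_date": 0.015,  # below the 0.02 threshold -> dropped
}
selected = select_features(scores)
```

With scikit-learn, the `importances` mapping would typically come from a fitted forest's `feature_importances_` attribute paired with the column names.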
3) GBDT model
The GBDT algorithm is essentially a combination of a large number of simple models. Its core is that, starting from the second decision tree, the input of each decision tree is the sum of the outputs of all previous decision trees; based on the boosting idea, the conclusions of multiple decision trees are accumulated to obtain the final output.
GBDT is a decision tree algorithm trained with the Gradient Boosting strategy, and mainly comprises three parts: gradient boosting, the decision tree algorithm, and shrinkage. The core idea of GBDT is residual reduction: each iteration reduces the residual produced by the previous iteration. When the model prediction is inconsistent with the actual observed value, a new decision tree is generated in the gradient direction that reduces the residual, and this is iterated until the output is basically consistent with the actual observations. One sign of continuous optimization and improvement of the model is the iterative descent of its loss function; the GBDT algorithm constructs each new model in the gradient-descent direction of the loss function.
The input is the training sample set $T=\{(x_1,y_1),\dots,(x_i,y_i),\dots,(x_N,y_N)\}$, $x_i\in\mathcal{X}\subseteq\mathbf{R}^n$, where $\mathcal{X}$ is the input sample space, $x_i$ is the evaluation-index vector of the $i$-th sample, $y_i\in\mathcal{Y}\subseteq\mathbf{R}$ is the performance condition, the loss function is $L(y_i,f(x_i))$, and the output is a regression tree $f(x)$.

The specific training process of the GBDT model is as follows:

1) Initialize the estimation function so as to minimize the loss function:

$$f_0(x)=\arg\min_{c}\sum_{i=1}^{N}L(y_i,c)$$

where $f_0(x)$ is a tree with only one root node, $L(y_i,c)$ is the loss function, and $c$ is the constant that minimizes it.

2) For $m=1,2,\dots,M$, with $M$ the maximum number of iterations:

① For each sample $i=1,2,\dots,N$, calculate the negative gradient of the loss function and use it as the residual estimate:

$$r_{mi}=-\left[\frac{\partial L(y_i,f(x_i))}{\partial f(x_i)}\right]_{f=f_{m-1}}$$

where $r_{mi}$ is the residual estimate of the $i$-th sample.

② Fit a regression tree to the residuals $r_{mi}$ to estimate the leaf-node regions, obtaining the regions $R_{mj}$ of the $m$-th tree, $j=1,2,\dots,J$, where $J$ is the number of leaf nodes.

③ For $j=1,2,\dots,J$, estimate the value of each leaf-node region by a line search that minimizes the loss function:

$$c_{mj}=\arg\min_{c}\sum_{x_i\in R_{mj}}L\big(y_i,f_{m-1}(x_i)+c\big)$$

④ Update the learner $f_m(x)$:

$$f_m(x)=f_{m-1}(x)+\sum_{j=1}^{J}c_{mj}\,I(x\in R_{mj})$$

3) Accumulate all the $c_{mj}$ values over the leaf-node regions to obtain the final regression tree:

$$\hat{f}(x)=f_M(x)=\sum_{m=1}^{M}\sum_{j=1}^{J}c_{mj}\,I(x\in R_{mj})$$
4) XGboost model
The XGboost is an iterative tree algorithm, combines a plurality of weak classifiers into a strong classifier together, and is an implementation of a Gradient Boosting Decision Tree (GBDT). XGboost is a powerful sequential integration technique with a modular structure for parallel learning to achieve fast computations, which prevents overfitting by regularization and can generate a weighted quantile sketch that processes weighted data.
The specific algorithm steps are as follows.

The objective function $Obj$ is:

$$Obj=\sum_{i=1}^{N} l\big(y_i,\hat{y}_i\big)+\sum_{k}\Omega(f_k)$$

where $\hat{y}_i$ denotes the predicted value.

In XGboost the trees are added one by one, each newly added tree improving the result:

$$\hat{y}_i^{(t)}=\sum_{k=1}^{t}f_k(x_i)=\hat{y}_i^{(t-1)}+f_t(x_i)$$

where $t$ is the number of trees.

If a tree has too many leaf nodes, the risk of over-fitting the model increases. So a penalty term $\Omega(f_t)$ is added to the objective function to limit the number of leaf nodes:

$$\Omega(f_t)=\gamma T+\frac{1}{2}\lambda\sum_{j=1}^{T}\omega_j^{2}$$

where $\gamma$ is the penalty coefficient, $T$ is the number of leaves, $\omega_j$ is the weight (score) of leaf node $j$, and $\lambda$ is an adjustable parameter.

The complete objective function $Obj^{(t)}$, after a second-order expansion of the loss with first- and second-order gradients $g_i$ and $h_i$, is:

$$Obj^{(t)}\simeq\sum_{i=1}^{N}\Big[g_i f_t(x_i)+\frac{1}{2}h_i f_t^{2}(x_i)\Big]+\Omega(f_t)+\mathrm{const}$$

Writing, for the sample set $I_j$ of leaf $j$,

$$G_j=\sum_{i\in I_j}g_i,\qquad H_j=\sum_{i\in I_j}h_i$$

one obtains

$$Obj^{(t)}=\sum_{j=1}^{T}\Big[G_j\,\omega_j+\frac{1}{2}\big(H_j+\lambda\big)\omega_j^{2}\Big]+\gamma T$$

Solving for the optimum of the objective function:

$$\omega_j^{*}=-\frac{G_j}{H_j+\lambda},\qquad Obj^{*}=-\frac{1}{2}\sum_{j=1}^{T}\frac{G_j^{2}}{H_j+\lambda}+\gamma T$$

The value above serves as the score of the tree structure: the better the score, the better the structure of the tree. Once the gain produced by a split is less than the given threshold, the algorithm stops deepening the leaf.
5) Stacking fusion model
In the Stacking model fusion method, the original characteristic data set is first divided into several sub data sets, which are input into each base learner of the first-layer prediction model, and each base learner outputs a prediction result. Then the outputs of the first layer are used as the input of the second layer, the meta-learner of the second-layer prediction model is trained, and the model at the second layer outputs the final prediction result. By generalizing over the output results of several models, the Stacking fusion method can improve the overall prediction accuracy.
In the first stage, the original data set is split into a training set and a test set in a certain proportion; suitable base learners are then trained on the training set with cross-validation, and each trained base learner predicts the validation set and the test set. Machine learning models with excellent prediction performance are selected for this stage while ensuring diversity among the models. In the second stage, the prediction results of the base learners are used as feature data for training and predicting the meta-learner; the meta-learner builds its model by combining the features obtained in the previous stage with the labels of the original training set as sample data, and outputs the final Stacking prediction result. The meta-learner at this stage is generally a simple model with good stability, which serves to improve overall model performance.
As shown in fig. 1, in the Stacking fusion model, the two different ensemble algorithms GBDT and XGboost are used as base learners to obtain two groups of prediction results; these two groups of predictions, together with the original feature data set, are then fed to the second layer, where a Bagging model is selected as the meta-learner and trained to obtain the final prediction result.
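The two-layer scheme can be sketched as follows. This is a hedged, minimal sketch: tiny ridge regressors stand in for the GBDT/XGboost base learners and for the Bagging meta-learner, and the cross-validation folds a production Stacking setup would use are omitted:

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    """Tiny ridge-regression 'learner' used as a stand-in for GBDT/XGboost."""
    Xb = np.c_[X, np.ones(len(X))]
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
    return lambda Q: np.c_[Q, np.ones(len(Q))] @ w

def stacking_predict(X_train, y_train, X_test, n_base=2, n_meta=3, seed=0):
    rng = np.random.default_rng(seed)
    # layer 1: each base learner is trained on a bootstrap "sub data set"
    base_train, base_test = [], []
    for _ in range(n_base):
        idx = rng.integers(0, len(X_train), len(X_train))
        f = fit_ridge(X_train[idx], y_train[idx])
        base_train.append(f(X_train))
        base_test.append(f(X_test))
    # layer 2 input: base-learner outputs concatenated with the original features
    Z_train = np.c_[np.column_stack(base_train), X_train]
    Z_test = np.c_[np.column_stack(base_test), X_test]
    # bagging-style meta-learner: average the outputs of bootstrapped models
    preds = []
    for _ in range(n_meta):
        idx = rng.integers(0, len(Z_train), len(Z_train))
        g = fit_ridge(Z_train[idx], y_train[idx])
        preds.append(g(Z_test))
    return np.mean(preds, axis=0)

# toy usage on an exactly linear target
X = np.linspace(0, 1, 40).reshape(-1, 1)
y = 3 * X[:, 0] + 1
y_hat = stacking_predict(X[:30], y[:30], X[30:])
```

The final averaging of the second-layer outputs mirrors the patent's step of averaging the meta-learner's (Bagging) outputs to obtain the final prediction.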
6) Stacking fusion model prediction result and evaluation
In order to verify the superiority of the Stacking fusion model in road surface IRI prediction, the same training set and test set are selected, and the Stacking fusion model is compared and analyzed with the existing 4 prediction models (multivariate linear regression, SVM, GBDT and XGboost), as shown in FIG. 3.
Taking the LTPP data of the United States as an example, covering 62 states and cities, the data comprise: traffic volume (76989 records), crack length (12964 records), traffic opening date (1817 records), rut depth (18128 records), IRI (97535 records), and texture information (18735 records) — 6 tables and 226128 records in total.
As shown in Table 1 and FIG. 3, on the basis of the road surface IRI prediction results, the accuracy of the Stacking fusion model further improves on the well-performing GBDT and XGboost models: the final RMSE is 0.040, the MAE is 0.013, and $R^2$ is 0.996, meeting the high-precision requirement in road surface prediction.
TABLE 1 evaluation index of each prediction model
(The table of evaluation indexes appears only as an image in the original publication and cannot be recovered from this text.)
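For reference, the three reported metrics (RMSE, MAE, $R^2$) can be computed as follows; the toy arrays are illustrative, not the patent's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination R^2."""
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    return 1.0 - ss_res / ss_tot

# toy values only
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
```

Lower RMSE/MAE and $R^2$ closer to 1 indicate better agreement between prediction and observation.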
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (6)

1. A road surface IRI prediction method fusing GBDT and XGboost is characterized by comprising the following steps:
s1: acquiring pavement characteristic data, and selecting a characteristic data set by adopting a random forest algorithm;
s2: constructing a Stacking fusion model: dividing the characteristic data set of the step S1 into a plurality of sub data sets, inputting the sub data sets into each base learner of the first layer prediction model, and outputting a respective prediction result by each base learner; then, taking the model output of the first layer and the characteristic data set of the step S1 as the input of the second layer, training a meta-learner of a prediction model of the second layer, and averaging the model output of the second layer to obtain a final prediction result; the first layer of prediction models comprise a GBDT model and an XGboost model.
2. The road surface IRI prediction method according to claim 1, wherein in step S1, the random forest algorithm specifically comprises: selecting features using the average impurity-decrease index, with classification or regression accuracy as the criterion function, by means of the sequential backward selection method and the generalized sequential backward selection method.
3. The road surface IRI prediction method according to claim 2, wherein in step S1, the average impurity-decrease index MDI is calculated by the following formula:

$$\mathrm{MDI}=\frac{N_t}{N}\left(\mathrm{IMP}-\frac{N_{t_R}}{N_t}\,\mathrm{IMP}_R-\frac{N_{t_L}}{N_t}\,\mathrm{IMP}_L\right)$$

wherein $\mathrm{IMP}$ denotes the impurity of the node as a whole and $\mathrm{IMP}_R$, $\mathrm{IMP}_L$ the impurities of the right and left sides of the bifurcation; $N_t$, $N_{t_R}$, $N_{t_L}$ respectively denote the numbers of samples at the node and at its right and left children; and $N$ is the sample size.
4. The road surface IRI prediction method according to claim 1, wherein in step S2, constructing the GBDT model specifically comprises: inputting a training sample set $T = \{(x_1, y_1), \ldots, (x_i, y_i), \ldots, (x_N, y_N)\}$, $x_i \in X \subseteq \mathbb{R}^n$, where X is the input sample space and $x_i$ is an index for evaluation of pavement performance; $y_i \in Y \subseteq \mathbb{R}$, where Y is the performance condition; the loss function is $L(y_i, f(x_i))$ and the output is a regression tree $\hat{f}(x)$.

The specific training process of the GBDT model is as follows:

1) Initialize the estimation function to minimize the loss function:

$$f_0(x) = \arg\min_c \sum_{i=1}^{N} L(y_i, c)$$

wherein $f_0(x)$ is a tree with only one root node, $L(y_i, c)$ is the loss function, and c is the constant that minimizes it;

2) For m = 1, 2, …, M, where M is the maximum number of iterations:

① Calculate the negative gradient of the loss function for the i-th sample, i = 1, 2, …, N, and use it as a residual estimate:

$$r_{mi} = -\left[\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)}\right]_{f(x) = f_{m-1}(x)}$$

wherein $r_{mi}$ denotes the residual estimate of the i-th sample;

② Fit a regression tree to the residuals $r_{mi}$ to estimate the leaf node regions, obtaining the node regions $R_{mj}$ of the m-th tree, j = 1, 2, …, J, where J is the number of leaf nodes;

③ For j = 1, 2, …, J, estimate the value of each leaf node region by a linear search minimizing the loss function:

$$c_{mj} = \arg\min_c \sum_{x_i \in R_{mj}} L\big(y_i, f_{m-1}(x_i) + c\big)$$

④ Update the learner $f_m(x)$:

$$f_m(x) = f_{m-1}(x) + \sum_{j=1}^{J} c_{mj}\, I(x \in R_{mj})$$

3) Accumulate all $c_{mj}$ values over the leaf node regions to obtain the final regression tree:

$$\hat{f}(x) = f_M(x) = \sum_{m=1}^{M} \sum_{j=1}^{J} c_{mj}\, I(x \in R_{mj})$$
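As an illustration of the training loop in claim 4, here is a toy version under squared-error loss with depth-1 trees (stumps) on 1-D inputs; under squared loss the negative gradient is exactly the residual, and the leaf values minimizing the loss are the residual means. The shrinkage rate `lr` is a common practical addition, not part of the claim.

```python
def fit_stump(x, r):
    """Depth-1 regression tree on 1-D inputs: pick the threshold whose
    left/right means minimize the squared error (steps ①-③ of the loop)."""
    best = None
    for thr in sorted(set(x))[1:]:
        left = [ri for xi, ri in zip(x, r) if xi < thr]
        right = [ri for xi, ri in zip(x, r) if xi >= thr]
        cl, cr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - cl) ** 2 for ri in left)
               + sum((ri - cr) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, thr, cl, cr)
    return best[1], best[2], best[3]              # threshold, c_left, c_right

def gbdt_fit_predict(x, y, M=20, lr=0.5):
    """Steps 1)-3): initialize with the mean, repeatedly fit a tree to the
    residuals (negative gradient under squared loss), accumulate."""
    f = [sum(y) / len(y)] * len(y)                # step 1: f_0 minimizes sum (y_i - c)^2
    for _ in range(M):                            # step 2: m = 1..M
        r = [yi - fi for yi, fi in zip(y, f)]     # ① negative gradient = residual
        thr, cl, cr = fit_stump(x, r)             # ②+③ regions and leaf values
        f = [fi + lr * (cl if xi < thr else cr)   # ④ update the learner
             for xi, fi in zip(x, f)]
    return f                                      # step 3: accumulated ensemble
```

Each round shrinks the remaining residual, so the fitted values converge geometrically toward the targets on this toy data.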
5. The road surface IRI prediction method according to claim 1, wherein in step S2, constructing the XGBoost model specifically comprises: inputting a training sample set $T = \{(x_1, y_1), \ldots, (x_i, y_i), \ldots, (x_N, y_N)\}$, $x_i \in X \subseteq \mathbb{R}^n$, where X is the input sample space and $x_i$ is an index for evaluation of pavement performance; $y_i \in Y \subseteq \mathbb{R}$, where Y is the performance condition.

The objective function $Obj$ is:

$$Obj = \sum_{i=1}^{N} l(y_i, \hat{y}_i) + \sum_{k} \Omega(f_k)$$

wherein $\hat{y}_i$ denotes the predicted value.

In XGBoost, the trees are added one by one:

$$\hat{y}_i^{(0)} = 0, \qquad \hat{y}_i^{(t)} = \hat{y}_i^{(t-1)} + f_t(x_i)$$

wherein t is the number of trees.

A penalty term $\Omega(f_t)$ is added to the objective function to limit the number of leaf nodes:

$$\Omega(f_t) = \gamma T + \frac{1}{2}\lambda \sum_{j=1}^{T} \omega_j^2$$

wherein γ is the penalty, T is the number of leaves, ω is the weight of a leaf node, λ is an adjustable parameter, and $\omega_j$ is the score of the j-th leaf node of each tree.

The complete objective function $Obj^{(t)}$ is:

$$Obj^{(t)} = \sum_{i=1}^{N} l\big(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\big) + \Omega(f_t) \approx \sum_{i=1}^{N}\left[g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i)\right] + \Omega(f_t)$$

wherein $g_i$ and $h_i$ are the first- and second-order gradients of the loss. Denote

$$G_j = \sum_{i \in I_j} g_i, \qquad H_j = \sum_{i \in I_j} h_i$$

wherein $I_j$ is the set of samples falling in leaf j, to obtain

$$Obj^{(t)} = \sum_{j=1}^{T}\left[G_j \omega_j + \frac{1}{2}(H_j + \lambda)\omega_j^2\right] + \gamma T$$

Solving for the optimal solution of the objective function:

$$\omega_j^{*} = -\frac{G_j}{H_j + \lambda}, \qquad Obj^{*} = -\frac{1}{2}\sum_{j=1}^{T}\frac{G_j^2}{H_j + \lambda} + \gamma T$$

The latter expression serves as the structure score of the tree: the smaller the score, the better the tree structure. Once the gain obtained by splitting a leaf falls below the threshold set by the given parameter, the algorithm stops growing the tree deeper.
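The closed-form quantities at the end of claim 5 are straightforward to compute. The sketch below follows the standard XGBoost derivation; `split_gain` is the usual leaf-splitting criterion implied by the stopping rule rather than a formula spelled out in the claim, and all function names are illustrative.

```python
def leaf_weight(G, H, lam):
    """Optimal leaf weight  w_j* = -G_j / (H_j + lambda)."""
    return -G / (H + lam)

def structure_score(Gs, Hs, lam, gamma):
    """Obj* = -1/2 * sum_j G_j^2 / (H_j + lambda) + gamma * T,
    with T = number of leaves; a smaller score means a better tree."""
    return (-0.5 * sum(G * G / (H + lam) for G, H in zip(Gs, Hs))
            + gamma * len(Gs))

def split_gain(GL, HL, GR, HR, lam, gamma):
    """Score reduction from splitting one leaf into (L, R) children;
    tree growth stops once this gain drops below zero."""
    return (0.5 * (GL * GL / (HL + lam) + GR * GR / (HR + lam)
                   - (GL + GR) ** 2 / (HL + HR + lam)) - gamma)
```

Raising λ shrinks the leaf weights toward zero, and raising γ makes each additional leaf more expensive, so both parameters regularize the tree structure.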
6. The road surface IRI prediction method according to claim 1, wherein in step S2, the meta-learner of the second layer prediction model employs a Bagging model.
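A sketch of the Bagging meta-learner of claim 6: each model is trained on a bootstrap resample of the training data and the predictions are averaged. `MeanModel` is a hypothetical placeholder for whatever regressor Bagging wraps.

```python
import random

class MeanModel:
    """Placeholder learner; stands in for the model Bagging aggregates."""
    def fit(self, X, y):
        self.mu = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mu for _ in X]

def bagging_predict(X, y, make_model, n_models=10, seed=0):
    """Bootstrap-aggregating: train each model on a resample drawn with
    replacement, then average the n_models predictions."""
    rng = random.Random(seed)
    n = len(X)
    preds = [0.0] * n
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]          # bootstrap sample
        model = make_model().fit([X[i] for i in idx],
                                 [y[i] for i in idx])
        for i, p in enumerate(model.predict(X)):
            preds[i] += p / n_models                        # aggregate by averaging
    return preds
```

Averaging over resampled models reduces the variance of the second layer, which is a natural fit for a meta-learner that combines already-strong first-layer outputs.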
CN202210625570.4A 2022-06-02 2022-06-02 Road surface IRI prediction method fusing GBDT and XGBoost Active CN114881359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210625570.4A CN114881359B (en) 2022-06-02 2022-06-02 Road surface IRI prediction method fusing GBDT and XGBoost

Publications (2)

Publication Number Publication Date
CN114881359A true CN114881359A (en) 2022-08-09
CN114881359B CN114881359B (en) 2024-05-14

Family

ID=82679849

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117235679A (en) * 2023-11-15 2023-12-15 长沙金码测控科技股份有限公司 LUCC-based tensile load and compressive load evaluation method and system for foundation pit monitoring

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007047137A (en) * 2005-08-05 2007-02-22 Kumataka Engineering:Kk Road surface behavior measuring device
CN107292060A (en) * 2017-07-28 2017-10-24 成都智建新业建筑设计咨询有限公司 Basement roadway applications method based on BIM technology
CN112232526A (en) * 2020-09-28 2021-01-15 中山大学 Geological disaster susceptibility evaluation method and system based on integration strategy
CN112288191A (en) * 2020-11-19 2021-01-29 国家海洋信息中心 Ocean buoy service life prediction method based on multi-class machine learning method
CN112733442A (en) * 2020-12-31 2021-04-30 交通运输部公路科学研究所 Road surface long-term performance prediction model based on deep learning and construction method thereof
CN112906298A (en) * 2021-02-05 2021-06-04 重庆邮电大学 Blueberry yield prediction method based on machine learning
CN113159364A (en) * 2020-12-30 2021-07-23 中国移动通信集团广东有限公司珠海分公司 Passenger flow prediction method and system for large-scale traffic station

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHIYUAN LUO et al.: "Prediction of International Roughness Index Based on Stacking Fusion Model", Sustainability, 7 June 2022 (2022-06-07), pages 1-13 *
YUAN Mingyuan et al.: "Development and Application of a Pavement Maintenance Management Decision System", Proceedings of the 10th Annual Academic Conference of the Maintenance and Management Branch of the China Highway and Transportation Society, 9 January 2020 (2020-01-09), pages 384-387 *
LUO Zhiyuan: "Research on Intelligent Pavement Maintenance Decision-Making Based on Performance Prediction", China Masters' Theses Full-text Database, Engineering Science and Technology II, no. 3, 15 March 2024 (2024-03-15), pages 034-201 *


Similar Documents

Publication Publication Date Title
CN104798043B (en) A kind of data processing method and computer system
CN107862173A (en) A kind of lead compound virtual screening method and device
CN105096614B (en) Newly-built crossing traffic flow Forecasting Methodology based on generation moldeed depth belief network
CN101694652A (en) Network resource personalized recommended method based on ultrafast neural network
CN108846526A (en) A kind of CO2 emissions prediction technique
CN109086900B (en) Electric power material guarantee and allocation platform based on multi-target particle swarm optimization algorithm
Mu et al. Multi-objective ant colony optimization algorithm based on decomposition for community detection in complex networks
CN109215740A (en) Full-length genome RNA secondary structure prediction method based on Xgboost
CN104881689A (en) Method and system for multi-label active learning classification
CN105976070A (en) Key-element-based matrix decomposition and fine tuning method
CN109101629A (en) A kind of network representation method based on depth network structure and nodal community
CN116050670B (en) Road maintenance decision method and system based on data driving
CN110147808A (en) A kind of novel battery screening technique in groups
CN108681739A (en) One kind recommending method based on user feeling and time dynamic tourist famous-city
CN115186097A (en) Knowledge graph and reinforcement learning based interactive recommendation method
CN106202377A (en) A kind of online collaborative sort method based on stochastic gradient descent
CN108062566A (en) A kind of intelligent integrated flexible measurement method based on the potential feature extraction of multinuclear
Pumpuang et al. Comparisons of classifier algorithms: Bayesian network, C4. 5, decision forest and NBTree for Course Registration Planning model of undergraduate students
CN108764280A (en) A kind of medical data processing method and system based on symptom vector
CN114881359A (en) GBDT and XGboost fused road surface IRI prediction method
Elayidom et al. A generalized data mining framework for placement chance prediction problems
CN104966106A (en) Biological age step-by-step predication method based on support vector machine
CN106203616A (en) Neural network model training devices and method
CN113326919A (en) Traffic travel mode selection prediction method based on computational graph
CN109919374A (en) Prediction of Stock Price method based on APSO-BP neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant