CN115619052B - Urban traffic flow prediction method - Google Patents

Info

Publication number
CN115619052B
CN115619052B (application CN202211637628.3A)
Authority
CN
China
Prior art keywords
time
vehicle
model
motion state
gaussian distribution
Prior art date
Legal status
Active
Application number
CN202211637628.3A
Other languages
Chinese (zh)
Other versions
CN115619052A (en)
Inventor
徐礼前
朱利强
吴康杰
高羽佳
Current Assignee
Anhui Agricultural University AHAU
Original Assignee
Anhui Agricultural University AHAU
Priority date
Filing date
Publication date
Application filed by Anhui Agricultural University (AHAU)
Priority to CN202211637628.3A
Publication of CN115619052A
Application granted
Publication of CN115619052B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2477 - Temporal data queries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/26 - Government or public services

Abstract

The invention relates to the technical field of traffic prediction, and in particular to a method for predicting urban traffic flow. The method first collects real-time data and then preprocesses the collected data; a prediction model is constructed, the preprocessed data are input and processed to form a preliminary prediction model; the prediction model is trained repeatedly on a training set, the prediction error is calculated through an error function, and the weight coefficients are correspondingly updated to their optimal values, yielding the optimal prediction model. By fusing the Informer model and the LightGBM model, the invention can effectively improve the accuracy of traffic flow prediction.

Description

Urban traffic flow prediction method
Technical Field
The invention relates to the technical field of traffic prediction, in particular to a method for predicting urban traffic flow.
Background
With the rapid development of manufacturing, automobile output grows by the day; the growing number of automobiles puts great pressure on urban traffic, and the problem of congestion becomes ever more serious. To address this problem effectively, it is important to predict urban traffic conditions so that people can plan their travel according to the prediction result.
At present, a machine-learning or deep-learning framework is generally used to predict urban traffic conditions, and deep learning has achieved good results: nonlinear models for traffic flow prediction, such as the convolutional neural network (CNN), have been shown to capture the spatial dependence of traffic flow. However, traffic flow is influenced by many factors, so a single machine-learning or deep-learning architecture cannot predict it with sufficient accuracy. In view of this situation, patent CN112927510A uses a mixed temporal-spatial model for prediction and discloses the following technical features: step 1, intercepting from the existing traffic flow data, along the time axis, at least one time-sequence segment whose length is an integral multiple of the prediction window; after steps 2 to 7 are executed for each time-sequence segment, step 8 is executed; step 2, defining the traffic network as an undirected graph G; step 3, performing a graph convolution operation on the undirected graph G to obtain the spatial relationships among nodes in the traffic network; step 4, performing a standard convolution operation on the undirected graph G in the time dimension to obtain the temporal relationships among nodes in the traffic network; step 5, inputting the spatial relationships obtained in step 3 into a CRF layer to obtain the spatial dependencies between each node and its adjacent nodes; step 6, inputting the temporal relationships obtained in step 4 into a CRF layer to obtain the temporal dependencies among nodes on a time slice; step 7, inputting the spatial and temporal dependencies into an attention layer to obtain the spatio-temporal relationships among nodes within the fluctuation period, and outputting the weight parameters corresponding to each time-sequence segment; step 8, additively fusing the spatio-temporal relationships among nodes within the fluctuation period based on the weight parameters of the segments, and outputting the traffic flow prediction result. Compared with a single model, such a hybrid model can improve the accuracy of traffic flow prediction to a certain extent; however, it merely combines the temporal and spatial characteristics, and does not consider how the intrinsic relation between the temporal trend and the spatial characteristics of the traffic flow influences the traffic flow.
The spatial features of traffic flow comprise the road-network structure, the regional spatial structure, entity-distribution features, and data extracted from traffic information updated in real time on social networks. The temporal trend of traffic flow comprises traffic flow data over past, present and future periods. Both the spatial features and the temporal trend strongly influence how the traffic flow changes; a model that ignores these factors lacks high-dimensional time-series analysis of the traffic flow characteristics and is insensitive to longer-horizon periodic changes in the distribution of the traffic flow, to its peak values, and the like.
Disclosure of Invention
In order to avoid and overcome the technical problems in the prior art, the invention provides a method for predicting urban traffic flow. According to the invention, the accuracy of traffic flow prediction can be effectively improved through the fusion of the Informer model and the LightGBM model.
In order to achieve the purpose, the invention provides the following technical scheme:
an urban traffic flow prediction method comprises the following steps:
s1, collecting real-time data: collecting real-time traffic flow data at a specified road section, and packaging to form a real-time data packet;
s2, data preprocessing: performing real-time analysis on data in the real-time data packet by using a Kalman filter, accurately acquiring the optimal estimated motion state of the vehicle, and further generating a real-time interactive data set;
s3, constructing input data of a prediction model: fusing each real-time interaction data set by using a DAI-DAO technology to form an original flow sequence;
s4, establishing a preliminary prediction model: adding the original flow sequence into the characteristic sequence to construct a multi-dimensional array, and carrying out standardization processing on the multi-dimensional array; dividing the processed multidimensional arrays into a training set and a verification set according to a preset proportion; importing the training set into an Informer model and a LightGBM model for training and generating a primary prediction model;
s5, updating the weight coefficient to obtain an optimal prediction model: and importing the verification set into a preliminary prediction model, calculating a prediction error through an error function, correspondingly updating the weight coefficient to an optimal value, and further updating to obtain an optimal prediction model.
As a still further scheme of the invention: step S2 is specifically as follows:
S21, from the acquired real-time traffic video stream, observe the motion state of the vehicle at time t-1; the observed motion state X_{t-1} obeys a multidimensional Gaussian distribution N(μ_{t-1}, P_{t-1}), where μ_{t-1} is the mean of the multidimensional Gaussian distribution at time t-1 and P_{t-1} is its covariance matrix;
S22, based on the vehicle's observed motion state at time t-1, make a preliminary prediction of the vehicle's motion state at time t; the predicted motion state X̂_t obeys a multidimensional Gaussian distribution N(μ_t, P̂_t), where μ_t is the mean of the multidimensional Gaussian distribution at time t and P̂_t is its covariance matrix;
S23, during motion the vehicle's state is affected by external interference factors; model their effect at time t as a systematic error W_t obeying a multidimensional Gaussian distribution N(f_t, Q_t), where f_t is the mean at time t and Q_t is the covariance matrix; introduce W_t into X̂_t to obtain the vehicle's updated complete predicted motion state;
S24, from the acquired real-time traffic video stream, observe the vehicle's motion state at time t; the observed motion state Ŷ_t obeys a multidimensional Gaussian distribution N(δ_0, Σ_0), where δ_0 is the mean at time t and Σ_0 is the covariance matrix;
S25, establish the relationship matrix H_t between the vehicle's observed and predicted motion states at time t, and associate the observed motion state and the predicted motion state through H_t;
S26, since an observation error exists in the observation process, set the centre point of the overlap region between the observed and predicted motion states at time t as Z_t; the vehicle's true motion state X̃_t obeys a multidimensional Gaussian distribution N(Z_t, R_t), where Z_t is the mean at time t and R_t is the covariance matrix;
S27, multiply the multidimensional Gaussian distribution of the motion state observed at time t by the multidimensional Gaussian distribution of the true motion state at time t; the resulting product is the Kalman filter's optimal estimated motion state X*_t, which obeys a multidimensional Gaussian distribution N(δ, Σ), where δ is the mean at time t and Σ is the covariance matrix;
S28, feed X*_t back into the vehicle motion-state update process and iterate continuously to obtain the optimal estimated motion state of the vehicle at the current moment, i.e. the optimal real-time motion state; store the optimal estimated motion state of each vehicle in memory to form the real-time interaction data set.
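The S21-S28 loop can be sketched numerically. The example below uses a 2-D constant-velocity state for brevity (the patent's 6-D state with v_x..p_z works identically); the matrices F, H, Q, R are illustrative assumptions, not values from this description.

```python
import numpy as np

# Minimal Kalman filter over a state [position, velocity]: predict with the
# transition matrix F plus process noise Q (S22-S23), then fold in the
# observation z through the relationship matrix H (S25-S27).
def kalman_step(mu, P, z, F, H, Q, R):
    mu_pred = F @ mu                        # predicted mean
    P_pred = F @ P @ F.T + Q                # predicted covariance
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)                 # optimal estimate
    P_new = P_pred - K @ H @ P_pred                          # updated covariance
    return mu_new, P_new

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # position advances by velocity * dt
H = np.array([[1.0, 0.0]])              # only position is observed
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])

mu, P = np.zeros(2), np.eye(2)
for z in ([1.1], [2.0], [2.9]):          # noisy positions of a car moving ~1 m/s
    mu, P = kalman_step(mu, P, np.array(z), F, H, Q, R)
print(mu)   # estimated [position, velocity]
```

Iterating this step over every frame, as in S28, yields each vehicle's optimal real-time motion state.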
As a still further scheme of the invention: the expression of X_{t-1} is as follows:

X_{t-1} = [v_{t-1}; p_{t-1}] = [v_{x,t-1}, v_{y,t-1}, v_{z,t-1}, p_{x,t-1}, p_{y,t-1}, p_{z,t-1}]^T

wherein X_{t-1} is the vehicle's observed motion state at time t-1; v_{t-1} is the velocity in the observed motion state at time t-1, with v_{x,t-1}, v_{y,t-1} and v_{z,t-1} its component velocities on the x, y and z axes; p_{t-1} is the position in the observed motion state at time t-1, with p_{x,t-1}, p_{y,t-1} and p_{z,t-1} its component positions on the x, y and z axes.
As a still further scheme of the invention: the vehicle's predicted motion state X̂_t at time t is calculated as:

X̂_t = F_t · X_{t-1}

wherein F_t is the state-transition matrix from time t-1 to time t. The updated complete predicted motion state is then obtained as:

X̂_t = F_t · X_{t-1} + W_t,    P̂_t = F_t · P_{t-1} · F_t^T + Q_t

wherein W_t is the vehicle's systematic error at time t, in matrix form; Q_t is the covariance matrix of the multidimensional Gaussian distribution of W_t at time t; P̂_t is the covariance matrix of the vehicle's predicted motion state at time t; P_{t-1} is the covariance matrix of the observed motion state at time t-1.
A difference exists between the predicted and observed motion states of the vehicle because of the prediction error; therefore, assume a correspondence between the observed and predicted motion states at time t, and express their relationship through the relationship matrix H_t:

Ŷ_t = H_t · X̂_t,    Σ_Y = H_t · P̂_t · H_t^T

wherein Ŷ_t is the vehicle's observed motion state at time t; Σ_Y is the covariance matrix of the observed motion state at time t; H_t denotes the relationship matrix between the observed and predicted motion states at time t.
As a still further scheme of the invention: the multidimensional Gaussian distribution of the vehicle's observed motion state Ŷ_t is:

Ŷ_t ~ N(H_t · μ_t, H_t · P̂_t · H_t^T)

The multidimensional Gaussian distribution of the true motion state X̃_t is:

X̃_t ~ N(Z_t, R_t)

wherein Z_t is the mean of the multidimensional Gaussian distribution at time t and R_t is its covariance matrix. The multidimensional Gaussian distribution of the optimal estimated motion state X*_t is:

X*_t ~ N(δ, Σ)

Sorting out the above yields the calculation equations of the Kalman filter's optimal estimated motion state at time t:

K_t = P̂_t · H_t^T · (H_t · P̂_t · H_t^T + R_t)^{-1}
δ = μ_t + K_t · (Z_t − H_t · μ_t)
Σ = P̂_t − K_t · H_t · P̂_t

wherein K_t is the vehicle's Kalman gain at time t, and Σ is the covariance matrix of the vehicle's optimal estimated motion state at time t.
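A one-dimensional numeric check of the gain and update equations: with scalar covariances the matrix inverse becomes a division. The numbers below are illustrative, not measurements from the patent.

```python
# Scalar version of K_t = P_pred H^T (H P_pred H^T + R)^-1 and the updates
# delta = x_pred + K (Z - H x_pred), Sigma = P_pred - K H P_pred.
P_pred, H, R = 4.0, 1.0, 1.0     # predicted covariance, relationship "matrix", noise
x_pred, z = 10.0, 15.0           # predicted state and overlap-centre observation Z_t

K = P_pred * H / (H * P_pred * H + R)     # Kalman gain K_t
delta = x_pred + K * (z - H * x_pred)     # optimal estimate delta
sigma = P_pred - K * H * P_pred           # updated covariance Sigma

print(K, delta, round(sigma, 6))   # 0.8 14.0 0.8
```

Because the prediction is uncertain relative to the observation (P_pred > R), the gain is high and the estimate moves most of the way toward the observation.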
As a still further scheme of the invention: step S4 is specifically as follows:
S41, collect the eigenvalues affecting traffic flow from the data packets, and combine them with the obtained original flow sequence A = {b_1, b_2, ..., b_η} to construct the multidimensional array M = {A, T_1, T_2, ..., T_Ω}, wherein b_1 denotes the first fused data set, b_2 the second fused data set, and b_η the η-th fused data set, i.e. there are η fused data sets in total; T_1 denotes the first eigenvalue, T_2 the second eigenvalue, and T_Ω the Ω-th eigenvalue, i.e. the training set contains Ω eigenvalues in total;
S42, standardize the multidimensional array M, i.e. flatten M using the numpy library in a python environment to convert it into one-dimensional arrays, and divide the processed one-dimensional arrays into a training set and a verification set by a preset proportion; import the data in the training set into the LightGBM model for training to form the trained LightGBM model, which yields the first prediction sequence ψ_1 of the traffic flow:

ψ_1 = LGB(T(1), ..., T(m), ..., T(Ω))

wherein LGB(·) denotes the LightGBM model; T(m) denotes the first m eigenvalues in the training set and T(τ) the last τ eigenvalues; ψ_1(m) denotes the first m values of the prediction sequence ψ_1 and ψ_1(Ω − m) its remaining values; Ω is the total number of eigenvalues;
S43, add ψ_1 into the multidimensional array M to obtain the updated multidimensional array M_1, and import M_1 into the Informer model for training to obtain the second prediction sequence ψ_2 of the traffic flow:

ψ_2 = INF(M_1)

S44, then add ψ_2 into M_1 to form the new multidimensional array M_2, and import M_2 into the Informer model for training to obtain a new prediction sequence; repeat the above operations until the n-th prediction sequence ψ_n and the n-th multidimensional array M_n are formed, finally yielding the trained Informer model:

ψ_n = INF(M_{n−1})

wherein INF(·) denotes the Informer model;
S45, import the data in the verification set into the Informer model and the LightGBM model and compute the predicted values of the verification set; compute the prediction error between the true and predicted values of the verification set, determine the initial values of the weight coefficients by the inverse-error method, and substitute the weight coefficients into the Informer and LightGBM models to establish the preliminary prediction model D_result:

D_result = ω_1 · LightGBM_result + ω_2 · Informer_result,    ω_1 = ε_2 / (ε_1 + ε_2),    ω_2 = ε_1 / (ε_1 + ε_2)

wherein LightGBM_result is the predicted value of the LightGBM model; Informer_result is the predicted value of the Informer model; D_result is the prediction result; ω_1 is the weight coefficient of the LightGBM model and ω_2 that of the Informer model; ε_1 is the error between the predicted and true values of the LightGBM model, and ε_2 that of the Informer model.
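The inverse-error initialization of S45 can be sketched as follows. The exact formula is not legible in the extracted equations, so this sketch uses the usual reading of the "inverse method": each model's weight is proportional to the other model's validation error, so the more accurate model weighs more.

```python
# Inverse-error initial weights: omega_1 = eps_2/(eps_1+eps_2),
# omega_2 = eps_1/(eps_1+eps_2), so the weights sum to one.
def inverse_error_weights(eps1, eps2):
    total = eps1 + eps2
    return eps2 / total, eps1 / total   # omega_1 (LightGBM), omega_2 (Informer)

def fuse(lightgbm_pred, informer_pred, eps1, eps2):
    w1, w2 = inverse_error_weights(eps1, eps2)
    return w1 * lightgbm_pred + w2 * informer_pred

# LightGBM erred by 2.0, Informer by 6.0, so LightGBM gets weight 0.75
print(fuse(100.0, 120.0, eps1=2.0, eps2=6.0))   # 105.0
```

These weights serve only as the starting point; step S5 refines them by gradient descent.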
As a still further scheme of the invention: step S5 is specifically as follows:
S51, back-propagate the prediction error and determine the error function E_L of the LightGBM model and the error function E_I of the Informer model:

E_L = (1/2) · Σ_{j=1}^{J} (t_{Lj} − O_{Lj})²,    E_I = (1/2) · Σ_{j=1}^{J} (t_{Ij} − O_{Ij})²

wherein t_{Lj} is the target value of the LightGBM model at layer j; O_{Lj} is the actual value of the LightGBM model at layer j; t_{Ij} is the target value of the Informer model at layer j; O_{Ij} is the actual value of the Informer model at layer j; J is the number of layers of the LightGBM and Informer models;
S52, differentiate the error functions, update the weight coefficients by gradient descent, and obtain the optimal weight coefficients after multiple iterations;
the gradient value derived from the error function E_L:

∂E_L/∂ω_1 = −(t_L − O_L) · sigmoid(x_1) · (1 − sigmoid(x_1)) · t_L

wherein ω_1 is the weight coefficient of the LightGBM model; t_L denotes the final target value of the LightGBM model; O_L denotes the last actual value of the LightGBM model; x_1 is an algebraic substitution denoting the product of ω_1 and t_L; sigmoid is the activation function;
the gradient value derived from the error function E_I:

∂E_I/∂ω_2 = −(t_I − O_I) · sigmoid(x_2) · (1 − sigmoid(x_2)) · t_I

wherein ω_2 is the weight coefficient of the Informer model; t_I denotes the final target value of the Informer model; O_I denotes the last actual value of the Informer model; x_2 is an algebraic substitution denoting the product of ω_2 and t_I;
S53, finally, use the prediction results of the Informer and LightGBM models together with the obtained weight coefficients to derive the final optimal prediction model; the updated optimal weight coefficients are:

ω_1* = ω_1 − α · ∂E_L/∂ω_1,    ω_2* = ω_2 − α · ∂E_I/∂ω_2

and the corresponding optimal prediction model is:

D_result = ω_1* · LightGBM_result + ω_2* · Informer_result

wherein α is the gradient (learning-rate) parameter; ω_1* and ω_2* are the updated optimal weight coefficients of the LightGBM and Informer models.
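A hedged sketch of the S52 update: gradient descent on a single weight coefficient, assuming the output is sigmoid(ω · t) as the substitution x_1 = ω_1 · t_L in the text suggests. The learning rate, step count and data values are illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_weight(omega, t, target, alpha=5.0, steps=500):
    # Iterate omega <- omega - alpha * dE/domega for E = 0.5*(target - out)^2,
    # out = sigmoid(omega * t); the derivative matches the gradients above.
    for _ in range(steps):
        out = sigmoid(omega * t)
        grad = -(target - out) * out * (1.0 - out) * t
        omega -= alpha * grad
    return omega

omega = update_weight(omega=0.1, t=1.0, target=0.8)
print(round(sigmoid(omega), 2))   # output driven toward the 0.8 target
```

After convergence the refined weight replaces the inverse-error initial value in the fused prediction.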
As a still further scheme of the invention: the step S1 is as follows:
s11, the data needing to be collected comprise traffic flow information, weather information and traffic accident information of the specified road section;
s12, firstly, obtaining traffic flow information from a real-time traffic video stream in monitoring equipment of a specified road section;
s13, acquiring weather information corresponding to the real-time traffic video stream in the specified road section in real time through weather forecast software;
s14, acquiring real-time traffic accident information of the specified road section through real-time news;
and S15, carrying out real-time one-to-one correspondence on the acquired real-time traffic video stream, weather information and traffic accident information to form a real-time data packet.
As a still further scheme of the invention: step S3 is specifically as follows:
s31, forming a corresponding real-time interactive data set by the weather information, the traffic accident information and the vehicle motion state information processed by the Kalman filter;
and S32, fusing the real-time interactive data sets through a DAI-DAO technology to obtain fused data sets, and combining the fused data sets into a corresponding original flow sequence.
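Step S3's fusion can be illustrated with a minimal timestamp join. The DAI-DAO (data-in/data-out) internals are not specified in this description, so the field names and join rule below are assumptions for illustration only.

```python
# Align weather, accident and Kalman-filtered motion records by time slot
# into one interaction record per slot, then read off the flow sequence.
def build_flow_sequence(motion, weather, accidents):
    fused = []
    for t in sorted(motion):
        fused.append({
            "t": t,
            "flow": motion[t],                      # vehicles counted in slot t
            "weather": weather.get(t, "unknown"),
            "accident": accidents.get(t, False),
        })
    return fused

motion = {0: 42, 1: 57}
weather = {0: "rain", 1: "clear"}
accidents = {1: True}
seq = build_flow_sequence(motion, weather, accidents)
print([b["flow"] for b in seq])   # [42, 57]
```

The per-slot flow values then play the role of the fused data sets b_1..b_η in the original flow sequence.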
Compared with the prior art, the invention has the following beneficial effects:
1. The data set used by the invention combines vehicle motion-state data extracted from the video stream with traffic accident information and weather information, fused by the DAI-DAO technique; this yields more reliable output and improves the prediction performance of the subsequent models.
2. Because the data volume is large, a Kalman filter is adopted for noise reduction; observation data and estimation data are fused, the errors are managed in a closed loop and confined to a limited range, and the accuracy of the information is improved.
3. The Transformer-based deep-learning model Informer alleviates the vanishing-gradient problem and the surge in inference time caused by long-sequence prediction. LightGBM, as an efficient gradient-boosting algorithm, balances speed and efficiency and can be combined with Informer to maximize efficiency.
4. To avoid insufficient use of effective information and the influence of unexpected factors on the prediction result, the invention captures the spatio-temporal relationships in the traffic data through the fusion of the Informer and LightGBM models, reduces the risk of overfitting, and can effectively improve the accuracy of traffic flow prediction.
5. After the weights are preliminarily determined by the inverse-error method, the errors are back-propagated and an error function is determined in order to achieve a better prediction effect. Finally, the weights are updated by gradient descent to obtain the optimal weight coefficients, making the final prediction result more accurate and precise.
Drawings
FIG. 1 is a schematic diagram of a prediction model according to the present invention.
FIG. 2 is a general block diagram of the Informer of the present invention.
Fig. 3 is a flowchart of LightGBM multi-step prediction according to the present invention.
FIG. 4 is a flow chart of the Kalman filter process of the present invention.
Detailed description of the preferred embodiments
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Predicting traffic flow requires acquiring the traffic flow of the current period and predicting the next period's flow from it. Therefore, the task is to obtain the traffic flow of the area to be predicted, remove the noise information from it, and feed the denoised data into the prediction model for calculation.
The specific analysis process is as follows:
the data to be collected first comprises traffic flow information, weather information and traffic accident information of a specified road section. The traffic flow information is acquired by calling real-time traffic video streams in monitoring equipment of the specified road section, the video streams are collected by using a conventional data acquisition device, and the video streams are analyzed in a frame-by-frame analysis mode. And then, acquiring real-time weather information corresponding to the real-time traffic video stream in the specified road section through weather forecast software, for example, acquiring the real-time weather information through special weather software, micro-blogs, today's first-line software and the like. And then, acquiring the real-time traffic accident information of the specified road section through real-time news, and acquiring the corresponding traffic accident information through traffic broadcasting or some interactive software. And finally, carrying out real-time one-to-one correspondence on the acquired real-time traffic video stream, the weather information and the traffic accident information to form a real-time data packet.
In the process of acquiring the real-time traffic video stream, firstly, a certain traffic road section is taken as an acquisition point, then, the real-time video monitoring information at the road section is called, and the real-time traffic video stream data is acquired.
The main purpose of collecting the real-time traffic video stream is to obtain the motion-state information of the vehicles, chiefly their speed and position. However, because of external factors and errors in the camera system, the acquired data contain some unavoidable interference, i.e. noise data. When the traffic flow in the real-time traffic video stream exceeds 2000 vehicles/h, the value is unreasonable, constitutes abnormal data, and must be eliminated. When the instantaneous speed of a vehicle exceeds 120 km/h, the value is highly abnormal, since common vehicle speeds are 30-100 km/h; such data must also be eliminated. Records in which the flow, speed, occupancy and headway of vehicles on the road are all zero are likewise abnormal and must be rejected. Noisy data such as these can have a large impact on the accuracy of the final prediction model, so a noise-removal operation must be performed on the data.
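The three rejection rules just described can be applied directly to raw per-interval records; the record layout (dict fields) is an assumption for illustration.

```python
# Reject abnormal records using the thresholds stated above.
def is_valid(rec):
    if rec["flow"] > 2000:        # flow above 2000 vehicles/h is abnormal
        return False
    if rec["speed"] > 120:        # speed above 120 km/h is abnormal
        return False
    if (rec["flow"] == 0 and rec["speed"] == 0
            and rec["occupancy"] == 0 and rec["headway"] == 0):
        return False              # all-zero record is abnormal
    return True

records = [
    {"flow": 900,  "speed": 65, "occupancy": 0.3, "headway": 2.1},
    {"flow": 2500, "speed": 70, "occupancy": 0.5, "headway": 1.8},  # flow too high
    {"flow": 0,    "speed": 0,  "occupancy": 0,   "headway": 0},    # all zero
]
clean = [r for r in records if is_valid(r)]
print(len(clean))   # 1
```

Threshold filtering handles gross outliers; the Kalman filter then handles the remaining measurement noise.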
During driving, the constantly changing speed and position of a vehicle can cause confusion in the identification process. Real-time traffic video stream data are generally analysed frame by frame. When a tracked vehicle moves slowly, its position barely changes between two frames, and the detections in the two frames are easily associated together, even though the two frames are in fact separate pictures with no real correlation. Conversely, if the vehicle moves relatively fast, a following vehicle may, in a later frame, have reached the position that the leading vehicle occupied in the previous frame; when the association is performed again, the vehicle is mistakenly considered not to have moved, so the association result is wrong: originally different vehicles are associated as the same vehicle. This produces the noise data mentioned above. Therefore, to avoid this situation, a Kalman filter is adopted for data preprocessing to eliminate the noise data and generate a more accurate real-time interaction data set.
The key to eliminating this noise data is to predict accurately the position of the vehicle in the next frame and then associate it with the position in the preceding frame. The Kalman filter can predict the position in the next frame, and through repeated calculation the entire travel route of the vehicle can be determined accurately. The Kalman filter estimates the process state at a given time and then obtains feedback in the form of noisy measurements. The equations of a Kalman filter fall into two groups: time update equations and measurement update equations. The time update equations are responsible for the forward prediction, using the current state and error covariance estimates to obtain the a priori estimate for the next time step. The measurement update equations are responsible for the feedback, i.e. incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate. In the Kalman filter, each motion state of the vehicle is assumed to follow a corresponding Gaussian distribution, the motion model is assumed linear, and the noise in the process is assumed to be white Gaussian noise.
The processing procedure of the Kalman filter is as follows:

Observed motion state of the vehicle at time t−1:

By observing the real-time traffic video stream data, the motion state of a vehicle at a given moment is determined: the vehicle travels at speed v at position p. The speed of the vehicle is three-dimensional spatial data, i.e. the component speeds v_x, v_y and v_z in three-dimensional coordinates; the position of the vehicle is likewise three-dimensional, i.e. the component positions p_x, p_y and p_z. Thus the speed v_{t−1} and position p_{t−1} of the vehicle at time t−1 are respectively:

$$v_{t-1}=\begin{bmatrix}v_{x,t-1}\\ v_{y,t-1}\\ v_{z,t-1}\end{bmatrix},\qquad p_{t-1}=\begin{bmatrix}p_{x,t-1}\\ p_{y,t-1}\\ p_{z,t-1}\end{bmatrix}$$

The observed motion state of the vehicle at time t−1 is represented as:

$$X_{t-1}=\begin{bmatrix}p_{t-1}\\ v_{t-1}\end{bmatrix}$$

where X_{t−1} is the observed motion state of the vehicle at time t−1; v_{t−1} is the speed in the observed motion state at time t−1, with components v_{x,t−1}, v_{y,t−1} and v_{z,t−1} on the x, y and z axes; p_{t−1} is the position in the observed motion state at time t−1, with components p_{x,t−1}, p_{y,t−1} and p_{z,t−1} on the x, y and z axes.
The vehicle in the current state conforms to a multidimensional Gaussian distribution over its two variables, speed and position. To further capture the coupling between the two variables, the covariance matrix P_{t−1} of the observed motion state at time t−1 is used to express the degree of correlation and dispersion between the data:

$$P_{t-1}=\begin{bmatrix}\Sigma_{pp} & \Sigma_{pv}\\ \Sigma_{vp} & \Sigma_{vv}\end{bmatrix}$$
Predict the motion state of the vehicle at the next moment, i.e. the predicted motion state at time t:

From a physical analysis over the time difference Δt, the speed of the vehicle varies during actual motion with some acceleration a; assume the acceleration of the vehicle at time t is a_t. The position and speed of the vehicle are then updated as:

$$\hat p_t=p_{t-1}+v_{t-1}\,\Delta t+\tfrac{1}{2}a_t\,\Delta t^2,\qquad \hat v_t=v_{t-1}+a_t\,\Delta t$$

where \(\hat p_t\) is the position in the predicted motion state at time t, and \(\hat v_t\) is the speed in the predicted motion state at time t.
This yields the matrix representation of the predicted motion state of the vehicle at time t:

$$\hat X_t=F_t X_{t-1}+B_t U_t$$

where \(\hat X_t\) is the predicted motion state of the vehicle at time t; B_t is the state control matrix at time t, expressing how variable-speed motion changes the state of the vehicle; U_t is the state control vector at time t, expressing the magnitude and direction of the change in the vehicle's speed; and F_t is the state transition matrix at time t:

$$F_t=\begin{bmatrix}I & \Delta t\,I\\ 0 & I\end{bmatrix},\qquad B_t=\begin{bmatrix}\tfrac{1}{2}\Delta t^2\,I\\ \Delta t\,I\end{bmatrix},\qquad U_t=a_t$$
\(\hat X_t\) follows the multidimensional Gaussian distribution N(μ_t, Σ_t) at time t, where μ_t is the mean of the multidimensional Gaussian distribution at time t and Σ_t is the covariance matrix of the multidimensional Gaussian distribution at time t.
External influencing factors:

Many external factors can negatively affect the motion state of the vehicle. The system error caused by these external uncertainties over the interval Δt is therefore denoted W_t, where W_t follows the multidimensional Gaussian distribution N(f_t, Q_t) at time t; f_t is the mean of this distribution and Q_t is its covariance matrix. Introducing W_t into \(\hat X_t=F_t X_{t-1}+B_t U_t\) and updating gives the complete predicted motion state of the vehicle in Kalman filtering:

$$\hat X_t=F_t X_{t-1}+B_t U_t+W_t,\qquad \hat P_t=F_t P_{t-1} F_t^{\mathsf T}+Q_t$$

In the general case, W_t is 0. Here \(\hat X_t\) is the predicted motion state of the vehicle at time t, and \(\hat P_t\) is the covariance matrix in the predicted motion state of the vehicle at time t.
Observe the motion state of the vehicle at time t:

Because of prediction error, a certain difference necessarily exists between the predicted motion state of the vehicle and its observed motion state, but the difference can be kept within a certain error range. Thus, assume a specific relationship between the observed and predicted motion states at time t, expressed by the relationship matrix H_t between the observed and predicted motion states at time t.

The corresponding calculation formulas are:

$$\delta_0=H_t\hat X_t,\qquad \Sigma_0=H_t\hat P_t H_t^{\mathsf T}$$

where \(H_t\hat X_t\) is the observed motion state of the vehicle at time t, and \(H_t\hat P_t H_t^{\mathsf T}\) is the covariance matrix in the observed motion state at time t. The observed motion state follows the multidimensional Gaussian distribution N(δ_0, Σ_0) at time t, where δ_0 is the mean of the multidimensional Gaussian distribution at time t and Σ_0 is its covariance matrix.
A certain observation error may exist in the actual observation process. Assume the observation noise generated during observation is V_t, where V_t follows the multidimensional Gaussian distribution N(0, R_t) at time t: the mean of the distribution is 0 and R_t is the covariance matrix of the observation noise at time t. That is, the vehicle's observations follow this multidimensional Gaussian distribution. Let Z_t be the centre of the overlapping region between the predicted motion state and the observed motion state of the vehicle; the real motion state of the vehicle then follows the multidimensional Gaussian distribution N(Z_t, R_t) at time t, where Z_t is the mean of the distribution and R_t its covariance matrix.

The optimal estimated motion state of the Kalman filter is obtained as the product of the multidimensional Gaussian distributions of the observed motion state and the real motion state.
The evolution of the Gaussian distribution of the observed motion state from one dimension to multiple dimensions is as follows:

$$f(x)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$

For the one-dimensional Gaussian distribution, μ is the mean and σ² the variance; for the multidimensional Gaussian distribution, μ is the mean of the motion state and Σ the covariance matrix in the corresponding motion state. The observed motion state of the vehicle is thus N(δ_0, σ_0²) in one dimension and N(δ_0, Σ_0) in multiple dimensions, the latter being its matrix expression; δ_0 represents the mean of the vehicle's observed motion state, and Σ_0 is the covariance matrix corresponding to σ_0 in the observed motion state.
The evolution of the Gaussian distribution of the real motion state from one dimension to multiple dimensions is analogous: the real motion state of the vehicle is N(δ_1, σ_1²) in one dimension and N(δ_1, Σ_1) in multiple dimensions, the latter being its matrix expression; δ_1 represents the mean of the vehicle's real motion state, and Σ_1 is the covariance matrix corresponding to σ_1 in the real motion state.
The evolution of the Gaussian distribution of the optimal estimated motion state from one dimension to multiple dimensions is likewise: the optimal estimated motion state of the vehicle is N(δ, σ²) in one dimension and N(δ, Σ) in multiple dimensions, the latter being its matrix expression; δ represents the mean of the vehicle's optimal estimated motion state, and Σ is the covariance matrix corresponding to σ in the optimal estimated motion state.
The derivation of the high-dimensional multidimensional Gaussian distribution follows that of the one-dimensional case. Since the motion state of the vehicle in actual operation necessarily includes speed and position, it is necessarily high-dimensional, so the high-dimensional multidimensional Gaussian calculation is required. Multiplying the two one-dimensional Gaussian distributions gives:

$$k=\frac{\sigma_0^2}{\sigma_0^2+\sigma_1^2},\qquad \delta=\delta_0+k(\delta_1-\delta_0),\qquad \sigma^2=\sigma_0^2-k\,\sigma_0^2$$

where k is the variance ratio. Since the state information of the vehicle in the calculation process is necessarily multidimensional, one-dimensional data such as k are used only for a clearer description of the calculation process, and the final result is expressed as a multidimensional matrix. Substituting the matrix form of k, denoted K, into the multidimensional Gaussian distribution of the optimal estimate and simplifying gives:

$$K=\Sigma_0\left(\Sigma_0+\Sigma_1\right)^{-1},\qquad \delta=\delta_0+K(\delta_1-\delta_0),\qquad \Sigma=\Sigma_0-K\,\Sigma_0$$
Rearranging yields the Kalman gain K_t of the vehicle at time t, the optimal estimated motion state X'_t of the vehicle at time t, and the covariance matrix P'_t of the vehicle in the optimal estimated motion state at time t:

$$K_t=\hat P_t H_t^{\mathsf T}\big(H_t\hat P_t H_t^{\mathsf T}+R_t\big)^{-1}$$

$$X'_t=\hat X_t+K_t\big(Z_t-H_t\hat X_t\big),\qquad P'_t=\hat P_t-K_t H_t\hat P_t$$

where X'_t is the optimal estimated motion state of the vehicle. X'_t and P'_t are iterated continuously in the next update of the vehicle motion state to obtain the real-time optimal position of the vehicle. The entire process of the Kalman filter is shown in Fig. 4.
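The predict/update cycle described above can be sketched in numpy as follows. The matrices F, B, H, Q, R and the constant-velocity usage scenario are illustrative assumptions, not the patent's exact configuration:

```python
import numpy as np

def kalman_step(x, P, z, F, B, u, H, Q, R):
    """One predict/update cycle of the Kalman filter described above."""
    # Time update: prior (predicted) state and covariance
    x_pred = F @ x + B @ u                      # X^_t = F_t X_{t-1} + B_t U_t
    P_pred = F @ P @ F.T + Q                    # P^_t = F_t P_{t-1} F_t^T + Q_t
    # Measurement update: Kalman gain, posterior state and covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)         # K_t
    x_new = x_pred + K @ (z - H @ x_pred)       # X'_t
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # P'_t
    return x_new, P_new
```

Feeding position measurements of a vehicle moving at constant speed quickly drives the estimated position and speed toward the true track while shrinking the covariance.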
Based on this Kalman filter model, the data in the real-time traffic video stream can be updated and output; the DAI-DAO technique is then used to fuse these data with the weather information and the historical traffic flow. Finally, the output data set is divided into a training set, a validation set and a test set in time order.
After preprocessing, the processed data are fused using the DAI-DAO data fusion technique to form the original flow sequence A = {b_1, b_2, ..., b_η}, where b_1 denotes the first fused data set, b_2 the second fused data set, and b_η the η-th fused data set, i.e. there are η fused data sets in total.
As shown in FIG. 1, since traffic flow prediction is a long-sequence prediction problem, the Informer model is adopted, which achieves good results on long-sequence prediction. The LightGBM model is an efficient gradient-boosting algorithm that balances speed and efficiency and performs well in many prediction tasks. To avoid insufficient use of effective information and the influence of unexpected factors on the prediction result, and to give the prediction better precision and stability, the Informer model and the LightGBM model are combined into a combined prediction method.
The prediction process is as follows. Characteristic values influencing the traffic flow are collected; these refer to the motion states of vehicles, the corresponding real-time weather information and the corresponding traffic accident information, e.g. specific weather conditions, road-segment failures, and whether a vehicle is moving or stopped. After each vehicle is processed by the Kalman filter, all vehicles are combined to form the corresponding traffic flow composition information, which is then merged with the corresponding weather information and traffic accident information to form traffic flow information closer to the actual traffic conditions.
Combining the original flow sequence obtained above, the multidimensional array M = {A, T_1, T_2, ..., T_Ω} is constructed, where T_1 denotes the first characteristic value, T_2 the second characteristic value, and T_Ω the Ω-th characteristic value, i.e. there are Ω characteristic values in the training set in total;
The multidimensional array M is standardized, i.e. flattened using the numpy library in a Python environment to convert it into a one-dimensional array; the processed one-dimensional arrays are divided into a training set and a validation set according to a preset proportion. The data in the training set are imported into the LightGBM model for training, and the trained LightGBM model yields the first prediction sequence ψ_1 of the traffic flow:
Figure SMS_88
Wherein the content of the first and second substances,
Figure SMS_89
a model of a LightGBM is represented,T(m) Representing a front in a training setmThe value of the characteristic is used as the characteristic value,T(τ) Representing postambles in a training setτA characteristic value;
Figure SMS_90
representing predicted sequencesψ 1 Front ofmValue of,
Figure SMS_91
representing predicted sequencesψ 1 After
Figure SMS_92
Value of,Ωis the total number of eigenvalues.
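The flattening and splitting step above can be sketched as follows. The array contents and the 80/20 split proportion are illustrative assumptions:

```python
import numpy as np

# Hypothetical fused multidimensional array M (flow sequence plus feature values)
M = np.arange(20.0).reshape(4, 5)

flat = M.flatten()                    # multidimensional -> one-dimensional array
split = int(len(flat) * 0.8)          # preset proportion, assumed 80/20 here
train, val = flat[:split], flat[split:]
```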
S43: ψ_1 is added to the multidimensional array M to obtain the updated multidimensional array M_1, and M_1 is fed into the Informer model for training to obtain the prediction sequence ψ_2 of the traffic flow:

$$\psi_2=f_{Inf}(M_1)$$

S44: ψ_2 is then added to M_1, forming a new multidimensional array M_2, and M_2 is fed into the Informer model for training to obtain a new prediction sequence; the above operation is repeated until the n-th prediction sequence ψ_n and the n-th multidimensional array M_n are formed, finally yielding the trained Informer model:

$$\psi_n=f_{Inf}(M_{n-1})$$

where f_{Inf} denotes the Informer model.
The validation set is then imported into the Informer model and the LightGBM model to obtain validation-set predictions, and the prediction error between the true values and the predicted values of the validation set is calculated. The initial weight coefficients are determined by the error-reciprocal method, after which the error is back-propagated to determine the error function. Finally, the weights are updated by gradient descent to obtain the optimal weight coefficients, and the final prediction model is derived from the predictions of the Informer model and the LightGBM model together with the obtained weight coefficients.
A preliminary prediction model D_result is thus established:

$$D_{result}=\omega_1\,LightGBM_{result}+\omega_2\,Informer_{result},\qquad \omega_1=\frac{\varepsilon_2}{\varepsilon_1+\varepsilon_2},\quad \omega_2=\frac{\varepsilon_1}{\varepsilon_1+\varepsilon_2}$$

where LightGBM_result is the predicted value of the LightGBM model, Informer_result is the predicted value of the Informer model, and D_result is the prediction result; ω_1 is the weight coefficient of the LightGBM model and ω_2 the weight coefficient of the Informer model; ε_1 is the error between the predicted and true values of the LightGBM model, and ε_2 is the error between the predicted and true values of the Informer model.
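A minimal sketch of this error-reciprocal weighting follows. The function names are illustrative; the essential point from the text is that the model with the smaller validation error receives the larger weight:

```python
def inverse_error_weights(eps1, eps2):
    """Error-reciprocal method: weight each model by the other model's error,
    so the smaller-error model gets the larger weight; weights sum to 1."""
    w1 = eps2 / (eps1 + eps2)   # LightGBM weight
    w2 = eps1 / (eps1 + eps2)   # Informer weight
    return w1, w2

def combine(pred_lgb, pred_inf, w1, w2):
    """Preliminary combined prediction D_result."""
    return w1 * pred_lgb + w2 * pred_inf
```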
The error function E_L of the LightGBM model and the error function E_I of the Informer model are:

$$E_L=\frac{1}{2}\sum_{j=1}^{J}\big(t_{Lj}-O_{Lj}\big)^2,\qquad E_I=\frac{1}{2}\sum_{j=1}^{J}\big(t_{Ij}-O_{Ij}\big)^2$$

where t_{Lj} is the target value of the LightGBM model at layer j; O_{Lj} is the actual value of the LightGBM model at layer j; t_{Ij} is the target value of the Informer model at layer j; O_{Ij} is the actual value of the Informer model at layer j; and J is the number of layers of the model.
The gradient value obtained by differentiating the error function E_L:

$$\frac{\partial E_L}{\partial \omega_1}=-\big(t_L-O_L\big)\,sigmoid(x_1)\big(1-sigmoid(x_1)\big)\,t_L$$

where ω_1 is the weight coefficient of the LightGBM model; t_L represents the final target value of the LightGBM model; O_L represents the final actual value of the LightGBM model; x_1 is an auxiliary variable denoting the product of ω_1 and t_L; and sigmoid is the activation function.

The gradient value obtained by differentiating the error function E_I:

$$\frac{\partial E_I}{\partial \omega_2}=-\big(t_I-O_I\big)\,sigmoid(x_2)\big(1-sigmoid(x_2)\big)\,t_I$$

where ω_2 is the weight coefficient of the Informer model; t_I represents the final target value of the Informer model; O_I represents the final actual value of the Informer model; and x_2 is an auxiliary variable denoting the product of ω_2 and t_I.
The updated optimal weight coefficients are:

$$\omega_1^{*}=\omega_1-\alpha\frac{\partial E_L}{\partial \omega_1},\qquad \omega_2^{*}=\omega_2-\alpha\frac{\partial E_I}{\partial \omega_2}$$

The corresponding optimal prediction model is:

$$D_{result}=\omega_1^{*}\,LightGBM_{result}+\omega_2^{*}\,Informer_{result}$$

where α is the gradient (learning-rate) parameter, ω_1* is the updated optimal weight coefficient of the LightGBM model, and ω_2* is the updated optimal weight coefficient of the Informer model.
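A simplified stand-in for this gradient-descent weight update, assuming a plain squared-error objective on a validation set; the learning rate, step count and residual-based gradients are illustrative choices, not the patent's exact backpropagation:

```python
import numpy as np

def fit_weights(pred_lgb, pred_inf, y_true, lr=0.1, steps=500):
    """Update the two combination weights by gradient descent on the
    validation squared error of the combined prediction."""
    w1, w2 = 0.5, 0.5                             # initial weights
    for _ in range(steps):
        resid = w1 * pred_lgb + w2 * pred_inf - y_true
        g1 = 2.0 * np.mean(resid * pred_lgb)      # dE/dw1
        g2 = 2.0 * np.mean(resid * pred_inf)      # dE/dw2
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2
```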
LightGBM model training process:

The basic principle of LightGBM is a decision tree based on the gradient-boosting learning algorithm, with optimizations made on the framework that focus on training speed. Most importantly, LightGBM uses a histogram-based decision tree algorithm: the continuous floating-point characteristic values are first discretized into Υ integers, which at the same time form a histogram of width Υ. When the data are traversed, statistics are accumulated in the histogram using the discretized value as the index; after one traversal of the data, the histogram has accumulated the required statistics, and the optimal split point is then found by traversing the discretized values of the histogram.
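This histogram trick can be sketched in numpy as follows. The bin boundaries and the single gradient statistic accumulated per bin are illustrative; real LightGBM maintains richer per-bin statistics:

```python
import numpy as np

def histogram_bins(feature, n_bins):
    """Discretize a continuous feature into n_bins integer bin indices."""
    edges = np.linspace(feature.min(), feature.max(), n_bins + 1)
    bins = np.clip(np.digitize(feature, edges[1:-1]), 0, n_bins - 1)
    return bins, edges

def accumulate(bins, gradients, n_bins):
    """Accumulate gradient statistics per bin in one pass over the data."""
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, gradients)   # index by discretized value
    return hist
```

Split finding then scans the Υ bins of the histogram instead of all raw feature values.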
Optimization of the LightGBM model:

LightGBM balances accuracy and efficiency with two methods, Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB), which accelerate the splitting behaviour and effectively reduce the number of features while preserving split-point accuracy, speeding up the training process of traditional GBDT by more than 20 times and improving the real-time performance of the algorithm. The GOSS algorithm traverses every split point of every feature, finds and calculates the maximum information gain, and divides the data into left and right nodes according to the feature's split point. EFB bundles several mutually exclusive features into low-dimensional dense features, avoiding unnecessary computation on zero values and reducing the time complexity of the algorithm from O(#data) to O(#non_zero_data).
Gradient-based One-Side Sampling (GOSS). The algorithm traverses every split point of every feature, finds and calculates the maximum information gain, and divides the data into left and right nodes according to the feature's split point. The overall GOSS procedure is: (1) with N training samples, select the top a% of samples with the larger gradient values as the large-gradient training samples; (2) from the remaining (1−a)% of samples with smaller gradient values, randomly select b% as the small-gradient training samples; (3) for the small-gradient training samples, i.e. b%·N of them, amplify their contribution by a factor of (1−a)/b when computing the information gain.
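The three GOSS steps above can be sketched as follows; this is a sketch with the stated a/b fractions, whereas real LightGBM applies the weights inside its gain computation:

```python
import numpy as np

def goss_sample(gradients, a=0.2, b=0.1, rng=None):
    """GOSS: keep the top a% of samples by |gradient|, randomly sample b% of
    the rest, and amplify the small-gradient samples by (1-a)/b."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))     # indices by descending |gradient|
    large = order[: int(a * n)]                # step (1): large-gradient samples
    small = rng.choice(order[int(a * n):], size=int(b * n), replace=False)
    weights = np.ones(n)
    weights[small] *= (1 - a) / b              # step (3): compensate down-sampling
    idx = np.concatenate([large, small])       # step (2): retained sample set
    return idx, weights[idx]
```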
Exclusive Feature Bundling (EFB). The algorithm flow is: (1) build a graph in which each vertex represents a feature and each edge carries a weight related to the overall conflict between the two features; (2) sort the features in descending order of degree in the graph; (3) traverse each feature and attempt to merge it into a bundle so as to minimize the conflict ratio.
LightGBM model training: LightGBM trains the model with the Histogram and Leaf-wise decision tree optimization algorithms. Compared with the traditional pre-sorting approach, the Histogram method only needs to store the discretized feature values, which markedly reduces memory consumption and accelerates training. The Leaf-wise decision tree adds a limit on the maximum depth of the tree on top of the leaf-splitting strategy, avoiding overfitting to the greatest extent. The leaf nodes contain all features, such as weather factors and accidents.
The loss function used in training the model is the decision-tree loss function C_β(q), defined as:

$$C_\beta(q)=\sum_{r=1}^{|q|}N_r\,Y_r(q)+\beta\,|q|$$

where |q| represents the number of leaf nodes, N_r represents the number of samples of the r-th leaf node, and Y_r(q) represents the empirical entropy of the leaf node. The first term, Σ_r N_r Y_r(q), represents the prediction error of the model on the training data, i.e. the degree of fit of the model; the second term, β|q|, represents the complexity of the model. The parameter β controls the influence of the two: once β is determined, it suffices to select the model with the smallest loss function.
The empirical entropy Y_r(q) of a leaf node is calculated as:

$$Y_r(q)=-\sum_{e}\frac{N_{re}}{N_r}\log\frac{N_{re}}{N_r}$$

where N_r represents the number of samples of the r-th leaf node and N_{re} represents the number of samples of class e within the leaf node N_r.
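A minimal sketch of this empirical-entropy computation; the base-2 logarithm is an assumption, since the text does not state the base:

```python
import numpy as np

def empirical_entropy(labels):
    """Empirical entropy of one leaf node, from the class counts N_re / N_r."""
    _, counts = np.unique(np.asarray(labels), return_counts=True)
    p = counts / counts.sum()               # class proportions N_re / N_r
    return float(-np.sum(p * np.log2(p)))   # assumed log base 2
```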
Informer model training process:

The Informer is a deep learning model based on the Transformer that improves the self-attention mechanism and can be applied to multi-step sequence prediction scenarios; it generally consists of an encoder and a decoder. The original input sequence is processed by the encoder, and sequence prediction is then realized by the decoder. Figure 2 shows the overall framework of the model.
The input of the Informer model encoder is composed of u characteristic variables, u local timestamps and u global timestamps; a multi-head probabilistic sparse self-attention mechanism then captures the long-term temporal dependencies while reducing the time and space complexity of the computation. The multi-head probabilistic sparse self-attention G(C, S, O) is:

$$G(C,S,O)=Softmax\!\left(\frac{C_0\,S_0^{\mathsf T}}{\sqrt{d_S}}\right)O$$

where C, S and O are three matrices of the same size obtained by linearly transforming the input characteristic variables, S_0 is the matrix obtained after diluting S by probability, C_0 is the matrix obtained after diluting C by probability, Softmax is the activation function, and d_S represents the dimension of the vector S.
For the Informer-based long-time-series prediction model, the input X_encoder of the encoder can be represented as:

X_encoder = γ_1·λ_1 + γ_2·λ_2 + γ_3·λ_3 + PE + SE

where λ_1, λ_2 and λ_3 represent the previously obtained traffic flow information time series, weather information time series and traffic accident information time series, PE represents the local timestamp, SE represents the global timestamp, and γ_1, γ_2 and γ_3 are the weights of the traffic flow information time series λ_1, the weather information time series λ_2 and the traffic accident information time series λ_3, respectively. When the sequence input has been normalized, γ_1 = γ_2 = γ_3 = 1.
In addition, the feature map of the encoder contains redundant combinations of values, so the encoding mechanism, based on the idea of convolutional distillation, applies a distillation operation that gives higher weight to the dominant features with dominant attention and generates the feature map of the next layer. The "distillation" process is:

Value' = MaxPool(ELU(Conv1d(Value)))

where Conv1d(Value) denotes a one-dimensional convolutional filtering along the time dimension with an ELU activation function, and MaxPool denotes the max-pooling process; the resulting value and key are finally fed as feature maps into the multi-head attention mechanism of the decoder.
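One such distillation pass can be sketched in numpy. The kernel weights, pool size and "valid" boundary handling are illustrative; the Informer uses learned convolutions with stride-2 pooling:

```python
import numpy as np

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def conv1d(x, w):
    """'Valid' one-dimensional convolution along the time axis."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def maxpool(x, size=2, stride=2):
    return np.array([x[i:i + size].max()
                     for i in range(0, len(x) - size + 1, stride)])

def distill(value, w):
    """Value' = MaxPool(ELU(Conv1d(Value))): roughly halves the time length."""
    return maxpool(elu(conv1d(value, w)))
```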
For the decoder, a one-step decoding mechanism adopts batch-generated prediction to output the multi-step prediction results directly. The decoder of the Informer long-term prediction model framework comprises an improved multi-head probabilistic sparse self-attention module and a multi-head attention module. The input Φ_decoder of the decoder can be expressed as:

Φ_decoder = {Φ_token, Φ_0}

where Φ_token represents the historical fused traffic flow needed for prediction, comprising the fused traffic flow time series, the local timestamp and the global timestamp, and Φ_0 is a placeholder representing the traffic flow to be predicted.

The query is obtained through the improved multi-head probabilistic sparse self-attention module in the decoder and fed into the decoder's multi-head attention mechanism, and the predicted flow is finally obtained through a fully connected layer.

The output Φ_out of the decoder represents the traffic flow values (the change trend) over a set future time period; it is finally judged whether the change trend is abnormal.
The encoder uses a self-attention distillation technique to reduce the feature dimensions and network parameters through convolution and max-pooling, compressing the feature dimensions and extracting the dominant attention. After several rounds of multi-head probabilistic sparse self-attention and self-attention distillation, the encoder outputs a feature map.

To output all predictions at once, the decoder of the Informer model employs a parallel-generation decoder mechanism, in which the input consists of g local timestamps, z placeholders and the corresponding g−z characteristic variables; the sizes of g and z are determined by the number of input parameters. After masked multi-head probabilistic sparse self-attention, multi-head self-attention is performed using the feature map output by the encoder. Finally, all prediction results can be output directly by adjusting the size of the data output through the fully connected layer, where g is the output length and z is the prediction length.
Specific implementation experiment:

Experimental environment:

The experiments were carried out on a 64-bit machine (AMD Ryzen 7 4800U with Radeon Graphics, 1.80 GHz, 16 GB RAM) running Windows 10, with programming in Python 3.7.
Experimental data:

The traffic flow, weather data and traffic accidents of a certain California expressway from 2019-2021, provided by the PEMS website, are selected as the experimental data set. When processing the traffic flow and weather data, data preprocessing including data standardization and data set partitioning is performed. Because different influencing factors have different magnitudes, factors with large scales would dominate the model; the data therefore need to be standardized to enhance the prediction capability and accuracy of the model and to improve the convergence rate. Standard deviation (z-score) normalization is used here, with the formula:
$$x_\tau=\frac{x^{*}-x_0}{\sigma}$$

where x* is the original datum, x_0 is the mean of the original data, σ is the standard deviation of the corresponding data, and x_τ is the normalized datum.
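This z-score normalization translates directly to numpy:

```python
import numpy as np

def standardize(x):
    """Standard-deviation (z-score) normalization: (x - mean) / std."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()
```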
The processed data sets are then fused using the DAI-DAO technique to form the original flow sequence; the characteristic values influencing the flow are collected, and the multidimensional array M = {A, T_1, T_2, ..., T_Ω} is constructed. The data are divided into a training set (January to December 2019), a validation set (January to December 2020) and a test set (January to December 2021).
Model parameter setting:

To improve the prediction performance of the models, the parameters of the Informer model and the LightGBM model need to be set reasonably; this both prevents the models from overfitting and improves their prediction performance.
Experimental process:

The prediction target of the invention is the future traffic flow. Since the LightGBM model does not support multi-step output, a direct prediction strategy is adopted to realize multi-step prediction with LightGBM. Specifically, the input of the first model is x_{t-1}, x_{t-2}, ..., x_{t-11}, x_{t-12} and its output is x_t; the second model outputs x_{t+1}, and so on up to x_{t+11}; training 12 models in this way completes the multi-step prediction of LightGBM. The LightGBM multi-step prediction process is shown in Fig. 3.
Predicting the traffic flow of the next 12 months from the past 12 months is achieved here by training 12 LightGBM models. Assuming the 12 months of traffic flow in 2022 need to be predicted, the input is the 12 months of traffic flow in 2021. The traffic flow of January 2022 is obtained with the LightGBM1 model, the traffic flow of February 2022 with the LightGBM2 model, and so on, until the traffic flow of December 2022 is finally obtained with the LightGBM12 model.
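The direct strategy can be sketched with a simple stand-in learner: ordinary least squares here instead of LightGBM, purely for illustration, with the 12-step window and horizon of the setup above:

```python
import numpy as np

def fit_direct(history, window=12, horizon=12):
    """Direct multi-step strategy: fit one separate model per future step h,
    each mapping the last `window` values to the value h steps ahead."""
    models = []
    n = len(history) - window - horizon + 1      # number of training windows
    for h in range(1, horizon + 1):
        rows = [history[i:i + window] for i in range(n)]
        ys = [history[i + window + h - 1] for i in range(n)]
        A = np.column_stack([np.ones(n), np.array(rows)])
        coef, *_ = np.linalg.lstsq(A, np.array(ys), rcond=None)
        models.append(coef)
    return models

def predict_direct(models, last_window):
    """Model h produces the forecast for step h; all steps output at once."""
    x = np.concatenate([[1.0], last_window])
    return np.array([x @ c for c in models])
```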
The predicted flow sequence obtained after LightGBM prediction is added to the multidimensional array to train the Informer model and output predicted values. The traffic flow from January to December 2020 is predicted with the Informer and the LightGBM to obtain the validation-set prediction results; the prediction error is calculated, the initial weight values are determined by the error-reciprocal method, the error is then back-propagated to determine the error function, and the weights are updated by gradient descent to obtain the optimal weight values. Finally, the traffic flow from January 2021 to December 2021 is obtained using the optimal weight values, the predicted values of the Informer and the predicted values of the LightGBM.
Designing a model prediction effect evaluation index:
To evaluate the prediction model, after model training and verification its performance is assessed on the test-set data. The core idea of model performance evaluation is as follows: the trained model predicts the test set, and the difference between the predicted and real results is measured. The smaller the difference, the closer the predicted values are to the true values and the better the prediction effect of the model; conversely, the larger the difference, the further the predictions deviate from the truth and the worse the model. The invention selects two indexes to measure the difference between predicted and true values: the mean absolute percentage error (MAPE) and the root mean square error (RMSE) serve as the main criteria for prediction model evaluation. MAPE is the relative error between the predicted and true values; the smaller its value, the better the prediction. RMSE is the square root of the mean squared error and reflects the stability of the prediction; the smaller its value, the more stable the prediction. The evaluation indexes are calculated as follows:
MAPE = (100%/n) · Σ_{i=1}^{n} |x*_i − x_i| / x_i

RMSE = √( (1/n) · Σ_{i=1}^{n} (x*_i − x_i)² )

where n is the number of test data, x_i is the actual value of the traffic flow and x*_i is the predicted value of the traffic flow.
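The two evaluation indexes defined above can be computed directly; a minimal sketch (function names are ours):

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent: (100/n) * sum(|x* - x| / x)."""
    n = len(actual)
    return 100.0 / n * sum(abs(p - a) / a for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    """Root mean square error: sqrt((1/n) * sum((x* - x)^2))."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for a, p in zip(actual, predicted)) / n)
```

For example, `mape([100, 200], [110, 180])` gives 10.0 (percent), since both points deviate by 10% of their actual value.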
The invention compares the prediction model (the LightGBM-Informer combined model) with Informer, XGBoost and LSTM, evaluating each model with the MAPE and RMSE indexes. Table 1 below lists the evaluation indexes.
Table 1. Evaluation indexes of each model [table rendered as an image in the original publication]
As the indexes show, among the non-combined models the MAPE of Informer and XGBoost is small, while the RMSE of LightGBM is large, indicating large fluctuations in its prediction results. After using the combined prediction model, both MAPE and RMSE decrease significantly. The experimental results show that the proposed prediction model has good stability and precision.
The above description covers only the preferred embodiment of the present invention, but the scope of the invention is not limited thereto; any equivalent substitution or change that a person skilled in the art could readily conceive within the technical scope disclosed herein, based on the technical solutions and the inventive concept of the present invention, shall fall within the scope of the invention.

Claims (1)

1. The urban traffic flow prediction method is characterized by comprising the following steps:
s1, collecting real-time data: collecting real-time traffic flow data at a specified road section, and packaging to form a real-time data packet;
s2, data preprocessing: performing real-time analysis on data in the real-time data packet by using a Kalman filter, accurately acquiring the optimal estimated motion state of the vehicle, and further generating a real-time interactive data set;
s3, constructing input data of a prediction model: fusing each real-time interaction data set by using a DAI-DAO technology to form an original flow sequence;
s4, establishing a preliminary prediction model: adding the original flow sequence into the characteristic sequence to construct a multi-dimensional array, and carrying out standardization processing on the multi-dimensional array; dividing the processed multidimensional arrays into a training set and a verification set according to a preset proportion; importing the training set into an Informer model and a LightGBM model for training and generating a primary prediction model;
s5, updating the weight coefficient to obtain an optimal prediction model: importing the verification set into a preliminary prediction model, calculating a prediction error through an error function, correspondingly updating the weight coefficient to an optimal value, and further updating to obtain an optimal prediction model;
the step S1 is as follows:
s11, the data needing to be collected comprise traffic flow information, weather information and traffic accident information of the specified road section;
s12, firstly, acquiring traffic flow information from a real-time traffic video stream in monitoring equipment of a specified road section;
s13, acquiring weather information corresponding to the real-time traffic video stream in the specified road section in real time through weather forecast software;
s14, acquiring real-time traffic accident information of the specified road section through real-time news;
s15, carrying out real-time one-to-one correspondence on the acquired real-time traffic video stream, weather information and traffic accident information to form a real-time data packet;
step S2 is specifically as follows:
s21, observing, from the acquired real-time traffic video stream, the observed motion state X_{t-1} of the vehicle at time t-1, which obeys the multidimensional Gaussian distribution N(μ_{t-1}, P_{t-1}), where μ_{t-1} is the mean and P_{t-1} the covariance matrix of the multidimensional Gaussian distribution at time t-1;
s22, according to the observed motion state of the vehicle at time t-1, making a preliminary prediction of the predicted motion state X̂_t of the vehicle at time t; X̂_t obeys the multidimensional Gaussian distribution N(μ_t, P̂_t), where μ_t is the mean and P̂_t the covariance matrix of the multidimensional Gaussian distribution at time t;
s23, during the movement of the vehicle, its motion state is affected by external interference factors; a system error W_t generated by this influence at time t is therefore established, obeying the multidimensional Gaussian distribution N(f_t, Q_t), where f_t is the mean and Q_t the covariance matrix of the multidimensional Gaussian distribution at time t; introducing W_t into X̂_t then yields the updated complete predicted motion state of the vehicle;
s24, obtaining from the real-time traffic video stream the observed motion state of the vehicle at time t, which obeys the multidimensional Gaussian distribution N(δ_0, Σ_0), where δ_0 is the mean and Σ_0 the covariance matrix of the multidimensional Gaussian distribution at time t;
s25, establishing a relation matrix H_t between the observed motion state and the predicted motion state of the vehicle at time t, and through H_t associating the observed and predicted motion states with each other;
s26, since there is an observation error during observation, the center point of the overlap region between the observed and predicted motion states of the vehicle at time t is Z_t; the true motion state of the vehicle is established, obeying the multidimensional Gaussian distribution N(Z_t, R_t), where Z_t is the mean and R_t the covariance matrix of the multidimensional Gaussian distribution at time t;
s27, multiplying the multidimensional Gaussian distribution of the observed motion state of the vehicle at time t by the multidimensional Gaussian distribution of the true motion state at time t; the product is the optimal estimated motion state X̂'_t of the Kalman filter, which obeys the multidimensional Gaussian distribution N(δ, Σ), where δ is the mean and Σ the covariance matrix of the multidimensional Gaussian distribution at time t;
s28, taking X̂'_t into the next update of the vehicle motion state and iterating continuously, thereby obtaining the optimal estimated motion state of the vehicle at the current moment, namely the optimal real-time motion state; storing the optimal estimated motion state of each vehicle in a memory to form a real-time interaction data set;
The expression of X_{t-1} is as follows:

X_{t-1} = [p_{t-1}, v_{t-1}]ᵀ

p_{t-1} = (p_{x,t-1}, p_{y,t-1}, p_{z,t-1})

v_{t-1} = (v_{x,t-1}, v_{y,t-1}, v_{z,t-1})

where X_{t-1} is the observed motion state of the vehicle at time t-1; v_{t-1} is the speed of the vehicle in the observed motion state at time t-1, with component speeds v_{x,t-1}, v_{y,t-1} and v_{z,t-1} on the x-, y- and z-axes; p_{t-1} is the position of the vehicle in the observed motion state at time t-1, with component positions p_{x,t-1}, p_{y,t-1} and p_{z,t-1} on the x-, y- and z-axes;
The predicted motion state X̂_t of the vehicle at time t is calculated as follows:

X̂_t = F_t X_{t-1} + B_t U_t

F_t = ( 1  Δt ; 0  1 ),  B_t = ( Δt²/2 ; Δt ),  U_t = a_t

where X̂_t is the predicted motion state of the vehicle at time t; B_t is the state control matrix of the vehicle at time t; U_t is the state control vector of the vehicle at time t; F_t is the state transition matrix of the vehicle at time t; Δt is the time variation, i.e. the time difference; a_t is the acceleration of the vehicle at time t;
Introducing W_t into the above X̂_t yields the updated complete predicted motion state as follows:

X̂_t = F_t X_{t-1} + B_t U_t + W_t

P̂_t = F_t P_{t-1} F_tᵀ + Q_t

where W_t is the system error of the vehicle at time t, in matrix form; Q_t is the covariance matrix of the multidimensional Gaussian distribution of the system error W_t at time t; P̂_t is the covariance matrix in the predicted motion state of the vehicle at time t; P_{t-1} is the covariance matrix in the observed motion state of the vehicle at time t-1;
A difference exists between the predicted motion state and the observed motion state of the vehicle due to the prediction error; it is therefore assumed that the observed and predicted motion states of the vehicle at time t correspond, and the relation matrix H_t between them is used to express the observed motion state in terms of the predicted one:

δ_0 = H_t X̂_t

Σ_0 = H_t P̂_t H_tᵀ

where δ_0 is the mean of the observed motion state of the vehicle at time t; Σ_0 is the covariance matrix in the observed motion state of the vehicle at time t; H_t is the relation matrix representing the relationship between the observed and predicted motion states of the vehicle at time t;
The multidimensional Gaussian distribution of the observed motion state of the vehicle is thus as follows:

N(δ_0, Σ_0) = N(H_t X̂_t, H_t P̂_t H_tᵀ)
The multidimensional Gaussian distribution of the true motion state is as follows:

N(Z_t, R_t)

where Z_t is the mean and R_t the covariance matrix of the multidimensional Gaussian distribution at time t;
The multidimensional Gaussian distribution N(δ, Σ) of the optimal estimated motion state is the product of the two distributions above, with:

δ = δ_0 + Σ_0 (Σ_0 + R_t)⁻¹ (Z_t − δ_0)

Σ = Σ_0 − Σ_0 (Σ_0 + R_t)⁻¹ Σ_0
By substitution and sorting, the calculation equations of the optimal estimated motion state of the Kalman filter at time t are obtained:

K_t = P̂_t H_tᵀ (H_t P̂_t H_tᵀ + R_t)⁻¹

X̂'_t = X̂_t + K_t (Z_t − H_t X̂_t)

P'_t = P̂_t − K_t H_t P̂_t

where K_t is the Kalman gain of the vehicle at time t, and P'_t is the covariance matrix of the vehicle in the optimal estimated motion state at time t;
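The update equations of s25–s27 can likewise be sketched. A minimal sketch with our own function name; the matrices follow the standard Kalman update form described by the claim (gain, corrected state, corrected covariance):

```python
import numpy as np

def update_step(x_pred, P_pred, z_t, H_t, R_t):
    """Kalman update:
    K_t = P̂ Hᵀ (H P̂ Hᵀ + R)⁻¹ ;  X̂' = X̂ + K (Z − H X̂) ;  P' = P̂ − K H P̂."""
    S = H_t @ P_pred @ H_t.T + R_t             # innovation covariance
    K = P_pred @ H_t.T @ np.linalg.inv(S)      # Kalman gain K_t
    x_est = x_pred + K @ (z_t - H_t @ x_pred)  # optimal estimated motion state
    P_est = P_pred - K @ H_t @ P_pred          # its covariance
    return x_est, P_est, K
```

With equal prediction and observation uncertainty, the gain is 0.5, so the estimate lands halfway between the predicted state and the measurement, and the covariance of the observed component is halved.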
step S3 is specifically as follows:
s31, forming a corresponding real-time interactive data set by the weather information, the traffic accident information and the vehicle motion state information processed by the Kalman filter;
s32, fusing the real-time interactive data sets through a DAI-DAO technology to obtain fused data sets, and combining the fused data sets into a corresponding original flow sequence;
step S4 is specifically as follows:
s41, collecting, from the data packets, characteristic values that influence the traffic flow, and combining the obtained original flow sequence A = {b_1, b_2, …, b_η} with the characteristic values to construct the multidimensional array M = {A, T_1, T_2, …, T_Ω}, where b_1 denotes the first fused data set, b_2 the second fused data set, and b_η the η-th fused data set, i.e. there are η fused data sets in total; T_1 denotes the first characteristic value, T_2 the second characteristic value, and T_Ω the Ω-th characteristic value, i.e. there are Ω characteristic values in total in the training set;
s42, standardizing the multidimensional array M, namely flattening M with the numpy library in a python environment to convert it into one-dimensional arrays, and dividing the processed one-dimensional arrays into a training set and a verification set according to a preset proportion; importing the data of the training set into the LightGBM model for training to form a trained LightGBM model; obtaining, through the trained LightGBM model, the 1st prediction sequence ψ_1 of the traffic flow:
ψ_1 = θ(T(m), T(τ))

where θ represents the LightGBM model, T(m) represents the first m characteristic values in the training set and T(τ) the last τ characteristic values; ψ_1(m) denotes the first m values of the prediction sequence ψ_1 and ψ_1(τ) its last τ values; Ω is the total number of characteristic values;
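The standardization and split of s42 can be sketched with numpy. A minimal sketch under stated assumptions: the function name and the 80/20 default ratio are ours, and each "sample" here is simply one time step of the stacked array.

```python
import numpy as np

def build_dataset(flow, features, train_ratio=0.8):
    """Stack the flow sequence A with the feature rows T_1..T_Ω into a
    multidimensional array M, flatten each sample to 1-D, and split into
    training and verification sets by a preset proportion."""
    M = np.vstack([flow] + list(features))       # shape (1 + Ω, η)
    samples = M.T                                # one row per time step
    flat = np.array([row.flatten() for row in samples])
    split = int(len(flat) * train_ratio)
    return flat[:split], flat[split:]
```

For 10 time steps and two feature rows, the default ratio yields 8 training samples and 2 verification samples, each of length 3 (flow plus two features).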
s43, adding ψ_1 to the multidimensional array M to obtain the updated multidimensional array M_1, and importing M_1 into the Informer model for training to obtain the 2nd prediction sequence ψ_2 of the traffic flow:

M_1 = {A, T_1, T_2, …, T_Ω, ψ_1}

ψ_2 = ζ(M_1)
s44, adding ψ_2 to M_1 to form the new multidimensional array M_2, and importing M_2 into the Informer model for training to obtain a new prediction sequence; repeating the operations of S41-S44 until the n-th prediction sequence ψ_n and the n-th multidimensional array M_n are formed, finally yielding the trained Informer model:

M_n = {A, T_1, T_2, …, T_Ω, ψ_1, ψ_2, …, ψ_n}

ψ_n = ζ(M_{n-1})

where ζ represents the Informer model;
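The iterative augmentation of s43–s44 can be sketched as a loop: each round, the current array is fed to the model, and the resulting prediction sequence is appended as a new feature row. A minimal sketch; a plain callable stands in for the Informer model ζ, and the function name is ours.

```python
import numpy as np

def iterate_prediction_features(M, n_rounds, model):
    """Each round: ψ_{k+1} = ζ(M_k), then M_{k+1} = {M_k, ψ_{k+1}}.
    `model` is a stand-in for the Informer model ζ; it maps an array of
    feature rows to one prediction sequence of the same length."""
    psis = []
    for _ in range(n_rounds):
        psi = model(M)           # ψ_{k+1} = ζ(M_k)
        M = np.vstack([M, psi])  # append the prediction sequence as a feature row
        psis.append(psi)
    return M, psis
```

Starting from a (2, 5) array, three rounds append three prediction rows, giving a (5, 5) array and three stored sequences.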
s45, importing the data of the verification set into the Informer model and the LightGBM model and computing the predicted values of the verification set; computing the prediction errors between the true and predicted values of the verification set, determining the initial values of the weight coefficients by the error-inverse method, and bringing the weight coefficients into the Informer and LightGBM models to establish the preliminary prediction model D_result:

D_result = ω_1 * LightGBM_result + ω_2 * Informer_result

ω_1 = ε_1⁻¹ / (ε_1⁻¹ + ε_2⁻¹)

ω_2 = ε_2⁻¹ / (ε_1⁻¹ + ε_2⁻¹)

where LightGBM_result is the predicted value of the LightGBM model, Informer_result the predicted value of the Informer model, and D_result the prediction result; ω_1 is the weight coefficient of the LightGBM model and ω_2 that of the Informer model; ε_1 is the error between the predicted and true values of the LightGBM model, and ε_2 that of the Informer model;
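The error-inverse initialization can be sketched directly. An assumption is made here about the exact formula (each weight proportional to the reciprocal of that model's validation error, weights summing to 1), since the original equations are rendered as images; function names are ours.

```python
def inverse_error_weights(eps_lgbm, eps_informer):
    """Initial weights by the error-inverse method: the more accurate model
    (smaller ε) receives the larger weight; the weights sum to 1."""
    inv1, inv2 = 1.0 / eps_lgbm, 1.0 / eps_informer
    w1 = inv1 / (inv1 + inv2)
    w2 = inv2 / (inv1 + inv2)
    return w1, w2

def combine(w1, w2, pred_lgbm, pred_informer):
    """D_result = ω1 * LightGBM_result + ω2 * Informer_result."""
    return w1 * pred_lgbm + w2 * pred_informer
```

For example, errors ε_1 = 1 and ε_2 = 3 give weights 0.75 and 0.25, so the combined prediction of 100 and 200 is 125.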
step S5 is specifically as follows:
s51, back-propagating the prediction errors to determine the error function E_L of the LightGBM model and the error function E_I of the Informer model:

E_L = (1/2) Σ_{j=1}^{J} (t_{Lj} − O_{Lj})²

E_I = (1/2) Σ_{j=1}^{J} (t_{Ij} − O_{Ij})²

where t_{Lj} is the target value of the LightGBM model at layer j and O_{Lj} the actual value of the LightGBM model at layer j; t_{Ij} is the target value of the Informer model at layer j and O_{Ij} the actual value of the Informer model at layer j; J is the number of layers of the LightGBM and Informer models;
s52, differentiating the error functions, updating the weight coefficients by gradient descent, and obtaining the optimal weight coefficients after multiple iterations;
The gradient value derived from the error function E_L:

∂E_L/∂ω_1 = (∂E_L/∂O_L) · (∂O_L/∂x_1) · (∂x_1/∂ω_1) = −(t_L − O_L) · O_L (1 − O_L) · t_L

O_L = sigmoid(ω_1 t_L)

x_1 = ω_1 t_L

where ω_1 is the weight coefficient of the LightGBM model; t_L represents the final target value of the LightGBM model; O_L represents the final actual value of the LightGBM model; x_1 is an algebraic term denoting the product of ω_1 and t_L; sigmoid is the activation function;
The gradient value derived from the error function E_I:

∂E_I/∂ω_2 = (∂E_I/∂O_I) · (∂O_I/∂x_2) · (∂x_2/∂ω_2) = −(t_I − O_I) · O_I (1 − O_I) · t_I

O_I = sigmoid(ω_2 t_I)

x_2 = ω_2 t_I

where ω_2 is the weight coefficient of the Informer model; t_I represents the final target value of the Informer model; O_I represents the final actual value of the Informer model; x_2 is an algebraic term denoting the product of ω_2 and t_I;
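The gradient and update rule above can be checked numerically. A minimal sketch with our own function names, using the chain rule exactly as stated (E = ½(t − O)², O = sigmoid(ω·t)):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def weight_gradient(w, t):
    """dE/dω for E = ½(t − O)², O = sigmoid(ω·t):
    dE/dω = −(t − O) · O · (1 − O) · t."""
    O = sigmoid(w * t)
    return -(t - O) * O * (1.0 - O) * t

def update_weight(w, t, alpha=0.1):
    """One gradient-descent step: ω' = ω − α · dE/dω."""
    return w - alpha * weight_gradient(w, t)
```

At ω = 0 and t = 1, the output O is 0.5, so the gradient is −0.5 · 0.5 · 0.5 = −0.125 and one step with α = 0.1 moves ω to 0.0125.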
s53, finally, deriving the final optimal prediction model from the prediction results of the Informer and LightGBM models together with the obtained weight coefficients; the updated optimal weight coefficients are as follows:

ω'_1 = ω_1 − α · ∂E_L/∂ω_1

ω'_2 = ω_2 − α · ∂E_I/∂ω_2

The corresponding optimal prediction model is as follows:

D_result = ω'_1 * LightGBM_result + ω'_2 * Informer_result

where α is the gradient parameter, ω'_1 is the updated optimal weight coefficient of the LightGBM model and ω'_2 the updated optimal weight coefficient of the Informer model.
CN202211637628.3A 2022-12-20 2022-12-20 Urban traffic flow prediction method Active CN115619052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211637628.3A CN115619052B (en) 2022-12-20 2022-12-20 Urban traffic flow prediction method


Publications (2)

Publication Number Publication Date
CN115619052A CN115619052A (en) 2023-01-17
CN115619052B true CN115619052B (en) 2023-03-17

Family

ID=84879581


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011015817A2 (en) * 2009-08-03 2011-02-10 Hatton Traffic Management Limited Traffic control system
CN111161535A (en) * 2019-12-23 2020-05-15 山东大学 Attention mechanism-based graph neural network traffic flow prediction method and system
CN114202120A (en) * 2021-12-13 2022-03-18 中国电子科技集团公司第十五研究所 Urban traffic travel time prediction method aiming at multi-source heterogeneous data

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782658B (en) * 2019-08-16 2022-03-29 华南理工大学 Traffic prediction method based on LightGBM algorithm
US11842271B2 (en) * 2019-08-29 2023-12-12 Nec Corporation Multi-scale multi-granularity spatial-temporal traffic volume prediction
CN110889546B (en) * 2019-11-20 2020-08-18 浙江省交通规划设计研究院有限公司 Attention mechanism-based traffic flow model training method
CN113487061A (en) * 2021-05-28 2021-10-08 山西云时代智慧城市技术发展有限公司 Long-time-sequence traffic flow prediction method based on graph convolution-Informer model
CN114021811A (en) * 2021-11-03 2022-02-08 重庆大学 Attention-based improved traffic prediction method and computer medium
CN114330671A (en) * 2022-01-06 2022-04-12 重庆大学 Traffic flow prediction method based on Transformer space-time diagram convolution network
CN114519469A (en) * 2022-02-22 2022-05-20 重庆大学 Construction method of multivariate long sequence time sequence prediction model based on Transformer framework
CN114757389A (en) * 2022-03-10 2022-07-15 同济大学 Federal learning-based urban traffic flow space-time prediction method
CN115206092B (en) * 2022-06-10 2023-09-19 南京工程学院 Traffic prediction method of BiLSTM and LightGBM models based on attention mechanism




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant