CN114202120A - Urban traffic travel time prediction method aiming at multi-source heterogeneous data - Google Patents
Urban traffic travel time prediction method aiming at multi-source heterogeneous data
- Publication number: CN114202120A (application number CN202111514594.4A)
- Authority
- CN
- China
- Prior art keywords
- travel time
- vector
- road
- neural network
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06F18/251—Fusion techniques of input or preprocessed data
- G06N3/045—Combinations of networks
- G06N3/047—Probabilistic or stochastic networks
- G06N3/08—Learning methods
- G08G1/0125—Traffic data processing
- G08G1/0129—Traffic data processing for creating historical data or processing based on historical data
- G08G1/0137—Measuring and analyzing of parameters relative to traffic conditions for specific applications
Abstract
The invention discloses an urban traffic travel time prediction method for multi-source heterogeneous data. The method comprises three implementation steps. Step one: collect traffic travel data and external attribute information, complete data preprocessing and labeling, and construct training, validation and test sets suitable for a deep learning task. Step two: construct a deep neural network model based on multi-view convolution and an attention mechanism, and train it on the constructed data set. Step three: input the travel trajectory to be predicted and run inference with the trained model to obtain the travel time prediction result. By comprehensively considering multi-source heterogeneous data such as urban road network information, GPS trajectory information, weather information and driver information, the method achieves more accurate and stable travel time prediction than existing methods, thereby supporting the construction of intelligent transportation systems.
Description
Technical Field
The invention relates to the fields of intelligent transportation systems and multi-source information processing, and in particular to an urban road travel time prediction method based on a deep neural network with multi-view convolution and an attention mechanism, targeting multi-source heterogeneous data such as GPS trajectory data, urban road network data and meteorological information.
Background
With rapid socio-economic development, travel demand and motor vehicle ownership have grown quickly, creating an urgent need for new urban Intelligent Transportation Systems (ITS). Accurate and efficient urban travel time prediction can reflect real-time traffic conditions, relieve or avoid congestion, and support dynamic traffic management services; it is a key link of an intelligent transportation system and is of great significance for traffic control and urban traffic intelligence. Meanwhile, with the widespread use of mobile devices and GPS sensors, massive traffic data (such as vehicle trajectories, travel records and online ride-hailing data) are generated and collected. These data contain important information about urban travel and make it possible to build better intelligent transportation systems, reduce congestion and improve daily commuting efficiency. Accurate and efficient urban road travel time prediction from such multi-source heterogeneous data is therefore of great significance.
In recent years, travel time prediction has shifted from traditional statistical methods to machine learning, and deep learning in particular has become a research hotspot owing to its strong capacity to learn from massive data. Existing travel time prediction solutions fall broadly into two categories. The first is path-based solutions that use an intuitive physical model: the total travel time of a given route equals the sum of the time to traverse each road segment plus the time spent at each intersection. The second is data-driven solutions that build rich features from location-based data, constructing high-dimensional feature maps from which travel time is predicted. The first category is highly interpretable, but its accuracy suffers from errors accumulated over many road segments, intersections, traffic lights and other factors. The second category is currently the most accurate and popular, including schemes based on nearest neighbors, ST-NN, Wide-Deep learning models and DeepTTE; however, some of these methods ignore trajectory information or map trajectory points into regions rather than specific road segments, introducing errors into flow statistics. Moreover, existing methods mainly rely on LSTMs in the time dimension, and LSTM-based methods lose information over long propagation horizons, so a new approach is needed.
Considering the strength of the recent attention mechanism (Transformer) in feature association and temporal correlation analysis, the strength of convolutional neural networks in spatial feature extraction, and the characteristics of multi-source heterogeneous traffic data, a deep neural network model is designed that uses the attention mechanism to capture the temporal correlations of traffic data and a convolutional network to learn the spatial variation of travel trajectories. Through data set construction, model training and related steps, a prediction model of urban travel time for multi-source heterogeneous data is obtained; inference experiments on the validation data set demonstrate the efficiency and accuracy of the model in travel time prediction.
Currently, in deep-learning-based urban travel time prediction, the DeepTTE method proposed by Yu Zheng's group at Microsoft Research Asia is generally regarded as an efficient and accurate authoritative baseline. The main function of the DeepTTE model is to predict vehicle commuting time by learning from historical vehicle trajectories: given a path, it predicts the time required from the starting point to the end point.
The network structure of the DeepTTE model mainly comprises three parts: a Spatio-Temporal Learning Component, an Attribute Component and a Multi-task Learning Component, as shown in FIG. 1.
DeepTTE is an end-to-end deep learning prediction method that learns temporal and spatial dependencies from GPS trajectory information. Specifically, the model first uses an attribute component that integrates external factors, including weather conditions, path length, travel date and driving preference, fusing the learned latent representations with the raw GPS trajectory information to account for the combined influence of external factors. It then applies a geo-based convolutional layer (Geo-Conv) to transform the GPS sequence into a series of feature maps, and captures the temporal correlations of these feature maps and the external factors with recurrent neural networks (LSTMs). Finally, a multi-task component learns local and overall travel times simultaneously through a multi-task loss function, and learns the weights of different local paths from the implicit representations of individual segments, the whole path and the external factors via a multi-factor attention mechanism, ultimately predicting both the travel time of each segment and the overall travel time of the path.
The prior art has the following disadvantages. First, the DeepTTE model does not consider urban road network information, and its prediction accuracy degrades sharply when GPS trajectory points are sparsely distributed or contain large errors. Second, DeepTTE fuses external information with GPS trajectory information only through simple encoding (embedding) and splicing operations, lacking deeper extraction of the interaction patterns between them. Third, DeepTTE mainly uses an LSTM-based method in the time dimension; LSTMs lose information over long propagation horizons and struggle to capture long-sequence temporal features, so a new method is also needed to deal with this problem.
Disclosure of Invention
Therefore, the invention firstly provides an urban traffic travel time prediction method aiming at multi-source heterogeneous data, which comprises the following three implementation steps:
the method comprises the following steps: collecting traffic travel data and external attribute information, finishing data preprocessing and labeling, and constructing a training set, a verification set and a test set which are suitable for a deep learning task;
step two: constructing a deep neural network model based on multi-view convolution and an attention mechanism, and training on the constructed data set to obtain the deep neural network model;
step three: and inputting a travel track to be predicted, and reasoning by adopting a deep neural network model obtained by training to obtain a travel time prediction result.
The traffic travel data comprise a GPS track, travel time, travel date, a specific driver vehicle number, path length and real path travel time.
The external attribute information comprises a city road network structure and weather information.
The training, validation and test sets are constructed as follows: screening conditions are set according to common characteristics of traffic travel trajectories, and obviously abnormal trajectories are removed; each GPS trajectory point is then projected onto its corresponding road segment with the Leuven.MapMatching map-matching tool to obtain a road segment trajectory sequence; the data are then divided into training, validation and test data sets in a 2:1:1 ratio.
A trajectory is screened out as abnormal if any of the following holds: travel distance greater than 100 km or less than 0.5 km; average speed greater than 100 km/h or less than 5 km/h; travel time greater than 7200 seconds or less than 60 seconds.
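The screening rules above can be sketched as a simple filter. This is a hypothetical illustration; the field names (dist_km, avg_speed_kmh, duration_s) are assumptions, not from the patent.

```python
# Hypothetical sketch of the abnormal-trajectory screening described above.
# Field names (dist_km, avg_speed_kmh, duration_s) are assumed.

def is_abnormal(trip: dict) -> bool:
    """Return True if the trip violates any screening condition."""
    return (
        trip["dist_km"] > 100 or trip["dist_km"] < 0.5              # travel distance
        or trip["avg_speed_kmh"] > 100 or trip["avg_speed_kmh"] < 5  # average speed
        or trip["duration_s"] > 7200 or trip["duration_s"] < 60      # travel time
    )

def screen(trips):
    """Keep only trajectories that pass every screening condition."""
    return [t for t in trips if not is_abnormal(t)]
```

A trip failing any one of the six bounds is dropped, which matches the "remove obvious abnormal tracks" step.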
The deep neural network model based on multi-view convolution and an attention mechanism is constructed as follows: the model architecture consists of a traffic information fusion component, a multi-view convolutional attention mechanism component and a multi-task learning component.
The traffic information fusion component comprises two modules: a road segment vector mapping module and a path encoding module. The road segment vector mapping module maps road segments to vectors using a skip-gram method. For a road segment sequence {r_i}, i = 1, …, N, where N is the number of road segments in the sequence, a sliding window of length 3 is set, and for each road segment r_i upstream and downstream road segment point pairs are obtained. The center road segment in each pair is denoted r_c, the upstream segment r_u and the downstream segment r_d; the constructed upstream and downstream point pairs are (r_c, r_u) and (r_c, r_d) respectively.
Pseudo-adjacent point pairs are then constructed by sampling with probability P(r_i):
P(r_i) = freq(r_i) / Σ_j freq(r_j)
where freq(r_i) is the frequency with which road segment r_i occurs in all road segment sequences.
Next, a vector mapping neural network h = f(r) = σ(Wx + b) from road segment numbers to road segment vectors is constructed, where x is the encoding of road segment r, b is the bias vector to be trained, W is the mapping matrix to be trained, and σ is the sigmoid activation function. The networks for the center, upstream and downstream road segments have parameters W_c, b_c; W_u, b_u; and W_d, b_d, yielding road segment vectors h_c, h_u and h_d respectively. The upstream relation scoring function is defined as ⟨h_c, h_u⟩ and the downstream relation scoring function as ⟨h_c, h_d⟩. The training objective of the vector mapping networks is to make the inner product of the mapped vectors of true upstream/downstream segments as close to 1 as possible and that of pseudo upstream/downstream segments as close to 0 as possible, i.e. to minimize
L = Σ_{r_c} [ Σ_{r_u∈up(r_c)} (⟨h_c, h_u⟩ − 1)² + Σ_{r_d∈down(r_c)} (⟨h_c, h_d⟩ − 1)² + Σ_{r_n∈Neg(r_c)} ⟨h_c, h_n⟩² ]
where up(r_i) and down(r_i) denote the true upstream and downstream road segment points of segment r_i, and Neg(r_i) denotes its pseudo-adjacent road segment points.
Training yields the network parameters W_c, b_c; W_u, b_u; and W_d, b_d for the center, upstream and downstream road segments, and thus the road segment vector mapping networks f_c, f_u and f_d;
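The window-of-3 pair construction and frequency-based pseudo-adjacent (negative) sampling described above might be sketched as follows; the exact normalization of P(r_i) and the function names are assumptions.

```python
import random
from collections import Counter

# Sketch of the skip-gram pair construction with a length-3 sliding window,
# plus frequency-proportional pseudo-adjacent sampling. Assumed details:
# P(r_i) = freq(r_i) / sum_j freq(r_j).

def build_pairs(seq):
    """For each center segment r_c, emit (r_c, r_u) and (r_c, r_d)."""
    pairs = []
    for i in range(1, len(seq) - 1):
        rc, ru, rd = seq[i], seq[i - 1], seq[i + 1]
        pairs.append((rc, ru))  # center-upstream pair
        pairs.append((rc, rd))  # center-downstream pair
    return pairs

def negative_sampler(sequences):
    """Sample pseudo-adjacent segments with probability proportional to freq(r_i)."""
    freq = Counter(r for seq in sequences for r in seq)
    segs = list(freq)
    weights = [freq[r] for r in segs]
    return lambda: random.choices(segs, weights=weights, k=1)[0]
```

Each sampled segment is paired with a center segment to form a pseudo upstream/downstream pair whose target inner product is 0.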
The path encoding module works as follows. For each GPS trajectory point p_i on the trajectory, the road network map matching performed during preprocessing yields the corresponding road segment number p_i.r_i; the road segment vectors for that point are then obtained from the road segment vector mapping module, namely:
p_i.ev_c = f_c(p_i.r_i)
p_i.ev_u = f_u(p_i.r_i)
p_i.ev_d = f_d(p_i.r_i)
Then the trajectory feature vector traj_i is computed as
traj_i = W_loc · (p_i.lon ⊕ p_i.lat ⊕ p_i.ev_c ⊕ p_i.ev_u ⊕ p_i.ev_d ⊕ p_i.dis ⊕ p_i.spd)
where p_i denotes the ith trajectory point; p_i.lon and p_i.lat its longitude and latitude; p_i.ev_c, p_i.ev_u and p_i.ev_d the mapping vectors of the center, upstream and downstream road segments; p_i.dis the length of the road segment corresponding to the point; p_i.spd the speed limit of that road segment; W_loc a neural network parameter matrix to be trained; and ⊕ the splicing (concatenation) operation.
Then the external attribute feature vector attr is computed as
attr = W_attr · (Dis ⊕ WeekID_em ⊕ TimeID_em ⊕ DriverID_em ⊕ Weather_em)
where Dis is the total length of the trajectory; WeekID_em, TimeID_em, DriverID_em and Weather_em are the vectors obtained by simple embedding of the trip date, trip time, taxi driver number and weather information respectively; and W_attr is a neural network parameter matrix to be trained.
Then a one-dimensional convolution kernel of size 3 is applied to the vectors formed by splicing each point's trajectory feature vector traj_i with the external attribute feature vector attr, yielding the fusion feature matrix loc_f:
loc_f^i = ELU(Conv1d_{k=3}(traj_i ⊕ attr))
where loc_f^i is the ith row of the fusion feature matrix loc_f, Conv1d_{k=3} denotes a one-dimensional convolution with kernel size 3 whose activation function is the ELU function, and the convolution kernel parameters are to be trained.
In summary, from the input trajectory information {p_i} the traffic information fusion component computes the fusion feature matrix loc_f as its output.
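The fusion step above can be sketched in numpy; the shapes, the multi-channel kernel layout and the per-window convolution are simplifying assumptions, not the patent's exact implementation.

```python
import numpy as np

# Minimal sketch of the fusion step: each per-point feature vector traj_i is
# spliced with the shared attribute vector attr, then a 1-D convolution of
# kernel size 3 with ELU activation slides along the trajectory axis.

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def fuse(traj, attr, kernels):
    """traj: (N, d_t); attr: (d_a,); kernels: (C, 3, d_t + d_a) -> (N-2, C)."""
    # Splice attr onto every trajectory point's feature vector.
    x = np.concatenate([traj, np.tile(attr, (traj.shape[0], 1))], axis=1)
    # Slide a size-3 window and contract each window against every kernel.
    rows = [elu(np.tensordot(kernels, x[i:i + 3], axes=([1, 2], [0, 1])))
            for i in range(x.shape[0] - 2)]
    return np.array(rows)  # fusion feature matrix loc_f
```

Each output row corresponds to one window position; C output channels play the role of the learned fusion features.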
The multi-view convolutional attention mechanism component consists of a series of multi-view CNN-Transformer (MCT) modules. Each MCT module comprises a spatial attention feature extraction model and a temporal attention feature extraction model; the component finally outputs a multi-view spatio-temporal matrix that serves as the input of the multi-task learning component.
Specifically, let the input of the ith MCT module be X_{i−1} and its output be X_i. For the 1st MCT module, the input X_0 is the fusion feature matrix loc_f output by the traffic information fusion component; for the last MCT module, denoted the kth, the output X_k is the multi-view spatio-temporal matrix output by the multi-view convolutional attention mechanism component.
For any ith MCT module the calculation proceeds as follows. First the module input X_{i−1} is encoded with the positional encoding method of the multi-head attention mechanism to obtain the position-aware input X̃_{i−1}:
X̃_{i−1} = Position(X_{i−1})
where Position(·) denotes the standard positional encoding of the multi-head attention mechanism. Then 3 one-dimensional convolutional neural networks with different kernel sizes extract spatial features over different receptive fields, which are mapped nonlinearly to the target domain to obtain the spatial feature matrices:
S_{i,j} = softmax(W_{i,j} ∗ X̃_{i−1} + b_{i,j}),  j = 1, 2, 3
where W_{i,j} is the parameter matrix of the jth one-dimensional convolutional neural network in the ith MCT module (to be trained), b_{i,j} is the corresponding bias vector (to be trained), ∗ denotes the convolution operation, softmax(·) denotes the normalized exponential function, and S_{i,j} is the spatial feature matrix extracted by the jth one-dimensional convolutional network of the ith MCT module.
Then the 3 spatial feature matrices are each element-wise multiplied with X̃_{i−1} and summed, and the multi-view spatial feature matrix M_i of the ith MCT module is obtained through a nonlinear mapping:
M_i = ELU(W_MVC · Σ_{j=1}^{3} (S_{i,j} ⊙ X̃_{i−1}))
where W_MVC is the multi-view spatial mapping matrix to be trained and ELU(·) is the ELU activation function. The output X_i^S of the spatial attention feature extraction model of the ith MCT module is then:
X_i^S = TransformerEncoder(M_i)
where TransformerEncoder(·) denotes the encoder of a standard multi-head attention (Transformer) model.
Then the output X_i^S of the spatial attention feature extraction model is fed into the temporal attention feature extraction model, which extracts temporal relation features with a Transformer encoder in place of an LSTM, giving the output X_i^T of the temporal attention feature extraction model of the ith MCT module:
X_i^T = TransformerEncoder(X_i^S)
Then the output X_i of the ith MCT module is computed by adding the temporal attention output to the position-encoded input through a residual connection:
X_i = X̃_{i−1} + X_i^T
Iterating i over all MCT modules starting from the 1st, the output X_k of the kth (last) module is the multi-view spatio-temporal matrix output by the multi-view convolutional attention mechanism component.
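The multi-view spatial extraction inside one MCT module can be sketched in numpy. This is a simplified illustration: the kernel sizes (1, 3, 5), the per-feature 1-D convolution, and the omission of the Transformer encoder stages are all assumptions for brevity.

```python
import numpy as np

# Simplified sketch of multi-view spatial feature extraction: three 1-D
# convolutions with different kernel sizes produce softmax maps over the
# sequence, which are applied element-wise to the position-encoded input
# and summed (the Transformer encoder that follows is omitted here).

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv1d_same(x, k):
    """Per-feature 1-D convolution with 'same' padding; x: (N, d), k: (ks,)."""
    pad = len(k) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.tensordot(k, xp[i:i + len(k)], axes=(0, 0))
                     for i in range(x.shape[0])])

def multi_view_spatial(x, kernels):
    """x: position-encoded input (N, d); kernels: list of 1-D kernels."""
    views = [softmax(conv1d_same(x, k), axis=0) * x for k in kernels]
    return sum(views)  # to be passed through ELU(W_MVC ·) and an encoder
```

Different kernel sizes give each "view" a different receptive field over the trajectory, which is the point of the multi-view design.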
The multi-task learning component applies a multi-factor attention mechanism and a nonlinear mapping to the input multi-view spatio-temporal matrix X_k to obtain the overall travel time prediction. Specifically, let x_i (i = 1, …, N) denote the ith row of the multi-view spatio-temporal matrix X_k input to the multi-task learning component, where N is the total number of rows of X_k. First the multi-factor attention vector x_attention is computed:
x_attention = Σ_{i=1}^{N} α_i x_i
where the multi-factor weight α_i is computed as
α_i = exp(z_i) / Σ_{j=1}^{N} exp(z_j),  z_i = ⟨attr, x_i⟩
with ⟨·,·⟩ denoting the vector inner product. Finally the overall travel time t_entire of the trajectory is computed through a fully connected layer:
t_entire = W_entire · x_attention + b_entire
where W_entire is the parameter matrix of the fully connected layer to be trained and b_entire its bias vector.
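The multi-factor attention pooling and final fully connected layer can be sketched directly from the formulas above; the shapes are assumptions.

```python
import numpy as np

# Sketch of the multi-task learning head: each row x_i of X_k is scored by
# its inner product with the attribute vector attr, the scores are softmax-
# normalized into weights alpha_i, and the weighted sum is mapped to the
# overall travel time by a fully connected layer.

def predict_entire(Xk, attr, W_entire, b_entire):
    z = Xk @ attr                       # z_i = <attr, x_i>
    alpha = np.exp(z - z.max())
    alpha /= alpha.sum()                # multi-factor weights alpha_i
    x_att = alpha @ Xk                  # attention-pooled vector x_attention
    return W_entire @ x_att + b_entire  # t_entire
```

Rows whose features align with the attribute vector receive larger weights, so external factors steer which parts of the trajectory dominate the prediction.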
The training method is as follows. The mean absolute percentage error is chosen as the model training error:
L = (1/|D|) Σ |t_entire − gap(p_1, p_N)| / gap(p_1, p_N)
where gap(p_1, p_N) is the recorded time interval of a trajectory in the database from its starting point p_1 to its end point p_N, i.e. the overall travel time of the trajectory, the sum runs over the trajectories of the training set, and |D| is their number.
Model training uses the Adam gradient optimization algorithm with an initial learning rate of 1e-4 and a batch size of 64; the learning rate is reduced to 1/4 of its value every 2 epochs, and 100 epochs are trained in total, finally yielding the travel time prediction deep neural network model based on multi-view convolution and the attention mechanism.
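The training criterion and schedule can be sketched as follows; the step-decay formula is an assumption consistent with "reduce to 1/4 every 2 epochs".

```python
# Sketch of the training setup: mean absolute percentage error against the
# recorded overall travel time gap(p_1, p_N), and a step learning-rate
# schedule starting from 1e-4 that multiplies the rate by 1/4 every 2 epochs.

def mape(pred, target):
    """Mean absolute percentage error over a batch (plain-Python sketch)."""
    return sum(abs(p - t) / t for p, t in zip(pred, target)) / len(pred)

def learning_rate(epoch, base_lr=1e-4, decay=0.25, step=2):
    """Learning rate after `epoch` completed epochs under the step schedule."""
    return base_lr * decay ** (epoch // step)
```

Over 100 epochs this schedule decays the rate very aggressively; in practice the effective training happens in the early epochs.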
The technical effects to be realized by the invention are as follows:
the method for predicting the traffic travel time can comprehensively consider multi-source heterogeneous data such as urban road network information, GPS track information, weather information, driver information and the like, can realize more accurate and stable traffic travel time prediction compared with the existing method, and thus provides support for the construction of an intelligent traffic transportation system.
Drawings
FIG. 1 prior art method architecture;
FIG. 2 is an architecture of an urban traffic travel time prediction method for multi-source heterogeneous data;
FIG. 3 is a block diagram of a road segment vector mapping module;
FIG. 4 is a comparison of the prediction accuracy of different methods at different travel times;
FIG. 5 shows the model prediction accuracy at different travel distances.
Detailed Description
The following is a preferred embodiment of the present invention and is further described with reference to the accompanying drawings, but the present invention is not limited to this embodiment.
The invention provides an urban traffic travel time prediction method aiming at multi-source heterogeneous data. The specific implementation steps are as follows: (1) collecting mass traffic travel data and external attribute information, finishing data preprocessing and labeling, and constructing a training set, a verification set and a test set which are suitable for a deep learning task; (2) designing and constructing a deep neural network model based on multi-view convolution and an attention mechanism, and training on the constructed data set to obtain the deep neural network model; (3) and inputting a travel track to be predicted, and reasoning by adopting a deep neural network model obtained by training to obtain a travel time prediction result.
Specifically, the step (1) is as follows:
We first collected the following travel data: taxi trajectory data from August 2014 in a major metropolis, covering about 15,000 taxi drivers and 9.73 million GPS trajectories in total. Besides the GPS trajectories, the data include attribute information such as travel time, travel date, taxi driver number, path length and real path travel time. In addition, we collected the following external attribute information: the urban road network structure of 2014 and weather information for August 2014.
Then, according to the common characteristics of travel trajectories, the following screening conditions are set to eliminate obviously abnormal trajectories: travel distance greater than 100 km or less than 0.5 km; average speed greater than 100 km/h or less than 5 km/h; travel time greater than 7200 seconds or less than 60 seconds.
Then, each point of the GPS track is projected onto a corresponding road segment by using a Leuven. MapMatching map matching tool, so as to obtain a road segment track sequence.
Then, we split the data into a training data set, a validation data set, and a test data set at a 2:1:1 ratio. The training data set is used for model training, the verification data set is used for parameter calibration, and the test data set is used for performance verification.
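The 2:1:1 split can be sketched as follows; the list-slicing approach is an assumption, since the patent does not state whether the split is random or chronological.

```python
# Sketch of the 2:1:1 train/validation/test split described above.

def split_211(items):
    """Split a list into train, validation and test parts in a 2:1:1 ratio."""
    n = len(items)
    a = n * 2 // 4  # end of the training portion (2 parts of 4)
    b = n * 3 // 4  # end of the validation portion (1 part of 4)
    return items[:a], items[a:b], items[b:]
```

With a random split one would shuffle `items` first; with a chronological split the ordering is kept as collected.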
Specifically, the step (2) is as follows:
The deep neural network model based on multi-view convolution and an attention mechanism designed by the invention consists of three components: a traffic information fusion component (Traffic Information Fusion), a multi-view convolution attention mechanism component (Multi-View CNN Transformer), and a multi-task learning component (Multi-Task Learning), as shown in fig. 2.
The specific details of each component of the model are as follows:
(A) traffic information fusion component
The traffic information fusion component comprises two modules: a road segment vector mapping module (RSV Embedding Model) and a path encoding module (Path Encoder). The road segment vector mapping module maps a road segment number into a road segment vector, compressing the feature space while retaining the upstream and downstream information of the road segment. The path encoding module captures the spatial structure characteristics of the trajectory and processes other external factors (such as taxi driver number and weather information) and basic information (such as start time and total distance) of a given path to generate a fusion feature matrix, which serves as the output of the traffic information fusion component and the input of the multi-view convolution attention mechanism component.
(a) Road section vector mapping module (RSV Embedding Model)
The road segment vector mapping module provided by the invention realizes the mapping of road segment vectors using a skip-gram method. For a road segment sequence {r_i}, i = 1, ..., N (N is the number of road segments in the sequence), a sliding window of length 3 is set, yielding for each road segment r_i in the sequence its upstream and downstream neighbors, as shown in fig. 3.
Denote the center road segment in each upstream/downstream segment point pair as r_c, the upstream segment as r_u, and the downstream segment as r_d; the constructed upstream and downstream segment point pairs are (r_c, r_u) and (r_c, r_d), respectively.
Then, pseudo-adjacent point pairs are constructed by sampling according to a probability P(r_i):
where freq(r_i) is the frequency with which road segment r_i occurs in all road segment sequences.
Then, a vector mapping neural network h = f(r) = σ(Wx + b) from road segment numbers to road segment vectors is constructed, where x is the encoding of segment number r, b is the bias vector to be trained, W is the mapping matrix to be trained, and σ is the sigmoid activation function. The neural network parameters for the center segment r_c, the upstream segment r_u, and the downstream segment r_d are W_c, b_c; W_u, b_u; and W_d, b_d, and the resulting segment vectors are h_c, h_u, and h_d, respectively. An upstream relation scoring function and a downstream relation scoring function are defined on these vectors, and the training goal of the vector mapping neural network is to make the inner product of the mapping vectors of real upstream and downstream segments as close to 1 as possible, and the inner product of the mapping vectors of pseudo upstream and downstream segments as close to 0 as possible, that is:
where up(r_i) and down(r_i) denote the true upstream and downstream segment points of road segment r_i, respectively, and Neg(r_i) denotes the pseudo-adjacent segment points of r_i.
Through training, the neural network parameters W_c, b_c; W_u, b_u; and W_d, b_d for the center, upstream, and downstream segments are obtained, yielding the road segment vector mapping neural networks f_c, f_u, and f_d, respectively.
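The pair construction and the inner-product scoring of mapped segment vectors can be sketched as follows. The vector dimension, segment count, one-hot encoding, and random initialization are illustrative assumptions; the training loop and negative sampling are omitted:

```python
import numpy as np

# Sketch of the segment-vector mapping idea: build (r_c, r_u) and (r_c, r_d)
# pairs with a length-3 sliding window, then score a pair by the inner
# product of sigmoid-mapped segment vectors h = sigma(W x + b).
# Dimensions and initialization are illustrative assumptions.

def build_pairs(seq):
    """(center, upstream) and (center, downstream) pairs from one sequence."""
    up_pairs = [(seq[i], seq[i - 1]) for i in range(1, len(seq))]
    down_pairs = [(seq[i], seq[i + 1]) for i in range(len(seq) - 1)]
    return up_pairs, down_pairs

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_segments, dim = 6, 4                      # assumed sizes
W_c, b_c = rng.normal(size=(dim, n_segments)), np.zeros(dim)
W_u, b_u = rng.normal(size=(dim, n_segments)), np.zeros(dim)

def map_segment(r, W, b):
    """h = sigma(W x + b) with x the one-hot encoding of segment number r."""
    x = np.eye(n_segments)[r]
    return sigmoid(W @ x + b)

def upstream_score(r_c, r_u):
    """Inner product of mapped vectors; trained toward 1 for real pairs."""
    return float(map_segment(r_c, W_c, b_c) @ map_segment(r_u, W_u, b_u))

up_pairs, down_pairs = build_pairs([0, 1, 2, 3])
```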
(b) Path coding module (Path Encoder)
For each GPS track point p_i on the track, the road network map matching performed in preprocessing yields the corresponding road segment number p_i.r_i; the road segment vectors corresponding to the GPS track point can then be obtained through the road segment vector mapping module, that is:
p_i.ev_c = f_c(p_i.r_i)
p_i.ev_u = f_u(p_i.r_i)
p_i.ev_d = f_d(p_i.r_i)
then, the trajectory feature vector traj is calculated as followsi:
where p_i denotes the i-th track point on the track, p_i.lon its longitude, and p_i.lat its latitude; p_i.ev_c, p_i.ev_u, and p_i.ev_d denote the road segment mapping vectors of the center, upstream, and downstream segments, respectively; p_i.dis denotes the length of the road segment corresponding to the track point; p_i.spd denotes the speed limit of that segment; W_loc denotes a neural network parameter matrix to be trained; and the features above are joined by a concatenation (splicing) operation.
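Since the traj_i formula itself is not reproduced in the text, a plausible sketch is to concatenate the listed per-point features and apply the trainable matrix W_loc; the tanh nonlinearity and all dimensions here are assumptions:

```python
import numpy as np

# Hypothetical sketch of forming traj_i: concatenate the listed per-point
# features (longitude, latitude, three segment vectors, segment length,
# speed limit) and apply a trainable matrix W_loc. The tanh activation
# and all dimensions are assumptions; the patent's exact formula may differ.

dim = 4                      # assumed segment-vector dimension
feat_len = 2 + 3 * dim + 2   # lon, lat, ev_c, ev_u, ev_d, dis, spd
out_len = 8                  # assumed output size of traj_i

rng = np.random.default_rng(1)
W_loc = rng.normal(size=(out_len, feat_len))

def traj_feature(lon, lat, ev_c, ev_u, ev_d, dis, spd):
    feats = np.concatenate([[lon, lat], ev_c, ev_u, ev_d, [dis, spd]])
    return np.tanh(W_loc @ feats)

ev = rng.normal(size=dim)
v = traj_feature(104.06, 30.67, ev, ev, ev, 0.45, 60.0)
```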
Then, the external attribute feature vector attr is calculated as follows:
where Dis denotes the total length of the track segment; WeekID_em, timeID_em, driverID_em, and Weather_em denote the vectors obtained by applying a simple embedding operation to the trip date, the trip time, the taxi driver number, and the weather information, respectively; and W_attr denotes a neural network parameter matrix to be trained.
Then, a one-dimensional convolution kernel of size 3 is used to convolve, for each track point on the track, the vector obtained by concatenating the trajectory feature vector traj_i with the external attribute feature vector attr, yielding the fusion feature matrix loc_f, as follows:
where the i-th row of the fusion feature matrix loc_f is produced by a one-dimensional convolution with kernel size 3 and the ELU function as activation, whose convolution kernel parameters are parameters to be trained.
In summary, the traffic information fusion component computes the fusion feature matrix loc_f from the input track information {p_i} as its output.
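The fusion step can be sketched with a plain NumPy size-3 convolution over the sequence of concatenated [traj_i ; attr] vectors; channel sizes and 'same' padding are illustrative assumptions:

```python
import numpy as np

# Sketch of the fusion step: each row of loc_f is a kernel-size-3
# one-dimensional convolution (ELU activation) over the sequence of
# concatenated vectors [traj_i ; attr]. Channel sizes and 'same'
# zero-padding are illustrative assumptions.

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def conv1d_fuse(traj, attr, kernel):
    """traj: (T, d_t) per-point features; attr: (d_a,) trip-level features.
    kernel: (3, d_t + d_a, d_out). Returns loc_f of shape (T, d_out)."""
    seq = np.concatenate([traj, np.tile(attr, (traj.shape[0], 1))], axis=1)
    padded = np.pad(seq, ((1, 1), (0, 0)))           # 'same' padding
    T, d_out = seq.shape[0], kernel.shape[2]
    loc_f = np.zeros((T, d_out))
    for i in range(T):
        window = padded[i:i + 3]                     # 3 consecutive positions
        loc_f[i] = elu(np.einsum("kc,kco->o", window, kernel))
    return loc_f

rng = np.random.default_rng(2)
loc_f = conv1d_fuse(rng.normal(size=(10, 6)), rng.normal(size=4),
                    rng.normal(size=(3, 10, 8)))
```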
(B) Multi-view convolution attention mechanism assembly
The multi-view convolution attention mechanism component is composed of a series of multi-view CNN Transformer modules (MCT); each MCT module comprises a spatial attention feature extraction model (Spatial Transformer) and a temporal attention feature extraction model (Temporal Transformer) for jointly learning spatiotemporal features under dynamically varying dependencies.
The multi-view convolution attention mechanism component improves the network structure of the spatial attention feature extraction model by adopting a multi-view convolution layer to extract multi-level spatially related information, making it better suited to the traffic travel time prediction problem. The component finally outputs a multi-view spatiotemporal matrix as the input of the multi-task learning component. Specifically, let the input of the i-th MCT module be X_{i-1} and its output be X_i. For the 1st MCT module, the input X_0 is the fusion feature matrix loc_f output by the traffic information fusion component; for the last (k-th) MCT module, the output X_k is the multi-view spatiotemporal matrix output by the multi-view convolution attention mechanism component.
The specific calculation steps of the MCT module are illustrated below using the i-th MCT module as a representative. First, the module input X_{i-1} is encoded by the position encoding method of the multi-head attention mechanism to obtain a position-aware representation, as follows:
where position(·) denotes a general position encoding (position embedding) method in the multi-head attention mechanism. Then, 3 one-dimensional convolutional neural networks with different convolution kernel sizes are used to extract spatial features over different receptive fields, which are then mapped nonlinearly to the target domain to obtain spatial feature matrices, as follows:
where the parameter matrix and the bias vector b_{i,j} corresponding to the j-th one-dimensional convolutional neural network in the i-th MCT module are to be trained, * denotes the convolution operation, softmax(·) denotes the normalized exponential function, and the result is the spatial feature matrix extracted by the j-th one-dimensional convolutional neural network of the i-th MCT module.
Then, the 3 spatial feature matrices are each dot-multiplied with the position-aware representation of the i-th MCT module and summed, and the multi-view spatial feature matrix of the i-th MCT module is obtained through nonlinear mapping, as follows:
where W_MVC denotes the matrix of the multi-view spatial mapping network to be trained, and ELU(·) denotes the ELU activation function. The output of the spatial attention feature extraction model of the i-th MCT module can then be obtained as:
where TransformerEncoder(·) denotes the encoder model (Encoder) in a general multi-head attention model (Multi-head Attention).
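The multi-view convolution step can be sketched as follows. The three kernel sizes (1, 3, 5), all tensor dimensions, and the 'same' padding are illustrative assumptions, and the Transformer encoder stage is omitted:

```python
import numpy as np

# Sketch of the multi-view convolution step: three 1-D convolutions with
# different kernel sizes (assumed 1, 3, 5) extract spatial features; each
# is softmax-normalized, dot-multiplied with the position-aware input,
# and the sum is nonlinearly mapped with ELU. All sizes are assumptions;
# the real module also applies a Transformer encoder afterwards.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def elu(x):
    return np.where(x > 0, x, np.exp(x) - 1.0)

def conv1d_same(x, kernel):
    """x: (T, C); kernel: (k, C, C) with odd k; returns (T, C)."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.einsum("kc,kco->o", xp[t:t + k], kernel)
                     for t in range(x.shape[0])])

def multi_view_spatial(x_pos, kernels, w_mvc):
    views = [softmax(conv1d_same(x_pos, k)) * x_pos for k in kernels]
    return elu(sum(views) @ w_mvc)

rng = np.random.default_rng(3)
T, C = 12, 8
x_pos = rng.normal(size=(T, C))
kernels = [rng.normal(size=(k, C, C)) * 0.1 for k in (1, 3, 5)]
out = multi_view_spatial(x_pos, kernels, rng.normal(size=(C, C)))
```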
Then, the output of the spatial attention feature extraction model is fed into the temporal attention feature extraction model, which uses an encoder model of the multi-head attention model in place of an LSTM to extract temporal relation features, yielding the output of the temporal attention feature extraction model of the i-th MCT module, as follows:
then, the output X of the ith MCT module is calculated as followsiThe following were used:
Following this procedure, the outputs of the MCT modules are computed in sequence from the 1st to the last (k-th) module; the final output X_k is the multi-view spatiotemporal matrix output by the multi-view convolution attention mechanism component.
(C) Multitask learning component
Based on the input multi-view spatiotemporal matrix X_k, the multi-task learning component performs nonlinear mapping through a multi-factor attention mechanism to finally obtain the prediction of the overall travel time. Specifically, let x_i (i = 1, ..., N) denote the i-th row of the multi-view spatiotemporal matrix X_k input to the multi-task learning component, where N is the total number of rows of X_k. First, the multi-factor attention vector x_attention is calculated as follows:
where the multi-factor weight factor α_i is calculated as follows:
z_i = ⟨attr, x_i⟩
where ⟨·,·⟩ denotes the vector inner product operation. Finally, the overall travel time t_entire of the track is calculated through a fully connected layer, as follows:
t_entire = W_entire · x_attention + b_entire
where W_entire denotes the parameter matrix of the fully connected layer to be trained, and b_entire denotes the bias vector of the fully connected layer to be trained. This completes the description of the structure of the traffic travel time prediction deep neural network model based on multi-view convolution and the attention mechanism.
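The prediction head can be sketched as follows. Since the α_i formula is not reproduced in the text, a softmax normalization of z_i = ⟨attr, x_i⟩ is assumed here, and all dimensions are illustrative:

```python
import numpy as np

# Sketch of the multi-task learning head: weights alpha_i are derived from
# z_i = <attr, x_i> (the softmax normalization is an assumption, since the
# patent's alpha formula is not reproduced here), the rows of X_k are
# combined into x_attention, and a fully connected layer yields t_entire.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_travel_time(X_k, attr, W_entire, b_entire):
    z = X_k @ attr                      # z_i = <attr, x_i> for each row
    alpha = softmax(z)                  # assumed normalization of z
    x_attention = alpha @ X_k           # weighted sum of rows of X_k
    return float(W_entire @ x_attention + b_entire)

rng = np.random.default_rng(4)
N, C = 10, 8
t = predict_travel_time(rng.normal(size=(N, C)), rng.normal(size=C),
                        rng.normal(size=C), 0.0)
```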
We chose the mean absolute percentage error (MAPE) as the model training error, as follows:
where gap(p_1, p_N) denotes the time interval of a track in the database from its starting point p_1 to its end point p_N, i.e., the overall travel time of the track.
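The MAPE training error can be sketched directly: the mean over trajectories of the absolute prediction error divided by the actual overall travel time gap(p_1, p_N):

```python
import numpy as np

# Sketch of the MAPE training error used above:
# mean over trajectories of |predicted - actual| / actual.

def mape(predicted, actual):
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(predicted - actual) / actual))

err = mape([110.0, 190.0], [100.0, 200.0])  # (0.10 + 0.05) / 2 = 0.075
```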
Model training is performed with the Adam gradient optimization algorithm; the initial learning rate is set to 1e-4, the batch size to 64, and the learning rate is reduced to 1/4 of its previous value every 2 epochs, with 100 epochs trained in total, finally yielding the traffic travel time prediction deep neural network model based on multi-view convolution and the attention mechanism.
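The stated schedule (initial rate 1e-4, multiplied by 1/4 every 2 epochs, 100 epochs) can be sketched as a step-decay function:

```python
# Sketch of the stated learning-rate schedule: initial rate 1e-4,
# multiplied by 1/4 every 2 epochs, for 100 epochs in total.

def learning_rate(epoch, base_lr=1e-4, drop_every=2, factor=0.25):
    """Learning rate used during the given (0-indexed) epoch."""
    return base_lr * factor ** (epoch // drop_every)

schedule = [learning_rate(e) for e in range(100)]
```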
After training of the traffic travel time prediction deep neural network model is completed, the model outputs the predicted travel time of the whole track given only the GPS track information to be predicted.
The method achieves urban traffic travel time prediction with higher precision and better stability.
Specifically, in simulation verification on the test data set, the mean absolute percentage error (MAPE) of the travel time prediction results of the proposed urban traffic travel time prediction method for multi-source heterogeneous data is 11.25%, and the mean absolute error (MAE) is 154.8 s. Compared with DeepTTE, a baseline method in the field of urban traffic travel time prediction, the proposed method has a relatively long training time; however, once model training is completed, its inference speed and memory footprint are comparable, and its prediction precision is about 1.1% higher than that of DeepTTE. Compared with DeepGTT, another recently proposed traffic travel time prediction method (which combines statistical learning and deep learning and uses three layered probability models to predict the travel time distribution and reconstruct the travel path), the proposed method also achieves about 1% higher MAPE precision with comparable inference speed and memory footprint. The comparison of prediction accuracy at different travel times is shown in fig. 4.
It can be seen that for tracks with different travel time lengths, the performance of the invention is superior to that of the DeepTTE and DeepGTT models, showing that the invention performs better across travel times of different lengths; moreover, as travel time increases, the prediction accuracy of all three models decreases. However, compared with DeepTTE and DeepGTT, the accuracy of the proposed model degrades less and becomes relatively more stable as travel time increases.
The model prediction accuracy at different travel distances is shown in fig. 5. The MAE and MAPE of the proposed model on tracks of different travel distances are smaller than those of the DeepTTE and DeepGTT models, showing that the proposed model performs better across travel distances. As travel distance increases, the MAE of all three models gradually increases while their MAPE shows a downward trend: the absolute prediction error gradually grows with distance while the relative error gradually shrinks. The proposed model changes relatively less in this respect and thus has better stability.
Claims (10)
1. A method for predicting urban traffic travel time aiming at multi-source heterogeneous data, characterized by comprising the following three implementation steps:
step one: collecting traffic travel data and external attribute information, completing data preprocessing and labeling, and constructing a training set, a verification set and a test set suitable for a deep learning task;
step two: constructing a deep neural network model based on multi-view convolution and an attention mechanism, and training on the constructed data set to obtain the deep neural network model;
step three: inputting a travel track to be predicted, and running inference with the deep neural network model obtained by training to obtain a travel time prediction result.
2. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 1, characterized in that: the traffic travel data comprise a GPS track, travel time, travel date, taxi driver number, path length and real path travel time.
3. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 2, characterized in that: the external attribute information comprises an urban road network structure and weather information.
4. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 3, characterized in that: the training set, the verification set and the test set are constructed as follows: screening conditions are set according to common characteristics of traffic travel tracks, and obviously abnormal tracks are removed; then, each point of the GPS track is projected onto its corresponding road segment through the Leuven.MapMatching map matching tool to obtain a road segment track sequence; the data are then divided into a training data set, a validation data set, and a test data set at a 2:1:1 ratio.
5. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 4, characterized in that: the screening conditions are: travel distance greater than 100 km or less than 0.5 km, average speed greater than 100 km/h or less than 5 km/h, or travel time greater than 7200 seconds or less than 60 seconds.
6. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 5, characterized in that: the deep neural network model based on multi-view convolution and an attention mechanism is constructed as follows: the model architecture is formed by a traffic information fusion component, a multi-view convolution attention mechanism component and a multi-task learning component.
7. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 6, characterized in that: the traffic information fusion component comprises two modules: a road segment vector mapping module and a path coding module; the road segment vector mapping module realizes the mapping of road segment vectors by using a skip-gram method: for a road segment sequence {r_i}, i = 1, ..., N, where N is the number of road segments in the sequence, a sliding window of length 3 is set, yielding for each road segment r_i in the sequence its upstream and downstream neighbors; the center road segment in each upstream/downstream segment point pair is denoted r_c, the upstream segment r_u, and the downstream segment r_d, and the constructed upstream and downstream segment point pairs are (r_c, r_u) and (r_c, r_d), respectively,
Then, according to the probability P (r)i) Sampling structure pseudo-adjacent point pair:
where freq(r_i) is the frequency with which road segment r_i occurs in all road segment sequences,
then, a vector mapping neural network h = f(r) = σ(Wx + b) from road segment numbers to road segment vectors is constructed, where x is the encoding of segment number r, b is the bias vector to be trained, W is the mapping matrix to be trained, and σ is the sigmoid activation function; the neural network parameters for the center segment r_c, the upstream segment r_u, and the downstream segment r_d are W_c, b_c; W_u, b_u; and W_d, b_d, and the resulting segment vectors are h_c, h_u, and h_d, respectively; an upstream relation scoring function and a downstream relation scoring function are defined on these vectors, and the training goal of the vector mapping neural network is to make the inner product of the mapping vectors of real upstream and downstream segments as close to 1 as possible, and the inner product of the mapping vectors of pseudo upstream and downstream segments as close to 0 as possible, namely:
where up(r_i) and down(r_i) denote the true upstream and downstream segment points of road segment r_i, respectively, and Neg(r_i) denotes the pseudo-adjacent segment points of r_i,
the neural network parameters W_c, b_c; W_u, b_u; and W_d, b_d for the center, upstream, and downstream segments are obtained through training, yielding the road segment vector mapping neural networks f_c, f_u, and f_d, respectively;
the path coding module operates as follows: for each GPS track point p_i on the track, the road network map matching performed in preprocessing yields the corresponding road segment number p_i.r_i; the road segment vectors corresponding to the GPS track point can then be obtained through the road segment vector mapping module, that is:
p_i.ev_c = f_c(p_i.r_i)
p_i.ev_u = f_u(p_i.r_i)
p_i.ev_d = f_d(p_i.r_i)
then, the trajectory feature vector traj_i is calculated as follows:
where p_i denotes the i-th track point on the track, p_i.lon its longitude, and p_i.lat its latitude; p_i.ev_c, p_i.ev_u, and p_i.ev_d denote the road segment mapping vectors of the center, upstream, and downstream segments, respectively; p_i.dis denotes the length of the road segment corresponding to the track point; p_i.spd denotes the speed limit of that segment; W_loc denotes a neural network parameter matrix to be trained; and the features above are joined by a concatenation (splicing) operation,
then, the external attribute feature vector attr is calculated as follows:
where Dis denotes the total length of the track segment; WeekID_em, timeID_em, driverID_em, and Weather_em denote the vectors obtained by applying a simple embedding operation to the trip date, the trip time, the taxi driver number, and the weather information, respectively; and W_attr denotes a neural network parameter matrix to be trained,
then, a one-dimensional convolution kernel of size 3 is used to convolve, for each track point on the track, the vector obtained by concatenating the trajectory feature vector traj_i with the external attribute feature vector attr, yielding the fusion feature matrix loc_f, as follows:
where the i-th row of the fusion feature matrix loc_f is produced by a one-dimensional convolution with kernel size 3 and the ELU function as activation, whose convolution kernel parameters are parameters to be trained,
finally, the traffic information fusion component computes the fusion feature matrix loc_f from the input track information {p_i} as its output.
8. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 7, characterized in that: the multi-view convolution attention mechanism component consists of a series of multi-view convolution attention modules (MCT); each MCT module comprises a spatial attention feature extraction model and a temporal attention feature extraction model, and the component finally outputs a multi-view spatiotemporal matrix as the input of the multi-task learning component;
specifically, let the input of the i-th MCT module be X_{i-1} and its output be X_i; for the 1st MCT module, the input X_0 is the fusion feature matrix loc_f output by the traffic information fusion component; for the last MCT module, denoted the k-th, the output X_k is the multi-view spatiotemporal matrix output by the multi-view convolution attention mechanism component,
for any i-th MCT module, the calculation process is as follows: first, the module input X_{i-1} is encoded by the position encoding method of the multi-head attention mechanism to obtain a position-aware representation, as follows:
where position(·) denotes a general position encoding method in the multi-head attention mechanism; then, 3 one-dimensional convolutional neural networks with different convolution kernel sizes are used to extract spatial features over different receptive fields, which are then mapped nonlinearly to the target domain to obtain spatial feature matrices, as follows:
where the parameter matrix and the bias vector b_{i,j} corresponding to the j-th one-dimensional convolutional neural network in the i-th MCT module are to be trained, * denotes the convolution operation, softmax(·) denotes the normalized exponential function, and the result is the spatial feature matrix extracted by the j-th one-dimensional convolutional neural network of the i-th MCT module,
then, the 3 spatial feature matrices are each dot-multiplied with the position-aware representation of the i-th MCT module and summed, and the multi-view spatial feature matrix of the i-th MCT module is obtained through nonlinear mapping, as follows:
where W_MVC denotes the matrix of the multi-view spatial mapping network to be trained, and ELU(·) denotes the ELU activation function; the output of the spatial attention feature extraction model of the i-th MCT module can then be obtained as:
where TransformerEncoder(·) denotes the encoder model (Encoder) in a general multi-head attention model (Multi-head Attention),
then, the output of the spatial attention feature extraction model is fed into the temporal attention feature extraction model, which uses an encoder model of the multi-head attention model in place of an LSTM to extract temporal relation features, yielding the output of the temporal attention feature extraction model of the i-th MCT module, as follows:
then, the output X_i of the i-th MCT module is calculated as follows:
iterating i in this way over all MCT modules starting from the 1st, the output X_k of the last module is the multi-view spatiotemporal matrix output by the multi-view convolution attention mechanism component.
9. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 8, characterized in that: based on the input multi-view spatiotemporal matrix X_k, the multi-task learning component performs nonlinear mapping through a multi-factor attention mechanism to finally obtain the overall travel time prediction; specifically, let x_i (i = 1, ..., N) denote the i-th row of the multi-view spatiotemporal matrix X_k input to the multi-task learning component, where N is the total number of rows of X_k; first, the multi-factor attention vector x_attention is calculated as follows:
where the multi-factor weight factor α_i is calculated as follows:
z_i = ⟨attr, x_i⟩
where ⟨·,·⟩ denotes the vector inner product operation; finally, the overall travel time t_entire of the track is calculated through a fully connected layer, as follows:
t_entire = W_entire · x_attention + b_entire
where W_entire denotes the parameter matrix of the fully connected layer to be trained, and b_entire denotes the bias vector of the fully connected layer to be trained.
10. The method for predicting urban traffic travel time aiming at multi-source heterogeneous data according to claim 9, characterized in that: the training method is as follows: the mean absolute percentage error is chosen as the model training error, as follows:
where gap(p_1, p_N) denotes the time interval of a track in the database from its starting point p_1 to its end point p_N, i.e., the overall travel time of the track,
model training is performed with the Adam gradient optimization algorithm; the initial learning rate is set to 1e-4, the batch size to 64, the learning rate is reduced to 1/4 of its previous value every 2 epochs, and 100 epochs are trained in total, finally yielding the traffic travel time prediction deep neural network model based on multi-view convolution and the attention mechanism.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111514594.4A CN114202120A (en) | 2021-12-13 | 2021-12-13 | Urban traffic travel time prediction method aiming at multi-source heterogeneous data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111514594.4A CN114202120A (en) | 2021-12-13 | 2021-12-13 | Urban traffic travel time prediction method aiming at multi-source heterogeneous data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114202120A true CN114202120A (en) | 2022-03-18 |
Family
ID=80652704
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111514594.4A Pending CN114202120A (en) | 2021-12-13 | 2021-12-13 | Urban traffic travel time prediction method aiming at multi-source heterogeneous data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114202120A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115221971A (en) * | 2022-07-28 | 2022-10-21 | 上海人工智能创新中心 | Track prediction method based on heterogeneous graph |
CN115565376A (en) * | 2022-09-30 | 2023-01-03 | 福州大学 | Vehicle travel time prediction method and system fusing graph2vec and double-layer LSTM |
CN115619052A (en) * | 2022-12-20 | 2023-01-17 | 安徽农业大学 | Urban traffic flow prediction method |
CN116109021A (en) * | 2023-04-13 | 2023-05-12 | 中国科学院大学 | Travel time prediction method, device, equipment and medium based on multitask learning |
CN116541721A (en) * | 2023-03-31 | 2023-08-04 | 苏州大学 | Positioning and road network matching method and system for signaling data |
CN117831287A (en) * | 2023-12-29 | 2024-04-05 | 北京大唐高鸿数据网络技术有限公司 | Method, device, equipment and storage medium for determining highway congestion index |
CN117974075A (en) * | 2024-04-01 | 2024-05-03 | 法诺信息产业有限公司 | Smart city public information management system based on big data |
CN118052347A (en) * | 2024-04-16 | 2024-05-17 | 北京航空航天大学 | Travel time estimation method and system based on travel track sequence of floating car |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669594A (en) * | 2020-12-11 | 2021-04-16 | 国汽(北京)智能网联汽车研究院有限公司 | Method, device, equipment and storage medium for predicting traffic road conditions |
CN112669606A (en) * | 2020-12-24 | 2021-04-16 | 西安电子科技大学 | Traffic flow prediction method for training convolutional neural network by utilizing dynamic space-time diagram |
WO2021174876A1 (en) * | 2020-09-18 | 2021-09-10 | 平安科技(深圳)有限公司 | Smart decision-based population movement prediction method, apparatus, and computer device |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021174876A1 (en) * | 2020-09-18 | 2021-09-10 | 平安科技(深圳)有限公司 | Smart decision-based population movement prediction method, apparatus, and computer device |
CN112669594A (en) * | 2020-12-11 | 2021-04-16 | 国汽(北京)智能网联汽车研究院有限公司 | Method, device, equipment and storage medium for predicting traffic road conditions |
CN112669606A (en) * | 2020-12-24 | 2021-04-16 | 西安电子科技大学 | Traffic flow prediction method for training convolutional neural network by utilizing dynamic space-time diagram |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115221971A (en) * | 2022-07-28 | 2022-10-21 | 上海人工智能创新中心 | Track prediction method based on heterogeneous graph |
CN115565376A (en) * | 2022-09-30 | 2023-01-03 | 福州大学 | Vehicle travel time prediction method and system fusing graph2vec and double-layer LSTM |
CN115565376B (en) * | 2022-09-30 | 2024-05-03 | 福州大学 | Vehicle journey time prediction method and system integrating graph2vec and double-layer LSTM |
CN115619052A (en) * | 2022-12-20 | 2023-01-17 | 安徽农业大学 | Urban traffic flow prediction method |
CN115619052B (en) * | 2022-12-20 | 2023-03-17 | 安徽农业大学 | Urban traffic flow prediction method |
CN116541721A (en) * | 2023-03-31 | 2023-08-04 | 苏州大学 | Positioning and road network matching method and system for signaling data |
CN116109021A (en) * | 2023-04-13 | 2023-05-12 | 中国科学院大学 | Travel time prediction method, device, equipment and medium based on multitask learning |
CN116109021B (en) * | 2023-04-13 | 2023-08-15 | 中国科学院大学 | Travel time prediction method, device, equipment and medium based on multitask learning |
CN117831287A (en) * | 2023-12-29 | 2024-04-05 | 北京大唐高鸿数据网络技术有限公司 | Method, device, equipment and storage medium for determining highway congestion index |
CN117974075A (en) * | 2024-04-01 | 2024-05-03 | 法诺信息产业有限公司 | Smart city public information management system based on big data |
CN118052347A (en) * | 2024-04-16 | 2024-05-17 | 北京航空航天大学 | Travel time estimation method and system based on travel track sequence of floating car |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114202120A (en) | Urban traffic travel time prediction method aiming at multi-source heterogeneous data | |
CN109670277B (en) | Travel time prediction method based on multi-mode data fusion and multi-model integration | |
CN111400620B (en) | User trajectory position prediction method based on space-time embedded Self-orientation | |
CN110570651B (en) | Road network traffic situation prediction method and system based on deep learning | |
CN109697852B (en) | Urban road congestion degree prediction method based on time sequence traffic events | |
Cai et al. | Environment-attention network for vehicle trajectory prediction | |
CN111832814A (en) | Air pollutant concentration prediction method based on graph attention machine mechanism | |
CN109272157A (en) | A kind of freeway traffic flow parameter prediction method and system based on gate neural network | |
Rahmani et al. | Graph neural networks for intelligent transportation systems: A survey | |
CN112017436B (en) | Method and system for predicting urban traffic travel time | |
Katariya et al. | Deeptrack: Lightweight deep learning for vehicle trajectory prediction in highways | |
CN112633602B (en) | Traffic congestion index prediction method and device based on GIS map information | |
CN115204478A (en) | Public traffic flow prediction method combining urban interest points and space-time causal relationship | |
CN114495500B (en) | Traffic prediction method based on dual dynamic space-time diagram convolution | |
CN115510174A (en) | Road network pixelation-based Wasserstein generation countermeasure flow data interpolation method | |
CN113159403A (en) | Method and device for predicting pedestrian track at intersection | |
CN112884014A (en) | Traffic speed short-time prediction method based on road section topological structure classification | |
CN116307152A (en) | Traffic prediction method for space-time interactive dynamic graph attention network | |
Vijayalakshmi et al. | Multivariate Congestion Prediction using Stacked LSTM Autoencoder based Bidirectional LSTM Model. | |
Chuanxia et al. | Machine learning and IoTs for forecasting prediction of smart road traffic flow | |
CN117610734A (en) | Deep learning-based user behavior prediction method, system and electronic equipment | |
CN116542391B (en) | Urban area passenger flow volume prediction method based on big data | |
CN115565376B (en) | Vehicle journey time prediction method and system integrating graph2vec and double-layer LSTM | |
Liu et al. | MCT‐TTE: Travel Time Estimation Based on Transformer and Convolution Neural Networks | |
CN116386020A (en) | Method and system for predicting exit flow of highway toll station by multi-source data fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||