CN112767682A - Multi-scale traffic flow prediction method based on graph convolution neural network - Google Patents
- Publication number
- CN112767682A CN202011513907.XA
- Authority
- CN
- China
- Prior art keywords
- grained
- data
- coarse
- fine
- traffic flow
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
- G08G1/0129—Traffic data processing for creating historical data or processing based on historical data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0137—Measuring and analyzing of parameters relative to traffic conditions for specific applications
Abstract
The invention discloses a multi-scale urban traffic flow prediction method based on a graph neural network, which first studies the new problem of predicting multi-scale (fine-grained and coarse-grained) traffic flow. Specifically, given a road graph, a coarse-grained road graph is first constructed based on topological proximity and traffic-flow similarity between nodes (road links); a Cross-Scale graph convolution (Cross-Scale GCN) is then proposed to extract and fuse the fine-grained and coarse-grained traffic flow features. Temporal features are extracted with an LSTM augmented with Intra-Attention and Inter-Attention, and a structural constraint is introduced to keep the prediction results of the two data scales consistent. The scheme shows excellent performance in both fine-grained and coarse-grained flow prediction and improves prediction accuracy.
Description
Technical Field
The invention provides a multi-scale traffic flow prediction method based on a graph convolutional neural network. It relates to the field of spatio-temporal big data prediction and is mainly used for predicting traffic flows of different granularities in cities, helping traffic departments alleviate traffic congestion.
Background
With the acceleration of urbanization in China, the contradiction between the growing urban population and limited space resources has become increasingly serious, and traffic congestion has become a major obstacle to urban development. Since the 1960s, countries around the world have studied urban traffic planning and urban traffic control, but with the continuous expansion of urban scale and the increasing complexity of traffic conditions, effective traffic management through these two measures alone is no longer feasible, and Intelligent Transportation Systems (ITS) have therefore emerged. An intelligent transportation system combines advanced physical communication equipment with intelligent computing technology to establish an information prediction and management system for the whole traffic network, and is currently the most comprehensive and effective way to address problems in the transportation field, including traffic congestion.
Urban traffic flow prediction is an important component of an intelligent transportation system and covers quantities such as traffic speed and traffic density. It has important research and application value in many fields. Most traditional prediction methods are statistical, including ARIMA, VAR and the like; statistics-based methods typically learn linear mapping models from historical traffic data to predict future trends. While such methods may achieve the desired performance in road-level traffic prediction, their performance degrades significantly when applied to prediction across a road network, where the correlation between roads is highly non-linear and dynamic, and none of these methods characterizes the spatio-temporal nature of the data well. With the advancement of technology, improvements in hardware, and the collection of large amounts of data, neural networks have been widely adopted for their excellent performance; with the introduction of convolutional neural networks, recurrent neural networks, and a series of their variants, various deep learning models have been used for traffic flow prediction. Many researchers have proposed new approaches, such as DCRNN, STGCN and TGCN; these neural-network-based methods learn features from large amounts of data, exploit the spatio-temporal characteristics of the data well, and achieve excellent performance. The above studies represent the existing technical exploration and further optimization of urban traffic flow prediction, but they still have limitations. First, most previous studies focus on predicting traffic conditions on each road, which can be regarded as fine-grained prediction. In many cases, however, coarse-grained predictions are also needed, such as predicting traffic flow between different urban areas covering multiple road links, to help governments better understand traffic conditions from a macroscopic perspective. This is particularly useful in city planning and public transport planning applications.
In summary, existing urban traffic flow prediction models usually ignore the mutual influence between data of different granularities, fall short in area-level prediction, and suffer from blurred predictions. As a result, existing methods often have low prediction accuracy and efficiency.
Disclosure of Invention
The purpose of the invention is as follows: in view of the shortcomings of the prior art, the invention aims to provide a multi-scale urban traffic flow prediction method based on a graph neural network that solves the problems described in the background art. By adopting the disclosed method, the spatio-temporal correlation of traffic data can be exploited effectively to predict traffic flow at different scales across a whole city, while maintaining high prediction accuracy under different conditions.
The technical scheme is as follows: a multi-scale urban traffic flow prediction method based on a graph neural network comprises the following specific steps:
Step one: data preprocessing
1) Obtain the assignment matrix A_fc1 based on the graph structure. The raw data is first processed to remove abnormal values. The fine-grained data is then processed with the Louvain community-detection algorithm to obtain the graph-structure-based fine-to-coarse mapping matrix A_fc1.
2) Obtain the assignment matrix A_fc2 based on node features. The fine-grained data of all time steps are collected into a new matrix, and spectral clustering is applied to this matrix to obtain the node-feature-based fine-to-coarse mapping matrix A_fc2.
3) Fuse the two mapping matrices to obtain the final mapping matrix. Specifically, an element-wise (dot) product of the two mapping matrices is taken, followed by a softmax, giving the final mapping matrix A_fc:
A_fc = softmax(A_fc1 ⊙ A_fc2)
4) The fine-grained data are then aggregated by summing or averaging to obtain the coarse-grained data X_c:
X_c = Agg(v_1, v_2, ..., v_n)
The fine-grained and coarse-grained data together form the historical urban traffic flow tensors required by the model.
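The following is a minimal sketch of Step One, assuming NumPy, NetworkX (>= 2.8, for louvain_communities) and scikit-learn; the function names and the 0..n-1 node indexing are illustrative assumptions, not part of the patent.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import SpectralClustering

def build_assignment(labels, n_fine, n_coarse):
    """One-hot assignment matrix (n_fine x n_coarse) from cluster labels."""
    A = np.zeros((n_fine, n_coarse))
    A[np.arange(n_fine), labels] = 1.0
    return A

def preprocess(G, X):
    """G: fine-grained road graph with nodes 0..n_fine-1, X: (T, n_fine) historical flows."""
    n_fine = G.number_of_nodes()

    # 1) Graph-structure assignment A_fc1 via Louvain community detection
    communities = nx.algorithms.community.louvain_communities(G, seed=0)
    labels1 = np.zeros(n_fine, dtype=int)
    for c, members in enumerate(communities):
        for v in members:
            labels1[v] = c
    n_coarse = len(communities)
    A_fc1 = build_assignment(labels1, n_fine, n_coarse)

    # 2) Node-feature assignment A_fc2 via spectral clustering of the flow profiles
    labels2 = SpectralClustering(n_clusters=n_coarse, random_state=0).fit_predict(X.T)
    A_fc2 = build_assignment(labels2, n_fine, n_coarse)

    # 3) Fuse the two mappings: element-wise product followed by a row-wise softmax
    M = A_fc1 * A_fc2
    A_fc = np.exp(M) / np.exp(M).sum(axis=1, keepdims=True)

    # 4) Aggregate fine-grained flows into coarse-grained flows (sum; averaging also works)
    X_c = X @ A_fc1
    return A_fc, X_c
```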
Step two: training neural networks
Train the whole network with the fine-grained and coarse-grained urban traffic flow data constructed in step one. The model has two parts: a spatial feature extraction module and a temporal feature extraction module. The spatial module contains an ordinary graph convolution (GCN) and a Cross-Scale graph convolution (Cross-Scale GCN); the GCNs extract the spatial features of each granularity separately, and the Cross-Scale GCN fuses the spatial features across granularities. The temporal feature extraction module contains Intra-Attention and Inter-Attention, where Intra-Attention enhances the ability to capture temporal correlation within the same granularity and Inter-Attention captures the temporal correlation between data of different granularities.
The input of the network is the fine-grained historical traffic feature matrix, the coarse-grained historical traffic feature matrix, and their corresponding adjacency matrices. An entry of an adjacency matrix indicates whether two nodes are connected: 1 if connected, 0 otherwise. The coarse- and fine-grained data are first convolved by their own GCNs, and the Cross-Scale GCN then fuses the features of the two granularities through the mapping matrix, with fine-grained information flowing to the coarse granularity and coarse-grained information flowing back to the fine granularity. The hidden representations are then extracted by an LSTM with Intra-Attention, the representations of the two granularities are fused through Inter-Attention, and a fully connected layer finally maps them to the data of each granularity to obtain the final predictions.
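As an illustration only, the following PyTorch sketch shows how the modules described above could be composed in one forward pass; CrossScaleGCN, IntraAttentionLSTM and InterAttention refer to the sketches given later in the detailed description, and all class and attribute names are assumptions rather than the patent's literal implementation.

```python
import torch
import torch.nn as nn

class MultiScaleTrafficModel(nn.Module):
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.gcn = CrossScaleGCN(d_in, d_hid)            # spatial features + cross-scale fusion
        self.lstm_f = IntraAttentionLSTM(d_hid, d_hid)   # temporal features, fine granularity
        self.lstm_c = IntraAttentionLSTM(d_hid, d_hid)   # temporal features, coarse granularity
        self.inter = InterAttention(d_hid)               # cross-scale temporal fusion
        self.out_f = nn.Linear(d_hid, 1)                 # fully connected output heads
        self.out_c = nn.Linear(d_hid, 1)

    def forward(self, X_f, X_c, A_f, A_c, A_fc):
        # X_f: (T, n_fine, d_in), X_c: (T, n_coarse, d_in); A_*: adjacency / assignment matrices
        H_f, H_c = zip(*[self.gcn(X_f[t], X_c[t], A_f, A_c, A_fc) for t in range(X_f.shape[0])])
        s = self.lstm_f(torch.stack(H_f).transpose(0, 1))    # (n_fine, d_hid)
        s_c = self.lstm_c(torch.stack(H_c).transpose(0, 1))  # (n_coarse, d_hid)
        Z, Z_c = self.inter(s, s_c, A_fc)                    # fused hidden representations
        return self.out_f(Z).squeeze(-1), self.out_c(Z_c).squeeze(-1)
```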
In addition, to guarantee consistency between the fine and coarse granularities, a structural constraint is added between the predicted and true values, ensuring that the prediction of each coarse-grained node agrees with the predictions of its corresponding fine-grained nodes. Let X denote the fine-grained ground truth, X̂ the fine-grained prediction, X^c the coarse-grained ground truth and X̂^c the coarse-grained prediction. Using the mean squared error loss, the objective function can finally be written in the form
L = ||X̂ − X||² + ||X̂^c − X^c||² + λ·||Agg(X̂) − X̂^c||²
where λ is a hyper-parameter weighting the structural constraint. The loss function is optimized with the Adam algorithm and the back-propagation algorithm, and an optimal solution is obtained when the algorithm converges.
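A hedged PyTorch sketch of this objective (mean squared error on both granularities plus the λ-weighted structural term); the use of the assignment matrix A_fc to aggregate the fine-grained predictions, and all tensor shapes, are assumptions consistent with the description rather than the patent's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_scale_loss(pred_f, true_f, pred_c, true_c, A_fc, lam=0.5):
    # pred_f/true_f: (batch, n_fine), pred_c/true_c: (batch, n_coarse), A_fc: (n_fine, n_coarse)
    agg_pred_c = pred_f @ A_fc                      # fine-grained predictions aggregated to coarse
    return (F.mse_loss(pred_f, true_f)              # fine-grained prediction error
            + F.mse_loss(pred_c, true_c)            # coarse-grained prediction error
            + lam * F.mse_loss(agg_pred_c, pred_c)) # structural consistency between the two scales

# Training as described: Adam + back-propagation until convergence, e.g.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = multi_scale_loss(pred_f, X_f_true, pred_c, X_c_true, A_fc)
# loss.backward(); optimizer.step()
```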
Step three: generating a prediction result
The fine-grained and coarse-grained urban traffic flow matrices of the first t time steps, {X_i | i = 1, ..., t}, together with the corresponding adjacency matrices, are input into the trained network model to obtain the predicted urban traffic flow of both granularities at the next time step, namely the fine-grained flow {X_i | i = t+1} and the coarse-grained flow {X^c_i | i = t+1}.
As a further preferable aspect of the present invention, in the second step, the spatial convolution module and the temporal feature extraction module are specifically designed as follows:
For the data of the two granularities, two independent graph neural networks are used. The spatial convolution module first contains an ordinary GCN, which aggregates the feature information of local nodes, and additionally a Cross-Scale GCN, which adds the result of convolving the coarse-grained features onto the fine-grained features so that information is transferred between the two granularities. By passing information within each graph we learn better node representations, and by using the coarse-to-fine and fine-to-coarse information flows we exchange information between the two representations, which allows nodes in the graph to capture the characteristics of distant nodes. For the temporal feature extraction module we use an LSTM augmented with Attention: when the input sequence is very long, it is difficult for the model to learn a reasonable vector representation, so Intra-Attention is added so that the data at the last time step can weight the data of all previous time steps differently. Given the input history {X_1, X_2, ..., X_T}, the model predicts the data X_{T+1} at the next time step, and with Intra-Attention it learns which historical time steps are more important for that prediction. To model the temporal correlation between the two scales of data, an Inter-Attention mechanism is also designed. Since the feature dimensions of the two scales differ, they first have to be converted into the same feature space: the coarse-grained features are upsampled into the fine-grained feature space, Inter-Attention is applied, and finally an MLP maps the fused representations to the data of each granularity to obtain the final predictions.
Beneficial effects: aiming at the problem of urban traffic flow prediction, the invention provides an urban traffic flow prediction method based on a graph neural network. Compared with the prior art, the technical scheme of the invention has the following technical effects:
1) The invention is the first to study the multi-scale traffic prediction problem and proposes a multi-task spatio-temporal network model to realize urban traffic flow prediction at different scales.
2) A cross-scale spatio-temporal feature learning mechanism is proposed, comprising a Cross-Scale GCN layer that effectively fuses cross-scale spatial features and a hierarchical attention mechanism that captures cross-scale temporal correlation, making the model predictions more accurate.
3) To ensure the consistency of the multi-scale traffic prediction results, structural constraints are designed for the objective function.
Drawings
FIG. 1 is a method flow diagram;
FIG. 2 is a detailed design block diagram of a model;
- FIG. 3 is a schematic diagram of the Intra-Attention and Inter-Attention modules;
Detailed Description
The technical scheme of the invention is further explained in detail with reference to the attached drawings.
The overall flow of the urban traffic flow prediction method based on the graph neural network is shown in fig. 1. The preprocessed data are input into a module comprising spatial feature extraction and temporal feature extraction to generate the coarse-grained and fine-grained urban traffic flow at the future time step. The data of the two granularities can interact; specifically, the invention constructs two sets of data as input:
X_T: the fine-grained traffic flow data at the n time steps before the prediction point, X_T = {x_t | t = 1, ..., n}, and the corresponding coarse-grained traffic flow data X^c_T defined analogously.
The specific process of the multi-scale urban traffic flow prediction method based on a graph neural network disclosed by the invention is as follows:
Step one: data preprocessing
5) Obtain the assignment matrix A_fc1 based on the graph structure. The raw data is first processed to remove abnormal values. The fine-grained data is then processed with the Louvain community-detection algorithm to obtain the graph-structure-based fine-to-coarse mapping matrix A_fc1.
6) Obtain the assignment matrix A_fc2 based on node features. The fine-grained data of all time steps are collected into a new matrix, and spectral clustering is applied to this matrix to obtain the node-feature-based fine-to-coarse mapping matrix A_fc2.
7) Fuse the two mapping matrices to obtain the final mapping matrix. Specifically, an element-wise (dot) product of the two mapping matrices is taken, followed by a softmax, giving the final mapping matrix A_fc:
A_fc = softmax(A_fc1 ⊙ A_fc2)
8) The fine-grained data are then aggregated by summing or averaging to obtain the coarse-grained data X_c:
X_c = Agg(v_1, v_2, ..., v_n)
The fine-grained and coarse-grained data together form the historical urban traffic flow tensors required by the model.
Step two: training neural networks
Train the whole network with the fine-grained and coarse-grained urban traffic flow data constructed in step one. As shown in fig. 2, the model has two parts: a spatial feature extraction module and a temporal feature extraction module. The spatial module contains a GCN and a Cross-Scale GCN; the GCNs extract the spatial features of each granularity separately, and the Cross-Scale GCN fuses the features across granularities. The temporal module contains Intra-Attention and Inter-Attention, where Intra-Attention enhances the ability to capture temporal correlation within the same granularity and Inter-Attention captures the temporal correlation between data of different granularities.
The input of the network is the fine-grained historical traffic feature matrix {X_1, X_2, ..., X_T} and the coarse-grained historical traffic feature matrix {X^c_1, X^c_2, ..., X^c_T}. In addition, to guarantee consistency between the fine and coarse granularities, a structural constraint is added between the predicted and true values, ensuring that the prediction of each coarse-grained node agrees with the predictions of its corresponding fine-grained nodes.
Let X denote the fine-grained ground truth, X̂ the fine-grained prediction, X^c the coarse-grained ground truth and X̂^c the coarse-grained prediction. Using the mean squared error loss, the objective function can finally be written in the form
L = ||X̂ − X||² + ||X̂^c − X^c||² + λ·||Agg(X̂) − X̂^c||²
where λ is a hyper-parameter weighting the structural constraint. The loss function is optimized with the Adam algorithm and the back-propagation algorithm, and an optimal solution is obtained when the algorithm converges.
For the data of the two granularities, two independent graph neural networks are used. The spatial convolution module first contains an ordinary GCN, which aggregates the feature information of local nodes; since distant traffic flow can also influence the node at the current position, a Cross-Scale GCN is added so that nodes can capture information from distant nodes: the result of convolving the coarse-grained features is added onto the fine-grained features, and the two granularities thereby transfer information to each other. By passing information within each graph we learn better node representations, and by using the coarse-to-fine and fine-to-coarse information flows we exchange information between the two representations, which allows nodes in the graph to capture the characteristics of distant nodes. This overcomes some known limitations of classical graph neural networks, such as capturing information from distant nodes while still training effectively. The Cross-Scale GCN can be written as each granularity's own graph convolution plus a term that maps the other granularity's features through the assignment matrix A_fc.
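Since the original formula is given only by reference, the following is a hedged PyTorch sketch of one possible Cross-Scale GCN layer: each granularity applies its own graph convolution and adds a term mapped from the other granularity through the assignment matrix A_fc. The weight placement and activation are assumptions, not the patent's literal formula.

```python
import torch
import torch.nn as nn

class CrossScaleGCN(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_f = nn.Linear(d_in, d_out)    # fine-grained self convolution
        self.w_c = nn.Linear(d_in, d_out)    # coarse-grained self convolution
        self.w_cf = nn.Linear(d_in, d_out)   # coarse -> fine transfer
        self.w_fc = nn.Linear(d_in, d_out)   # fine -> coarse transfer

    def forward(self, H_f, H_c, A_f, A_c, A_fc):
        # H_f: (n_fine, d_in), H_c: (n_coarse, d_in)
        # A_f, A_c: normalized adjacency matrices; A_fc: (n_fine, n_coarse) assignment matrix
        H_f_new = torch.relu(A_f @ self.w_f(H_f) + A_fc @ self.w_cf(H_c))    # coarse flows to fine
        H_c_new = torch.relu(A_c @ self.w_c(H_c) + A_fc.T @ self.w_fc(H_f))  # fine flows to coarse
        return H_f_new, H_c_new
```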
the spatial convolution module we use LSTM and improve, as shown in fig. 3, when the input sequence is very long, it is difficult for the model to learn a reasonable vector representation. The Intra-attribute is added, so that the data of the last moment can consider the data of all the previous moments in different emphasis, and the most core operation is a string of weight parameters, the importance degree of each element is learned from the sequence, and then the elements are combined according to the importance degree. The weighting parameter is a coefficient of attention allocation, which element is assigned more or less attention. The Attention mechanism is implemented by retaining intermediate output results of the LSTM encoder on the input sequence, then training a model to selectively learn these inputs and associate the output sequence with them as the model is output. The input history data is { X1,X2,...,XTPredicting data { X) at next timet+1And after the Intra-attribute is added, the model can acquire the historical data at which moment is more important for predicting the next moment. The Intra-anchorage formula is as follows
f_t = σ(W_f [h_{t−1}, H_t] + b_f)
i_t = σ(W_i [h_{t−1}, H_t] + b_i)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_c [h_{t−1}, H_t] + b_c)
o_t = σ(W_o [h_{t−1}, H_t] + b_o),  h_t = o_t ⊙ tanh(c_t)
[α_1, α_2, ..., α_m] = align(h_t, h_m)
where σ denotes the activation function, ⊙ denotes the Hadamard product, and f_t, i_t, o_t denote the forget gate, input gate and output gate respectively; c_t and h_t are the memory cell and the hidden feature. align denotes the Intra-Attention similarity function, and s, the attention-weighted combination of the hidden states, denotes the final hidden representation.
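A hedged PyTorch sketch of the Intra-Attention step over the LSTM hidden states; the dot-product form of align and the use of the last hidden state as the query are common choices assumed here, not necessarily the patent's exact similarity function.

```python
import torch
import torch.nn as nn

class IntraAttentionLSTM(nn.Module):
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.lstm = nn.LSTM(d_in, d_hid, batch_first=True)

    def forward(self, H):                           # H: (batch, T, d_in) graph-convolved features
        outputs, (h_T, _) = self.lstm(H)            # keep all intermediate hidden states
        query = h_T[-1].unsqueeze(1)                # last hidden state as query, (batch, 1, d_hid)
        scores = (outputs * query).sum(-1)          # align(h_t, h_i) as dot products, (batch, T)
        alpha = torch.softmax(scores, dim=-1)       # attention weights over the T time steps
        s = (alpha.unsqueeze(-1) * outputs).sum(1)  # weighted sum -> final hidden representation s
        return s
```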
To model the temporal correlation between the two scales of data, an Inter-Attention mechanism is also designed, as shown on the right side of fig. 3. Since the feature dimensions of the two scale data differ, they first have to be converted into the same feature space. The coarse-grained features are first upsampled into the fine-grained feature space, with s'_c denoting the resulting hidden feature; the fine-grained features are then mapped into the coarse-grained feature space, with s' denoting the resulting hidden feature. The Inter-Attention formulas are as follows:
Z = β_1 s + β_2 s'_c
Z_c = β_{c,1} s' + β_{c,2} s_c
where β denotes the Inter-Attention coefficients.
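A hedged PyTorch sketch of the Inter-Attention fusion; using the assignment matrix A_fc for the up/down-sampling between node spaces and a softmax over learned scores for the β coefficients are assumptions consistent with the formulas above.

```python
import torch
import torch.nn as nn

class InterAttention(nn.Module):
    def __init__(self, d_hid):
        super().__init__()
        self.proj_up = nn.Linear(d_hid, d_hid)    # feature map after coarse -> fine upsampling
        self.proj_down = nn.Linear(d_hid, d_hid)  # feature map after fine -> coarse mapping
        self.score = nn.Linear(d_hid, 1)

    def fuse(self, a, b):
        # beta coefficients from a softmax over learned scores of the two representations
        beta = torch.softmax(torch.cat([self.score(a), self.score(b)], dim=-1), dim=-1)
        return beta[..., :1] * a + beta[..., 1:] * b

    def forward(self, s, s_c, A_fc):
        # s: (n_fine, d_hid), s_c: (n_coarse, d_hid), A_fc: (n_fine, n_coarse)
        s_c_up = self.proj_up(A_fc @ s_c)     # s'_c: coarse hidden features in the fine node space
        s_down = self.proj_down(A_fc.T @ s)   # s': fine hidden features in the coarse node space
        Z = self.fuse(s, s_c_up)              # Z   = beta_1   * s  + beta_2   * s'_c
        Z_c = self.fuse(s_down, s_c)          # Z_c = beta_c,1 * s' + beta_c,2 * s_c
        return Z, Z_c
```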
Step three: generating a prediction result
The fine-grained traffic flow matrices {X_i | i = 1, ..., t} and coarse-grained traffic flow matrices {X^c_i | i = 1, ..., t} of the first t time steps, together with the corresponding adjacency matrices, are input into the trained network model to obtain the predicted urban traffic flow of both granularities at the next time step, namely the fine-grained flow {X_i | i = t+1} and the coarse-grained flow {X^c_i | i = t+1}.
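For illustration, an inference call with the hypothetical model sketched earlier might look like this, where X_f_hist and X_c_hist hold the t most recent fine- and coarse-grained traffic matrices:

```python
import torch

model.eval()
with torch.no_grad():
    pred_fine, pred_coarse = model(X_f_hist, X_c_hist, A_f, A_c, A_fc)
# pred_fine: fine-grained flow at time t+1, pred_coarse: coarse-grained flow at time t+1
```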
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.
Claims (2)
1. A multi-scale urban traffic flow prediction method based on a graph neural network, characterized by mainly comprising the following steps:
(1) Preprocess the observed data:
Two assignment matrices A_fc1 and A_fc2 are obtained based on the graph structure and the node features respectively. The two mapping matrices are fused to obtain the final mapping matrix; specifically, a dot-product operation followed by a softmax is performed on the two mapping matrices to obtain the final mapping matrix A_fc:
A_fc = softmax(A_fc1 ⊙ A_fc2)
The fine-grained data are then aggregated by summing or averaging to obtain the coarse-grained data X_c:
X_c = Agg(v_1, v_2, ..., v_n)
(2) Problem definition: given a road sensor network G, fine-grained traffic flow observations {X_1, X_2, ..., X_T} and coarse-grained traffic flow observations {X^c_1, X^c_2, ..., X^c_T}, the goal is to simultaneously predict the multi-scale traffic flow at the next time step;
(3) For the data of the two granularities, two independent graph neural networks are used. The spatial convolution module first contains an ordinary GCN, which aggregates the feature information of local nodes; since distant traffic flow can also influence the node at the current position, a Cross-Scale graph convolution (Cross-Scale GCN) is added so that nodes can capture information from distant nodes: the convolution result of the coarse-grained features is added onto the fine-grained features, and the two granularities thereby transfer information to each other. Better node representations are learned by passing information within each graph, and information is exchanged between the two representations using the coarse-to-fine and fine-to-coarse information flows, which allows the nodes in the graph to capture the characteristics of distant nodes while the two granularities assist each other in prediction;
(4) The graph-convolved data are input into an LSTM with Attention. When the input time sequence is very long, the model finds it difficult to learn a reasonable vector representation, so Intra-Attention is added so that the data of the last time step can attend to all previous time steps with different emphasis; the mechanism learns the importance of each time step from the sequence and combines the elements according to their importance. The weight parameters are the attention-allocation coefficients, assigning attention weights of different sizes to the elements. The Attention mechanism is implemented by retaining the intermediate outputs of the LSTM encoder over the input sequence, then training the model to selectively learn from these inputs and associate them with the output sequence when the model produces its output;
(5) To model the temporal correlation between the two scales of data, an Inter-Attention mechanism is also designed. Since the feature dimensions of the two scales differ, they need to be converted into the same feature space: the coarse-grained features are first upsampled to the fine-grained feature space and one Inter-Attention pass yields the fine-grained prediction, then the fine-grained features are mapped to the coarse-grained feature space and one Inter-Attention pass yields the coarse-grained prediction;
(6) The model is optimized with stochastic gradient descent and back-propagation, making the generated data more accurate.
2. The multi-scale urban traffic flow prediction method based on a graph neural network according to claim 1, characterized in that fine-grained and coarse-grained traffic flow spatio-temporal features are learned using an ordinary graph convolutional neural network (GCN), a Cross-Scale graph convolutional neural network (Cross-Scale GCN) and an LSTM with Intra-Attention and Inter-Attention. Information transfer can be achieved between the two granularities of traffic flow data: better node representations are learned by transferring information within each graph, and information is exchanged between the two representations using the coarse-to-fine and fine-to-coarse information flows, which allows the nodes in the graph to capture the characteristics of distant nodes. Intra-Attention and Inter-Attention help the model better capture the temporal characteristics and consider the data of all historical time steps with different emphasis. The method improves prediction accuracy and provides a more powerful and convenient auxiliary tool for urban traffic planning, path selection, traffic risk prediction and other applications.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011513907.XA CN112767682A (en) | 2020-12-18 | 2020-12-18 | Multi-scale traffic flow prediction method based on graph convolution neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011513907.XA CN112767682A (en) | 2020-12-18 | 2020-12-18 | Multi-scale traffic flow prediction method based on graph convolution neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112767682A true CN112767682A (en) | 2021-05-07 |
Family
ID=75694467
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011513907.XA Pending CN112767682A (en) | 2020-12-18 | 2020-12-18 | Multi-scale traffic flow prediction method based on graph convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112767682A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113361810A (en) * | 2021-06-30 | 2021-09-07 | 佳都科技集团股份有限公司 | Passenger flow volume prediction method, device, equipment and storage medium |
CN113569473A (en) * | 2021-07-19 | 2021-10-29 | 浙江大学 | Air separation pipe network oxygen long-term prediction method based on polynomial characteristic LSTM granularity calculation |
CN113673412A (en) * | 2021-08-17 | 2021-11-19 | 驭势(上海)汽车科技有限公司 | Key target object identification method and device, computer equipment and storage medium |
CN113947250A (en) * | 2021-10-22 | 2022-01-18 | 山东大学 | Urban fine-grained flow prediction method and system based on limited data resources |
CN113962460A (en) * | 2021-10-22 | 2022-01-21 | 山东大学 | Urban fine-grained flow prediction method and system based on space-time contrast self-supervision |
CN114038214A (en) * | 2021-10-21 | 2022-02-11 | 哈尔滨师范大学 | Urban traffic signal control system |
CN114052734A (en) * | 2021-11-24 | 2022-02-18 | 西安电子科技大学 | Electroencephalogram emotion recognition method based on progressive graph convolution neural network |
CN114326391A (en) * | 2021-12-13 | 2022-04-12 | 哈尔滨工程大学 | Building energy consumption prediction method |
CN114374619A (en) * | 2022-01-10 | 2022-04-19 | 昭通亮风台信息科技有限公司 | Internet of vehicles flow prediction method, system, equipment and storage medium |
CN114662792A (en) * | 2022-04-22 | 2022-06-24 | 广西财经学院 | Traffic flow prediction method of recurrent neural network based on convolution of dynamic diffusion graph |
CN114822033A (en) * | 2022-04-24 | 2022-07-29 | 山东交通学院 | Road network traffic flow data restoration method and system based on characteristic pyramid network |
CN114925836A (en) * | 2022-07-20 | 2022-08-19 | 中国海洋大学 | Urban traffic flow reasoning method based on dynamic multi-view graph neural network |
CN116110226A (en) * | 2023-02-14 | 2023-05-12 | 清华大学 | Traffic flow prediction method and device and electronic equipment |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113361810B (en) * | 2021-06-30 | 2024-04-26 | 佳都科技集团股份有限公司 | Passenger flow volume prediction method, device, equipment and storage medium |
CN113361810A (en) * | 2021-06-30 | 2021-09-07 | 佳都科技集团股份有限公司 | Passenger flow volume prediction method, device, equipment and storage medium |
CN113569473A (en) * | 2021-07-19 | 2021-10-29 | 浙江大学 | Air separation pipe network oxygen long-term prediction method based on polynomial characteristic LSTM granularity calculation |
CN113673412A (en) * | 2021-08-17 | 2021-11-19 | 驭势(上海)汽车科技有限公司 | Key target object identification method and device, computer equipment and storage medium |
CN113673412B (en) * | 2021-08-17 | 2023-09-26 | 驭势(上海)汽车科技有限公司 | Method and device for identifying key target object, computer equipment and storage medium |
CN114038214B (en) * | 2021-10-21 | 2022-05-27 | 哈尔滨师范大学 | Urban traffic signal control system |
CN114038214A (en) * | 2021-10-21 | 2022-02-11 | 哈尔滨师范大学 | Urban traffic signal control system |
CN113947250A (en) * | 2021-10-22 | 2022-01-18 | 山东大学 | Urban fine-grained flow prediction method and system based on limited data resources |
CN113962460B (en) * | 2021-10-22 | 2024-05-28 | 山东大学 | Urban fine granularity flow prediction method and system based on space-time comparison self-supervision |
CN113962460A (en) * | 2021-10-22 | 2022-01-21 | 山东大学 | Urban fine-grained flow prediction method and system based on space-time contrast self-supervision |
CN114052734A (en) * | 2021-11-24 | 2022-02-18 | 西安电子科技大学 | Electroencephalogram emotion recognition method based on progressive graph convolution neural network |
CN114326391A (en) * | 2021-12-13 | 2022-04-12 | 哈尔滨工程大学 | Building energy consumption prediction method |
CN114374619A (en) * | 2022-01-10 | 2022-04-19 | 昭通亮风台信息科技有限公司 | Internet of vehicles flow prediction method, system, equipment and storage medium |
CN114374619B (en) * | 2022-01-10 | 2024-08-13 | 昭通亮风台信息科技有限公司 | Internet of vehicles flow prediction method, system, equipment and storage medium |
CN114662792A (en) * | 2022-04-22 | 2022-06-24 | 广西财经学院 | Traffic flow prediction method of recurrent neural network based on convolution of dynamic diffusion graph |
CN114662792B (en) * | 2022-04-22 | 2023-01-20 | 广西财经学院 | Traffic flow prediction method of recurrent neural network based on dynamic diffusion graph convolution |
CN114822033A (en) * | 2022-04-24 | 2022-07-29 | 山东交通学院 | Road network traffic flow data restoration method and system based on characteristic pyramid network |
CN114925836B (en) * | 2022-07-20 | 2022-11-29 | 中国海洋大学 | Urban traffic flow reasoning method based on dynamic multi-view graph neural network |
CN114925836A (en) * | 2022-07-20 | 2022-08-19 | 中国海洋大学 | Urban traffic flow reasoning method based on dynamic multi-view graph neural network |
CN116110226A (en) * | 2023-02-14 | 2023-05-12 | 清华大学 | Traffic flow prediction method and device and electronic equipment |
CN116110226B (en) * | 2023-02-14 | 2023-08-08 | 清华大学 | Traffic flow prediction method and device and electronic equipment |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |