CN115376317A - Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network - Google Patents

Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network

Info

Publication number
CN115376317A
Authority
CN
China
Prior art keywords
traffic, data, traffic flow, convolution, road
Legal status
Granted
Application number
CN202211004685.8A
Other languages
Chinese (zh)
Other versions
CN115376317B (en)
Inventor
付蔚
吴志强
童世华
李正
刘庆
李明
吕贝哲
张锐
Current Assignee
Beijing Ironman Technology Co ltd
Chongqing University of Post and Telecommunications
Original Assignee
Beijing Ironman Technology Co ltd
Chongqing University of Post and Telecommunications
Application filed by Beijing Ironman Technology Co ltd, Chongqing University of Post and Telecommunications filed Critical Beijing Ironman Technology Co ltd
Priority to CN202211004685.8A priority Critical patent/CN115376317B/en
Publication of CN115376317A publication Critical patent/CN115376317A/en
Application granted granted Critical
Publication of CN115376317B publication Critical patent/CN115376317B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 - Traffic data processing
    • G08G1/0129 - Traffic data processing for creating historical data or processing based on historical data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 - Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0145 - Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Abstract

The invention belongs to the technical field of intelligent traffic, and particularly relates to a traffic flow prediction method based on dynamic graph convolution and a time sequence convolution network, which comprises the following steps: acquiring traffic flow data to be predicted at an intersection node and preprocessing the data; inputting the preprocessed traffic flow data into a trained traffic flow prediction model to obtain a traffic flow prediction result for the intersection node; and carrying out traffic control at the intersection according to the traffic flow prediction result. The method extracts the spatio-temporal characteristics of traffic flow data more completely and improves the accuracy of traffic flow prediction; by adopting time sequence graph convolution, it alleviates problems such as unstable model gradients and slow response to dynamic changes, and is therefore of practical significance for relieving urban traffic congestion and improving driving efficiency.

Description

Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to a traffic flow prediction method based on dynamic graph convolution and a time sequence convolution network.
Background
The integration of big data and artificial intelligence into traffic construction has driven the development of Intelligent Transportation Systems (ITS), which realize intelligent transportation functions such as real-time data acquisition and analysis, real-time traffic management and control, and dynamic traffic management. Traffic flow prediction is an important link in an ITS and enables real-time, dynamic prediction of traffic flow. Through traffic flow prediction technology, an ITS continuously predicts the condition of urban roads in the coming period, can identify traffic congestion events that are likely to occur, and regulates and guides traffic flow so as to keep urban roads clear. Current traffic flow prediction methods either take the traffic flow of spatially adjacent road sections as the independent variable and build a prediction model from historical time-series data, or take the change in the time dimension as the independent variable and apply popular intelligent learning algorithms for prediction, lacking research and analysis that combines the time and space dimensions. In addition, the spatio-temporal feature extraction of current traffic flow prediction methods is insufficient: when a graph convolutional neural network extracts spatial features, it constructs a static graph in which the relevance between nodes is represented by fixed weights, ignoring the fact that the relevance between nodes changes dynamically over time.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a traffic flow prediction method based on a dynamic graph convolution and a time sequence convolution network, which comprises the following steps: acquiring traffic flow data to be predicted in a crossing node, and preprocessing the traffic flow data to be predicted; inputting the preprocessed traffic flow data into a trained traffic flow prediction model to obtain a traffic flow prediction result of the intersection node; carrying out traffic control on the intersection according to the traffic flow prediction result;
the process of training the traffic flow prediction model comprises the following steps:
s1: acquiring traffic data, preprocessing the traffic data, and taking the preprocessed traffic data as a training set;
s2: constructing a traffic map, and constructing a static adjacency matrix according to the traffic map;
s3: inputting the static adjacency matrix and the data in the training set into a graph learning module to obtain a dynamic adjacency matrix in the current traffic network;
s4: extracting the space characteristics of the dynamic adjacent matrix by adopting a graph convolution neural network to obtain the space characteristics;
s5: time characteristic extraction is carried out on the data in the training set by adopting a time sequence convolution neural network to obtain time sequence characteristics;
s6: fusing the time sequence characteristics and the space characteristics to obtain information of the input data covering the space-time characteristic state;
s7: and calculating a loss function of the model, and finishing the training of the model when the loss function is minimum.
Preferably, the process of preprocessing the traffic data includes: constructing a traffic data exception screening rule, and screening traffic data according to the traffic data exception screening rule to obtain exception data; and repairing the abnormal data to obtain preprocessed data.
Further, the traffic data exception screening rule comprises the following: the traffic data include the vehicle speed, the traffic flow and the traffic flow occupancy. Abnormal data include records in which the speed, flow or occupancy is negative; records in which the flow exceeds the traffic capacity of the lane; records in which the occupancy exceeds 100%; records in which only one of the speed, flow and occupancy is nonzero, or only the flow and occupancy are nonzero, or only the speed and occupancy are nonzero; and records whose traffic data index calculated by the index algorithm does not reach the corresponding threshold.
Preferably, the process of constructing the static adjacency matrix includes:
s21: acquiring a road network structure diagram, acquiring all roads in the road network structure diagram that satisfy the spatial neighbor condition, and constructing a traffic graph G = (V, E_sp) from these roads; road V_i and road V_j satisfy the spatial neighbor condition when they are connected to the same road intersection; V represents the set of N roads and E_sp represents the connectivity between roads in actual space;
s22: calculating the geographic distance between road i and road j, calculating the weight of the edge between road i and road j according to this distance, and constructing the spatial adjacency matrix with the edge weights as elements.
Preferably, the process of processing the data in the static adjacency matrix and the training set by using the graph learning module includes:
s31: acquiring the traffic data X of the previous h time steps of the N roads in the training set;
s32: constructing a projection matrix P ∈ R^(h×d), wherein h is the number of selected time steps and d is the dimension of the weight vector parameter;
s33: multiplying the traffic data X by the projection matrix to convert the traffic data into traffic matrix data;
s34: inputting the traffic matrix data and the static adjacency matrix into the graph learning module to obtain the dynamic adjacency matrix of the current traffic network.
Further, the formula used by the graph learning module to process the traffic matrix data and the static adjacency matrix is:

A_ij = f(x̃_i, x̃_j, S) = ( S_ij · ReLU( K^T (x̃_i - x̃_j) ) ) / ( Σ_(r=1)^(N) S_ir · ReLU( K^T (x̃_i - x̃_r) ) )

wherein A_ij denotes an element of the dynamic adjacency matrix, f denotes the neural network, x̃_i denotes the traffic matrix data at road i, x̃_j denotes the traffic matrix data at road j, S denotes the static adjacency matrix, S_ij denotes an element of the spatial adjacency matrix S, ReLU denotes the activation function, K^T denotes the weight vector parameter, N denotes the number of roads in the current road network, and S_ir denotes the element of the fixed adjacency matrix for road i and road r.
Preferably, the formula for extracting the spatial features of the dynamic adjacency matrix with the graph convolutional neural network is:

H^(l+1) = σ( D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l) ), with Ã = A + I

wherein I is the identity matrix, Ã is the dynamic adjacency matrix with self-connections added, D̃ is the degree matrix of Ã, H^(l) is the feature (output) of layer l, W^(l) contains the parameters of layer l, and σ(·) represents the nonlinear activation function of the nonlinear model.
Preferably, the process of extracting the time characteristics of the data by using the time-series convolutional neural network includes:
s51: processing the input time sequence data by adopting a multilayer self-attention model to obtain the characteristics of different subspaces;
s52: merging the features of each subspace;
s53: performing convolution on the merged features with multiple layers of TCN dilated convolution, and keeping the time sequence length unchanged at each layer with a zero-padding strategy, to obtain the temporal high-dimensional features.
Preferably, a time series convolution neural network is adopted to extract the time characteristics of the data in the training set:
processing the input time sequence data by adopting a multilayer self-attention model to obtain the characteristics of different subspaces;
merging the features of each subspace; the calculation formulas are as follows:

MultiHead(Q, K, V) = Concat(head_1, ..., head_n) W_c
head_i = Attention(Q_i, K_i, V_i)
Q_i = X W_i^Q, K_i = X W_i^K, V_i = X W_i^V
Attention(Q_i, K_i, V_i) = softmax( Q_i K_i^T / √d_k ) V_i

wherein Q, K and V are the three subspaces obtained by a single attention head from the same input X; W_i^Q, W_i^K and W_i^V are learnable projection matrices; d_q, d_k and d_v denote the dimensions of Q, K and V, respectively; N is the number of road nodes, n is the number of attention heads, W_c is the projection matrix applied to the concatenated semantic representation, and T is the number of historical traffic flow parameters.
The merged features are convolved with multiple layers of TCN dilated convolution, and a zero-padding strategy keeps the time sequence length unchanged at each layer, yielding the temporal high-dimensional features.
The dilated convolution of the TCN uses a zero-padding strategy to keep the length of the time sequence unchanged and applies the same convolution kernel to all elements of the output sequence. The TCN is computed as:

Y_tcn = θ *_d X

wherein *_d denotes the dilated convolution, d is the dilation rate and θ is the temporal convolution kernel. At time t, the TCN result y_(i,t,p) of road node V_i on output channel p is expressed as:

y_(i,t,p) = Σ_(z=1)^(P) Σ_(a=0)^(A_τ-1) θ_(a,z,p) · x_(i, t-d·a, z)

wherein d is the dilation rate and θ_(a,z,p) are the elements of the convolution kernel, which together form the temporal convolution kernel θ ∈ R^(A_τ×P×P'); A_τ denotes the kernel length, P' denotes the number of output channels, and z indexes the feature dimension.
Multiple TCN layers are stacked to enlarge the receptive field on the time axis and obtain more outputs. To enlarge the receptive field, the dilation rate increases exponentially, i.e., d_l = 2^(l-1). The output of the l-th layer of the multi-head self-attention time convolution network is:

Y^(l) = σ( θ^(l) *_(d_l) Y^(l-1) )

wherein, when l = 0 (the input layer), the input of the network is the output of the multi-head self-attention model, X ∈ R^(T×N×P); θ^(l) ∈ R^(T×N×P') is the dilated convolution kernel of the TCN, σ(·) is a nonlinear function, and T is the number of historical traffic flow parameters.
Preferably, the fused high-dimensional spatio-temporal features are:

Y_i = W_tcn · Y_i^tcn + W_H · Y_i^H

wherein W_tcn and W_H are the weights of the temporal and spatial high-dimensional features, respectively, and Y_i^tcn and Y_i^H respectively represent the temporal and spatial high-dimensional features of the node V_i to be aggregated.
The invention has the following beneficial effects:
The method introduces a graph learning module into the graph convolutional neural network so that the adjacency structure changes dynamically with the traffic conditions of the actual road network, describing the spatial characteristics of urban traffic flow more comprehensively; a multi-head attention mechanism is placed in front of the time sequence convolution network to re-weight the influence of historical traffic states, thereby capturing the global trend of the traffic state. The invention can effectively extract the spatio-temporal characteristics of traffic flow and improve the traffic flow prediction accuracy.
Drawings
Fig. 1 is an overall flowchart of the traffic flow prediction method based on a time sequence graph convolution network according to the present invention;
fig. 2 is a flow chart of the present invention for preprocessing traffic data.
FIG. 3 is a diagram of a multi-head attention model of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A traffic flow prediction method based on dynamic graph convolution and time sequence convolution network includes: traffic flow data to be predicted in the intersection nodes are obtained, and the traffic flow data to be predicted are preprocessed; inputting the preprocessed traffic flow data into a trained traffic flow prediction model to obtain a traffic flow prediction result of the intersection node; and carrying out traffic control on the intersection according to the traffic flow prediction result.
A method of training a traffic flow prediction model, as shown in fig. 1, the method comprising:
s1: acquiring traffic data, preprocessing the traffic data, and taking the preprocessed traffic data as a training set;
s2: constructing a traffic map, and constructing a static adjacency matrix according to the traffic map;
s3: inputting the static adjacency matrix and the data in the training set into a graph learning module to obtain a dynamic adjacency matrix in the current traffic network;
s4: extracting the space characteristics of the dynamic adjacent matrix by adopting a graph convolution neural network to obtain the space characteristics;
s5: time characteristic extraction is carried out on the data in the training set by adopting a time sequence convolution neural network to obtain time sequence characteristics;
s6: fusing the time sequence characteristics and the space characteristics to obtain information of the input data covering the space-time characteristic state;
s7: and calculating a loss function of the model, and finishing the training of the model when the loss function is minimum.
The graph learning module adopted by the invention judges the current relevance between roads according to their connectivity and the current speed data: if the traffic flow between connected roads is large, the relevance is large; if roads are connected but the traffic flow between them is small, still regarding them as strongly related makes the extracted spatial features insufficiently accurate. The specific way in which the dynamic spatial relationship is obtained may therefore differ from implementation to implementation.
As shown in fig. 2, the process of preprocessing the traffic data includes: constructing a traffic data exception screening rule, screening the traffic data according to the rule to obtain abnormal data, and repairing the abnormal data to obtain the preprocessed data. The traffic data exception screening rule is as follows: the traffic data include the vehicle speed, the traffic flow and the traffic flow occupancy; abnormal data include records in which the speed, flow or occupancy is negative, records in which the flow exceeds the traffic capacity of the lane, records in which the occupancy exceeds 100%, records in which only one of the speed, flow and occupancy is nonzero, or only the flow and occupancy are nonzero, or only the speed and occupancy are nonzero, and records whose traffic data index calculated by the index algorithm does not reach the corresponding threshold.
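By way of illustration, a minimal sketch of such screening rules is shown below. The column names, the lane-capacity value, and the omission of the index-algorithm check (whose definition is not given here) are assumptions made only for this example.

```python
import pandas as pd

def flag_abnormal(df, lane_capacity=2000.0):
    """Flag abnormal traffic records according to the screening rules described
    above. Column names and the lane-capacity value are illustrative."""
    speed, flow, occ = df["speed"], df["flow"], df["occupancy"]
    negative = (speed < 0) | (flow < 0) | (occ < 0)            # any negative value
    over_capacity = flow > lane_capacity                       # flow exceeds lane capacity
    over_occupancy = occ > 100.0                               # occupancy above 100 %
    nonzero = (df[["speed", "flow", "occupancy"]] != 0).sum(axis=1)
    only_one = nonzero == 1                                    # only one field non-zero
    flow_occ_only = (speed == 0) & (flow != 0) & (occ != 0)    # only flow and occupancy non-zero
    speed_occ_only = (flow == 0) & (speed != 0) & (occ != 0)   # only speed and occupancy non-zero
    return (negative | over_capacity | over_occupancy
            | only_one | flow_occ_only | speed_occ_only)

# usage sketch
data = pd.DataFrame({"speed": [60.0, -5.0, 0.0],
                     "flow": [300, 200, 150],
                     "occupancy": [12.0, 8.0, 0.0]})
print(data[flag_abnormal(data)])   # rows considered abnormal
```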
The process of repairing the screened abnormal data according to the different missing situations comprises the following steps:
(1) Repair based on historical data: because the traffic flow has a certain periodicity, missing values can be repaired with traffic data recorded at the same time point on different dates:

x̂_i = (1/k) Σ_(j=1)^(k) x_i^(j)

wherein x̂_i is the repaired value at time i and x_i^(j) is the observation at the same time point on the j-th earlier day.
(2) Repair based on the time series:

x̂_i = Σ_(j=1)^(k) α_(i-j) · x_(i-j)

wherein α_(i-1) represents the equation parameter at time i-1, x_(i-1) represents the traffic flow data at time i-1, and k is the time length.
(3) Repair based on spatial correlation:

x̂_i^(j)(m, n) = γ_0 + γ_1 · x_i(m) + γ_2 · x_i(n)
x̂_j = α_i · x̂_i^(j)(m, n) + (1 - α_i) · x_(i-T)

wherein x̂_i is the traffic data repaired for time i; x_(i-T) is the detection value at time i-T; x̂_i^(j)(m, n) is the repair of the data of position j from position m and position n; k, α_i, γ_0, γ_1 and γ_2 are equation parameters; x_i(m) and x_i(n) are the detection values of position m and position n; and x̂_j is the final repair value for detector j.
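A minimal sketch of the three repair strategies under simplifying assumptions is given below: the historical average uses equal weights, and the autoregressive weights and regression coefficients are taken as given; all concrete values are illustrative only.

```python
import numpy as np

def repair_historical(same_time_prev_days):
    """(1) History-based repair: average of observations taken at the same time
    of day on previous days (equal weights assumed for illustration)."""
    return float(np.mean(same_time_prev_days))

def repair_time_series(prev_values, alphas):
    """(2) Time-series-based repair: weighted combination of the previous k
    observations; the weights alpha_(i-j) are assumed to be given."""
    return float(np.dot(np.asarray(alphas, float), np.asarray(prev_values, float)))

def repair_spatial(x_m, x_n, gamma):
    """(3) Spatial-correlation-based repair: linear combination of the detection
    values of neighbouring positions m and n."""
    g0, g1, g2 = gamma
    return g0 + g1 * x_m + g2 * x_n

# usage sketch
print(repair_historical([310, 295, 305]))                     # same time point on 3 earlier days
print(repair_time_series([300, 310, 320], [0.5, 0.3, 0.2]))   # previous k = 3 observations
print(repair_spatial(280, 330, (5.0, 0.45, 0.5)))             # neighbouring detectors m and n
```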
The specific operation of constructing the static spatial adjacency matrix comprises the following steps:
Step 1: if road V_i and road V_j are connected to the same road intersection, the two roads are defined as spatial neighbors; the roads satisfying the spatial neighbor condition in the road network form a traffic graph G = (V, E_sp), wherein V is the set of the N roads in the area, called road nodes, and E_sp represents the connectivity between the roads in actual space.
Step 2: the spatial adjacency matrix is constructed from E_sp and denoted S ∈ R^(N×N). S_ij is an element of the spatial adjacency matrix S; when road V_i and road V_j share the same road intersection its value is exp(-d_ij² / σ²), otherwise it is 0:

S_ij = exp(-d_ij² / σ²), if V_i and V_j share a road intersection and exp(-d_ij² / σ²) ≥ ε
S_ij = 0, otherwise

wherein S_ij is the weight of the edge between road i and road j, d_ij is the geographic distance between road i and road j, σ is the standard deviation of the inter-node distances, and ε is the threshold that controls sparsity.
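A minimal sketch of this static adjacency construction follows, assuming the pairwise geographic distances and the set of intersection-connected road pairs are already available; the threshold value is illustrative.

```python
import numpy as np

def build_static_adjacency(dist, neighbor_mask, epsilon=0.1):
    """Build the static spatial adjacency matrix S.

    dist:          (N, N) matrix of geographic distances d_ij between roads
    neighbor_mask: (N, N) boolean matrix, True where roads share an intersection
    epsilon:       sparsity threshold
    """
    sigma = dist[neighbor_mask].std()                  # std. dev. of inter-node distances
    weights = np.exp(-(dist ** 2) / (sigma ** 2))      # Gaussian kernel of the distance
    S = np.where(neighbor_mask & (weights >= epsilon), weights, 0.0)
    np.fill_diagonal(S, 0.0)
    return S

# usage sketch with 3 roads
dist = np.array([[0.0, 1.2, 3.5], [1.2, 0.0, 2.1], [3.5, 2.1, 0.0]])
mask = np.array([[False, True, False], [True, False, True], [False, True, False]])
print(build_static_adjacency(dist, mask))
```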
The concrete operation of learning the adjacency matrix with the graph learning module to acquire the dynamic spatial relationship of the current traffic network comprises the following steps:
Step 1: the traffic data of the previous h time steps of the N roads are known, X = (x_1, x_2, ..., x_N)^T ∈ R^(N×h).
Step 2: multiply the input data X by a projection matrix P ∈ R^(h×d) to preprocess the data, X̃ = XP;
wherein x_i ∈ R^h (i ∈ [1, N]) is the speed sequence of road i and i is the index of the road; the weight vector parameter K is a learnable parameter, K = (k_1, k_2, ..., k_d)^T ∈ R^(d×1); T denotes transposition.
Step 3: input the preprocessed data and the fixed adjacency matrix S into a single neural network and learn the current adjacency matrix. The dynamic adjacency matrix output by the graph learning module is:

A_ij = f(x̃_i, x̃_j, S) = ( S_ij · ReLU( K^T (x̃_i - x̃_j) ) ) / ( Σ_(r=1)^(N) S_ir · ReLU( K^T (x̃_i - x̃_r) ) )

wherein A_ij denotes an element of the dynamic adjacency matrix, f denotes the neural network, x̃_i denotes the traffic matrix data at road i, x̃_j denotes the traffic matrix data at road j, S denotes the static adjacency matrix, S_ij denotes an element of the spatial adjacency matrix S, ReLU denotes the activation function, K^T denotes the weight vector parameter, N denotes the number of roads in the current road network, and S_ir denotes the element of the fixed adjacency matrix for road i and road r. ReLU(x) = max(0, x) ensures that A_ij ≥ 0, and the Softmax-style normalization over the neighbors r guarantees that each row of the output of the graph learning module sums to 1.
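A minimal PyTorch-style sketch of such a graph learning module is shown below; the row-wise normalization follows the reconstruction above, and the tensor shapes and initialization are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLearning(nn.Module):
    """Learn a dynamic adjacency matrix from recent traffic data and the
    static adjacency matrix S (sketch; shapes are illustrative)."""

    def __init__(self, h, d):
        super().__init__()
        self.P = nn.Parameter(torch.randn(h, d) * 0.01)   # projection matrix P in R^(h x d)
        self.K = nn.Parameter(torch.randn(d) * 0.01)      # weight vector K in R^d

    def forward(self, X, S):
        # X: (N, h) traffic data of the last h steps, S: (N, N) static adjacency
        Xt = X @ self.P                                    # (N, d) projected traffic data
        diff = Xt.unsqueeze(1) - Xt.unsqueeze(0)           # pairwise differences, (N, N, d)
        score = F.relu(diff @ self.K)                      # ReLU(K^T (x_i - x_j)), (N, N)
        A = S * score                                      # mask by static connectivity
        A = A / (A.sum(dim=1, keepdim=True) + 1e-8)        # row-normalise so each row sums to 1
        return A

# usage sketch
N, h, d = 4, 12, 8
module = GraphLearning(h, d)
A = module(torch.rand(N, h), torch.rand(N, N))
print(A.sum(dim=1))   # each row is approximately 1
```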
The specific operation of extracting the spatial and temporal characteristic information of the traffic flow data with the graph convolutional neural network (GCN) and the time sequence convolution network, respectively, comprises the following steps:
Step 1: construct a feature matrix X ∈ R^(N×D) from the features of all nodes in the graph, wherein N is the number of nodes and D is the number of features of each node.
Step 2: perform the graph convolution operation on the feature matrix and the dynamic spatial adjacency matrix; the propagation between GCN layers is:

H^(l+1) = σ( D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l) )

wherein Ã = A + I, I is the identity matrix, D̃ is the degree matrix of Ã with D̃_ii = Σ_j Ã_ij; H^(l) is the feature (output) of layer l, and if l is the input layer then H^(0) is the feature matrix X; W^(l) contains the parameters of layer l, and σ(·) represents the nonlinear activation function of the nonlinear model: the sigmoid function.
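A minimal sketch of one GCN propagation layer implementing the formula above (PyTorch; shapes are illustrative):

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution layer: H' = sigma(D^(-1/2) A~ D^(-1/2) H W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, H, A):
        A_tilde = A + torch.eye(A.size(0), device=A.device)   # add self-connections
        d = A_tilde.sum(dim=1)                                 # degree of each node
        D_inv_sqrt = torch.diag(d.pow(-0.5))                   # D^(-1/2)
        A_norm = D_inv_sqrt @ A_tilde @ D_inv_sqrt             # symmetric normalisation
        return torch.sigmoid(A_norm @ self.W(H))               # sigma is the sigmoid in the text

# usage sketch: 4 nodes, 3 input features, 8 output features
layer = GCNLayer(3, 8)
H = torch.rand(4, 3)
A = torch.rand(4, 4)
print(layer(H, A).shape)   # torch.Size([4, 8])
```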
Time sequence convolution is adopted to obtain the temporal features. Compared with other temporal feature extraction methods, time sequence convolution can process the whole sequence in parallel on a large scale, produces stable gradients, consumes little time and is fast.
Step 3: extract the temporal features of the data with the time sequence convolutional neural network: a multi-head self-attention model first preprocesses the time-series data and learns the features of different subspaces to obtain richer latent information; a TCN then extracts the temporal features of the processed time-series data and captures local and long-range temporal dependencies, as shown in fig. 3. The specific process comprises the following steps:
s51: processing the input time sequence data by adopting a multilayer self-attention model to obtain the characteristics of different subspaces;
s52: merging the features of each subspace; the calculation formulas are as follows:

MultiHead(Q, K, V) = Concat(head_1, ..., head_n) W_c
head_i = Attention(Q_i, K_i, V_i)
Q_i = X W_i^Q, K_i = X W_i^K, V_i = X W_i^V
Attention(Q_i, K_i, V_i) = softmax( Q_i K_i^T / √d_k ) V_i

wherein Q, K and V are the three subspaces obtained by a single attention head from the same input X; W_i^Q, W_i^K and W_i^V are learnable projection matrices; d_q, d_k and d_v denote the dimensions of Q, K and V, respectively; N is the number of road nodes, n is the number of attention heads, W_c is the projection matrix applied to the concatenated semantic representation, and T is the number of historical traffic flow parameters.
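A minimal sketch of the multi-head self-attention preprocessing over the time axis is given below; it uses PyTorch's built-in nn.MultiheadAttention in place of the explicit formulas, and the tensor layout (T time steps, N nodes, P features) and head count are assumptions.

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Multi-head self-attention over the time dimension for each road node
    (sketch; the embedding size and number of heads are illustrative)."""

    def __init__(self, feat_dim, num_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=num_heads,
                                          batch_first=True)

    def forward(self, x):
        # x: (T, N, P) -> treat each node as a batch element with T time steps
        T, N, P = x.shape
        seq = x.permute(1, 0, 2)              # (N, T, P)
        out, _ = self.attn(seq, seq, seq)     # Q = K = V = the input sequence
        return out.permute(1, 0, 2)           # back to (T, N, P)

# usage sketch
x = torch.rand(12, 4, 8)                      # 12 steps, 4 nodes, 8 features
print(TemporalSelfAttention(feat_dim=8, num_heads=2)(x).shape)
```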
S53: and performing convolution processing on the combined spatial features by adopting multilayer TCN expansion convolution, and keeping the time sequence length unchanged by adopting a zero filling strategy on each layer to obtain the time high-dimensional features.
The expansion convolution of the TCN adopts a zero padding strategy to keep the time sequence length unchanged, the same convolution kernel is used for all elements in an output sequence, and the calculation formula of the TCN is as follows:
Y tcn =θ* d X
wherein d It means performing a dilation convolution, d is the dilation rate, and θ is the time convolution kernel. Then at t, the road node V on the p channel i TCN result y of i,t,p Expressed as:
Figure BDA0003808549340000101
wherein d is the expansion ratio, θ a,z,p Are elements of a convolution kernel and constitute a temporal convolution kernel
Figure BDA0003808549340000102
Wherein A is τ Representing kernelsLength, P' denotes the number of output channels, z refers to the dimension.
Multiple TCN layers are stacked to enlarge the receptive field on the time axis and obtain more outputs. To enlarge the receptive field, the dilation rate increases exponentially, i.e., d_l = 2^(l-1). The output of the l-th layer of the multi-head self-attention time convolution network is:

Y^(l) = σ( θ^(l) *_(d_l) Y^(l-1) )

wherein, when l = 0 (the input layer), the input of the network is the output of the multi-head self-attention model, X ∈ R^(T×N×P); θ^(l) ∈ R^(T×N×P') is the dilated convolution kernel of the TCN, σ(·) is a nonlinear function, and T is the number of historical traffic flow parameters.
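A minimal sketch of the stacked dilated convolutions with exponentially growing dilation d_l = 2^(l-1) and zero padding that preserves the sequence length is shown below; the kernel size, channel counts and causal (left-only) padding are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedTCN(nn.Module):
    """Stack of 1-D dilated convolutions; zero padding keeps the sequence
    length unchanged (sketch with illustrative hyper-parameters)."""

    def __init__(self, channels, num_layers=3, kernel_size=2):
        super().__init__()
        self.layers = nn.ModuleList()
        self.pads = []
        for l in range(1, num_layers + 1):
            dilation = 2 ** (l - 1)                         # d_l = 2^(l-1)
            self.pads.append((kernel_size - 1) * dilation)  # left padding that preserves length
            self.layers.append(nn.Conv1d(channels, channels, kernel_size,
                                         dilation=dilation))

    def forward(self, x):
        # x: (N, channels, T) sequence per road node
        for pad, conv in zip(self.pads, self.layers):
            x = torch.relu(conv(F.pad(x, (pad, 0))))        # pad only on the left (causal)
        return x

# usage sketch: 4 nodes, 8 channels, 12 time steps
x = torch.rand(4, 8, 12)
print(DilatedTCN(channels=8)(x).shape)   # length stays 12
```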
The fused high-dimensional spatio-temporal features are:

Y_i = W_tcn · Y_i^tcn + W_H · Y_i^H

wherein W_tcn and W_H are the weights of the temporal and spatial high-dimensional features, respectively, and Y_i^tcn and Y_i^H respectively represent the temporal and spatial high-dimensional features of the node V_i to be aggregated.
The weights of the temporal and spatial high-dimensional features are obtained by computing the variance of the prediction results of the sub-models within the combination and determining the weight of each prediction model with the minimum-variance criterion.
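A minimal sketch of such a minimum-variance weighting of the two branches is given below; the inverse-variance form is one common realization of the minimum-variance criterion and is an assumption here.

```python
import numpy as np

def fuse_features(y_tcn, y_gcn, err_tcn, err_gcn):
    """Fuse temporal and spatial high-dimensional features with weights chosen
    by a minimum-variance criterion (inverse-variance weighting assumed)."""
    var_tcn, var_gcn = np.var(err_tcn), np.var(err_gcn)
    w_tcn = var_gcn / (var_tcn + var_gcn)     # lower-variance branch gets the larger weight
    w_gcn = var_tcn / (var_tcn + var_gcn)
    return w_tcn * y_tcn + w_gcn * y_gcn

# usage sketch
y_t, y_s = np.random.rand(4, 8), np.random.rand(4, 8)
e_t, e_s = np.random.rand(100) * 0.2, np.random.rand(100) * 0.4
print(fuse_features(y_t, y_s, e_t, e_s).shape)
```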
The loss function of the model adopts the mean square error (MSE): the predicted value is continuously compared with the real value, and the back-propagation algorithm continuously optimizes the neural network parameters; when the number of iterations reaches the set value or the loss value falls below the set threshold, the optimal solution of the network parameters is obtained. The loss function is:

Loss = (1/n) Σ_(i=1)^(n) (Y_i - Y'_i)^2

wherein Y ∈ R^(n×N×M) is the actual value, Y' ∈ R^(n×N×M) is the predicted value, and n is the number of batch training samples.
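A minimal training-loop sketch with the MSE loss and back-propagation described above is shown below; the optimizer, learning rate and stopping values are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train(model, loader, max_iters=1000, loss_threshold=1e-3, lr=1e-3):
    """Train the prediction model with MSE loss until the iteration budget is
    reached or the loss falls below the threshold (values are illustrative)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    step = 0
    for epoch in range(10_000):                 # loops until one stop condition fires
        for x, y in loader:
            pred = model(x)
            loss = criterion(pred, y)           # compare prediction with ground truth
            optimizer.zero_grad()
            loss.backward()                     # back-propagation
            optimizer.step()
            step += 1
            if step >= max_iters or loss.item() < loss_threshold:
                return model
    return model

# usage sketch with a toy model and data
model = nn.Linear(4, 1)
data = [(torch.rand(8, 4), torch.rand(8, 1)) for _ in range(5)]
train(model, data, max_iters=20)
```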
The above-mentioned embodiments, which are further detailed for the purpose of illustrating the invention, technical solutions and advantages, should be understood that the above-mentioned embodiments are only preferred embodiments of the present invention, and should not be construed as limiting the present invention, and any modifications, equivalents, improvements, etc. made to the present invention within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A traffic flow prediction method based on dynamic graph convolution and time sequence convolution network is characterized by comprising the following steps: traffic flow data to be predicted in the intersection nodes are obtained, and the traffic flow data to be predicted are preprocessed; inputting the preprocessed traffic flow data into a trained traffic flow prediction model to obtain a traffic flow prediction result of the intersection node; carrying out traffic control on the intersection according to the traffic flow prediction result;
the process of training the traffic flow prediction model comprises the following steps:
s1: acquiring traffic data, preprocessing the traffic data, and taking the preprocessed traffic data as a training set;
s2: constructing a traffic map, and constructing a static adjacency matrix according to the traffic map;
s3: inputting the static adjacency matrix and data in the training set into a graph learning module to obtain a dynamic adjacency matrix in the current traffic network;
s4: extracting the space characteristics of the dynamic adjacent matrix by adopting a graph convolution neural network to obtain the space characteristics;
s5: time characteristic extraction is carried out on the data in the training set by adopting a time sequence convolution neural network to obtain time sequence characteristics;
s6: fusing the time sequence characteristics and the space characteristics to obtain information of the input data covering the space-time characteristic state;
s7: and calculating a loss function of the model, and finishing the training of the model when the loss function is minimum.
2. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network is characterized in that the process of preprocessing the traffic data comprises the following steps: constructing a traffic data exception screening rule, and screening traffic data according to the traffic data exception screening rule to obtain exception data; and repairing the abnormal data to obtain preprocessed data.
3. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network is characterized in that the traffic data exception screening rule comprises the following: the traffic data include the vehicle speed, the traffic flow and the traffic flow occupancy; abnormal data include records in which the speed, flow or occupancy is negative, records in which the flow exceeds the traffic capacity of the lane, records in which the occupancy exceeds 100%, records in which only one of the speed, flow and occupancy is nonzero, or only the flow and occupancy are nonzero, or only the speed and occupancy are nonzero, and records whose traffic data index calculated by the index algorithm does not reach the corresponding threshold.
4. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network is characterized in that the process of constructing the static adjacency matrix comprises the following steps:
s21: acquiring a road network structure diagram, acquiring all roads in the road network structure diagram that satisfy the spatial neighbor condition, and constructing a traffic graph G = (V, E_sp) from these roads; road V_i and road V_j satisfy the spatial neighbor condition when they are connected to the same road intersection; V represents the set of N roads and E_sp represents the connectivity between roads in actual space;
s22: calculating the geographic distance between road i and road j, calculating the weight of the edge between road i and road j according to this distance, and constructing the spatial adjacency matrix with the edge weights as elements.
5. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network according to claim 1, characterized in that the process of processing the static adjacency matrix and the data in the training set with the graph learning module comprises the following steps:
s31: acquiring traffic data X of the previous h time lengths of N roads in a training set;
s32: constructing a projection matrix P ∈ R^(h×d), wherein h is the number of selected time steps and d is the dimension of the weight vector parameter;
s33: multiplying the traffic data X by the projection matrix so that the traffic data is converted into traffic matrix data;
s34: and inputting the traffic matrix data and the static adjacency matrix into a graph learning module to obtain a dynamic adjacency matrix in the current traffic network.
6. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network is characterized in that the formula used by the graph learning module to process the traffic matrix data and the static adjacency matrix is:

A_ij = f(x̃_i, x̃_j, S) = ( S_ij · ReLU( K^T (x̃_i - x̃_j) ) ) / ( Σ_(r=1)^(N) S_ir · ReLU( K^T (x̃_i - x̃_r) ) )

wherein A_ij denotes an element of the dynamic adjacency matrix, f denotes the neural network, x̃_i denotes the traffic matrix data at road i, x̃_j denotes the traffic matrix data at road j, S denotes the static adjacency matrix, S_ij denotes an element of the spatial adjacency matrix S, ReLU denotes the activation function, K^T denotes the weight vector parameter, N denotes the number of roads in the current road network, and S_ir denotes the element of the fixed adjacency matrix for road i and road r.
7. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network is characterized in that the formula for extracting the spatial features of the dynamic adjacency matrix with the graph convolutional neural network is:

H^(l+1) = σ( D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l) ), with Ã = A + I

wherein I is the identity matrix, D̃ is the degree matrix of Ã, H^(l) is the feature (output) of layer l, W^(l) contains the parameters of layer l, and σ(·) represents the nonlinear activation function of the nonlinear model.
8. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network is characterized in that the process of extracting the time characteristics of the data by adopting the time sequence convolution neural network comprises the following steps:
s51: processing the input time sequence data by adopting a multilayer self-attention model to obtain the characteristics of different subspaces;
s52: merging the features of each subspace;
s53: performing convolution on the merged features with multiple layers of TCN dilated convolution, and keeping the time sequence length unchanged at each layer with a zero-padding strategy, to obtain the temporal high-dimensional features.
9. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network according to claim 1, characterized in that a formula for fusing the time sequence feature and the space feature is as follows:
Y_i = W_tcn · Y_i^tcn + W_H · Y_i^H

wherein W_tcn and W_H are the weights of the temporal and spatial high-dimensional features, respectively, and Y_i^tcn and Y_i^H respectively represent the temporal and spatial high-dimensional features of the node V_i to be aggregated.
10. The traffic flow prediction method based on the dynamic graph convolution and the time sequence convolution network according to claim 1, characterized in that the loss function of the model is as follows:
Loss = (1/n) Σ_(i=1)^(n) (Y_i - Y'_i)^2

wherein Y ∈ R^(n×N×M) is the actual value, Y' ∈ R^(n×N×M) is the predicted value, and n is the number of batch training samples.
CN202211004685.8A 2022-08-22 2022-08-22 Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network Active CN115376317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211004685.8A CN115376317B (en) 2022-08-22 2022-08-22 Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211004685.8A CN115376317B (en) 2022-08-22 2022-08-22 Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network

Publications (2)

Publication Number Publication Date
CN115376317A true CN115376317A (en) 2022-11-22
CN115376317B CN115376317B (en) 2023-08-11

Family

ID=84067163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211004685.8A Active CN115376317B (en) 2022-08-22 2022-08-22 Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network

Country Status (1)

Country Link
CN (1) CN115376317B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674987A (en) * 2019-09-23 2020-01-10 北京顺智信科技有限公司 Traffic flow prediction system and method and model training method
CN112053560A (en) * 2020-08-27 2020-12-08 武汉理工大学 Short-time traffic flow prediction method, system and storage medium based on neural network
CN113380025A (en) * 2021-05-28 2021-09-10 长安大学 Vehicle driving quantity prediction model construction method, prediction method and system
CN114220271A (en) * 2021-12-21 2022-03-22 南京理工大学 Traffic flow prediction method, equipment and storage medium based on dynamic space-time graph convolution cycle network
CN114495492A (en) * 2021-12-31 2022-05-13 中国科学院软件研究所 Traffic flow prediction method based on graph neural network
CN114565124A (en) * 2022-01-12 2022-05-31 武汉理工大学 Ship traffic flow prediction method based on improved graph convolution neural network
CN114566048A (en) * 2022-03-03 2022-05-31 重庆邮电大学 Traffic control method based on multi-view self-adaptive space-time diagram network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HU, ZHIQIU: "An Efficient Short-Term Traffic Speed Prediction Model Based on Improved TCN and GCN", Sensors, vol. 21, no. 10
SHI, TONGTONG: "Regional Traffic Flow Prediction on multiple Spatial Distributed Toll Gate in a City Cycle", Proceedings of the 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA 2021), pages 94-99
刘赏: "Dual-branch spatio-temporal graph convolutional neural network for traffic flow prediction" (面向交通流预测的双分支时空图卷积神经网络), Information and Control, pages 1-15
钟林君: "Research on traffic flow prediction methods based on attention mechanisms" (基于注意力机制的交通流预测方法研究), China Master's Theses Full-text Database, Engineering Science and Technology II, no. 04, pages 034-550

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116415928A (en) * 2023-03-06 2023-07-11 武汉理工大学 Urban waterlogging traffic network rapid restoration method and system based on deep learning
CN116415928B (en) * 2023-03-06 2023-11-17 武汉理工大学 Urban waterlogging traffic network rapid restoration method and system based on deep learning
CN117116051A (en) * 2023-10-25 2023-11-24 深圳市交投科技有限公司 Intelligent traffic management system and method based on artificial intelligence
CN117116051B (en) * 2023-10-25 2023-12-22 深圳市交投科技有限公司 Intelligent traffic management system and method based on artificial intelligence

Also Published As

Publication number Publication date
CN115376317B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN109887282B (en) Road network traffic flow prediction method based on hierarchical timing diagram convolutional network
US11270579B2 (en) Transportation network speed foreeasting method using deep capsule networks with nested LSTM models
CN110827543B (en) Short-term traffic flow control method based on deep learning and spatio-temporal data fusion
CN111612243B (en) Traffic speed prediction method, system and storage medium
CN114330671A (en) Traffic flow prediction method based on Transformer space-time diagram convolution network
CN115240425B (en) Traffic prediction method based on multi-scale space-time fusion graph network
CN112949828B (en) Graph convolution neural network traffic prediction method and system based on graph learning
CN115376317A (en) Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network
CN114299723B (en) Traffic flow prediction method
Zhang et al. Multistep speed prediction on traffic networks: A graph convolutional sequence-to-sequence learning approach with attention mechanism
CN114925836B (en) Urban traffic flow reasoning method based on dynamic multi-view graph neural network
CN112863182B (en) Cross-modal data prediction method based on transfer learning
CN113112791A (en) Traffic flow prediction method based on sliding window long-and-short term memory network
CN115578851A (en) Traffic prediction method based on MGCN
CN114973678B (en) Traffic prediction method based on graph attention neural network and space-time big data
Liu et al. Spatial-temporal conv-sequence learning with accident encoding for traffic flow prediction
CN115936069A (en) Traffic flow prediction method based on space-time attention network
CN116935649A (en) Urban traffic flow prediction method for multi-view fusion space-time dynamic graph convolution network
Bao et al. PKET-GCN: prior knowledge enhanced time-varying graph convolution network for traffic flow prediction
CN114572229A (en) Vehicle speed prediction method, device, medium and equipment based on graph neural network
CN117116045A (en) Traffic flow prediction method and device based on space-time sequence deep learning
Sun et al. Ada-STNet: A Dynamic AdaBoost Spatio-Temporal Network for Traffic Flow Prediction
Wu et al. Learning spatial–temporal pairwise and high-order relationships for short-term passenger flow prediction in urban rail transit
Chen et al. A bidirectional context-aware and multi-scale fusion hybrid network for short-term traffic flow prediction
Ma et al. Dynamic-static-based spatiotemporal multi-graph neural networks for passenger flow prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant