CN111738535A - Method, device, equipment and storage medium for predicting rail transit time-space short-time passenger flow
- Publication number: CN111738535A
- Application number: CN202010860224.5A
- Authority: CN (China)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/40—Business processes related to the transportation industry
Abstract
The invention relates to the technical field of passenger flow prediction and discloses a method, a device, equipment and a storage medium for predicting rail transit spatio-temporal short-time passenger flow. The method comprises the following steps: acquiring inbound data and train schedule data for a historical time period; constructing an adjacency matrix from the train schedule data; normalizing the inbound data and the adjacency matrix; extracting a spatial feature matrix from the normalized inbound data and adjacency matrix with a graph convolutional neural network; and extracting the temporal features of the spatial feature matrix with a sequence-to-sequence model based on gated recurrent units and an attention mechanism to predict the outbound volume at the current moment. The invention captures the spatio-temporal relationships of large-scale passenger flow with high accuracy and strong interpretability, which makes it easy to grasp the passenger flow distribution and provides a basis for passenger flow state analysis and early warning. It also facilitates organizing passenger flow, allocating transport capacity reasonably, relieving congestion and improving service quality.
Description
Technical Field
The invention relates to the technical field of passenger flow prediction, in particular to a method, a device, equipment and a storage medium for predicting rail transit spatio-temporal short-time passenger flow.
Background
In recent years, urban rail transit construction has accelerated, and the ever-expanding subway network has stimulated rapid growth in ridership, so that current subway capacity cannot meet the rapidly growing passenger demand and congestion follows. In subway-related research, short-time passenger flow prediction plays a crucial role in improving the operating efficiency of a subway system. On one hand, short-time passenger flow prediction allows subway operators to grasp the passenger flow distribution of the network, know which stations will see passenger flow surges in the future, and obtain a basis for network passenger flow state analysis and early warning. On the other hand, it also helps operators organize passenger flow and allocate capacity resources reasonably, thereby relieving congestion and improving service quality.
The inventors have found through research that existing short-time passenger flow prediction methods mainly have the following two defects:
(1) In the prior art, only temporal features are considered and spatial correlation is ignored, as if the variation of passenger flow data were not constrained by the subway network, so the passenger flow cannot be predicted accurately.
(2) Existing studies combine convolutional neural networks with recurrent neural networks for spatio-temporal prediction, but convolutional neural networks cannot describe the spatial dependence of topological structures such as metro networks.
Disclosure of Invention
The invention provides a data-driven rail transit spatio-temporal short-time passenger flow prediction method, which adopts the following technical scheme:
acquiring inbound data and train schedule data for a historical time period;
constructing an adjacency matrix based on travel time according to the train schedule data;
normalizing the inbound data and the adjacency matrix to obtain a normalized inbound matrix and a normalized adjacency matrix;
extracting a spatial feature matrix from the normalized inbound matrix and adjacency matrix with a graph convolutional neural network;
and extracting the temporal features of the spatial feature matrix with a sequence-to-sequence model based on gated recurrent units and an attention mechanism to predict the outbound volume at the current moment.
The sequence-to-sequence model based on gated recurrent units is used to extract the temporal features, and the attention mechanism identifies the relevant input time steps, which improves temporal prediction performance and increases the interpretability of the method.
80% of the inbound data was taken as the training set and the remaining 20% as the test set. The training set is used to train weights in the neural network, and the test set is used to calculate evaluation indexes to check the performance of the model.
Preferably, the adjacency matrix entry A_ij is defined as a function of travel time, where t_ij denotes the actual travel time from station i to station j; N denotes the number of stations; V denotes the set of subway stations, with v_i and v_j denoting stations i and j; and E is the set of lines. If station j is reachable from station i, A_ij takes a positive value determined by t_ij; otherwise A_ij is 0.
Because of the particularity of the subway network, where spatially adjacent stations may not be directly connected if they lie on different subway lines, the network structure cannot be represented faithfully by an adjacency matrix based on spatial distance. Moreover, the correlation between two stations also depends on passengers' in-station waiting and walking times as well as the train headway and section running time. Travel time captures both the in-station characteristics (waiting time, walking time, etc.) and the train operation characteristics (headway, section running time, etc.), so an adjacency matrix constructed from travel time is better suited to the subway network.
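By way of illustration only, the following sketch shows how such a travel-time-based adjacency matrix could be assembled. The exact weighting formula is not reproduced in the published text, so reciprocal travel time is used here purely as an assumed stand-in; the travel-time dictionary is a hypothetical input derived from the train schedule.

```python
import numpy as np

def build_travel_time_adjacency(travel_time, n_stations):
    """Construct a travel-time-based adjacency matrix (illustrative sketch).

    travel_time : dict mapping (i, j) -> travel time from station i to
                  station j for reachable pairs, a hypothetical input
                  derived from the train schedule.
    The exact weighting formula is not reproduced in the publication, so
    reciprocal travel time is used here purely for illustration: pairs
    that are quicker to reach get larger weights, and unreachable pairs
    keep weight 0.
    """
    A = np.zeros((n_stations, n_stations))
    for (i, j), t_ij in travel_time.items():
        if t_ij > 0:
            A[i, j] = 1.0 / t_ij
    return A
```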
Preferably, the normalization uses the min-max method, namely:

X = (X0 - min(X0)) / (max(X0) - min(X0))

where X is the normalized data and X0 is the original data.
Further, min-max normalization preserves the relationships among the original data while eliminating the influence of dimension and value range.
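For illustration, the min-max normalization described above can be written as a small helper (a sketch; the function name is not from the patent):

```python
import numpy as np

def min_max_normalize(X0):
    """Min-max normalization: rescale the original data X0 to [0, 1]
    while preserving the relative relationships among its values."""
    return (X0 - X0.min()) / (X0.max() - X0.min())
```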
The graph convolutional neural network has a two-layer structure and two input matrices: the normalized adjacency matrix and the normalized inbound matrix.
Preferably, extracting the spatial feature matrix from the normalized inbound matrix and adjacency matrix with the graph convolutional neural network includes:
taking the normalized inbound matrix as the feature matrix of the initial layer and feeding it into the initial layer of the graph convolutional neural network;
constructing the feature matrix of the first layer of the graph convolutional neural network based on the inbound matrix, the weight matrix of the initial layer, and the normalized adjacency matrix and degree matrix;
and feeding the feature matrix of the first layer into the first layer of the graph convolutional neural network, repeating the previous step, and outputting the feature matrix of the second layer as the spatial feature matrix, as sketched below.
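A minimal sketch of the two-layer graph convolution described above, assuming the standard symmetrically normalized propagation rule; the weight matrices W0 and W1 stand in for the learned layer parameters, and the function names are illustrative rather than taken from the patent.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gcn_spatial_features(X, A, W0, W1):
    """Two-layer graph convolution over the normalized inputs (a sketch).

    X      : (N, p) normalized inbound matrix (feature matrix of the initial layer)
    A      : (N, N) normalized adjacency matrix
    W0, W1 : (p, p) layer weight matrices; in practice these are learned,
             they are passed in here only for illustration.
    """
    N = A.shape[0]
    A_hat = A + np.eye(N)                           # adjacency plus self-loops: A + I
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D^(-1/2) of the degree matrix
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H1 = relu(A_norm @ X @ W0)                      # feature matrix of the first layer
    H2 = relu(A_norm @ H1 @ W1)                     # spatial feature matrix (second layer)
    return H2
```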
Preferably, the sequence-to-sequence model includes an encoder composed of gated recurrent units and a decoder composed of gated recurrent units.
Preferably, extracting the temporal features of the spatial feature matrix with the sequence-to-sequence model based on gated recurrent units and the attention mechanism to predict the outbound volume at the current moment includes:
inputting the spatial feature matrix into the encoder composed of gated recurrent units to obtain encoding results for all moments of the historical time period;
inputting the encoding results into the attention mechanism model;
and inputting the output of the attention mechanism model into the decoder composed of gated recurrent units to predict the outbound volume at the current moment.
A plain sequence-to-sequence encoder compresses all inputs into a single representative vector that is passed to the decoder, which loses input features. After the attention mechanism is added, the features extracted at every time step of the encoder are fed into the attention mechanism, which solves the problem of missing input features; by the principle of attention, each input feature is assigned a learned weight that identifies the relevant input time steps.
Preferably, inputting the spatial feature matrix into the encoder composed of gated recurrent units to obtain the encoding results for all moments of the historical time period includes:
transposing the spatial feature matrix into a reset gate weight matrix, an update gate weight matrix and a current memory content weight matrix, respectively;
obtaining a reset gate based on the reset gate weight matrix, the hidden state vector at the previous moment and the row vector of the spatial feature matrix at the current moment;
obtaining an update gate based on the update gate weight matrix, the hidden state vector at the previous moment and the row vector of the spatial feature matrix at the current moment;
obtaining the current memory content based on the current memory content weight matrix, the hidden state vector at the previous moment, the row vector of the spatial feature matrix at the current moment and the reset gate;
and outputting the encoding result based on the hidden state vector at the previous moment, the update gate and the current memory content.
Preferably, the reset gate and the update gate are computed with a sigmoid nonlinear activation function, and the current memory content is computed with a hyperbolic tangent nonlinear activation function.
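The following is a minimal sketch of a single gated-recurrent-unit update consistent with the gate descriptions above; the parameter names (Wr, Uz, etc.) are illustrative placeholders for the learned weight matrices and bias terms, not symbols from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wr, Wz, Wh, Ur, Uz, Uh, br, bz, bh):
    """One gated-recurrent-unit update (a sketch with illustrative
    parameter names; the W*/U*/b* arguments stand in for learned
    weight matrices and bias terms).

    x_t    : input vector at the current moment (a row of the spatial
             feature matrix)
    h_prev : hidden state vector at the previous moment
    """
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev + br)               # reset gate
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev + bz)               # update gate
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev) + bh)   # current memory content
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde               # encoding result (new hidden state)
    return h_t
```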
Preferably, inputting the output of the attention mechanism model into the decoder composed of gated recurrent units to predict the outbound volume at the current moment comprises:
calculating the attention weights from the hidden state vectors of the encoder at all moments of the historical period and the hidden state vector of the decoder at the previous moment; the attention weight specifies how much weight each encoder hidden state vector is given when the prediction at the current moment is made;
calculating the context vector for the current moment from the encoder hidden state vectors and the attention weights;
and predicting the outbound volume at the current moment from the weight matrix of the decoder, the context vector and the decoder's hidden state vector at the current moment, as sketched below.
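A minimal sketch of the decoding step described above, assuming dot-product attention over the encoder hidden states (the dot score function is named later in the description); the matrices Wc and Ws are illustrative placeholders for learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_predict(enc_states, s_prev, s_curr, Wc, Ws):
    """One decoding step with dot-product attention (a sketch; Wc and Ws
    are illustrative placeholders for learned weight matrices).

    enc_states : (p, h) encoder hidden state vectors for the past p moments
    s_prev     : (h,) decoder hidden state vector at the previous moment
    s_curr     : (h,) decoder hidden state vector at the current moment
    Wc         : (h, 2h) weight applied to the concatenated [context; s_curr]
    Ws         : (n_outputs, h) output projection
    """
    scores = enc_states @ s_prev          # dot-product alignment scores
    alpha = softmax(scores)               # attention weights over input time steps
    context = alpha @ enc_states          # context vector for the current moment
    s_att = np.tanh(Wc @ np.concatenate([context, s_curr]))
    y_hat = Ws @ s_att                    # predicted outbound volume
    return y_hat, alpha
```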
Based on the same inventive concept, the invention also provides a device for predicting rail transit spatio-temporal short-time passenger flow, which comprises:
an acquisition module for acquiring inbound data and train schedule data for a historical time period;
a construction module for constructing an adjacency matrix based on travel time from the train schedule data;
a normalization module for normalizing the inbound data and the adjacency matrix to obtain a normalized inbound matrix and a normalized adjacency matrix;
a spatial feature extraction module for extracting the spatial feature matrix from the normalized inbound matrix and adjacency matrix with a graph convolutional neural network;
and a prediction module for extracting the temporal features of the spatial feature matrix with a sequence-to-sequence model based on gated recurrent units and an attention mechanism to predict the outbound volume at the current moment.
Preferably, the spatial feature extraction module includes:
an initial layer construction module for feeding the normalized inbound matrix, as the feature matrix of the initial layer, into the initial layer of the graph convolutional neural network;
a first layer construction module for constructing the feature matrix of the first layer of the graph convolutional neural network based on the inbound matrix, the weight matrix of the initial layer, and the normalized adjacency matrix and degree matrix;
and a second layer construction module for feeding the feature matrix of the first layer into the first layer of the graph convolutional neural network, repeating the previous step, and outputting the feature matrix of the second layer as the spatial feature matrix.
Preferably, the prediction module comprises:
an encoding submodule for inputting the spatial feature matrix into the encoder composed of gated recurrent units to obtain the encoding results for all moments of the historical time period;
an input submodule for inputting the encoding results into the attention mechanism model;
and an output submodule for inputting the output of the attention mechanism model into the decoder composed of gated recurrent units to predict the outbound volume at the current moment.
Preferably, the encoding submodule includes:
a reset unit for transposing the spatial feature matrix into a reset gate weight matrix, an update gate weight matrix and a current memory content weight matrix, respectively;
a reset gate unit for obtaining the reset gate based on the reset gate weight matrix, the hidden state vector at the previous moment and the row vector of the spatial feature matrix at the current moment;
an update gate unit for obtaining the update gate based on the update gate weight matrix, the hidden state vector at the previous moment and the row vector of the spatial feature matrix at the current moment;
a memory unit for obtaining the current memory content based on the current memory content weight matrix, the hidden state vector at the previous moment, the row vector of the spatial feature matrix at the current moment and the reset gate;
and an encoding output unit for outputting the encoding result based on the hidden state vector at the previous moment, the update gate and the current memory content.
Preferably, the output sub-module includes:
an attention weight unit for calculating an attention weight based on the hidden state vectors of the encoder at all times of the history period and the hidden state vector of the decoder at the previous time;
a context vector unit for calculating the context vector at the current moment based on the encoder hidden state vectors and the attention weights;
and an outbound prediction unit for predicting the outbound volume at the current moment based on the weight matrix of the decoder, the context vector and the hidden state vector of the decoder at the current moment.
Based on the same inventive concept, the present invention also provides an electronic device, comprising:
at least one processor; and
a memory coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to implement the method of any one of the above.
Based on the same inventive concept, the present invention also provides a computer-readable storage medium for storing a computer program, which, when executed, is capable of implementing the method of any of the above.
The invention has the beneficial effects that:
(1) A short-term subway passenger flow prediction method is provided that combines a graph convolutional neural network, gated recurrent units, a sequence-to-sequence structure and an attention mechanism. The method achieves spatio-temporal prediction and is interpretable.
(2) The outbound flow is predicted from the inbound flow of the subway network by a combined model of a graph convolutional neural network and gated recurrent units. Compared with existing spatio-temporal prediction methods, the model can be applied to passenger flow prediction for the whole subway network. In addition, an adjacency matrix constructed from travel time is provided to handle the unique topological structure of the subway system.
(3) The effectiveness and interpretability of the invention are verified with the Beijing subway network as an example. The results show that the model can accurately predict the passenger flow of a large subway network.
Drawings
Fig. 1 is a flow chart of the rail transit spatio-temporal short-term passenger flow prediction method of embodiments 1 and 2;
Fig. 2 is a flow chart of step S4 of the rail transit spatio-temporal short-term passenger flow prediction method of embodiment 2;
Fig. 3 is a flow chart of step S5 of the rail transit spatio-temporal short-term passenger flow prediction method of embodiment 2;
Fig. 4 is a flow chart of step S501 of the rail transit spatio-temporal short-term passenger flow prediction method of embodiment 2;
Fig. 5 is a flow chart of step S503 of the rail transit spatio-temporal short-term passenger flow prediction method of embodiment 2;
Fig. 6 is a diagram of the rail transit spatio-temporal short-term passenger flow prediction device of embodiment 2;
Fig. 7 is a diagram of the spatial feature extraction module 40 of the rail transit spatio-temporal short-term passenger flow prediction device of embodiment 2;
Fig. 8 is a diagram of the prediction module 50 of the rail transit spatio-temporal short-term passenger flow prediction device of embodiment 2;
Fig. 9 is a diagram of the encoding submodule 501 of the rail transit spatio-temporal short-term passenger flow prediction device of embodiment 2;
Fig. 10 is a diagram of the output submodule 503 of the rail transit spatio-temporal short-term passenger flow prediction device of embodiment 2;
Fig. 11 is a diagram of the sequence-to-sequence model of the rail transit spatio-temporal short-term passenger flow prediction method of embodiment 3;
Fig. 12 is a fitted curve of the outbound volume prediction of the rail transit spatio-temporal short-term passenger flow prediction method of embodiment 3;
Fig. 13 is a comparison chart of the single-station determination coefficients for the rail transit spatio-temporal short-term passenger flow prediction method of embodiment 3;
Fig. 14 is a comparison chart of the root mean square error for the rail transit spatio-temporal short-term passenger flow prediction method of embodiment 3.
Detailed Description
The technical solution of the present invention is described in detail below through embodiments and with reference to the accompanying drawings.
As shown in Fig. 1, the method comprises the following steps:
S1: acquiring inbound data and train schedule data for a historical time period;
S2: constructing an adjacency matrix based on travel time from the train schedule data;
S3: normalizing the inbound data and the adjacency matrix to obtain a normalized inbound matrix and a normalized adjacency matrix;
S4: extracting the spatial feature matrix from the normalized inbound matrix and adjacency matrix with a graph convolutional neural network;
S5: extracting the temporal features of the spatial feature matrix with a sequence-to-sequence model based on gated recurrent units and an attention mechanism to predict the outbound volume at the current moment.
The sequence-to-sequence model based on gated recurrent units is used to extract the temporal features, and the attention mechanism identifies the relevant input time steps, which improves temporal prediction performance and increases the interpretability of the method.
In another embodiment, the method likewise comprises the following steps:
S1: acquiring inbound data and train schedule data for a historical time period;
S2: constructing an adjacency matrix based on travel time from the train schedule data;
S3: normalizing the inbound data and the adjacency matrix to obtain a normalized inbound matrix and a normalized adjacency matrix;
S4: extracting the spatial feature matrix from the normalized inbound matrix and adjacency matrix with a graph convolutional neural network;
S5: extracting the temporal features of the spatial feature matrix with a sequence-to-sequence model based on gated recurrent units and an attention mechanism to predict the outbound volume at the current moment.
The sequence-to-sequence model based on gated recurrent units is used to extract the temporal features, and the attention mechanism identifies the relevant input time steps, which improves temporal prediction performance and increases the interpretability of the method.
80% of the inbound data was taken as the training set and the remaining 20% as the test set. The training set is used to train weights in the neural network, and the test set is used to calculate evaluation indexes to check the performance of the model.
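For illustration, the chronological 80/20 split described above could look like the following sketch (the function name is illustrative, not from the patent):

```python
def chronological_split(flow_matrix, train_ratio=0.8):
    """Split the time-ordered passenger flow data chronologically:
    the first 80% of time steps for training, the rest for testing."""
    split = int(len(flow_matrix) * train_ratio)
    return flow_matrix[:split], flow_matrix[split:]
```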
The adjacency matrix entry A_ij in S2 is defined as a function of travel time: t_ij denotes the actual travel time from station i to station j; V denotes the set of subway stations; and E is the set of lines. If station j is reachable from station i, A_ij takes a positive value determined by t_ij; otherwise A_ij is 0.
Because of the particularity of the subway network, where spatially adjacent stations may not be directly connected if they lie on different subway lines, the network structure cannot be represented faithfully by an adjacency matrix based on spatial distance. Moreover, the correlation between two stations also depends on passengers' in-station waiting and walking times as well as the train headway and section running time. Travel time captures both the in-station characteristics (waiting time, walking time, etc.) and the train operation characteristics (headway, section running time, etc.), so an adjacency matrix constructed from travel time is better suited to the subway network.
The normalization in S3 uses the min-max method, namely:

X = (X0 - min(X0)) / (max(X0) - min(X0))

where X is the normalized data and X0 is the original data.
Further, min-max normalization preserves the relationships among the original data while eliminating the influence of dimension and value range.
The graph convolutional neural network has a two-layer structure and two input matrices: the normalized adjacency matrix and the normalized inbound matrix.
As shown in fig. 2, S4 includes:
S401: feeding the normalized inbound matrix, as the feature matrix of the initial layer, into the initial layer of the graph convolutional neural network;
S402: constructing the feature matrix of the first layer of the graph convolutional neural network from the inbound matrix, the weight matrix of the initial layer, and the normalized adjacency matrix and degree matrix;
S403: feeding the feature matrix of the first layer into the first layer of the graph convolutional neural network, repeating the previous step, and outputting the feature matrix of the second layer as the spatial feature matrix.
The sequence-to-sequence model includes an encoder composed of gated recurrent units and a decoder composed of gated recurrent units.
Wherein, as shown in fig. 3, S5 includes:
S501: inputting the spatial feature matrix into the encoder composed of gated recurrent units to obtain the encoding results for all moments of the historical time period;
S502: inputting the encoding results into the attention mechanism model;
S503: inputting the output of the attention mechanism model into the decoder composed of gated recurrent units to predict the outbound volume at the current moment.
A plain sequence-to-sequence encoder compresses all inputs into a single representative vector that is passed to the decoder, which loses input features. After the attention mechanism is added, the features extracted at every time step of the encoder are fed into the attention mechanism, which solves the problem of missing input features; by the principle of attention, each input feature is assigned a learned weight that identifies the relevant input time steps.
As shown in fig. 4, S501 includes:
S5011: transposing the spatial feature matrix into a reset gate weight matrix, an update gate weight matrix and a current memory content weight matrix, respectively;
S5012: obtaining the reset gate from the reset gate weight matrix, the hidden state vector at the previous moment and the row vector of the spatial feature matrix at the current moment;
S5013: obtaining the update gate from the update gate weight matrix, the hidden state vector at the previous moment and the row vector of the spatial feature matrix at the current moment;
S5014: obtaining the current memory content from the current memory content weight matrix, the hidden state vector at the previous moment, the row vector of the spatial feature matrix at the current moment and the reset gate;
S5015: outputting the encoding result from the hidden state vector at the previous moment, the update gate and the current memory content.
Preferably, the reset gate and the update gate are computed with a sigmoid nonlinear activation function, and the current memory content is computed with a hyperbolic tangent nonlinear activation function.
As shown in fig. 5, S503 includes:
S5031: calculating the attention weights from the hidden state vectors of the encoder at all moments of the historical period and the hidden state vector of the decoder at the previous moment; the attention weight specifies how much weight each encoder hidden state vector is given when the prediction at the current moment is made;
S5032: calculating the context vector for the current moment from the encoder hidden state vectors and the attention weights;
S5033: predicting the outbound volume at the current moment from the weight matrix of the decoder, the context vector and the decoder's hidden state vector at the current moment.
In a preferred embodiment, a rail transit spatio-temporal short-time passenger flow prediction apparatus is provided; as shown in Fig. 6, the apparatus comprises:
an acquisition module 10 for acquiring inbound data and train schedule data for a historical time period;
a construction module 20 for constructing an adjacency matrix based on travel time from the train schedule data;
a normalization module 30 for normalizing the inbound data and the adjacency matrix to obtain a normalized inbound matrix and a normalized adjacency matrix;
a spatial feature extraction module 40 for extracting the spatial feature matrix from the normalized inbound matrix and adjacency matrix with a graph convolutional neural network;
and a prediction module 50 for extracting the temporal features of the spatial feature matrix with a sequence-to-sequence model based on gated recurrent units and an attention mechanism to predict the outbound volume at the current moment.
Preferably, as shown in fig. 7, the spatial feature extraction module 40 includes:
an initial layer construction module 401 for feeding the normalized inbound matrix, as the feature matrix of the initial layer, into the initial layer of the graph convolutional neural network;
a first layer construction module 402 for constructing the feature matrix of the first layer of the graph convolutional neural network based on the inbound matrix, the weight matrix of the initial layer, and the normalized adjacency matrix and degree matrix;
and a second layer construction module 403 for feeding the feature matrix of the first layer into the first layer of the graph convolutional neural network, repeating the previous step, and outputting the feature matrix of the second layer as the spatial feature matrix.
As shown in fig. 8, the prediction module 50 includes:
an encoding submodule 501 for inputting the spatial feature matrix into the encoder composed of gated recurrent units to obtain the encoding results for all moments of the historical time period;
an input submodule 502 for inputting the encoding results into the attention mechanism model;
and an output submodule 503 for inputting the output of the attention mechanism model into the decoder composed of gated recurrent units to predict the outbound volume at the current moment.
As shown in fig. 9, the encoding submodule 501 includes:
a resetting unit 5011, configured to transpose the spatial feature matrix into a reset gate weight matrix, an update gate weight matrix, and a current memory content weight matrix;
a reset gate unit 5012, configured to obtain a reset gate based on the reset gate weight matrix, the hidden state vector at the previous time, and the row vector of the spatial feature matrix at the current time;
the update gate unit 5013 is configured to obtain an update gate based on the update gate weight matrix, the hidden state vector at the previous time, and the row vector of the spatial feature matrix at the current time;
the memory unit 5014 is configured to obtain current memory content based on the current memory content weight matrix, the hidden state vector at the previous time, the row vector of the spatial feature matrix at the current time, and a reset gate;
and an encoding output unit 5015, configured to output the encoding result based on the hidden state vector at the previous time, the update gate, and the current memory content.
As shown in fig. 10, the output submodule 503 includes:
an attention weight unit 5031 for calculating the attention weights based on the hidden state vectors of the encoder at all moments of the historical time period and the hidden state vector of the decoder at the previous moment;
a context vector unit 5032 for calculating the context vector at the current moment based on the encoder hidden state vectors and the attention weights;
and an outbound prediction unit 5033 for predicting the outbound volume at the current moment based on the weight matrix of the decoder, the context vector and the hidden state vector of the decoder at the current moment.
In a preferred embodiment, there is provided an electronic device comprising:
at least one processor; and
a memory coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to implement the method of any one of the above.
In a preferred embodiment, there is provided a computer readable storage medium for storing a computer program which, when executed, is capable of carrying out the method of any one of the above.
In a further embodiment, the method comprises the following steps:
S1: acquiring subway passenger flow data with 15-minute granularity from the AFC system, and constructing an adjacency matrix from the train schedule data; normalizing the passenger flow data and the adjacency matrix to obtain the processed inbound matrix and adjacency matrix;
S2: inputting the normalized inbound matrix and adjacency matrix into the spatial feature extraction unit to extract the spatial features of the passenger flow;
S3: inputting the data with the spatial features extracted into the temporal feature extraction unit to extract the temporal features of the passenger flow;
S4: combining the established spatial feature extraction unit and temporal feature extraction unit to predict the outbound volume and obtain the corresponding evaluation index values.
The method predicts the future outbound volume from the historical inbound volume of each station in the subway network. The passenger flow prediction problem can therefore be cast as learning a mapping function f that maps the adjacency matrix and the inbound passenger flow of each station to the outbound passenger flow of each station, where A is the N×N adjacency matrix, N is the number of stations, the input is the inbound volume of all stations over the past p time steps, and the output is the predicted outbound volume of all stations at the future moments.
In S1, the adjacency matrix is constructed from travel time: t_ij denotes the actual travel time from station i to station j, V denotes the set of subway stations, and E is the set of lines. If station j is reachable from station i, A_ij takes a positive value determined by t_ij; otherwise A_ij is 0.
In S1, the normalization uses the min-max method, namely:

X = (X0 - min(X0)) / (max(X0) - min(X0))

where X is the normalized data and X0 is the original data.
In S1, 80% of the passenger flow data is used as the training set, and the remaining 20% is used as the test set. The training set is used to train weights in the neural network, and the test set is used to calculate evaluation indexes to check the performance of the model.
In S2, the spatial feature extraction unit is composed of a two-layer graph convolutional neural network. The network has two inputs, the adjacency matrix A and the inbound passenger flow matrix X, and outputs a matrix of shape N×p.
In S2, the spatial feature extraction calculation process includes the following steps:
S21: the graph convolutional layer propagates between layers according to

H^(l+1) = σ( D*^(-1/2) A* D*^(-1/2) H^(l) W^(l) )

where H^(l) and H^(l+1) denote the feature matrices of layer l and layer l+1, each of shape N×p; W^(l) denotes the weight matrix of layer l, of shape p×p; A* = A + I_N is the sum of the adjacency matrix and the identity matrix, of shape N×N; D* is the degree matrix of A*, of shape N×N; and the feature matrix of the initial layer H^(0) is the inbound traffic matrix X.
S23: the nonlinear activation function σ is the rectified linear unit (ReLU), whose expression is ReLU(x) = max(0, x).
S24: the invention selects a two-layer graph convolutional neural network model, whose expression is

H^(2) = σ( D*^(-1/2) A* D*^(-1/2) σ( D*^(-1/2) A* D*^(-1/2) X W^(0) ) W^(1) )

where H^(2), of shape N×p, is the output matrix.
in S3, the temporal feature extraction unit includes a gated loop unit, a sequence-to-sequence structure, and an attention mechanism. Further, the sequence-to-sequence model consists of an encoder and a decoder, as shown in fig. 11. The encoder and decoder are made up of a series of gated cyclic units. The input to the encoder is oneN×pOf (2) matrixX'(H (2) Transposed matrix of) the matrix is formedX'Each row of (a) is input as a vector into each gated loop unit of the encoder.
In S3, the temporal feature extraction is carried out in the following 2 steps:
S31: transpose the output of the graph convolutional neural network and feed the transposed result into the encoder based on gated recurrent units;
S32: feed the encoder outputs at all moments into the attention mechanism, and feed the output of the attention mechanism into the decoding layer to obtain the final predicted value (i.e., the outbound passenger flow).
In step S31, the gated recurrent unit calculation process includes the following 2 sub-steps:
S311: the gated recurrent unit is computed as

r_t = σ( W_r x_t + U_r h_(t-1) + b_r )
z_t = σ( W_z x_t + U_z h_(t-1) + b_z )
g_t = tanh( W_g x_t + U_g ( r_t ⊙ h_(t-1) ) + b_g )
h_t = (1 - z_t) ⊙ h_(t-1) + z_t ⊙ g_t

where r_t is the reset gate, z_t is the update gate, g_t is the current memory content, and h_t is the hidden state vector at time t, i.e. the output at the current moment; h_(t-1) is the hidden state vector at the previous moment; x_t is the input vector at time t; U_r, U_z and U_g are the weight matrices applied to the hidden state vector at time t-1 for the reset gate, the update gate and the current memory content, respectively; W_r, W_z and W_g are the weight matrices applied to the input vector at time t for the reset gate, the update gate and the current memory content, respectively; b_r, b_z and b_g are the corresponding bias terms; and ⊙ denotes element-wise multiplication.
S312: the nonlinear activation function σ is the sigmoid function and tanh is the hyperbolic tangent function, expressed as

σ(x) = 1 / (1 + e^(-x)),  tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
In step S32, the attention mechanism model is computed in the following 3 steps:
S321: when predicting the outbound volume at the j-th future moment, the hidden state vectors h_1, ..., h_p of the encoder over the past p moments and the hidden state vector s_(j-1) of the decoder at the previous moment are taken as inputs to calculate the attention weights:

α_(j,i) = exp( score(s_(j-1), h_i) ) / Σ_k exp( score(s_(j-1), h_k) )

where α_(j,i) is the attention weight, which specifies how much weight the encoder hidden state vector h_i should be given when the prediction for the j-th moment is made. Many score functions can be used in the attention mechanism; the invention selects the simple and effective dot function, expressed as

score(s_(j-1), h_i) = s_(j-1)^T h_i

S322: a context vector is output for each prediction moment. The context vector for the j-th prediction moment is expressed as

c_j = Σ_i α_(j,i) h_i

S323: the final predicted value is calculated from the context vector and the hidden state vector of the decoder at the current moment, expressed as

a_j = tanh( W_c [ c_j ; s_j ] ),  ŷ_j = W_s a_j

where a_j is the attention hidden state vector of the j-th time step, W_c and W_s are weight matrices, and ŷ_j is the predicted value of the j-th time step.
In S4, the evaluation indices include the root mean square error (RMSE), the accuracy and the coefficient of determination (R²). The three evaluation indices are calculated as follows:

RMSE = sqrt( (1/n) Σ_t ( y_t - ŷ_t )² )
Accuracy = 1 - ||y - ŷ||_1 / ||y||_1
R² = 1 - Σ_t ( y_t - ŷ_t )² / Σ_t ( y_t - ȳ )²

where ||·||_1 denotes the 1-norm, y_t denotes the actual outbound volume, ȳ denotes the average outbound volume, and ŷ_t denotes the predicted outbound volume.
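For illustration, the three evaluation indices as defined above could be computed as follows (a sketch; the accuracy uses the 1-norm, following the description):

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def accuracy(y_true, y_pred):
    # 1-norm based accuracy, following the definition above
    return 1.0 - np.abs(y_true - y_pred).sum() / np.abs(y_true).sum()

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```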
Taking the Beijing subway network as an example, 12 key stations are selected for analysis, and all AFC data and train schedule data of the 12 stations from 5:00 to 23:00 in March 2019 are collected. Passenger flow data with 15-minute granularity are obtained from the AFC data, and the adjacency matrix is constructed from the train schedule data.
Further, the passenger flow data take the form of a matrix in which each row represents the flow of all stations at a specific time and each column represents the flow of a specific station at all times.
Further, the adjacency matrix is a 12×12 matrix describing the relationships between stations. The adjacency matrix constructed from the train schedule and normalized is shown in Table 1.
TABLE 1
Further, 80% of the passenger flow data is taken as a training set, and the remaining 20% is taken as a test set. The training set is used to train weights in the neural network, and the test set is used to calculate evaluation indexes to check the performance of the model.
Fig. 12 is the prediction fitting curve of this embodiment, showing the prediction results for March 26.
The method of this embodiment is compared with prior art methods in two respects: the overall prediction results and the single-station prediction results.
the method of the present invention is compared with an Integrated moving average Autoregressive model (ARIMA), Support Vector Regression (SVR), Gated round robin Unit (GRU), graph convolution Gated round robin Unit (GCGRU), and a Gated round robin Unit based on Attention machine system (GRUA). For convenience of description, the data-driven space-time short-time passenger flow prediction method and device disclosed by the invention are replaced by the abbreviation STGGA.
Further, the results of the method of the present invention and other comparative methods are shown in Table 2
TABLE 2
Overall comparison:
the performance of the method of this example was compared to a linear method (e.g. ARIMA). The root mean square error of the method is reduced by 88.2%, so the method has better performance than an ARIMA method.
The performance of the method of the present embodiment is compared with that of a non-linear correlation method (i.e., SVR) without time information capture capability. Taking the prediction result of 15-minute granularity data as an example, the root mean square error of the method is reduced by 82.1%, and the method has better performance than the SVR method.
The performance of the method of the present embodiment is compared with that of the non-linear correlation method with time information capture capability (i.e., GRU and GRUA). Taking the prediction result of 15-minute granularity data as an example, the root mean square error of the method is reduced by 40.78 percent and 5.43 percent respectively compared with the GRU and the GRUA; the precision is respectively improved by 10.30 percent and 0.79 percent.
The method of the present embodiment is compared with a method that does not include an attention mechanism (i.e., the GCGRU method). Take the predicted results of 15 minute granularity data as an example. Compared with the GCGRU method, the method disclosed by the invention has the advantages that the root mean square error is reduced by 12.7%, and the precision is improved by 2.0%. The results show that the attention mechanism can improve the prediction accuracy.
Single-station result comparison:
The single-station prediction results of the method of this embodiment are compared with those of the GRUA method. As shown in Fig. 13 and Fig. 14, the prediction performance of the method of this embodiment is better than that of GRUA at every station. For every station, the root mean square error of the proposed method is smaller than that of GRUA; the largest root mean square error difference occurs at the compound gate station, where the root mean square error of the proposed method is 26.1% lower than that of GRUA. Likewise, for every station the determination coefficient of the proposed method is higher than that of GRUA; the largest difference in determination coefficients occurs at the rift station, where the determination coefficient of the proposed method is 10.4% higher than that of GRUA.
The method comprises a graph convolutional neural network, gated recurrent units, an attention mechanism and a sequence-to-sequence structure. The graph convolutional neural network and the gated recurrent network are combined to capture the spatio-temporal correlations in the subway network, and the attention mechanism and the sequence-to-sequence structure are added to improve interpretability and extensibility. Finally, the performance of the method is verified with data collected from the Beijing subway system. In terms of overall prediction, the method of this embodiment is superior to the other comparison methods (ARIMA, SVR, GRU, GCGRU and GRUA), with the root mean square error improved by at least 5.4%. For the passenger flow prediction results of each station, the root mean square error and determination coefficient of the method are also superior to those of the other comparison models.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and devices may refer to the corresponding processes in the previous method embodiment, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (10)
1. A rail transit spatio-temporal short-time passenger flow prediction method, characterized by comprising the following steps:
acquiring inbound data and train schedule data for a historical time period;
constructing an adjacency matrix based on travel time according to the train schedule data;
normalizing the inbound data and the adjacency matrix to obtain a normalized inbound matrix and a normalized adjacency matrix;
extracting a spatial feature matrix from the normalized inbound matrix and adjacency matrix with a graph convolutional neural network;
and extracting the temporal features of the spatial feature matrix with a sequence-to-sequence model based on gated recurrent units and an attention mechanism to predict the outbound volume at the current moment.
2. The rail transit spatio-temporal short-time passenger flow prediction method according to claim 1, characterized in that the sequence-to-sequence model comprises an encoder composed of gated recurrent units and a decoder composed of gated recurrent units.
3. The rail transit spatio-temporal short-time passenger flow prediction method according to claim 1, wherein extracting the spatial feature matrix from the normalized inbound matrix and adjacency matrix with the graph convolutional neural network comprises:
feeding the normalized inbound matrix, as the feature matrix of the initial layer, into the initial layer of the graph convolutional neural network;
constructing the feature matrix of the first layer of the graph convolutional neural network based on the inbound matrix, the weight matrix of the initial layer, and the normalized adjacency matrix and degree matrix;
and feeding the feature matrix of the first layer into the first layer of the graph convolutional neural network, repeating the previous step, and outputting the feature matrix of the second layer as the spatial feature matrix.
4. The rail transit spatio-temporal short-time passenger flow prediction method according to claim 2, wherein extracting the temporal features of the spatial feature matrix with the sequence-to-sequence model based on gated recurrent units and the attention mechanism to predict the outbound volume at the current moment comprises:
inputting the spatial feature matrix into the encoder composed of gated recurrent units to obtain encoding results for all moments of the historical time period;
inputting the encoding results into an attention mechanism model;
and inputting the output of the attention mechanism model into the decoder composed of gated recurrent units to predict the outbound volume at the current moment.
5. The rail transit spatio-temporal short-time passenger flow prediction method according to claim 4, wherein inputting the spatial feature matrix into the encoder composed of gated recurrent units to obtain the encoding results for all moments of the historical time period comprises:
transposing the spatial feature matrix into a reset gate weight matrix, an update gate weight matrix and a current memory content weight matrix, respectively;
obtaining a reset gate based on the reset gate weight matrix, the hidden state vector at the previous moment and the row vector of the spatial feature matrix at the current moment;
obtaining an update gate based on the update gate weight matrix, the hidden state vector at the previous moment and the row vector of the spatial feature matrix at the current moment;
obtaining the current memory content based on the current memory content weight matrix, the hidden state vector at the previous moment, the row vector of the spatial feature matrix at the current moment and the reset gate;
and outputting the encoding result based on the hidden state vector at the previous moment, the update gate and the current memory content.
6. The rail transit space-time short-time passenger flow prediction method according to claim 5, characterized in that the reset gate and the update gate are calculated with a sigmoid nonlinear activation function, and the current memory content is calculated with a hyperbolic tangent nonlinear activation function.
7. The rail transit space-time short-time passenger flow prediction method according to claim 4, wherein inputting the output of the attention mechanism model into the decoder composed of gated recurrent units to predict the outbound volume at the current time comprises:
calculating attention weights based on the hidden state vectors of the encoder at all times of the historical time period and the hidden state vector of the decoder at the previous time;
calculating a context vector for the current time based on the hidden state vector of the encoder at the current time and the attention weights;
and predicting the outbound volume at the current time based on the weight matrix of the decoder, the context vector and the hidden state vector of the decoder at the current time.
8. A rail transit space-time short-time passenger flow prediction device, characterized by comprising:
an acquisition module, configured to acquire inbound data and train schedule data for a historical time period;
a construction module, configured to construct a travel-time-based adjacency matrix from the train schedule data;
a standardization module, configured to standardize the inbound data and the adjacency matrix to obtain a standardized inbound matrix and a standardized adjacency matrix;
a spatial feature extraction module, configured to extract a spatial feature matrix from the standardized inbound matrix and adjacency matrix by using a graph convolutional neural network;
and a prediction module, configured to extract temporal features from the spatial feature matrix by using a sequence-to-sequence model based on gated recurrent units and an attention mechanism, so as to predict the outbound volume at the current time.
9. An electronic device, comprising:
at least one processor; and
a memory coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed, implements the method of any one of claims 1-7.
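As a reading aid for the graph-convolution step recited in claims 1 and 3, the following Python/NumPy sketch shows a two-layer graph convolution over a travel-time-based adjacency matrix. It is not part of the claims or the disclosed embodiment; the station count, time-window length, travel-time threshold, z-score standardization and weight shapes are illustrative assumptions only.

```python
import numpy as np

def normalize_adjacency(adjacency: np.ndarray) -> np.ndarray:
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}, using the degree matrix D."""
    a_hat = adjacency + np.eye(adjacency.shape[0])
    degree = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(degree))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def gcn_layer(a_norm: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: ReLU(A_norm @ H @ W)."""
    return np.maximum(a_norm @ features @ weights, 0.0)

# Illustrative sizes: 5 stations, inbound counts over 4 past time steps (assumed).
rng = np.random.default_rng(0)
travel_time = rng.uniform(2, 30, size=(5, 5))            # minutes between stations (assumed values)
adjacency = (travel_time < 15).astype(float)             # travel-time-based adjacency (assumed threshold)
np.fill_diagonal(adjacency, 0.0)
inbound = rng.integers(0, 200, size=(5, 4)).astype(float)  # inbound passenger counts per station and time step

# Standardize the inbound matrix; z-score is an assumed choice, the claim only requires standardization.
inbound_std = (inbound - inbound.mean()) / inbound.std()

a_norm = normalize_adjacency(adjacency)
w0 = rng.normal(size=(4, 8))   # initial-layer weight matrix (assumed width)
w1 = rng.normal(size=(8, 4))   # first-layer weight matrix (assumed width)

h1 = gcn_layer(a_norm, inbound_std, w0)        # feature matrix of the first layer
spatial_features = gcn_layer(a_norm, h1, w1)   # second-layer output used as the spatial feature matrix
print(spatial_features.shape)                  # (5, 4)
```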
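The gate computations described in claims 5 and 6 follow the standard gated-recurrent-unit update. A minimal NumPy sketch is given below; the hidden size, weight shapes and the concatenation-based parameterization are assumptions for illustration, not the application's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, w_r, w_z, w_h):
    """One GRU time step driven by the previous hidden state and the current input row."""
    concat = np.concatenate([h_prev, x_t])
    r_t = sigmoid(w_r @ concat)                                       # reset gate (sigmoid, claim 6)
    z_t = sigmoid(w_z @ concat)                                       # update gate (sigmoid, claim 6)
    h_cand = np.tanh(w_h @ np.concatenate([r_t * h_prev, x_t]))       # current memory content (tanh, claim 6)
    return (1.0 - z_t) * h_prev + z_t * h_cand                        # encoding result at time t

rng = np.random.default_rng(1)
hidden, feat = 6, 4
w_r = rng.normal(size=(hidden, hidden + feat))   # reset gate weight matrix (assumed shape)
w_z = rng.normal(size=(hidden, hidden + feat))   # update gate weight matrix (assumed shape)
w_h = rng.normal(size=(hidden, hidden + feat))   # current-memory-content weight matrix (assumed shape)

# Encode each row of the spatial feature matrix; one row per past time step is an assumed layout.
spatial_features = rng.normal(size=(4, feat))
h = np.zeros(hidden)
encoder_states = []
for x_t in spatial_features:
    h = gru_step(x_t, h, w_r, w_z, w_h)
    encoder_states.append(h)
print(len(encoder_states), encoder_states[-1].shape)  # 4 encoding results, each of size `hidden`
```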
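For the attention-and-decoder read-out of claim 7, a dot-product attention sketch is shown below: attention weights are formed from the encoder hidden states and the decoder's previous hidden state, a context vector is computed, and a linear read-out yields the predicted outbound volume. The dot-product scoring function and the linear output layer are assumptions; the claim does not fix either.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_and_predict(encoder_states, dec_h_prev, w_out):
    """Dot-product attention over encoder states followed by a linear read-out."""
    scores = np.array([dec_h_prev @ h_s for h_s in encoder_states])          # alignment scores
    weights = softmax(scores)                                                # attention weights (claim 7)
    context = (weights[:, None] * np.asarray(encoder_states)).sum(axis=0)    # context vector
    dec_input = np.concatenate([context, dec_h_prev])
    return w_out @ dec_input                                                 # predicted outbound volume per station

rng = np.random.default_rng(2)
hidden, n_stations = 6, 5
encoder_states = [rng.normal(size=hidden) for _ in range(4)]   # e.g. outputs of the GRU encoder sketch above
dec_h_prev = rng.normal(size=hidden)                           # decoder hidden state at the previous time
w_out = rng.normal(size=(n_stations, 2 * hidden))              # decoder read-out weight matrix (assumed shape)
print(attend_and_predict(encoder_states, dec_h_prev, w_out))
```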
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010860224.5A CN111738535A (en) | 2020-08-25 | 2020-08-25 | Method, device, equipment and storage medium for predicting rail transit time-space short-time passenger flow |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010860224.5A CN111738535A (en) | 2020-08-25 | 2020-08-25 | Method, device, equipment and storage medium for predicting rail transit time-space short-time passenger flow |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111738535A true CN111738535A (en) | 2020-10-02 |
Family
ID=72658751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010860224.5A Pending CN111738535A (en) | 2020-08-25 | 2020-08-25 | Method, device, equipment and storage medium for predicting rail transit time-space short-time passenger flow |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111738535A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886444A (en) * | 2018-12-03 | 2019-06-14 | 深圳市北斗智能科技有限公司 | A kind of traffic passenger flow forecasting, device, equipment and storage medium in short-term |
CN110866649A (en) * | 2019-11-19 | 2020-03-06 | 中国科学院深圳先进技术研究院 | Method and system for predicting short-term subway passenger flow and electronic equipment |
Non-Patent Citations (1)
Title |
---|
桎皓: "《IT610》", 12 August 2020 * |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112257614A (en) * | 2020-10-26 | 2021-01-22 | 中国民航大学 | Station building passenger flow space-time distribution prediction method based on graph convolution network |
CN112257614B (en) * | 2020-10-26 | 2022-05-17 | 中国民航大学 | Station building passenger flow space-time distribution prediction method based on graph convolution network |
CN112488185A (en) * | 2020-11-27 | 2021-03-12 | 湖南大学 | Method, system, electronic device and readable storage medium for predicting vehicle operating parameters including spatiotemporal characteristics |
CN112488185B (en) * | 2020-11-27 | 2024-04-26 | 湖南大学 | Method and system for predicting vehicle operating parameters including spatiotemporal characteristics |
CN112651577A (en) * | 2021-01-08 | 2021-04-13 | 重庆邮电大学 | Tunnel deformation prediction method based on fusion spatio-temporal data |
CN112651577B (en) * | 2021-01-08 | 2022-03-22 | 重庆邮电大学 | Tunnel deformation prediction method based on fusion spatio-temporal data |
CN112766597B (en) * | 2021-01-29 | 2023-06-27 | 中国科学院自动化研究所 | Bus passenger flow prediction method and system |
CN112766597A (en) * | 2021-01-29 | 2021-05-07 | 中国科学院自动化研究所 | Bus passenger flow prediction method and system |
CN112905659A (en) * | 2021-02-05 | 2021-06-04 | 希盟泰克(重庆)实业发展有限公司 | Urban rail transit data analysis method based on BIM and artificial intelligence |
CN112907056B (en) * | 2021-02-08 | 2022-07-12 | 之江实验室 | Urban management complaint event prediction method and system based on graph neural network |
CN112907056A (en) * | 2021-02-08 | 2021-06-04 | 之江实验室 | Urban management complaint event prediction method and system based on graph neural network |
CN112965888A (en) * | 2021-03-03 | 2021-06-15 | 山东英信计算机技术有限公司 | Method, system, device and medium for predicting task quantity based on deep learning |
CN113051474A (en) * | 2021-03-24 | 2021-06-29 | 武汉大学 | Passenger flow prediction method and system fusing multi-platform multi-terminal search indexes |
CN113051474B (en) * | 2021-03-24 | 2023-09-15 | 武汉大学 | Passenger flow prediction method and system integrating multi-platform multi-terminal search indexes |
CN113449905A (en) * | 2021-05-21 | 2021-09-28 | 浙江工业大学 | Traffic jam early warning method based on gated cyclic unit neural network |
CN113435502A (en) * | 2021-06-25 | 2021-09-24 | 平安科技(深圳)有限公司 | Site flow determination method, device, equipment and storage medium |
CN113556266A (en) * | 2021-07-16 | 2021-10-26 | 北京理工大学 | Traffic matrix prediction method taking traffic engineering as center |
CN114973653B (en) * | 2022-04-27 | 2023-12-19 | 中国计量大学 | Traffic flow prediction method based on space-time diagram convolutional network |
CN114973653A (en) * | 2022-04-27 | 2022-08-30 | 中国计量大学 | Traffic flow prediction method based on space-time graph convolution network |
CN115116212A (en) * | 2022-05-06 | 2022-09-27 | 浙江科技学院 | Traffic prediction method for road network, computer device, storage medium and program product |
CN114881330A (en) * | 2022-05-09 | 2022-08-09 | 华侨大学 | Neural network-based rail transit passenger flow prediction method and system |
CN116110588A (en) * | 2022-05-10 | 2023-05-12 | 北京理工大学 | Medical time sequence prediction method based on dynamic adjacency matrix and space-time attention |
CN116110588B (en) * | 2022-05-10 | 2024-04-26 | 北京理工大学 | Medical time sequence prediction method based on dynamic adjacency matrix and space-time attention |
CN115526382B (en) * | 2022-09-09 | 2023-05-23 | 扬州大学 | Road network level traffic flow prediction model interpretability analysis method |
CN115526382A (en) * | 2022-09-09 | 2022-12-27 | 扬州大学 | Interpretability analysis method of road network traffic flow prediction model |
CN115620525B (en) * | 2022-12-16 | 2023-03-10 | 中国民用航空总局第二研究所 | Short-time traffic passenger demand prediction method based on time-varying dynamic Bayesian network |
CN115620525A (en) * | 2022-12-16 | 2023-01-17 | 中国民用航空总局第二研究所 | Short-time traffic passenger demand prediction method based on time-varying dynamic Bayesian network |
CN116050672A (en) * | 2023-03-31 | 2023-05-02 | 山东银河建筑科技有限公司 | Urban management method and system based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111738535A (en) | Method, device, equipment and storage medium for predicting rail transit time-space short-time passenger flow | |
CN109492830B (en) | Mobile pollution source emission concentration prediction method based on time-space deep learning | |
CN111223301B (en) | Traffic flow prediction method based on graph attention convolution network | |
CN114330671A (en) | Traffic flow prediction method based on Transformer space-time diagram convolution network | |
CN108564227B (en) | Rail transit passenger flow volume prediction method based on space-time characteristics | |
CN116128122B (en) | Urban rail transit short-time passenger flow prediction method considering burst factors | |
CN114692984A (en) | Traffic prediction method based on multi-step coupling graph convolution network | |
CN112988851B (en) | Counterfactual prediction model data processing method, device, equipment and storage medium | |
CN113762338A (en) | Traffic flow prediction method, equipment and medium based on multi-graph attention mechanism | |
CN113239897B (en) | Human body action evaluation method based on space-time characteristic combination regression | |
CN114783608A (en) | Construction method of slow patient group disease risk prediction model based on graph self-encoder | |
CN110019420A (en) | A kind of data sequence prediction technique and calculate equipment | |
CN111723667A (en) | Human body joint point coordinate-based intelligent lamp pole crowd behavior identification method and device | |
CN116227180A (en) | Data-driven-based intelligent decision-making method for unit combination | |
CN112529284A (en) | Private car residence time prediction method, device and medium based on neural network | |
TW200814708A (en) | Power save method and system for a mobile device | |
CN115375020A (en) | Traffic prediction method and system for rail transit key OD pairs | |
CN115348182A (en) | Long-term spectrum prediction method based on depth stack self-encoder | |
CN111141879A (en) | Deep learning air quality monitoring method, device and equipment | |
CN110991729A (en) | Load prediction method based on transfer learning and multi-head attention mechanism | |
CN112182498B (en) | Old people nursing device and method based on network representation learning | |
CN115080795A (en) | Multi-charging-station cooperative load prediction method and device | |
Wang et al. | TATCN: time series prediction model based on time attention mechanism and TCN | |
CN113591391A (en) | Power load control device, control method, terminal, medium and application | |
CN116227738B (en) | Method and system for predicting traffic interval of power grid customer service |
Legal Events
Date | Code | Title | Description |
---|---|---|---
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201002 |