WO2019228848A1 - Traffic management system - Google Patents

Traffic management system

Info

Publication number
WO2019228848A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
data
models
traffic management
management system
Prior art date
Application number
PCT/EP2019/063039
Other languages
English (en)
Inventor
Shaun HOWELL
Original Assignee
Vivacity Labs Limited
Priority date
Filing date
Publication date
Application filed by Vivacity Labs Limited filed Critical Vivacity Labs Limited
Publication of WO2019228848A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0145 Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/07 Controlling traffic signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Definitions

  • The present invention relates to predicting future states, for example congestion, in a traffic system.
  • Sensors can be provided which detect when there is queuing in a particular part of a road system, and more generally provide details of traffic speed and density. Sensors can also show which parts of road systems are used most by cars, larger vehicles, bicycles, motorcycles, pedestrians and so on. They can be used to detect behaviours such as lane changing taking place at particular locations. Incidents such as collisions or breakdowns can also be detected in near real time.
  • The Applicant’s co-pending application WO2018051200 discloses a method of using an image capture unit to identify different road users and produce data relating to those road users.
  • A control centre in a large city or for a network of major trunk roads will typically use data from such sensors, together with collateral information from police, the public, and online services such as Google (RTM) Maps, to understand in near real time what is going on in the road system and to make appropriate decisions.
  • a lower speed limit may be temporarily imposed to reduce the risk of collisions and try to avoid a stop-start state developing.
  • As a result of a collision or breakdown lanes may be closed and/or diversions put into place.
  • Known machine-learning approaches to traffic prediction include: recurrent neural networks, particularly those including long short-term memory (LSTM) cells; feed-forward deep neural networks and other types of recurrent neural network; k-nearest-neighbours models; pure convolutional models; support vector regression models; and Bayesian inference.
  • the state-of-the-art machine-learning traffic prediction products generally create a single model, which is updated infrequently with a historical view of the data - if it is updated at all. These products tend to lose prediction accuracy over time, since changes to road systems and road use lead to the model becoming invalid. Prediction accuracy in response to unusual events can be poor. Also, in a network of sensors around a road system it is common for sensors to come on- and off-line unpredictably due to faults in the sensors, obstructions, power supply issues, and so on. The input data dimensionality is not fixed and some data streams may be incomplete. These issues lead to a lack of prediction accuracy in state-of-the-art machine-learning traffic prediction products.
  • a traffic management system for a road network comprising: a plurality of sensors for monitoring vehicles and/or other users of the road network; a traffic prediction system; and active traffic management means, the active traffic management means including outputs for controlling the vehicles and/or other users of the road network, the sensors providing inputs to the traffic prediction system, the traffic prediction system providing estimates of a future state of the road network, and the active traffic management means using outputs to control the vehicles and/or other users of the road network in response to the estimates of future state, characterised in that the traffic prediction system includes a prediction module adapted to apply a machine learning model to the input data from the sensors to produce predictions, and a stochastic model generation module adapted to generate, train and test machine learning models using the input data from the sensors, the model generation module sending an updated model to the prediction module whenever an updated and improved model is available, and the prediction module replacing any previous model with the updated model when it is received from the model generation module.
  • the stochastic model generation module may be an evolutionary model generation module.
  • the sensors may be similar to those described in WO2018051200. They may be camera-based sensors including entity detection, for example capable of identifying cars, bicycles, pedestrians, vans, buses, lorries, etc in the field of view of the camera. The sensors are preferably capable of detecting the type, speed and direction at least of each entity in the field of view of the sensor.
  • Sensors may be placed, for example, at intersections in a “grid” system of roads. Placement of sensors will depend on the characteristics of the particular road network. In some embodiments, instead of or as well as “infrastructure mounted” sensors such as those in WO2018051200, the sensors may include vehicle sensors such as smart phones, connected vehicles or telematics boxes.
  • The outputs of the active traffic management means may include, at a basic level, overhead signs at locations in the road network for providing information to road users, closing lanes, setting temporary speed limits, and similar.
  • Other outputs may include for example speed enforcement cameras, traffic signals, and movable barriers for physically opening or closing links on the road network to different types of traffic.
  • the active traffic management means may include transmitters for sending data directly to vehicles.
  • the data sent to the vehicle may just be displayed on an in-vehicle display, but it is also possible for some vehicles, for example autonomous, semi-autonomous or driverless vehicles, to be controlled directly by the active traffic-management means, for example to cause the vehicle to take a certain route to its destination, or to limit its speed.
  • the models produced by the evolutionary model generation module and executed by the prediction module may be a type of neural network, for example a recurrent neural network including long short-term memory (LSTM) cells.
  • The prediction model may run repeatedly on new data. As soon as the model has been executed and predictions generated, the model will be run again on the most up-to-date data. It is found that in practical embodiments, all steps of the prediction process may take around 60 seconds. By the time a prediction has been generated, therefore, there will be another 60 seconds or so worth of more up-to-date data which can be used as the basis for the next prediction.
  • the sensors continually feed data into a database and the prediction model has access to that database to retrieve data each time the model is executed.
  • the prediction module may carry out a step of cleaning input data before the model is executed.
  • the particular characteristics of the model in use (which may change frequently as the evolutionary model generation module supplies new models) will dictate which data streams are required. Not all models will require all data streams available from the input sensors. However, the required data streams need to be in the correct order and with no missing values before a model can be executed. In a large road network spanning for example a city centre or an even larger area, it is likely that at any particular time at least one sensor may be unavailable for one reason or another, and so there may be gaps in the input data. Any missing values may be filled with appropriate data to facilitate running of the model.
  • This may be achieved, for example, using a historical median approach in which the target values are filled by reference to typical values of historical data at that time of the day, on that day of the week, for example. If insufficient data is available, the time window examined in historical data may be iteratively broadened.
  • An alternative way of filling missing data, which is found to be appropriate for relatively short missing periods, is to fill the missing data by interpolating between the time period immediately before the gap and immediately afterwards.
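The two gap-filling strategies above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the data layout (a per-time-slot map of historical observations, and a flat series with `None` marking gaps) is an assumption.

```python
import statistics

def fill_by_historical_median(history, slot, window=1, max_window=12):
    """Fill a missing value with the median of historical values near the
    same time slot, iteratively broadening the window until data is found.
    `history` maps time-slot index -> list of past observations (assumed schema)."""
    while window <= max_window:
        candidates = []
        for offset in range(-window + 1, window):
            candidates.extend(history.get(slot + offset, []))
        if candidates:
            return statistics.median(candidates)
        window += 1  # insufficient data: broaden the examined window
    return None

def fill_by_interpolation(series):
    """Fill short gaps (None values) by linear interpolation between the
    value immediately before the gap and the value immediately after it."""
    filled = list(series)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i
            while i < len(filled) and filled[i] is None:
                i += 1
            if start > 0 and i < len(filled):  # gap bounded on both sides
                lo, hi = filled[start - 1], filled[i]
                gap = i - start + 1
                for k in range(start, i):
                    filled[k] = lo + (hi - lo) * (k - start + 1) / gap
        else:
            i += 1
    return filled
```

As the text notes, interpolation suits relatively short gaps, while the broadening historical-median approach suits longer outages.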
  • the data cleaning step can be carried out incrementally where the same prediction model is being run on updated data.
  • The data cleaning step may also include a sorting step, since due to transmission irregularities from the sensors it may be that the “new data” which has been received in the last 60 seconds or so in some data streams relates to events which took place at an earlier time. In this case only the last 60 seconds or so of data will need to be cleaned.
  • When a new model becomes available, the relevant database tables may be dropped and re-built, the entire cleaning process being carried out afresh. This is because the new model may include a different subset of data streams, and/or require the data to be ordered differently.
  • historical data may be periodically checked, and cleaned if required.
  • This periodic cleaning may occur even on data which was already cleaned, since the filling in of missing data can be improved once there is more data available from which estimates based on averages can be derived. For that reason, data which is “filled in” is preferably marked as an estimate, to allow that estimate to be revised or improved later, when more data is available.
  • the prediction module may carry out a further data preparation step of downsampling the data.
  • the size of the timestep required may be a hyperparameter which is provided whenever a new model is provided to the prediction module.
  • the size of the timestep may therefore be larger than the maximum resolution of the available data, in which case a downsampling step needs to take place.
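A minimal downsampling step might look like the sketch below. The choice of aggregation (summing, as for vehicle counts) is an assumption; the text does not specify how readings within a timestep are combined.

```python
def downsample(values, factor, agg=sum):
    """Downsample a regularly-sampled stream by grouping `factor`
    consecutive readings into one aggregated value. `factor` follows
    from the timestep hyperparameter divided by the native resolution."""
    return [
        agg(values[i:i + factor])
        for i in range(0, len(values) - factor + 1, factor)
    ]
```

For example, counts at 20-second resolution would use `factor=3` to reach a one-minute timestep, or `agg=max` could be used for a stream where the peak within each timestep matters.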
  • the prediction module may carry out a further data preparation step on the cleaned data before the model is executed.
  • the cleaned data is in the form of a 2D tensor with each column representing an individual data stream and each row representing an individual timestep.
  • The further data preparation step may include ‘lagging’ the data backwards through time by a number of lag steps.
  • the number of lag steps is a hyperparameter which is provided whenever a new model is provided to the prediction module.
  • The result of the ‘lagging’ is a new 3D tensor in which each row contains the value for each data stream for a particular timestamp and also (along the third axis) for one or more previous timestamps.
  • Each input node to the neural network therefore accepts (at each time step) a tuple of values for a particular data stream at times (t, t-1, t-2), for example.
  • the earliest timesteps will now have missing data (if the number of lag steps is two then the earliest and second earliest timesteps will lack the data for their two previous timesteps). These earliest and incomplete rows are removed from the tensor.
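The lagging step can be sketched with NumPy as follows, assuming the cleaned data is already a 2D array with rows as timesteps and columns as data streams; the function name and exact axis layout are illustrative, not taken from the patent.

```python
import numpy as np

def lag_tensor(data_2d, lag_steps):
    """Turn a 2D array (timesteps x streams) into a 3D array in which
    each row also carries `lag_steps` previous timesteps along the third
    axis, so position [i, s, k] holds stream s at time (t - k). The
    earliest rows, which would lack a full history, are dropped."""
    t, _streams = data_2d.shape
    return np.stack(
        [data_2d[lag_steps - k : t - k] for k in range(lag_steps + 1)],
        axis=2,
    )  # shape: (t - lag_steps, streams, lag_steps + 1)
```

With `lag_steps=2`, each input position supplies the tuple (t, t-1, t-2) described above, and the two earliest, incomplete rows are removed.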
  • Another data preparation step may include augmenting the sensor data with collateral data.
  • collateral data include weather-related data such as rainfall, temperature, visibility etc.
  • the collateral data may be obtained from internet services but it will be appreciated that in some cases individual sensors may include data streams for things like temperature.
  • Other collateral sources may include, for example, social media or news feeds. Fairly simple filters could be applied to these feeds to pick up words like “accident” or “congestion” as well as particular placenames. More complex classifiers could be applied to attempt to better filter this type of data. Alternatively, a filtering/reporting process could take place manually.
  • Sports fixtures and details of similar events are another type of collateral data which might be usefully added to the data collected from sensors.
  • The prediction module feeds the input tensor into the neural-network prediction model.
  • This prediction can in turn be fed to the active traffic management means so that appropriate action can be taken - for example, if the model predicts a very large volume of traffic on a particular road in ten minutes’ time, then it may be useful to reduce the speed limit on that stretch and/or to open hard shoulder lanes, for example.
  • the purpose of the evolutionary model generation module is to produce, train and test candidate models to search for the best possible model based on current circumstances.
  • The model generation module makes use of the same raw input data from the sensors as the prediction module - though not necessarily the same cleaned/filtered/prepared data. It uses the same process of replacing any missing or incorrect values with typical values derived from historical data or interpolated.
  • the data may be truncated to, for example, the last year.
  • the model generation module may carry out a step of filtering available data streams. In other words, a subset of the available data streams may be selected for possible inclusion in the models.
  • Various criteria may be used for the filtering process, for example:
  • “Usefulness” scores may be available for different data streams. A particular data stream may be useful if predicting future values for that data stream has a particular operational purpose. These scores may be to some extent manually applied, and may put a constraint on the extent to which some of the other factors may cause particular streams to be disregarded.
  • The data stream must be of sufficiently high quality - i.e. the amount of missing or erroneous data must not exceed some threshold.
  • the evolutionary model generation module may carry out a step of creating a random population of candidate models.
  • Each model may be, for example, an LSTM-based recurrent neural network model.
  • the model is defined by a set of model hyperparameters. When the population of random models is generated, the hyperparameters of each model in the population are given random values.
  • the initial values might be based on a heuristic, or be generated otherwise than completely randomly.
  • the initial values in some embodiments may include both non-random and random elements.
  • the size of the population may be configurable. In some embodiments, the population size may be about 200.
  • the combination of any preset configuration and the complete list of randomized hyperparameters will completely define a model, together with the trained parameters.
  • Hyperparameters which are initially randomized may include:
    o Timestep - the frequency to which the data is downsampled;
    o Widths - the list of layer widths, i.e. the number of nodes in each layer of the model;
    o Epochs - the number of epochs for which the model is trained;
    o Optimiser - the name of the optimisation algorithm to use in training. This may be selected from a set of known optimisation algorithms which are available to the model generation module;
    o L1 and L2 reg - the amount of L1 and L2 regularisation to apply to each of the LSTM parts:
      - Kernel - the weights matrices used within the LSTM cell, i.e. at each of its gates;
      - Activation - the output of an LSTM layer;
      - Bias - the bias vectors used within the LSTM cell, i.e. at each of its gates;
      - Recurrent - the weights matrices used at the recurrent connection of each gate.
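Generating a random initial population over such hyperparameters might be sketched as below. The value ranges in `SEARCH_SPACE` are illustrative assumptions, not values from the patent; only the hyperparameter names follow the list above.

```python
import random

# Hypothetical search space: each entry is either a list of allowable
# values or a callable that draws a random structured value.
SEARCH_SPACE = {
    "timestep_minutes": [1, 5, 15, 60],
    "widths": lambda: [random.randint(8, 128)
                       for _ in range(random.randint(1, 4))],
    "epochs": [10, 50, 100],
    "optimiser": ["adam", "rmsprop", "sgd"],
    "l1_reg": [0.0, 1e-4, 1e-3],
    "l2_reg": [0.0, 1e-4, 1e-3],
}

def random_model():
    """Draw one random hyperparameter set from the search space."""
    return {
        name: choices() if callable(choices) else random.choice(choices)
        for name, choices in SEARCH_SPACE.items()
    }

def random_population(size=200):
    """Create the initial random population (about 200 in some embodiments)."""
    return [random_model() for _ in range(size)]
```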
  • For each model, a fitness value is calculated.
  • the fitness value is a measure of the model’s accuracy of prediction.
  • each model is trained and tested according to methods known in the art. Briefly, the input data is separated into three subsets, say subsets A, B and C. The quantity of data in each subset may be for example split in the ratio 50:20:30.
  • set A is training data
  • set B is validation data
  • the training data being used to adjust the parameters in the neural network and the validation data being used to detect and prevent overfitting.
  • Set C is testing data, the testing data being then used to measure the predictive accuracy of the trained neural network and assign a fitness score based on a hyper-loss function.
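The 50:20:30 split of time-ordered data into subsets A (training), B (validation) and C (testing) can be sketched as:

```python
def split_data(rows, ratios=(0.5, 0.2, 0.3)):
    """Split time-ordered data into training (A), validation (B) and
    testing (C) subsets, by default in the 50:20:30 ratio from the text."""
    n = len(rows)
    a_end = int(n * ratios[0])
    b_end = a_end + int(n * ratios[1])
    return rows[:a_end], rows[a_end:b_end], rows[b_end:]
```

The split is contiguous rather than shuffled, which is the usual choice for time-series data so that the model is validated and tested on periods it has not seen.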
  • the models in the population may then be sorted based on fitness.
  • the population is filtered / selected by choosing a predetermined percentage of the best models.
  • the selection includes a random aspect, for example where a predetermined percentage of the best models (by fitness score) are selected along with a random selection of models with lower fitness scores.
  • the model generation function may carry out a mutation phase. For each model left in the population after filtering, a mutation operation may be performed or not with a predetermined probability. For example, in some embodiments there may be a 5% probability in each case that a model will be subject to a mutation.
  • a simple example of one way in which a mutation is implemented is that one of the hyperparameters (chosen at random) is reassigned a random value.
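That simple mutation could be sketched as follows, assuming models are represented as plain hyperparameter dictionaries and the search space as a mapping from hyperparameter name to allowable values (both illustrative representations):

```python
import random

def mutate(model, search_space, probability=0.05):
    """With the given probability (5% in some embodiments), reassign one
    randomly chosen hyperparameter to a random value from the search space."""
    if random.random() >= probability:
        return model  # no mutation this time
    mutated = dict(model)
    name = random.choice(list(search_space))
    mutated[name] = random.choice(search_space[name])
    return mutated
```

Note that the random reassignment may happen to pick the existing value, so at most one hyperparameter changes per mutation.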
  • The model generation function may carry out a crossover phase. After mutation, the number of models in the population is increased to the predetermined population size (for example 200) by crossover operations.
  • One way crossover can be implemented is as follows: for each ‘child’ model, two ‘parent’ models are chosen at random, and each hyperparameter in the child model is set either to the value of that hyperparameter in one parent or to its value in the other parent.
  • The “crossover” process is repeated until enough ‘child’ models have been created for the population to reach its predetermined size (e.g. about 200 models).
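The crossover operation could be sketched like this, again assuming models are plain hyperparameter dictionaries (an illustrative representation):

```python
import random

def crossover(parent_a, parent_b):
    """Make a 'child' model: each hyperparameter is taken at random
    from one parent or the other."""
    return {k: random.choice([parent_a[k], parent_b[k]]) for k in parent_a}

def refill(population, target_size):
    """Repeat crossover until the population is back to its
    predetermined size (e.g. about 200 models)."""
    children = list(population)
    while len(children) < target_size:
        a, b = random.sample(population, 2)  # two distinct parents
        children.append(crossover(a, b))
    return children
```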
  • The models in the population after crossover may be re-tested using a different dataset from the test data used to evaluate fitness during the selection stage. For example, where set C was used as test data in the selection stage, the models may now be retested against dataset B. The end condition, where a ‘final’ population is settled upon and no further evolution occurs, may be the result of this retest. Typically, if some measure of the overall fitness of the population has deteriorated or not improved as a result of a single pass of the evolutionary algorithm (i.e. the selection, mutation and crossover phases), then no further evolutionary stages will occur and the population becomes “final”.
  • the measure of the overall fitness of the population could be for example some kind of average of the fitness of each model as determined by the testing, or an average of the top 50% (or some other proportion) of the models.
  • The state where the overall population fitness is no longer significantly improving is known as ‘hyperconvergence’.
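A hyperconvergence check along these lines might be sketched as follows. Using the mean fitness of the top half of the population is one of the averaging options mentioned above; the sign convention (lower fitness is better) is an assumption, since the text does not fix it.

```python
def has_hyperconverged(fitness_history, top_fraction=0.5):
    """Decide whether to stop evolving by comparing the mean fitness of
    the top fraction of the population before and after the latest pass.
    `fitness_history` is a list of per-generation fitness lists; lower
    values are assumed better. Returns True if the measure has
    deteriorated or not improved."""
    if len(fitness_history) < 2:
        return False  # need at least two generations to compare

    def top_mean(scores):
        best = sorted(scores)[: max(1, int(len(scores) * top_fraction))]
        return sum(best) / len(best)

    return top_mean(fitness_history[-1]) >= top_mean(fitness_history[-2])
```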
  • the best model in the population can be ‘activated’, i.e. sent to the prediction module for live use.
  • The performance of the new ‘best’ model in the evolved population may be compared against the currently ‘activated’ model, to ensure that models in the prediction module are only replaced with better models.
  • Other stochastic optimisation algorithms may be used in different embodiments.
  • For example, a stochastic optimisation algorithm based on a gradient descent method may be suitable.
  • Figure 1 shows a visualisation of a predictive model, and changes which may be made to the model as part of an evolutionary process
  • Figure 2 shows a visualisation of inputs to a predictive model, in which one of the inputs becomes unavailable and the predictive model is updated as a result;
  • Figure 3 shows a visualisation of inputs to a predictive model, in which the model is updated to take account of a greater time period in each input stream; and
  • Figure 4 is a flowchart illustrating the process of creating new models by an evolutionary process.
  • Figure 1 illustrates a predictive model in the form of a neural network 10.
  • the neural network 10 may be a recurrent neural network made from LSTM (Long Short-Term Memory) cells.
  • Such models are defined in terms of ‘parameters’, which are determined during training of the model using training data, and ‘hyperparameters’, which are fixed properties of the model that are not changed during training.
  • Some of the hyperparameters relate to the ‘structure’ of the model, for example the number of layers and the number of nodes per layer.
  • There are also hyperparameters which define the model pre-training but which do not relate to the ‘visible’ structure as shown in Figure 1 - for example, the number of lag steps applied to the data and the number of epochs for which the model is trained.
  • Hyperparameters also include the data streams to be included as inputs to the model and how they are filtered, the duration of data used in making predictions and the sample rate, and the data streams to be included as outputs (i.e. the data streams the future values of which are being predicted) and the future timescales over which predictions are made.
  • the types of nodes and types of layers are also potential hyperparameters - for example layers can be RNN or CNN layers and nodes can be LSTM or GRU nodes.
  • models 10’ and 10 are two different models having different, but similar, hyperparameters.
  • Model 10’ has an extra layer compared with model 10.
  • Model 10 has two extra nodes in its second layer.
  • the inputs to the models are data streams from traffic sensors.
  • the outputs are multi-timestep multi-variate predictions of a future state of the traffic system.
  • the input data streams in the traffic management system come from a network of traffic sensors 12.
  • these sensors may be located at every intersection on a road network spanning a city centre or an even wider area.
  • the sensors typically communicate wirelessly with a central hub where the prediction module is located.
  • the system is resilient to these problems without an unduly detrimental impact on prediction quality.
  • When an input becomes unavailable, model 10 needs to be changed to require fewer inputs. This means providing a new model 10’.
  • Figure 3 shows another potential change to model hyperparameters and emphasises that the model hyperparameters are not limited to the structure of the model as shown in the schematic visualisation.
  • the time window 14 of historical data in the data streams 16 from the sensors 12 is widened.
  • This is another example of a model hyperparameter that can be changed.
  • the changes to the models are determined by the evolutionary algorithm described below.
  • FIG. 4 is a flowchart of an evolutionary algorithm used for generating models.
  • The model generation as such begins at step 20. Where no population already exists, a population of models is created randomly. Each model in the population will be a different model, i.e. it will have a different set of hyperparameters. Once the models are created, they are trained and tested using training, validation and test data. Once the models are trained, both hyperparameters and parameters are determined for each model, and each model is associated with a score as to its predictive accuracy, based on the testing.
  • Step 22 is a filtering or“selection” stage.
  • The population is reduced in size to save only the “best” models, for example the top 30% of models based on the scores. Additionally, a random selection, for example a random sample of 10%, of the remaining non-best models is saved. The remainder of the models are discarded. Different percentages of the ‘best’ and ‘non-best’ models may be saved in different embodiments.
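The selection stage with those example percentages could be sketched as below, assuming each model carries a `score` where higher is better (the sign convention is an assumption):

```python
import random

def select(scored_models, best_fraction=0.3, lucky_fraction=0.1):
    """Selection stage: keep the top 30% of models by score, plus a
    random 10% sample of the remaining 'non-best' models (percentages
    from the text; both are configurable)."""
    ranked = sorted(scored_models, key=lambda m: m["score"], reverse=True)
    n_best = max(1, int(len(ranked) * best_fraction))
    best, rest = ranked[:n_best], ranked[n_best:]
    lucky = random.sample(rest, int(len(rest) * lucky_fraction)) if rest else []
    return best + lucky
```

Retaining a few lower-scoring models preserves diversity in the population, which is the usual motivation for this kind of stochastic selection.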
  • Step 24 is the “mutation” stage. Not all models have to be mutated. For example, embodiments may cycle through each model and decide that a mutation is to occur, for example with 5% probability. If a mutation is to occur, typically one of the hyperparameters of the model will be changed. The purpose of mutation is to change the model slightly to make a different, but similar, ‘mutated’ model.
  • Step 26 is the “crossover” stage. In this stage the number of models in the population is increased again, typically back up to the preset population size at which the random population was originally created. New ‘child’ models are made by selecting at random two ‘parent’ models, and setting each hyperparameter of the ‘child’ model at random either equal to that hyperparameter in one parent or equal to that hyperparameter in the other parent.
  • Any models which have not yet been trained and tested (i.e. any models which have either been mutated or are new ‘child’ models) are then trained and tested.
  • At step 28, a decision has to be made as to whether to continue the evolutionary process. This is made based on two factors:
  • Hyperconvergence is a state where the overall quality of the population of models is deteriorating, or is no longer significantly improving.
  • The overall quality of the population may be measured based on the scores from training and testing. However, it is found in some embodiments to be preferable to use a different set of testing data for the detection of hyperconvergence than the testing data used to score each individual model during the selection/filtering stage 22.
  • the evolutionary process terminates at step 30 in a state where there is a final population of scored models.
  • One of these models (the one with the top score) is the best model, and this model may be sent to the prediction module to replace the currently running model.
  • a final check is made to ensure that the new best model is better than the currently running model.
  • the best model or a number of the best models may be included in the starting population the next time the evolutionary process is run, rather than starting with a completely random new population.
  • The evolutionary process, in its search for a new best model, will train and test a potentially very large number of models. This is a process which has considerable memory requirements, and typically in implementations garbage collection will be incomplete. Over time this means that resources, in particular memory, will not be completely released for re-use and the process may eventually run out of memory. This memory can typically be recovered by terminating and restarting the process. For that reason, each time a model has been trained and tested, the state of the evolutionary process (i.e. all the models in the population and associated scores, and other state variables) will be saved to a ‘checkpoint’ on disk. If the process should crash then it can be restarted and recovered from the checkpoint (step 19 in Figure 4). As well as allowing recovery from crashes, in particular those caused by memory leaks, this also allows new versions of the software to be deployed without interrupting and restarting the model generation process. Note that hyperconvergence in some embodiments may take, for example, a few weeks to occur.
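Checkpointing of the evolutionary state might be sketched as below. The JSON format and the write-then-rename pattern are implementation assumptions, not details from the patent; the rename keeps the checkpoint intact even if the process crashes mid-write.

```python
import json
import os

def save_checkpoint(path, population, generation):
    """Save the evolutionary state (population of hyperparameter sets
    with scores, plus a generation counter) so the process can be
    restarted after a crash or a software upgrade."""
    state = {"generation": generation, "population": population}
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename: readers never see a partial file

def load_checkpoint(path):
    """Recover the saved state, or None if no checkpoint exists yet."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```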
  • Models are parameterised and serialised for storage. It is therefore unnecessary to hold more than one model in memory at once.
  • The model is also sent in its parameterised and serialised form from the evolutionary model generation module to the prediction module when the model needs to be ‘activated’.
  • Parameterising and serialising the model involves saving the following components:
  • The model’s parameters - i.e. everything about the model which is determined by training the model;
  • The term ‘hyperparameters’ is used to mean anything about a model which is determined by the evolutionary process. In many embodiments this will include aspects of the data streams and their ordering, aspects of the model architecture, and may include other items on the above list. In most implementations, not everything will typically be evolvable as hyperparameters. In a particular implementation, the parts which form a complete definition of a model therefore fall into three broad mutually exclusive categories:
    o Model parameters - everything which is determined by training the model;
    o Hyperparameters - everything which is determined by the evolutionary process;
    o Fixed properties - everything else.
  • Some embodiments may allow some of the “fixed” properties to be manually changed - i.e. they form part of the configuration of a more general system. However, only the parameters and hyperparameters are changed automatically by the model generation module.
  • Configurable properties of the evolutionary model generation module may include:
  • The proportion of “non-best” models which are retained - i.e. the probability of an individual “non-best” model being re-included after the first filtering step;
  • For each hyperparameter: if the value is a number, the minimum and maximum allowable values; if the value is a string, a list of the allowable values;
  • The type(s) of neural network architecture to include.
  • Some of the items in the above list in some embodiments may be evolvable hyperparameters.
  • the implementation of the prediction module and evolutionary model generation module is typically as two programs running simultaneously on the same computer, a database shared by the two programs, and various configuration files used by the programs. Both programs run continuously, iteratively making predictions and producing new models.
  • prediction and/or model generation could take place on different machines, and some aspects of the evolutionary model generation process in particular could be readily parallelised on a cluster of computers.
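A minimal sketch of the two-program arrangement described above: both loops share one database, with the model generation loop publishing candidate models and the prediction loop reading the current best. The schema and the scoring are assumptions for illustration; an in-memory SQLite database stands in for the shared database, and a single iteration of each loop is shown:

```python
import sqlite3

# Shared store that both programs read from and write to (assumed schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE models (id INTEGER PRIMARY KEY, score REAL)")
db.execute("CREATE TABLE predictions (model_id INTEGER, value REAL)")

def model_generation_step(conn):
    # The evolutionary process publishes a new candidate model with its score.
    conn.execute("INSERT INTO models (score) VALUES (?)", (0.91,))
    conn.commit()

def prediction_step(conn):
    # The prediction loop always uses the current best available model.
    best = conn.execute(
        "SELECT id FROM models ORDER BY score DESC LIMIT 1"
    ).fetchone()
    if best is not None:
        conn.execute(
            "INSERT INTO predictions (model_id, value) VALUES (?, ?)",
            (best[0], 42.0),
        )
        conn.commit()

# In the deployed system both loops run continuously and concurrently;
# one iteration of each is shown here.
model_generation_step(db)
prediction_step(db)
```

Communicating through the database rather than directly decouples the two loops, which is what allows prediction and model generation to be moved onto different machines, or the evolutionary search parallelised across a cluster.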
  • the sensor network is arranged so that data from the sensors arrives in the database and becomes available to both programs as quickly as possible.
  • the sensors communicate wirelessly over cellular networks. Due to network conditions it is likely that data will sometimes arrive 'out of order', i.e. sooner from some sensors than from others. The data cleaning and filtering processes described allow for this.
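The tolerance to out-of-order arrival described above can be sketched as a buffer that only releases readings in timestamp order once a watermark has passed them. The watermark rule and the tuple layout are illustrative assumptions, not the system's actual cleaning and filtering pipeline:

```python
def release_in_order(buffered, watermark):
    """Return readings with timestamp <= watermark, sorted by timestamp;
    later readings stay buffered until the watermark advances."""
    ready = sorted(r for r in buffered if r[0] <= watermark)
    pending = [r for r in buffered if r[0] > watermark]
    return ready, pending

# (timestamp, sensor_id, vehicle_count) - sensor "b" reported late over
# the cellular network, so its readings arrived after sensor "a"'s.
buffered = [(100, "a", 12), (130, "a", 9), (110, "b", 4), (125, "b", 7)]
ready, pending = release_in_order(buffered, watermark=125)
```

Downstream consumers then always see a correctly ordered stream, regardless of the order in which readings actually arrived from the cellular network.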
  • the system of the invention is found to make better predictions of future traffic states than state-of-the-art systems.
  • it has good predictive accuracy and can respond to transient and long-term changes in the road system such as roadworks, new roads, new signals, new lanes, etc. It can also respond to events such as collisions, breakdowns, and congestion. It is resilient to unreliable sensor networks and communications systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Physiology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to an active traffic management system. The system comprises sensors in the road network and output means, for example signal gantries, for controlling traffic. Traffic management decisions (i.e. outputs on the output means) are made on the basis of predictions of a future state made by a neural-network-based model. Predictions are made iteratively by the neural network model, and meanwhile an evolutionary model generation process runs iteratively to search for new models. The prediction model is thus constantly updated, and can take account of changes and failures in the sensor network, as well as events and other factors on the road network, to improve prediction accuracy and active traffic management.
PCT/EP2019/063039 2018-05-31 2019-05-21 Traffic management system WO2019228848A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1808854.2A GB2574224B (en) 2018-05-31 2018-05-31 Traffic management system
GB1808854.2 2018-05-31

Publications (1)

Publication Number Publication Date
WO2019228848A1 true WO2019228848A1 (fr) 2019-12-05

Family

ID=62872886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/063039 WO2019228848A1 (fr) 2018-05-31 2019-05-21 Traffic management system

Country Status (2)

Country Link
GB (1) GB2574224B (fr)
WO (1) WO2019228848A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035808A (zh) * 2018-07-20 2018-12-18 上海斐讯数据通信技术有限公司 Deep learning-based traffic light switching method and system
CN111709553A (zh) * 2020-05-18 2020-09-25 杭州电子科技大学 Metro passenger flow prediction method based on a tensor GRU neural network
CN111785014A (zh) * 2020-05-26 2020-10-16 浙江工业大学 DTW-RGCN-based road network traffic data repair method
CN111882869A (zh) * 2020-07-13 2020-11-03 大连理工大学 Deep learning traffic flow prediction method accounting for adverse weather
CN113537580A (zh) * 2021-06-28 2021-10-22 中科领航智能科技(苏州)有限公司 Adaptive graph learning-based public transport passenger flow prediction method and system
CN114726463A (zh) * 2021-01-05 2022-07-08 大唐移动通信设备有限公司 Neural network-based method and apparatus for predicting the spatio-temporal distribution of mobile communication users
WO2022241802A1 (fr) * 2021-05-19 2022-11-24 广州广电运通金融电子股份有限公司 Short-term traffic flow prediction method for a complex road network, storage medium and system
US20220412756A1 (en) * 2021-06-25 2022-12-29 Toyota Jidosha Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
DE102020127407B4 (de) 2020-10-19 2024-04-25 Audi Aktiengesellschaft Method and control device for generating at least one driving order for at least one driverless transport vehicle (AGV) of a modular assembly system
CN117994986B (zh) * 2024-04-07 2024-05-28 岳正检测认证技术有限公司 Traffic flow prediction optimisation method based on an intelligent optimisation algorithm

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489568B (zh) * 2019-01-25 2022-08-02 阿里巴巴集团控股有限公司 Traffic signal control method and apparatus, and computer-readable storage medium
US11984023B2 (en) * 2020-01-26 2024-05-14 Roderick Allen McConnell Traffic disturbances
GB2599121A (en) * 2020-09-24 2022-03-30 Acad Of Robotics Ltd Method and system for training a neural network
CN112712695B (zh) * 2020-12-30 2021-11-26 桂林电子科技大学 Traffic flow prediction method, apparatus and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5668717A (en) * 1993-06-04 1997-09-16 The Johns Hopkins University Method and apparatus for model-free optimal signal timing for system-wide traffic control
US20120072096A1 (en) * 2006-03-03 2012-03-22 Chapman Craig H Dynamic prediction of road traffic conditions
US20140222321A1 (en) * 2013-02-06 2014-08-07 Iteris, Inc. Traffic state estimation with integration of traffic, weather, incident, pavement condition, and roadway operations data
WO2018051200A1 (fr) 2016-09-15 2018-03-22 Vivacity Labs Limited Method and system for analysing the movement of bodies in a traffic system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5513098A (en) * 1993-06-04 1996-04-30 The Johns Hopkins University Method for model-free control of general discrete-time systems
US6269351B1 (en) * 1999-03-31 2001-07-31 Dryken Technologies, Inc. Method and system for training an artificial neural network
CN107945534A (zh) * 2017-12-13 2018-04-20 浙江大学城市学院 GMDH neural network-based traffic flow prediction method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Proceedings of the 2017 SIAM International Conference on Data Mining", 30 June 2017, SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS, Philadelphia, PA, ISBN: 978-1-61197-497-3, article ROSE YU ET AL: "Deep Learning: A Generic Approach for Extreme Condition Traffic Forecasting", pages: 777 - 785, XP055625221, DOI: 10.1137/1.9781611974973.87 *
ANGELINE P J ET AL: "AN EVOLUTIONARY ALGORITHM THAT CONSTRUCTS RECURRENT NEURAL NETWORKS", IEEE TRANSACTIONS ON NEURAL NETWORKS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 5, no. 1, 1 January 1994 (1994-01-01), pages 54 - 64, XP000441909, ISSN: 1045-9227, DOI: 10.1109/72.265960 *
SHAHRIAR AFANDIZADEH ZARGARI ET AL: "A computational intelligence-based approach for short-term traffic flow prediction", EXPERT SYSTEMS., 1 November 2010 (2010-11-01), GB, pages no - no, XP055622289, ISSN: 0266-4720, DOI: 10.1111/j.1468-0394.2010.00567.x *


Also Published As

Publication number Publication date
GB2574224A (en) 2019-12-04
GB2574224B (en) 2022-06-29
GB201808854D0 (en) 2018-07-18

Similar Documents

Publication Publication Date Title
WO2019228848A1 (fr) Traffic management system
CN112700072B (zh) Traffic condition prediction method, electronic device and storage medium
Ramezani et al. Queue profile estimation in congested urban networks with probe data
Sander et al. The potential of clustering methods to define intersection test scenarios: Assessing real-life performance of AEB
US11908317B2 (en) Real-time traffic safety management system
Sun et al. Bus travel speed prediction using attention network of heterogeneous correlation features
WO2021102213A1 (fr) Data-driven determination of cascading congestion effects in a network
Christalin Nelson et al. A novel optimized LSTM networks for traffic prediction in VANET
US10706720B2 (en) Predicting vehicle travel times by modeling heterogeneous influences between arterial roads
Provoost et al. Short term prediction of parking area states using real time data and machine learning techniques
Jagannathan et al. Predicting road accidents based on current and historical spatio-temporal traffic flow data
Silva et al. Interpreting traffic congestion using fundamental diagrams and probabilistic graphical modeling
Briani et al. Inverting the fundamental diagram and forecasting boundary conditions: How machine learning can improve macroscopic models for traffic flow
Waury et al. Assessing the accuracy benefits of on-the-fly trajectory selection in fine-grained travel-time estimation
Hayashi et al. Prioritization of Lane-based Traffic Jam Detection for Automotive Navigation System utilizing Suddenness Index Calculation Method for Aggregated Values
Yidan et al. Bus travel speed prediction using attention network of heterogeneous correlation features
Nejad et al. State space reduction in modeling traffic network dynamics for dynamic routing under its
Hossain et al. Development of a real-time crash prediction model for urban expressway
Satyananda et al. Deep learning to handle congestion in vehicle routing problem: A review
Aziz et al. A data-driven framework to identify human-critical autonomous vehicle testing and deployment zones
Duan et al. Spatiotemporal dynamics of traffic bottlenecks yields an early signal of heavy congestions
Wismans et al. State estimation, short term prediction and virtual patrolling providing a consistent and common picture for traffic management and service providers
Niemann Tygesen et al. Incident congestion propagation prediction using incident reports
Bowman Modeling Traffic Flow With Microscopic Discrete Event Simulation
Abhishek Novel multi input parameter time delay neural network model for traffic flow prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19726362

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19726362

Country of ref document: EP

Kind code of ref document: A1