WO2012162726A1 - Optimising transit priority in a transport network - Google Patents
Optimising transit priority in a transport network
- Publication number
- WO2012162726A1 (PCT/AU2012/000576)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tpa
- level component
- priority
- transit
- heuristic algorithm
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/18—Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2111/00—Details relating to CAD techniques
- G06F2111/06—Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]
Definitions
- the present invention relates generally to methods and systems for allocating space available to vehicles and other entities for movement in a transport network, and in particular to optimising the allocation of exclusive lanes or paths to transit vehicles in such a transport network.
- the invention is suitable for use in the allocation of exclusive lanes to transit vehicles such as buses in a road transport network, and it will be convenient to describe the invention in relation to that exemplary, non-limiting application. It will be appreciated however that the invention is also suitable for use in allocating lanes or paths for rail-mounted vehicles, pedestrians, bicycles and other entities in a variety of transport networks.
- RSA road space allocation
- NDP Network Design Problem
- NP non-deterministic polynomial-time
- a computerised system for optimising transit priority in a network of transport links interconnected by nodes including:
- one or more computing entities including one or more program instruction processing units, and one or more memory devices storing program instructions for performing a heuristic algorithm to determine a Transit Priority Alternative (TPA) defining a combination of transport links on which priority would be provided for transit vehicles in which one or more traffic characteristics are optimised for the network, the heuristic algorithm including
- TPA Transit Priority Alternative
- an upper level component for sequentially generating a plurality of TPAs each defining a different combination of transport links on which priority would be provided for transit vehicles;
- a lower level component for evaluating each TPA using n user behaviour models to evaluate the one or more traffic characteristics for each TPA generated by the upper level component, where n is an integer
- the heuristic algorithm uses the results of the lower level component evaluation of each TPA in the generation of a subsequent TPA.
- a first group of one of more of the computing entities is adapted to process the upper level component of the heuristic algorithm; and a second group of one or more of the computing entities is adapted to process the lower level component of the heuristic algorithm.
- p of the plurality of user behaviour models may be serially evaluated on a same computing entity from the second group, where p is an integer less than or equal to n.
- p of the plurality of user behaviour models may be evaluated in parallel on p independent program instruction processing units of a same computing entity from the second group, where p is the number of processing units on the computing entity and where p is an integer less than or equal to n.
- p of the plurality of user behaviour models is evaluated in parallel on program instruction processing units of p computing entities from the second group, where p is an integer less than or equal to n.
- a computerised method of optimising transit priority in a network of transport links interconnected by nodes in a system including one or more computing entities including one or more program instruction processing units, and one or more memory devices storing program instructions for performing a heuristic algorithm to determine a Transit Priority Alternative (TPA) defining a combination of transport links on which priority would be provided for transit vehicles in which one or more traffic characteristics are optimised for the network, the method including:
- the heuristic algorithm uses the results of the lower level component evaluation of each TPA in the generation of a subsequent TPA.
- a first computing entity for use in a computerised system for optimising transit priority in a network of transport links interconnected by nodes, the computing entity including: one or more program instruction processing units, and one or more memory devices storing program instructions for performing an upper level component of a heuristic algorithm to determine a Transit Priority Alternative (TPA) defining a combination of transport links on which priority would be provided for transit vehicles in which one or more traffic characteristics are optimised for the network, the upper level component sequentially generating a plurality of TPAs each defining a different combination of transport links on which priority would be provided for transit vehicles; and
- TPA Transit Priority Alternative
- communication means for receiving data from at least a second computing entity configured to carry out a lower level component of the heuristic algorithm, the lower level component evaluating each TPA using n user behaviour models to evaluate the one or more traffic characteristics for each TPA generated by the upper level component, where n is an integer,
- the computing entity uses the results of the lower level component evaluation of each TPA in the generation of a subsequent TPA.
- Figure 1 is a schematic diagram of one embodiment of a computerised system for optimising transit priority in a transport network
- Figure 2 is a flow chart depicting steps involved in a heuristic algorithm carried out by the computerised system of Figure 1;
- Figure 3 is a flow chart depicting steps involved in a serial implementation of the heuristic algorithm shown in Figure 2;
- Figure 4 is a flow chart depicting steps involved in a parallel implementation of the heuristic algorithm shown in Figure 2;
- Figure 5 is a schematic diagram of a second embodiment of a computerised system for optimising transit priority in a transport network, adapted to carry out a multithreading implementation of the parallel heuristic algorithm shown in Figure 4;
- Figure 6 is a schematic diagram of a third embodiment of a computerised system for optimising transit priority in a transport network, adapted to carry out a high throughput computing implementation of the parallel heuristic algorithm shown in Figure 4;
- Figure 7 is a schematic diagram showing elements forming part of computing entities within the various computerised systems shown in Figures 1, 5 and 6;
- Figure 8 is a visual representation of a transport network for which the various computerised systems shown in Figures 1, 5 and 6 are adapted to optimise transit priority;
- Figures 9 to 11 are graphs depicting improvements in computation speeds achieved in the above-mentioned embodiments of the computerised system for optimising transit priority in a transport network.
- FIG. 1 there is shown generally a first embodiment of a computerised system 10 for optimising transit priority in a transport network.
- the system 10 includes a first computing entity 12 and a second computing entity 14.
- Each computing entity may be a mainframe computer, desktop computer or any programmable machine designed to sequentially and automatically carry out a sequence of arithmetic or logical operations.
- Each computing entity includes one or more processing units, and one or more memory devices storing program instructions to cause the processing units to execute the program instructions.
- the system 10 carries out a heuristic algorithm to determine a Transit Priority Alternative (TPA) defining a combination of transport links on which priority would be provided for transit vehicles in which one or more traffic characteristics are optimised for the network.
- TPA Transit Priority Alternative
- An exemplary heuristic algorithm 16 is depicted in Figure 2.
- the heuristic algorithm is a genetic algorithm which mimics the process of natural evolution, however it is to be understood that other forms of heuristic algorithms may be used in the context of the present invention.
- the genetic algorithm includes an upper level component 18, executed by computing entity 12, for sequentially generating a plurality of TPAs each defining a different combination of transport links on which priority would be provided for transit vehicles.
- the genetic algorithm also includes a lower level component 20, executed by computing entity 14, for evaluating each TPA using a plurality of user behaviour models to evaluate the one or more traffic characteristics for each TPA generated by the upper level component.
- the genetic algorithm 16 uses the results of the lower level component evaluation of each TPA in the generation of a subsequent TPA.
- the upper level component 18 determines the TPA or the links on which priority would be provided for transit vehicles (decision variables).
- the function of the upper level is System Optimal (SO), thus the objective function includes a combination of network performance measures.
- the correspondent constraints are included in the upper level constraints.
- the computation carried out by the upper level component 18 can be formulated by the following objective function:
- $\min Z = \alpha\sum_{a} x_a t_a(x) + \beta\sum_{a}\sum_{l} \delta_{a,l} f_l t_a^b(x) + \gamma\sum_{a} e_a(x) + \lambda\sum_{a} e_a^b(x)$ (1)
- where $x_a$ is the car flow on link $a$, $t_a(x)$ is the car travel time on link $a$, $\delta_{a,l}$ is the bus line-link incidence matrix, $f_l$ is the frequency of bus line $l$, $t_a^b(x)$ is the in-vehicle travel time, and $e_a(x)$ and $e_a^b(x)$ collect the other impacts of the car and bus modes respectively.
- the first two terms in the objective function are the total travel time by car and bus.
- the next two terms represent the various other impacts of these two modes, including emissions, noise, accidents, and reliability of travel time.
- the factors α, β, γ and λ not only convert the units, but also enable the formulation to attribute different relative weights to the components of the objective function.
- Equation (2) states that the cost of the implementation should be less than or equal to the budget.
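The structure of the weighted objective function and the budget constraint described above can be sketched as follows. This is an illustrative sketch only: the weight values, link identifiers, costs and impact figures are assumptions, not values from the specification.

```python
# Sketch of the upper-level objective (Equation 1) and budget check
# (Equation 2). All numeric values are illustrative assumptions.

def objective(car_time, bus_time, car_impacts, bus_impacts,
              alpha=1.0, beta=1.0, gamma=0.01, lam=0.01):
    """Weighted sum of total car/bus travel time plus other modal impacts.

    alpha, beta, gamma and lam convert units and attribute relative
    weights to the components, as the specification describes.
    """
    return (alpha * car_time + beta * bus_time
            + gamma * car_impacts + lam * bus_impacts)

def within_budget(tpa, link_cost, budget):
    """Equation (2): implementation cost must not exceed the budget.

    `tpa` maps each candidate link to 1 if an exclusive lane is provided.
    """
    cost = sum(link_cost[a] for a, provided in tpa.items() if provided)
    return cost <= budget

z = objective(car_time=5000.0, bus_time=1200.0,
              car_impacts=300.0, bus_impacts=80.0)
ok = within_budget({'a1': 1, 'a2': 0, 'a3': 1},
                   {'a1': 10, 'a2': 4, 'a3': 7}, budget=20)
```

Only links whose gene is 1 contribute to the implementation cost, so the budget constraint prunes TPAs before any expensive lower-level evaluation.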
- the computer system 10 implements a four step method for transport modeling. In this embodiment, it is assumed that the travel demand and the distribution of demand are not affected by the location of bus lanes.
- the lower level component 20 carries out steps to implement three models:
- the second computing entity 14 is advantageously programmed with a commercially available software system for transportation planning, travel demand modelling and network data management, such as the VISUM package provided by PTV A.G.
- VISUM package provided by PTV A.G.
- the use of commercial software in the optimisation is important, as many city networks are already modelled in such packages.
- the computer system 10 incorporates the use of this or like packages in the optimization without a need to convert the model to other platforms.
- the first computing entity 12 can be programmed with any suitable software with mode choice and assignment component for private and public transport.
- GA Genetic Algorithm
- GA is an iterative search method in which the new answers are produced by combining two predecessor answers.
- the algorithm operates on a feasible set of answers referred to as the population.
- Each individual answer in the population (called a chromosome) is assigned a survival probability, based on the value of the objective function.
- the algorithm selects individual chromosomes based on this probability to breed the next generation of the population.
- GA uses cross over and mutation operators to breed the next generation which replaces the predecessor generation. The algorithm is repeated with the new generation until a convergence criterion is satisfied.
- a GA is applied to the RSA problem.
- a gene is defined to represent the binary decision variable for a candidate link, and a chromosome is the vector of such genes.
- a chromosome (or TPA) contains a feasible combination of links on which an exclusive lane may be introduced (set A2). Therefore, the length of the chromosome is equal to the size of A2.
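The binary encoding described above can be sketched as follows; the candidate link identifiers in A2 are hypothetical, since the specification does not list them.

```python
import random

# Hypothetical candidate set A2: links on which an exclusive bus lane
# may be introduced. Real link IDs would come from the network model.
A2 = ['link_12', 'link_17', 'link_23', 'link_31', 'link_44']

def random_chromosome(candidates):
    """One TPA: one binary gene per candidate link (1 = exclusive lane)."""
    return [random.randint(0, 1) for _ in candidates]

def decode(chromosome, candidates):
    """Return the links receiving transit priority under this TPA."""
    return [link for link, gene in zip(candidates, chromosome) if gene == 1]

tpa = random_chromosome(A2)
assert len(tpa) == len(A2)  # chromosome length equals the size of A2
```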
- the algorithm starts with a feasible initial population. The chromosomes of the initial population are produced randomly.
- TPA evaluations are carried out at the lower level by the transport planning models of mode split, traffic assignment, and transit assignment. Using the flow and travel time at the lower level, the objective function for the chromosome is determined. The lower level calculations are repeated for all chromosomes in the population.
- Step 2 - Evaluation Calculate the objective function value for all chromosomes (or TPAs) in the population, using the transport planning models at the lower level.
- Step 3 Fitness: Determine survival probabilities (fitness) and update UBD.
- Step 4 Convergence: Check the convergence criterion; if satisfied, stop.
- Step 5 Reproduction: Breed new generation by performing selection, cross over, and mutation. Go to Step 2.
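The steps above can be sketched as a loop. This is a minimal skeleton under stated assumptions: the evaluation function, population size, selection rule and convergence criterion (a fixed generation cap) are placeholders, not the calibrated settings of the study.

```python
import random

def run_ga(evaluate, chrom_len, pop_size=10, generations=20,
           crossover_p=0.8, mutation_p=0.05):
    """Minimal GA skeleton following steps 1-5 (placeholder operators)."""
    # Step 1: random feasible initial population.
    pop = [[random.randint(0, 1) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    best = None
    for _ in range(generations):              # Step 4: stop after a cap
        scores = [evaluate(c) for c in pop]   # Step 2: lower-level evaluation
        ranked = sorted(zip(scores, pop))     # Step 3: fitness (minimisation)
        if best is None or ranked[0][0] < best[0]:
            best = ranked[0]                  # update best TPA found so far
        parents = [c for _, c in ranked[:pop_size // 2]]
        children = []
        while len(children) < pop_size:       # Step 5: reproduction
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, chrom_len)
            child = a[:cut] + b[cut:] if random.random() < crossover_p else a[:]
            child = [g ^ 1 if random.random() < mutation_p else g
                     for g in child]
            children.append(child)
        pop = children
    return best

# Toy stand-in evaluation: "cost" falls as more priority links are provided.
best_score, best_tpa = run_ga(lambda c: -sum(c), chrom_len=8)
```

In the patent's system the `evaluate` callable would be the lower-level transport models; everything else runs at the upper level.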
- the most computationally intensive part of the GA is step 2 where TPAs are evaluated.
- the execution time for one evaluation is large.
- the GA requires a large number of TPA evaluations.
- One evaluation involves running the four-step modeling for a network, which may take several hours on a personal computer.
- the number of TPA evaluations depends on the number of decision variables and attributes of the GA such as cross over probability and mutation probability.
- the goal of this section is to decompose the processes of the GA in order to execute them simultaneously on separate resources. Such an approach can significantly reduce the execution time of the GA.
- The steps of the genetic algorithm, in terms of dependency of processes, are of two types. The first is the evaluation step (step 2): the evaluation of an individual chromosome (or TPA) is independent of the other chromosomes (or TPAs) in a generation, so step 2 consists of a number of processes that can be executed independently. The second part of the GA involves fitness, convergence, and reproduction (steps 3 to 5), which must be performed on the whole population; this part integrates the individual evaluations of step 2, and its processes are interdependent. On the basis of this dependency attribute, two variants of the GA can be implemented, namely a Serial Genetic Algorithm (SGA) and a Parallel Genetic Algorithm (PGA).
- SGA Serial Genetic Algorithm
- PGA Parallel Genetic Algorithm
- In the SGA, the step 2 evaluation of one chromosome is completed before the evaluation of another is started. Steps 3, 4, and 5 are then completed to produce the next generation, and the cycle returns to step 2.
- the SGA is the simplest variant of the GA to implement on a computer.
- In the PGA, step 2 is executed in parallel, followed by steps 3, 4, and 5 in sequence.
- the aim of this parallelization is to achieve faster computation.
- Two PGA implementation techniques can be employed: using multiple cores available on one machine or multiple cores available on multiple machines in a network, as will be now explained.
- the operating system (OS) of the first computing entity 32 creates process threads to run computer applications. If no provision is made in the coding, the OS creates only one thread by which all the required processes of the code are run successively. In the presence of multiple running applications, there could be more than one thread processed by a Central Processing Unit (CPU) at a time. Execution of multiple threads can result in parallel processing of an application if the machine is capable of handling it, such as when a multi-core machine is in use. This technique of executing multiple threads in parallel is called Multi-threading in computer science. In order to implement PGA by the Multi-threading technique, the architecture 28 shown in Figure 4 is used.
- the number of threads is selected equal to the number of processing cores on a machine (say p) plus a main thread.
- the main thread is reserved to control the flow of the GA from the start to the end.
- the main thread performs the fitness, convergence, and reproductions steps of the GA.
- the remaining p threads are used to execute the TPA evaluations (objective function evaluations). Note that the main thread is paused while the p worker threads are running, so that all cores are free to execute them. This technique can engage multiple cores available on one computer.
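A minimal sketch of this main-thread-plus-p-workers pattern, using Python's standard thread pool as a stand-in for the Visual Basic .NET implementation the study describes; the evaluation function and population are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor
import os

def evaluate_tpa(tpa):
    """Placeholder for one lower-level evaluation (mode split plus
    traffic and transit assignment in the real system)."""
    return sum(tpa)  # stand-in objective value

population = [[i % 2, 1, 0, 1] for i in range(8)]  # 8 TPAs to evaluate
p = os.cpu_count() or 1                            # one worker per core

# The main thread blocks here (is "paused") while the p worker threads
# run the step 2 evaluations, then resumes to perform the fitness,
# convergence and reproduction steps.
with ThreadPoolExecutor(max_workers=p) as pool:
    scores = list(pool.map(evaluate_tpa, population))
```

`pool.map` preserves the population order, so each score can be matched back to its chromosome for the fitness step.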
- the speed up achieved by the multi-threading approach depends on the number of cores on a machine and the efficiency of the OS on running parallel processes.
- the implementation is in the Microsoft Windows OS, since the TPAs are evaluated by VISUM and this package is only available on the Windows platform. Windows is at times criticised for its performance, but regardless of the operating system, performance starts to decline once the number of threads exceeds the number of hardware cores, because the OS must then share the limited hardware resources among the threads through time slicing.
- the multi-threading approach can save considerably on license costs for some packages. Since all instances of a commercial package (e.g. VISUM) are launched on one machine, a single VISUM license suffices, but at the expense of performance. It makes sense to strike a balance between budget and performance, and the next section discusses how the HTC approach can provide the benefits of parallelism while avoiding some of the performance limits of multi-threading, although it requires multiple licenses of the software.
- MT multi-threaded
- the HTC approach was implemented in the Visual Basic .NET environment in this study.
- Figure 5 depicts a computer system 30 adapted to carry out the MT approach to PGA.
- the computer system 30 implements a distributed architecture and includes a first computing entity 32 in communication with a second computing entity 34.
- the second computing entity 34 includes multiple processing cores 36 to 42, and is programmed with a commercially available software system for transportation planning, travel demand modelling and network data management, such as the VISUM package provided by PTV A.G.
- the multi-threading approach can run a certain number of TPA evaluations in parallel. For example in the architecture 28 shown in Figure 4, p evaluations from n evaluations in a population can be run in parallel. The ideal case is when p is equal to n, which means all n evaluations can be done at the same time.
- multi-threading is an approach that delivers speed-ups due to the parallelism afforded by many computations that can run simultaneously on many cores.
- the number of threads on a machine is limited, or at least, there is a limit to the speed up expected.
- a distributed computing approach such as HTC schedules TPA evaluations as computational jobs to several computers on a network, each being independent with its own set of cores and local memory.
- Figure 6 depicts a computer system 60 adapted to carry out the HTC approach to PGA.
- the computer system 60 implements a distributed architecture and includes a first computing entity 62 in communication with a group of second computing entities 64 to 72 by means of an intermediary server 74.
- the group of second computing entities 64 to 72 are programmed with a commercially available software system for transportation planning, travel demand modelling and network data management, such as the VISUM package provided by PTV A.G.
- the intermediary server 74 includes software to perform communications, job queuing and job distribution functions in relation to the group of second computing entities 64 to 72, such as the Condor open-source high throughput computing software developed by the Condor team at the University of Wisconsin- Madison.
- the various computing entities described above may be implemented using hardware, software or a combination thereof and may be implemented in one or more computer systems or processing systems.
- An exemplary computer system 80 is shown in Figure 7.
- the computer system 80 includes one or more processors 82.
- the processor(s) 82 is/are connected to a communication infrastructure 84.
- the computer system 80 may include a display interface 86 that forwards graphics, texts and other data from the communication infrastructure 84 for supply to the display unit 88.
- the computer system 80 may also include a main memory 90, preferably random access memory, and may also include a secondary memory 92.
- the secondary memory 92 may include, for example, a hard disk drive 94, magnetic tape drive, optical disk drive, etc.
- the removable storage drive 96 reads from and/or writes to a removable storage unit 98 in a well known manner.
- the removable storage unit 98 represents a floppy disk, magnetic tape, optical disk, etc.
- the removable storage unit 98 includes a computer usable storage medium having stored therein computer software in the form of a series of instructions to cause the processor(s) 82 to carry out the desired functionality.
- the secondary memory 92 may include other similar means for allowing computer programs or instructions to be loaded into the computer system 80. Such means may include, for example, a removable storage unit 100 and interface 102.
- the computer system 80 may also include a communications interface 104.
- Communications interface 104 allows software and data to be transferred between the computer system 80 and external devices. Examples of communication interface 104 may include a modem, a network interface, a communications port, a PCMCIA slot and card etc.
- Software and data transferred via a communications interface 104 are in the form of signals 106 which may be electromagnetic, electronic, optical or other signals capable of being received by the communications interface 104.
- the signals are provided to the communications interface 104 via a communications path such as a wire or cable, fibre optics, phone line, cellular phone link, radio frequency or other communications channel.
- while in the embodiments described the invention is implemented primarily using computer software, in other embodiments the invention may be implemented primarily in hardware using, for example, hardware components such as application specific integrated circuits (ASICs).
- ASICs application specific integrated circuit
- Implementation of a hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art.
- the invention may be implemented using a combination of both hardware and software.
- As VISUM is a commercial package, it is protected from being used beyond the maximum number of running instances allowed by the purchased license. Additionally, VISUM comes with a hardware lock, which contains the license data. The first requirement was to obtain a network license, installed on a license server to which all computers on the network have access. The license server can be the same machine as, or a different machine from, the Condor central manager. It should be noted that if n computers are in the network and m licenses are available, the maximum number of parallel TPA evaluations is min(n, m).
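The license constraint stated above can be expressed directly; the computer and license counts below are illustrative, not the study's figures.

```python
def max_parallel_evaluations(n_computers, m_licenses):
    """With n VISUM-capable computers and m network licenses,
    at most min(n, m) TPA evaluations can run in parallel."""
    return min(n_computers, m_licenses)

# Illustrative cases: licenses plentiful vs. licenses scarce.
cap_when_licenses_spare = max_parallel_evaluations(7, 32)   # machine-bound
cap_when_licenses_scarce = max_parallel_evaluations(20, 5)  # license-bound
```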
- VISUM 700 MB
- VISUM has to be installed and registered in Windows in such a way that user interaction with a mouse and keyboard is normally required. The solution was pre-installation of VISUM on a subset of computers in the network. Condor must then be told, in the job command file, to send jobs only to nodes with VISUM installed.
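A job command file of the kind described might look like the following sketch. The executable name, file names, and the custom `HasVISUM` machine attribute are assumptions for illustration, since the specification does not reproduce its actual submit file; in practice the attribute would have to be advertised in the ClassAds of the nodes with VISUM pre-installed.

```
# Hypothetical Condor submit description (all names illustrative).
universe     = vanilla
executable   = evaluate_tpa.exe
arguments    = tpa_$(Process).txt
# Send jobs only to Windows nodes advertising a pre-installed VISUM:
requirements = (HasVISUM =?= True) && (OpSys == "WINDOWS")
transfer_input_files = tpa_$(Process).txt
output       = tpa_$(Process).out
error        = tpa_$(Process).err
log          = evaluation.log
queue 20
```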
- Windows differentiates between the local or remote launch of an application. Windows also consults the user permissions to run an application either locally or remotely. Condor runs VISUM remotely as a 'system' user. The VISUM 'COM server' was set to grant suitable permissions to launch Condor jobs from a 'remote system' user.
- FIG. 8 shows how the developed method can be applied using a heuristic algorithm such as GA.
- This grid network consists of 86 nodes and 306 links. All the circumferential nodes, together with centroids 22, 26, 43, 45, 62, and 66, are origin and destination nodes. A 'flat' demand matrix of 30 person/hr travels from every origin to every destination. The total demand across the 36 origin and destination nodes is 37,800 person/hr. There are 10 bus lines covering transit demand in the network shown in Figure 8. The frequency of service for all bus lines is 10 mins.
- the models and parameters used in this example are extracted from those calibrated for the Melbourne Integrated Transport Model (MITM), a four-step transport model used by the Victorian State Government for planning in Melbourne (Department of Infrastructure, 2004).
- MITM Melbourne Integrated Transport Model
- Mode share is determined using a Logit model.
- User Equilibrium (UE) traffic assignment and a frequency-based assignment are employed to model traffic and transit assignments, respectively. All these lower level transport models are implemented using the VISUM modeling package (PTV AG, 2009).
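The mode split step can be sketched with a binary logit model as described above; the utility values below are illustrative assumptions, not the calibrated MITM coefficients.

```python
import math

def logit_share(v_car, v_bus):
    """Binary logit: probability of choosing bus, given modal utilities."""
    e_car, e_bus = math.exp(v_car), math.exp(v_bus)
    return e_bus / (e_car + e_bus)

# Illustrative utilities: bus becomes more attractive when a priority
# lane cuts its in-vehicle travel time (less negative utility).
share_without_lane = logit_share(v_car=-1.0, v_bus=-1.5)
share_with_lane = logit_share(v_car=-1.0, v_bus=-1.1)
assert share_with_lane > share_without_lane
```

This is the mechanism by which a TPA feeds back into demand: priority lanes reduce bus travel time, which shifts the logit mode shares used in the subsequent assignments.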
- the upper level objective function includes total travel time (veh.sec) and total vehicle distance (veh.km).
- the absolute value of the objective function therefore can be very large. In order to avoid numerical problems, a constant value is subtracted from the objective function for all evaluations. Hence, the value of the objective functions is on a relative basis.
- the weighting factors of the objective function are assumed to be equal to 0.01. Regarding constraints, it is assumed that the budget allows for all candidate links for the provision of bus priority.
- Table 1 below shows the seven computers used in this study.
- the pool of computers includes various CPUs, Windows versions, and VISUM versions. It demonstrates that the HTC approach can incorporate all types of computers and software versions.
- If a machine, for example, has 8 threads, it can perform a maximum of 8 TPA evaluations at a time. A total of 32 threads are provided, which means that if all computers could be assigned to evaluation jobs, 32 evaluations could be carried out simultaneously. This would require 32 VISUM licenses.
- the last column in Table 1 is the time spent for evaluation of one TPA on each machine. As the table shows, machine 1 was the fastest computer with 65 seconds of execution time and machine 7 was the slowest with 226 seconds.
- Figure 9 presents a graph 130 showing the improvement in the objective function value for four SGA runs, where three runs were terminated early at generation 50 and the fourth run was continued for 300 generations.
- the execution time of the SGA is prohibitively long; thus, the number of generations could not be increased beyond 300 in this example.
- Figure 10 presents a graph 140 showing the improvement of the objective function for two MT and two HTC runs.
- the graph of SGA runs is also repeated in Figure 10.
- all approaches represent a similar trend in reduction of the objective function. This result is expected as the same algorithms are used for fitness, convergence, and reproduction of the SGA, MT and HTC. Therefore, the algorithm to produce TPAs in all approaches is the same, although the evaluation step (step 2) is implemented differently. The implementation of the evaluation step has resulted in various execution times.
- a graph showing the improvement of the objective function with increasing execution time is presented in Figure 11.
- the SGA runs have a significantly longer execution time than the PGA runs.
- Figure 11 suggests that HTC runs are faster.
- SGA-3 spends about 170,000 sec (about 2 days) while HTC needs only about 20,000 sec, which is about one-ninth of what it takes for the SGA approach.
- SGA's fourth run (SGA-4) with 300 generations exceeded 5 days, which is prohibitively long to run GA as SGA for this example.
- This overall comparison reveals that parallelization can effectively reduce the GA execution time.
- ATE Average Time per Evaluation
- Speed up, which is the ratio of the ATE in one run to the ATE associated with an SGA run.
- the speed up and the efficiency of SGA runs are 1.
- Table 2 shows that ATE is almost constant with a change in the mp.
- the ATE for the SGA, MT, and HTC runs are approximately 140 sec, 40 sec, and 15 sec, respectively.
- a speed up of more than 3 times is achieved with the application of the MT approach while the speed up was more than 9 times using the HTC approach.
- the efficiency measure demonstrates that in return for adding each thread in the MT approach, the execution time has improved by 80-90%.
- the efficiency in the HTC approach was just above 50% for addition of each thread. This gap is caused by the fact that there is an overhead time associated with distribution and queuing of the processing jobs in the HTC approach.
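The speed-up and efficiency measures defined above can be reproduced from the reported ATE figures (about 140 sec for SGA and 40 sec for MT); the thread count of 4 for the MT run is taken from the comparison in the conclusions.

```python
def speed_up(ate_sga, ate_run):
    """Ratio of the SGA average time per evaluation (ATE) to that
    of a parallel run."""
    return ate_sga / ate_run

def efficiency(ate_sga, ate_run, threads):
    """Speed-up gained per thread added."""
    return speed_up(ate_sga, ate_run) / threads

mt_speed_up = speed_up(140.0, 40.0)         # 3.5x with the MT approach
mt_efficiency = efficiency(140.0, 40.0, 4)  # 0.875, i.e. in the 80-90% band
```

By the same formulas, an SGA run scores a speed up and efficiency of exactly 1, as the text notes.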
- Table 3 below presents the effects of the number of available threads on the execution time in the HTC approach.
- the ATE decreases as the number of threads increases, which is an expected outcome.
- the ATE in experiment E231 should also be similar to E228 and E230. However, as shown in Table 3, the ATE considerably increased in E231. In this experiment, the very slow CPU of machine 7 is added to the pool. As shown in Table 1, machine 7 has the slowest CPU. During the time required for this machine to evaluate one TPA, the other machines can evaluate between 2 and 4 TPAs. As a result, this machine holds the other available threads 'idle', which extends the evaluation time of each generation and collectively increases the evaluation time of the experiment.
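The effect of adding a slow machine can be illustrated by treating each generation as synchronous: the generation finishes only when its slowest evaluation does. The per-TPA times below are illustrative, loosely based on Table 1 (fastest machine about 65 sec, machine 7 about 226 sec).

```python
def generation_time(machine_times):
    """Synchronous generation: each machine evaluates one TPA and the
    generation finishes only when the slowest machine does."""
    return max(machine_times)

fast_pool = [65, 80, 95, 110]     # illustrative per-TPA times (seconds)
with_slow = fast_pool + [226]     # add a machine-7-like slow CPU

# The slow machine dominates the wall-clock time of every generation,
# even though it adds nominal capacity to the pool.
penalty = generation_time(with_slow) - generation_time(fast_pool)
```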
- HTC High Throughput Computing
- RSA Road Space Allocation
- SGA Serial Genetic Algorithm
- PGA Parallel Genetic Algorithm
- MT Multi-threading
- the performance of the GA variants has been compared in a numerical example.
- the PGA-MT approach with 4 'threads' available could reduce the execution time by 3.2 to 3.7 times compared to SGA.
- the PGA-HTC approach with 18 'threads' could decrease the execution time by 9.3-9.8 times in different examples.
- although the 'efficiency' of the MT approach was higher than that of the HTC approach, the MT approach cannot be used to solve large scale (real world network) examples, since the total number of threads on a single computer is limited. In contrast, there is no limit on the number of cores/threads which can be employed in the HTC approach.
- One of the novel aspects of this study was the successful implementation of the HTC approach to the road space allocation study using commercial software on the Windows platform.
- Virtualization allows multiple diverse operating environments to run on top of the actual system of the base machine. Clouds can accommodate a user-prepared virtual machine image with the preferred settings and applications configured, and run several of them in a virtual network. The user therefore has considerable control over how much computational power is desired in an HTC resource, which can be created, expanded, contracted or retired as necessary. The next step in our work is therefore to explore the applicability, benefits and constraints of using cloud environments.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2012262648A AU2012262648A1 (en) | 2011-05-31 | 2012-05-23 | Optimising transit priority in a transport network |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2011902113A AU2011902113A0 (en) | 2011-05-31 | High performance computing solution for optimization of transit priority in a transportation network | |
AU2011902113 | 2011-05-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012162726A1 (en) | 2012-12-06
Family
ID=47258149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/AU2012/000576 WO2012162726A1 (en) | 2011-05-31 | 2012-05-23 | Optimising transit priority in a transport network |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU2012262648A1 (en) |
WO (1) | WO2012162726A1 (en) |
-
2012
- 2012-05-23 AU AU2012262648A patent/AU2012262648A1/en not_active Abandoned
- 2012-05-23 WO PCT/AU2012/000576 patent/WO2012162726A1/en active Application Filing
Non-Patent Citations (2)
Title |
---|
MESBAH ET AL.: "Optimization of transit priority in the transportation network using a decomposition methodology", 5 July 2010 (2010-07-05), Retrieved from the Internet <URL:http://www.sciencedirect.com/science/article/pii/S0968090X1000104X> [retrieved on 20120813] * |
MESBAH ET AL.: "Policy-Making Tool for Optimization of Transit Priority Lanes in Urban Network", TRANSPORTATION RESEARCH RECORD: JOURNAL OF THE TRANSPORTATION RESEARCH BOARD, vol. 2197, 1 December 2010 (2010-12-01), pages 54 - 62 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111161536A (en) * | 2019-12-25 | 2020-05-15 | 南京行者易智能交通科技有限公司 | Time interval and road section selection method, device and system for bus lane |
CN111161536B (en) * | 2019-12-25 | 2021-04-02 | 南京行者易智能交通科技有限公司 | Time interval and road section selection method, device and system for bus lane |
CN116976540A (en) * | 2023-09-21 | 2023-10-31 | 上海银行股份有限公司 | Bank cash distribution route planning method under composite scene |
CN116976540B (en) * | 2023-09-21 | 2023-12-22 | 上海银行股份有限公司 | Bank cash distribution route planning method under composite scene |
Also Published As
Publication number | Publication date |
---|---|
AU2012262648A1 (en) | 2014-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Manasrah et al. | Workflow scheduling using hybrid GA-PSO algorithm in cloud computing | |
Feng et al. | Exploring serverless computing for neural network training | |
JP5756271B2 (en) | Apparatus, method, and computer program for affinity-driven distributed scheduling of parallel computing (system and method for affinity-driven distributed scheduling of parallel computing) | |
Di Martino et al. | Scheduling in a grid computing environment using genetic algorithms | |
Topcuoglu et al. | Performance-effective and low-complexity task scheduling for heterogeneous computing | |
Blythe et al. | Task scheduling strategies for workflow-based applications in grids | |
Fidanova | Simulated annealing for grid scheduling problem | |
Gupta et al. | Efficient workflow scheduling algorithm for cloud computing system: a dynamic priority-based approach | |
CN115061810A (en) | Processing a computation graph | |
Dai et al. | A synthesized heuristic task scheduling algorithm | |
Xiao et al. | A cooperative coevolution hyper-heuristic framework for workflow scheduling problem | |
Wadhwa et al. | Optimized task scheduling and preemption for distributed resource management in fog-assisted IoT environment | |
Ijaz et al. | MOPT: list-based heuristic for scheduling workflows in cloud environment | |
Saha et al. | A novel scheduling algorithm for cloud computing environment | |
Heidari et al. | Scheduling in multiprocessor system using genetic algorithm | |
Domeniconi et al. | Cush: Cognitive scheduler for heterogeneous high performance computing system | |
Pop et al. | Genetic algorithm for DAG scheduling in grid environments | |
Mangalampalli et al. | DRLBTSA: Deep reinforcement learning based task-scheduling algorithm in cloud computing | |
Vasile et al. | MLBox: Machine learning box for asymptotic scheduling | |
Noorian Talouki et al. | A hybrid meta-heuristic scheduler algorithm for optimization of workflow scheduling in cloud heterogeneous computing environment | |
Bouaziz et al. | Architecture exploration of real-time systems based on multi-objective optimization | |
CN114064249A (en) | Method and device for scheduling cross-cloud computing tasks of hybrid cloud and storage medium | |
Mirsoleimani et al. | A parallel memetic algorithm on GPU to solve the task scheduling problem in heterogeneous environments | |
WO2012162726A1 (en) | Optimising transit priority in a transport network | |
Chatterjee et al. | Job scheduling in cloud datacenters using enhanced particle swarm optimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12793045 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2012262648 Country of ref document: AU Date of ref document: 20120523 Kind code of ref document: A |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12793045 Country of ref document: EP Kind code of ref document: A1 |
|