CN110431544B - Travel time and distance estimation system and method - Google Patents


Info

Publication number
CN110431544B
CN110431544B
Authority
CN
China
Prior art keywords
module, time, neurons, neuron, query
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780088459.3A
Other languages
Chinese (zh)
Other versions
CN110431544A (en)
Inventor
Ishan Jindal
Zhiwei Qin
Xuewen Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd
Publication of CN110431544A
Application granted
Publication of CN110431544B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3453 Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C21/3492 Special cost functions employing speed data or traffic data, e.g. real-time or historical
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3605 Destination input or retrieval
    • G01C21/3617 Destination input or retrieval using user history, behaviour, conditions or preferences, e.g. predicted or inferred from previous use or current movement
    • G01C21/3691 Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148 Generating training patterns characterised by the process organisation or structure, e.g. boosting cascade
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation using electronic means
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

Travel time and distance estimation systems and methods are provided. Such a method may include obtaining a vehicle trip data set including an origin, a destination, a time of day, a trip time, and a trip distance associated with each of a plurality of trips, and training a neural network model using the vehicle trip data set to obtain a trained model. The neural network model may include a first module and a second module. The first module may include a first number of neuron layers and may be configured to take the origin and the destination as a first input to estimate a travel distance. The second module may include a second number of neuron layers and may be configured to take information from the last layer of the first module and the time of day as a second input to estimate a travel time.

Description

Travel time and distance estimation system and method
Technical Field
The present application relates generally to travel time and distance estimation methods and apparatus.
Background
Travel time is one of the important traffic measures for efficient navigation and better trip planning. The accurate estimation of the driving time is a key component for establishing an intelligent traffic system.
Disclosure of Invention
Various embodiments of the present application may include systems, methods, and non-transitory computer-readable media configured to estimate travel time and distance. A computing system for estimating travel time and distance may include one or more processors and memory storing instructions. When executed by the one or more processors, the instructions may cause the computing system to obtain a vehicle trip data set including an origin, a destination, a time of day, a trip time, and a trip distance associated with each of a plurality of trips, and train a neural network model using the vehicle trip data set to obtain a trained model. The neural network model may include a first module and a second module, each module corresponding to at least a portion of the instructions. The first module may include a first number of neuron layers and may be configured to take the origin and the destination as a first input to estimate a travel distance. The second module may include a second number of neuron layers and may be configured to take information from the last layer of the first module and the time of day as a second input to estimate a travel time.
In some embodiments, the memory may store instructions that, when executed by the one or more processors, may cause the computing system to receive a query comprising a query origin, a query destination, and a query time, and apply a trained model to the received query to estimate a travel time and a travel distance from the query origin to the query destination at the query time.
In some embodiments, the origin and destination may each comprise discrete GPS (global positioning system) coordinates, and the travel time may comprise discrete travel time.
In some embodiments, to train the neural network model with the vehicle trip data set, the computing system may be caused to provide the origin and the destination to the first module to obtain the estimated travel distance, and to compare the estimated travel distance with the trip distance to adjust the first number and the number of neurons in each of the first number of neuron layers.
In some embodiments, to train the neural network model with the vehicle trip data set, the computing system may further be caused to provide the information from the last layer of the first module and the time of day to the second module to obtain the estimated travel time, and to compare the estimated travel time with the trip time to adjust the second number and the number of neurons in each of the second number of neuron layers.
In some embodiments, the first module may be a neural network comprising three neuron layers and the second module may be another neural network comprising three neuron layers.
In some embodiments, the first module may include a first neuron layer comprising 20 neurons, a second neuron layer comprising 100 neurons, and a third neuron layer comprising 20 neurons.
In some embodiments, the second module may include a first neuron layer including 64 neurons, a second neuron layer including 120 neurons, and a third neuron layer including 20 neurons.
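The layer sizes described above can be illustrated with a minimal numpy forward-pass sketch. This is an assumption-laden illustration, not the patented implementation: the ReLU activation, the linear output heads, the random stand-in weights, and the 4-dimensional coordinate input are choices made here for concreteness.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def make_layer(n_in, n_out, rng):
    # Random weights stand in for trained parameters (illustration only).
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

rng = np.random.default_rng(0)

# First module: (origin, destination) -> travel distance; layers of 20, 100, 20 neurons.
m1_sizes = [4, 20, 100, 20]          # input = (lat_o, lon_o, lat_d, lon_d)
m1 = [make_layer(a, b, rng) for a, b in zip(m1_sizes[:-1], m1_sizes[1:])]
m1_head = make_layer(20, 1, rng)     # linear head for the distance estimate

# Second module: (last layer of module 1, time of day) -> travel time;
# layers of 64, 120, 20 neurons.
m2_sizes = [21, 64, 120, 20]         # 20 features from module 1 + 1 time feature
m2 = [make_layer(a, b, rng) for a, b in zip(m2_sizes[:-1], m2_sizes[1:])]
m2_head = make_layer(20, 1, rng)     # linear head for the time estimate

def forward(origin, destination, time_of_day):
    h = np.concatenate([origin, destination])
    for W, b in m1:
        h = relu(h @ W + b)          # h ends as module 1's last-layer features
    distance = float((h @ m1_head[0] + m1_head[1])[0])
    g = np.concatenate([h, [time_of_day]])
    for W, b in m2:
        g = relu(g @ W + b)
    travel_time = float((g @ m2_head[0] + m2_head[1])[0])
    return distance, travel_time

d, t = forward(np.array([40.75, -73.99]), np.array([40.65, -73.78]), 0.5)
```

With trained weights, `d` and `t` would be the estimated travel distance and time for the query; here they merely demonstrate the data flow from the first module's last layer into the second module.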
Another aspect of the present application provides a travel time and distance estimation method. The method may include obtaining a vehicle trip data set including an origin, a destination, a time of day, a trip time, and a trip distance associated with each of a plurality of trips, and training a neural network model using the vehicle trip data set to obtain a trained model. The neural network model may include a first module and a second module. The first module includes a first number of neuron layers and is configured to take the origin and the destination as a first input to estimate a travel distance. The second module includes a second number of neuron layers and is configured to take information from the last layer of the first module and the time of day as a second input to estimate a travel time.
Another aspect of the present application provides a non-transitory computer-readable medium for estimating travel time and distance. The non-transitory computer-readable medium stores instructions that, when executed by one or more processors, cause the one or more processors to obtain a vehicle trip data set including an origin, a destination, a time of day, a trip time, and a trip distance associated with each of a plurality of trips, and train a neural network model using the vehicle trip data set to obtain a trained model. The neural network model may include a first module and a second module. The first module includes a first number of neuron layers and is configured to take the origin and the destination as a first input to estimate a travel distance. The second module includes a second number of neuron layers and is configured to take information from the last layer of the first module and the time of day as a second input to estimate a travel time.
Another aspect of the present application provides a travel time estimation method. The method includes inputting origin and destination information into a first trained neural network module to obtain a travel distance, and inputting time information together with information from the last layer of the first trained neural network module into a second trained neural network module to obtain a travel time.
These and other features of the systems, methods, and non-transitory computer-readable media disclosed herein, the operation and function of the related elements of structure, and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention.
Drawings
Certain features of various embodiments of the technology are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present technology will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:
FIG. 1 is an exemplary environment for estimating travel time and distance, shown in accordance with various embodiments.
FIG. 2 is an exemplary system for estimating travel time and distance shown in accordance with various embodiments.
FIG. 3A is an exemplary method of pre-processing a starting point and a destination, shown in accordance with various embodiments.
FIG. 3B is an exemplary neural network framework for estimating travel time and distance, shown in accordance with various embodiments.
FIG. 3C is an exemplary neural network model of estimated travel time and distance, shown in accordance with various embodiments.
FIG. 4A is a flow diagram illustrating an exemplary method of estimating travel time and distance in accordance with various embodiments.
FIG. 4B is a flow diagram illustrating an example method of estimating travel time in accordance with various embodiments.
FIG. 5 illustrates a block diagram of an exemplary computer system in which any of the embodiments described herein may be implemented.
Detailed Description
It can be challenging to predict the travel time and distance of a transportation trip between an origin (o) and a destination (d) starting at a particular time (t). Conventional machine learning methods for predicting time-varying origin-destination travel times and distances require feature engineering: the manual creation of the features required to perform supervised learning. The performance of such methods depends to a large extent on the quality of the features. An example conventional approach for estimating travel time and distance is to construct a look-up table containing the average travel time and distance information for all possible queries. However, this solution has several drawbacks. First, since GPS coordinates are continuous variables, such a large look-up table requires a large amount of memory to store. Second, most vehicle service applications that provide estimated travel times to the user are intended to work in real time, i.e., to keep updating the user with the remaining travel time, and querying such a large table is often very time-consuming. Finally, given that historical trip data is very sparse, it is not possible to cover the complete origin × destination × time Cartesian product.
The disclosed systems and methods may overcome the above-described problems in predicting time-varying origin-to-destination travel times and distances. In an exemplary approach, a trip p_i used for model training may include a 5-tuple (o_i, d_i, t_i, D_i, T_i), representing a trip starting from position o_i at time t_i to destination d_i, where D_i is the travel distance and T_i is the travel time from the origin to the destination. Both the origin and the destination may be expressed in GPS (global positioning system) coordinates (e.g., o_i = (Lat_i, Lon_i)) or an alternative representation. Including the time of day t_i as part of the trip allows different traffic conditions at different times to be taken into account. For example, heavier traffic may be encountered during peak hours than during off-peak hours. In some embodiments, it may be assumed that intermediate positions and the travel trajectory are unknown and only the endpoint positions are available. In addition, the disclosed systems and methods may receive a query q_i as an input pair (origin, destination, time)_i and obtain the corresponding pair (time, distance)_i as an output. That is, given a historical database of N trips

{(o_i, d_i, t_i, D_i, T_i)}, i = 1, …, N,

the disclosed systems and methods can predict, for a query q = (o_q, d_q, t_q), the travel distance and travel time (D_q, T_q).
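Under the notation above, the training records and queries can be sketched as simple typed records. This is an illustrative data-model sketch; the field names are assumptions, not terms from the patent.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    """One historical trip p_i = (o_i, d_i, t_i, D_i, T_i)."""
    origin: tuple        # (lat, lon) GPS coordinates o_i
    destination: tuple   # (lat, lon) GPS coordinates d_i
    time_of_day: float   # departure time t_i
    distance: float      # observed travel distance D_i
    duration: float      # observed travel time T_i

@dataclass
class Query:
    """A query q = (o_q, d_q, t_q); the model must return (D_q, T_q)."""
    origin: tuple
    destination: tuple
    time_of_day: float

trip = Trip((40.75, -73.99), (40.65, -73.78), 8.5, 20.4, 2700.0)
q = Query(trip.origin, trip.destination, trip.time_of_day)
```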
Various embodiments of the present application include systems, methods, and non-transitory computer-readable media configured to estimate travel time and distance. In some embodiments, a computing system for estimating travel time and distance may include one or more processors and memory storing instructions. When executed by the one or more processors, the instructions may cause the computing system to obtain a vehicle trip data set including an origin, a destination, a time of day, a trip time, and a trip distance associated with each of a plurality of trips, and train a neural network model using the vehicle trip data set to obtain a trained model. The neural network model may include a first module and a second module, each module corresponding to at least a portion of the instructions. The first module may include a first number of neuron layers and may be configured to take the origin and the destination as a first input to estimate a travel distance. The second module may include a second number of neuron layers and may be configured to take information from the last layer of the first module and the time of day as a second input to estimate a travel time.
In some embodiments, the computing system may include one or more servers that implement a vehicle information platform. The platform may be configured to receive a transportation request (e.g., from a mobile device of a user of the platform), communicate the request to various vehicles (e.g., to mobile devices of the vehicles' drivers), and forward information about the vehicle that accepts the request to the user, whereupon the user and the vehicle can meet for pickup and transportation. At any time, the user may send a query to the computing system, causing the computing system to predict the travel time and distance, for example, before or while the vehicle is in transit.
In some embodiments, the disclosed system may train a model to jointly predict travel time and distance based on a vehicle (e.g., taxi, platform vehicle) trip data set collected via the Global Positioning System (GPS). The model, described in more detail below, may be referred to as a unified neural network (UnifiedNN). The model may first predict the travel distance between the origin and the destination and then use this information, along with the time of day, to predict the travel time. The UnifiedNN may use only the raw GPS coordinates of the origin and the destination and the time of day as input features to jointly estimate travel time and distance. Compared with other travel time estimation methods, the UnifiedNN has stronger generalization capability, can significantly reduce the mean absolute error (MAE) of the predicted travel time and distance, and is more robust to outliers in the data set. Thus, the disclosed systems and methods may more accurately predict travel times and distances.
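The MAE metric mentioned above is straightforward to compute; a quick sketch over hypothetical predicted and observed travel times (the sample values are invented for illustration):

```python
def mean_absolute_error(predicted, observed):
    # MAE = (1/N) * sum over i of |y_hat_i - y_i|
    return sum(abs(p, ) if False else abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

# Hypothetical travel times in seconds: predictions vs. ground truth.
mae = mean_absolute_error([600.0, 1250.0, 300.0], [630.0, 1200.0, 330.0])
```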
FIG. 1 illustrates an example environment 100 for estimating travel time and distance in accordance with various embodiments. As shown in FIG. 1, the example environment 100 may include at least one computing system 102 that includes one or more processors 104 and memory 106. The memory 106 may be non-transitory and computer-readable, and may store instructions that, when executed by the one or more processors 104, cause the one or more processors 104 to perform various operations described herein. The system 102 may be implemented on a variety of devices, such as a mobile phone, a tablet, a server, a computer, a wearable device (e.g., a smart watch), and so on. The system 102 may be installed with appropriate software (e.g., platform programs, etc.) and/or hardware (e.g., wired or wireless connections, etc.) to access other devices of the environment 100.
Environment 100 may include one or more databases (e.g., database 108) and one or more computing devices (e.g., computing device 109) accessible to system 102. In some embodiments, the system 102 may be configured to obtain data (e.g., o_i, d_i, t_i, D_i, and T_i) for each of a plurality of vehicle transportation trips from the database 108 (e.g., a historical database or data set of N transportation trips) and/or the computing device 109 (e.g., a computer, a server, or a mobile phone used by a driver or passenger that captures transportation trip information). The system 102 may use the acquired data to train a model for estimating travel time and distance.
Environment 100 may also include one or more computing devices (e.g., computing devices 110 and 111) connected to system 102. Computing devices 110 and 111 may include devices such as cell phones, tablets, computers, wearable devices (e.g., smart watches), and the like. Computing devices 110 and 111 may send data to the system 102 (e.g., queries for estimated travel time and distance) or receive data from it (e.g., the estimated travel time and distance).
In some embodiments, the system 102 may implement an online information or service platform. The service may be associated with vehicles (e.g., cars, bicycles, boats, airplanes, etc.), and the platform may be referred to as a vehicle (service) platform. The platform may accept transportation requests, identify vehicles that meet the requirements, arrange pickups, and process transactions. For example, a user may use a computing device 110 (e.g., a mobile phone installed with an application associated with the platform) to request transportation from the platform. The system 102 may receive the request and communicate it to various vehicle drivers (e.g., by posting the request to mobile phones carried by the drivers). A vehicle driver may use a computing device 111 (e.g., another mobile phone installed with the platform application) to accept the posted transportation request and obtain pickup location information and user information. Some platform data (e.g., vehicle information, vehicle driver information, address information) may be stored in the memory 106 or retrieved from the database 108 and/or the computing device 109. As described herein, the system 102 can receive a query q = (origin o_q, destination d_q, time t_q) and accordingly estimate the travel distance and time (D_q, T_q).
In some embodiments, system 102 and one or more computing devices (e.g., computing device 109) may be integrated in a single device or system. Alternatively, system 102 and one or more computing devices may operate as separate devices. The database may be anywhere accessible to the system 102, e.g., in the memory 106, in the computing device 109, in another device coupled to the system 102 (e.g., a network storage device), or another storage location (e.g., a cloud-based storage system, a network file system, etc.), and so forth. Although system 102 and computing device 109 are shown as single components in this figure, it should be understood that system 102 and computing device 109 may be implemented as a single device or multiple devices coupled together. System 102 may be implemented as a single system or as multiple systems coupled to one another. In general, system 102, computing device 109, database 108, and computing devices 110 and 111 may be capable of communicating with each other over one or more wired or wireless networks (e.g., the internet), over which data may be communicated. Various aspects of the environment 100 are described below with reference to fig. 2-5.
Fig. 2 illustrates an example system 200 for estimating travel time and distance in accordance with various embodiments. The operations shown in fig. 2 and presented below are intended to be illustrative. In various embodiments, the system 102 may obtain the data 202 from the database 108 and/or the computing device 109. The acquired data 202 may be stored in the memory 106. The system 102 may train a model (e.g., a UnifiedNN) using the acquired data 202 to obtain a trained model for estimating travel time and distance. The computing device 110 may send a query 204 to the system 102, and the system 102 may predict the travel time and distance by applying the trained model to at least some of the information from the query 204. The predicted travel time and distance may be included in data 207 to be returned to computing device 110 or one or more other devices.
In some embodiments, the acquired data 202 may include information related to each of various vehicle trips, such as p_i = (o_i, d_i, t_i, D_i, T_i), which are, respectively, the origin, destination, time of day, trip distance, and trip time of trip i. The origin and destination may be represented by GPS coordinates or other representations. The distance may be a travel distance in miles or other units, and the time may be a travel time in seconds or other units. A vehicle trip may be taken by a taxi, a ride-hailing service vehicle, or the like. For efficiency, it may be assumed that the vehicle service navigates the driver along the shortest-distance path between the origin and the destination regardless of the time of day. Furthermore, o_i and d_i may include geographic coordinates, and several methods may be used to improve the accuracy of these coordinates. Geographic coordinates are continuous variables, and in cities (e.g., New York City) with high-rise buildings and dense areas, erroneous GPS coordinates may be included in the reported data. Other causes of erroneous GPS coordinates include atmospheric effects, multipath effects, and ephemeris and clock errors. Therefore, to combat the uncertainty in the GPS records and "clean up" the training data set, one or more devices of environment 100 (e.g., system 102, computing device 109) may perform data pre-processing on the raw GPS records to remove erroneous coordinates. In one example, as shown in FIG. 3A, the raw GPS coordinates may be discretized into 2-D grid cells (e.g., 200 meters along longitude ("Lon") and latitude ("Lat")), and all GPS coordinates within a grid cell may be represented by the lower-left corner of that cell (or another pre-set point).
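The grid snapping described above can be sketched as follows. The 200 m cell size is taken from the example; the meters-per-degree conversion is a rough assumption that ignores the latitude-dependent scaling of longitude, so this is an illustration rather than a production geodesy routine.

```python
import math

CELL_M = 200.0            # grid cell size in meters (from the example above)
M_PER_DEG = 111_320.0     # rough meters per degree of latitude (assumption)

def snap_to_cell(lat, lon, cell_m=CELL_M):
    """Snap raw GPS coordinates to the lower-left corner of their grid cell."""
    deg = cell_m / M_PER_DEG             # cell size expressed in degrees
    return (math.floor(lat / deg) * deg,
            math.floor(lon / deg) * deg)

a = snap_to_cell(40.75771, -73.98577)
b = snap_to_cell(40.75779, -73.98590)    # ~10 m away: lands in the same cell
```

All points inside one cell collapse to a single representative coordinate, which both denoises the GPS records and makes the origin/destination space discrete.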
In other examples, any trip that satisfies any of the following conditions may be flagged as anomalous and removed from the training data: the taxi carries more than seven or zero passengers; there are no GPS coordinates for the pickup or dropoff; the trip time is zero seconds while the corresponding trip distance is non-zero; or the trip distance is zero miles while the corresponding trip time is non-zero. Alternatively or additionally, other methods may be used to discretize or otherwise pre-process the raw data.
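A sketch of the outlier-removal rules listed above, assuming each trip record carries a passenger count, pickup/dropoff coordinates, duration, and distance (the field names are illustrative, not from the patent):

```python
def is_anomalous(trip):
    """Flag trips matching any of the removal conditions described above."""
    if trip["passengers"] > 7 or trip["passengers"] == 0:
        return True
    if trip["pickup_gps"] is None or trip["dropoff_gps"] is None:
        return True
    if trip["duration_s"] == 0 and trip["distance_mi"] != 0:
        return True
    if trip["distance_mi"] == 0 and trip["duration_s"] != 0:
        return True
    return False

trips = [
    {"passengers": 1, "pickup_gps": (40.7, -74.0), "dropoff_gps": (40.8, -73.9),
     "duration_s": 900, "distance_mi": 3.2},   # valid
    {"passengers": 0, "pickup_gps": (40.7, -74.0), "dropoff_gps": (40.8, -73.9),
     "duration_s": 900, "distance_mi": 3.2},   # zero passengers -> removed
    {"passengers": 2, "pickup_gps": (40.7, -74.0), "dropoff_gps": (40.8, -73.9),
     "duration_s": 0, "distance_mi": 3.2},     # zero time, non-zero distance -> removed
]
clean = [t for t in trips if not is_anomalous(t)]
```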
Similar to the location data, the time of day may be pre-processed. For example, the time of day may be discretized into 1-D time cells: the 3600 × 24 seconds of a day may be discretized into 10-minute cells, yielding 3600 × 24 / 60 / 10 = 144 cells. Each cell may correspond to the average travel time of vehicles departing at the respective time. Other pre-processing may also be applied. For example, passengers riding taxis or service vehicles may show different patterns depending on the day of the week (e.g., long rides late on weekends), the season (e.g., during extremely cold or hot seasons), the weather (e.g., longer rides during inclement weather), the geographic region (e.g., longer rides in more spread-out metropolitan areas), etc. With the day-of-the-week example, weekdays may correspond to 144 cells and weekends to another 144 cells, for a total of 288 time cells.
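The time-of-day binning above amounts to integer division; a sketch including the weekday/weekend split (288 cells total), with the function name chosen here for illustration:

```python
def time_cell(seconds_since_midnight, is_weekend, cell_minutes=10):
    """Map a departure time to one of 144 weekday or 144 weekend cells."""
    cells_per_day = 24 * 60 // cell_minutes            # 144 cells per day
    cell = seconds_since_midnight // (cell_minutes * 60)
    return int(cell) + (cells_per_day if is_weekend else 0)

c1 = time_cell(0, False)          # midnight on a weekday -> cell 0
c2 = time_cell(8 * 3600, False)   # 08:00 on a weekday -> cell 48
c3 = time_cell(8 * 3600, True)    # 08:00 on a weekend -> cell 144 + 48 = 192
```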
In various embodiments, the computing device 110 may submit a query 204 to the system 102. Alternatively, another device, or the system 102 itself, may initiate the query 204. In any case, the system 102 may receive the query at any time (e.g., before the user begins a trip in the vehicle, or while the user is riding in the vehicle). The query 204 may include an origin, a destination, and a time of day, which may be sent by the computing device 110, collected by the system 102, or otherwise obtained. In some embodiments, the origin may correspond to the user's location. If the user is moving (e.g., riding in a vehicle), the origin may be continually updated, and the travel time and distance estimates may be continually updated accordingly.
In various embodiments, the UnifiedNN may learn the feature representation from the training data and best map the features to the output variables. Furthermore, the UnifiedNN may be able to learn any mapping from features to outputs, and may approximate any non-linear function. The main framework of unifiednns may be based on neural networks, as described below with reference to fig. 3B. In neural networks, neurons may serve as the basic building blocks. Neurons can receive input signals (e.g., input data), process them using logical computational functions, and send output signals (e.g., output data) according to the results of the computations. When these neurons are arranged into a network of neurons, they are referred to as a neural network. Each column of neurons in the network is referred to as a layer, and there can be multiple layers of neurons in each layer of the network. Networks with a single layer of neurons are called perceptrons, and networks with multiple layers of neurons are called multilayer perceptrons (MLPs). Two hidden layer multilayer perceptrons (A) are shown in FIG. 3B 1 Layer and A 2 Layer) where the input layer includes inputs (X) to the network 1 ,X 2 ,X 3 ,X 4 ). The input layer is also referred to as the visible layer, since this may be the only exposed part of the network. The hidden layer derives features from the input layer at different scales or resolutions to form high-level features and outputs values or vectors of values at the output layer. At each hidden layer, the network may compute the features as:
A1 = f(W1 · X)

A2 = f(W2 · A1)

Ŷ = f(W3 · A2)

where f is an activation function that takes the weights of a layer (e.g., W1, W2, W3) and the output of the previous layer, and outputs a value. The function f may be the same or different for all hidden layers. A1, A2, and Ŷ are the successive outputs of the first hidden layer, the second hidden layer, and the final output layer. For a given row of data X as a network input, the network may process the input to obtain A1, A2, and eventually the predicted output Ŷ. This may be referred to as forward propagation. The predicted output Ŷ can then be compared to the expected output Y to calculate an error using a loss function. The expected output Y may have high precision (e.g., results of independent verification, manually acquired results, cross-checked results). The loss function measures how confident the network's estimates are. For example, in a regression problem, the mean squared loss between the predicted output and the expected output can be calculated as:

L = (1/N) Σ_i (Ŷ_i - Y_i)²    (1)

where N is the number of training data samples, and Y_i represents the expected output of the i-th training sample. The error may then be propagated back through the network using a back-propagation algorithm to update the weights W1, W2, W3 one layer at a time according to stochastic gradient descent. This may be referred to as back propagation. The process of forward propagation and back propagation may be repeated for all data samples in the training data, and one pass over the entire training data set may be referred to as an epoch. The neural network can thus be trained over a number of epochs to minimize the loss. In addition, cross-validation can be used to tune the hyper-parameters, such as the number of layers in the network, the number of neurons per layer, the neuron activation functions, and the loss function. Thus, through the above training and tuning, a trained neural network can be obtained that accurately maps features to outputs. In the present application, the features may include an origin, a destination, global positioning system coordinates, and a time of day, and the output may include a predicted travel time and distance.
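The forward pass and mean squared loss described above can be sketched in plain Python; the tiny layer sizes, ReLU activation, and linear output layer are illustrative assumptions, not the patent's configuration:

```python
def relu(vec):
    """Element-wise ReLU activation, standing in for the generic f above."""
    return [max(0.0, v) for v in vec]

def dense(weights, inputs):
    """One fully connected layer: weights is a list of rows, one per output neuron."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def forward(x, w1, w2, w3):
    """Forward propagation through two hidden layers and a linear output layer."""
    a1 = relu(dense(w1, x))    # A1 = f(W1 . X)
    a2 = relu(dense(w2, a1))   # A2 = f(W2 . A1)
    return dense(w3, a2)[0]    # scalar prediction Y-hat

def mse(preds, targets):
    """Mean squared loss, equation (1): average of (Y-hat_i - Y_i)^2 over N samples."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
```

With `x = [1.0, 2.0]`, identity first-layer weights, `w2 = [[0.5, 0.5]]`, and `w3 = [[2.0]]`, `forward` returns 3.0; `mse([3.0], [1.0])` is 4.0. Back propagation would then adjust the weights to shrink this loss.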
Fig. 3C is an exemplary unified neural network (UnifiedNN) 330 that estimates travel time and distance, according to various embodiments. The UnifiedNN and various modules described herein (e.g., distance DNN (deep neural network) module, time DNN module) may be implemented as instructions stored in memory 106 described above. The instructions, when executed by the processor 104, may cause the system 102 to perform the various methods and steps described herein.
In some embodiments, the UnifiedNN may include a "distance DNN module" for estimating the travel distance and a "time DNN module" for estimating the travel time. The input to the distance DNN module may include an origin o_i and a destination d_i in discretized GPS coordinates (origin latitude, origin longitude, destination latitude, destination longitude). The distance DNN module may not be exposed to the time-of-day information, because the time of day is not relevant to the travel distance estimate and may mislead the network. Thus, the input to the distance DNN module may include only the origin latitude grid, the origin longitude grid, the destination latitude grid, and the destination longitude grid. The distance DNN module may thus produce the predicted travel distance. Further, the input to the time DNN module may include information from the last layer of the first module (e.g., the activations of the last hidden layer of the distance DNN module, the travel distance output from the last layer, etc.) and the time-of-day information, which is important for estimating the travel time because the time of day correlates with dynamic traffic conditions. Thus, the time DNN module may produce the predicted travel time.
In some embodiments, both the distance DNN module and the time DNN module may be three-layer multilayer perceptrons with a different number of neurons in each layer. FIG. 3C shows an example of the cross-validated number of neurons for each layer of the two modules, configured for optimal performance in accurate travel time and distance prediction. The distance DNN module may include three hidden layers: a first layer of 20 neurons, a second layer of 100 neurons, and a third layer of 20 neurons. The time DNN module may include three hidden layers: a first layer of 64 neurons, a second layer of 128 neurons, and a third layer of 20 neurons.
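A minimal sketch of the two-module wiring follows, with toy layer widths standing in for the 20/100/20 and 64/128/20 configuration of FIG. 3C; all weights, sizes, and the ReLU activation here are illustrative assumptions:

```python
def relu(vec):
    return [max(0.0, v) for v in vec]

def dense(weights, inputs):
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

class UnifiedNN:
    """Toy sketch of the two-module design. Real layer widths per the patent
    are 20/100/20 (distance module) and 64/128/20 (time module)."""

    def __init__(self, dist_hidden, dist_out, time_hidden, time_out):
        self.dist_hidden, self.dist_out = dist_hidden, dist_out
        self.time_hidden, self.time_out = time_hidden, time_out

    def forward(self, origin_dest, time_of_day):
        # Distance module sees only the discretized origin/destination features.
        a = origin_dest
        for w in self.dist_hidden:
            a = relu(dense(w, a))
        distance = dense(self.dist_out, a)[0]
        # Time module sees the distance module's last hidden activation
        # plus the time-of-day feature -- never the raw coordinates.
        b = a + [time_of_day]
        for w in self.time_hidden:
            b = relu(dense(w, b))
        travel_time = dense(self.time_out, b)[0]
        return distance, travel_time
```

The key design point this sketch mirrors is that the time module's input is the concatenation of the distance module's last hidden layer and the time of day.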
Ŷ_d and Ŷ_t are the predicted distance and the predicted time of the distance DNN module and the time DNN module, respectively. The whole network can be trained jointly by stochastic gradient descent with the loss function:

L = L_d + L_t    (2)

From equation (1), the loss function of equation (2) can be written as:

L = (1/N) Σ_i [ (Ŷ_d,i - Y_d,i)² + (Ŷ_t,i - Y_t,i)² ]
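The joint objective above sums the squared distance error and squared time error per sample; a sketch, assuming equal weighting of the two terms:

```python
def joint_loss(pred_dist, true_dist, pred_time, true_time):
    """Mean over samples of (squared distance error + squared time error).
    Equal weighting of the two terms is an assumption of this sketch."""
    n = len(pred_dist)
    return sum((pd - td) ** 2 + (pt - tt) ** 2
               for pd, td, pt, tt in zip(pred_dist, true_dist,
                                         pred_time, true_time)) / n
```

Because both modules contribute to a single scalar loss, gradient descent on it updates the distance and time modules together, as the description indicates.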
the disclosed methods (e.g., unifiedNN) can predict travel times and distances more accurately than other methods, such as: (1) Time Linear Regression (LRT), a simple linear regression method for time estimation that models travel time as a function of origin and destination defined by two-dimensional grid cells and one-dimensional time cell numbers; (2) Distance Linear Regression (LRD), a simple linear regression method for distance estimation that models distance traveled as a function of origin and destination defined by a two-dimensional grid cell; (3) A time DNN module (TimeNN), a method of learning travel time using only the time DNN module of UnifiedNN, and whose inputs include only an origin and a destination defined by two-dimensional square cells and one-dimensional time cell numbers; and (4) a distance DNN module (DistNN), a method of learning a travel distance using only the distance DNN module of the UnifiedNN, and the input of which includes only an original origin and destination defined by a two-dimensional checkered cell.
The disclosed method is superior to simple linear regression for travel time estimation (LRT), since simple linear regression is a baseline method that does not consider uncertain traffic conditions and only attempts to find a linear relationship between the raw origin-destination GPS coordinates and the travel time. By taking the time of day into account, TimeNN may improve on the travel time prediction of time linear regression by mapping the raw origin-destination GPS coordinates and time-of-day information to the travel time. This can improve the MAE (mean absolute error) by about 78%. The disclosed method may add encoded distance information for travel time prediction to TimeNN, thereby further improving prediction performance: the MAE may improve by about 13 seconds from TimeNN to UnifiedNN. Since the MAE may be an average over millions of vehicle trips, an improvement of about 13 seconds in MAE is significant. Travel time and distance predictions are more accurate with UnifiedNN than with TimeNN, and their variance may increase with longer trips (e.g., longer than 12 minutes, or 720 seconds). In some embodiments, because short trips of up to about 8-10 minutes are more susceptible to time-dependent conditions, the disclosed method may be better at capturing dynamic conditions than other methods. In particular, the disclosed method can provide significant benefits for short trips over other methods.
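MAE, the comparison metric used in the paragraph above, is straightforward to compute; this helper is illustrative:

```python
def mae(predictions, targets):
    """Mean absolute error: average absolute deviation between predicted
    and actual travel times (or distances) over a set of trips."""
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(predictions)
```

Averaged over millions of trips, even a 13-second shift in this quantity reflects a consistent accuracy gain rather than noise on a few samples.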
Similar to LRT, LRD performs worse than UnifiedNN because most vehicle service areas are in cities, where it is difficult to find a straight-line route from an origin to a destination. Therefore, although LRD always tries to fit a linear pattern, searching for a linear distance pattern is not an effective approach. Furthermore, the performance difference between DistNN and UnifiedNN is small, but DistNN cannot provide a high-performance travel time estimate, because the DistNN method uses only distance data for its calculations, without the time-of-day input described further herein.
Fig. 4A is a flow diagram illustrating an exemplary method 400 according to various embodiments of the present application. Method 400 may be implemented in various environments, including, for example, environment 100 of FIG. 1. The example method 400 may be implemented by one or more components of the system 102 (e.g., the processor 104, the memory 106). The exemplary method 400 may be implemented by a plurality of systems similar to the system 102. The operations of method 400 presented below are for illustrative purposes. Depending on the implementation, the exemplary method 400 may include additional, fewer, or alternative steps performed in various orders or in parallel.
In step 402, a vehicle trip data set for a plurality of trips may be obtained, the data set including an origin, a destination, a time of day, a trip time, and a trip distance associated with each of the plurality of trips. In some embodiments, the origin and destination may be pre-processed as described above, for example, to obtain discrete GPS (global positioning system) coordinates. The trip time may also be pre-processed as described above to obtain discrete trip times. In step 404, the neural network model may be trained using the vehicle trip data set to obtain a trained model. The neural network model may include a first module and a second module, each corresponding to at least a portion of the instructions stored in the memory. The first module may include a first number of neuron layers and may be configured to obtain the origin and the destination as a first input to estimate the travel distance. The second module may include a second number of neuron layers and may be configured to obtain information from the last layer of the first module and the time of day as a second input to estimate the travel time.
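The pre-processing into discrete GPS coordinates could, for instance, snap raw coordinates onto a square grid; the grid origin and cell size below are hypothetical parameters for illustration:

```python
def to_grid_cell(lat, lon, lat_origin, lon_origin, cell_size_deg):
    """Map raw GPS coordinates to (row, column) indices of a square grid.
    lat_origin/lon_origin mark the grid's corner; cell_size_deg is the
    cell width in degrees. Both are assumptions of this sketch."""
    return (int((lat - lat_origin) / cell_size_deg),
            int((lon - lon_origin) / cell_size_deg))
```

Discretizing this way turns continuous coordinates into the grid-cell features that the distance DNN module consumes.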
In some embodiments, to train the neural network model using the vehicle trip data set, the computing system 102 may be used or configured to: provide the origin and the destination to the first module to obtain the first number and the number of neurons in each of the first number of neuron layers; compare the estimated travel distance to the trip distance to adjust the first number and the number of neurons in each of the first number of neuron layers; provide the information from the last layer of the first module and the time of day to the second module to obtain the second number and the number of neurons in each of the second number of neuron layers; and compare the estimated travel time to the trip time to adjust the second number and the number of neurons in each of the second number of neuron layers. In step 406, optionally, a query may be received that includes a query origin, a query destination, and a query time (where "query" modifies the origin, destination, and time to distinguish them from the origin, destination, and time in the training data set described above). In step 408, optionally, the trained model may be applied to the received query to estimate the travel time and travel distance from the query origin to the query destination at the query time.
In some embodiments, the first module and the second module may be implemented as various instructions stored in a memory (e.g., memory 106 described above). When executed by a processor (e.g., processor 104 described above), the instructions may cause a system (e.g., system 102) to perform the various steps and methods described herein. The first module may be a neural network model including three neuron layers, and the second module may be another neural network model including three neuron layers. The first module may include a first neuron layer comprising 20 neurons, a second neuron layer comprising 100 neurons, and a third neuron layer comprising 20 neurons. The second module may include a first neuron layer comprising 64 neurons, a second neuron layer comprising 120 neurons, and a third neuron layer comprising 20 neurons.
In some embodiments, steps 406 and 408 may illustrate applying the trained model to estimate travel time and distance. For example, a user may submit a query while looking for transportation (e.g., by taxi or another vehicle) from the query origin to the query destination. The estimated travel time and distance returned to the user may provide important information for deciding whether to travel as planned. As another example, a user being transported (e.g., by taxi or another vehicle) may continually receive updated travel time and distance estimates while riding in the vehicle, based on a changing query origin (the real-time location of the vehicle or the user). The user may adjust the schedule accordingly based on the updated travel times and distances, make changes to the travel route plan as necessary, and so forth. In either case, the user may actively submit the origin and destination (e.g., an entered address, a pinned location on a map) to the computing system 102, and/or the computing system 102 may actively obtain the origin and destination (e.g., by querying the current location of the computing device 110 to update the origin).
Fig. 4B is a flow diagram illustrating an exemplary method 450 according to various embodiments of the present application. Method 450 may be implemented in various environments, including, for example, environment 100 of FIG. 1. The example method 450 may be implemented by one or more components of the system 102 (e.g., the processor 104, the memory 106). The exemplary method 450 may be implemented by a plurality of systems similar to the system 102. The operations of method 450 presented below are for illustrative purposes. Depending on the implementation, the example method 450 may include additional, fewer, or alternative steps performed in various orders or in parallel. The various modules described below may be trained, for example, by the methods discussed above.
In step 452, information about the origin and the destination may be input to a first trained neural network module (e.g., the distance DNN module described above) to obtain the travel distance. For example, a user may directly enter the origin and destination into a device that sends them to the computing system 102. As another example, the computing system 102 may query a device (e.g., a mobile phone) associated with the user to obtain or update the origin or destination. In step 454, time information (e.g., time-of-day information) and information from the last layer of the first trained neural network module (e.g., the activations of the last hidden layer of the distance DNN module, the travel distance output from the last layer, etc.) may be input to a second trained neural network module (e.g., the time DNN module described above) to obtain the travel time. The computing system 102 may provide the information from the last layer of the first trained neural network module to the second trained neural network module. The time-of-day information may be submitted actively by the user or obtained actively by the computing system 102. The input to the distance DNN module may include an origin o_i and a destination d_i in discretized GPS coordinates (origin latitude, origin longitude, destination latitude, destination longitude). The distance DNN module may thus complete the travel distance prediction. The input to the time DNN module may include information such as the activations of the last hidden layer of the distance DNN module and the time-of-day information, which correlates with dynamic traffic information and is therefore important for estimating the travel time. Thus, the time DNN module may complete the travel time prediction.
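The two inference steps above can be composed as a small pipeline; the module interfaces here (returning the last hidden activation alongside the prediction) and the toy weights are assumptions of this sketch:

```python
def relu(vec):
    return [max(0.0, v) for v in vec]

def dense(weights, inputs):
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def distance_step(origin_dest, hidden_weights, out_weights):
    """Step 452: run the trained distance module; return the travel-distance
    estimate and the last hidden activation needed by the time module."""
    a = origin_dest
    for w in hidden_weights:
        a = relu(dense(w, a))
    return dense(out_weights, a)[0], a

def time_step(last_hidden, time_of_day, hidden_weights, out_weights):
    """Step 454: run the trained time module on the distance module's last
    hidden activation concatenated with the time-of-day feature."""
    b = last_hidden + [time_of_day]
    for w in hidden_weights:
        b = relu(dense(w, b))
    return dense(out_weights, b)[0]
```

At serving time the system would call `distance_step` first, then feed its hidden activation and the query time into `time_step`, mirroring the order of steps 452 and 454.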
The techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include circuitry or digital electronic devices, such as one or more application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs), that are persistently programmed to perform the techniques, or may include one or more hardware processors programmed to perform the techniques according to program instructions in firmware, memory, other storage, or a combination thereof. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to implement the techniques. A special-purpose computing device may be a desktop computer system, a server computer system, a portable computer system, a handheld device, a network device, or any other device or combination of devices that incorporates hard-wired and/or program logic to implement the techniques. Computing devices are typically controlled and coordinated by operating system software. Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file systems, networking, and input/output services, and provide user interface functions such as a graphical user interface ("GUI").
Fig. 5 illustrates a block diagram of a computer system 500, on which computer system 500 any of the embodiments described herein may be implemented. The system 500 may correspond to the system 102 described above. Computer system 500 includes a bus 502 or other communication mechanism for communicating information, and one or more hardware processors 504 coupled with bus 502 for processing information. The hardware processor 504 may be, for example, one or more general-purpose microprocessors. The processor 504 may correspond to the processor 104 described above.
Computer system 500 also includes a main memory 506, such as a Random Access Memory (RAM), cache memory, and/or other dynamic storage device, coupled to bus 502 for storing information and instructions to be executed by processor 504. Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. When stored in a storage medium accessible to processor 504, the instructions cause computer system 500 to become a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system 500 further includes a Read Only Memory (ROM) 508 or other static storage device coupled to bus 502 for storing static information and instructions for processor 504. A storage device 510, such as a magnetic disk, optical disk, or USB thumb drive (flash drive), is provided and coupled to bus 502 for storing information and instructions. Main memory 506, ROM 508, and/or storage 510 may correspond to memory 106 described above.
Computer system 500 may implement the techniques described herein using custom hardwired logic, one or more ASICs or FPGAs, firmware, and/or program logic that, in combination with the computer system, render computer system 500 a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 500 in response to processor 504 executing one or more sequences of one or more instructions contained in main memory 506. Such instructions may be read into main memory 506 from another storage medium, such as storage device 510. Execution of the sequences of instructions contained in main memory 506 causes processor 504 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
Main memory 506, ROM 508, and/or storage 510 may include non-transitory storage media. The term "non-transitory media," and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific manner. Such non-transitory media may include non-volatile media and/or volatile media. Non-volatile media include, for example, optical or magnetic disks, such as storage device 510. Volatile media include dynamic memory, such as main memory 506. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions thereof.
Computer system 500 also includes a communication interface 518 coupled to bus 502. Communication interface 518 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 518 may be an Integrated Services Digital Network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 518 may be a Local Area Network (LAN) card to provide a data communication connection to a compatible LAN (or WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 518 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Computer system 500 can send messages and receive data, including program code, through the network(s), network link and communication interface 518. In the Internet example, a server might transmit a requested code for an application program through the Internet, an ISP, local network and communication interface 518.
The received code may be executed by processor 504 as it is received, and/or stored in storage device 510, or other non-volatile storage for later execution.
Each of the processes, methods, and algorithms described in the preceding sections may be implemented by code modules executed by one or more computer systems or computer processors comprising computer hardware, and are fully or partially automated. The processes and algorithms may be partially or fully implemented in application-specific circuitry.
The various features and processes described above may be used independently of one another or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of the present disclosure. Additionally, certain method or process blocks may be omitted in some embodiments. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states associated therewith may be performed in other sequences as appropriate. For example, described steps or states may be performed in an order different than that specifically disclosed, or multiple steps or states may be combined in a single step or state. The example steps or states may be performed in series, in parallel, or in some other manner. Steps or states may be added or removed from the disclosed exemplary embodiments. The example systems and components described herein may have configurations different than those described. For example, elements may be added, removed, or rearranged compared to the disclosed example embodiments.
Various operations of the example methods described herein may be performed, at least in part, by one or more processors that are temporarily configured (e.g., via software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such a processor may constitute a processor-implemented engine that operates to perform one or more operations or functions described herein.
Similarly, the methods described herein may be at least partially processor-implemented, a particular processor being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented engines. Moreover, the one or more processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as "software as a service" (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs)).
The performance of certain operations may be distributed among processors, not only residing within a single machine, but also deployed across multiple machines. In some exemplary embodiments, the processor or processor-implemented engine may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other exemplary embodiments, the processor or processor-implemented engine may be distributed across multiple geographic locations.
Throughout the specification, multiple instances may implement a component, an operation, or a structure described as a single instance. Although the individual operations of one or more methods are illustrated and described as separate operations, one or more of the separate operations may be performed concurrently and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in the exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
Although the summary of the subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to the embodiments without departing from the broader scope of the embodiments of the application. Embodiments of these subject matter may be referred to herein, individually or collectively, by the term "invention" merely for convenience and without intending to limit the scope of this application to any single disclosure or concept if more than one is in fact disclosed.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the disclosed teachings. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The detailed description is, therefore, not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
Any process descriptions, elements, or steps in the flowcharts depicted herein and/or in the figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternative examples are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
As used herein, the term "or" may be interpreted in an inclusive or exclusive sense. Furthermore, multiple instances may be provided as a single instance for a resource, operation, or structure described herein. In addition, the boundaries between the various resources, operations, engines, and databases are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are contemplated and fall within the scope of various embodiments of the present application. In general, structures and functionality presented as separate resources in the exemplary configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within the scope of the embodiments of the application, as represented by the claims that follow. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Conditional language such as "can," "could," "might," or "may," unless specifically stated otherwise or otherwise understood within the context as used, is generally intended to convey that certain embodiments include certain features, elements, and/or steps while other embodiments do not. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments, or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.

Claims (16)

1. A computing system for estimating travel time and distance, comprising:
one or more processors; and
memory storing instructions that, when executed by the one or more processors, cause the computing system to:
obtaining a vehicle trip data set including an origin, a destination, a time of day, a trip time, and a trip distance associated with each of a plurality of trips; and
training a neural network model with the vehicle travel data set to obtain a trained model, wherein:
the neural network model comprises a first module and a second module;
the first module comprises a first number of neuron layers;
the first module is configured to obtain the origin and the destination as a first input to estimate a travel distance;
the second module comprises a second number of neuron layers; and
the second module is configured to obtain information of a last layer of the first module and the time of day as a second input to estimate a travel time, wherein the information of the last layer comprises a last hidden layer of the first module and the travel distance;
the training of the neural network model with the vehicle trip data set causes the computing system to:
provide the origin and the destination to the first module to obtain the first number and a number of neurons in each of the first number of neuron layers;
compare the estimated travel distance to the trip distance to adjust the first number and the number of neurons in each of the first number of neuron layers;
provide the information of the last layer of the first module and the time of day to the second module to obtain the second number and a number of neurons in each of the second number of neuron layers; and
compare the estimated travel time to the trip time to adjust the second number and the number of neurons in each of the second number of neuron layers.
2. The computing system of claim 1, wherein the memory further stores instructions that, when executed by the one or more processors, cause the computing system to:
receiving a query comprising a query starting point, a query destination and a query time; and
applying the trained model to the received query to predict the travel time and the travel distance from the query origin to the query destination at the query time.
3. The computing system of claim 1, wherein:
the origin and destination each comprise discrete GPS coordinates; and
the travel time comprises a discrete time.
4. The computing system of claim 1, wherein:
the first module is a neural network comprising three layers of neurons; and
the second module is another neural network that includes three layers of neurons.
5. The computing system of claim 4, wherein the first module comprises:
a first neuron layer comprising 20 neurons;
a second neuron layer comprising 100 neurons; and
a third neuron layer comprising 20 neurons.
6. The computing system of claim 4, wherein the second module comprises:
a first neuron layer comprising 64 neurons;
a second neuron layer comprising 120 neurons; and
a third neuron layer comprising 20 neurons.
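The cascade described in claims 4 to 6 can be illustrated with a short NumPy sketch. This is a non-authoritative illustration, not the patented implementation: the input widths (four GPS coordinate values for origin plus destination, one scalar time of day) and the random initialization are assumptions made only so the shapes line up; the layer widths (20/100/20 and 64/120/20) come from the claims themselves.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def init_layers(sizes, rng):
    """Create (weight, bias) pairs for consecutive layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Run x through the layers; return the output and the last hidden activation."""
    h = x
    for w, b in layers[:-1]:
        h = relu(h @ w + b)
    last_hidden = h
    w, b = layers[-1]
    return h @ w + b, last_hidden

rng = np.random.default_rng(0)
# First module: origin + destination (assumed 4 GPS values) -> driving distance,
# with neuron layers of 20, 100, and 20 neurons as recited in claim 5.
first = init_layers([4, 20, 100, 20, 1], rng)
# Second module: last hidden layer (20) + driving distance (1) + time of day (1)
# -> driving time, with neuron layers of 64, 120, and 20 neurons (claim 6).
second = init_layers([22, 64, 120, 20, 1], rng)

trip = np.array([30.67, 104.06, 30.54, 104.07])  # hypothetical coordinates
dist, hidden = forward(first, trip)
time_input = np.concatenate([hidden, dist, [8.5]])  # 8.5 = assumed time of day
travel_time, _ = forward(second, time_input)
```

The key structural point the sketch shows is the second input of claim 1: the second module consumes the first module's last hidden activation together with its distance estimate, rather than the raw coordinates.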
7. A travel time and distance estimation method, comprising:
obtaining a vehicle trip data set including an origin, a destination, a time of day, a trip time, and a trip distance associated with each of a plurality of trips; and
training a neural network model with the vehicle trip data set to obtain a trained model, wherein:
the neural network model comprises a first module and a second module;
the first module includes a first number of neuron layers;
the first module is used for acquiring the starting point and the destination as a first input to predict a driving distance;
the second module comprises a second number of neuron layers; and
the second module is used for acquiring information of the last layer of the first module and the time as second input to estimate the driving time, wherein the information of the last layer comprises a last hidden layer of the first module and the driving distance;
the training of the neural network model with the vehicle trip data set comprises:
providing the origin and the destination to the first module to obtain the first number and the number of neurons in each of the first number of neuron layers;
comparing the estimated driving distance to the trip distance to adjust the first number and the number of neurons in each of the first number of neuron layers;
providing the information of the last layer of the first module and the time of day to the second module to obtain the second number and the number of neurons in each of the second number of neuron layers; and
comparing the estimated travel time to the trip time to adjust the second number and the number of neurons in each of the second number of neuron layers.
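The adjustment steps recited above amount to an architecture search: candidate layer counts and widths are scored by comparing each module's estimate against the ground truth from the trip data set. The following sketch is only an illustration of that idea under stated assumptions: the data is synthetic, and a randomly initialized forward pass stands in for actual training of each candidate, which a real system would of course perform before scoring.

```python
import numpy as np

def mean_squared_error(pred, target):
    return float(np.mean((pred - target) ** 2))

def evaluate_architecture(hidden_sizes, X, y, rng):
    """Score one candidate architecture by its error on (X, y).

    Illustration only: the candidate is randomly initialized rather than
    trained, so the score is a placeholder for a post-training error."""
    sizes = [X.shape[1]] + list(hidden_sizes) + [1]
    layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    h = X
    for w, b in layers[:-1]:
        h = np.maximum(0.0, h @ w + b)  # ReLU hidden layers
    w, b = layers[-1]
    return mean_squared_error(h @ w + b, y)

rng = np.random.default_rng(1)
X = rng.standard_normal((32, 4))   # synthetic origin/destination inputs
y = rng.standard_normal((32, 1))   # synthetic trip distances
# Candidate (layer count, neurons per layer) configurations to compare.
candidates = [(20, 100, 20), (10, 50, 10), (64, 64)]
errors = {c: evaluate_architecture(c, X, y, rng) for c in candidates}
best = min(errors, key=errors.get)
```

The same comparison would be run a second time for the second module, with its input built from the first module's last hidden layer, the estimated distance, and the time of day, and its error measured against the trip time.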
8. The method of claim 7, further comprising:
receiving a query comprising a query origin, a query destination, and a query time; and
applying the trained model to the received query to predict the travel time and the travel distance from the query origin to the query destination at the query time.
9. The method of claim 7, wherein:
the origin and destination each comprise discrete GPS coordinates; and
the travel time comprises a discrete time.
10. The method of claim 7, wherein:
the first module is a neural network comprising three layers of neurons; and
the second module is another neural network that includes three layers of neurons.
11. The method of claim 10, wherein the first module comprises:
a first neuron layer comprising 20 neurons;
a second neuron layer comprising 100 neurons; and
a third neuron layer comprising 20 neurons.
12. The method of claim 10, wherein the second module comprises:
a first neuron layer comprising 64 neurons;
a second neuron layer comprising 120 neurons; and
a third neuron layer comprising 20 neurons.
13. A non-transitory computer-readable medium for estimating travel time and distance, comprising instructions stored therein, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform the steps of:
obtaining a vehicle trip data set comprising an origin, a destination, a time of day, a trip time, and a trip distance associated with each of a plurality of trips; and
training a neural network model with the vehicle trip data set to obtain a trained model, wherein:
the neural network model comprises a first module and a second module;
the first module includes a first number of neuron layers;
the first module is used for acquiring the starting point and the destination as a first input to predict a driving distance;
the second module comprises a second number of neuron layers; and
the second module is used for acquiring information of the last layer of the first module and the time as second input to estimate the driving time, wherein the information of the last layer comprises a last hidden layer of the first module and the driving distance;
the training of the neural network model with the vehicle trip data set comprises:
providing the origin and the destination to the first module to obtain the first number and the number of neurons in each of the first number of neuron layers;
comparing the estimated driving distance to the trip distance to adjust the first number and the number of neurons in each of the first number of neuron layers;
providing the information of the last layer of the first module and the time of day to the second module to obtain the second number and the number of neurons in each of the second number of neuron layers; and
comparing the estimated travel time to the trip time to adjust the second number and the number of neurons in each of the second number of neuron layers.
14. The non-transitory computer-readable medium of claim 13, wherein the instructions, when executed by one or more processors, further perform the steps of:
receiving a query comprising a query origin, a query destination, and a query time; and
applying the trained model to the received query to predict the travel time and the travel distance from the query origin to the query destination at the query time.
15. The non-transitory computer-readable medium of claim 13, wherein:
the first module is a neural network comprising three neuron layers; and
the second module is another neural network that includes three layers of neurons.
16. The non-transitory computer-readable medium of claim 13, wherein:
the first module comprises:
a first neuron layer comprising 20 neurons;
a second neuron layer comprising 100 neurons; and
a third neuron layer comprising 20 neurons; and
the second module includes:
a first neuron layer comprising 64 neurons; a second neuron layer comprising 120 neurons; and
a third neuron layer comprising 20 neurons.
CN201780088459.3A 2017-08-10 2017-08-10 Travel time and distance estimation system and method Active CN110431544B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/096893 WO2019028763A1 (en) 2017-08-10 2017-08-10 System and method for estimating travel time and distance

Publications (2)

Publication Number Publication Date
CN110431544A CN110431544A (en) 2019-11-08
CN110431544B true CN110431544B (en) 2023-04-18

Family

ID=65272934

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780088459.3A Active CN110431544B (en) 2017-08-10 2017-08-10 Travel time and distance estimation system and method

Country Status (4)

Country Link
US (1) US11536582B2 (en)
CN (1) CN110431544B (en)
TW (1) TW201911142A (en)
WO (1) WO2019028763A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11276071B2 (en) * 2017-08-31 2022-03-15 Paypal, Inc. Unified artificial intelligence model for multiple customer value variable prediction
US11893623B1 (en) 2020-05-29 2024-02-06 Walgreen Co. System for displaying dynamic pharmacy information on a graphical user interface
TWI748514B (en) * 2020-06-12 2021-12-01 中華電信股份有限公司 Method and system for estimating traffic
US20210390629A1 (en) * 2020-06-16 2021-12-16 Amadeus S.A.S. Expense fraud detection
EP3971780A1 (en) * 2020-07-24 2022-03-23 Tata Consultancy Services Limited Method and system for dynamically predicting vehicle arrival time using a temporal difference learning technique
CN112429044B (en) * 2020-11-27 2023-02-28 株洲中车时代软件技术有限公司 Method and system for positioning running train position based on nonlinear compensation
FR3124256A1 (en) * 2021-06-21 2022-12-23 Psa Automobiles Sa METHOD FOR PREDICTING A POSITION OR A DESTINATION TRAJECTORY OF A VEHICLE

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081859A (en) * 2009-11-26 2011-06-01 上海遥薇实业有限公司 Control method of bus arrival time prediction model
JP2013013261A (en) * 2011-06-30 2013-01-17 Hitachi Zosen Corp Device and method for detecting position of railway vehicle
US9015093B1 (en) * 2010-10-26 2015-04-21 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
CN104715630A (en) * 2014-10-06 2015-06-17 中华电信股份有限公司 Arrival time prediction system and method
CN105636197A (en) * 2014-11-06 2016-06-01 株式会社理光 Method and device for estimating distance, and method and device for locating node

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM474994U (en) 2011-06-08 2014-03-21 Univ Chaoyang Technology System for estimating traffic flow and vehicle speed
TW201324461A (en) 2011-12-13 2013-06-16 Ind Tech Res Inst Vehicle speed forecast method and system
CN104346926B (en) 2013-07-31 2017-09-12 国际商业机器公司 Running time Forecasting Methodology and device and related terminal device
CN103440768B (en) 2013-09-12 2015-04-15 重庆大学 Dynamic-correction-based real-time bus arrival time predicting method
US10848912B2 (en) 2014-05-30 2020-11-24 Apple Inc. Estimated time of arrival (ETA) based on calibrated distance
CN104637334B (en) 2015-02-10 2017-07-07 中山大学 A kind of bus arrival time real-time predicting method
CN104900063B (en) 2015-06-19 2017-10-27 中国科学院自动化研究所 A kind of short distance running time Forecasting Methodology
CN108133611A (en) * 2016-12-01 2018-06-08 中兴通讯股份有限公司 Vehicle driving trace monitoring method and system
EP3336571A1 (en) * 2017-04-05 2018-06-20 Siemens Healthcare GmbH Assignment of mr fingerprints on the basis of neural networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081859A (en) * 2009-11-26 2011-06-01 上海遥薇实业有限公司 Control method of bus arrival time prediction model
US9015093B1 (en) * 2010-10-26 2015-04-21 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
JP2013013261A (en) * 2011-06-30 2013-01-17 Hitachi Zosen Corp Device and method for detecting position of railway vehicle
CN104715630A (en) * 2014-10-06 2015-06-17 中华电信股份有限公司 Arrival time prediction system and method
CN106022541A (en) * 2014-10-06 2016-10-12 中华电信股份有限公司 Arrival time prediction method
CN105636197A (en) * 2014-11-06 2016-06-01 株式会社理光 Method and device for estimating distance, and method and device for locating node

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Khaled Shaban, "A Cascade of Artificial Neural Networks to Predict Transformers Oil Parameters," IEEE Transactions on Dielectrics and Electrical Insulation, vol. 16, no. 2, pp. 516-523, 17 Apr. 2009 *
J.W.C. van Lint, "Accurate freeway travel time prediction with state-space neural networks under missing data," Elsevier, pp. 347-369, 31 Dec. 2005 *
Eun-Mi Lee, "Traffic Speed Prediction Under Weekday, Time, and Neighboring Links' Speed: Back Propagation Neural Network Approach," Advanced Intelligent Computing Theories and Applications - With Aspects of Theoretical and Methodological Issues, pp. 626-635, 21 Dec. 2007 *
Xianyuan Zhan, "Urban link travel time estimation using large-scale taxi data with partial information," Elsevier, pp. 37-49, 31 Dec. 2013 *

Also Published As

Publication number Publication date
CN110431544A (en) 2019-11-08
US20210231454A1 (en) 2021-07-29
TW201911142A (en) 2019-03-16
US11536582B2 (en) 2022-12-27
WO2019028763A1 (en) 2019-02-14

Similar Documents

Publication Publication Date Title
CN110073426B (en) System and method for estimating time of arrival
CN110431544B (en) Travel time and distance estimation system and method
US11727345B1 (en) Integrated multi-location scheduling, routing, and task management
CN110998568B (en) Navigation determination system and method for embarkable vehicle seeking passengers
CN112074845A (en) Deep reinforcement learning for optimizing car pooling strategies
WO2020001261A1 (en) Systems and methods for estimating an arrival time of a vehicle
US11507894B2 (en) System and method for ride order dispatching
TW201738811A (en) Systems and methods for recommending an estimated time of arrival
JP6965426B2 (en) Systems and methods for estimating arrival time
US20220187083A1 (en) Generating navigation routes and identifying carpooling options in view of calculated trade-offs between parameters
CN111415024A (en) Arrival time estimation method and estimation device
CN112106021B (en) Method and device for providing vehicle navigation simulation environment
US10989546B2 (en) Method and device for providing vehicle navigation simulation environment
CN111066048B (en) System and method for ride order dispatch
CN113924460B (en) System and method for determining recommendation information for service request
CN112088106B (en) Method and device for providing vehicle navigation simulation environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant