WO2020243937A1 - Systems and methods for map-matching - Google Patents


Info

Publication number
WO2020243937A1
Authority
WO
WIPO (PCT)
Prior art keywords
matching
candidate road
neural network
road segments
network model
Application number
PCT/CN2019/090233
Other languages
English (en)
French (fr)
Inventor
Haibo Li
Original Assignee
Beijing Didi Infinity Technology And Development Co., Ltd.
Application filed by Beijing Didi Infinity Technology And Development Co., Ltd.
Publication of WO2020243937A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions

Definitions

  • the present disclosure generally relates to map-matching technology, and in particular, to methods and systems for matching geographic locations with target road segments in a road network.
  • Map-matching techniques are widely used in location-based services (LBS) for, for example, planning a travel route, establishing an intelligent transport system (ITS), etc.
  • Map-matching techniques may be used to match recorded locations (e.g., geographic coordinates) to a logical model of the real world (such as a road network of an area) .
  • A common algorithm for map-matching is the Hidden Markov Model (HMM).
  • the HMM can be used to determine target road segments in the road network that match the recorded locations, using only the coordinates of the recorded locations as a single input, which may lower the accuracy of the map-matching. For instance, when a vehicle makes a turn or a U-turn, the map-matching result determined using the traditional HMM may be incorrect. Since some fields, such as car-hailing services, require relatively high map-matching accuracy, it is desirable to provide systems and methods for map-matching with higher accuracy.
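For context, a traditional HMM map-matcher picks the segment sequence that maximizes the product of emission probabilities (how well each recorded location fits a segment) and transition probabilities (how plausible each segment-to-segment move is), typically via Viterbi decoding. The following toy sketch is illustrative only; the probability tables are hypothetical, and real implementations work in log space to avoid underflow:

```python
def viterbi_map_match(emission, transition):
    """Toy HMM map-matching by Viterbi decoding.

    emission[t][s]   : probability that observation t was generated on segment s
    transition[s][s2]: probability of moving from segment s to segment s2
    Returns the most likely sequence of segment indices.
    """
    n_obs, n_seg = len(emission), len(emission[0])
    prob = list(emission[0])          # best path probability ending in each segment
    back = []                         # back-pointers for path reconstruction
    for t in range(1, n_obs):
        new_prob, ptr = [], []
        for s in range(n_seg):
            best = max(range(n_seg), key=lambda p: prob[p] * transition[p][s])
            ptr.append(best)
            new_prob.append(prob[best] * transition[best][s] * emission[t][s])
        prob, back = new_prob, back + [ptr]
    # backtrack from the most probable final segment
    path = [max(range(n_seg), key=lambda s: prob[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Because the observation coordinates are the only input, ambiguous fixes near intersections can be decoded onto the wrong segment, which is the weakness the disclosed method addresses by adding movement and road features.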
  • a system for map-matching may include at least one storage medium storing a set of instructions for map-matching and at least one processor in communication with the at least one storage medium.
  • the at least one processor may execute the stored set of instructions for map-matching by obtaining position data provided by a positioning device that is associated with a position sequence.
  • the position sequence may include a plurality of consecutive positions associated with a trajectory and have a last position in the plurality of consecutive positions.
  • the at least one processor may further execute the stored set of instructions for map-matching by obtaining movement data provided by a motion sensor that is associated with the plurality of consecutive positions and determining one or more candidate road segments.
  • the at least one processor may further execute the stored set of instructions for map-matching by using a target neural network model to determine a matching probability of the last position matching with each one of the one or more candidate road segments based on the position and movement data; and designating the candidate road segment with the highest matching probability as a target road segment.
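The designation step above amounts to an argmax over the candidates' matching probabilities. A minimal sketch (the segment identifiers are hypothetical):

```python
def pick_target_segment(matching_probabilities):
    """matching_probabilities: {segment_id: probability} for the candidate
    road segments. The segment with the highest matching probability is
    designated as the target road segment."""
    return max(matching_probabilities, key=matching_probabilities.get)
```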
  • the target neural network model may be obtained by training a neural network model using a plurality of groups of training samples.
  • the positioning device includes a Global Positioning System (GPS) .
  • the motion sensor includes a gyroscope or an accelerometer.
  • a distance between each of the one or more candidate road segments and the last position in the position sequence may be smaller than a threshold.
  • a distance between each of the one or more candidate road segments and the last position in the position sequence may be smaller than about 50 meters.
  • the at least one processor may execute the stored set of instructions for map-matching by determining one or more first features based on the position data, and determining one or more second features based on the movement data.
  • the matching probability of the last position in the position sequence matching with each of the one or more candidate road segments may be determined by inputting the one or more first features and the one or more second features to the target neural network model.
  • the one or more first features may include at least one of a distance between each of the plurality of positions and each of the one or more candidate road segments or a position accuracy of each of the plurality of positions.
  • the distance may be a vertical distance between each of the plurality of positions and each of the one or more candidate road segments.
  • the one or more second features may include at least one of an acceleration, a direction angle, or a velocity.
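As an illustration of the first feature above, the vertical (perpendicular) distance from a position to a candidate segment, together with the distance-threshold search of candidate segments, may be sketched as follows. Planar coordinates in meters are assumed; real GPS coordinates would first be projected:

```python
import math

def point_segment_distance(p, a, b):
    """Perpendicular ("vertical") distance from point p to the road segment
    with endpoints a and b, clamped to the endpoints. Coordinates are
    (x, y) tuples in a planar projection (meters)."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                       # degenerate segment: a single point
        return math.hypot(px - ax, py - ay)
    # projection of p onto the line through a and b, clamped to the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy         # closest point on the segment
    return math.hypot(px - cx, py - cy)

def candidate_segments(last_position, segments, threshold_m=50.0):
    """Keep only the segments within the distance threshold (about 50 meters
    in some embodiments) of the last position in the position sequence."""
    return [s for s in segments
            if point_segment_distance(last_position, s[0], s[1]) < threshold_m]
```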
  • the at least one processor may execute the stored set of instructions for map-matching by determining one or more third features associated with each of the one or more candidate road segments.
  • the target road segment may be determined based on the position data, the movement data, and the one or more third features.
  • the one or more third features may include at least one of a number of lanes associated with each of the one or more candidate road segments, a velocity limit associated with each of the one or more candidate road segments, a rank of each of the one or more candidate road segments, or a condition of each of the one or more candidate road segments.
  • each group of the plurality of groups of training samples may include one or more features associated with positions of a reference position sequence, one or more features associated with a movement of a reference client terminal at the positions of the reference position sequence, and one or more features associated with one or more reference road segments matching with the reference position sequence.
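A hedged sketch of how one group of training samples might be assembled from the three feature sets named above; the flat per-position layout and the 0/1 label encoding are assumptions, as the disclosure does not fix a specific encoding:

```python
def make_training_sample(position_features, movement_features,
                         segment_features, matched):
    """One training sample: per-position features of the reference position
    sequence, movement features of the reference client terminal at those
    positions, the candidate segment's features (broadcast to every
    position), and a 0/1 label saying whether the segment is the true match."""
    rows = [p + m + segment_features
            for p, m in zip(position_features, movement_features)]
    return rows, 1.0 if matched else 0.0
```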
  • the neural network model may be constructed based on at least one of a long short term memory (LSTM) model, a recurrent neural network (RNN) model, or a gated recurrent unit (GRU) model.
  • the LSTM model includes a bi-directional LSTM model having a number of nodes equal to 30.
  • the neural network model may be constructed based on one or more one-dimensional convolution layers.
  • the one-dimensional convolution layer may be configured with a ReLU activation function.
  • the neural network model may be constructed based on a fully connected layer.
  • a number of nodes in the fully connected layer may be equal to 128.
  • the fully connected layer may be configured with a Sigmoid function.
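To make the architecture bullets above concrete, here is an illustrative NumPy forward pass, not the claimed implementation: a one-dimensional convolution with ReLU, a minimal tanh recurrent layer standing in for the 30-node bi-directional LSTM (a real model would use an actual bi-LSTM via a deep-learning framework), and a 128-node fully connected stage ending in a sigmoid so the output reads as a matching probability. All weights are random placeholders and the exact layer ordering is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(x, w, b):
    """'valid' one-dimensional convolution over the time axis.
    x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)"""
    K = w.shape[0]
    T = x.shape[0] - K + 1
    out = np.empty((T, w.shape[2]))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return out

def simple_rnn_last(x, wx, wh, b):
    """Minimal recurrent layer (tanh), returning the last hidden state;
    a stand-in for the bi-directional LSTM described above."""
    h = np.zeros(wh.shape[0])
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ wx + h @ wh + b)
    return h

def score_candidate(features):
    """features: (T, F) per-position feature rows for one candidate road
    segment (e.g., T = 30 consecutive positions); returns a matching
    probability in (0, 1). Weights are untrained placeholders."""
    T, F = features.shape
    w_conv = rng.normal(scale=0.1, size=(3, F, 16))   # kernel 3, 16 filters
    b_conv = np.zeros(16)
    wx = rng.normal(scale=0.1, size=(16, 30))         # 30 recurrent units
    wh = rng.normal(scale=0.1, size=(30, 30))
    b_h = np.zeros(30)
    w_fc = rng.normal(scale=0.1, size=(30, 128))      # 128-node dense layer
    b_fc = np.zeros(128)
    w_out = rng.normal(scale=0.1, size=(128,))
    h = relu(conv1d(features, w_conv, b_conv))        # conv1d + ReLU
    h = simple_rnn_last(h, wx, wh, b_h)               # recurrent summary
    h = relu(h @ w_fc + b_fc)                         # fully connected layer
    return float(sigmoid(h @ w_out))                  # sigmoid probability
```

In a trained model the weights would of course be learned from the groups of training samples described above rather than drawn at random.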
  • the at least one processor may execute the stored set of instructions for map-matching by determining whether the trajectory is changing or has changed based on the target road segment.
  • the position sequence may include 30 consecutive positions.
  • a method for map-matching may be implemented on a computing device having at least one processor and at least one non-transitory storage medium.
  • the method may include obtaining position data provided by a positioning device that is associated with a position sequence.
  • the position sequence may include a plurality of consecutive positions associated with a trajectory and have a last position in the plurality of consecutive positions.
  • the method may further include obtaining movement data provided by a motion sensor that is associated with the plurality of consecutive positions and determining one or more candidate road segments.
  • the method may further include using a target neural network model to determine a matching probability of the last position matching with each one of the one or more candidate road segments based on the position and movement data.
  • the method may further include designating the candidate road segment with the highest matching probability as a target road segment.
  • the target neural network model may be obtained by training a neural network model using a plurality of groups of training samples.
  • a system for map-matching may include an obtaining module, configured to obtain position data provided by a positioning device that is associated with a position sequence.
  • the position sequence may include a plurality of consecutive positions associated with a trajectory and have a last position in the plurality of consecutive positions.
  • the obtaining module may further obtain movement data provided by a motion sensor that is associated with the plurality of consecutive positions.
  • the system may further include a candidate road segment determination module, configured to determine one or more candidate road segments.
  • the system may further include a matching module, configured to use a target neural network model to determine a matching probability of the last position matching with each one of the one or more candidate road segments based on the position and movement data, and designate the candidate road segment with the highest matching probability as a target road segment.
  • the target neural network model may be obtained by training a neural network model using a plurality of groups of training samples.
  • a non-transitory computer readable medium may include a set of instructions for map-matching. When executed by at least one processor, the set of instructions may direct the at least one processor to effectuate a method.
  • the method may include obtaining position data provided by a positioning device that is associated with a position sequence.
  • the position sequence may include a plurality of consecutive positions associated with a trajectory and have a last position in the plurality of consecutive positions.
  • the method may further include obtaining movement data provided by a motion sensor that is associated with the plurality of consecutive positions and determining one or more candidate road segments.
  • the method may further include using a target neural network model to determine a matching probability of the last position matching with each one of the one or more candidate road segments based on the position and movement data.
  • the method may further include designating the candidate road segment with the highest matching probability as a target road segment.
  • the target neural network model may be obtained by training a neural network model using a plurality of groups of training samples.
  • FIG. 1 is a schematic diagram of a system for map-matching according to some embodiments of the present disclosure.
  • FIG. 2 is a schematic diagram illustrating hardware and/or software components of a computing device according to some embodiments of the present disclosure
  • FIG. 3 is a schematic diagram illustrating a terminal device according to some embodiments of the present disclosure.
  • FIG. 4 is a block diagram illustrating a processing engine according to some embodiments of the present disclosure.
  • FIG. 5 is a flowchart illustrating a process for map-matching according to some embodiments of the present disclosure.
  • FIG. 6 is a flowchart illustrating a process for training a neural network model to obtain a target neural network model according to some embodiments of the present disclosure.
  • FIG. 7 is a schematic diagram illustrating a structure of a neural network model according to some embodiments of the present disclosure.
  • the flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts need not be implemented in order; the operations may be implemented in inverted order or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
  • the system or method of the present disclosure may be applied to any other scenario that requires positioning services, such as navigation services, food-delivery services, online car-hailing services, etc.
  • the application scenarios of the system or method of the present disclosure may include a webpage, a plug-in of a browser, a client terminal, a custom system, an internal analysis system, an artificial intelligence robot, or the like, or any combination thereof.
  • the terms “passenger,” “requester,” “requestor,” “service requester,” “service requestor,” and “customer” in the present disclosure are used interchangeably to refer to an individual, an entity or a tool that may request or order a service.
  • the terms “driver,” “provider,” “service provider,” and “supplier” in the present disclosure are used interchangeably to refer to an individual, an entity or a tool that may provide a service or facilitate the providing of the service.
  • the term “user” in the present disclosure refers to an individual, an entity or a tool that may request a service, order a service, provide a service, or facilitate the providing of the service.
  • the terms “requester” and “requester terminal” may be used interchangeably.
  • the terms “provider” and “provider terminal” may be used interchangeably.
  • the terms “request, ” “service, ” “service request, ” and “order” in the present disclosure are used interchangeably to refer to a request that may be initiated by a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, a supplier, or the like, or a combination thereof.
  • the service request may be accepted by any one of a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, or a supplier.
  • the service request may be chargeable or free.
  • Position data and movement data associated with a vehicle may be obtained.
  • the position data and movement data may be associated with a position sequence including a plurality of consecutive positions.
  • One or more candidate road segments that may match a last position in the position sequence may be determined based on the position data.
  • feature data associated with each of the plurality of consecutive positions may be inputted to a target neural network model, so as to determine the matching probability of the last position matching with each of the one or more candidate road segments.
  • the feature data may include one or more first features determined based on the position data, one or more second features determined based on the movement data, one or more third features determined based on road information associated with each of the one or more candidate road segments, or the like, or any combination thereof.
  • the one or more first features may include a distance (e.g., a vertical distance) between each of the plurality of consecutive positions in the position sequence and each of a plurality of candidate road segments, the position accuracy of each of the plurality of consecutive positions, or the like, or any combination thereof.
  • the one or more second features may include one or more of an acceleration, a direction angle, a velocity, a travelled distance of the vehicle, etc.
  • the one or more third features may include a number of lanes, a velocity limit, a rank, a condition, or the like, or any combination thereof.
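For illustration only, the three feature groups above could be flattened into one per-position input row as follows; the field names, units, and the particular subset of features are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

# Hypothetical container for one position's features; fields mirror the
# first (position), second (movement), and third (road) feature groups.
@dataclass
class Features:
    distance_m: float        # first feature: vertical distance to the segment
    accuracy_m: float        # first feature: position accuracy
    accel: float             # second feature: acceleration
    heading_deg: float       # second feature: direction angle
    speed_mps: float         # second feature: velocity
    lanes: int               # third feature: number of lanes
    speed_limit_mps: float   # third feature: velocity limit

def to_input_row(f: Features):
    """Flatten one position's features into a model-input row."""
    return [f.distance_m, f.accuracy_m, f.accel, f.heading_deg,
            f.speed_mps, float(f.lanes), f.speed_limit_mps]
```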
  • the target neural network model may be constructed based on one or more of a long short term memory (LSTM) model, a recurrent neural network (RNN) model, a gated recurrent unit (GRU) model, etc.
  • FIG. 1 is a schematic diagram of a system 100 for map-matching according to some embodiments of the present disclosure.
  • the system 100 may include a server 110, a network 120, a terminal 130, and a storage (also referred to as a database) 140.
  • the server 110 may include a processing engine 112.
  • the server 110 may be configured to process information for map-matching. For example, the server 110 may obtain position data associated with a vehicle from a positioning device. The position data may be associated with a position sequence including a plurality of consecutive positions associated with a trajectory. The position sequence may include a last position in the plurality of consecutive positions. As another example, the server 110 may obtain movement data associated with the vehicle. The movement data may include, but is not limited to, an acceleration, a direction angle, a velocity, or the like, or any combination thereof. In some embodiments, the server 110 may be a single server or a server group. The server group may be centralized or distributed (e.g., the server 110 may be a distributed system). In some embodiments, the server 110 may be local or remote.
  • the server 110 may access information and/or data stored in the terminal 130, and/or the storage 140 via the network 120. As another example, the server 110 may be directly connected to the terminal 130, and/or the storage 140 to access information and/or data.
  • the server 110 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • the server 110 may be implemented on a computing device having one or more components illustrated in FIG. 2 of the present disclosure.
  • the server 110 may include a processing engine 112. At least a part of the functions of the server 110 may be implemented on the processing engine 112. For instance, the processing engine 112 may determine one or more candidate road segments within a distance from the last position in the position sequence. The processing engine 112 may further obtain road information associated with each of the one or more candidate road segments, such as a number of lanes, a velocity limit, a rank, a condition, or the like, or any combination thereof. The processing engine 112 may use a target neural network model to determine a matching probability of the last position with each one of the one or more candidate road segments based on the position data, the movement data, and the road information.
  • the processing engine 112 may further designate a road segment having the highest matching probability as the target road segment.
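The processing flow described for the processing engine 112 can be sketched end-to-end as follows; `candidate_fn` and `score_fn` are hypothetical stand-ins for the distance-threshold candidate search and the target neural network model, which this sketch does not implement:

```python
def map_match(position_seq, movement_seq, road_network, score_fn, candidate_fn):
    """End-to-end flow: find candidate segments near the last position,
    score each candidate with the model, and designate the highest-scoring
    candidate as the target road segment (None if no candidate is found)."""
    last_position = position_seq[-1]
    candidates = candidate_fn(last_position, road_network)
    if not candidates:
        return None
    probabilities = {
        seg: score_fn(position_seq, movement_seq, seg) for seg in candidates
    }
    return max(probabilities, key=probabilities.get)
```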
  • the processing engine 112 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) .
  • the processing engine 112 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
  • the network 120 may facilitate exchange of information and/or data.
  • One or more components in the system 100 (e.g., the server 110, the terminal 130, and/or the storage 140) may exchange information and/or data with one another via the network 120.
  • the server 110 may obtain the position data from a positioning device via the network 120.
  • the network 120 may be any type of wired or wireless network, or combination thereof.
  • the network 120 may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public telephone switched network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, a global system for mobile communications (GSM) network, a code-division multiple access (CDMA) network, a time-division multiple access (TDMA) network, a general packet radio service (GPRS) network, an enhanced data rate for GSM evolution (EDGE) network, a wideband code division multiple access (WCDMA) network, a high speed downlink packet access (HSDPA) network, a long term evolution (LTE) network, a user datagram protocol (UDP) network, or the like, or any combination thereof.
  • the network 120 may include one or more network access points.
  • the network 120 may include wired or wireless network access points such as base stations and/or internet exchange points 120-1, 120-2, ..., through which one or more components of the system 100 may be connected to the network 120 to exchange data and/or information.
  • the terminal 130 may be associated with a user.
  • the terminal 130 may perform one or more functions of the processing engine 112 described earlier, such as the determination of the one or more candidate road segments, the determination of the matching probability of the last position matching with each one of the one or more candidate road segments, the determination of a target road segment having the highest matching probability, or the like, or any combination thereof.
  • the terminal 130 may obtain the target road segment and display a target position on the target road segment to show a current position of the vehicle on a map.
  • a positioning device for determining the position data may be integrated into the terminal 130.
  • a motion sensor e.g., a gyroscope or an accelerometer
  • the terminal 130 may include a mobile device 130-1, a tablet computer 130-2, a built-in device (also referred to as an on-board device) 130-3, a tabletop computer 130-4, or the like, or any combination thereof.
  • the mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
  • the smart home device may include a smart lighting device, a control device of an intelligent electrical apparatus, a smart monitoring device, a smart television, a smart video camera, an interphone, or the like, or any combination thereof.
  • the wearable device may include a smart bracelet, a smart footgear, a smart glass, a smart helmet, a smart watch, a smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof.
  • the smart mobile device may include a smartphone, a personal digital assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, or the like, or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include a virtual reality helmet, a virtual reality glass, a virtual reality patch, an augmented reality helmet, an augmented reality glass, an augmented reality patch, or the like, or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include a Google Glass, an Oculus Rift, a Hololens, a Gear VR, etc.
  • the terminal 130 may be a wireless device with positioning technology for locating the position of the user and/or the terminal 130.
  • the terminal 130 may send and/or receive information for map-matching to the processing engine 112 via a user interface.
  • the user interface may be in the form of an application implemented on the terminal 130.
  • the user interface implemented on the terminal 130 may be configured to facilitate communication between a user and the processing engine 112.
  • a user may input a request for map-matching via the user interface implemented on the terminal 130.
  • the terminal 130 may send the request to the processing engine 112 as described elsewhere in the present disclosure (e.g., FIG. 5 and the descriptions thereof).
  • the user may set information and/or data (e.g., a signal) relating to map-matching via the user interface, such as parameters of a target neural network model, thresholds for candidate road segments determination, etc.
  • the user interface may facilitate the presentation or display of information and/or data (e.g., a signal) relating to map-matching received from the processing engine 112.
  • the information and/or data may include a result generated by the processing engine 112 for map-matching.
  • the result may include one or more images (e.g., two-dimensional images, three-dimensional images, etc.), one or more words, one or more digits, voices, etc.
  • the information and/or data may be further configured to cause the terminal 130 to display the result to the user.
  • the storage 140 may store data and/or instructions.
  • the storage 140 may store the position data provided by the positioning device and/or the movement data provided by the motion sensor.
  • the storage 140 may store data and/or instructions that the server 110 may execute or use to perform methods described in the present disclosure.
  • the storage 140 may store the target neural network model used to determine a matching probability of the last position with each one of the one or more candidate road segments.
  • the storage 140 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
  • Mass storage may include, for example, a magnetic disk, an optical disk, a solid-state drive, etc.
  • Removable storage may include, for example, a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • Volatile read-and-write memory may include, for example, a random access memory (RAM).
  • RAM may include, for example, a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc.
  • ROM may include, for example, a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc.
  • the storage 140 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • the storage 140 may be connected to the network 120 to communicate with one or more components in the system 100 (e.g., the server 110, the terminal 130, etc. ) .
  • One or more components in the system 100 may access the data or instructions stored in the storage 140 via the network 120.
  • the storage 140 may be directly connected to or communicate with one or more components in the system 100 (e.g., the server 110, the terminal 130, etc. ) .
  • the storage 140 may be part of the server 110.
  • one or more components in the system 100 may have a permission to access the storage 140.
  • one or more components in the system 100 may read and/or modify information related to a user when one or more conditions are met.
  • the server 110 may obtain target data from the storage 140, including sample keywords, popularity information, preference information associated with the user of the terminal 130, statistical data related to at least one travel means (also referred to as travel means information) , or the like, or a combination thereof.
  • an element of the system 100 may perform its function through electrical signals and/or electromagnetic signals.
  • the server 110 may operate logic circuits in its processor to perform such a task.
  • the instruction and/or operation may be conducted via electrical signals.
  • when the processor retrieves or saves data from a storage medium, it may transmit electrical signals to a read/write device of the storage medium, which may read or write structured data in the storage medium.
  • the structured data may be transmitted to the processor in the form of electrical signals via a bus of the electronic device.
  • an electrical signal may refer to one electrical signal, a series of electrical signals, and/or a plurality of discrete electrical signals.
  • FIG. 2 is a schematic diagram illustrating hardware and/or software components of a computing device 200 according to some embodiments of the present disclosure.
  • the server 110 and/or the terminal 130 may be implemented on the computing device 200 shown in FIG. 2.
  • the processing engine 112 may be implemented on the computing device 200 and configured to perform functions of the processing engine 112 disclosed in this disclosure.
  • the computing device 200 may be used to implement any component of the system 100 as described herein.
  • the processing engine 112 may be implemented on the computing device 200, via its hardware, software program, firmware, or a combination thereof.
  • the computer functions as described herein may be implemented in a distributed fashion on a number of similar platforms to distribute the processing load.
  • the computing device 200 may include COM ports 250 connected to and from a network connected thereto to facilitate data communications.
  • the computing device 200 may also include a processor (e.g., the processor 220) , in the form of one or more processors (e.g., logic circuits) , for executing program instructions.
  • the processor 220 may include interface circuits and processing circuits therein.
  • the interface circuits may be configured to receive electronic signals from a bus 210, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process.
  • the processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus 210.
  • the computing device may further include program storage and data storage of different forms including, for example, a disk 270, and a read-only memory (ROM) 230, or a random-access memory (RAM) 240, for various data files to be processed and/or transmitted by the computing device.
  • the computing device may also include program instructions stored in the ROM 230, RAM 240, and/or another type of non-transitory storage medium to be executed by the processor 220.
  • the methods and/or processes of the present disclosure may be implemented as the program instructions.
  • the computing device 200 may also include an I/O component 260, supporting input/output between the computer and other components.
  • the computing device 200 may also receive programming and data via network communications.
  • multiple processors 220 are also contemplated; thus, operations and/or method steps performed by one processor 220 as described in the present disclosure may also be jointly or separately performed by the multiple processors.
  • if the processor 220 of the computing device 200 executes both step A and step B, it should be understood that step A and step B may also be performed by two different processors 220 jointly or separately in the computing device 200 (e.g., a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B) .
  • FIG. 3 is a schematic diagram illustrating hardware and/or software components of a terminal device 300 according to some embodiments of the present disclosure.
  • the terminal 130 may be implemented on the terminal device 300 shown in FIG. 3.
  • the terminal device 300 may be a mobile device, such as a mobile phone, a tablet computer, a laptop computer, a built-in device on a vehicle, etc.
  • the terminal device 300 may include a communication platform 310, a display 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, a mobile operating system (OS) 370, and a storage 390.
  • the positioning device and/or the motion sensor described earlier may be integrated into the terminal device 300 (not shown in FIG. 3) .
  • any other suitable component including but not limited to a system bus or a controller (not shown) , may also be included in the terminal device 300.
  • the mobile operating system 370 may include, e.g., iOS™, Android™, Windows Phone™, etc.
  • the applications 380 may include a browser or any other suitable mobile apps for receiving and rendering information relating to on-demand services or other information from the on-demand service system 100.
  • User interactions with the information stream may be achieved via the I/O 350 and provided to the processing engine 112 and/or other components of the on-demand service system 100 via the network 120.
  • FIG. 4 is a block diagram illustrating a processing engine 112 according to some embodiments of the present disclosure.
  • the processing engine 112 may be in communication with a storage medium (e.g., the storage 140 of the system 100, and/or the storage 390 of the terminal device 300) , and may execute instructions stored in the storage medium.
  • the processing engine 112 may include an obtaining module 410, a candidate road segment determination module 420, a matching module 430, and a storage module 440.
  • the processing engine 112 may be integrated into the server 110.
  • the obtaining module 410 may obtain data related to map-matching.
  • the obtaining module 410 may obtain position data provided by a positioning device that is associated with a position sequence having a last position.
  • the position sequence may include a plurality of consecutive positions associated with a trajectory.
  • the last position in the plurality of consecutive positions in the position sequence may correspond to a current position of the terminal or a position that needs to be corrected.
  • the position data provided by the positioning device may include the plurality of consecutive positions in the trajectory, a position accuracy of each of the plurality of consecutive positions, etc.
  • the obtaining module 410 may obtain movement data provided by a motion sensor that is associated with the plurality of consecutive positions. The movement data may be associated with each of the plurality of consecutive positions in the position sequence.
  • the movement data associated with a position may include an acceleration of the terminal at the position, a direction angle of the terminal at the position, a velocity of the terminal at the position, or the like, or any combination thereof.
  • the obtaining module 410 may obtain one or more characteristics associated with one or more candidate road segments.
  • the road information associated with a candidate road segment may include a number of lanes associated with the candidate road segment, a velocity limit (e.g., a maximum velocity limit and/or a minimum velocity limit) associated with the candidate road segment, a rank of the candidate road segment, a condition of the candidate road segment, a length value of the candidate road segment, a starting point of the candidate road segment, an ending point of the candidate road segment, or the like, or any combination thereof.
  • the obtaining module 410 may obtain a plurality of groups of training samples and a neural network model. The neural network model may be trained using the plurality of groups of training samples to obtain a target neural network model.
  • the candidate road segment determination module 420 may determine one or more candidate road segments based on the data related to map-matching. The last position may be matched with one of the one or more determined candidate road segments. Each road segment may be defined by one or more characteristics (also referred to as road information) stored in a storage device (e.g., the storage 140) . In some embodiments, the candidate road segment determination module 420 may determine the one or more candidate road segments based on the position data associated with the last position. Specifically, the candidate road segment determination module 420 may determine the one or more candidate road segments within a distance from the last position. The distance may be a vertical distance between the last position and each of a plurality of road segments around the last position.
  • the candidate road segment determination module 420 may further compare the vertical distance with a distance threshold. In response to a determination that the vertical distance between the last position and a road segment around the last position is less than or equal to the distance threshold (e.g., 50 meters) , the candidate road segment determination module 420 may determine the road segment around the last position as a candidate road segment.
  • the distance threshold may be set and/or adjusted by a user or according to a default setting of the system 100. For example, the threshold may be predefined as less than or equal to 50 meters, such as 10 meters, 20 meters, 30 meters, 40 meters, 50 meters, etc. In some embodiments, the distance threshold may be adjusted based on the velocity of the vehicle (or the person walking on foot) , the position accuracy of the last position, the number of the determined candidate road segments, or the like, or any combination thereof.
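The candidate-selection step described above (project the last position onto nearby road segments and keep those within the distance threshold) can be sketched as follows. This is a minimal planar-geometry sketch; the segment representation, function names, and coordinate handling are illustrative assumptions, not taken from the disclosure:

```python
import math

def point_segment_distance(p, a, b):
    """Shortest (vertical) distance from point p to segment a-b,
    using planar coordinates for simplicity."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping the projection to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def candidate_segments(last_position, segments, threshold=50.0):
    """Keep every segment whose distance to last_position is within
    the distance threshold (e.g., 50 meters)."""
    return [seg_id for seg_id, (a, b) in segments.items()
            if point_segment_distance(last_position, a, b) <= threshold]
```

In a production system the projection would use geodetic coordinates and a spatial index over the road network rather than a linear scan.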
  • the matching module 430 may determine a target road segment that matches with the last position. Specifically, the matching module 430 may use a target neural network model to determine a matching probability of the last position matching with each one of the one or more candidate road segments based on the position and movement data. The matching module 430 may further designate the road segment with the highest matching probability as the target road segment. In some embodiments, the matching module 430 may determine the matching probability of the last position in the position sequence matching with each of the one or more candidate road segments by inputting the movement data, the position data, the road information, etc., to the target neural network model.
  • the matching module 430 may input feature data associated with the position sequence and/or feature data associated with each of the one or more candidate road segments to the target neural network model to obtain the matching probability of the last position matching with each one of the one or more candidate road segments.
  • the feature data associated with the position sequence may include one or more first features relating to the position data associated with the plurality of consecutive positions in the position sequence, and one or more second features associated with movements of the terminal at each of the plurality of consecutive positions in the position sequence.
  • the feature data associated with the one or more candidate road segments may include one or more third features associated with each of the plurality of candidate road segments.
  • the one or more first features may include a distance (e.g., a vertical distance) between each of the plurality of consecutive positions (e.g., the last position) in the position sequence and each of a plurality of candidate road segments, the position accuracy of each of the plurality of consecutive positions, a distance between the last position and other positions in the position sequence, or the like, or any combination thereof.
  • the one or more second features associated with each of the plurality of consecutive positions in the position sequence may include a difference between direction angles of the terminal at the last position and a previous position, a difference between accelerations of the terminal at the last position and a previous position, a difference between velocities of the terminal at the last position and a previous position, or the like, or any combination thereof.
  • the one or more third features associated with each of the plurality of candidate road segments may include the number of lanes associated with each of the plurality of candidate road segments, the velocity limit associated with each of the plurality of candidate road segments, the rank of each of the plurality of candidate road segments, the condition of each of the plurality of candidate road segments, or the like, or any combination thereof.
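The second features listed above are deltas of the movement data between the last position and a previous position. A minimal sketch, assuming each position's movement data is stored as a dictionary (the key names are assumptions):

```python
def second_features(last, prev):
    """Movement-data deltas between the last position and a previous
    position: direction-angle, acceleration, and velocity differences."""
    return [
        last["direction_angle"] - prev["direction_angle"],
        last["acceleration"] - prev["acceleration"],
        last["velocity"] - prev["velocity"],
    ]
```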
  • the storage module 440 may store data related to map-matching.
  • the storage module 440 may store one or more position sequences.
  • the storage module 440 may store the position data associated with each of the plurality of positions in the position sequence, the movement data associated with each of the plurality of positions in the position sequence, and the road information associated with a plurality of road segments (e.g., one or more candidate road segments) .
  • the storage module 440 may store an untrained neural network model and a plurality of training samples for training the untrained neural network model.
  • the storage module 440 may store the target neural network model.
  • the storage module 440 may store the target road segment that matches with the last position.
  • the modules in FIG. 4 may be connected to or communicate with each other via a wired connection or a wireless connection.
  • the wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or a combination thereof.
  • the wireless connection may include a Local Area Network (LAN) , a Wide Area Network (WAN) , a Bluetooth, a ZigBee, a Near Field Communication (NFC) , or the like, or a combination thereof.
  • two or more of the modules may be combined into a single module, and any one of the modules may be divided into two or more units.
  • FIG. 5 is a flowchart illustrating a process 500 for map-matching according to some embodiments of the present disclosure.
  • the process 500 shown in FIG. 5 may be applied to location-based services, such as navigation services, food-delivery services, online car-hailing services, etc.
  • the process 500 may be executed by the system 100.
  • the process 500 may be implemented as a set of instructions (e.g., an application) stored in the storage (e.g., ROM 230 or RAM 240 of the computing device 200) .
  • the processing engine 112 and/or modules in FIG. 4 may execute the set of instructions, and when executing the instructions, the processing engine 112 and/or the modules may be configured to perform the process 500.
  • the operations of the process 500 presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 500 are illustrated in FIG. 5 and described below is not intended to be limiting.
  • the processing engine 112 may obtain position data provided by a positioning device that is associated with a position sequence having a last position.
  • the position sequence may include a plurality of consecutive positions associated with a trajectory.
  • the trajectory may include many consecutive positions that a terminal (e.g., the terminal 130) has passed through with time.
  • the terminal may include a vehicle, an on-board device of the vehicle (e.g., a car, a bus, a motorbike) , a mobile device carried by a user, or the like.
  • the user may include a driver, a passenger, a person walking on foot, etc.
  • the plurality of consecutive positions in the position sequence may be ranked in a chronological order based on a plurality of time points when the plurality of consecutive positions are determined by the positioning device.
  • the last position may refer to a position in the consecutive positions that is determined at a latest time point in the plurality of time points.
  • the last position in the plurality of consecutive positions in the position sequence may correspond to a current position of the terminal.
  • the last position in the plurality of consecutive positions may be a position that needs to be corrected.
  • the processing engine 112 may determine the position sequence associated with the trajectory by selecting a preset number of the plurality of consecutive positions including the current position.
  • the preset number of the plurality of consecutive positions in the position sequence may be between 25 and 35 (e.g., 30) , between 35 and 45 (e.g., 40) , between 45 and 55 (e.g., 50) , between 55 and 65 (e.g., 60) , etc.
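Selecting a preset number of consecutive positions ending at the current position amounts to taking a trailing window over the trajectory. A minimal sketch (the list representation of the trajectory is an assumption):

```python
def position_sequence(trajectory, preset_number=30):
    """Select the most recent `preset_number` consecutive positions of the
    trajectory, ending at the current (last) position."""
    return trajectory[-preset_number:]
```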
  • the positioning device may include one or more positioning chips or circuits.
  • the positioning device may include one or more processors as described elsewhere in the present disclosure.
  • the positioning device may be integrated into the terminal associated with a user, such as a vehicle, a mobile terminal, etc.
  • the positioning device may include an on-board device of the vehicle, a positioning chip installed in a mobile terminal associated with the user (e.g., the terminal device 300 shown in FIG. 3) .
  • the plurality of consecutive positions may be determined by the positioning device using a positioning technique.
  • the user may include, but is not limited to, a driver or a passenger in/on the vehicle, a user walking on foot, or the like.
  • the positioning technology may include, but is not limited to, a global positioning system (GPS) , a global navigation satellite system (GLONASS) , a Galileo positioning system, a quasi-zenith satellite system (QZSS) , a Beidou navigation satellite system, a wireless fidelity (WiFi) positioning technology, or the like, or any combination thereof.
  • the position data provided by the positioning device may include the plurality of consecutive positions in the trajectory, a position accuracy of each of the plurality of consecutive positions, etc.
  • Each of the plurality of consecutive positions may be represented by geographic coordinates.
  • the geographic coordinates may be denoted by a coordinate system.
  • the coordinate system may include a latitude and longitude coordinate system, an earth-centered, earth-fixed (ECEF) coordinate system, a local east-north-up (ENU) coordinate system, etc.
  • position accuracy refers to a matching degree between position information (i.e., estimated position) of a subject (e.g., the terminal) determined by the positioning device and an actual position of the subject.
  • the position accuracy may be denoted by a variation range of a distance between the position (i.e., estimated position) determined by the positioning device at the time point and the actual position at the time point.
  • the position accuracy of each of the consecutive positions may be associated with a strength of a positioning signal, such as a GPS signal. For instance, when the positioning signal is relatively strong, the position accuracy of a corresponding position determined by the positioning device may be relatively high (e.g., 5 meters, 10 meters, 20 meters) . When the positioning signal is relatively weak, the position accuracy of the corresponding position determined by the positioning device may be relatively low (e.g., 100 meters, 200 meters) . The position accuracy of each of the consecutive positions may be obtained from the positioning device.
  • the positioning device may determine the position data at predetermined time intervals (e.g., 30 seconds, 45 seconds) and transmit the position data to the processing engine 112 via a network (e.g., the network 120 shown in FIG. 1) .
  • other types of data related to map-matching may also be obtained by the processing engine 112, such as movement data determined by a motion sensor as described in connection with operation 504.
  • the position data and the movement data related to map-matching may be obtained simultaneously.
  • the position data and the movement data related to map-matching may be obtained in any order. For example, the movement data related to map-matching may be obtained before the position data related to map-matching.
  • the processing engine 112 may obtain movement data provided by a motion sensor that is associated with the plurality of consecutive positions.
  • the movement data may be associated with each of the plurality of consecutive positions in the position sequence.
  • the movement data associated with a position may include an acceleration of the terminal at the position, a direction angle of the terminal at the position, a velocity of the terminal at the position, or the like, or any combination thereof.
  • the acceleration may include a linear acceleration and/or an angular acceleration.
  • the velocity may include a linear velocity and/or an angular velocity.
  • the direction angle may be used to indicate a direction the terminal (e.g., the vehicle) or the user is heading toward.
  • the direction angle may refer to an angle between a direction the terminal is heading toward and a road that the terminal is moving on.
  • the direction angle may be a steering angle, an azimuth, etc.
  • when the vehicle or the person is heading along the specific road that the vehicle or person is moving on, the angle between the heading direction and the specific road may be zero.
  • when the vehicle is heading perpendicular to the specific road, the angle between the direction the vehicle is heading toward and the specific road may be 90 degrees (or 270 degrees) .
  • when the vehicle or the person is heading opposite to the direction of the specific road, the angle between the heading direction and the specific road may be 180 degrees.
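The direction angle relative to a road segment can be computed from the terminal's heading and the segment's bearing. A minimal planar sketch (the coordinate convention and function names are assumptions):

```python
import math

def segment_bearing(start, end):
    """Bearing of a road segment in degrees, normalized to [0, 360)."""
    return math.degrees(math.atan2(end[1] - start[1], end[0] - start[0])) % 360.0

def direction_angle(heading_deg, seg_start, seg_end):
    """Angle between the terminal's heading and the road segment's
    direction, normalized to [0, 360)."""
    return (heading_deg - segment_bearing(seg_start, seg_end)) % 360.0
```

With this convention, 0 degrees means heading along the segment, 90 (or 270) degrees means heading perpendicular to it, and 180 degrees means heading opposite to it.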
  • the motion sensor that detects the movement data may include, but is not limited to, a gyroscope, an accelerometer, a velocity sensor, or the like, or any combination thereof.
  • the motion sensor may be integrated into the terminal (e.g., terminal 130) .
  • the processing engine 112 may determine one or more candidate road segments associated with the last position.
  • the one or more candidate road segments may be determined based on the data related to map-matching obtained by the processing engine 112, such as the position data obtained in operation 502.
  • a road segment refers to at least one portion of a road, a street, an avenue, or the like.
  • a road segment may be defined by an identity (ID) , a length value, a starting point, an ending point, and/or other information.
  • Each road segment may be defined by one or more characteristics (also referred to as road information) stored in a storage device (e.g., the storage 140) .
  • the road information associated with a candidate road segment may include a number of lanes associated with the candidate road segment, a velocity limit (e.g., a maximum velocity limit and/or a minimum velocity limit) associated with the candidate road segment, a rank of the candidate road segment, a condition of the candidate road segment, a length value of the candidate road segment, a starting point of the candidate road segment, an ending point of the candidate road segment, or the like, or any combination thereof.
  • the rank of the candidate road segment refers to the type of the candidate road segment, including, but not limited to, a major highway, a minor highway, a national highway, a provincial highway, a ramp, a primary street, a local road, a service road, a one-way road, an off-road, a parking lot road, a private road, a pedestrian boardwalk, or the like.
  • the condition of the candidate road segment may include whether the candidate road segment is closed (temporarily or permanently) , whether the pavement and/or auxiliary facilities (e.g., street lamps) of the candidate road segment are damaged, whether the candidate road segment is congested, preference data related to the candidate road segment, or the like, or any combination thereof.
  • the preference data associated with the candidate road segment may relate to historical data of a plurality of users (e.g., drivers) or a specific user (e.g., the driver of the vehicle) .
  • the historical data of the plurality of users may include a total frequency of the plurality of users passing through the candidate road segment (also referred to as “traffic flow” ) during a time period.
  • the historical data of a specific user may include a specific frequency that the specific user passes through the candidate road segment during a time period.
  • the processing engine 112 may determine the one or more candidate road segments based on the position data associated with the last position. Further, the processing engine 112 may determine the one or more candidate road segments within a distance from the last position. The distance may be a vertical distance between the last position and each of a plurality of road segments around the last position. For instance, the processing engine 112 may project the last position in the consecutive positions to a road network, and determine the vertical distance between the last position and each of a plurality of road segments around the last position. The processing engine 112 may further compare the vertical distance with a distance threshold.
  • in response to a determination that the vertical distance is less than or equal to the distance threshold, the processing engine 112 may determine the road segment around the last position as a candidate road segment.
  • the distance threshold may be set and/or adjusted by a user or according to a default setting of the system 100.
  • the threshold may be predefined as less than or equal to 50 meters, such as 10 meters, 20 meters, 30 meters, 40 meters, 50 meters, etc.
  • the distance threshold may be adjusted based on the velocity of the vehicle (or the person walking on foot) , the position accuracy of the last position, the number of the determined candidate road segments, or the like, or any combination thereof.
  • for example, when the velocity of the vehicle is relatively high (e.g., 60 km/h) , the distance threshold may be set to a higher value greater than 50 meters, e.g., 75 meters or 100 meters. When the position accuracy of the last position is relatively low, the distance threshold may be set to a higher value (e.g., 100 meters, 150 meters) . As another example, when the number of the determined candidate road segments is relatively large (e.g., 5, 6) , the distance threshold may be decreased (e.g., to 20 meters or 30 meters) .
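The threshold adjustments above can be sketched as a simple heuristic. The specific cutoffs and the function signature are illustrative assumptions drawn from the example values in the text, not a definitive policy:

```python
def adjust_threshold(base=50.0, velocity_kmh=0.0, accuracy_m=10.0, n_candidates=0):
    """Adjust the candidate-search distance threshold (meters) based on
    vehicle velocity, position accuracy, and candidate count."""
    threshold = base
    if velocity_kmh >= 60:    # fast vehicle: widen the search radius
        threshold = max(threshold, 75.0)
    if accuracy_m >= 100:     # poor positioning fix: widen further
        threshold = max(threshold, 100.0)
    if n_candidates >= 5:     # too many candidates: tighten the radius
        threshold = min(threshold, 30.0)
    return threshold
```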
  • the last position may be matched with one of the one or more determined candidate road segments.
  • the matching of the last position and a candidate road segment may mean that the last position is geographically located on the candidate road segment.
  • the processing engine 112 may further determine at which one of the one or more determined candidate road segments the last position may be geographically located, according to operations 508 and 510.
  • the processing engine 112 may use a target neural network model to determine a matching probability of the last position matching with each one of the one or more candidate road segments based on the position data and the movement data.
  • the target neural network model may be configured to generate a probability of a specific position matching with a specific road segment by inputting data associated with the specific position (e.g., data associated with the last position in the position sequence) and/or a specific sequence including the specific position (e.g., the position data and the movement data associated with the plurality of consecutive positions in the position sequence as described in 502) .
  • the processing engine 112 may input the movement data, the position data, and the information/data associated with the one or more candidate road segments as described into the target neural network model to obtain the matching probability of the last position matching with each one of the one or more candidate road segments.
  • the processing engine 112 may input the coordinates of each of the plurality of consecutive positions, the position accuracy of each of the plurality of consecutive positions, the acceleration of the terminal at each of the plurality of consecutive positions, the velocity of the terminal at each of the plurality of consecutive positions, the direction angle of the terminal at each of the plurality of consecutive positions, the starting point of the specific candidate road segment, the ending point of the specific candidate road segment, the length value of the specific candidate road segment, the velocity limit of the specific candidate road segment, etc., into the target neural network model.
  • the target neural network model may generate and output a matching probability of the last position matching with the specific candidate road segment.
  • the processing engine 112 may input feature data associated with the position sequence in operation 502 and/or feature data associated with each of the one or more candidate road segments in operation 506 to the target neural network model to obtain the matching probability of the last position matching with each one of the one or more candidate road segments.
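The inference step (score each candidate segment, normalize the scores into matching probabilities, and select the best match) can be sketched as follows. The `score_fn` argument is a stand-in for the target neural network model, and the softmax normalization is an assumption; the disclosure does not specify the model's output layer:

```python
import math

def match_last_position(score_fn, sequence_features, candidate_features):
    """Score each candidate road segment, turn the scores into matching
    probabilities via a softmax, and return the highest-probability
    (target) segment together with all probabilities."""
    scores = {seg_id: score_fn(sequence_features, feats)
              for seg_id, feats in candidate_features.items()}
    z = sum(math.exp(s) for s in scores.values())
    probs = {seg_id: math.exp(s) / z for seg_id, s in scores.items()}
    target = max(probs, key=probs.get)
    return target, probs
```

For example, with a toy scoring function that prefers nearby segments, the segment closest to the last position gets the highest matching probability.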
  • the feature data associated with the position sequence in operation 502 may include one or more first features relating to the position data associated with the plurality of consecutive positions in the position sequence, and one or more second features associated with movements of the terminal at each of the plurality of consecutive positions in the position sequence.
  • the feature data associated with the one or more candidate road segments in operation 506 may include one or more third features associated with each of the plurality of candidate road segments.
  • the one or more first features may include a distance (e.g., a vertical distance) between each of the plurality of consecutive positions (e.g., the last position) in the position sequence and each of a plurality of candidate road segments, the position accuracy of each of the plurality of consecutive positions, a distance between the last position and other positions in the position sequence, or the like, or any combination thereof.
  • the one or more second features associated with each of the plurality of consecutive positions in the position sequence may include a difference between direction angles of the terminal at the last position and a previous position, a difference between accelerations of the terminal at the last position and a previous position, a difference between velocities of the terminal at the last position and a previous position, or the like, or any combination thereof.
  • the one or more third features associated with each of the plurality of candidate road segments may include the number of lanes associated with each of the plurality of candidate road segments, the velocity limit associated with each of the plurality of candidate road segments, the rank of each of the plurality of candidate road segments, the condition of each of the plurality of candidate road segments, or the like, or any combination thereof.
  • the processing engine 112 may input the position data, the movement data, the feature data associated with the position sequence in operation 502, and/or the feature data associated with each of the one or more candidate road segments in operation 506 to the target neural network model to obtain the matching probability of the last position matching with each one of the one or more candidate road segments.
  • the processing engine 112 may input the coordinates associated with each of the plurality of consecutive positions, the distance (e.g., a vertical distance) between each of the plurality of consecutive positions (e.g., the last position) in the position sequence and each of a plurality of candidate road segments, the position accuracy of each of the plurality of consecutive positions, distances between the last position and one or more other positions in the position sequence, the acceleration of the terminal at each of the plurality of consecutive positions, the direction angle of the terminal at each of the plurality of consecutive positions, the velocity of the terminal at each of the plurality of consecutive positions, a difference between direction angles of the terminal at the last position and a previous position, a difference between accelerations of the terminal at the last position and a previous position, a difference between velocities of the terminal at the last position and a previous position, the number of lanes associated with each of the plurality of candidate road segments, the velocity limit associated with each of the plurality of candidate road segments, the rank of each of the plurality of candidate road segments, the condition of
  • the one or more first features and the one or more second features associated with a specific position may be represented as a feature vector corresponding to the specific position.
  • the one or more third features associated with a specific candidate road segment may be represented as a feature vector associated with the specific candidate road segment.
  • the one or more feature vectors associated with the specific position and the one or more feature vectors associated with the specific candidate road segment may be determined and/or fused by the processing engine 112 based on the one or more first features, the one or more second features, the one or more third features, or the like.
  • the one or more feature vectors associated with the specific position and the one or more feature vectors associated with the specific candidate road segment may be fused by the target neural network model (e.g., a kernel of the target neural network model) via inputting the one or more first features, one or more second features, and/or one or more third features.
  • a fused feature vector associated with a specific position and a candidate road segment may include the one or more first features and the one or more second features associated with the specific position, and the one or more third features associated with the specific candidate road segment.
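For illustration only, the fusion described above can be sketched as a simple concatenation of the first, second, and third features for one (position, candidate segment) pair; the helper name and sample feature values below are assumptions, not part of the disclosure:

```python
# Illustrative sketch (not from the disclosure): fusing per-position features
# with per-segment features into a single fused feature vector.

def fuse_features(first_features, second_features, third_features):
    """Concatenate position-derived, movement-derived, and road-derived
    features into one fused feature vector for a (position, segment) pair."""
    return list(first_features) + list(second_features) + list(third_features)

# first features: e.g., coordinates and position accuracy of a position
first = [116.40, 39.90, 5.0]
# second features: e.g., velocity, acceleration, direction angle at the position
second = [12.3, 0.4, 87.0]
# third features: e.g., number of lanes and velocity limit of a candidate segment
third = [3, 60.0]

fused = fuse_features(first, second, third)
print(len(fused))  # 8
```

One such fused vector would be produced per consecutive position, yielding a sequence of vectors for the neural network model to consume.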
  • the processing engine 112 may obtain the target neural network model from a storage device (e.g., the storage 140 shown in FIG. 1).
  • the target neural network model may be a trained neural network model.
  • a plurality of groups of training samples may be used to train a neural network model.
  • Each group of the plurality of groups of training samples may include one or more reference features associated with reference positions of a reference position sequence, one or more reference features associated with a movement of a reference terminal (e.g., positioning device) at the reference positions of the reference position sequence, and one or more reference features associated with one or more reference road segments matching with the reference positions of the reference position sequence.
  • Each group of the plurality of groups of training samples may further include a label configured to indicate whether a reference position (e.g., a last reference position) in the reference position sequence matches with a reference road segment. Details regarding training the neural network model may be found elsewhere in the present disclosure, for example, in FIG. 6 and the descriptions thereof.
  • the neural network model may be constructed based on at least one of a long short term memory (LSTM) model, a recurrent neural network (RNN) model, a gated recurrent unit (GRU) model, or the like, or a combination thereof.
  • the neural network model may further include one or more one-dimension convolution layers, a full connection (FC) layer, and one or more activation layers. Details regarding the structure of the neural network model may be found elsewhere in the present disclosure, for example, in FIG. 7 and the descriptions thereof.
  • the processing engine 112 may designate the road segment with the highest matching probability as a target road segment.
  • the last position of the position sequence may be determined to match with the target road segment.
  • the last position may be corrected based on a matching result between the last position and the target road segment. For example, the processing engine 112 may project the last position onto the road network to obtain a projected position. If the projected position corresponding to the last position is located exactly on the target road segment, the processing engine 112 may determine that the last position does not need to be corrected. If the projected position corresponding to the last position is not located on the target road segment, the processing engine 112 may correct the last position to generate a target position located exactly on the target road segment.
  • the processing engine 112 may determine a position on the target road segment that is closest to the last position as the target position. For instance, the processing engine 112 may determine a straight line passing through the last position that is perpendicular to the target road segment. The straight line may intersect with the target road segment at an intersection point. The intersection point may be determined as the target position. In some embodiments, the processing engine 112 may further determine target coordinates of the target position and designate the target coordinates as corrected coordinates of the last position. The target position may be regarded as a current position of the vehicle and/or a corrected position corresponding to the time point when the last position is determined by the positioning device. In some embodiments, the last position and/or the target position may be displayed on a digital map implemented on a terminal (e.g., the terminal 130 shown in FIG. 1).
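A minimal sketch of the correction step above, assuming the road segment can be modeled as a 2-D line segment in a local coordinate frame; the function name and coordinates are illustrative:

```python
# Hypothetical sketch of the projection step: find the point on the target
# road segment (a 2-D line segment) closest to the last position, clamping
# to the segment's endpoints when the perpendicular foot falls outside it.

def project_onto_segment(p, a, b):
    """Return the point on segment a-b closest to point p (all 2-D tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:          # degenerate segment
        return a
    # parameter t of the perpendicular foot, clamped to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return (ax + t * dx, ay + t * dy)

last_position = (2.0, 1.0)
segment = ((0.0, 0.0), (4.0, 0.0))
target_position = project_onto_segment(last_position, *segment)
print(target_position)  # (2.0, 0.0)
```

The clamping handles the case where the perpendicular line would intersect the segment's extension rather than the segment itself.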
  • the highest matching probability may correspond to more than one road segment in the one or more candidate road segments.
  • the processing engine 112 may further determine the target road segment based on the position data, the movement data, and/or the road information associated with each of the more than one candidate road segment.
  • the more than one road segment corresponding to the highest matching probability may include, for example, a first road segment and a second road segment.
  • depending on the position data, the movement data, and/or the road information, the processing engine 112 may designate the first road segment or the second road segment as the target road segment.
  • the processing engine 112 may determine whether the trajectory of the terminal has changed, is changing, or remains unchanged based on the target road segment. For example, if the target road segment is different from a road segment matched with a previous position of the last position, the processing engine 112 may determine that the trajectory of the terminal has changed. If the target road segment is the same as the road segment matched with the previous position of the last position, the processing engine 112 may determine that the trajectory of the terminal remains unchanged or is changing.
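The trajectory check above reduces to comparing the identifier of the target road segment with that of the previously matched segment; a toy sketch (the identifiers are hypothetical):

```python
# Minimal sketch (assumed helper, not from the disclosure): the trajectory is
# deemed changed when the segment matched to the last position differs from
# the segment matched to the previous position.

def trajectory_changed(target_segment_id, previous_segment_id):
    return target_segment_id != previous_segment_id

print(trajectory_changed("seg-12", "seg-11"))  # True
print(trajectory_changed("seg-12", "seg-12"))  # False
```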
  • the operation 506 may be performed before the operation 504.
  • operations 502 and 504 may be performed simultaneously.
  • the processing engine 112 may determine the one or more candidate road segments and then determine an angle between the direction the terminal is heading toward and each of the one or more candidate road segments as the direction angle.
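One way to realize the direction angle described above is as the angle between the terminal's heading vector and a candidate segment's direction vector; a sketch under that assumption (vectors and values are illustrative):

```python
import math

# Illustrative sketch: the direction angle computed as the angle between the
# terminal's heading vector and a candidate road segment's direction vector.

def direction_angle_deg(heading, segment_dir):
    """Angle in degrees between two 2-D direction vectors."""
    hx, hy = heading
    sx, sy = segment_dir
    dot = hx * sx + hy * sy
    norm = math.hypot(hx, hy) * math.hypot(sx, sy)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

print(direction_angle_deg((1.0, 0.0), (0.0, 1.0)))  # 90.0
```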
  • FIG. 6 is a flowchart illustrating a process 600 for training a neural network model to obtain a target neural network model according to some embodiments of the present disclosure.
  • the target neural network model described in connection with FIG. 5 may be obtained according to the process 600.
  • the process 600 may be executed by the system 100.
  • the process 600 may be implemented as a set of instructions (e.g., an application) stored in the storage (e.g., ROM 230 or RAM 240 of the computing device 200) .
  • the processing engine 112 and/or modules in FIG. 4 may execute the set of instructions, and when executing the instructions, the processing engine 112 and/or the modules may be configured to perform the process 600.
  • the process 600 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 600 are illustrated in FIG. 6 and described below is not intended to be limiting. As shown in FIG. 6, the process 600 may include the following operations.
  • each group of the plurality of groups of training samples may include reference information related to a reference position sequence associated with a reference trajectory, reference road information related to one or more reference road segments, and one or more labels associated with each of the one or more reference road segments and a reference last position in the reference position sequence.
  • the reference position sequence may include a plurality of consecutive reference positions ranked in chronological order. The number of consecutive reference positions in the reference position sequence may be equal to a preset number, such as thirty, forty-five, or sixty.
  • the reference information related to the reference position sequence may be stored in a storage device (e.g., the storage 140 in FIG. 1) .
  • the reference information related to the reference position sequence may include, but is not limited to, reference position data associated with each of the plurality of consecutive reference positions, reference movement data associated with each of the plurality of consecutive reference positions, or the like, or any combination thereof.
  • the reference position data may include the coordinates (e.g., geographic coordinates) of each of the plurality of consecutive reference positions and/or a reference position accuracy of each of the plurality of consecutive reference positions.
  • the reference movement data associated with each of the plurality of consecutive reference positions may include, but is not limited to, a reference acceleration, a reference direction angle, a reference velocity, or the like, or any combination thereof, similar to the descriptions in operation 506 in FIG. 5.
  • the reference road information associated with the reference road segment (s) may include a reference number of lanes, a reference velocity limit (e.g., a maximum velocity limit and/or a minimum velocity limit) , a reference rank, a reference condition, or the like, or any combination thereof.
  • a label may include a matching probability of the last reference position matching with a reference road segment.
  • the label associated with the reference road segment and the reference last position may include a matching probability between 0 and 1 (or 100%) to denote a matching degree between the reference road segment and the last reference position.
  • the label associated with the reference road segment and the reference last position may be a positive label or a negative label.
  • the positive label may suggest that the reference road segment matches with the last reference position.
  • the positive label may include a reference matching probability of 1 (or 100%) .
  • the negative label may suggest that the reference road segment does not match with the last reference position.
  • the negative label may include a reference matching probability of 0.
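The positive/negative labeling above can be sketched as follows; the dictionary layout and feature values are assumptions for illustration, not the disclosure's data format:

```python
# Hypothetical structure of one training group: reference features plus a
# binary label (1.0 for a matching reference road segment, 0.0 otherwise).

def make_training_group(position_features, movement_features,
                        segment_features, matches):
    return {
        "first_features": position_features,
        "second_features": movement_features,
        "third_features": segment_features,
        "label": 1.0 if matches else 0.0,   # reference matching probability
    }

positive = make_training_group([0.1], [0.2], [0.3], matches=True)
negative = make_training_group([0.1], [0.2], [0.4], matches=False)
print(positive["label"], negative["label"])  # 1.0 0.0
```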
  • the plurality of groups of training samples may be associated with a same reference trajectory.
  • the plurality of groups of training samples may include a number of last reference positions on the same reference trajectory.
  • the number of last reference positions may be in a range from 1 million to 5 million, such as 1 million, 2 million, 3 million, 4 million, or 5 million.
  • the number of last reference positions may be labeled manually by a user, for example via a user terminal.
  • the last reference position in each group of the plurality of groups of training samples may be matched with a reference road segment manually by the user.
  • the user may select a reference position sequence including a last reference position for each of the plurality of groups of training samples on a map.
  • the user may select a plurality of reference road segments around the last reference position.
  • the user may determine whether the last reference position matches with each of the plurality of reference road segments and label each of the plurality of reference road segments with a matching probability based on the determination.
  • although GPS positions may drift, when a trajectory is examined as a whole during map-matching, any position on the trajectory can be manually assigned, based on the entire trajectory, to the road segment to which it belongs.
  • in one example, 500 million positions were manually assigned to the road segments to which they belong and used as training samples.
  • each of the plurality of groups of training samples may include one or more reference features associated with the reference position sequence and the one or more labels associated with each of the one or more reference road segments and the reference last position.
  • the one or more reference features may include one or more first reference features determined based on the reference position data associated with each of the plurality of consecutive reference positions, one or more second reference features determined based on the reference movement data associated with each of the plurality of consecutive reference positions, and one or more third reference features determined based on the reference road information associated with the reference road segment.
  • the one or more first reference features may include a reference distance (e.g., a vertical distance) between each of the plurality of consecutive reference positions in the reference position sequence and the reference road segment, the reference position accuracy of each of the plurality of consecutive reference positions, or the like, or any combination thereof.
  • the one or more reference features may be represented as one or more reference feature vectors.
  • the processing engine 112 may determine one or more fused reference feature vectors based on the one or more first reference features, the one or more second reference features, the one or more third reference features, or the like, or any combination thereof.
  • Each of the plurality of groups of training samples may include one or more fused feature vectors.
  • the plurality of groups of training samples may be used to train a neural network model for map-matching.
  • the neural network model may be obtained as described in connection with 604 and trained as described in connection with 606.
  • the processing engine 112 may obtain a neural network model.
  • the plurality of groups of training samples and the neural network model may be obtained simultaneously or in any order.
  • the neural network model may be constructed based on a long short term memory (LSTM) model, a recurrent neural network (RNN) model, a gated recurrent unit (GRU) model, or the like, or a combination thereof.
  • the LSTM model may be a bidirectional LSTM (BLSTM) model.
  • the RNN model may be a bidirectional RNN (BRNN) model.
  • the neural network model may include multiple layers, for example, an input layer, multiple hidden layers, and an output layer.
  • the multiple hidden layers may include one or more convolutional layers, one or more batch normalization layers, one or more activation layers, a full connection layer (also referred to as a “fully connected layer”), etc.
  • the multiple layers may be configured with one or more functions.
  • a convolutional layer may be configured with an activation function, such as a Relu function, a Sigmoid function, a Tanh function, a Maxout function, etc.
  • the activation layer may be configured to convert data outputted from a previous layer using an activation function, so that the converted data may be more suitable for being inputted into a next layer.
  • the full connection layer may have connections to all activations in a previous layer and may be configured to connect every node in one layer to every node in another layer.
  • Each of the multiple layers may include a plurality of nodes.
  • the neural network model may be defined by a plurality of parameters. Parameters of the neural network model may include, for example, the size of a convolutional kernel, the number of layers, the number of nodes in each layer, a connected weight between two connected nodes, a bias vector relating to a node, etc.
  • the connected weight between two connected nodes may represent the proportion of the output value of one node that is passed as an input value to the other connected node.
  • the bias vector relating to a node may control how far the output value of the node deviates from the origin.
  • One portion of the plurality of parameters may be set by a user or according to a default setting of the system 100. For example, the number of nodes of a layer may be 30, 45, 60, 128, 256, etc.
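As a toy illustration of the connected weight and bias described above, a single node can be written as a weighted sum plus a bias passed through a Relu-style activation; all values below are hypothetical:

```python
# Toy sketch of one node: each connected weight scales the upstream output
# passed into the node, and the bias shifts the node's pre-activation value.

def node_output(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a Relu activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, s)

print(node_output([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.1
```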
  • the neural network model may include one or more one-dimension convolution layers, a forward LSTM layer, a backward LSTM layer, a full connection (FC) layer, and one or more activation layers.
  • each of the one or more one-dimension convolution layers may be configured with an activation function (e.g., a Relu function).
  • the number of nodes in the forward LSTM layer and the backward LSTM layer may be equal to the number of the plurality of consecutive positions in the position sequence, such as 30, 45, 60, etc.
  • the number of nodes in the FC layer may be 128, 256, 512, etc.
  • a Sigmoid layer may receive the output of the FC layer and may be configured to determine an estimated matching probability of the last reference position matching with the reference road segment using the Sigmoid function. More descriptions for the neural network model may be found elsewhere in the present disclosure (e.g., FIG. 7 and the description thereof) .
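A hedged PyTorch sketch of the structure described above (two one-dimension convolution layers with Relu, the forward and backward LSTM layers expressed as one bidirectional LSTM, an FC layer, and a Sigmoid output); the channel counts, kernel sizes, and feature dimension are assumptions, not taken from the disclosure:

```python
import torch
import torch.nn as nn

class MapMatchingNet(nn.Module):
    """Sketch: conv1d x2 -> bidirectional LSTM -> FC -> Sigmoid probability."""

    def __init__(self, n_features=8, hidden=64, fc_nodes=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, fc_nodes)
        self.out = nn.Sequential(nn.Linear(fc_nodes, 1), nn.Sigmoid())

    def forward(self, x):                   # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.out(self.fc(h[:, -1]))  # probability for the last position

model = MapMatchingNet()
probs = model(torch.randn(4, 30, 8))        # 30 consecutive positions
print(probs.shape)                          # torch.Size([4, 1])
```

The Sigmoid keeps each output in (0, 1), matching the matching-probability labels between 0 and 1 described above.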
  • the processing engine 112 may train the neural network model using the plurality of groups of training samples to obtain a target neural network model.
  • the neural network model may be trained using a neural network training algorithm.
  • Neural network training algorithms may include, for example, a gradient descent algorithm, Newton’s algorithm, a quasi-Newton algorithm, a Levenberg-Marquardt algorithm, a conjugate gradient algorithm, or the like, or a combination thereof.
  • the neural network model may be trained by performing a plurality of iterations based on a cost function. Before the plurality of iterations, the parameters of the neural network model may be initialized.
  • the connected weights and/or the bias vector of nodes of the neural network model may be initialized to be random values in a range, e.g., the range from -1 to 1.
  • all the connected weights of the neural network model may have a same value in the range from -1 to 1, for example, 0.
  • the bias vector of nodes in the neural network model may be initialized to be random values in a range from 0 to 1.
  • the parameters of the neural network model may be initialized based on a Gaussian random algorithm, a Xavier algorithm, etc. Then the plurality of iterations may be performed to update the parameters of the neural network model until a condition is satisfied.
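For illustration, the Xavier initialization mentioned above draws each weight uniformly from a range set by the fan-in and fan-out of a layer; this is one common formulation, not necessarily the exact scheme intended by the disclosure:

```python
import math
import random

# Illustrative Xavier (Glorot) uniform initialization for a weight matrix.

def xavier_uniform(fan_in, fan_out, rng=random.random):
    bound = math.sqrt(6.0 / (fan_in + fan_out))
    return [[(2.0 * rng() - 1.0) * bound for _ in range(fan_out)]
            for _ in range(fan_in)]

w = xavier_uniform(64, 128)
bound = math.sqrt(6.0 / (64 + 128))
print(all(-bound <= v <= bound for row in w for v in row))  # True
```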
  • the condition may provide an indication of whether the neural network model is sufficiently trained. For example, the condition may be satisfied if the value of the cost function associated with the neural network model is minimal or smaller than a threshold (e.g., a pre-set value) .
  • the cost function may be a logarithmic loss (also referred to as “log loss” ) that measures an uncertainty of a prediction result based on how much the prediction result varies from the label.
  • the condition may be satisfied if the value of the cost function converges. The convergence may be deemed to have occurred if the variation of the values of the cost function in two or more consecutive iterations is smaller than a threshold (e.g., a pre-set value) .
  • the condition may be satisfied when a specified number of iterations are performed in the training process.
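The logarithmic loss mentioned above can be written directly; a minimal sketch, with a clipping epsilon added for numerical safety (the epsilon is an implementation convenience, not part of the disclosure):

```python
import math

# Sketch of the logarithmic (log) loss cost function: it penalizes an
# estimated matching probability by how far it departs from the 0/1 label.

def log_loss(label, estimated_prob, eps=1e-12):
    p = min(max(estimated_prob, eps), 1.0 - eps)   # avoid log(0)
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

print(round(log_loss(1.0, 0.9), 4))   # 0.1054
print(round(log_loss(1.0, 0.1), 4))   # 2.3026
```

A confident wrong prediction (0.1 for a positive label) costs far more than a confident correct one, which is what drives the parameter updates during training.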
  • each group of the plurality of groups of training samples may be inputted into the neural network model.
  • the one or more reference features (or the reference information related to the reference position sequence and the reference road information related to the reference road segment) may be processed by one or more layers of the neural network model to generate an estimated matching probability of the last reference position matching with the reference road segment; the label associated with the reference road segment and the reference last position may serve as the desired output.
  • the estimated matching probability may be compared with the reference matching probability based on the cost function of the neural network model.
  • the cost function of the neural network model may be configured to assess a difference between a testing value (e.g., the estimated matching probability) of the neural network model and a desired value (e.g., the reference matching probability) .
  • the parameters of the neural network model may be adjusted and updated to cause the value of the cost function (i.e., the difference between the estimated matching probability and the reference matching probability) to be smaller than the threshold. Accordingly, in a next iteration, another group of training samples may be inputted into the neural network model to train the neural network model as described above until the condition is satisfied. In some embodiments, the trained neural network model may be determined based on the updated parameters. In some embodiments, the target neural network model may be transmitted to the storage 140, the storage module 440, or any other storage device for storage.
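As a toy end-to-end illustration of the iterative update described above, consider a single sigmoid node trained with gradient descent on log loss until the change in the cost between iterations falls below a threshold; the data, learning rate, and stopping threshold are all hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, tol=1e-6, max_iter=10_000):
    """Gradient descent on log loss for one sigmoid node; stops when the
    cost change between consecutive iterations is below tol."""
    w, b = 0.0, 0.0
    prev_loss = float("inf")
    for _ in range(max_iter):
        loss = 0.0
        for x, label in samples:
            p = sigmoid(w * x + b)
            loss += -(label * math.log(p) + (1 - label) * math.log(1 - p))
            grad = p - label            # d(log loss)/dz for a sigmoid output
            w -= lr * grad * x
            b -= lr * grad
        if abs(prev_loss - loss) < tol:  # convergence condition satisfied
            break
        prev_loss = loss
    return w, b

w, b = train([(1.0, 1.0), (-1.0, 0.0)])
print(sigmoid(w * 1.0 + b) > 0.9)  # True
```

The same comparison-and-update cycle, scaled up to the full parameter set, is what the gradient-descent-style algorithms mentioned earlier perform on the neural network model.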
  • the trained neural network model (i.e., the target neural network model) may be configured to output an estimated matching probability based on the data inputted into the trained neural network model.
  • the data inputted into the trained neural network model may include the position data and movement data related to each of a plurality of consecutive positions in a position sequence, and road information related to one or more candidate road segments within a distance from a last position in the position sequence.
  • the data inputted into the trained neural network model may include one or more first features determined based on the movement data, one or more second features determined based on the movement data, and one or more third features determined based on the road information.
  • FIG. 7 is a schematic diagram illustrating a structure of a neural network model 700 according to some embodiments of the present disclosure.
  • the neural network model may include two one-dimension convolution (1d conv) layers, a forward LSTM layer, a backward LSTM layer, an FC layer, and a Sigmoid layer.
  • the one-dimension convolution layer may be configured to increase the dimensions of the one or more (fused) feature vectors associated with each of the plurality of reference consecutive positions in each of the plurality of reference position sequences, so that richer correlations between the data in the one or more (fused) feature vectors may be obtained.
  • the one-dimension convolution layer may be configured with a Relu activation function.
  • the one-dimension convolution layer may be denoted as the following equation (1):

    c_{i, k} = max (w_k · x_i + b_k, 0) ,     (1)

  • where x_i refers to the feature vector of the i th reference consecutive position in the reference position sequence,
  • k refers to an index number of a feature set including a plurality of features related to the i th reference consecutive position (e.g., the first feature, the second feature, the third feature),
  • w_k and b_k are parameters associated with a k th filter, and
  • the max may introduce the Relu activation function in the one-dimension convolution layer.
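Equation (1) can be sketched as a width-one one-dimension convolution with the Relu introduced by the max term; the filter weights and inputs below are illustrative only:

```python
# Sketch of equation (1): for each position's feature vector x_i and each
# filter (w_k, b_k), compute c[i][k] = max(0, w_k . x_i + b_k).

def conv1d_relu(sequence, filters, biases):
    """sequence: list of feature vectors x_i; filters: list of weight
    vectors w_k; biases: list of scalars b_k."""
    return [[max(0.0, sum(w * x for w, x in zip(w_k, x_i)) + b_k)
             for w_k, b_k in zip(filters, biases)]
            for x_i in sequence]

seq = [[1.0, 2.0], [0.5, -1.0]]           # two positions, two features each
filters = [[1.0, 0.0], [0.0, 1.0]]        # two filters
biases = [0.0, -0.5]
print(conv1d_relu(seq, filters, biases))  # [[1.0, 1.5], [0.5, 0.0]]
```

The max(..., 0) clamps negative responses to zero, which is exactly the Relu behavior the bullet above attributes to the max term.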
  • the two one-dimension convolution layers may be optional.
  • the forward LSTM layer and the backward LSTM layer may be configured to process data related to each of the plurality of training samples described in operation 602 or an output from a previous one-dimension convolution layer.
  • the FC layer may include, for example, 128 nodes, 256 nodes, etc. Each node of the FC layer may obtain an input from every node of the previous layer (e.g., a backward LSTM layer) .
  • the Sigmoid layer may serve as an activation layer for outputting an estimated matching probability of the last reference position matching with the reference road segment corresponding to the last reference position. In some embodiments, the Sigmoid layer may be integrated into the FC layer.
  • the neural network model may not include any one-dimension convolution layers.
  • the neural network model may include only one one-dimension convolution layer or more than two one-dimension convolution layers.
  • the neural network model may include, instead of the forward LSTM layer and the backward LSTM layer, a forward GRU layer and a backward GRU layer.
  • Other forward RNN layers and backward RNN layers may also be used for constructing the neural network model, which are not limited by the present disclosure.
  • aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer-readable program code embodied thereon.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electromagnetic, optical, or the like, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in a combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .

PCT/CN2019/090233 2019-06-04 2019-06-06 Systems and methods for map-matching WO2020243937A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910479798.5 2019-06-04
CN201910479798.5A CN110686686B (zh) 2019-06-04 2019-06-04 用于地图匹配的系统和方法

Publications (1)

Publication Number Publication Date
WO2020243937A1 true WO2020243937A1 (en) 2020-12-10


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113295173A (zh) * 2021-05-24 2021-08-24 安徽师范大学 环形路段的地图匹配方法
CN116086453A (zh) * 2022-12-12 2023-05-09 无锡恺韵来机器人有限公司 一种基于概率优化计算的惯导和地图组合定位方法

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11473927B2 (en) * 2020-02-05 2022-10-18 Electronic Arts Inc. Generating positions of map items for placement on a virtual map
CN111784798B (zh) * 2020-06-30 2021-04-09 滴图(北京)科技有限公司 地图生成方法、装置、电子设备和存储介质
CN112067005B (zh) * 2020-09-02 2023-05-05 四川大学 一种基于转弯点的离线地图匹配方法、装置及终端设备
CN112084285B (zh) * 2020-09-11 2023-08-08 北京百度网讯科技有限公司 用于地图匹配的方法、装置、电子设备以及可读介质
WO2022066098A1 (en) * 2020-09-22 2022-03-31 Grabtaxi Holdings Pte. Ltd Method and device for determining a navigation profile for a vehicle in a geographical area
CN112653997A (zh) * 2020-12-29 2021-04-13 西安九索数据技术股份有限公司 一种基于基站序列的位置轨迹计算方法
CN112883058A (zh) * 2021-03-23 2021-06-01 北京车和家信息技术有限公司 用于车辆定位的标定方法、装置、设备、车辆和介质
CN113188553B (zh) * 2021-04-15 2023-11-21 杭州海康威视系统技术有限公司 路线规划方法、装置、电子设备及机器可读存储介质
CN115394107A (zh) * 2022-08-03 2022-11-25 内蒙古巨宇测绘有限公司 一种错峰停车方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090107845A (ko) * 2008-04-10 2009-10-14 엘지전자 주식회사 차량 항법 방법 및 그 장치
CN104034338A (zh) * 2014-06-17 2014-09-10 百度在线网络技术(北京)有限公司 一种动态导航方法及装置
CN108253976A (zh) * 2018-01-04 2018-07-06 重庆大学 一种充分借助车辆航向的三阶段在线地图匹配算法
WO2018175441A1 (en) * 2017-03-20 2018-09-27 Mobileye Vision Technologies Ltd. Navigation by augmented path prediction

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101571400A (zh) * 2009-01-04 2009-11-04 四川川大智胜软件股份有限公司 基于动态交通信息的嵌入式车载组合导航系统
EP2972094A4 (en) * 2013-03-15 2017-03-08 Hewlett-Packard Enterprise Development LP Map matching
CN105628033B (zh) * 2016-02-26 2019-04-02 广西鑫朗通信技术有限公司 一种基于道路连通关系的地图匹配方法
US10579065B2 (en) * 2016-11-23 2020-03-03 Baidu Usa Llc Algorithm and infrastructure for robust and efficient vehicle localization
CN108680174B (zh) * 2018-05-10 2019-05-10 长安大学 一种基于机器学习算法改进地图匹配异常点的方法
CN108763558B (zh) * 2018-05-25 2020-12-18 武汉大学 一种基于地图匹配的众包地图道路质量改进方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090107845A (ko) * 2008-04-10 2009-10-14 엘지전자 주식회사 차량 항법 방법 및 그 장치
CN104034338A (zh) * 2014-06-17 2014-09-10 百度在线网络技术(北京)有限公司 一种动态导航方法及装置
WO2018175441A1 (en) * 2017-03-20 2018-09-27 Mobileye Vision Technologies Ltd. Navigation by augmented path prediction
CN108253976A (zh) * 2018-01-04 2018-07-06 重庆大学 一种充分借助车辆航向的三阶段在线地图匹配算法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113295173A (zh) * 2021-05-24 2021-08-24 Anhui Normal University Map-matching method for circular road segments
CN113295173B (zh) * 2021-05-24 2023-08-29 Anhui Normal University Map-matching method for circular road segments
CN116086453A (zh) * 2022-12-12 2023-05-09 Wuxi Kaiyunlai Robot Co., Ltd. Inertial navigation and map combined positioning method based on probabilistic optimization calculation
CN116086453B (zh) * 2022-12-12 2024-03-12 Yunlai Intelligent Equipment (Wuxi) Co., Ltd. Inertial navigation and map combined positioning method based on probabilistic optimization calculation

Also Published As

Publication number Publication date
CN110686686B (zh) 2020-10-02
CN110686686A (zh) 2020-01-14

Similar Documents

Publication Publication Date Title
WO2020243937A1 (en) Systems and methods for map-matching
US11024163B2 (en) Systems and methods for monitoring traffic congestion
US10979863B2 (en) Systems and methods for recommending a destination
JP6503474B2 (ja) 移動デバイスの経路を求めるシステム及び方法
US10904724B2 (en) Methods and systems for naming a pick up location
WO2017202112A1 (en) Systems and methods for distributing request for service
AU2017411198B2 (en) Systems and methods for route planning
US20200158522A1 (en) Systems and methods for determining a new route in a map
US11003730B2 (en) Systems and methods for parent-child relationship determination for points of interest
US11290547B2 (en) Systems and methods for determining an optimal transportation service type in an online to offline service
US20210065548A1 (en) Systems and methods for navigation based on intersection coding
US11105644B2 (en) Systems and methods for identifying closed road section
WO2020107569A1 (en) Systems and methods for determining traffic information of a region
WO2021087663A1 (en) Systems and methods for determining name for boarding point
WO2021056250A1 (en) Systems and methods for recommendation and display of point of interest
US20220178701A1 (en) Systems and methods for positioning a target subject
WO2021012243A1 (en) Positioning systems and methods
US20220178719A1 (en) Systems and methods for positioning a target subject
WO2022087767A1 (en) Systems and methods for recommending pick-up locations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19932162

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19932162

Country of ref document: EP

Kind code of ref document: A1