CN111831763A - Map processing method, map processing device, map processing equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111831763A
CN111831763A (application CN201910860108.0A)
Authority
CN
China
Prior art keywords
lattices
vector matrix
map
lattice
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910860108.0A
Other languages
Chinese (zh)
Inventor
张洪荣
吴羡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN201910860108.0A priority Critical patent/CN111831763A/en
Publication of CN111831763A publication Critical patent/CN111831763A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval of structured data, e.g. relational data
    • G06F 16/29 - Geographical information databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a map processing method, a map processing apparatus, map processing equipment, and a computer-readable storage medium. The method comprises the following steps: determining, from a plurality of pieces of travel data, the grids at which each trip starts and ends, wherein the grids correspond to preset location areas on a map; performing network training according to the data of the plurality of grids to obtain a grid vector matrix, wherein the grid vector matrix comprises vectors corresponding to the labels of the plurality of grids, ordered by grid travel frequency; and clustering the plurality of grids in the map according to the grid vector matrix. Because the grid vector matrix is trained from travel data and all grids in the map are then clustered through it, the clustered grids reflect the travel characteristics of users as well as the travel density of users in different regions.

Description

Map processing method, map processing device, map processing equipment and computer readable storage medium
Technical Field
The present application relates to the field of big data technologies, and in particular, to a map processing method, apparatus, device, and computer-readable storage medium.
Background
With the continuous development of science and technology, maps are used ever more widely in daily life and are continuously optimized within different applications.
In the related art, a map may be divided into grids, each corresponding to an area of a preset size. When a map is composed of a plurality of grids, each grid can be assigned a unique label, so that the grids containing the start point and end point of a user's trip can be determined from those labels.
However, the grids in such a map are merely a plurality of mutually independent areas and do not reflect the travel characteristics of users.
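As a concrete illustration of the grid division described above, the sketch below maps a coordinate to the label of the fixed-size grid cell that contains it. The cell size, origin, and labeling scheme are illustrative assumptions, not values specified by the application.

```python
def grid_label(lat, lon, cell_deg=0.01, origin=(-90.0, -180.0)):
    """Map a (lat, lon) coordinate to the integer label of its grid cell.

    cell_deg is the side length of a square cell in degrees; cell size,
    origin, and row-major labeling are assumptions for illustration only.
    """
    row = int((lat - origin[0]) // cell_deg)   # index of the cell's row
    col = int((lon - origin[1]) // cell_deg)   # index of the cell's column
    n_cols = int(360.0 // cell_deg)            # cells per row across the map
    return row * n_cols + col                  # unique label per cell
```

Under this scheme, two nearby points (e.g., the start of one trip and the end of another at the same intersection) fall into the same cell and therefore share a label.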
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a map processing method, apparatus, equipment, and computer-readable storage medium that classify the grids in a map, thereby solving the prior-art problem that the grids are mutually independent areas which cannot reflect the travel characteristics of users.
According to a first aspect of the present application, there is provided a map processing method, the method comprising:
determining, from a plurality of pieces of travel data, the grids at which each trip starts and ends, wherein the grids correspond to preset location areas on a map;
performing network training according to the data of the plurality of grids to obtain a grid vector matrix, wherein the grid vector matrix comprises: vectors corresponding to the labels of the plurality of grids, ordered by grid travel frequency;
and clustering the plurality of grids in the map according to the grid vector matrix.
In some embodiments, performing network training according to the data of the plurality of grids to obtain the grid vector matrix includes:
sorting the grids according to the travel frequency of each grid;
and performing network training according to the sorted data of the plurality of grids to obtain the grid vector matrix.
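The frequency-based sorting above can be sketched as building a frequency-ordered vocabulary of grid labels, analogous to a word vocabulary sorted by corpus frequency. `build_grid_vocab` and the (start, end) pair format are hypothetical names introduced for illustration only.

```python
from collections import Counter

def build_grid_vocab(trips):
    """Order grid labels by travel frequency, most frequent first.

    trips: iterable of (start_label, end_label) pairs, one per trip.
    Returns a dict mapping grid label -> rank index in the sorted order.
    An illustrative sketch, not the application's exact scheme.
    """
    freq = Counter()
    for start, end in trips:
        freq[start] += 1   # each trip contributes to its start grid
        freq[end] += 1     # and to its end grid
    ranked = [g for g, _ in freq.most_common()]
    return {g: i for i, g in enumerate(ranked)}
```

The rank index can then serve as the row position of each grid's vector in the grid vector matrix.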
In some embodiments, performing network training according to the data of the plurality of grids to obtain the grid vector matrix includes:
inputting the data of each grid into a preset neural network model to obtain, for each grid, probability values over the plurality of grids. The neural network model derives these probability values from the data of the grids, from the input vectors corresponding to the grid labels in a first vector matrix, and from the output vectors corresponding to the grid labels in a second vector matrix. The row vectors of the first vector matrix comprise the input vectors corresponding to the labels of all grids, sorted by travel frequency, where an input vector represents its grid acting as a starting grid; the column vectors of the second vector matrix comprise the output vectors corresponding to the labels of all grids, sorted by travel frequency, where an output vector represents its grid acting as a terminating grid;
calculating a loss function value of the neural network model according to the probability values of each grid over the plurality of grids and the true label of the terminating grid corresponding to each grid;
adjusting the vector values in the first vector matrix and the second vector matrix according to the loss function value of the neural network model;
and, if the loss function curve of the neural network model trained on the adjusted first vector matrix and the adjusted second vector matrix converges, taking the first vector matrix at the point of convergence as the grid vector matrix.
In some embodiments, clustering the plurality of grids in the map according to the grid vector matrix comprises:
clustering the plurality of grids of the map with a preset hierarchical clustering algorithm according to the grid vector matrix and a preset connectivity matrix, wherein the connectivity matrix comprises parameters characterizing the adjacency relationships between grids, and grids of the same class after clustering are located in the same region on the map.
In some embodiments, grids of the same class after clustering share the same category label.
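A simplified stand-in for the connectivity-constrained clustering described above: grid cells merge only when they are adjacent on the map and their vectors are close, so every resulting class is contiguous. The union-find scheme and distance threshold are illustrative assumptions, not the application's exact hierarchical algorithm.

```python
import numpy as np

def cluster_grids(vectors, adjacency, threshold):
    """Group grid cells so that only adjacent, similar cells merge.

    vectors: (V, d) grid vector matrix; adjacency: V x V 0/1 connectivity
    matrix marking which cells touch on the map. Merges every adjacent
    pair whose embedding distance is within threshold (union-find);
    a sketch, not the patent's exact hierarchical clustering.
    """
    V = len(vectors)
    parent = list(range(V))

    def find(x):                          # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # candidate merges: adjacent pairs, closest embeddings first
    candidates = [(np.linalg.norm(vectors[i] - vectors[j]), i, j)
                  for i in range(V) for j in range(i + 1, V) if adjacency[i][j]]
    for dist, i, j in sorted(candidates):
        if dist <= threshold:
            parent[find(i)] = find(j)     # merge the two clusters

    roots = [find(i) for i in range(V)]
    labels = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [labels[r] for r in roots]     # same label = same class on the map
```

Because merges are restricted by the adjacency matrix, two cells with near-identical vectors but no path of adjacent similar cells between them still end up in different classes, matching the requirement that same-class grids lie in the same region.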
According to a second aspect of the present application, there is provided a map processing apparatus, the apparatus comprising:
a determining module, configured to determine, from a plurality of pieces of travel data, the grids at which each trip starts and ends, wherein the grids correspond to preset location areas on a map;
a training module, configured to perform network training according to the data of the plurality of grids to obtain a grid vector matrix, wherein the grid vector matrix comprises: vectors corresponding to the labels of the plurality of grids, ordered by grid travel frequency;
and a clustering module, configured to cluster the plurality of grids in the map according to the grid vector matrix.
In some embodiments, the training module is further configured to sort the grids according to the travel frequency of each grid, and to perform network training according to the sorted data of the plurality of grids to obtain the grid vector matrix.
In some embodiments, the training module is further configured to input the data of each grid into a preset neural network model to obtain, for each grid, probability values over the plurality of grids. The neural network model derives these probability values from the data of the grids, from the input vectors corresponding to the grid labels in a first vector matrix, and from the output vectors corresponding to the grid labels in a second vector matrix. The row vectors of the first vector matrix comprise the input vectors corresponding to the labels of all grids, sorted by travel frequency, where an input vector represents its grid acting as a starting grid; the column vectors of the second vector matrix comprise the output vectors corresponding to the labels of all grids, sorted by travel frequency, where an output vector represents its grid acting as a terminating grid. The training module is further configured to calculate a loss function value of the neural network model according to the probability values of each grid over the plurality of grids and the true label of the terminating grid corresponding to each grid; to adjust the vector values in the first vector matrix and the second vector matrix according to the loss function value; and, if the loss function curve of the neural network model trained on the adjusted first vector matrix and the adjusted second vector matrix converges, to take the first vector matrix at the point of convergence as the grid vector matrix.
In some embodiments, the clustering module is further configured to cluster the plurality of grids of the map with a preset hierarchical clustering algorithm according to the grid vector matrix and a preset connectivity matrix, wherein the connectivity matrix comprises parameters characterizing the adjacency relationships between grids, and grids of the same class after clustering are located in the same region on the map.
In some embodiments, grids of the same class after clustering share the same category label.
According to a third aspect of the present application, there is provided a map processing apparatus comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the map processing apparatus is running, the processor executing the machine-readable instructions to perform the steps of the map processing method according to any one of the first aspect.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the map processing method according to any one of the first aspects.
Based on any of the above aspects, in the embodiments provided by the application, the grids at which a plurality of trips start and end are determined from a plurality of pieces of travel data, network training is performed according to the data of the plurality of grids to obtain a grid vector matrix, and the plurality of grids in the map are then clustered according to the grid vector matrix. Because the grid vector matrix is trained from travel data and all grids in the map are clustered through it, the clustered grids reflect the travel characteristics of users as well as the travel density of users in different regions.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of the scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a block diagram of a map application system of an application scenario to which a map processing method of some embodiments of the present application is directed;
FIG. 2 illustrates a schematic diagram of exemplary hardware and software components of a map processing device that may implement the concepts of the present application, according to some embodiments of the present application;
FIG. 3 is a flow chart illustrating a map processing method according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating a map processing method according to another embodiment of the present application;
FIG. 5 shows a block diagram of a map processing apparatus of an embodiment of the present application;
fig. 6 shows a schematic structural diagram of a map processing device provided in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. It should be understood that the drawings in the present application serve illustrative and descriptive purposes only and are not used to limit the scope of protection of the application; the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments; their operations may be performed out of order, and steps without logical dependencies may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to a flowchart or remove one or more operations from it.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
To enable those skilled in the art to use the present disclosure, the following embodiments are presented in conjunction with a specific application scenario: a map in a ride-hailing application. It will be apparent to those skilled in the art that the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the application. Although the present application is described primarily in the context of map optimization in a ride-hailing application, it should be understood that this is merely one exemplary embodiment. The application can be applied to any other transportation type, for example to different transportation system environments, including land, sea, or air, or any combination thereof. The present application may also apply to any service system that provides services based on maps, for example a system for sending and/or receiving couriers, or a service system for business transactions between buyers and sellers. Applications of the system or method of the present application may include web pages, browser plug-ins, client terminals, customization systems, internal analysis systems, or artificial intelligence robots, among others, or any combination thereof.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
The term "user" in this application may refer to an individual, entity or tool that requests a service, subscribes to a service, provides a service, or facilitates the provision of a service. For example, the user may be a passenger, a driver, an operator, etc., or any combination thereof. In the present application, "passenger" and "passenger terminal" may be used interchangeably, and "driver" and "driver terminal" may be used interchangeably.
The terms "service request" and "order" are used interchangeably herein to refer to a request initiated by a passenger, a service requester, a driver, a service provider, or a supplier, the like, or any combination thereof. Accepting the "service request" or "order" may be a passenger, a service requester, a driver, a service provider, a supplier, or the like, or any combination thereof. The service request may be charged or free.
The positioning technology used in the present application may be based on the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the BeiDou Navigation Satellite System, the COMPASS navigation system, the Galileo positioning system, the Quasi-Zenith Satellite System (QZSS), Wireless Fidelity (WiFi) positioning technology, or any combination thereof. One or more of the above positioning systems may be used interchangeably in this application.
Before the present application was proposed, the existing technical solution was as follows: the map in an application is divided into a plurality of grids according to a plurality of preset areas, and the start point and end point of a user's trip are determined through the grid labels.
The technical problem with this approach is that the grids in the map are a plurality of mutually independent areas and cannot reflect the travel characteristics of users.
To solve this technical problem, an embodiment of the present application provides a map processing method. Its core improvements are: performing network training based on a plurality of pieces of travel data to obtain a grid vector matrix; clustering the grids in the map through the grid vector matrix; determining the start and end points of a user's trip based on the grids; and determining the type of the region in which the user travels according to the classification of the corresponding grids, thereby reflecting the user's travel information in multiple aspects.
The technical solution of the present application is explained below through possible implementations.
Fig. 1 shows a block diagram of a map application system 100 of an application scenario to which a map processing method according to some embodiments of the present application relates. For example, the map application system 100 may be an online transportation service platform for transportation services such as taxis, designated driving services, express, carpooling, bus services, driver rentals, or regular bus services, or any combination thereof. The map application system 100 may include one or more of a server 110, a network 120, a service requester terminal 130, a service provider terminal 140, and a database 150, and the server 110 may include a processor therein that performs instruction operations.
In some embodiments, the server 110 may be a single server or a group of servers. The set of servers can be centralized or distributed (e.g., the servers 110 can be a distributed system). In some embodiments, the server 110 may be local or remote to the terminal. For example, the server 110 may access information and/or data stored in the service requester terminal 130, the service provider terminal 140, or the database 150, or any combination thereof, via the network 120. As another example, the server 110 may be directly connected to at least one of the service requester terminal 130, the service provider terminal 140, and the database 150 to access stored information and/or data. In some embodiments, the server 110 may be implemented on a cloud platform; by way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud (community cloud), a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof.
In some embodiments, the server 110 may include a processor. The processor may process information and/or data related to the service request to perform one or more of the functions described herein. For example, the processor may determine the start and end points of a user trip based on a service request obtained from the service requester terminal 130. In some embodiments, a processor may include one or more processing cores (e.g., a single-core or multi-core processor). Merely by way of example, a processor may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination thereof.
Network 120 may be used for the exchange of information and/or data. In some embodiments, one or more components (e.g., server 110, service requester terminal 130, service provider terminal 140, and database 150) in the map application system 100 may send information and/or data to other components. For example, the server 110 may obtain a service request from the service requester terminal 130 via the network 120. In some embodiments, the network 120 may be any type of wired or wireless network, or a combination thereof. Merely by way of example, network 120 may include a wired network, a wireless network, a fiber-optic network, a telecommunications network, an intranet, the internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near-field communication (NFC) network, or the like, or any combination thereof. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or network switching nodes, through which one or more components of the map application system 100 may connect to the network 120 to exchange data and/or information.
In some embodiments, the user of the service requestor terminal 130 may be someone other than the actual demander of the service. For example, the user a of the service requester terminal 130 may use the service requester terminal 130 to initiate a service request for the service actual demander B (for example, the user a may call a car for his friend B), or receive service information or instructions from the server 110. In some embodiments, the user of the service provider terminal 140 may be the actual provider of the service or may be another person than the actual provider of the service. For example, user C of the service provider terminal 140 may use the service provider terminal 140 to receive a service request serviced by the service provider entity D (e.g., user C may pick up an order for driver D employed by user C), and/or information or instructions from the server 110. In some embodiments, "service requester" and "service requester terminal" may be used interchangeably, and "service provider" and "service provider terminal" may be used interchangeably.
In some embodiments, the service requester terminal 130 may comprise a mobile device, a tablet computer, a laptop computer, or a built-in device in a motor vehicle, etc., or any combination thereof. In some embodiments, the mobile device may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include smart lighting devices, control devices for smart electrical devices, smart monitoring devices, smart televisions, smart cameras, or walkie-talkies, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, a smart helmet, a smart watch, a smart garment, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, or a point of sale (POS) device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glass, a virtual reality patch, an augmented reality helmet, augmented reality glass, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include various virtual reality products and the like. In some embodiments, the built-in devices in the motor vehicle may include an on-board computer, an on-board television, and the like. In some embodiments, the service requester terminal 130 may be a device having a location technology for locating the location of the service requester and/or service requester terminal.
In some embodiments, the service provider terminal 140 may be a similar or identical device as the service requestor terminal 130. In some embodiments, the service provider terminal 140 may be a device with location technology for locating the location of the service provider and/or the service provider terminal. In some embodiments, the service requester terminal 130 and/or the service provider terminal 140 may communicate with other locating devices to determine the location of the service requester, service requester terminal 130, service provider, or service provider terminal 140, or any combination thereof. In some embodiments, the service requester terminal 130 and/or the service provider terminal 140 may transmit the location information to the server 110.
Database 150 may store data and/or instructions. In some embodiments, the database 150 may store data obtained from the service requester terminal 130 and/or the service provider terminal 140. In some embodiments, database 150 may store data and/or instructions for the exemplary methods described herein. In some embodiments, database 150 may include mass storage, removable storage, volatile read-write memory, or read-only memory (ROM), among others, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid-state drives, and the like; removable memory may include flash drives, floppy disks, optical disks, memory cards, zip disks, tapes, and the like; volatile read-write memory may include random access memory (RAM), which may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor-based random access memory (T-RAM), zero-capacitor RAM (Z-RAM), and the like. By way of example, ROM may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), digital versatile disc ROM (DVD-ROM), and the like. In some embodiments, database 150 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, a database 150 may be connected to the network 120 to communicate with one or more components in the map application system 100 (e.g., the server 110, the service requester terminal 130, the service provider terminal 140, etc.). One or more components in the mapping application 100 may access data or instructions stored in the database 150 via the network 120. In some embodiments, the database 150 may be directly connected to one or more components in the map application system 100 (e.g., the server 110, the service requestor terminal 130, the service provider terminal 140, etc.); alternatively, in some embodiments, database 150 may also be part of server 110.
In some embodiments, one or more components in the map application system 100 (e.g., the server 110, the service requestor terminal 130, the service provider terminal 140, etc.) may have access to the database 150. In some embodiments, one or more components in the map application system 100 may read and/or modify information related to a service requestor, a service provider, or the public, or any combination thereof, when certain conditions are met. For example, server 110 may read and/or modify information for one or more users after receiving a service request.
In some embodiments, the exchange of information by one or more components in the map application system 100 may be accomplished through a service request. The object of the service request may be any product. In some embodiments, the product may be a tangible product or an intangible product. Tangible products may include food, pharmaceuticals, commodities, chemical products, appliances, clothing, automobiles, housing, or luxury goods, and the like, or any combination thereof. Intangible products may include a service product, a financial product, a knowledge product, an internet product, or the like, or any combination thereof. The internet product may include a stand-alone host product, a network product, a mobile internet product, a commercial host product, an embedded product, or the like, or any combination thereof. The internet product may be used in software, programs, or systems of a mobile terminal, etc., or any combination thereof. The mobile terminal may include a tablet, a laptop, a mobile phone, a Personal Digital Assistant (PDA), a smart watch, a Point of Sale (POS) device, a vehicle-mounted computer, a vehicle-mounted television, a wearable device, or the like, or any combination thereof. The internet product may be, for example, any software and/or application used in a computer or mobile phone. The software and/or applications may relate to social interaction, shopping, transportation, entertainment, learning, or investment, or the like, or any combination thereof. In some embodiments, the transportation-related software and/or applications may include travel software and/or applications, vehicle dispatch software and/or applications, mapping software and/or applications, and the like.
Fig. 2 illustrates a schematic diagram of exemplary hardware and software components of a map processing device 200 that may implement the concepts of the present application, according to some embodiments of the present application. For example, the processor 220 of the map processing device 200 may be used to perform the functions described herein.
The map processing apparatus 200 may be a general-purpose computer or a special-purpose computer, both of which may be used to implement the map processing method of the present application. Although only a single computer is shown, for convenience, the functions described herein may be implemented in a distributed fashion across multiple similar platforms to balance processing loads.
For example, the map processing device 200 may include a network port 210 connected to a network, one or more processors 220 for executing program instructions, a communication bus 230, and different forms of storage media 240, such as a disk, ROM, or RAM, or any combination thereof. Illustratively, the computer platform may also include program instructions stored in ROM, RAM, or other types of non-transitory storage media, or any combination thereof. The method of the present application may be implemented in accordance with these program instructions. The map processing device 200 also includes an Input/Output (I/O) interface 250 between the computer and other input/output devices (e.g., keyboard, display screen).
For ease of illustration, only one processor is depicted in the map processing device 200. However, it should be noted that the map processing apparatus 200 in the present application may also include a plurality of processors, and thus the steps performed by one processor described in the present application may also be performed by a plurality of processors in combination or individually. For example, if the processor of the map processing apparatus 200 executes step a and step B, it should be understood that step a and step B may be executed by two different processors together or executed in one processor alone. For example, a first processor performs step a and a second processor performs step B, or the first processor and the second processor perform steps a and B together.
It should be noted that the map processing apparatus 200 may be a computer apparatus connected to the server shown in fig. 1, and may acquire user trip request information from the server shown in fig. 1, so as to perform optimization processing on the map according to the request information.
The computer device may be a terminal device or a server, which is not limited in this embodiment of the present application.
Fig. 3 is a flowchart illustrating a map processing method according to an embodiment of the present application. The execution subject of the method may be a map processing apparatus as shown in fig. 2. As shown in fig. 3, the method includes:
Step 301, the grids of the start point and the end point of each trip are determined from a plurality of pieces of trip data.
Wherein the grid corresponds to a preset location area on the map.
In order to optimize the map so that the optimized map reflects more user travel information, the grids corresponding to the start point and the end point of each trip may be determined from the trip data of a large number of users. The data of each grid can thus be determined, and in the subsequent steps the grids can be clustered according to the data of each grid.
The trip data may be acquired by the map processing device from the map application system within a preset time period, and may include the start point and the end point of a trip carried in a service request sent to the server by the service requester terminal.
Also, the data of each grid may be a label of the grid for uniquely identifying the grid in the map, and the label may be represented in the form of a hash code.
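The hash-code label scheme is not fully specified; as a minimal illustrative sketch (the cell size, prefix, and encoding below are assumptions, not the application's actual scheme), a grid label could be derived from the cell's row and column indices:

```python
# Hypothetical sketch: the "OL14F7..." labels seen later appear to encode
# row/column indices; the cell size, prefix, and exact encoding here are
# illustrative assumptions only.
GRID_SIZE_DEG = 0.005  # assumed cell size, roughly 500 m at mid latitudes

def point_to_grid_label(lat: float, lng: float, prefix: str = "G") -> str:
    """Return a label uniquely identifying the grid cell containing the point."""
    i = int(lat // GRID_SIZE_DEG)   # row index of the cell
    j = int(lng // GRID_SIZE_DEG)   # column index of the cell
    return f"{prefix}i{i}j{j}"

label = point_to_grid_label(39.9042, 116.4074)
```

Nearby points then share one label, which is what lets many trips be aggregated per grid.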
In an optional embodiment, if it is detected that a preset time interval has elapsed since map processing optimization was last performed, all trip data received from that time up to the current time (that is, the trip data within the preset time interval) may be obtained. The start point and the end point of each trip in the trip data are then extracted, and the grid corresponding to each start point and each end point in the map is determined, thereby determining the data of each grid.
For example, all trip data within one month may be acquired, the trips shown in Table 1 obtained from that trip data, and the start point and the end point of each trip extracted to obtain the data of each grid. The data of the grids corresponding to the extracted start points may be [400, 722, 3009, 1843, 302], and the data of the grids corresponding to the end points may be [66, 553, 3943, 2749, 401].
TABLE 1

Trip      Start point                End point
Order 0   OL14F7i9664j9125 (400)     OL14F7i9662j9113 (66)
Order 1   OL14F7i9670j9127 (722)     OL14F7i9668j9124 (553)
Order 2   OL14F7i9689j9102 (3009)    OL14F7i9690j9107 (3943)
Order 3   OL14F7i9642j9154 (1843)    OL14F7i9645j9152 (2749)
Order 4   OL14F7i9674j9129 (302)     OL14F7i9673j9127 (401)
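The extraction step above can be sketched as follows; the (label, data) pairs are taken directly from Table 1, while the data-structure layout is an assumption:

```python
# (start_label, start_data) and (end_label, end_data) pairs from Table 1
trips = [
    (("OL14F7i9664j9125", 400), ("OL14F7i9662j9113", 66)),    # Order 0
    (("OL14F7i9670j9127", 722), ("OL14F7i9668j9124", 553)),   # Order 1
    (("OL14F7i9689j9102", 3009), ("OL14F7i9690j9107", 3943)), # Order 2
    (("OL14F7i9642j9154", 1843), ("OL14F7i9645j9152", 2749)), # Order 3
    (("OL14F7i9674j9129", 302), ("OL14F7i9673j9127", 401)),   # Order 4
]

start_data = [start[1] for start, _ in trips]   # [400, 722, 3009, 1843, 302]
end_data = [end[1] for _, end in trips]         # [66, 553, 3943, 2749, 401]
```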
Step 302, network training is performed according to the data of the plurality of grids to obtain a grid vector matrix.
After each grid is determined and the data of each grid is obtained, network training can be performed based on the data of the grids to obtain a grid vector matrix, so that in the subsequent step each grid in the map can be classified through the grid vector matrix.
In an optional embodiment, the data of each grid may be input into a preset model for training. The preset model includes a preset initial grid vector matrix, which is continuously optimized during training. When the loss function of the model's output converges, or the number of training iterations reaches a preset threshold, the training of the model, and therefore of the initial grid vector matrix, is complete, and the grid vector matrix is obtained.
It should be noted that the grid vector matrix may include the vectors of the labels of the plurality of grids, sorted by grid travel frequency. That is, each vector in the grid vector matrix may correspond to the label of a grid in the map, and the vectors in the grid vector matrix are ordered by the travel frequency of their grids.
The travel frequency of a grid represents the number of times the grid appears in the trip data; a higher travel frequency indicates that the grid appears more often in the trip data.
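A minimal sketch of computing travel frequency by counting how often each grid appears as a trip start or end (the toy labels are illustrative):

```python
from collections import Counter

# Toy (start_label, end_label) pairs; real labels would be grid hash codes.
trip_grids = [("A", "B"), ("A", "C"), ("B", "A"), ("A", "B")]

freq = Counter()
for start, end in trip_grids:
    freq[start] += 1   # grid appears as a trip start
    freq[end] += 1     # grid appears as a trip end

# Grids ordered from most to least frequently travelled
ordered = [label for label, _ in freq.most_common()]   # ["A", "B", "C"]
```

The resulting order is what the grid vector matrix is sorted by.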
Step 303, a plurality of grids in the map are clustered according to the grid vector matrix.
After the grid vector matrix is obtained, the grids can be clustered, based on the travel frequency of each grid, according to the vector corresponding to each grid's label in the grid vector matrix. This yields a plurality of clustered grids, which then form the optimized map.
In an optional embodiment, the labels of the grids corresponding to the vectors in the grid vector matrix can be obtained sequentially, in the order of the vectors from top to bottom or from left to right. The corresponding grids are identified in the map according to these labels, and grids at different positions of that order are then clustered according to the order in which they were identified, thereby classifying the plurality of grids.
It should be noted that, in practical applications, after clustering the grids in the map, different types of grids may be displayed in the map by using different identification information based on different types corresponding to each grid.
For example, different types of grids may be displayed by labeling the grids, filling them with different colors, or filling them with different stripes.
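A minimal sketch of assigning display styles per cluster; the palette and the helper below are hypothetical, not part of the application:

```python
# Hypothetical palette; any set of distinct fills would do.
PALETTE = ["#e41a1c", "#377eb8", "#4daf4a", "#984ea3"]

def fill_for(cluster_id: int) -> str:
    """Pick a fill colour for a cluster, cycling when clusters outnumber colours."""
    return PALETTE[cluster_id % len(PALETTE)]

# Map each grid label to the fill of its cluster
grid_clusters = {"Gi1j1": 0, "Gi1j2": 0, "Gi5j9": 1}
fills = {label: fill_for(cid) for label, cid in grid_clusters.items()}
```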
In summary, in the map processing method provided in the embodiment of the present application, the grids of the start points and end points of a plurality of trips are determined from a plurality of pieces of trip data, network training is performed according to the data of the grids to obtain a grid vector matrix, and a plurality of grids in the map are then clustered according to the grid vector matrix. Because the trip data are trained to obtain the grid vector matrix, and the grids in the map are then clustered through the grid vector matrix, the clustered grids can reflect the travel characteristics of users, including the travel density of users in different regions.
Fig. 4 is a schematic flowchart illustrating a map processing method according to another embodiment of the present application, where as shown in fig. 4, the method includes:
Step 401, the grids of the start point and the end point of each trip are determined from a plurality of pieces of trip data.
Wherein the grid corresponds to a preset location area on the map.
Since the process of determining each grid in step 401 is similar to that in step 301, it is not described here again.
Step 402, network training is carried out according to data of a plurality of grids to obtain a grid vector matrix.
The grid vector matrix may include the vectors of the labels of the plurality of grids, sorted by grid travel frequency.
After the data of each grid is obtained, network training can be performed according to the data of each grid, so that a grid vector matrix for classifying the grids is obtained, and in the subsequent step, each grid in the map can be clustered according to the grid vector matrix.
Because the grid vector matrix can be obtained through training in various ways, training through a neural network model is described here only as an example; the present application does not limit the training method used to obtain the grid vector matrix.
Optionally, the data of each grid may be input into a preset neural network model to obtain the probability values from each grid to the plurality of grids. A loss function value of the neural network model is calculated according to the probability values from each grid to the plurality of grids and the real label of the end grid corresponding to each grid, and the vector values in the first vector matrix and the second vector matrix are then adjusted according to the loss function value of the neural network model.
If the loss function curve of the neural network model trained with the adjusted first vector matrix and the adjusted second vector matrix converges, the converged first vector matrix may be determined as the grid vector matrix.
In an optional embodiment, the data of each grid may be input into a preset neural network model, which uses a first vector matrix and a second vector matrix for calculation and analysis and outputs the probability of starting from each grid and ending at each other grid. A loss function value of the neural network model is then calculated from the start grid and the end grid of each trip, and finally the vector values in the first vector matrix and the second vector matrix are adjusted according to the loss function value to obtain an adjusted first vector matrix and an adjusted second vector matrix.
The data of each grid is then trained with the adjusted first vector matrix and the adjusted second vector matrix, and the corresponding loss function value is recalculated. If the curve of the loss function is determined to have converged, training is finished, and the current first vector matrix can be used as the grid vector matrix.
The neural network model may be configured to obtain the probability values from each grid to the plurality of grids according to the data of each grid, the input vector corresponding to the grid's label in the first vector matrix, and the output vector corresponding to the grid's label in the second vector matrix.
That is, during training, the neural network model may receive the input data of each grid, compute the input vector of each grid using the first vector matrix, perform a calculation with that input vector and the second vector matrix, and output the probability value of starting from each grid and ending at each of the other grids.
The row vectors of the first vector matrix may include the input vectors corresponding to the labels of the grids, sorted by travel frequency; an input vector represents a grid as a start grid. The column vectors of the second vector matrix may include the output vectors corresponding to the labels of the grids, sorted by travel frequency; an output vector represents a grid as an end grid.
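The two-matrix setup can be sketched in a skip-gram style as follows; the dimensions, learning rate, and SGD details are assumptions for illustration, not the application's specified training procedure:

```python
import numpy as np

# Rows of W_in are input vectors (grid as start); columns of W_out are output
# vectors (grid as end). Sizes and the training loop are illustrative.
rng = np.random.default_rng(0)
n_grids, dim = 6, 4
W_in = rng.normal(scale=0.1, size=(n_grids, dim))    # first vector matrix
W_out = rng.normal(scale=0.1, size=(dim, n_grids))   # second vector matrix

def forward(start_idx: int) -> np.ndarray:
    """Probability of ending in each grid, given the start grid."""
    hidden = W_in[start_idx]              # input vector of the start grid
    scores = hidden @ W_out               # one score per candidate end grid
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    return exp / exp.sum()

def train_step(start_idx: int, end_idx: int, lr: float = 0.5) -> float:
    """One SGD step on the cross-entropy loss for a (start, end) trip pair."""
    global W_in, W_out                    # both matrices are adjusted
    probs = forward(start_idx)
    loss = -np.log(probs[end_idx])
    grad_scores = probs.copy()
    grad_scores[end_idx] -= 1.0           # d(loss)/d(scores) for softmax + CE
    grad_in = W_out @ grad_scores         # gradient w.r.t. the input vector
    W_out -= lr * np.outer(W_in[start_idx], grad_scores)
    W_in[start_idx] -= lr * grad_in
    return float(loss)

# Repeated trips from grid 0 to grid 3 pull the matrices toward predicting 3.
losses = [train_step(0, 3) for _ in range(50)]
```

After training, `W_in` plays the role of the grid vector matrix described above.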
In addition, in practical applications, the neural network model may be a skip-gram model or another neural network model used for data prediction; this is not limited in the embodiments of the present application.
It should be noted that, in practical applications, when performing network training on the data of the plurality of grids, the grids may first be sorted according to the travel frequency of each grid, and network training then performed on the sorted data of the plurality of grids to obtain the grid vector matrix.
That is, statistics may be computed over a large amount of trip data to determine the number of times each start grid and end grid appears in the trip data, thereby determining the travel frequency of each grid. The grids are then sorted by travel frequency, and the vectors corresponding to each grid in the first vector matrix and the second vector matrix are sorted in the same order, yielding a sorted first vector matrix and a sorted second vector matrix. Finally, the sorted first vector matrix and second vector matrix are trained with the data of the plurality of grids to obtain the grid vector matrix.
It should be noted that, in practical application, a large number of grids exist in the map, and the travel frequencies of most of the grids are low, so that most of the grids with low travel frequencies can be uniformly represented by preset labels.
For example, a frequency threshold may be preset. If the travel frequency of a grid is greater than or equal to the threshold, the hash code that uniquely identifies the grid in the map may be used as its label; grids whose travel frequency is below the threshold may all share the preset label "UNK".
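A minimal sketch of this labeling rule, assuming a threshold value and toy frequency counts (the threshold and counts below are illustrative):

```python
from collections import Counter

FREQ_THRESHOLD = 2  # assumed threshold; the application does not fix a value

# Toy travel-frequency counts keyed by grid hash code
freq = Counter({"OL14F7i9664j9125": 5, "OL14F7i9670j9127": 3, "OL14F7i9999j9999": 1})

def label_for(grid_hash: str) -> str:
    """Keep the hash code for frequent grids; collapse rare ones to "UNK"."""
    return grid_hash if freq[grid_hash] >= FREQ_THRESHOLD else "UNK"
```

This mirrors rare-word handling in word2vec-style vocabularies, keeping the label set small.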
Step 403, a plurality of grids of the map are clustered using a preset hierarchical clustering algorithm, according to the grid vector matrix and a preset connectivity matrix.
The connectivity matrix may include parameters representing the adjacency relationships between grids; after clustering, grids of the same class are located in the same region on the map.
After the grid vector matrix is determined, clustering can be performed according to the vector corresponding to each grid in the grid vector matrix, in combination with the connectivity matrix, to obtain an optimized map composed of the clustered grids.
In an optional embodiment, the vector value of the vector corresponding to each grid can be determined from the grid vector matrix, and hierarchical clustering performed on those vector values. During clustering, the connectivity matrix representing the connectivity among the grids can be taken into account, so that clustering is based both on the vector value of each grid and on the adjacency relationships between grids in the connectivity matrix. This produces a regional clustering result, and a map composed of the clustered grids can be obtained.
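Connectivity-constrained hierarchical clustering can be sketched as follows. This is a toy centroid-linkage implementation under assumed data; a production system might instead use an off-the-shelf routine such as scikit-learn's AgglomerativeClustering with its connectivity argument:

```python
import numpy as np

def cluster(vectors: np.ndarray, adjacent: np.ndarray, n_clusters: int) -> list:
    """Agglomerative clustering that only merges clusters containing adjacent grids."""
    n = len(vectors)
    clusters = [{i} for i in range(n)]          # start with one cluster per grid
    while len(clusters) > n_clusters:
        best, best_dist = None, np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # only clusters linked by at least one adjacency may merge,
                # so each final cluster is a contiguous region on the map
                if not any(adjacent[i, j] for i in clusters[a] for j in clusters[b]):
                    continue
                ca = vectors[list(clusters[a])].mean(axis=0)
                cb = vectors[list(clusters[b])].mean(axis=0)
                d = float(np.linalg.norm(ca - cb))   # centroid distance
                if d < best_dist:
                    best, best_dist = (a, b), d
        if best is None:                        # no adjacent pair left to merge
            break
        a, b = best
        clusters[a] |= clusters.pop(b)
    labels = [0] * n
    for cid, members in enumerate(clusters):
        for i in members:
            labels[i] = cid
    return labels

# Toy example: four grids in a row; the vectors make {0,1} and {2,3} similar,
# and the connectivity matrix links each grid only to its neighbours.
vectors = np.array([[0.0], [0.1], [5.0], [5.1]])
adjacent = np.zeros((4, 4), dtype=bool)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adjacent[i, j] = adjacent[j, i] = True
labels = cluster(vectors, adjacent, n_clusters=2)   # [0, 0, 1, 1]
```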
Note that grids of the same type after clustering have the same category label. That is, the map formed by the clustered grids may include different types of grids, and the grids of each type may have the same category label so that the different types of grids in the map can be clearly shown.
In summary, in the map processing method provided in the embodiment of the present application, the grids of the start points and end points of a plurality of trips are determined from a plurality of pieces of trip data, network training is performed according to the data of the grids to obtain a grid vector matrix, and a plurality of grids in the map are then clustered according to the grid vector matrix. Because the trip data are trained to obtain the grid vector matrix, and the grids in the map are then clustered through the grid vector matrix, the clustered grids can reflect the travel characteristics of users, including the travel density of users in different regions.
Fig. 5 shows a block diagram of a map processing apparatus according to an embodiment of the present application; the apparatus implements functions corresponding to the steps performed by the above method. The apparatus may be understood as the above map processing device, or a processor of the map processing device, or as a component, independent of the map processing device or the processor, that implements the functions of the present application under the control of the map processing device. As shown in the figure, the map processing apparatus may include: a determining module 501, a training module 502, and a clustering module 503:
a determining module 501, configured to determine the grids of the start point and the end point of each trip from a plurality of pieces of trip data, where a grid corresponds to a preset location area on the map;
a training module 502, configured to perform network training according to the data of the plurality of grids to obtain a grid vector matrix, where the grid vector matrix includes: the vectors of the labels of the plurality of grids, sorted by grid travel frequency;
a clustering module 503, configured to cluster the plurality of grids in the map according to the grid vector matrix.
Optionally, the training module 502 is further configured to sort the plurality of grids according to the travel frequency of each grid, and to perform network training according to the sorted data of the plurality of grids to obtain the grid vector matrix.
Optionally, the training module 502 is further configured to input the data of each grid into a preset neural network model to obtain the probability values from each grid to the plurality of grids. The neural network model is used for obtaining the probability values from each grid to the plurality of grids according to the data of each grid, the input vector corresponding to the grid's label in the first vector matrix, and the output vector corresponding to the grid's label in the second vector matrix. The row vectors of the first vector matrix include the input vectors corresponding to the labels of the grids, sorted by travel frequency; an input vector represents a grid as a start grid. The column vectors of the second vector matrix include the output vectors corresponding to the labels of the grids, sorted by travel frequency; an output vector represents a grid as an end grid. The training module 502 is further configured to calculate a loss function value of the neural network model according to the probability values from each grid to the plurality of grids and the real label of the end grid corresponding to each grid, and to adjust the vector values in the first vector matrix and the second vector matrix according to the loss function value. If the loss function curve of the neural network model trained with the adjusted first vector matrix and the adjusted second vector matrix converges, the converged first vector matrix is determined as the grid vector matrix.
Optionally, the clustering module 503 is further configured to cluster the plurality of grids of the map using a preset hierarchical clustering algorithm, according to the grid vector matrix and a preset connectivity matrix. The connectivity matrix includes parameters representing the adjacency relationships between grids; after clustering, grids of the same class are located in the same region on the map.
Optionally, grids of the same type after clustering have the same category label.
In summary, the map processing apparatus provided in the embodiment of the present application determines the grids of the start points and end points of a plurality of trips from a plurality of pieces of trip data, performs network training according to the data of the grids to obtain a grid vector matrix, and clusters a plurality of grids in the map according to the grid vector matrix. Because the trip data are trained to obtain the grid vector matrix, and the grids in the map are then clustered through the grid vector matrix, the clustered grids can reflect the travel characteristics of users, including the travel density of users in different regions.
The modules may be connected or in communication with each other via a wired or wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, etc., or any combination thereof. The wireless connection may comprise a connection over a LAN, WAN, Bluetooth, ZigBee, NFC, or the like, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units.

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the method embodiments, and are not described in detail in this application.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the modules is merely a logical division, and there may be other divisions in actual implementation. A plurality of modules or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or modules through communication interfaces, and may be electrical, mechanical, or in another form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be noted that the above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, the modules may be integrated together and implemented in the form of a System-on-a-Chip (SoC).
Fig. 6 is a schematic structural diagram of a map processing apparatus provided in an embodiment of the present application, and as shown in fig. 6, the map processing apparatus includes: a processor 601, a storage medium 602, and a bus 603, wherein:
the storage medium 602 stores machine-readable instructions executable by the processor 601, and when the map processing device runs, the processor 601 communicates with the storage medium 602 through the bus 603, and the processor 601 executes the machine-readable instructions to perform the above-mentioned method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
Optionally, the present application also provides a program product, such as a computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the above-mentioned method embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method of map processing, the method comprising:
determining grids of the start point and the end point of each trip from a plurality of pieces of trip data, wherein the grids correspond to preset position areas on a map;
performing network training according to the data of the lattices to obtain a lattice vector matrix, wherein the lattice vector matrix comprises: vectors of labels of a plurality of said lattices ordered according to lattice travel frequency;
and clustering a plurality of lattices in the map according to the lattice vector matrix.
2. The method of claim 1, wherein the network training according to the data of the plurality of lattices to obtain a lattice vector matrix comprises:
sorting the grids according to the travel frequency of each grid;
and carrying out network training according to the sorted data of the lattices to obtain the lattice vector matrix.
3. The method of claim 1, wherein the network training according to the data of the plurality of lattices to obtain a lattice vector matrix comprises:
inputting the data of each lattice into a preset neural network model to obtain the probability value of each lattice to a plurality of lattices; the neural network model is used for obtaining probability values of the lattices to the lattices according to the data of the lattices, input vectors corresponding to the labels of the lattices in the first vector matrix and output vectors corresponding to the labels of the lattices in the second vector matrix; wherein the row vectors of the first vector matrix comprise: input vectors corresponding to the labels of all the lattices and sorted according to the travel frequency, wherein the input vectors are used for representing the lattices as vectors of a starting lattice; the column vectors of the second vector matrix include: output vectors corresponding to the labels of all the lattices and sorted according to the travel frequency, wherein the output vectors are used for representing the lattices as vectors for terminating the lattices;
calculating a loss function value of the neural network model according to the probability value of each grid to the plurality of grids and the real label of the termination grid corresponding to each grid;
adjusting vector values in the first vector matrix and the second vector matrix according to the loss function values of the neural network model;
and if the loss function curve of the neural network model obtained by training based on the adjusted first vector matrix and the adjusted second vector matrix converges, determining the converged first vector matrix as the lattice vector matrix.
4. The method of any of claims 1-3, wherein clustering the plurality of grids in the map according to the grid vector matrix comprises:
clustering a plurality of lattices of the map by adopting a preset hierarchical clustering algorithm according to the lattice vector matrix and a preset connectivity matrix; the connectivity matrix includes: parameters for characterizing adjacency relationships between lattices, the lattices of the same class after clustering being located in the same region on the map.
5. The method of claim 4, wherein the lattices of the same type after clustering have the same class label.
6. A map processing apparatus, characterized in that the apparatus comprises:
a determining module, configured to determine grids of the start point and the end point of each trip from a plurality of pieces of trip data, wherein the grids correspond to preset position areas on a map;
a training module, configured to perform network training according to data of the multiple lattices to obtain a lattice vector matrix, where the lattice vector matrix includes: vectors of labels of a plurality of said lattices ordered according to lattice travel frequency;
and the clustering module is used for clustering the lattices in the map according to the lattice vector matrix.
7. The apparatus of claim 6, wherein the training module is further configured to order the plurality of grids according to the travel frequency of each grid, and to perform network training according to the sorted data of the grids to obtain the grid vector matrix.
8. The apparatus of claim 6, wherein the training module is further configured to input the data of each grid into a preset neural network model to obtain probability values from each grid to the plurality of grids; the neural network model is used for obtaining the probability values from each grid to the plurality of grids according to the data of each grid, input vectors corresponding to the labels of the grids in a first vector matrix, and output vectors corresponding to the labels of the grids in a second vector matrix; wherein the row vectors of the first vector matrix comprise: input vectors corresponding to the labels of all the grids, sorted according to travel frequency, each input vector representing its grid as a starting grid; and the column vectors of the second vector matrix comprise: output vectors corresponding to the labels of all the grids, sorted according to travel frequency, each output vector representing its grid as a terminating grid; calculating a loss function value of the neural network model according to the probability values from each grid to the plurality of grids and the real label of the terminating grid corresponding to each grid; adjusting vector values in the first vector matrix and the second vector matrix according to the loss function value of the neural network model; and if the loss function curve of the neural network model trained on the adjusted first vector matrix and the adjusted second vector matrix converges, determining the first vector matrix at convergence of the loss function curve as the grid vector matrix.
9. The apparatus according to any one of claims 6 to 8, wherein the clustering module is further configured to cluster the plurality of grids of the map by adopting a preset hierarchical clustering algorithm according to the grid vector matrix and a preset connectivity matrix; wherein the connectivity matrix comprises parameters characterizing the adjacency relationships between grids, and grids of the same class after clustering are located in the same region on the map.
10. The apparatus of claim 9, wherein the grids of the same class after clustering have the same class label.
11. A map processing apparatus, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the map processing apparatus is operating, the processor executing the machine-readable instructions to perform the steps of the map processing method according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the map processing method according to any one of claims 1 to 5.
CN201910860108.0A 2019-09-11 2019-09-11 Map processing method, map processing device, map processing equipment and computer readable storage medium Pending CN111831763A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910860108.0A CN111831763A (en) 2019-09-11 2019-09-11 Map processing method, map processing device, map processing equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910860108.0A CN111831763A (en) 2019-09-11 2019-09-11 Map processing method, map processing device, map processing equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111831763A true CN111831763A (en) 2020-10-27

Family

ID=72912570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910860108.0A Pending CN111831763A (en) 2019-09-11 2019-09-11 Map processing method, map processing device, map processing equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111831763A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886607A (en) * 2017-03-21 2017-06-23 乐蜜科技有限公司 Urban area division methods, device and terminal device
CN108509434A (en) * 2017-02-23 2018-09-07 中国移动通信有限公司研究院 A kind of method for digging and device of group of subscribers
CN109003107A (en) * 2017-06-06 2018-12-14 北京嘀嘀无限科技发展有限公司 Region partitioning method and device
CN109405840A (en) * 2017-08-18 2019-03-01 中兴通讯股份有限公司 Map data updating method, server and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN111476588B (en) Order demand prediction method and device, electronic equipment and readable storage medium
US11011057B2 (en) Systems and methods for generating personalized destination recommendations
US11079244B2 (en) Methods and systems for estimating time of arrival
CN108701279B (en) System and method for determining a predictive distribution of future points in time of a transport service
US20210064616A1 (en) Systems and methods for location recommendation
CN111862585B (en) System and method for traffic prediction
JP6632723B2 (en) System and method for updating a sequence of services
EP3479306A1 (en) Method and system for estimating time of arrival
CA3028479A1 (en) System and method for determining safety score of driver
WO2018209551A1 (en) Systems and methods for determining an estimated time of arrival
US20190139070A1 (en) Systems and methods for cheat examination
JP7047096B2 (en) Systems and methods for determining estimated arrival times for online-to-offline services
CN111105120B (en) Work order processing method and device
CN108885726A (en) Service time point prediction system and method
CN110839346A (en) System and method for distributing service requests
CN111367575A (en) User behavior prediction method and device, electronic equipment and storage medium
US20210042817A1 (en) Methods and systems for order allocation
CN111489214B (en) Order allocation method, condition setting method, device and electronic equipment
CN114041129A (en) System and method for determining name of boarding point
US20200302362A1 (en) Systems and methods for cheat examination
CN110832513B (en) System and method for on-demand services
CN111831763A (en) Map processing method, map processing device, map processing equipment and computer readable storage medium
CN112036774A (en) Service policy evaluation method, device, equipment and storage medium
WO2019128477A1 (en) Systems and methods for assigning service requests
CN111275232A (en) Method and system for generating future value prediction model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination