CN116964604A - System and method for predicting delivery time of batch order - Google Patents

System and method for predicting delivery time of batch order

Info

Publication number
CN116964604A
Authority
CN
China
Prior art keywords
time
location
processors
order
batch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280012915.7A
Other languages
Chinese (zh)
Inventor
Fan Haijin (范海金)
Zheng Shengzhong (郑盛忠)
Ashish Ranjan Khan (阿希什·兰詹·卡恩)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Grabtaxi Holdings Pte Ltd
Original Assignee
Grabtaxi Holdings Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Grabtaxi Holdings Pte Ltd filed Critical Grabtaxi Holdings Pte Ltd
Publication of CN116964604A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 — Administration; Management
    • G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/06 — Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 — Operations research, analysis or management
    • G06Q10/0633 — Workflow analysis
    • G06Q10/08 — Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083 — Shipping
    • G06Q10/0835 — Relationships between shipper or supplier and carriers
    • G06Q10/08355 — Routing methods
    • G06Q10/10 — Office automation; Time management

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Data Mining & Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A system for predicting the delivery time of a batch order is disclosed. The system may include one or more processors; and a memory having instructions stored therein which, when executed by the one or more processors, cause the one or more processors to: identify a first location, wherein one or more merchants are located at the first location; identify a second location, wherein one or more users may be located at the second location; and predict a dynamic buffer time; wherein the delivery time may include the dynamic buffer time, and wherein the one or more processors are configured to predict the dynamic buffer time based on one or more batch orders between a first bearing, from the first location to the second location, and a second bearing, from the first location to a third location.

Description

System and method for predicting delivery time of batch order
Technical Field
Aspects of the present disclosure relate to a system for predicting delivery times for batch orders. Aspects of the present disclosure relate to a method for predicting delivery time of a batch order. Aspects of the present disclosure relate to a non-transitory computer-readable medium storing computer-executable code comprising instructions for predicting delivery times of batch orders. Aspects of the present disclosure relate to a computer executable code including instructions for predicting delivery times of batch orders.
Background
In delivery markets where driver supply is limited (e.g., food delivery), order allocation to drivers may be inefficient, leaving some users unable to have a driver assigned to their orders. For drivers, the return on investment (ROI) of delivery work may be lower than that of passenger transport work, because extra effort and waiting time are required at the merchant store.
Order batching, i.e., combining multiple orders into a single driver trip, raises the probability that a user is assigned a driver and increases the driver's revenue per hour. Food orders from the same and/or nearby merchant stores may be batched together and delivered to one user or to several nearby user locations. While a driver is en route to a pickup, a batch is formed by adding orders along the driver's route, subject to constraints such as delivery time to the customer, route matching (fewer detours), and capacity.
Although the maximum delivery delay can be bounded by constraints imposed during batch creation, predicting the delivery time of an order remains difficult: the prediction occurs before driver allocation (and batch creation), and the additional delivery time due to batching may lie anywhere between zero and the maximum delay. Current delivery-time prediction methods assume single orders only and do not account for order batching.
Disclosure of Invention
Accordingly, it is desirable to accurately predict the delivery time of a batch order. There is also a need to improve user and driver satisfaction.
Advantages of the present disclosure may include dynamically adjusting delivery times in real time, resulting in more accurate delivery time predictions for batch orders.
Advantages of the present disclosure may include higher user satisfaction due to an increase in order allocation rate.
These and other foregoing advantages and features of the aspects disclosed herein will be apparent from the following description and drawings. Furthermore, it is to be understood that the features of the various aspects described herein are not mutually exclusive and may exist in various combinations and permutations.
The present disclosure relates generally to a system for predicting the delivery time of a batch order. The system may include one or more processors; and a memory having instructions stored therein which, when executed by the one or more processors, cause the one or more processors to: identify a first location, wherein one or more merchants are located at the first location; identify a second location, wherein one or more users may be located at the second location; and predict a dynamic buffer time; wherein the delivery time may include the dynamic buffer time, and wherein the one or more processors are configured to predict the dynamic buffer time based on one or more batch orders between a first bearing, from the first location to the second location, and a second bearing, from the first location to a third location.
According to one embodiment, the one or more processors may be configured to predict the dynamic buffer time based on at least one of: a ratio of batch orders or a number of batch orders between the first bearing, from the first location to the second location, and the second bearing, from the first location to the third location.
According to one embodiment, the dynamic buffer time may be the additional time due to order batching.
According to one embodiment, the first bearing and the second bearing may have an angular difference of 45 degrees.
According to one embodiment, the one or more processors may be configured to allow one or more users at the second location to place batch orders with one or more merchants at the first location.
According to one embodiment, the one or more processors may be configured to assign the same delivery driver to the batch order.
According to one embodiment, the one or more processors may be configured to predict the dynamic buffer time based on context information. The context information may include one or more of: a time of order, a merchant type, a price range, and a basket size of the order.
According to one embodiment, the one or more processors may be configured to predict the dynamic buffer time using machine learning over historical order data. The historical data may include at least one of: an allocation time prediction, a pick-up time prediction, a waiting time prediction, an order preparation time prediction, and a delivery time prediction.
According to one embodiment, the one or more processors may be configured to perform a skewed-data transformation on the historical data before using the historical data to predict the dynamic buffer time.
The present disclosure relates generally to a method for predicting the delivery time of a batch order. The method may include using one or more processors to: identify a first location, wherein one or more merchants are located at the first location; identify a second location, wherein one or more users may be located at the second location; and predict a dynamic buffer time; wherein the delivery time may include the dynamic buffer time, and wherein the one or more processors are configured to predict the dynamic buffer time based on one or more batch orders between a first bearing, from the first location to the second location, and a second bearing, from the first location to a third location.
According to one embodiment, the one or more processors may be configured to predict the dynamic buffer time based on at least one of: a ratio of batch orders or a number of batch orders between the first bearing, from the first location to the second location, and the second bearing, from the first location to the third location.
According to one embodiment, the dynamic buffer time may be the additional time due to order batching.
According to one embodiment, the first bearing and the second bearing may have an angular difference of 45 degrees.
According to one embodiment, the one or more processors may be configured to allow one or more users at the second location to place batch orders with one or more merchants at the first location.
According to one embodiment, the one or more processors may be configured to assign the same delivery driver to the batch order.
According to one embodiment, the one or more processors may be configured to predict the dynamic buffer time based on context information. The context information may include one or more of: a time of order, a merchant type, a price range, and a basket size of the order.
According to one embodiment, the one or more processors may be configured to predict the dynamic buffer time using machine learning over historical order data. The historical data may include at least one of: an allocation time prediction, a pick-up time prediction, a waiting time prediction, an order preparation time prediction, and a delivery time prediction.
According to one embodiment, the one or more processors may be configured to perform a skewed-data transformation on the historical data before using the historical data to predict the dynamic buffer time.
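As an illustration of such a skew-correcting transformation, the sketch below applies a log1p transform, a common choice for right-skewed duration data such as delivery or waiting times. The disclosure does not specify which transformation is used, so the choice of transform and the function names are assumptions.

```python
import math

def skew_transform(durations):
    """Apply a log1p transform to reduce right skew in duration data
    (e.g., delivery or waiting times in minutes). Illustrative only:
    the disclosure does not name a specific transformation."""
    return [math.log1p(d) for d in durations]

def inverse_skew_transform(values):
    """Map transformed values back to the original duration scale."""
    return [math.expm1(v) for v in values]
```

A model would then be trained on the transformed targets and its predictions mapped back to the duration scale with the inverse transform.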
The present disclosure relates generally to a non-transitory computer readable medium storing computer executable code comprising instructions for predicting delivery times of batch orders according to the present disclosure.
The present disclosure generally relates to computer executable code including instructions for predicting delivery times of batch orders according to the present disclosure.
To the accomplishment of the foregoing and related ends, the one or more embodiments comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed and the description is intended to include all such aspects and their equivalents.
Drawings
In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the disclosure. The dimensions of the various features or elements may be arbitrarily expanded or reduced for clarity. In the following description, various aspects of the disclosure are described with reference to the following drawings, in which:
FIG. 1 illustrates a system according to various embodiments.
FIG. 2 illustrates a flow chart of a method according to various embodiments.
FIG. 3 illustrates an exemplary flow chart of various variables of Food Delivery Time (FDT) according to various embodiments.
Fig. 4A illustrates an exemplary bearing chart, according to various embodiments.
FIG. 4B illustrates an exemplary bearing classification table for the exemplary bearing chart of FIG. 4A, according to various embodiments.
Fig. 5A illustrates an exemplary first graph showing symmetrically distributed data, in accordance with various embodiments.
Fig. 5B illustrates an exemplary second graph showing right-skewed distribution data, in accordance with various embodiments.
Fig. 5C illustrates an exemplary third graph showing left-skewed distribution data, in accordance with various embodiments.
FIG. 6 illustrates an exemplary machine learning flow chart in accordance with various embodiments.
Detailed Description
The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural and logical changes may be made without departing from the scope of the present invention. The various embodiments are not necessarily mutually exclusive, as some embodiments may be combined with one or more other embodiments to form new embodiments.
The embodiments described in the context of one of a system or server or method or computer program are similarly valid for other systems or servers, methods or computer programs and vice versa.
Features described in the context of embodiments may be correspondingly applicable to the same or similar features in other embodiments. Features described in the context of embodiments may be correspondingly applicable to other embodiments even if not explicitly described in these other embodiments. Furthermore, the addition and/or combination and/or substitution of features described in the context of an embodiment may be applied to the same or similar features in other embodiments accordingly.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
In the context of various embodiments, the articles "a," "an," and "the" are used with respect to a feature or element to include references to one or more features or elements.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terms "at least one" and "one or more" may be understood to include numerical values greater than or equal to one (e.g., one, two, three, four, [ … ], etc.). The term "a plurality of" may be understood to include a numerical value greater than or equal to two (e.g., two, three, four, five, [ … ], etc.).
"Plural" and "multiple" in the specification and claims expressly refer to a quantity greater than one. Accordingly, any phrase using these words (e.g., "multiple [objects]", "plural [objects]") expressly refers to more than one of said objects. The terms "group of", "set of", "collection of", "series of", "sequence of", "grouping of", and the like, as used in the specification and claims, refer to a quantity equal to or greater than one, i.e., one or more. The terms "proper subset", "reduced subset", and "smaller subset" refer to a subset of a set that is not equal to the set, i.e., a subset containing fewer elements than the set.
The term "data" as used herein may be understood to include information in any suitable analog or digital form, such as provided as a file, a portion of a file, a set of files, a signal or stream, a portion of a signal or stream, a set of signals or streams, and the like. Furthermore, the term "data" may also be used to represent references to information, for example in the form of pointers. However, the term data is not limited to the above examples, and may take various forms and represent any information understood in the art.
For example, the term "processor" or "controller" as used herein may be understood as any kind of entity that allows processing of data, signals, etc. Data, signals, etc. may be processed according to one or more particular functions performed by a processor or controller.
Thus, the processor or controller may be or include analog circuitry, digital circuitry, mixed signal circuitry, logic circuitry, a processor, a microprocessor, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), an integrated circuit, an Application Specific Integrated Circuit (ASIC), or the like, or any combination thereof. Any other type of implementation of the various functions described in further detail below may also be understood as a processor, controller, or logic circuitry. It should be understood that any two (or more) of the processors, controllers, or logic circuits detailed herein may be implemented as a single entity having equivalent functionality or the like, and conversely, any single processor, controller, or logic circuit detailed herein may be implemented as two (or more) separate entities having equivalent functionality or the like.
The term "system" (e.g., drive system, position detection system, etc.) as detailed herein may be understood as a set of interacting elements, which may be, for example and without limitation, one or more mechanical components, one or more electrical components, one or more instructions (e.g., encoded in a storage medium), one or more controllers, and the like.
As used herein, "circuitry" is understood to be any type of logic implementing entity, which may include dedicated hardware or a processor executing software. Thus, the circuitry may be analog circuitry, digital circuitry, mixed signal circuitry, logic circuitry, a processor, a microprocessor, a central processing unit ("CPU"), a graphics processing unit ("GPU"), a digital signal processor ("DSP"), a field programmable gate array ("FPGA"), an integrated circuit, an application specific integrated circuit ("ASIC"), or the like, or any combination thereof. Any other type of implementation of the various functions described in further detail below may also be understood as a "circuit". It should be understood that any two (or more) of the circuits detailed herein may be implemented as a single circuit having substantially equivalent functionality, and conversely, any single circuit detailed herein may be implemented as two (or more) separate circuits having substantially equivalent functionality. Further, reference to "a circuit" may refer to two or more circuits that together form a single circuit.
As used herein, "memory" may be understood as a non-transitory computer-readable medium in which data or information may be stored for retrieval. Thus, references herein to "memory" may be understood to refer to volatile or non-volatile memory, including random access memory ("RAM"), read only memory ("ROM"), flash memory, solid state memory, magnetic tape, hard disk drives, optical disk drives, and the like, or any combination thereof. Furthermore, it should be understood that registers, shift registers, processor registers, data buffers, etc. are also encompassed herein by the term memory. It should be appreciated that a single component referred to as "memory" or "one memory" may be composed of more than one different type of memory and thus may refer to a collective component comprising one or more types of memory. It will be readily appreciated that any single memory component may be separated into multiple, commonly equivalent memory components, and vice versa. Further, while the memory may be described as being separate from one or more other components (such as in the figures), it is to be understood that the memory may be integrated within another component, such as on a common integrated chip.
The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and aspects in which the disclosure may be practiced. These aspects are described in sufficient detail to enable those skilled in the art to practice the disclosure. Various aspects are provided for the present system and for the method. It should be understood that the basic nature of the system also applies to the method and vice versa. Other aspects may be utilized and structural and logical changes may be made without departing from the scope of the present disclosure. The various aspects are not necessarily mutually exclusive, as some aspects may be combined with one or more other aspects to form new aspects.
For easier understanding and putting into practice, the present system, method and other specific aspects will now be described by way of example, and not limitation, with reference to the accompanying drawings. Repeated descriptions of features and characteristics may be omitted for brevity.
It should be understood that any property described herein for a specific system or device may also hold for any system or device described herein. It should also be understood that any property described herein for a specific method may also hold for any method described herein. Furthermore, it should be understood that for any device, system, or method described herein, not all of the described components or operations need necessarily be included; only some (but not all) components or operations may be included.
The term "comprising" is to be interpreted as having a broad meaning similar to the term "including", and is to be interpreted as specifying the presence of the stated integers, operations, or groups of integers or operations, but not excluding any other integer, operation, or group of integers or operations. This definition also applies to variants of the term "comprising", such as "comprise" and "comprises".
The term "coupled" (or "connected") herein may be understood as electrically or mechanically coupled, for example attached or fixed, or merely touching without any fixing, and it is understood that either direct coupling or indirect coupling (in other words, coupling without direct contact) may be provided.
Fig. 1 illustrates a system 100 according to various embodiments.
According to various embodiments, the system 100 may include a server 110 and/or a user device 120.
In various embodiments, the server 110 and the user device 120 may communicate with each other over a communication network 130. Although FIG. 1 shows lines connecting the server 110 and the user device 120 to the communication network 130, the server 110 and the user device 120 need not be physically connected to each other, for example by a cable. In one embodiment, the server 110 and the user device 120 may communicate wirelessly over a communication network 130 using an Internet communication protocol, or over a mobile cellular communication network.
In various embodiments, the server 110 may be a single server as schematically illustrated in FIG. 1, or have functions performed by the server 110 distributed across multiple server components. In one embodiment, the server 110 may include one or more server processors 112. In one embodiment, the various functions performed by server 110 may be performed on one or more server processors. In one embodiment, each particular function of the various functions performed by server 110 may be performed by a particular server processor(s) of the one or more server processors.
In one embodiment, the server 110 may include a memory 114. In one embodiment, server 110 may also include a database. In one embodiment, the memory 114 and database may be one component or may be separate components. In one embodiment, the memory 114 of the server may include computer executable code defining the functions that the server 110 performs under the control of one or more server processors 112. In one embodiment, the database and/or memory 114 may include historical data of past order services, such as user location and/or merchant location, and/or time of order, and/or merchant type and/or price range and/or basket size of orders and/or previous single order history data and/or previous batch order data. In one embodiment, the memory 114 may include or be a computer program product, such as a non-transitory computer-readable medium.
According to various embodiments, a computer program product may store computer executable code comprising instructions for predicting delivery times of batch orders according to various embodiments. In one embodiment, the computer executable code may be a computer program. In one embodiment, a computer program product may be a non-transitory computer readable medium. In one embodiment, the computer program product may be in the system 100 and/or the server 110.
In some embodiments, server 110 may also include input and/or output modules that allow server 110 to communicate over communications network 130. In one embodiment, server 110 may also include a user interface for user control of server 110. In one embodiment, the user interface may include, for example, computing peripherals such as a display monitor, user input devices, e.g., touch screen devices, and computer keyboards.
In one embodiment, the user device 120 may include a user device memory 122. In one embodiment, the user device 120 may include a user device processor 124. In one embodiment, the user device memory 122 may include computer executable code defining the functions performed by the user device 120 under the control of the user device processor 124. In one embodiment, the user device memory 122 may include or be a computer program product, such as a non-transitory computer readable medium.
In one embodiment, the user device 120 may also include input and/or output modules that allow the user device 120 to communicate over the communication network 130. In one embodiment, the user device 120 may also include a user interface for a user to control the user device 120. In one embodiment, the user interface may be a touch panel display. In one embodiment, the user interface may include a display monitor, a keyboard, or buttons.
In one embodiment, the system 100 may be used to predict the delivery time of a batch order. In one embodiment, the memory 114 may have instructions stored therein. In one embodiment, the processor 112 may be configured to identify the first location. The first location may be in, or may be, a first geographic hash. The term "geographic hash" (geohash) may refer to a predefined geocoding unit partitioning a city or country. In various embodiments, the first location may be a building, such as a shopping mall or a food center. In various embodiments, the first location may be defined based on a predetermined radius or distance. In various embodiments, one or more merchants are located at the first location.
In one embodiment, the processor 112 may be configured to identify the second location. The second location may be in the second geographic hash, or may be the second geographic hash. In various embodiments, the second location may be a defined area, such as a residential or office building or a predefined neighborhood. In various embodiments, the second location may be defined based on a predetermined radius or distance. One or more users may be located at the second location.
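For illustration, the sketch below encodes a latitude/longitude pair into a standard geohash string, so that merchant and user coordinates falling in the same cell share a common identifier. The disclosure does not mandate this particular geocoding scheme; treat it as one possible realisation of a "geographic hash".

```python
# Minimal geohash encoder (standard interleaved-bit algorithm).
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=6):
    """Encode a coordinate as a geohash cell; nearby points share prefixes,
    so merchants or users in the same cell can be grouped cheaply."""
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    chars = []
    bit, ch = 0, 0
    even = True  # even bits refine longitude, odd bits refine latitude
    while len(chars) < precision:
        rng = lon_range if even else lat_range
        val = lon if even else lat
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            ch = (ch << 1) | 1
            rng[0] = mid
        else:
            ch = ch << 1
            rng[1] = mid
        even = not even
        bit += 1
        if bit == 5:  # every 5 bits yield one base-32 character
            chars.append(BASE32[ch])
            bit, ch = 0, 0
    return "".join(chars)
```

Orders whose pickup points map to the same geohash prefix could then be treated as sharing a first location.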
In one embodiment, the processor 112 may be configured to predict the dynamic buffer time. Delivery time may include dynamic buffer time. In one embodiment, the dynamic buffering time may be additional time due to order batching.
In one embodiment, the processor 112 may be configured to predict the dynamic buffer time based on one or more batch orders between a first bearing, from the first location to the second location, and a second bearing, from the first location to the third location.
According to one embodiment, the first bearing and the second bearing may have an angular difference of 45 degrees.
In one embodiment, the third location may be in, or may itself be, a third geohash. In various embodiments, the third location may be defined based on a predetermined radius or distance. In one embodiment, the third location may be proximate to the second location, wherein the third location may be within a predetermined distance of the second location.
In one embodiment, the processor 112 may be configured to predict the dynamic buffering time based on at least one of: a ratio of batch orders, or a number of batch orders, between the first bearing (from the first location to the second location) and the second bearing (from the first location to the third location).
In one embodiment, the processor 112 may be configured to allow one or more users at the second location to place batch orders with one or more merchants at the first location. In one embodiment, a first user (e.g., user A) at the second location may place a batch order with a first merchant (e.g., merchant A) and a second merchant (e.g., merchant B) at the first location. In another embodiment, a first user (e.g., user A) and a second user (e.g., user B) may each order from the first merchant (e.g., merchant A). In one embodiment, a first user (e.g., user A) and a second user (e.g., user B) at the second location may order from a first merchant (e.g., merchant A) and a second merchant (e.g., merchant B) at the first location, respectively.
According to one embodiment, the one or more processors may be configured to assign the same delivery driver to the batch order. In one embodiment, the same delivery driver may be assigned to deliver orders from one or more merchants at a first location to one or more users at a second location.
According to an embodiment, the one or more processors may be configured to predict the dynamic buffering time based on the context information. The context information may include one of a time of order, a merchant type, a price range, and a basket size of the order.
According to one embodiment, the one or more processors may be configured to predict the dynamic buffering time using machine learning on historical data of orders. The historical data may be at least one of an allocation time (AT) prediction, a pick-up time-in-transit (PRT) prediction, a wait time (WT) prediction, an order preparation time prediction (e.g., a food preparation time (FPT) prediction), and a delivery time-in-transit (DRT) prediction.
According to one embodiment, the history data of the order may be used as input to a machine learning system and/or model. The predicted dynamic buffering time may be an output of the machine learning system. In one embodiment, the system 100 is configured to predict dynamic buffering time based on the probability that an order is batched and make dynamic adjustments at the merchant level in real-time. According to one embodiment, the delivery time may include a dynamic buffer time and one or more of an allocation time prediction (AT), a pick-up time-in-transit (PRT) prediction, a Wait Time (WT) prediction, an order preparation time prediction (e.g., a Food Preparation Time (FPT) prediction), and a delivery time-in-transit (DRT) prediction.
According to one embodiment, the one or more processors 112 may be configured to perform a skewed-data transformation on the historical data before using the historical data to predict the dynamic buffering time. In one embodiment, the transformation may be used to correct right or left skew in the historical data. The one or more processors 112 may be configured to determine whether the historical data is skewed based on its mean and median. For example, if the mean and median are not the same or not substantially similar, the processor 112 may determine that the data is skewed. The processor may determine whether the mean and median are substantially similar based on a predetermined threshold: if the difference between the mean and median is within the threshold, the processor 112 may determine that they are substantially similar; otherwise, the processor 112 may determine that they are not substantially similar and that the historical data is skewed.
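A minimal sketch of this mean-versus-median skew check follows; the relative threshold of 10% is an assumed illustrative value, since the disclosure only states that a predetermined threshold is used:

```python
import statistics

def is_skewed(values, rel_threshold=0.1):
    """Treat a sample as skewed when its mean and median differ by more
    than a fraction of the median (assumed threshold; the disclosure
    leaves the predetermined threshold unspecified)."""
    mean = statistics.mean(values)
    median = statistics.median(values)
    return abs(mean - median) > rel_threshold * abs(median)

# A symmetric sample passes; a long right tail pulls the mean above the median.
symmetric = [10, 20, 30, 40, 50]       # mean == median == 30
right_tailed = [10, 12, 14, 16, 300]   # mean 70.4, median 14
```

An absolute threshold would work equally well; the essential step is comparing mean and median before deciding whether to transform the data.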
In one embodiment, advantages of the present disclosure may include dynamically adjusting delivery times in real time, resulting in more accurate delivery time predictions, with an improvement in prediction accuracy for batch orders of about 15%.
Fig. 2 illustrates a flow chart of a method 200 according to various embodiments.
According to various embodiments, a method 200 for predicting delivery times of batch orders may be provided. In some embodiments, the method 200 may include a step 202 of identifying a first location using one or more processors of a system (e.g., system 100). One or more merchants may be located at the first location. In one embodiment, the method 200 may include a step 204 of identifying, using the one or more processors, a second location. One or more users may be located at the second location. In one embodiment, the method 200 may include a step 206 of predicting, using the one or more processors, the dynamic buffer time based on one or more batch orders between a first bearing, from the first location to the second location, and a second bearing, from the first location to a third location. The delivery time may include the dynamic buffer time.
Steps 202 through 206 are shown in a particular order; however, other arrangements are possible. For example, in some embodiments, step 202 may be performed after step 204. In some cases, steps may also be combined. Any suitable order of steps 202 through 206 may be used.
FIG. 3 illustrates an exemplary flow chart 300 of various variables of a Food Delivery Time (FDT) according to various embodiments.
In one embodiment, the FDT prediction may be divided into one or more components, such as an allocation time prediction (AT), a pick-up time-in-transit prediction (PRT), a wait time prediction (WT), a food preparation time prediction (FPT), a delivery time-in-transit prediction (DRT), and a dynamic buffer time prediction (DBT).
Although FIG. 3 illustrates food delivery, other types of delivery orders may also be suitable, such as grocery orders.
In one embodiment, in the exemplary flowchart 300, an order 302 may be received. The system 100 may predict an allocation time 304 between receiving an order 302 and an order allocation 306. The allocation time forecast may be a forecast of the time required to allocate the driver to the order. The allocation time prediction may be predicted based on supply and demand conditions.
In one embodiment, the system 100 may predict a pick-up time-in-transit 308 between the order allocation 306 and the allocated driver's arrival time 310 at a merchant (e.g., restaurant). The pick-up time-in-transit prediction 308 may be a prediction of the time required for the driver to travel from his/her current location to the restaurant location. The pick-up time-in-transit prediction 308 may be predicted based on at least one of location, vehicle speed, and traffic conditions.
In one embodiment, the system 100 may predict a food preparation time 312 between the order allocation 306 and a food pick-up time 314 at the merchant. The food preparation time prediction 312 may be a prediction of the time required for the merchant to prepare the food. The food preparation time prediction 312 may be predicted based on historical data and/or contextual data.
In one embodiment, the system 100 may predict a wait time 316 between the assigned driver's arrival 310 at the merchant and the food pick-up time 314 at the merchant. The wait time prediction 316 may be a prediction of the time the driver must wait for the food. The wait time prediction 316 may be predicted based on historical data and/or contextual data.
In one embodiment, the system 100 may predict a delivery time-in-transit 318 between the food pick-up time 314 and the delivery time 320. The delivery time-in-transit prediction 318 may be a prediction of the time required for the driver to travel from the restaurant's location to the customer. The delivery time-in-transit prediction 318 may be predicted based on at least one of location, vehicle speed, and traffic conditions.
In one embodiment, the dynamic buffer time prediction (DBT) may be used to predict delivery times for batch orders. DBT may be an additional time component for improving the FDT prediction of a batch order. In one embodiment, DBT may relate to the other components via the formula: FDT = AT + max(PRT + WT, FPT) + DRT + DBT.
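As a sketch, the formula composes as follows; the component times in minutes are made-up illustrative values, not from the disclosure:

```python
def predict_fdt(at, prt, wt, fpt, drt, dbt):
    """FDT = AT + max(PRT + WT, FPT) + DRT + DBT.
    Either the driver arrives and waits for the food (PRT + WT) or the
    food is ready before the driver arrives (FPT), so only the larger
    of the two branches contributes."""
    return at + max(prt + wt, fpt) + drt + dbt

# Illustrative minutes: allocation 3, pick-up ride 8, wait 2,
# food preparation 12, delivery ride 15, dynamic buffer 4.
fdt = predict_fdt(at=3, prt=8, wt=2, fpt=12, drt=15, dbt=4)  # 34 minutes
```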
Fig. 4A illustrates an exemplary bearing chart 400 in accordance with various embodiments. Fig. 4B illustrates an exemplary bearing classification table 450 for the exemplary bearing chart 400 of Fig. 4A, in accordance with various embodiments.
In one embodiment, the dynamic buffer time may be a component of an Estimated Time of Arrival (ETA) that accounts for additional delivery time due to at least one of order batching and merchant workload levels.
In one embodiment, the dynamic buffering time may be predicted based on various features, such as batch features and/or FDT features. Batch features and FDT features may be aggregated based on historical data. The historical data may be aggregated at predetermined time intervals, for example at the (restaurant x weekday/weekend x 10 minutes) level.
In one embodiment, the exemplary bearing chart 400 may include a first location 402. One or more merchants may be located at the first location 402. The exemplary bearing chart 400 may include a second location 404. One or more users may be located at the second location 404.
In one embodiment, the exemplary bearing chart 400 may include a first angle 406, a second angle 408, a third angle 410, and a fourth angle 412. The first angle 406 may be perpendicular to the second angle 408 and the fourth angle 412. The first angle 406 may be parallel to the third angle 410. The third angle may be perpendicular to the second angle and the fourth angle. The first angle 406 may be 0 degrees. The second angle 408 may be 90 degrees. The third angle 410 may be 180 degrees. The fourth angle 412 may be 270 degrees.
In one embodiment, a bearing is a direction or position relative to a fixed point, or a direction of movement. In one embodiment, the fixed point may be the first location 402 of the one or more merchants. Bearings may be measured in degrees. In one embodiment, the exemplary bearing chart 400 may include a plurality of bearings (e.g., 8 bearings). Each bearing may span a predetermined angle (e.g., 45 degrees). The total angle of all bearings may be 360 degrees. For example, bearing 1 414a may span 0 to 45 degrees, bearing 2 414b 45 to 90 degrees, bearing 3 414c 90 to 135 degrees, bearing 4 414d 135 to 180 degrees, bearing 5 414e 180 to 225 degrees, bearing 6 414f 225 to 270 degrees, bearing 7 414g 270 to 315 degrees, and bearing 8 414h 315 to 360 degrees.
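A sketch of how an order's direction sector could be derived: compute the initial great-circle bearing from the merchant's coordinates to the customer's, then bucket it into one of the eight 45-degree sectors described above. The bucketing convention, sector numbering at the boundaries, and the example coordinates are assumptions for illustration:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def bearing_sector(angle_deg):
    """Map an angle to one of 8 sectors of 45 degrees each, numbered 1..8."""
    return int(angle_deg % 360 // 45) + 1

# A customer roughly east-north-east of the merchant (~63 degrees)
# falls in sector 2 (45-90 degrees).
sector = bearing_sector(bearing_deg(1.30, 103.85, 1.35, 103.95))
```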
In one embodiment, the one or more processors are configured to predict the dynamic buffer time based on one or more batch orders between a first bearing (e.g., bearing 2 414b), from the first location 402 to the second location 404, and a second bearing (e.g., bearing 3 414c), from the first location to a third location. The third location lies within the second bearing (e.g., bearing 3 414c).
In one embodiment, the batch features may include the proportion of batch orders and/or the number of completed batch orders in the current order's direction (i.e., the first bearing) and in nearby orders' directions (i.e., the second bearing), which may be calculated based on the first bearing between the merchant's first location 402 (Mex) and the user's second location 404 (Pax), and the second bearing between the first location and the third location. The proportion of batch orders may be a proxy for the likelihood that an order is batched. The number of completed batch orders may be a proxy for order density, which correlates with the likelihood that an order is batched.
In one embodiment, the FDT features may include the median and/or mean of the following calculated variable: FDT − {AT + max(PRT + WT, FPT) + DRT}. This feature may be a proxy for the historical batch buffer time.
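A sketch of aggregating that residual over historical orders; the field names and order values are illustrative assumptions:

```python
import statistics

def batch_buffer_proxy(orders):
    """Per-order residual FDT - {AT + max(PRT + WT, FPT) + DRT},
    aggregated as (median, mean) per the FDT features described above."""
    residuals = [
        o["fdt"] - (o["at"] + max(o["prt"] + o["wt"], o["fpt"]) + o["drt"])
        for o in orders
    ]
    return statistics.median(residuals), statistics.mean(residuals)

history = [
    {"fdt": 34, "at": 3, "prt": 8, "wt": 2, "fpt": 12, "drt": 15},  # residual 4
    {"fdt": 30, "at": 3, "prt": 8, "wt": 2, "fpt": 12, "drt": 15},  # residual 0
]
median_dbt, mean_dbt = batch_buffer_proxy(history)  # median 2, mean 2
```

In production these aggregates would be computed per (restaurant × weekday/weekend × 10-minute slot), as noted above.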
In one embodiment, the contextual features (or contextual information) may include at least one of the user's location, the time of the order (e.g., hour, day of the week, or weekday/weekend), the merchant type, the order price, and the order basket size.
In one embodiment, real-time features in the ETA formula, such as AT, FPT, WT, PRT, and DRT, may be predicted under a single-order assumption. The predicted values may then be applied as input features for the DBT prediction.
In one embodiment, a machine learning model may be applied to predict Dynamic Buffer Time (DBT). The machine learning model may predict the DBT based on at least one of the FDT of the historical order, the aggregate characteristics, and the output of other time components.
For each historical order, the FDT may be set as the target output of the ML model, and its corresponding features as the input. The predicted output of the ML model may be denoted f(input). The model training process may attempt to minimize the loss between the target and the predicted output. The loss function l(order) may be the standard mean squared error (MSE): l(order) = (f(input) − FDT)².
By repeating the training process described above on historical orders, the training procedure can automatically produce an optimized model f(input) with minimal loss l(order).
Various ML models may be applied to this task, for example: gradient boosting decision trees (GBDT), neural networks (NN), and logistic regression (LR).
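As a toy stand-in for those models (GBDT, NN, and LR are not sketched here), the following fits a one-feature linear model by gradient descent on the same MSE loss l(order) = (f(input) − FDT)², which illustrates the minimization loop; the data and hyperparameters are invented:

```python
def mse_loss(pred, target):
    """l(order) = (f(input) - FDT)^2 for a single order."""
    return (pred - target) ** 2

def train_linear(xs, ys, lr=0.01, epochs=5000):
    """Fit f(x) = w*x + b by gradient descent on the mean of mse_loss.
    A toy stand-in for the GBDT/NN/LR models named in the text."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Invented data generated by FDT = 2*feature + 1; training recovers w≈2, b≈1.
w, b = train_linear([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
```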
Fig. 5A illustrates an exemplary first graph showing symmetrically distributed data according to various embodiments. Fig. 5B illustrates an exemplary second graph showing right-skewed distribution data, in accordance with various embodiments. Fig. 5C illustrates an exemplary third graph showing left-skewed distribution data in accordance with various embodiments.
In fig. 5A, an exemplary first graph 500 illustrates symmetric distribution data, where the mean and median values are substantially similar.
In fig. 5B, the mean is greater than the median, which corresponds to a long tail to the right of the data's center in graph 510.
In fig. 5C, the median is greater than the mean, which corresponds to a long tail to the left of the data's center in graph 520.
Depending on the nature of the data, transformation techniques may be applied to avoid skew in the model inputs and outputs. Transformation methods can be used to overcome data skew, such as long tails to the left or right of the data's center.
In one embodiment, the input features may be transformed. The values of the other FDT components (including AT, FPT, WT, PRT, and DRT) may be severely right-skewed, due to a long tail to the right of the center, or severely left-skewed, due to a long tail to the left of the center.
In one embodiment, if the raw features (e.g., skewed data) are applied as input, the accuracy of the DBT prediction will suffer. These inputs may be logarithmically transformed: f(x) = log(α + x), where α, replacing the constant value 1 used in most conventional applications, is a variable that controls the transformation. It may be set to the 5th or 10th percentile of the corresponding input, which reduces data skew and yields better performance.
In one embodiment, the output (e.g., the target output) may be transformed, because a skew problem may also occur in the output, the FDT. For example, because the FDT of a batch order is long, batch orders may introduce a long tail to the right of the data's center. A transformation may be applied to the output, the ML model used to predict the transformed values, and the transformed values mapped back to obtain the final DBT prediction.
The transformation function is not limited to the logarithmic transform. Any other suitable transform may be used, such as the square-root transform or the Box-Cox transform.
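A sketch of the log transform with a percentile-based α, plus its inverse for mapping model outputs back to the original scale; the crude percentile pick is an assumption, since the text only says α may be the 5th or 10th percentile of the corresponding input:

```python
import math

def log_transform(values, alpha=None):
    """f(x) = log(alpha + x); alpha defaults to a crude 5th-percentile
    estimate of the input, replacing the conventional constant 1."""
    if alpha is None:
        s = sorted(values)
        alpha = s[max(0, int(0.05 * len(s)) - 1)]
    return alpha, [math.log(alpha + x) for x in values]

def inverse_log_transform(alpha, transformed):
    """Map transformed model outputs back: x = exp(f) - alpha."""
    return [math.exp(f) - alpha for f in transformed]

# Round-tripping recovers the original (right-skewed) values.
alpha, t = log_transform([1.0, 2.0, 4.0, 40.0], alpha=1.0)
restored = inverse_log_transform(alpha, t)
```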
FIG. 6 illustrates an exemplary machine learning flow diagram 600 in accordance with various embodiments.
In a first step S1 602, data may be prepared. In this step, data for historical orders (e.g., data from the last month) may be collected. The data of the historical order may include detailed information of the time of delivery of the food item.
The collected data may be used as training data 604, and may be randomly sampled and/or split into training, validation, and test sets at a predetermined ratio (e.g., 60:10:30).
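A sketch of that random split; the seed and helper name are assumptions:

```python
import random

def split_orders(orders, ratios=(0.6, 0.1, 0.3), seed=42):
    """Shuffle historical orders and split into training, validation,
    and test sets at the stated 60:10:30 ratio."""
    shuffled = orders[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(ratios[0] * len(shuffled))
    n_val = int(ratios[1] * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_orders(list(range(100)))  # 60 / 10 / 30 orders
```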
In a second step S2 606, based on the collected data, primary features such as AT 608, FPT 610, WT 612, PRT 614, DRT 616, batch features 618, FDT features 620, and context features 622 may be obtained through statistical calculation or aggregation, according to various embodiments.
In a third step S3 624, an appropriate model 626 may be selected. For example, a gradient boosting decision tree (GBDT) may be used as the model.
In training, a validation set may be used to assess and improve the model's ability to predict the actual FDT.
Once training is complete, the test data set may be used to evaluate performance. The evaluation tests the model on data that was never used for training.
In a fourth step S4 630, deployment may be performed. The ML model can be integrated into an existing production environment, where it accepts features as input and returns DBT predictions. The purpose of this step is to provide predictions from the trained ML model as a service to others via an Application Programming Interface (API).
While the present disclosure has been particularly shown and described with reference to particular aspects, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the present disclosure as defined by the appended claims. The scope of the disclosure is therefore indicated by the appended claims, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (20)

1. A system for predicting delivery time of a batch order, the system comprising:
One or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to:
identifying a first location, wherein one or more merchants are located at the first location;
identifying a second location, wherein one or more users are located at the second location;
predicting dynamic buffering time;
wherein the delivery time includes the dynamic buffering time, and
wherein the one or more processors are configured to predict the dynamic buffer time based on one or more batch orders between a first bearing, from the first location to the second location, and a second bearing, from the first location to a third location.
2. The system of claim 1, wherein the one or more processors are configured to predict the dynamic buffering time based on at least one of: a ratio of batch orders, or a number of batch orders, between the first bearing and the second bearing.
3. The system of claim 1 or 2, wherein the dynamic buffering time is additional time due to order batching.
4. A system according to any of claims 1-3, wherein the first bearing and the second bearing have an angular difference of 45 degrees.
5. The system of any of claims 1-4, wherein the one or more processors are configured to allow the one or more users at the second location to place batch orders with the one or more merchants at the first location.
6. The system of claim 5, wherein the one or more processors are configured to assign the same delivery driver to the batch order.
7. The system according to any one of claim 1 to 6,
wherein the one or more processors are configured to predict the dynamic buffering time based on context information, and
wherein the contextual information includes one of a time of order, a merchant type, a price range, and a basket size of the order.
8. The system according to any one of claim 1 to 7,
wherein the one or more processors are configured to predict the dynamic buffering time using machine learning of historical data of a single order, and
wherein the history data includes at least one of: allocation time prediction, pick time-to-transmit prediction, wait time prediction, order preparation time prediction, and delivery time-to-transmit prediction.
9. The system of any of claims 1-8, wherein the one or more processors are configured to perform a biased data transformation on the historical data prior to using the historical data to predict the dynamic buffering time.
10. A method for predicting delivery time of a batch order, the method comprising using one or more processors to:
identifying a first location, wherein one or more merchants are located at the first location;
identifying a second location, wherein one or more users are located at the second location;
predicting dynamic buffering time;
wherein the delivery time includes the dynamic buffering time, and
wherein the one or more processors are configured to predict the dynamic buffer time based on one or more batch orders between a first bearing, from the first location to the second location, and a second bearing, from the first location to a third location.
11. The method of claim 10, comprising using the one or more processors to predict the dynamic buffering time based on at least one of: a ratio of batch orders, or a number of batch orders, between the first bearing and the second bearing.
12. The method of claim 10 or 11, wherein the dynamic buffering time is additional time due to order batching.
13. The method of any of claims 10-12, wherein the first bearing and the second bearing have an angular difference of 45 degrees.
14. The method of any of claims 10-13, comprising using the one or more processors to:
allowing the one or more users at the second location to place batch orders with the one or more merchants at the first location.
15. The method of claim 14, comprising using the one or more processors to assign the batch order to the same delivery driver.
16. The method of any of claims 10-15, comprising using the one or more processors to:
predicting the dynamic buffering time based on context information;
wherein the contextual information includes one of a time of order, a merchant type, a price range, and a basket size of the order.
17. The method of any of claims 10-16, comprising using the one or more processors to:
predicting the dynamic buffering time using machine learning of historical data of a single order;
wherein the historical data includes at least one of: an allocation time prediction, a pick-up time-in-transit prediction, a wait time prediction, an order preparation time prediction, and a delivery time-in-transit prediction.
18. The method of any of claims 10-17, comprising using the one or more processors to perform a biased data transformation on the historical data prior to using the historical data to predict the dynamic buffering time.
19. A non-transitory computer readable medium storing computer executable code comprising instructions for predicting delivery times of batch orders according to any one of claims 1 to 18.
20. Computer executable code comprising instructions for predicting delivery time of a batch order according to any one of claims 1 to 19.
CN202280012915.7A 2021-05-19 2022-05-18 System and method for predicting delivery time of batch order Pending CN116964604A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SG10202105265Y 2021-05-19
SG10202105265Y 2021-05-19
PCT/SG2022/050330 WO2022245295A2 (en) 2021-05-19 2022-05-18 System and method for predicting delivery time for batch orders

Publications (1)

Publication Number Publication Date
CN116964604A (en) 2023-10-27

Family

ID=84141984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280012915.7A Pending CN116964604A (en) 2021-05-19 2022-05-18 System and method for predicting delivery time of batch order

Country Status (3)

Country Link
KR (1) KR20240009915A (en)
CN (1) CN116964604A (en)
WO (1) WO2022245295A2 (en)



Also Published As

Publication number Publication date
WO2022245295A2 (en) 2022-11-24
WO2022245295A3 (en) 2023-01-19
KR20240009915A (en) 2024-01-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination