CN110414731B - Order distribution method and device, computer readable storage medium and electronic equipment - Google Patents

Publication number: CN110414731B (granted; published as CN110414731A)
Application number: CN201910667271.5A
Authority: CN (China)
Original and current assignee: Beijing Sankuai Online Technology Co Ltd
Inventors: 周越, 侯俊杰, 潘基泽
Other languages: Chinese (zh)
Prior art keywords: order, rider, determining, path, assigned
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)


Classifications

    • G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem" (Section G — Physics; G06 — Computing; Calculating or Counting; G06Q — ICT specially adapted for administrative, commercial, financial, managerial or supervisory purposes)
    • G06Q10/06311 — Scheduling, planning or task assignment for a person or group (under G06Q10/0631 — Resource planning, allocation, distributing or scheduling for enterprises or organisations)
    • G06Q10/08355 — Routing methods (under G06Q10/08 — Logistics; G06Q10/0835 — Relationships between shipper or supplier and carriers)
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks (under G06N — Computing arrangements based on specific computational models; G06N3/02 — Neural networks)
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The specification discloses an order distribution method and apparatus, a computer-readable storage medium, and an electronic device. The matching degree between each rider and the order to be assigned is determined based on the predicted delivery time of each order in the rider's path plan, and the order to be assigned is distributed accordingly. Because the predicted delivery time of each order in the path plan is determined independently, rather than by adding up the predicted durations of several road segments, prediction errors do not accumulate; the predicted delivery times are therefore more accurate, the resulting distribution is more accurate, and distribution efficiency is improved.

Description

Order distribution method and device, computer readable storage medium and electronic equipment
Technical Field
The present application relates to the field of logistics distribution technologies, and in particular, to a method and an apparatus for order allocation, a computer-readable storage medium, and an electronic device.
Background
Currently, to improve delivery efficiency, a take-out delivery platform generally assigns each order to be assigned to a rider at scheduling time according to the matching degree between the order and each rider. When determining this matching degree, the platform also considers how assigning the order would affect the rider's existing delivery tasks; in particular, whether, after the order to be assigned is allocated to the rider, the rider can still deliver both that order and the rider's other orders on time.
When determining the matching degree between an order to be assigned and a rider, the platform usually performs path planning anew based on the rider's assigned orders together with the order to be assigned, determines the rider's optimal delivery scheme, and then checks whether each order in the resulting path plan can be delivered on time. Because the path planning is redone, an originally assigned order that would not have timed out may become a timed-out order in the re-determined path plan, in which case the order to be assigned cannot be allocated to that rider. The path plan includes the pickup position and delivery position of each order the rider is to execute, as well as the rider's pickup and delivery sequence.
In the prior art, when determining whether each order in the path plan will time out, both the pickup positions and the delivery positions in the path plan are treated as task points; that is, a task point is a location the rider must reach to complete the delivery service. The time the rider spends between each pair of consecutive task points is predicted separately, following the delivery sequence of the task points in the path plan, and these segment times are combined to obtain the predicted delivery time of each order and to judge whether any order will time out.
For example, assume that take-out rider X has assigned orders A and B, that the order to be assigned is C, and that path planning yields the delivery route of fig. 1. Triangles represent pickup positions, circles represent delivery positions, and the letter at each position indicates the corresponding order. Light positions are ones the rider has already reached; dark positions are ones the rider has not yet reached. The platform predicts the time the rider will spend between each pair of consecutive task points, i.e., the duration of each path segment numbered 1-5 in the figure. The estimated delivery time of order A is then the sum of the durations of segments 1-4, that of order B is the sum of segments 1-5, and that of order C is the sum of segments 1-3.
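As a concrete illustration of the prior-art approach in fig. 1, the following sketch computes each order's estimated delivery time by summing per-segment predictions; the segment durations are invented for illustration, and the cumulative sums are exactly where per-segment errors accumulate.

```python
# Hypothetical per-segment predicted travel times (minutes) for the five
# path segments of fig. 1; values are invented for illustration only.
segments = [4.0, 6.0, 3.0, 5.0, 7.0]

def eta_by_segment_sum(segment_minutes, last_segment):
    """Prior-art ETA: sum of predicted durations of segments 1..last_segment."""
    return sum(segment_minutes[:last_segment])

eta_c = eta_by_segment_sum(segments, 3)  # order C: segments 1-3
eta_a = eta_by_segment_sum(segments, 4)  # order A: segments 1-4
eta_b = eta_by_segment_sum(segments, 5)  # order B: segments 1-5
```

Each ETA embeds every upstream segment's prediction error, which is the accumulation problem the first prediction model is designed to avoid.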
However, because a predicted delivery time determined in this way combines several separately predicted durations, the individual prediction errors accumulate; the predicted delivery time is therefore not accurate enough, which affects the accuracy of order allocation.
Disclosure of Invention
The embodiment of the specification provides an order distribution method, an order distribution device, a computer-readable storage medium and an electronic device, which are used for partially solving the problems in the prior art.
The embodiment of the specification adopts the following technical scheme:
the order distribution method provided by the specification comprises the following steps:
for at least one rider, determining a path plan for the rider according to the order to be assigned and the assigned order of the rider;
determining path features according to the path plan and a path feature extraction sub-model contained in a pre-trained first prediction model;
for each order contained in the path plan, determining, according to the information corresponding to the rider and the information corresponding to the order, the rider feature of the rider corresponding to the order through a rider feature extraction sub-model contained in the first prediction model, and determining the predicted delivery time of the order through an output sub-model contained in the first prediction model according to the determined rider feature and the path features;
determining the matching degree of the rider and the order to be assigned according to the determined estimated delivery time of each order contained in the path plan;
and distributing the order to be assigned according to the matching degree of the at least one rider and the order to be assigned.
Optionally, the path feature extraction sub-model comprises a long short-term memory network and an attention layer; correspondingly,
the step of determining the path characteristics according to the path planning and the path characteristic extraction submodel contained in the pre-trained first prediction model comprises the following steps:
determining a feature vector of each task point in the path plan according to at least one of the type and the coordinate of the task point;
inputting the feature vectors of the task points into the long short-term memory network in sequence, according to the delivery sequence of the task points in the path plan, to obtain an output result corresponding to each task point;
and performing attention weighting on the output results corresponding to the task points through the attention layer to obtain an attention weighting result, and taking the attention weighting result as the path feature.
Optionally, the rider feature extraction sub-model comprises a first multi-layer perceptron; correspondingly,
determining the characteristics of the rider corresponding to the order by a rider characteristic extraction submodel contained in the first prediction model according to the information corresponding to the rider and the information corresponding to the order, comprising:
determining a characteristic vector of the rider according to the information corresponding to the rider and the information corresponding to the order;
and inputting the characteristic vector of the rider into the first multilayer perceptron to obtain an output result of the first multilayer perceptron, and taking the output result as the characteristic of the rider corresponding to the order of the rider.
Optionally, before determining the predicted delivery time of the order through the output sub-model contained in the first prediction model, the method further includes:
determining that the delivery status of the order is that the goods have been picked up;
when the delivery status of the order is that the goods have not yet been picked up, the method further comprises:
determining, according to the information corresponding to the order, the information corresponding to the provider of the goods to be delivered for the order;
according to the information corresponding to the provider, determining the provider characteristics through a provider characteristic extraction submodel contained in a pre-trained second prediction model;
and determining the predicted delivery time of the order through an output sub-model contained in a pre-trained second prediction model according to the characteristics of the rider corresponding to the order, the characteristics of the provider and the path characteristics.
Optionally, the provider feature extraction sub-model includes a second multi-layer perceptron; correspondingly,
determining the provider characteristics through a provider characteristic extraction submodel contained in a pre-trained second prediction model according to the information corresponding to the provider, comprising the following steps:
determining a provider feature vector according to the information corresponding to the provider;
and inputting the provider feature vector into the second multilayer perceptron to obtain an output result of the second multilayer perceptron as a provider feature.
Optionally, the pre-training the first prediction model comprises:
acquiring historical data corresponding to a historically completed order;
for at least one completed order, determining a rider who has historically executed the completed order as a designated rider;
determining a designated time within the historical time period during which the delivery status of the completed order was picked-up, and determining a training sample according to the information corresponding to the completed order at the designated time, the information corresponding to the designated rider, and the path plan of the designated rider;
determining the actual delivery time of the completed order according to the historical data corresponding to the completed order, and taking the actual delivery time as the actual delivery time corresponding to the training sample;
and according to the determined training sample, taking the actual delivery time corresponding to the training sample as expected output, and training the first prediction model.
Optionally, the training the first prediction model by using the actual delivery time corresponding to the training sample as an expected output according to the determined training sample includes:
for at least two training samples, determining the predicted delivery times respectively corresponding to the at least two training samples through the first prediction model;
determining the sum of the losses of the at least two training samples through a preset first loss function, according to the actual delivery times and the predicted delivery times respectively corresponding to the at least two training samples;
and adjusting the parameters of the first prediction model by taking the minimum sum of the losses as an optimization target.
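The batched training objective described in the preceding claims — predict delivery times for at least two samples, sum their losses, and adjust parameters to minimize that sum — can be sketched as follows. This is a hypothetical stand-in: a one-parameter-vector linear model and a squared-error loss replace the first prediction model, whose loss function the patent does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))        # 8 training samples, 3 features each
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true                     # "actual delivery time" labels

w = np.zeros(3)                    # model parameters to be adjusted
for _ in range(1000):
    pred = X @ w                           # predicted delivery times
    loss_sum = np.sum((pred - y) ** 2)     # sum of losses over the samples
    grad = 2 * X.T @ (pred - y)            # gradient of the summed loss
    w -= 0.01 * grad                       # step toward minimizing the sum
```

Minimizing the summed (rather than averaged) loss is equivalent up to a learning-rate rescaling; the claim language only requires that the optimization target be the minimum of the sum.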
The present specification provides an apparatus for order distribution, comprising:
a path planning module configured to determine, for at least one rider, a path plan for the rider according to the order to be assigned and the assigned orders of the rider;
a first determining module, configured to determine a path feature according to the path plan and a path feature extraction submodel included in a pre-trained first prediction model;
a second determining module, configured to, for each order included in the path plan, determine the rider feature of the rider corresponding to the order through the rider feature extraction sub-model included in the first prediction model according to the information corresponding to the rider and the information corresponding to the order, and determine the predicted delivery time of the order through the output sub-model included in the first prediction model according to the determined rider feature and the path features;
a third determining module, configured to determine a matching degree between the rider and the order to be assigned according to the determined estimated delivery time of each order included in the path plan;
an allocation module configured to allocate the order to be assigned according to a matching degree of the at least one rider with the order to be assigned.
The present specification provides a computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program, when executed by a processor, implements the order distribution method described above.
The electronic device provided by the present specification includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the order distribution method when executing the program.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
When it is necessary to determine the rider who will execute the task corresponding to an order to be assigned, the server may first determine, for at least one rider, a path plan for that rider based on the order to be assigned and the rider's assigned orders. It may then determine the path features of that path plan through a path feature extraction sub-model contained in a pre-trained first prediction model. Next, for each order contained in the path plan, it may determine, according to the information corresponding to the rider and the information corresponding to the order, the rider feature of the rider corresponding to that order through a rider feature extraction sub-model contained in the first prediction model; the rider feature here is thus in fact a combination of rider and order features. From the determined rider feature and the path features, the predicted delivery time of the order is then determined through an output sub-model in the first prediction model. Finally, the server may determine the matching degree between the rider and the order to be assigned according to the predicted delivery times of the orders contained in the path plan, and distribute the order to be assigned based on the matching degrees determined for at least one rider.
Once allocated to a rider, the order to be assigned may affect the rider's assigned orders: it may change the delivery sequence of the assigned orders and the delivery path. This effect depends on which assigned orders each rider is executing, and differs from rider to rider. The server therefore first assumes that the order to be assigned is allocated to the rider and determines the rider's new path plan, so that the effect of the path change is captured by the determined path features. It then needs the predicted delivery time of each order the rider would execute after the allocation, in order to decide how to allocate the order to be assigned. For each rider and each order in that rider's path plan, the server determines rider features from both the rider and the order, and determines the predicted delivery time of each order in the path plan from the rider features and the path features. Because this covers not only the order to be assigned but also the rider's assigned orders, the server can determine the matching degree of each rider with the order to be assigned based on the predicted delivery times of all orders in each rider's path plan, and allocate the order to be assigned according to the determined matching degrees.
Compared with the prior art, in which the predicted delivery time of an order is the combination of several predicted durations, this specification determines the predicted delivery time of each order independently, so the prediction error is only the error of the model itself and errors do not accumulate. The predicted delivery time is therefore more accurate, the resulting distribution is more accurate, and distribution efficiency is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of a prior art segmented determination of an estimated delivery time;
FIG. 2 is a process for order allocation provided by embodiments of the present description;
fig. 3 is a schematic structural diagram of a path feature extraction submodel included in a first prediction model provided in an embodiment of the present disclosure;
FIG. 4 is a block diagram of a first and a second prediction model provided in an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an order distribution apparatus provided in an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an electronic device corresponding to fig. 2 provided in an embodiment of the present specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step are within the scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 2 is a process of order allocation provided in the embodiment of the present specification, which may specifically include one or more of the following steps:
s102: for at least one rider, a path plan for the rider is determined based on the order to be assigned and the assigned order for the rider.
In this specification, the order allocation process may be executed by a server of the delivery platform. Since the rider who will execute the task corresponding to the order to be assigned must be determined from among the riders, the delivery route of each rider after being assigned the order to be assigned must first be determined. The server may therefore determine, for at least one rider, a path plan for the rider based on the order to be assigned and the rider's assigned orders.
Specifically, the server may determine, for at least one rider, the rider's assigned orders. An assigned order is an order that has been allocated to the rider but not yet completed; it can be determined from the historical data of orders allocated to the rider and of orders already completed. Since the rider's assigned orders are tasks the rider must still complete, they must be considered when determining the rider's path plan.
The server then determines the rider's path plan according to the task points corresponding to the order to be assigned and to the rider's assigned orders, using a preset path optimization algorithm. The task points comprise each order's pickup position and delivery position. Since the path planning is performed over the task points of several orders (i.e., the order to be assigned and the assigned orders), the path plan determined in step S102 may not completely coincide with the route the rider is executing at the present moment.
For example, if the pickup position of the order to be assigned lies between the pickup positions of two of the rider's assigned orders, the determined path plan may require the rider, after picking up at the pickup position of the first assigned order, to pick up at the pickup position of the order to be assigned before proceeding to the pickup position of the second assigned order.
Of course, since picking up goods at the pickup position of the order to be assigned may itself take time, the planned route may trigger a chain reaction, such as an order delivery timeout, when the rider executes the business according to it. The server therefore determines the predicted delivery time of each order contained in the rider's path plan in the subsequent steps.
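The patent leaves the "preset path optimization algorithm" unspecified. As a hypothetical stand-in, the sketch below merges a new task point into an existing route by cheapest insertion, leaving already-visited points fixed; a real planner would additionally enforce that each order's pickup precedes its delivery, which is omitted here. The function names and the Manhattan-distance metric are assumptions.

```python
def route_length(points):
    """Total Manhattan distance along a sequence of (x, y) task points."""
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(points, points[1:]))

def cheapest_insertion(route, new_point, first_editable=1):
    """Insert new_point at the slot that adds the least travel distance.

    Slots before index first_editable (e.g. the rider's current position)
    are not eligible.
    """
    best = None
    for i in range(first_editable, len(route) + 1):
        candidate = route[:i] + [new_point] + route[i:]
        cost = route_length(candidate)
        if best is None or cost < best[0]:
            best = (cost, candidate)
    return best[1]

route = [(0, 0), (2, 0), (4, 0)]          # current planned task points
new_route = cheapest_insertion(route, (3, 0))
```

Here the new point is slotted between (2, 0) and (4, 0), mirroring the example above where the new pickup falls between two existing pickups.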
S104: determining path features according to the path plan and a path feature extraction sub-model contained in the pre-trained first prediction model.
In this specification, since the predicted delivery time of each order contained in the path plan is related to the path plan itself, after determining the rider's path plan the server may determine the path features of the path plan through the path feature extraction sub-model contained in the pre-trained first prediction model, so that the predicted delivery time of each order contained in the path plan can be determined in the subsequent steps.
Specifically, the path feature extraction submodel in this specification includes: long Short-Term Memory network (LSTM) and Attention layer (Attention), as shown in fig. 3.
Fig. 3 is a schematic structural diagram of the path feature extraction sub-model contained in the first prediction model provided in this specification. As shown, the input data is first fed to the LSTM, the results output by the LSTM at different time steps are fed to the attention layer, and the attention-weighted result is output as the path feature. The LSTM's ability to retain input information over long sequences preserves the important information in each task point, and after attention weighting through the attention layer, the features that matter most for determining order delivery time can be extracted and the path feature determined.
First, the server may determine, for each task point in the path plan of at least one rider, a feature vector for the task point based on at least one of a type and coordinates of the task point.
The types of task points include: pickup position and delivery position. In addition, in this specification, the server may further determine the information of the task object corresponding to the task point, and determine the feature vector of the task point according to the type and coordinates of the task point together with the task-object information. When the type of the task point is a pickup position, the task-object information is merchant information (such as the merchant type, merchant ID, merchant rating, historical average time to prepare the delivered goods, and so on); when the type of the task point is a delivery position, the task-object information is user information (such as the frequency with which the user initiates services, the user's rating, and so on). For example, the feature vector of a task point may be (1, 39.9156343816, 116.43911257741), where 1 indicates a pickup position and the last two values are the latitude and longitude of the task point.
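The task-point feature vector described above can be sketched as follows. The field layout (type flag first, then coordinates, then optional task-object fields) follows the example in the text; the function name and the encoding 1 = pickup / 0 = delivery are assumptions for illustration.

```python
def task_point_features(point_type, lat, lng, extra=()):
    """Build a task-point feature vector: [type flag, latitude, longitude, ...].

    extra holds optional task-object information (merchant or user fields).
    """
    type_flag = 1.0 if point_type == "pickup" else 0.0
    return [type_flag, lat, lng, *extra]

# The pickup-position example from the text:
vec = task_point_features("pickup", 39.9156343816, 116.43911257741)
```

In practice categorical fields such as merchant ID would be embedded rather than passed as raw numbers; this sketch only shows the vector layout.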
Then, the server may input the feature vectors of the task points into the LSTM in sequence, according to the delivery sequence of the task points in the path plan, and obtain the output result corresponding to each task point. In this specification, since the number and sequence of the task points in the path plan are fixed, the server can feed the feature vectors in one by one in that order and then execute the next step.
Finally, the server may perform attention weighting on the output results corresponding to the task points through the attention layer to obtain an attention weighting result, and use it as the path feature. Since the LSTM produces one output result per task point, the input to the attention layer can be regarded as a matrix composed of these output results; the attention layer cross-multiplies the attention matrix with this matrix to obtain the attention weighting result, which serves as the path feature of the path plan, as shown in the attention layer of fig. 3.
Since the dimension of each output result of the LSTM is fixed, the number of rows of the attention matrix is also fixed, and the server can derive the attention weighting result from the weighted matrix. For example, if the output results form an m × n matrix and the attention matrix is an n × p matrix, the server may reduce the m × p cross-product matrix to an m-dimensional vector as the attention weighting result, e.g., by maximum pooling (max-pooling), i.e., selecting the maximum element from each row of the cross-product result; this specification does not limit the reduction method.
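The attention weighting just described can be sketched with the stated shapes: an m × n matrix of LSTM outputs, an n × p attention matrix, and max-pooling over the columns of the m × p product. All numeric values are random placeholders; in the trained model both matrices would be learned.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 4, 6, 3
H = rng.normal(size=(m, n))   # matrix composed of the LSTM output results
A = rng.normal(size=(n, p))   # attention matrix (learned in practice)

weighted = H @ A                      # m x p attention-weighted result
path_feature = weighted.max(axis=1)   # max-pooling -> m-dimensional vector
```

The pooling step is what fixes the path-feature dimension at m regardless of p, matching the text's observation that the result is an m-dimensional vector.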
S106: and aiming at each order contained in the path plan, according to the information corresponding to the rider and the information corresponding to the order, determining the rider characteristic of the rider corresponding to the order through a rider characteristic extraction sub-model contained in the first prediction model, and according to the determined rider characteristic of the rider corresponding to the order and the path characteristic, determining the predicted delivery time of the order through an output sub-model contained in the first prediction model.
In this specification, since the orders included in each path plan differ from one another, the server may determine, for each order included in the path plan, the rider feature of the rider corresponding to the order through the rider feature extraction submodel included in the first prediction model, according to the information corresponding to the rider and the information corresponding to the order. The predicted delivery time of the order can then be determined through the output submodel included in the first prediction model, according to the determined rider feature corresponding to the order and the path feature.
The input and output of the rider characteristic extraction submodel and the output submodel will be described separately.
Specifically, the rider feature extraction submodel includes a first Multi-Layer Perceptron (MLP). First, the server can determine a rider feature vector, according to the information corresponding to the rider and the information corresponding to the order, as the input of the first MLP. The information corresponding to the rider may include the number of orders the rider is currently carrying, and so on; the information corresponding to the order may include the order price, the distance from the pick-up location of the order to its delivery location in the path plan, the number of new orders generated in the area corresponding to the order, etc. The server may combine the above information into the rider feature vector.
Then, the server can input the rider feature vector into the first MLP, obtain the output result of the first MLP, and use that output result as the rider feature of the rider corresponding to the order.
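A minimal sketch of the first MLP's forward pass is given below. The single hidden layer, ReLU activation, and parameter names are illustrative assumptions; the specification does not fix the MLP's architecture:

```python
import numpy as np

def rider_feature(rider_info, order_info, W1, b1, W2, b2):
    # Combine rider information and order information into the rider feature vector
    x = np.concatenate([rider_info, order_info])
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer with ReLU activation (assumed)
    return W2 @ h + b2                # output of the first MLP: the rider feature
```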
In this specification, the output submodel may be an MLP model or another prediction model; this specification does not limit it. The server takes the rider feature and the path feature determined in step S104 as the input of the output submodel, and obtains the output result of the output submodel, i.e., the predicted delivery time of the order.
Further, in this specification, the time consumed by the rider varies between different users and different merchants. For example, when the merchant is a fast-food merchant, the preparation time is generally short, and when a user's rating is high, the user's sensitivity to delivery time may be low. The above method, which takes the output results of the rider feature extraction submodel and the path feature extraction submodel as the input of the output submodel and determines the predicted delivery time through the output submodel, lacks feature input corresponding to the merchant. It is therefore used only to determine the predicted delivery time of orders whose delivery status is "picked up". For such an order, the predicted delivery time is no longer related to the merchant, so the merchant need not be considered.
If the delivery status of the order is "not picked up", the server can further determine the information corresponding to the provider of the goods in the order according to the information corresponding to the order, and determine a provider feature vector according to the information corresponding to the provider. The information corresponding to the provider may include the number of orders the provider has not yet prepared, the type of the provider (for example, takeout provider types may include home-style dishes, fast food, barbecue, etc.), the historical complaint rate of the provider, the ID of the provider, and so on.
Then, according to the information corresponding to the provider, the provider feature is determined through the provider feature extraction submodel included in a pre-trained second prediction model. The provider feature extraction submodel includes a second MLP; the server may input the provider feature vector into the second MLP and use the output of the second MLP as the provider feature.
Finally, the predicted delivery time of the order is determined through the output submodel included in the pre-trained second prediction model, according to the rider feature corresponding to the order, the provider feature, and the path feature. The finally output predicted delivery time is thus determined based on the rider feature (which incorporates the order's information), the provider feature, and the path feature for that order; all input features correspond to the order. Because the output result is determined directly for the order, the risk of error growth caused by superimposing multiple prediction results is reduced.
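The second prediction model's output submodel combines the three features of the same order. A minimal sketch follows, assuming for illustration a single linear output layer and simple concatenation of the features (the specification does not fix the output submodel's form):

```python
import numpy as np

def predict_delivery_time(rider_feat, provider_feat, path_feat, W, b):
    # All three features correspond to the same order; concatenate them
    # and map to a scalar predicted delivery time (linear layer assumed).
    x = np.concatenate([rider_feat, provider_feat, path_feat])
    return float(W @ x + b)
```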
In this specification, the path feature extraction submodel and the rider feature extraction submodel are shared between the first prediction model and the second prediction model, because their inputs are identical. The output submodels included in the first and second prediction models differ, because their inputs differ. Likewise, the provider feature extraction submodel is specific to the second prediction model, which is another place where the first and second prediction models differ.
Fig. 4 is a schematic diagram of the architecture of the first and second prediction models provided in this specification. It can be seen that the path feature extraction submodel and the rider feature extraction submodel within the dashed box are shared, while the remaining submodels are not.
S108: and determining the matching degree of the rider and the order to be assigned according to the determined estimated delivery time of each order contained in the path plan.
S110: and distributing the order to be assigned according to the matching degree of the at least one rider and the order to be assigned.
In this specification, for at least one rider, after the predicted delivery time of each order included in the rider's path plan is determined, the matching degree between the order to be assigned and the rider can be judged. For example, as long as the predicted delivery time of each order included in the path plan is not later than the promised delivery time of that order, the order to be assigned is determined to match the rider. The server may finally select one rider from the riders that match the order to be assigned, and allocate the order to that rider. Since the predicted delivery time of each order in the path plan is determined individually for each order, it does not suffer from the error growth caused by accumulating multiple prediction results. Moreover, since allocating the order to be assigned may change the delivery times of the rider's other orders, determining the predicted delivery time of every order on the path plan allows the matching degree between the order to be assigned and the rider to be determined more accurately. For example, if the predicted delivery time of the order to be assigned does not time out, but allocating it would cause other orders of the rider to be delivered late, the rider has a lower matching degree with the order to be assigned.
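The example matching rule above — a rider matches when no order's predicted delivery time is later than its promised delivery time — can be sketched as follows (the function name is assumed for illustration):

```python
def rider_matches(predicted_times, promised_times):
    # predicted_times[i] / promised_times[i]: predicted and promised delivery
    # times of the i-th order in the rider's path plan (e.g. minutes from now)
    return all(pred <= promised
               for pred, promised in zip(predicted_times, promised_times))
```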
Based on the order allocation method shown in fig. 2: after an order to be assigned is allocated to a rider, it may affect the rider's assigned orders, for example by changing their delivery sequence and delivery path. This effect depends on which assigned orders each rider is executing, so it differs between riders and is not identical across different riders' assigned orders. The server may therefore determine the rider's path plan after the order to be assigned is allocated to the rider, so as to capture the effect of the path change, and characterize the path by the determined path feature. It is then necessary to determine the predicted delivery time of each order the rider would execute after the allocation, in order to decide how to allocate the order to be assigned. Next, for each rider and each order in that rider's path plan, the rider feature is determined from both the rider and the order, and the predicted delivery time of each order in the path plan is determined from the rider feature and the path feature. Since this determines the predicted delivery times not only of the order to be assigned but also of the rider's assigned orders, the matching degree of each rider with the order to be assigned can be determined from the predicted delivery times of the orders in each rider's path plan, and the order to be assigned can be allocated according to the determined matching degrees.

Compared with the prior art, in which the predicted delivery time of an order is determined by combining multiple predicted time periods, this specification determines the predicted delivery time individually for each order, so the prediction error is only the error of the model itself. This avoids the accumulation of prediction errors, makes the predicted delivery time more accurate, makes the determined allocation result more accurate, and improves delivery efficiency.
In addition, in step S108 and step S110, the closer the predicted delivery time of an order is to its promised delivery time, the better the user experience, but if the two are too close, an accident may cause the order delivery to time out. The server may therefore determine the matching degree between each order in the planned path and the rider according to the difference between the promised delivery time and the predicted delivery time of the order. For example, an order whose time difference is 5 to 10 minutes may be determined to have the highest matching degree with the rider, a difference of 10 to 15 minutes the next highest, and so on; if the difference is negative, a mismatch is determined. Then, one rider is selected, in descending order of the average (or sum) of the matching degrees of the orders in the delivery path, to deliver the order to be assigned.
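The banded matching-degree example above can be sketched as follows; the numeric scores assigned to each band, and the averaging across orders, are illustrative assumptions:

```python
def match_score(predicted, promised):
    # slack: minutes between promised and predicted delivery time
    slack = promised - predicted
    if slack < 0:
        return 0  # predicted to be late -> mismatch
    if 5 <= slack <= 10:
        return 3  # highest matching degree
    if 10 < slack <= 15:
        return 2  # next highest
    return 1      # other non-negative slack

def rider_match_degree(predicted_times, promised_times):
    # Average matching degree over the orders in the rider's delivery path;
    # the rider with the largest average is selected for the order to be assigned.
    scores = [match_score(p, c) for p, c in zip(predicted_times, promised_times)]
    return sum(scores) / len(scores)
```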
Further, in this specification, since some submodels are shared between the first and second prediction models, the two models can be trained together.
Specifically, the server may first obtain historical data corresponding to a historically completed order.
Next, for at least one completed order, the rider who historically executed the completed order is determined as the designated rider. For example, when the order was not transferred to another rider during execution, the rider who finally completed the order is the designated rider for that order.
Then, a designated time is determined within the historical time period during which the delivery status of the completed order was "picked up", and a training sample is determined according to the information corresponding to the completed order, the information corresponding to the designated rider, and the path plan of the designated rider at the designated time. The data in the training sample thus determined is exactly the data that the submodels of the first prediction model need as input in the aforementioned step S104 and step S106.
Then, the actual delivery time of the completed order is determined according to the historical data corresponding to the completed order, as the actual delivery time corresponding to the training sample.

Finally, the first prediction model is trained according to the determined training samples, with the actual delivery time corresponding to each training sample as the expected output.
In this process of training the first prediction model, the trained path feature extraction submodel and rider feature extraction submodel also serve as trained submodels of the second prediction model.
Further, in this specification, when determining training samples, the server may also determine a designated time from a historical time period during which the delivery status of the completed order was "not picked up", and determine a training sample according to the information corresponding to the completed order, the information corresponding to the designated rider, the path plan of the designated rider, and the information corresponding to the provider of the completed order at the designated time. The determined training samples then include different training samples applicable to the first prediction model and to the second prediction model respectively. Of course, although the same completed order passes through both the "not picked up" and "picked up" delivery statuses, its actual delivery time is unique, so the actual delivery time corresponding to the training samples can be used as the expected output in both cases, and the first and second prediction models can be trained according to the different input training samples.
In addition, in this specification, to avoid degrading the training effect by adjusting parameters too sharply when training on a single sample, the server may also adjust the parameters of the model based on the sum of the losses of multiple samples.
Specifically, when adjusting the parameters of the model, the server can determine, for at least two training samples, the predicted delivery times corresponding to the at least two training samples respectively through the first prediction model.
And then, determining the sum of the losses of the at least two training samples through a preset loss function according to the actual delivery time corresponding to the at least two training samples respectively and the expected delivery time corresponding to the at least two training samples respectively. The specific loss function is not limited in this specification, and may be set as needed.
Finally, the parameters of the first prediction model are adjusted by taking the minimum sum of the losses as an optimization target.
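The sum-of-losses objective described in the steps above can be sketched as follows. Squared error is an assumed loss function here, since the specification explicitly leaves the loss function unspecified:

```python
def batch_loss(predicted_times, actual_times):
    # Sum of per-sample losses over at least two training samples;
    # model parameters are adjusted to minimize this sum.
    return sum((pred - actual) ** 2
               for pred, actual in zip(predicted_times, actual_times))
```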
Further, since the first and second prediction models may be trained simultaneously, the loss functions determined for the different prediction models may not be completely consistent; for example, the first and second prediction models correspond to a first loss function and a second loss function, respectively. When calculating the loss, the loss may be determined based on the predicted delivery times output by the first and second prediction models and the corresponding actual delivery times. That is, the total loss equals the sum of the losses of the training samples, which may include training samples for the first and second prediction models respectively.

Based on the order allocation method shown in fig. 2, the embodiment of this specification further provides a corresponding schematic structural diagram of an order allocation apparatus, as shown in fig. 5.
Fig. 5 is a schematic structural diagram of an order distribution apparatus provided in an embodiment of the present specification, where the order distribution apparatus includes:
a path planning module 200 configured for determining, for at least one rider, a path plan for the rider from the order to be assigned and the assigned order of the rider;
a first determining module 202, configured to determine a path feature according to the path plan and a path feature extraction submodel included in a pre-trained first prediction model;
a second determining module 204, configured to, for each order included in the path plan, determine the rider feature of the rider corresponding to the order through the rider feature extraction submodel included in the first prediction model, according to the information corresponding to the rider and the information corresponding to the order, and determine the predicted delivery time of the order through the output submodel included in the first prediction model, according to the determined rider feature and the path feature;
a third determining module 206, configured to determine a matching degree between the rider and the order to be assigned according to the determined estimated delivery time of each order included in the path plan;
an allocating module 208 configured to allocate the order to be assigned according to a matching degree of the at least one rider with the order to be assigned.
Optionally, the path feature extraction submodel includes a long short-term memory network and an attention layer. Correspondingly, the first determining module 202 is configured to: determine, for each task point in the path plan, a feature vector of the task point according to at least one of the type and the coordinates of the task point; sequentially input the feature vectors of the task points into the long short-term memory network according to the delivery sequence of the task points in the path plan, obtaining an output result corresponding to each task point; perform attention weighting on the output results corresponding to the task points through the attention layer to obtain an attention weighting result; and use the attention weighting result as the path feature.
Optionally, the rider feature extraction submodel includes a first multi-layer perceptron. Correspondingly, the second determining module 204 is configured to determine a rider feature vector according to the information corresponding to the rider and the information corresponding to the order, input the rider feature vector into the first multi-layer perceptron, and use the output result of the first multi-layer perceptron as the rider feature of the rider corresponding to the order.
Optionally, the second determining module 204 is configured to determine that the delivery status of the order is "picked up" before determining the predicted delivery time of the order through the output submodel included in the first prediction model. The apparatus further comprises: a fourth determining module 210, configured to determine the information corresponding to the provider of the goods in the order according to the information corresponding to the order, determine the provider feature through the provider feature extraction submodel included in a pre-trained second prediction model according to the information corresponding to the provider, and determine the predicted delivery time of the order through the output submodel included in the pre-trained second prediction model, according to the rider feature corresponding to the order, the provider feature, and the path feature.
Optionally, the provider feature extraction sub-model includes a second multi-layer perceptron, and correspondingly, the fourth determining module 210 is configured to determine a provider feature vector according to information corresponding to the provider, input the provider feature vector into the second multi-layer perceptron, and obtain an output result of the second multi-layer perceptron as a provider feature.
Optionally, the apparatus further comprises: a training module 212, configured to obtain historical data corresponding to historically completed orders; determine, for at least one completed order, the rider who historically executed the completed order as the designated rider; determine a designated time from the historical time period during which the delivery status of the completed order was "picked up"; determine a training sample according to the information corresponding to the completed order, the information corresponding to the designated rider, and the path plan of the designated rider at the designated time; determine the actual delivery time of the completed order according to the historical data corresponding to the completed order, as the actual delivery time corresponding to the training sample; and train the first prediction model according to the determined training samples, with the actual delivery time corresponding to each training sample as the expected output.
Optionally, the training module 212 is configured to determine, for at least two training samples, predicted delivery times corresponding to the at least two training samples respectively through the first prediction model, determine a sum of losses of the at least two training samples through a preset loss function according to actual delivery times corresponding to the at least two training samples respectively and predicted delivery times corresponding to the at least two training samples respectively, and adjust a parameter of the first prediction model with the minimum sum of losses as an optimization target.
Based on the order allocation apparatus shown in fig. 5: after an order to be assigned is allocated to a rider, it may affect the rider's assigned orders, for example by changing their delivery sequence and delivery path. This effect depends on which assigned orders each rider is executing, so it differs between riders and is not identical across different riders' assigned orders. The server may therefore determine the rider's path plan after the order to be assigned is allocated to the rider, so as to capture the effect of the path change, and characterize the path by the determined path feature. It is then necessary to determine the predicted delivery time of each order the rider would execute after the allocation, in order to decide how to allocate the order to be assigned. Next, for each rider and each order in that rider's path plan, the rider feature is determined from both the rider and the order, and the predicted delivery time of each order in the path plan is determined from the rider feature and the path feature. Since this determines the predicted delivery times not only of the order to be assigned but also of the rider's assigned orders, the matching degree of each rider with the order to be assigned can be determined from the predicted delivery times of the orders in each rider's path plan, and the order to be assigned can be allocated according to the determined matching degrees.

Compared with the prior art, in which the predicted delivery time of an order is determined by combining multiple predicted time periods, this specification determines the predicted delivery time individually for each order, so the prediction error is only the error of the model itself. This avoids the accumulation of prediction errors, makes the predicted delivery time more accurate, makes the determined allocation result more accurate, and improves delivery efficiency.
Embodiments of the present specification also provide a computer-readable storage medium storing a computer program, the computer program being operable to perform any one of the above-described methods of order allocation.
Based on the order allocation method shown in fig. 2, the embodiment of this specification further provides a schematic structural diagram of the electronic device shown in fig. 6. As shown in fig. 6, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may also include hardware required for other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, so as to implement any one of the order allocation methods described above.
Of course, besides the software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may be hardware or logic devices.
In the 1990s, it was still possible to clearly distinguish whether an improvement to a technology was an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). However, as technology has developed, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by hardware physical modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must be written in a particular programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included within it for performing the various functions may also be regarded as structures within the hardware component. Indeed, the means for performing the functions may even be regarded as both software modules for implementing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A method of order distribution, comprising:
for at least one rider, determining a path plan for the rider according to the order to be assigned and the assigned order of the rider;
determining path features according to the path planning and a path feature extraction submodel contained in a pre-trained first prediction model, wherein the path feature extraction submodel comprises: long and short term memory networks and attention layers;
for each order contained in the path plan, determining, according to information corresponding to the rider and information corresponding to the order, the rider characteristic of the rider corresponding to the order through a rider characteristic extraction submodel contained in the first prediction model, and determining the predicted delivery time of the order through an output submodel contained in the first prediction model according to the determined rider characteristic and the path features, wherein the rider characteristic extraction submodel comprises a first multilayer perceptron;
determining the matching degree of the rider and the order to be assigned according to the determined predicted delivery time of each order and the promised delivery time of each order contained in the path plan;
and distributing the order to be assigned according to the matching degree of the at least one rider and the order to be assigned.
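As an illustration only (not part of the claims), the matching and assignment steps of claim 1 could be sketched as follows. The scoring formula is hypothetical: the claim does not fix how predicted and promised delivery times combine into a matching degree, so the fraction of on-time orders is used here as one plausible choice, and all function names are invented for this sketch.

```python
import numpy as np

def matching_degree(predicted_times, promised_times):
    """Hypothetical matching score: the fraction of orders on a rider's
    planned route whose predicted delivery time does not exceed the
    promised delivery time."""
    predicted = np.asarray(predicted_times, dtype=float)
    promised = np.asarray(promised_times, dtype=float)
    return float(np.mean(predicted <= promised))

def assign_order(riders):
    """Assign the pending order to the rider with the highest matching degree.

    `riders` maps a rider id to (predicted_times, promised_times) for the
    route that already includes the order to be assigned."""
    scores = {rid: matching_degree(p, q) for rid, (p, q) in riders.items()}
    return max(scores, key=scores.get), scores
```

For example, a rider predicted to deliver both orders on time scores 1.0 and would be preferred over a rider predicted to miss one promised time.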
2. The method of claim 1, wherein the path feature extraction submodel includes a long-short term memory network and an attention layer; accordingly,
the step of determining the path characteristics according to the path planning and the path characteristic extraction submodel contained in the pre-trained first prediction model comprises the following steps:
determining a feature vector of each task point in the path plan according to at least one of the type and the coordinate of the task point;
according to the delivery sequence of each task point in the path plan, sequentially inputting the feature vector of each task point into the long-short term memory network to obtain an output result corresponding to each task point;
and performing attention weighting on the output results corresponding to the task points through the attention layer to obtain an attention weighting result, and taking the attention weighting result as a path feature.
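Purely as an illustration of the mechanism recited in claim 2 (not the patented implementation), the LSTM-plus-attention pooling could be sketched in numpy as below. All weight shapes, initializations, and the single query vector used for attention scoring are assumptions; the claim only requires running task-point feature vectors through a long-short term memory network in delivery order and attention-weighting the per-step outputs into one path feature.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_lstm(d_in, d_h):
    # One stacked weight matrix for the four gates (i, f, g, o) over [x; h].
    return {"W": rng.standard_normal((4 * d_h, d_in + d_h)) * 0.1,
            "b": np.zeros(4 * d_h)}

def lstm_step(p, x, h, c):
    z = p["W"] @ np.concatenate([x, h]) + p["b"]
    i, f, g, o = np.split(z, 4)
    i, f, o = map(lambda v: 1.0 / (1.0 + np.exp(-v)), (i, f, o))
    c = f * c + i * np.tanh(g)          # update the cell state
    return o * np.tanh(c), c            # hidden output, new cell state

def path_feature(task_vecs, p, query):
    """Run task-point feature vectors through the LSTM in delivery order,
    then attention-pool the per-step outputs into a single path feature."""
    d_h = query.shape[0]
    h, c = np.zeros(d_h), np.zeros(d_h)
    outs = []
    for x in task_vecs:                 # task points in delivery sequence
        h, c = lstm_step(p, x, h, c)
        outs.append(h)
    H = np.stack(outs)                  # (T, d_h) per-task-point outputs
    scores = H @ query                  # one attention score per task point
    w = np.exp(scores - scores.max())
    w /= w.sum()                        # softmax attention weights
    return w @ H                        # weighted sum = path feature
```

The attention weighting lets the pooled feature emphasize the task points most relevant to the prediction rather than averaging them uniformly.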
3. The method of claim 1, wherein the rider characteristic extraction submodel includes a first multilayer perceptron; accordingly,
determining the characteristics of the rider corresponding to the order by a rider characteristic extraction submodel contained in the first prediction model according to the information corresponding to the rider and the information corresponding to the order, comprising:
determining a characteristic vector of the rider according to the information corresponding to the rider and the information corresponding to the order;
and inputting the characteristic vector of the rider into the first multilayer perceptron, and taking the output result of the first multilayer perceptron as the rider characteristic of the rider corresponding to the order.
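As a sketch only of the mechanism in claim 3, the first multilayer perceptron could look like the following. The layer widths, ReLU activation, and the choice to concatenate rider-side and order-side features are all assumptions not fixed by the claim, which only requires forming a rider characteristic vector and passing it through a perceptron.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def init_mlp(sizes):
    # sizes, e.g. [d_in, 32, 16]: hypothetical layer widths.
    return [(rng.standard_normal((o, i)) * 0.1, np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def rider_feature(rider_info, order_info, layers):
    """Concatenate rider- and order-side feature vectors, then forward
    through the perceptron; the output is the rider characteristic
    corresponding to this order."""
    h = np.concatenate([rider_info, order_info])
    for W, b in layers[:-1]:
        h = relu(W @ h + b)             # hidden layers with ReLU
    W, b = layers[-1]
    return W @ h + b                    # linear output layer
```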
4. The method of claim 1, wherein before the predicted delivery time of the order is determined through the output submodel included in the first prediction model, the method further comprises:
determining that the delivery status of the order is that the goods have been picked up;
when the delivery status of the order is that the goods have not yet been picked up, the method further comprises:
determining information corresponding to a provider of the items to be delivered for the order according to the information corresponding to the order;
according to the information corresponding to the provider, determining the provider characteristics through a provider characteristic extraction submodel contained in a pre-trained second prediction model;
and determining the predicted delivery time of the order through an output sub-model contained in a pre-trained second prediction model according to the characteristics of the rider corresponding to the order, the characteristics of the provider and the path characteristics.
5. The method of claim 4, wherein the provider feature extraction submodel includes a second multilayer perceptron; accordingly,
determining the provider characteristics through a provider characteristic extraction submodel contained in a pre-trained second prediction model according to the information corresponding to the provider, comprising the following steps:
determining a provider feature vector according to the information corresponding to the provider;
and inputting the provider feature vector into the second multilayer perceptron to obtain an output result of the second multilayer perceptron as a provider feature.
6. The method of claim 4, wherein pre-training the first predictive model comprises:
acquiring historical data corresponding to a historically completed order;
for at least one completed order, determining a rider who has historically executed the completed order as a designated rider;
determining a designated time from the historical time period in which the delivery status of the completed order was that the goods had been picked up, and determining a training sample according to information corresponding to the completed order at the designated time, information corresponding to the designated rider, and the path plan of the designated rider;
determining the actual delivery time of the completed order according to the historical data corresponding to the completed order, and taking the actual delivery time as the actual delivery time corresponding to the training sample;
and according to the determined training sample, taking the actual delivery time corresponding to the training sample as expected output, and training the first prediction model.
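The sample-construction steps of claim 6 could be sketched as below, purely for illustration. The snapshot-style history structure, the field names, and the `TrainingSample` container are all hypothetical; the claim only requires pairing the order/rider/route state at a designated post-pickup time with the order's actual delivery time as the label.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingSample:
    order_info: dict                  # order state at the designated time
    rider_info: dict                  # designated rider's state at that time
    route_plan: list                  # rider's path plan at that time
    actual_delivery_time: float       # label: when the order actually arrived

def build_sample(history, designated_time):
    """Hypothetical reconstruction of one training sample from a completed
    order's history, snapshotting order/rider/route at a time chosen from
    the post-pickup period."""
    snap = history["snapshots"][designated_time]
    return TrainingSample(order_info=snap["order"],
                          rider_info=snap["rider"],
                          route_plan=snap["route"],
                          actual_delivery_time=history["delivered_at"])
```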
7. The method of claim 6, wherein training the first predictive model based on the determined training samples with the actual delivery times corresponding to the training samples as expected outputs comprises:
for at least two training samples, determining, through the first prediction model, the predicted delivery times respectively corresponding to the at least two training samples;
determining the sum of the losses of the at least two training samples through a preset loss function according to the actual delivery times respectively corresponding to the at least two training samples and the predicted delivery times respectively corresponding to the at least two training samples;
and adjusting the parameters of the first prediction model by taking the minimum sum of the losses as an optimization target.
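The optimization loop of claim 7 could be sketched as follows, for illustration only. Squared error is assumed for the "preset loss function", and a linear predictor with plain gradient descent stands in for the first prediction model; the claim itself only requires summing the per-sample losses over at least two samples and adjusting parameters to minimize that sum.

```python
import numpy as np

def squared_error_sum(predicted, actual):
    """Sum of per-sample losses over a batch (squared error assumed)."""
    p = np.asarray(predicted, dtype=float)
    a = np.asarray(actual, dtype=float)
    return float(np.sum((p - a) ** 2))

def sgd_step(w, X, y, lr=0.01):
    """One gradient step on a stand-in linear predictor, moving the
    parameters toward minimizing the summed loss."""
    pred = X @ w
    grad = 2.0 * X.T @ (pred - y)       # d/dw of sum((Xw - y)^2)
    return w - lr * grad
```

A single step should reduce the summed loss on the batch, which is the optimization target named in the claim.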
8. An apparatus for order distribution, the apparatus comprising:
a path planning module configured for determining, for at least one rider, a path plan for the rider from an order to be assigned and an assigned order for the rider;
a first determining module configured to determine a path feature according to the path plan and a path feature extraction submodel included in a pre-trained first prediction model, wherein the path feature extraction submodel includes: long and short term memory networks and attention layers;
a second determining module, configured to, for each order included in the path plan, determine the rider characteristic of the rider corresponding to the order through the rider characteristic extraction submodel included in the first prediction model according to information corresponding to the rider and information corresponding to the order, and determine the predicted delivery time of the order through the output submodel included in the first prediction model according to the determined rider characteristic and the path feature, wherein the rider characteristic extraction submodel includes a first multilayer perceptron;
a third determining module, configured to determine a matching degree between the rider and the order to be assigned according to the determined predicted delivery time of each order and the promised delivery time of each order included in the path plan;
an allocation module configured to allocate the order to be assigned according to a matching degree of the at least one rider with the order to be assigned.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-7 when executing the program.
CN201910667271.5A 2019-07-23 2019-07-23 Order distribution method and device, computer readable storage medium and electronic equipment Active CN110414731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910667271.5A CN110414731B (en) 2019-07-23 2019-07-23 Order distribution method and device, computer readable storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910667271.5A CN110414731B (en) 2019-07-23 2019-07-23 Order distribution method and device, computer readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110414731A CN110414731A (en) 2019-11-05
CN110414731B true CN110414731B (en) 2021-02-02

Family

ID=68362726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910667271.5A Active CN110414731B (en) 2019-07-23 2019-07-23 Order distribution method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110414731B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862133A (en) * 2019-11-12 2021-05-28 北京三快在线科技有限公司 Order processing method and device, readable storage medium and electronic equipment
CN110910019B (en) * 2019-11-22 2022-04-05 拉扎斯网络科技(上海)有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN112907011A (en) * 2019-12-04 2021-06-04 北京三快在线科技有限公司 Distribution scheduling method, device, storage medium and electronic equipment
CN113222487B (en) * 2020-01-21 2023-04-18 北京三快在线科技有限公司 Scheduling path generation method, device, storage medium and electronic equipment
CN111652477B (en) * 2020-05-19 2024-01-23 拉扎斯网络科技(上海)有限公司 Order processing and similarity calculation model obtaining method and device and electronic equipment
CN112036697B (en) * 2020-07-28 2024-06-11 拉扎斯网络科技(上海)有限公司 Task allocation method and device, readable storage medium and electronic equipment
CN112541610B (en) * 2020-08-13 2022-10-21 深圳优地科技有限公司 Robot control method, device, electronic device and storage medium
CN114330797A (en) * 2020-09-27 2022-04-12 北京三快在线科技有限公司 Distribution time length prediction method and device, storage medium and electronic equipment
CN112258131B (en) * 2020-11-12 2021-08-24 拉扎斯网络科技(上海)有限公司 Path prediction network training and order processing method and device
CN112837128B (en) * 2021-02-19 2023-04-28 拉扎斯网络科技(上海)有限公司 Order assignment method, order assignment device, computer equipment and computer readable storage medium
CN113222202A (en) * 2021-06-01 2021-08-06 携程旅游网络技术(上海)有限公司 Reservation vehicle dispatching method, reservation vehicle dispatching system, reservation vehicle dispatching equipment and reservation vehicle dispatching medium
CN113642603B (en) * 2021-07-05 2023-04-28 北京三快在线科技有限公司 Data matching method and device, storage medium and electronic equipment
CN115409452B (en) * 2022-10-27 2024-02-23 浙江口碑网络技术有限公司 Distribution information processing method, device, system, equipment and readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109543886A (en) * 2018-11-06 2019-03-29 斑马网络技术有限公司 Prediction technique, device, terminal and the storage medium of destination
CN109791731A (en) * 2017-06-22 2019-05-21 北京嘀嘀无限科技发展有限公司 A kind of method and system for estimating arrival time

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN107545315B (en) * 2016-06-24 2021-10-08 北京三快在线科技有限公司 Order processing method and device
CN109726843B (en) * 2017-10-30 2023-09-15 阿里巴巴集团控股有限公司 Method, device and terminal for predicting distribution data
EP3507783B1 (en) * 2017-11-23 2021-11-10 Beijing Didi Infinity Technology and Development Co., Ltd. System and method for estimating arrival time
CN109886442A (en) * 2017-12-05 2019-06-14 北京嘀嘀无限科技发展有限公司 It estimates to welcome the emperor duration method and estimate and welcomes the emperor duration system
CN109685276B (en) * 2018-12-27 2021-01-01 拉扎斯网络科技(上海)有限公司 Order processing method and device, electronic equipment and computer readable storage medium

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN109791731A (en) * 2017-06-22 2019-05-21 北京嘀嘀无限科技发展有限公司 A kind of method and system for estimating arrival time
CN109543886A (en) * 2018-11-06 2019-03-29 斑马网络技术有限公司 Prediction technique, device, terminal and the storage medium of destination

Also Published As

Publication number Publication date
CN110414731A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110414731B (en) Order distribution method and device, computer readable storage medium and electronic equipment
CN110231044B (en) Path planning method and device
US20180174108A1 (en) Method, system and non-transitory computer-readable recording medium for providing predictions on calendar
CN110020427B (en) Policy determination method and device
CN111738409B (en) Resource scheduling method and related equipment thereof
CN108683692A (en) A kind of service request processing method and device
CN110705934A (en) Abnormal order identification method and device, readable storage medium and electronic equipment
CN114936085A (en) ETL scheduling method and device based on deep learning algorithm
CN110766513A (en) Information sorting method and device, electronic equipment and readable storage medium
CN111401766A (en) Model, service processing method, device and equipment
CN114925982A (en) Model training method and device, storage medium and electronic equipment
CN112561112A (en) Order distribution method and device, computer readable storage medium and electronic equipment
CN109947564B (en) Service processing method, device, equipment and storage medium
CN116151907A (en) Order processing method and device, electronic equipment and computer storage medium
CN112434986A (en) Order form changing method and device, computer readable storage medium and electronic equipment
CN115439180A (en) Target object determination method and device, electronic equipment and storage medium
CN113298445B (en) Method and device for model training and unmanned equipment scheduling
CN112417275A (en) Information providing method, device storage medium and electronic equipment
CN112862133A (en) Order processing method and device, readable storage medium and electronic equipment
CN114077944A (en) Order allocation method and device, storage medium and electronic equipment
WO2021073237A1 (en) Order assignment
CN114092168A (en) Service processing method, device, storage medium and electronic equipment
CN114202132A (en) Order allocation method and device, storage medium and electronic equipment
CN116738239B (en) Model training method, resource scheduling method, device, system, equipment and medium
US20240232751A9 (en) Information technology automation based on job return on investment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant