CN110648102A - Task processing method and device, electronic equipment and storage medium


Info

Publication number
CN110648102A
CN110648102A (application CN201910927793.4A; granted as CN110648102B)
Authority
CN
China
Prior art keywords
task
processing
processor
address
processed
Prior art date
Legal status
Granted
Application number
CN201910927793.4A
Other languages
Chinese (zh)
Other versions
CN110648102B (en)
Inventor
赵思
叶畅
Current Assignee
Rajax Network Technology Co Ltd
Original Assignee
Rajax Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Rajax Network Technology Co Ltd filed Critical Rajax Network Technology Co Ltd
Priority to CN201910927793.4A priority Critical patent/CN110648102B/en
Priority claimed from CN201910927793.4A external-priority patent/CN110648102B/en
Publication of CN110648102A publication Critical patent/CN110648102A/en
Application granted granted Critical
Publication of CN110648102B publication Critical patent/CN110648102B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083 Shipping
    • G06Q10/0836 Recipient pick-ups

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present disclosure provide a task processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: retrieving, by at least one processor, a task waiting list from a memory in response to a trigger message or instruction from the at least one processor; determining, by the at least one processor, a type of a task to be processed in the task waiting list; when the type of the task to be processed is a main task, determining, by the at least one processor, a processing state of a subtask corresponding to the main task, wherein the main task and the subtask are two parts of a target processing task, the subtask is used for delivering a to-be-processed article from the pickup address of the target processing task to the transfer address of the target processing task, and the main task is used for delivering the to-be-processed article from the transfer address to the receiving address of the target processing task; and automatically processing, by the at least one processor, the main task according to the processing state of the subtask.

Description

Task processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a task processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of internet technology, users place increasingly high quality requirements on logistics and distribution services. Currently, a common instant-delivery service can pick an article up from a pickup address within a short time and then deliver it to a receiving address. In actual distribution, however, some addresses, such as large shopping malls, hospitals, office buildings and universities, make pickup or delivery particularly difficult; such addresses force distribution resources to spend a long time on a single delivery, greatly reducing distribution efficiency.
Disclosure of Invention
Embodiments of the present disclosure provide a task processing method and apparatus, an electronic device, and a storage medium.
In a first aspect, a task processing method is provided in an embodiment of the present disclosure.
Specifically, the task processing method includes:
retrieving, by the at least one processor, the task waiting list from the memory in response to a trigger message or instruction from the at least one processor;
determining, by at least one processor, a type of a task to be processed in the task waiting list;
when the type of the task to be processed is a main task, determining, by the at least one processor, the processing state of a subtask corresponding to the main task; the main task and the subtask are two parts of a target processing task; the subtask is used for delivering the to-be-processed article from the pickup address of the target processing task to the transfer address of the target processing task; the main task is used for delivering the to-be-processed article from the transfer address to the receiving address of the target processing task;
and automatically processing the main task according to the processing state of the subtasks through at least one processor.
With reference to the first aspect, in a first implementation manner of the first aspect, the present disclosure further includes:
fetching, by the at least one processor, the target processing task from the memory in response to a trigger message or instruction from the at least one processor;
when the pickup address of the target processing task matches a preset target address, automatically splitting, by the at least one processor, the target processing task into the main task and the subtask, and determining the target address as the transfer address of the target processing task;
automatically adding, by at least one processor, the subtasks and the main task to the task waiting list and storing in a memory.
With reference to the first aspect and/or the first implementation manner of the first aspect, in a second implementation manner of the first aspect, after determining, by at least one processor, a type of a task to be processed in the task waiting list, the method further includes:
when the type of the task to be processed is a subtask, automatically setting, by the at least one processor, the pickup time and/or the delivery time of the subtask to a first preset threshold;
the subtasks are automatically added to a task processing list by at least one processor and stored in a memory.
With reference to the first aspect, the first implementation manner of the first aspect, and/or the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the processing, by at least one processor, the main task according to the processing state of the sub task automatically includes at least one of:
when the processing state of the subtask is a completed state, or when the processing state of the subtask is an uncompleted state and the remaining delivery time of the subtask is less than a second preset threshold, automatically adding, by the at least one processor, the main task to the task processing list;
and when the processing state of the subtask is an uncompleted state and either no processing resource has been allocated to the subtask or the remaining delivery time of the subtask is greater than the second preset threshold, automatically delaying, by the at least one processor, the processing time of the main task.
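As a sketch, the two dispatch rules above might be expressed as follows. All names here (`SubtaskState`, `handle_main_task`, the `delay_seconds` default) are illustrative and not taken from the disclosure:

```python
from enum import Enum

class SubtaskState(Enum):
    COMPLETED = "completed"
    UNCOMPLETED = "uncompleted"

def handle_main_task(main_task, subtask_state, remaining_delivery_time,
                     resource_allocated, second_threshold, processing_list,
                     delay_seconds=60):
    """Apply the two dispatch rules to one main task (illustrative sketch)."""
    if subtask_state is SubtaskState.COMPLETED or (
            subtask_state is SubtaskState.UNCOMPLETED
            and remaining_delivery_time < second_threshold):
        # Rule 1: subtask done, or almost done -> schedule the main task now
        processing_list.append(main_task)
    elif subtask_state is SubtaskState.UNCOMPLETED and (
            not resource_allocated
            or remaining_delivery_time > second_threshold):
        # Rule 2: subtask unstarted or far from done -> postpone the main task
        main_task["process_at"] += delay_seconds
```

Either branch leaves the other tasks in the waiting list untouched; only the current main task is scheduled or deferred.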
With reference to the first aspect, the first implementation manner of the first aspect, the second implementation manner of the first aspect, and/or the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the present disclosure further includes:
and responding to a trigger message or an instruction from at least one processor, and automatically processing the tasks in the task processing list according to a preset processing strategy through at least one processor.
With reference to the first aspect, the first implementation manner of the first aspect, the second implementation manner of the first aspect, the third implementation manner of the first aspect, and/or the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the automatically processing, by at least one processor, the tasks in the task processing list according to a preset processing policy includes:
and when the type of the current processing task in the task processing list is a subtask, at least allocating the subtask to a processing resource in a preset group through at least one processor.
With reference to the first aspect, the first implementation manner of the first aspect, the second implementation manner of the first aspect, the third implementation manner of the first aspect, the fourth implementation manner of the first aspect, and/or the fifth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, a distance between the pickup address and the preset address is smaller than a preset distance threshold, and a distance between the pickup address and the preset address is smaller than a distance between the preset address and the delivery address.
In a second aspect, a task processing device is provided in an embodiment of the present disclosure.
Specifically, the task processing device includes:
a first obtaining module configured to obtain, by at least one processor, a task waiting list from a memory in response to a trigger message or an instruction from the at least one processor;
a first determining module configured to determine, by at least one processor, a type of a task to be processed in the task waiting list;
the second determining module is configured to determine, by the at least one processor, the processing state of a subtask corresponding to the main task when the type of the task to be processed is a main task; the main task and the subtask are two parts of a target processing task; the subtask is used for delivering the to-be-processed article from the pickup address of the target processing task to the transfer address of the target processing task; the main task is used for delivering the to-be-processed article from the transfer address to the receiving address of the target processing task;
a first processing module configured to automatically process, by at least one processor, the main task according to the processing state of the subtasks.
These functions may be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the task processing device includes a memory and a processor; the memory is used for storing one or more computer instructions that support the task processing device in executing the task processing method of the first aspect, and the processor is configured to execute the computer instructions stored in the memory. The task processing device may further comprise a communication interface for communicating with other devices or a communication network.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of:
retrieving, by the at least one processor, the task waiting list from the memory in response to a trigger message or instruction from the at least one processor;
determining, by at least one processor, a type of a task to be processed in the task waiting list;
when the type of the task to be processed is a main task, determining, by the at least one processor, the processing state of a subtask corresponding to the main task; the main task and the subtask are two parts of a target processing task; the subtask is used for delivering the to-be-processed article from the pickup address of the target processing task to the transfer address of the target processing task; the main task is used for delivering the to-be-processed article from the transfer address to the receiving address of the target processing task;
and automatically processing, by the at least one processor, the main task according to the processing state of the subtask.
With reference to the third aspect, in a first implementation manner of the third aspect, the one or more computer instructions are further executed by the processor to implement the following method steps:
fetching, by the at least one processor, the target processing task from the memory in response to a trigger message or instruction from the at least one processor;
when the pickup address of the target processing task matches a preset target address, automatically splitting, by the at least one processor, the target processing task into the main task and the subtask, and determining the target address as the transfer address of the target processing task;
automatically adding, by at least one processor, the subtasks and the main task to the task waiting list and storing in a memory.
With reference to the third aspect and/or the first implementation manner of the third aspect, in a second implementation manner of the third aspect, after determining, by the at least one processor, the type of the task to be processed in the task waiting list, the one or more computer instructions are further executed by the processor to implement the following method steps:
when the type of the task to be processed is a subtask, automatically setting, by the at least one processor, the pickup time and/or the delivery time of the subtask to a first preset threshold;
the subtasks are automatically added to a task processing list by at least one processor and stored in a memory.
With reference to the third aspect, the first implementation manner of the third aspect, and/or the second implementation manner of the third aspect, in a third implementation manner of the third aspect, the processing, by at least one processor, the main task according to the processing state of the sub task automatically includes at least one of:
when the processing state of the subtask is a completed state, or when the processing state of the subtask is an uncompleted state and the remaining delivery time of the subtask is less than a second preset threshold, automatically adding, by the at least one processor, the main task to the task processing list;
and when the processing state of the subtask is an uncompleted state and either no processing resource has been allocated to the subtask or the remaining delivery time of the subtask is greater than the second preset threshold, automatically delaying, by the at least one processor, the processing time of the main task.
With reference to the third aspect, the first implementation manner of the third aspect, the second implementation manner of the third aspect, and/or the third implementation manner of the third aspect, in a fourth implementation manner of the third aspect, the one or more computer instructions are further executed by the processor to implement the following method steps:
and responding to a trigger message or an instruction from at least one processor, and automatically processing the tasks in the task processing list according to a preset processing strategy through at least one processor.
With reference to the third aspect, the first implementation manner of the third aspect, the second implementation manner of the third aspect, the third implementation manner of the third aspect, and/or the fourth implementation manner of the third aspect, in a fifth implementation manner of the third aspect, the automatically processing, by at least one processor, the tasks in the task processing list according to a preset processing policy includes:
and when the type of the current processing task in the task processing list is a subtask, at least allocating the subtask to a processing resource in a preset group through at least one processor.
With reference to the third aspect, the first implementation manner of the third aspect, the second implementation manner of the third aspect, the third implementation manner of the third aspect, the fourth implementation manner of the third aspect, and/or the fifth implementation manner of the third aspect, in a sixth implementation manner of the third aspect, the distance between the pickup address and the preset address is smaller than a preset distance threshold, and the distance between the pickup address and the preset address is smaller than the distance between the preset address and the delivery address.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium for storing computer instructions for a task processing apparatus, which includes computer instructions for performing any of the methods described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the course of task processing, a task to be processed is handled according to its type, and the task to be processed may be one of the two parts obtained by splitting a target processing task, namely a subtask or a main task. The subtask takes the article from the pickup address and temporarily places it at a preset address; the main task delivers the article from the preset address to the receiving address. In this way, a task whose pickup address is a hard-to-pick address is divided into two parts: a group of dedicated processing resources is responsible for carrying articles from the hard-to-pick address to a centralized preset address, and other processing resources pick the articles up from the preset address and deliver them normally. For such a processing mode, in the embodiments of the present disclosure, when the task to be processed is a main task, the processing state of the subtask corresponding to the main task is determined first, and the main task is processed according to that state, thereby avoiding the situation in which the article has not yet been carried from the pickup address to the preset address while the processing resource of the main task is already waiting at the preset address.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 illustrates a flowchart of a task processing method according to an embodiment of the present disclosure;
FIG. 2 illustrates a flowchart of splitting a task according to the embodiment illustrated in FIG. 1;
FIG. 3 illustrates a flowchart of adding a subtask to the task waiting list according to the embodiment illustrated in FIG. 1;
FIG. 4 is a block diagram showing a structure of a task processing device according to an embodiment of the present disclosure;
FIG. 5 is a block diagram showing a structure for splitting a task according to the embodiment shown in FIG. 4;
FIG. 6 is a block diagram showing a structure for adding a subtask to the task waiting list according to the embodiment shown in FIG. 4;
FIG. 7 is a schematic structural diagram of an electronic device suitable for implementing a task processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flowchart of a task processing method according to an embodiment of the present disclosure. As shown in fig. 1, the task processing method includes the following steps:
in step S101, in response to a trigger message or an instruction from at least one processor, acquiring, by the at least one processor, a task waiting list from a memory;
in step S102, determining, by at least one processor, a type of a task to be processed in the task waiting list;
in step S103, when the type of the task to be processed is a main task, determining, by the at least one processor, the processing state of a subtask corresponding to the main task; the main task and the subtask are two parts of a target processing task; the subtask is used for delivering the to-be-processed article from the pickup address of the target processing task to the transfer address of the target processing task; the main task is used for delivering the to-be-processed article from the transfer address to the receiving address of the target processing task;
in step S104, the main task is automatically processed according to the processing state of the subtasks by at least one processor.
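Steps S101 to S104 can be sketched as follows, assuming a simple dict-backed in-memory store; the field names (`type`, `subtask`, `state`) are hypothetical and chosen only for illustration:

```python
def process_waiting_list(memory):
    """Sketch of steps S101-S104 over a hypothetical dict-backed memory."""
    waiting_list = memory["task_waiting_list"]      # S101: fetch the waiting list
    for task in waiting_list:
        if task["type"] == "main":                  # S102: determine the task type
            subtask = task["subtask"]               # S103: corresponding subtask
            # S104: process the main task according to the subtask's state;
            # here, only a completed subtask releases the main task for dispatch
            if subtask["state"] == "completed":
                memory["task_processing_list"].append(task)
```

Main tasks whose subtask is still uncompleted simply remain in the waiting list and are re-examined on the next timer-driven trigger.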
In this embodiment, a pickup-delivery separation mode is adopted for hard-to-pick and hard-to-deliver addresses encountered during task processing. Pickup-delivery separation means that dedicated processing resources are arranged at the pickup or delivery side for such addresses and perform only the pickup or only the delivery operation; it therefore comprises a pickup-separation mode and a delivery-separation mode. In the pickup-separation mode, after a group of dedicated processing resources takes articles from a pickup address that is a hard-to-pick address, the articles are placed at a preset centralized pickup address near the hard-to-pick address; other processing resources then pick the articles up from the centralized address and deliver them to the receiving address in the normal processing mode. In the delivery-separation mode, when the receiving address is a hard-to-deliver address, a processing resource takes the article from the pickup address and handles it in the normal processing mode, except that the destination is not the receiving address but a preset centralized delivery address near it; a dedicated processing resource then takes the article from the centralized delivery address and delivers it to the receiving address. Ordinary processing resources thus only need to pick up at the centralized pickup address or drop off at the centralized delivery address, avoiding the excessive time otherwise spent picking up or delivering inside a building.
To implement the pickup-delivery separation mode described above, a target processing task whose pickup address is a hard-to-pick address and/or whose receiving address is a hard-to-deliver address may be split into two parts, referred to as a subtask and a main task. The embodiments of the present disclosure are directed to the pickup-separation mode, that is, the situation in which the pickup address is a hard-to-pick address. Taking this mode as an example, the subtask is used to deliver the to-be-processed item from the pickup address of the target processing task to a preset transfer address, and the main task is used to deliver the to-be-processed item from the preset transfer address to the receiving address of the target processing task. The preset transfer address may be a preset centralized pickup address near the pickup address. A group of processing resources may be stationed near the centralized pickup address to pick articles up from surrounding hard-to-pick addresses and carry them to the centralized pickup address; other, ordinary processing resources then pick the articles up from the centralized pickup address and deliver them to the receiving address.
Transfer addresses may be set according to the actual conditions of each distribution area. For example, office buildings, hospitals and schools are all hard-to-pick addresses, so a corresponding transfer address may be preset near each of them. For a task whose pickup address matches such a hard-to-pick address, a dedicated processing resource first takes the to-be-processed article from the pickup address to the transfer address, and another processing resource then delivers the article from the transfer address to the receiving address of the task.
In some embodiments, the distance between the pickup address and the preset address is less than a preset distance threshold, and the distance between the pickup address and the preset address is less than the distance between the preset address and the receiving address. The preset distance threshold may be a small value, meaning that the preset address is near the pickup address; it may be determined according to the actual situation and is not limited here. In this way, for tasks whose pickup address is a hard-to-pick address such as an office building, a hospital or a school, a centralized pickup address, namely the transfer address, is arranged nearby; dedicated processing resources carry the to-be-processed articles from the hard-to-pick address to the transfer address, and other distribution resources deliver them to the receiving address of the task according to the distribution scheme given by the scheduling policy. This saves the time otherwise spent picking articles up at the hard-to-pick address and improves task processing efficiency.
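The two distance conditions can be written as a simple predicate. The straight-line haversine distance and the 500 m default threshold used below are illustrative assumptions; a production system would more likely use road distance:

```python
import math

def is_valid_transfer_address(pickup, transfer, delivery, threshold_m=500.0):
    """Check the two distance conditions: pickup-to-transfer must be under the
    threshold, and shorter than transfer-to-delivery.

    pickup/transfer/delivery are (lat, lon) pairs in degrees; distances are
    straight-line haversine metres, an assumption made for illustration.
    """
    def haversine(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * math.asin(math.sqrt(h))  # Earth radius ~6371 km

    d_pickup_transfer = haversine(pickup, transfer)
    d_transfer_delivery = haversine(transfer, delivery)
    return (d_pickup_transfer < threshold_m
            and d_pickup_transfer < d_transfer_delivery)
```

A candidate transfer address a block away from the pickup address passes, while one several kilometres away fails the threshold condition.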
The task processing method in this embodiment is executed on the server side, at least one processor in the server may periodically generate a trigger message or an instruction under the trigger of a timer, and after receiving the trigger message or the instruction, the at least one processor in the server acquires the task waiting list from the memory. The task waiting list may include a plurality of pending tasks that are waiting to be scheduled. The task to be processed may be the above-mentioned main task, sub-task or ordinary task without splitting.
To improve processing efficiency, it should be ensured that by the time the processing resource allocated to the main task reaches the preset address, the dedicated processing resource allocated to the subtask has already carried the to-be-processed article from the pickup address to the preset address, so that the main task's resource does not have to wait. Accordingly, in the embodiments of the present disclosure, when the server processes the tasks in the task waiting list through the at least one processor and the currently processed task is a main task obtained by splitting some target processing task, the processing state of the subtask obtained from that split is determined by the at least one processor, and the main task is then processed according to that state. For example, if the subtask is not yet completed, the main task may be held back until the subtask's processing state becomes completed, and only then processed.
Of course, it can be understood that when the task to be processed is not the main task, the processing may be performed normally, for example, scheduling is performed.
In the course of task processing, a task to be processed is handled according to its type, and the task to be processed may be one of the two parts obtained by splitting a target processing task, namely a subtask or a main task. The subtask takes the article from the pickup address and temporarily places it at a preset address; the main task delivers the article from the preset address to the receiving address. In this way, a task whose pickup address is a hard-to-pick address is divided into two parts: a group of dedicated processing resources is responsible for carrying articles from the hard-to-pick address to a centralized preset address, and other processing resources pick the articles up from the preset address and deliver them normally. For such a processing mode, in the embodiments of the present disclosure, when the task to be processed is a main task, the processing state of the subtask corresponding to the main task is determined first, and the main task is processed according to that state, thereby avoiding the situation in which the article has not yet been carried from the pickup address to the preset address while the processing resource of the main task is already waiting at the preset address.
In an optional implementation manner of this embodiment, as shown in fig. 2, the method further includes the following steps:
in step S201, in response to a trigger message or instruction from at least one processor, a target processing task is fetched from a memory by the at least one processor;
in step S202, when the pick-up address of the target processing task matches a preset target address, automatically splitting the target processing task into the main task and the subtask by at least one processor, and determining the target address as a transit address of the target processing task;
in step S203, the subtasks and the main task are automatically added to the task waiting list by at least one processor and stored in a memory.
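Steps S201 to S203 can be sketched as follows. The dictionary fields, the `covers` predicate (standing in for "the pick-up address is within the range covered by the preset target address"), and the address encoding are illustrative assumptions, not part of the disclosure.

```python
def split_if_hard_pickup(task, target_addresses, covers):
    """Split a target processing task when its pick-up address falls inside
    the coverage of one of the preset target (transit) addresses.

    Returns the list of tasks to append to the task waiting list.
    """
    for target in target_addresses:
        if covers(target, task["pickup"]):
            # S202: split into subtask and main task; the matched target
            # address becomes the transit address of the target task.
            subtask = {"type": "subtask", "target_id": task["id"],
                       "pickup": task["pickup"], "dropoff": target}
            main = {"type": "main", "target_id": task["id"],
                    "pickup": target, "dropoff": task["delivery"]}
            task["transit"] = target
            return [subtask, main]  # S203: both parts enter the waiting list
    return [task]  # no match: the task is kept whole and scheduled normally
```

For a task whose pick-up address lies inside a covered building, the function yields one subtask ending at the transit address and one main task starting from it; tasks outside every coverage range pass through unsplit.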
In this optional implementation, some difficult-to-pick addresses may be identified in advance, for example office buildings, shopping malls, and schools. A preset target address is selected around each difficult-to-pick address, and some special processing resources are arranged there; these processing resources are used for picking up articles from pick-up addresses within the coverage range of the difficult-to-pick address and carrying them to the preset target address.
After the dispatching system takes a target processing task out of the storage area under the trigger of a trigger message or instruction generated by at least one processor, it can judge, through the at least one processor, whether the pick-up address of the target processing task matches one of the preset target addresses, for example, whether the pick-up address is within the range covered by a preset target address. If the pick-up address matches a preset target address, the target processing task is split into a subtask and a main task through the at least one processor, and the subtask and the main task are added to a task list to wait for processing, such as dispatching. Meanwhile, the preset target address is determined as the transit address of the target processing task through the at least one processor. The scheduling system also stores, through the at least one processor, the subtask and the main task in the task waiting list in the memory.
In an optional implementation manner of this embodiment, as shown in fig. 3, after the step S102 of determining, by at least one processor, a type of the task to be processed in the task waiting list, the method further includes the following steps:
in step S301, when the type of the task to be processed is a subtask, automatically setting, by at least one processor, a pickup time and/or a delivery time of the subtask to a first preset threshold;
in step S302, the subtasks are automatically added to the task processing list by at least one processor and stored in a memory.
In this optional implementation, when the scheduling system schedules the tasks to be processed in the task waiting list, if a task to be processed is a subtask split from a target processing task, the at least one processor may automatically set the pick-up time and/or delivery time of the subtask to a first preset threshold, automatically add the subtask to the task processing list, and store it in the memory. The pick-up time of the subtask is the time needed for the processing resource, after receiving the subtask, to take the article to be processed from the pick-up address, and the delivery time is the time needed for the processing resource to carry the article to be processed to the preset address.
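Steps S301 and S302 amount to stamping the subtask with the preset time estimates and moving it into the processing list. A minimal sketch follows; the threshold value and field names are assumptions made for illustration.

```python
FIRST_PRESET_THRESHOLD_MIN = 10  # assumed value of the first preset threshold, in minutes

def schedule_subtask(subtask, task_processing_list):
    """S301: set the subtask's pick-up and delivery times to the first
    preset threshold; S302: add it to the task processing list."""
    subtask["pickup_time_min"] = FIRST_PRESET_THRESHOLD_MIN
    subtask["delivery_time_min"] = FIRST_PRESET_THRESHOLD_MIN
    task_processing_list.append(subtask)
    return subtask
```

These stamped times are the ones the remaining-delivery-time calculation below relies on when a main task is later evaluated.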
The tasks included in the task processing list are tasks about to enter a processing flow, such as scheduling. The processing system adds the tasks that need to be processed immediately to the task processing list, and sends the list, together with the currently available processing resource list, into an execution process of a preset processing strategy for the next round of processing. The preset processing strategy is predetermined by the processing platform; it matches each task in the task processing list against the processing resource list to obtain the best candidate processing resources and returns the matching result, after which the processing system selects one processing resource from the candidates for each task to be allocated. The preset processing strategies employed by different processing platforms may differ, and are not limited herein.
In an optional implementation manner of this embodiment, the step S104, namely, the step of automatically processing, by at least one processor, the main task according to the processing state of the subtask further includes at least one of the following steps:
when the processing state of the subtasks is a finished state, or when the processing state of the subtasks is an unfinished state and the remaining delivery time of the subtasks is less than a second preset threshold value, automatically adding the main task into a task processing list through at least one processor;
and when the processing state of the subtasks is an uncompleted state and the subtasks do not allocate processing resources or the remaining delivery time of the subtasks is greater than a second preset threshold, automatically delaying the processing time of the main task by at least one processor.
In this alternative implementation, if the subtask has been completed and delivered, indicating that the object to be processed has already been carried from the pick-up address to the preset address, the main task may be added to the task processing list by the at least one processor for processing.
In addition, a predicted pick-up time and delivery time can be set for the subtask through the at least one processor. When the subtask has not yet been delivered, the at least one processor can automatically determine the remaining delivery time from the pick-up time and the delivery time; that is, the remaining delivery time equals the time at which the processing resource was allocated, plus the pick-up time and the delivery time, minus the current time.
Since it also takes a certain time for the processing resource, once allocated to the main task, to travel from its current location to the preset address, the second preset threshold may be set, through the at least one processor, to the expected time from when the processing resource receives the main task to when it reaches the preset address. When the remaining delivery time of the subtask is less than the second preset threshold, the main task is added to the task processing list for processing; by the time a processing resource has been allocated to the main task and has reached the preset address, the subtask will have been completed, that is, the article to be processed will have been carried to the preset address, so the processing resource of the main task will not have to wait at the preset address for the object to be processed to arrive.
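The decision rule of this implementation can be sketched as follows. The string state labels, the requirement that a resource already be assigned before the remaining-time shortcut applies, and the minute-based clock are illustrative assumptions.

```python
def remaining_delivery_min(assigned_at_min, pickup_min, delivery_min, now_min):
    # remaining delivery time = allocation time + pick-up time + delivery time - current time
    return assigned_at_min + pickup_min + delivery_min - now_min

def decide_main_task(sub_state, sub_assigned, remaining_min, second_threshold_min):
    """Return "enqueue" to add the main task to the task processing list,
    or "delay" to push back its processing time."""
    if sub_state == "finished":
        return "enqueue"  # article already at the preset address
    if sub_assigned and remaining_min < second_threshold_min:
        # The subtask will finish before the main-task resource can arrive,
        # so the main task may be dispatched now without risking a wait.
        return "enqueue"
    return "delay"  # no resource assigned yet, or too much delivery time left
```

With a second threshold of 8 minutes, a subtask with 5 minutes remaining lets the main task through, while one with 15 minutes remaining, or one with no assigned resource, delays it.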
In an optional implementation manner of this embodiment, the method further includes the following steps:
and responding to a trigger message or an instruction from at least one processor, and automatically processing the tasks in the task processing list according to a preset processing strategy through at least one processor.
In this optional implementation, after receiving a task, the processing system may add the task to the task waiting list through at least one processor and perform a preset condition judgment on the tasks in the task waiting list, that is, judge whether each task in the task waiting list satisfies the condition for immediate processing; for example, a main task needs to wait for its subtask to complete delivery before being processed, and some tasks may be delayed for other reasons. Only when a task satisfies the condition for immediate processing does the processing system add it to the task processing list and send the task processing list, together with the currently available processing resource list, into the execution process of the preset processing strategy for the next round of processing. By executing this process on at least one processor, the best candidate processing resources for each task in the task processing list can be obtained by matching against the processing resource list according to the preset processing strategy, and the processing system can then allocate a corresponding processing resource to the task from among the candidates. The preset processing strategies used by different platforms differ; the general principle is to allocate nearby processing resources to a task and to maximize the utilization of the processing resources, which is determined according to the actual situation and is not limited herein.
In an optional implementation manner of this embodiment, the step of automatically processing, by at least one processor, the tasks in the task processing list according to a preset processing policy further includes the following steps:
and when the type of the current processing task in the task processing list is a subtask, at least allocating the subtask to a processing resource in a preset group through at least one processor.
In this optional implementation, since the subtask needs to be picked up and carried by processing resources specially arranged near the difficult-to-pick address, when processing the tasks in the task processing list, the at least one processor first determines whether the currently processed task is a subtask, and if so, controls the subtask to be allocated to a processing resource in a preset group. The preset group corresponds to the pick-up address of the subtask and is set in advance. The processing resources in the preset group may be a batch of processing resources specially arranged at the transit address according to the actual situation and specifically used for processing subtasks, such as taking articles from the pick-up address and placing them at the transit address. The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
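The restriction of a subtask's candidates to its preset group can be sketched as below. Keying the groups by pick-up address and tagging each resource with a `group` field are assumptions for illustration; the disclosure only states that the group corresponds to the pick-up address.

```python
def candidate_resources(task, resources, preset_groups):
    """Restrict a subtask's candidate processing resources to the preset
    group corresponding to its pick-up address; any other task may be
    matched against every available resource."""
    if task["type"] == "subtask":
        group = preset_groups[task["pickup"]]  # group keyed by pick-up address
        return [r for r in resources if r["group"] == group]
    return list(resources)
```

The preset processing strategy would then run its normal matching only over the returned candidates, so a subtask can never be assigned outside its dedicated group.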
Fig. 4 shows a block diagram of a task processing device according to an embodiment of the present disclosure, which may be implemented as part or all of an electronic device by software, hardware, or a combination of both. As shown in fig. 4, the task processing device includes:
a first obtaining module 401 configured to obtain, by at least one processor, a task waiting list from a memory in response to a trigger message or an instruction from the at least one processor;
a first determining module 402 configured to determine, by at least one processor, a type of a task to be processed in the task waiting list;
a second determining module 403, configured to determine, by at least one processor, a processing state of a sub-task corresponding to the main task when the type of the to-be-processed task is the main task; the main task and the subtask are two parts of a target processing task; the subtasks are used for distributing the to-be-processed articles from the goods taking address of the target processing task to the transfer address of the target processing task; the main task is used for distributing the to-be-processed article from the preset address to a receiving address of the target processing task;
a first processing module 404 configured to automatically process, by at least one processor, the main task according to the processing state of the subtasks.
In this embodiment, a pick-and-deliver separation mode is adopted for the difficult-to-pick addresses and difficult-to-deliver addresses encountered in task processing. Pick-and-deliver separation means that, for difficult-to-pick and difficult-to-deliver addresses, special processing resources are arranged at the pick-up address or delivery address to perform only the pick-up or only the delivery operation. That is, pick-and-deliver separation includes a pick-separation mode and a deliver-separation mode. In the pick-separation mode, after a portion of specially arranged processing resources take articles from pick-up addresses belonging to a difficult-to-pick address, they place the articles at a preset centralized pick-up address near the difficult-to-pick address; other processing resources then take the articles from the centralized address and distribute them to the receiving addresses in the normal processing mode. In the deliver-separation mode, when the delivery address is a difficult-to-deliver address, a processing resource takes the article from the pick-up address and handles it in the normal processing mode, but the address it delivers to is not the delivery address itself; it is a preset centralized delivery address near the delivery address, from which special processing resources then take the article and deliver it to the delivery address. Ordinary processing resources thus only need to pick up at the centralized pick-up address or deliver to the centralized delivery address, avoiding the problem of spending too much time picking up or delivering inside a building.
To implement the above pick-and-deliver separation mode, a target processing task whose pick-up address is a difficult-to-pick address and/or whose delivery address is a difficult-to-deliver address may be split into two parts, which may be called a subtask and a main task. The embodiments of the present disclosure are directed to the pick-separation mode, that is, the situation where the pick-up address is a difficult-to-pick address. Taking the pick-separation mode as an example, the subtask is used to carry the article to be processed from the pick-up address of the target processing task to a preset transit address, and the main task is used to carry the article to be processed from the preset transit address to the receiving address of the target processing task. The preset transit address may be a preset centralized pick-up address near the pick-up address. A portion of processing resources may be specially arranged near the centralized pick-up address to take articles from the surrounding difficult-to-pick addresses and carry them to the centralized pick-up address, after which other ordinary processing resources take the articles from the centralized pick-up address and distribute them to the receiving addresses.
The transit addresses can be set according to the actual conditions of the distribution areas. For example, office buildings, hospitals, and schools all belong to difficult-to-pick addresses, so corresponding transit addresses can be preset near them; the articles to be processed for tasks whose pick-up addresses match such difficult-to-pick addresses are first taken from the pick-up addresses by special processing resources and carried to the transit addresses, and the articles at the transit addresses are then delivered to the delivery addresses of the tasks by other processing resources.
In some embodiments, the distance between the pick-up address and the preset address is less than a preset distance threshold, and is also less than the distance between the preset address and the delivery address. The preset distance threshold may be a small value, that is, the preset address lies near the pick-up address; it may be determined according to the actual situation and is not limited herein. In this way, for tasks whose pick-up addresses are difficult-to-pick addresses such as office buildings, hospitals, and schools, a centralized pick-up address, namely the transit address, is specially arranged near the difficult-to-pick address. Special processing resources carry the articles to be processed from the difficult-to-pick address to the transit address, and other distribution resources then deliver them to the delivery addresses of the tasks according to the distribution scheme given by the scheduling strategy, which saves the time spent picking up at the difficult-to-pick address and improves the processing efficiency of the tasks.
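The two distance constraints stated above can be expressed as a small validity check. Representing addresses as planar coordinates and using Euclidean distance are simplifying assumptions; a real system would use road-network or geodesic distance.

```python
import math

def valid_transit(pickup, transit, delivery, max_dist):
    """Check the constraints of this embodiment: the transit (preset)
    address is within `max_dist` of the pick-up address, and the
    pickup-to-transit distance is smaller than the transit-to-delivery
    distance."""
    d = math.dist  # Euclidean distance between coordinate tuples
    return d(pickup, transit) < max_dist and d(pickup, transit) < d(transit, delivery)
```

A transit point one unit from the pick-up address and several units from the delivery address satisfies both constraints; one beyond the threshold does not.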
The task processing device in this embodiment is located on the server side, at least one processor in the server may periodically generate a trigger message or an instruction under the trigger of a timer, and after receiving the trigger message or the instruction, the at least one processor in the server obtains the task waiting list from the memory. The task waiting list may include a plurality of pending tasks that are waiting to be scheduled. The task to be processed may be the above-mentioned main task, sub-task or ordinary task without splitting.
To improve processing efficiency, it should be ensured that, by the time the processing resource allocated to the main task reaches the preset address, the special processing resource allocated to the subtask has already carried the object to be processed from the pick-up address to the preset address, so that the main-task resource does not have to wait. Therefore, in the embodiment of the disclosure, when the server processes the tasks to be processed in the task waiting list through at least one processor, and the type of the currently processed task is a main task obtained by splitting a target processing task, the processing state of the subtask obtained by splitting the same target processing task is first determined through the at least one processor, and the main task is then processed according to that processing state. For example, if the processing state of the subtask is uncompleted, the main task is not processed immediately; instead, the system waits until the processing state of the subtask becomes completed and then processes the main task.
Of course, it can be understood that when the task to be processed is not the main task, the processing may be performed normally, for example, scheduling is performed.
In the task processing process, the task to be processed is handled according to its type, and may be one of the two parts obtained by splitting a target processing task, namely a subtask or a main task. The subtask is used for taking the article out from the pick-up address and temporarily placing it at a preset address, and the main task is used for delivering the article from the preset address to the delivery address. In this way, a task whose pick-up address is a difficult-to-pick address is divided into two parts: a portion of special processing resources is responsible for carrying articles from the difficult-to-pick address to a centralized preset address, and other processing resources pick the articles up from the preset address and deliver them normally. For such a processing mode, in the embodiment of the present disclosure, when the task to be processed is a main task, the processing state of the subtask corresponding to the main task is determined first, and the main task is processed according to that state, so as to avoid the situation in which the article has not yet been carried from the pick-up address to the preset address while the processing resource of the main task has already reached the preset address and must wait.
In an optional implementation manner of this embodiment, as shown in fig. 5, the apparatus further includes:
a second fetching module 501 configured to fetch, by at least one processor, a target processing task from a memory in response to a trigger message or instruction from the at least one processor;
a splitting module 502 configured to, by at least one processor, automatically split the target processing task into the main task and the sub-task when the pickup address of the target processing task matches a preset target address, and determine the target address as a transit address of the target processing task;
a first joining module 503 configured to automatically join, by at least one processor, the subtasks and the main task in the task waiting list and store in a memory.
In this optional implementation, some difficult-to-pick addresses may be identified in advance, for example office buildings, shopping malls, and schools. A preset target address is selected around each difficult-to-pick address, and some special processing resources are arranged there; these processing resources are used for picking up articles from pick-up addresses within the coverage range of the difficult-to-pick address and carrying them to the preset target address.
After the dispatching system takes a target processing task out of the storage area under the trigger of a trigger message or instruction generated by at least one processor, it can judge, through the at least one processor, whether the pick-up address of the target processing task matches one of the preset target addresses, for example, whether the pick-up address is within the range covered by a preset target address. If the pick-up address matches a preset target address, the target processing task is split into a subtask and a main task through the at least one processor, and the subtask and the main task are added to a task list to wait for processing, such as dispatching. Meanwhile, the preset target address is determined as the transit address of the target processing task through the at least one processor. The scheduling system also stores, through the at least one processor, the subtask and the main task in the task waiting list in the memory.
In an optional implementation manner of this embodiment, as shown in fig. 6, after the first determining module 402, the apparatus further includes:
a setting module 601 configured to automatically set, by at least one processor, a pick-up time and/or a delivery time of the subtask to a first preset threshold when the type of the task to be processed is the subtask;
a second joining module 602 configured to automatically join, by at least one processor, the subtasks into a task processing list and store in memory.
In this optional implementation, when the scheduling system schedules the tasks to be processed in the task waiting list, if a task to be processed is a subtask split from a target processing task, the at least one processor may automatically set the pick-up time and/or delivery time of the subtask to a first preset threshold, automatically add the subtask to the task processing list, and store it in the memory. The pick-up time of the subtask is the time needed for the processing resource, after receiving the subtask, to take the article to be processed from the pick-up address, and the delivery time is the time needed for the processing resource to carry the article to be processed to the preset address.
The tasks included in the task processing list are tasks about to enter a processing flow, such as scheduling. The processing system adds the tasks that need to be processed immediately to the task processing list, and sends the list, together with the currently available processing resource list, into an execution process of a preset processing strategy for the next round of processing. The preset processing strategy is predetermined by the processing platform; it matches each task in the task processing list against the processing resource list to obtain the best candidate processing resources and returns the matching result, after which the processing system selects one processing resource from the candidates for each task to be allocated. The preset processing strategies employed by different processing platforms may differ, and are not limited herein.
In an optional implementation manner of this embodiment, the first processing module 404 includes at least one of:
the joining sub-module is configured to automatically join the main task into a task processing list through at least one processor when the processing state of the subtask is a completed state or when the processing state of the subtask is an uncompleted state and the remaining delivery time of the subtask is less than a second preset threshold value;
and the delay submodule is configured to automatically delay the processing time of the main task through at least one processor when the processing state of the subtask is an incomplete state and the subtask does not allocate processing resources or the remaining delivery time of the subtask is greater than a second preset threshold value.
In this alternative implementation, if the subtask has been completed and delivered, indicating that the object to be processed has already been carried from the pick-up address to the preset address, the main task may be added to the task processing list by the at least one processor for processing.
In addition, a predicted pick-up time and delivery time can be set for the subtask through the at least one processor. When the subtask has not yet been delivered, the at least one processor can automatically determine the remaining delivery time from the pick-up time and the delivery time; that is, the remaining delivery time equals the time at which the processing resource was allocated, plus the pick-up time and the delivery time, minus the current time.
Since it also takes a certain time for the processing resource, once allocated to the main task, to travel from its current location to the preset address, the second preset threshold may be set, through the at least one processor, to the expected time from when the processing resource receives the main task to when it reaches the preset address. When the remaining delivery time of the subtask is less than the second preset threshold, the main task is added to the task processing list for processing; by the time a processing resource has been allocated to the main task and has reached the preset address, the subtask will have been completed, that is, the article to be processed will have been carried to the preset address, so the processing resource of the main task will not have to wait at the preset address for the object to be processed to arrive.
In an optional implementation manner of this embodiment, the apparatus further includes:
and the second processing module is configured to respond to a trigger message or an instruction from at least one processor, and automatically process the tasks in the task processing list according to a preset processing strategy through at least one processor.
In this optional implementation, after receiving a task, the processing system may add the task to the task waiting list through at least one processor and perform a preset condition judgment on the tasks in the task waiting list, that is, judge whether each task in the task waiting list satisfies the condition for immediate processing; for example, a main task needs to wait for its subtask to complete delivery before being processed, and some tasks may be delayed for other reasons. Only when a task satisfies the condition for immediate processing does the processing system add it to the task processing list and send the task processing list, together with the currently available processing resource list, into the execution process of the preset processing strategy for the next round of processing. By executing this process on at least one processor, the best candidate processing resources for each task in the task processing list can be obtained by matching against the processing resource list according to the preset processing strategy, and the processing system can then allocate a corresponding processing resource to the task from among the candidates. The preset processing strategies used by different platforms differ; the general principle is to allocate nearby processing resources to a task and to maximize the utilization of the processing resources, which is determined according to the actual situation and is not limited herein.
In an optional implementation manner of this embodiment, the second processing module includes:
and the allocation submodule is configured to allocate, through at least one processor, the subtask to a processing resource in a preset group when the type of the currently processed task in the task processing list is a subtask.
In this optional implementation, since the subtask needs to be picked up and carried by processing resources specially arranged near the difficult-to-pick address, when processing the tasks in the task processing list, the at least one processor first determines whether the currently processed task is a subtask, and if so, controls the subtask to be allocated to a processing resource in a preset group. The preset group corresponds to the pick-up address of the subtask and is set in advance. The processing resources in the preset group may be a batch of processing resources specially arranged at the transit address according to the actual situation and specifically used for processing subtasks, such as taking articles from the pick-up address and placing them at the transit address.
The embodiment of the present disclosure also provides an electronic device, as shown in fig. 7, including at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701; wherein the memory 702 stores instructions executable by the at least one processor 701 to perform, by the at least one processor 701, the steps of:
retrieving, by the at least one processor, the task waiting list from the memory in response to a trigger message or instruction from the at least one processor;
determining, by at least one processor, a type of a task to be processed in the task waiting list;
when the type of the task to be processed is a main task, determining the processing state of a subtask corresponding to the main task through at least one processor; the main task and the subtask are two parts of a target processing task; the subtasks are used for distributing the to-be-processed articles from the goods taking address of the target processing task to the transfer address of the target processing task; the main task is used for distributing the to-be-processed article from the preset address to a receiving address of the target processing task;
and automatically processing the main task according to the processing state of the subtasks through at least one processor.
Wherein the one or more computer instructions are further executable by the processor to implement the method steps of:
fetching, by the at least one processor, the target processing task from the memory in response to a trigger message or instruction from the at least one processor;
when the goods taking address of the target processing task is matched with a preset target address, the target processing task is automatically split into the main task and the subtasks through at least one processor, and the target address is determined as a transfer address of the target processing task;
automatically adding, by at least one processor, the subtasks and the main task to the task waiting list and storing in a memory.
Wherein, after determining, by at least one processor, the type of a task to be processed in the task waiting list, the one or more computer instructions are further executable by the processor to implement the following method steps:
When the type of the task to be processed is a subtask, automatically setting the goods taking time and/or the goods delivery time of the subtask to be a first preset threshold value through at least one processor;
the subtasks are automatically added to a task processing list by at least one processor and stored in a memory.
Wherein automatically processing the main task according to the processing state of the subtask by at least one processor comprises at least one of:
when the processing state of the subtasks is a finished state, or when the processing state of the subtasks is an unfinished state and the remaining delivery time of the subtasks is less than a second preset threshold value, automatically adding the main task into a task processing list through at least one processor;
and when the processing state of the subtasks is an uncompleted state and the subtasks do not allocate processing resources or the remaining delivery time of the subtasks is greater than a second preset threshold, automatically delaying the processing time of the main task by at least one processor.
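The two branches above reduce to a small decision function. The sketch below is an assumed reading of the rules (parameter names are illustrative); where the two conditions could overlap, it treats a subtask with no allocated processing resource as grounds for delay.

```python
def handle_main_task(sub_finished: bool, sub_has_resource: bool,
                     remaining_delivery_time: float,
                     second_threshold: float) -> str:
    """Decide whether a main task is added to the task processing list
    ('enqueue') or has its processing time delayed ('delay'), based on
    the processing state of its subtask."""
    if sub_finished:
        return "enqueue"    # subtask complete: the main task can proceed
    if not sub_has_resource or remaining_delivery_time > second_threshold:
        return "delay"      # no resource allocated yet, or ample time remains
    return "enqueue"        # unfinished, but delivery is imminent
```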
Wherein the one or more computer instructions are further executable by the processor to implement the method steps of:
and responding to a trigger message or an instruction from at least one processor, and automatically processing the tasks in the task processing list according to a preset processing strategy through at least one processor.
Wherein automatically processing the tasks in the task processing list according to a preset processing strategy by at least one processor comprises the following steps:
and when the type of the current processing task in the task processing list is a subtask, at least allocating the subtask to a processing resource in a preset group through at least one processor.
The distance between the goods taking address and the preset address is smaller than a preset distance threshold value, and the distance between the goods taking address and the preset address is smaller than the distance between the preset address and the goods delivery address.
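The distance constraint above amounts to a simple predicate; a sketch, assuming the pairwise distances have already been computed:

```python
def transfer_allowed(dist_pickup_to_preset: float,
                     dist_preset_to_delivery: float,
                     distance_threshold: float) -> bool:
    """True when the goods taking address lies within the preset distance
    threshold of the preset (transfer) address, and is closer to it than
    the preset address is to the goods delivery address."""
    return (dist_pickup_to_preset < distance_threshold
            and dist_pickup_to_preset < dist_preset_to_delivery)
```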
Specifically, the processor 701 and the memory 702 may be connected by a bus or by other means; fig. 7 illustrates connection by a bus as an example. The memory 702, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor 701 executes the various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory 702, that is, implements the above-described method in the embodiments of the present disclosure.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required by at least one function, and the storage data area may store historical data such as shipping network traffic. Further, the memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the electronic device optionally includes a communications component 703, and the memory 702 optionally includes memory remotely located from the processor 701; such remote memory may be connected to the device through the communications component 703 over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 702, and when executed by the one or more processors 701, perform the above-described methods in the embodiments of the present disclosure.
The above product can execute the method provided by the embodiments of the present disclosure, and has functional modules and beneficial effects corresponding to that method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present disclosure.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description covers only the preferred embodiments of the present disclosure and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above features; it also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, solutions formed by substituting the above features with (but not limited to) features with similar functions disclosed in the present disclosure.

Claims (10)

1. A task processing method, comprising:
retrieving, by the at least one processor, the task waiting list from the memory in response to a trigger message or instruction from the at least one processor;
determining, by at least one processor, a type of a task to be processed in the task waiting list;
when the type of the task to be processed is a main task, determining the processing state of a subtask corresponding to the main task through at least one processor; the main task and the subtask are two parts of a target processing task; the subtasks are used for distributing the to-be-processed articles from the goods taking address of the target processing task to the transfer address of the target processing task; the main task is used for distributing the to-be-processed article from the preset address to a receiving address of the target processing task;
and automatically processing the main task according to the processing state of the subtasks through at least one processor.
2. The method of claim 1, further comprising:
fetching, by the at least one processor, the target processing task from the memory in response to a trigger message or instruction from the at least one processor;
when the goods taking address of the target processing task is matched with a preset target address, the target processing task is automatically split into the main task and the subtasks through at least one processor, and the target address is determined as a transfer address of the target processing task;
automatically adding, by at least one processor, the subtasks and the main task to the task waiting list and storing in a memory.
3. The method according to claim 1 or 2, wherein after determining, by at least one processor, the type of the task to be processed in the task waiting list, further comprising:
when the type of the task to be processed is a subtask, automatically setting the goods taking time and/or the goods delivery time of the subtask to be a first preset threshold value through at least one processor;
the subtasks are automatically added to a task processing list by at least one processor and stored in a memory.
4. The method according to claim 1 or 2, wherein processing the main task according to the processing state of the subtasks automatically by at least one processor comprises at least one of:
when the processing state of the subtasks is a finished state, or when the processing state of the subtasks is an unfinished state and the remaining delivery time of the subtasks is less than a second preset threshold value, automatically adding the main task into a task processing list through at least one processor;
and when the processing state of the subtasks is an uncompleted state and the subtasks do not allocate processing resources or the remaining delivery time of the subtasks is greater than a second preset threshold, automatically delaying the processing time of the main task by at least one processor.
5. The method of claim 3, further comprising:
and responding to a trigger message or an instruction from at least one processor, and automatically processing the tasks in the task processing list according to a preset processing strategy through at least one processor.
6. The method of claim 5, wherein automatically processing the tasks in the task processing list according to a preset processing policy by at least one processor comprises:
and when the type of the current processing task in the task processing list is a subtask, at least allocating the subtask to a processing resource in a preset group through at least one processor.
7. The method of any of claims 1-2, 5-6, wherein a distance between the pickup address and the predetermined address is less than a predetermined distance threshold, and wherein the distance between the pickup address and the predetermined address is less than the distance between the predetermined address and the delivery address.
8. A task processing apparatus, comprising:
a first obtaining module configured to obtain, by at least one processor, a task waiting list from a memory in response to a trigger message or an instruction from the at least one processor;
a first determining module configured to determine, by at least one processor, a type of a task to be processed in the task waiting list;
the second determining module is configured to determine, by at least one processor, a processing state of a subtask corresponding to the main task when the type of the task to be processed is the main task; the main task and the subtask are two parts of a target processing task; the subtasks are used for distributing the to-be-processed articles from the goods taking address of the target processing task to the transfer address of the target processing task; the main task is used for distributing the to-be-processed article from the preset address to a receiving address of the target processing task;
a first processing module configured to automatically process, by at least one processor, the main task according to the processing state of the subtasks.
9. An electronic device comprising a memory and a processor; wherein:
the memory is for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of:
retrieving, by the at least one processor, the task waiting list from the memory in response to a trigger message or instruction from the at least one processor;
determining, by at least one processor, a type of a task to be processed in the task waiting list;
when the type of the task to be processed is a main task, determining the processing state of a subtask corresponding to the main task through at least one processor; the main task and the subtask are two parts of a target processing task; the subtasks are used for distributing the to-be-processed articles from the goods taking address of the target processing task to the transfer address of the target processing task; the main task is used for distributing the to-be-processed article from the preset address to a receiving address of the target processing task;
and automatically processing the main task according to the processing state of the subtasks through at least one processor.
10. A computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the method of any one of claims 1-7.
CN201910927793.4A 2019-09-27 Task processing method and device, electronic equipment and storage medium Active CN110648102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910927793.4A CN110648102B (en) 2019-09-27 Task processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910927793.4A CN110648102B (en) 2019-09-27 Task processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110648102A true CN110648102A (en) 2020-01-03
CN110648102B (en) 2024-05-28


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252337A * 2013-06-27 2014-12-31 Tata Consultancy Services Ltd Task execution in a grid computing system, edge device, and grid server
CN107093046A * 2017-04-21 2017-08-25 Beijing Jingdong Shangke Information Technology Co Ltd Task allocation method and system for unmanned delivery vehicles, and unmanned delivery vehicle
CN107392412A * 2017-06-05 2017-11-24 Beijing Xiaodu Information Technology Co Ltd Order dispatch method and apparatus
RU2639676C2 * 2014-04-25 2017-12-21 Firsov Evgeny Valentinovich Method and system of order distribution control carried by ground transport
WO2018072556A1 * 2016-10-18 2018-04-26 Wuxi Zhigu Network Technology Co Ltd Logistics control method for goods, and electronic device
CN108154327A * 2016-12-02 2018-06-12 Beijing Sankuai Online Technology Co Ltd Delivery task processing method and apparatus, and electronic device
CN108182560A * 2017-12-26 2018-06-19 Beijing Xiaodu Information Technology Co Ltd Delivery task allocation method and apparatus, electronic device, and computer storage medium
CN108288139A * 2018-01-29 2018-07-17 Beijing Xiaodu Information Technology Co Ltd Resource allocation method and apparatus
CN108335071A * 2018-02-06 2018-07-27 Beijing Xiaodu Information Technology Co Ltd Delivery task allocation method and apparatus, electronic device, and computer storage medium
CN108717612A * 2018-03-30 2018-10-30 Rajax Network Technology (Shanghai) Co Ltd Distribution method and apparatus
CN108921483A * 2018-07-16 2018-11-30 Shenzhen Beidou Application Technology Research Institute Co Ltd Logistics route planning method and apparatus, and driver scheduling method and apparatus
CN109003011A * 2017-06-06 2018-12-14 Beijing Sankuai Online Technology Co Ltd Allocation method and apparatus for delivery service resources, and electronic device
CN109685609A * 2018-12-14 2019-04-26 Rajax Network Technology (Shanghai) Co Ltd Order allocation method and apparatus, electronic device, and storage medium
US20190174208A1 * 2017-12-05 2019-06-06 The Government of the United States of America, as represented by the Secretary of Homeland Security Systems and Methods for Integrating First Responder Technologies
CN110110995A * 2019-05-05 2019-08-09 Rajax Network Technology (Shanghai) Co Ltd Production task scheduling method and apparatus, electronic device, and computer storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BRIAN COLTIN et al.: "Online pickup and delivery planning with transfers for mobile robots", 2014 IEEE International Conference on Robotics and Automation (ICRA), 29 September 2014 (2014-09-29), pages 5786-5791 *
ZHANG LIFENG et al.: "Optimization of large-scale secondary distribution of refined oil products based on a two-stage algorithm", Systems Engineering - Theory & Practice, vol. 36, no. 11, 30 November 2016 (2016-11-30), pages 2951-2963 *
SU QIANJI: "Research on optimization of the FAW-Volkswagen after-sales spare parts supply chain system", China Masters' Theses Full-text Database, Economics and Management Sciences, no. 4, 15 April 2014 (2014-04-15), pages 152-1134 *

Similar Documents

Publication Publication Date Title
CN106330769B (en) Service processing method and server
CN115328663B (en) Method, device, equipment and storage medium for scheduling resources based on PaaS platform
CN112508616A (en) Order allocation method and device
EP3386169B1 (en) Address allocation method, gateway and system
CN103561049A (en) Method for processing terminal scheduling request, system thereof and device thereof
CN111026541B (en) Rendering resource scheduling method, device, equipment and storage medium
CN106897299B (en) Database access method and device
CN103150213A (en) Method and device for balancing load
CN113971519B (en) Robot scheduling method and device, electronic equipment and storage medium
CN106331192B (en) Network protocol IP address allocation method and device
WO2015042904A1 (en) Method, apparatus and system for scheduling resource pool in multi-core system
CN106775975B (en) Process scheduling method and device
CN108509264B (en) Overtime task scheduling system and method
CN111163140A (en) Method, apparatus and computer readable storage medium for resource acquisition and allocation
CN104158860A (en) Job scheduling method and job scheduling system
US11163619B2 (en) Timer-based message handling for executing stateful services in a stateless environment
CN110648102B (en) Task processing method and device, electronic equipment and storage medium
CN110648102A (en) Task processing method and device, electronic equipment and storage medium
CN106911739B (en) Information distribution method and device
CN111062553B (en) Order distribution method, device, server and nonvolatile storage medium
CN113298387B (en) Cargo handling distribution method, distribution system, electronic device, and readable storage medium
CN111309467B (en) Task distribution method and device, electronic equipment and storage medium
US20220164221A1 (en) Preserving persistent link connections during a cloud-based service system upgrade
CN114019960A (en) Scheduling method and device for multi-robot delivery
US11797342B2 (en) Method and supporting node for supporting process scheduling in a cloud system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant