WO2024066342A1 - Task processing method, apparatus, electronic device and storage medium - Google Patents


Info

Publication number
WO2024066342A1
Authority
WO
WIPO (PCT)
Prior art keywords
task
execution
server
subtask
sent
Prior art date
Application number
PCT/CN2023/091271
Other languages
English (en)
French (fr)
Inventor
钞娜娜
妥鑫
Original Assignee
京东科技信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东科技信息技术有限公司 filed Critical 京东科技信息技术有限公司
Publication of WO2024066342A1 publication Critical patent/WO2024066342A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a task processing method, device, electronic device and storage medium.
  • the total business task is usually split into multiple subtasks and distributed to different servers for execution to improve the business processing efficiency.
  • in the related art, the overall task of the business is split into subtasks and distributed to Message Queue (MQ) middleware, which then pushes the subtasks to each task execution server so as to obtain the execution result of the overall task.
  • however, the related art cannot determine the current task execution capability of each task execution server, and therefore cannot distribute tasks according to that capability; this makes it difficult to fully utilize the task execution capabilities of the task execution servers, resulting in low task execution efficiency.
  • the present disclosure provides a task processing method, device, electronic device, storage medium and computer program product.
  • a first aspect of the present disclosure provides a task processing method, which is executed by a task scheduling server.
  • the method includes:
  • a first execution result of the pre-execution subtask sent by the task execution server is received, and a second execution result of the overall task is generated based on the first execution result.
  • a total task issued by a task issuing server is obtained, and the total task is split into multiple subtasks; a task execution request sent by a task execution server is received; a pre-execution subtask of the task execution server is determined from the multiple subtasks based on the task execution request, and the pre-execution subtask is sent to the task execution server; a first execution result of the pre-execution subtask sent by the task execution server is received, and a second execution result of the total task is generated based on the first execution result.
  • the task scheduling server can know the task execution capability of the task execution server based on the task execution request sent by the task execution server, so that tasks can be distributed based on the task execution capability of the task execution server, thereby giving full play to the task execution capability of the task execution server and improving the efficiency of task execution.
  • a second aspect of the present disclosure provides a task processing method, which is executed by a task execution server.
  • the method includes:
  • the pre-execution subtask is executed, and a first execution result of the pre-execution subtask is sent to the task scheduling server to generate a second execution result of the overall task, wherein the overall task is composed of a plurality of pre-execution subtasks.
  • a task execution request is generated and sent to a task scheduling server, wherein the task execution request is used to instruct the task scheduling server to send a corresponding pre-execution subtask to the task execution server; the pre-execution subtask sent by the task scheduling server is received; the pre-execution subtask is executed, and the first execution result of the pre-execution subtask is sent to the task scheduling server to generate a second execution result of the total task, wherein the total task is composed of multiple pre-execution subtasks.
  • the task execution server can request the task scheduling server to distribute tasks based on its current execution capability, thereby giving full play to the task execution capability of the task execution server and improving the efficiency of task execution.
  • a third aspect of the present disclosure provides a task processing method, which is executed by a task issuing server.
  • the method includes:
  • a total task is generated based on the current business scenario and sent to the task scheduling server; the task scheduling server splits and schedules the total task to obtain its execution result; and the execution result sent by the task scheduling server is received.
  • the task publishing server sends the total task to the task scheduling server, which splits and schedules the total task; this can give full play to the task execution capabilities of the task execution servers and improve the efficiency of task execution.
  • a fourth aspect of the present disclosure provides a task processing device, including: an acquisition module, used to acquire a total task issued by a task issuing server and split the total task into multiple subtasks; a receiving module, used to receive a task execution request sent by a task execution server; a first sending module, used to determine a pre-execution subtask of the task execution server from the multiple subtasks based on the task execution request and send the pre-execution subtask to the task execution server; and a generation module, used to receive a first execution result of the pre-execution subtask sent by the task execution server and generate a second execution result of the total task based on the first execution result.
  • a fifth aspect of the present disclosure provides a task processing device, including: a generation module, used to generate a task execution request and send it to a task scheduling server, wherein the task execution request is used to instruct the task scheduling server to send a corresponding pre-execution subtask to the task execution server; a receiving module, used to receive the pre-execution subtask sent by the task scheduling server; and an execution module, used to execute the pre-execution subtask and send its first execution result to the task scheduling server to generate a second execution result of the total task, wherein the total task is composed of multiple pre-execution subtasks.
  • a sixth aspect of the present disclosure provides a task processing device, including: a generation module, used to generate a total task based on the current business scenario; a sending module, used to send the total task to a task scheduling server, which splits and schedules the total task to obtain its execution result; and a receiving module, used to receive the execution result sent by the task scheduling server.
  • a seventh aspect of the present disclosure provides an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the task processing method described in the first, second or third aspect above.
  • an eighth aspect of the present disclosure provides a computer-readable storage medium storing computer instructions, wherein the computer instructions are used to enable a computer to execute the task processing method described in the first, second or third aspect above.
  • a ninth aspect of the present disclosure provides a computer program product, including a computer program, which implements the task processing method of the first, second or third aspect when executed by a processor.
  • FIG. 1 is a schematic flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 2 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 3 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 4 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 5 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 6 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 7 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 8 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 9 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 10 is a schematic diagram of a task processing method according to an embodiment of the present disclosure;
  • FIG. 11 is a flowchart showing an example of a task execution server executing a pre-execution subtask;
  • FIG. 12 is a flowchart of a task processing method provided by an embodiment of the present disclosure;
  • FIG. 13 is a schematic diagram of the interaction between the task publishing server, the task scheduling server and the task execution server;
  • FIG. 14 is a schematic diagram of the structure of a task processing device provided by an embodiment of the present disclosure;
  • FIG. 15 is a schematic diagram of the structure of a task processing device provided by an embodiment of the present disclosure;
  • FIG. 16 is a schematic diagram of the structure of a task processing device provided by an embodiment of the present disclosure;
  • FIG. 17 is a block diagram of an electronic device provided in an embodiment of the present disclosure.
  • Fig. 1 is a flowchart of a task processing method provided in an embodiment of the present disclosure. It should be noted that the execution subject of the task processing method provided in an embodiment of the present disclosure is a task scheduling server.
  • the method includes the following steps:
  • after issuing the overall task, the task issuing server sends the overall task to the task scheduling server; after receiving the overall task, the task scheduling server splits it into multiple subtasks.
  • S102 Receive a task execution request sent by a task execution server.
  • before receiving the task execution request sent by the task execution server, the method further includes: sending a task broadcast to the task execution server, wherein the task broadcast is used to instruct the task execution server to generate the task execution request.
  • after the task scheduling server splits the total task issued by the task issuing server into multiple subtasks, it can send a task broadcast to the task execution server to wake up the task execution server.
  • after the task execution server receives the task broadcast sent by the task scheduling server, it can respond to the task broadcast, generate a task execution request based on its own task execution capability, and send the task execution request to the task scheduling server.
  • S103 Based on the task execution request, determine a pre-execution subtask of the task execution server from a plurality of subtasks, and send the pre-execution subtask to the task execution server.
  • after the task scheduling server receives the task execution request sent by the task execution server, it determines the current task execution capability of the task execution server based on the request, determines the pre-execution subtask of the task execution server from the multiple subtasks split from the total task based on that capability, and sends the pre-execution subtask to the task execution server, which then executes it.
  • after the task execution server executes the pre-execution subtask and obtains the first execution result of the pre-execution subtask, it sends the first execution result to the task scheduling server.
  • after the task scheduling server receives the first execution results of all subtasks split from the total task, it can generate the second execution result of the total task based on these first execution results.
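The S101–S104 flow described above can be sketched as a minimal in-memory scheduler. This is an illustrative sketch, not the disclosed implementation; all class, method, and subtask names here are assumptions.

```python
class TaskScheduler:
    """Minimal sketch of the task scheduling server's flow (illustrative names)."""

    def __init__(self):
        self.pending = []   # subtasks split from the total task, not yet handed out
        self.total = 0
        self.results = {}   # subtask -> first execution result

    def split_total_task(self, total_task, n):
        # S101: obtain the total task and split it into n subtasks
        self.pending = [f"{total_task}-{i}" for i in range(n)]
        self.total = n

    def handle_execution_request(self, task_count):
        # S102/S103: on a task execution request, hand out up to task_count
        # pending subtasks as that server's pre-execution subtasks
        assigned = self.pending[:task_count]
        del self.pending[:task_count]
        return assigned

    def collect_first_result(self, subtask, result):
        # S104 (receive): record a first execution result from an execution server
        self.results[subtask] = result

    def second_execution_result(self):
        # S104 (generate): produce the total task's second execution result
        # only once every subtask has reported back
        if len(self.results) < self.total:
            return None
        return [self.results[s] for s in sorted(self.results)]
```

The scheduler deliberately hands out only as many subtasks as the request asks for, which is how the request-driven distribution described above avoids overloading a server.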
  • a total task issued by a task issuing server is obtained, and the total task is split into multiple subtasks; a task execution request sent by a task execution server is received; a pre-execution subtask of the task execution server is determined from the multiple subtasks based on the task execution request, and the pre-execution subtask is sent to the task execution server; a first execution result of the pre-execution subtask sent by the task execution server is received, and a second execution result of the total task is generated based on the first execution result.
  • the task scheduling server can know the task execution capability of the task execution server based on the task execution request sent by the task execution server, so that tasks can be distributed based on the task execution capability of the task execution server, thereby giving full play to the task execution capability of the task execution server and improving the efficiency of task execution.
  • FIG2 is a flowchart of a task processing method provided by an embodiment of the present disclosure. Based on the above embodiment, the process of splitting the overall task is further explained in conjunction with FIG2, which includes the following steps:
  • S201 based on the current business scenario, determine a target splitting strategy for the overall task from candidate splitting strategies.
  • the candidate splitting strategies are pre-set splitting strategies, which may include: a splitting strategy based on the sub-table suffix, a splitting strategy based on the data identity document (ID) range, etc.; no limitation is made here.
  • different splitting strategies can be used to split the total task.
  • the target splitting strategy can be a splitting strategy based on the sub-table suffix.
  • for example, an ABS filtering task can be split into 400 subtasks based on the sub-table suffix splitting strategy, with each subtask representing the asset filtering task for one sub-table.
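The two candidate splitting strategies named above can be sketched as follows. The function names and the subtask-naming scheme are assumptions for illustration; the disclosure only names the strategies.

```python
def split_by_subtable_suffix(task_name, table_count):
    """Sub-table suffix strategy: one subtask per sub-table."""
    return [f"{task_name}:table_{i:03d}" for i in range(table_count)]

def split_by_id_range(task_name, max_id, chunk_size):
    """Data ID range strategy: one subtask per contiguous block of IDs."""
    return [f"{task_name}:ids_{lo}-{min(lo + chunk_size - 1, max_id)}"
            for lo in range(0, max_id + 1, chunk_size)]

# the target splitting strategy is chosen from the candidates
# according to the current business scenario
CANDIDATE_STRATEGIES = {
    "subtable_suffix": split_by_subtable_suffix,
    "id_range": split_by_id_range,
}
```

For instance, `split_by_subtable_suffix("abs_filter", 400)` yields the 400 per-sub-table subtasks of the ABS filtering example.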
  • the target splitting strategy of the total task is determined from the candidate splitting strategies, and the total task is split into multiple subtasks based on the target splitting strategy.
  • different splitting strategies can be used to split the total task, avoiding confusion in the splitting of the total task that would affect task execution.
  • FIG3 is a flow chart of a task processing method provided by an embodiment of the present disclosure. As shown in FIG3 , the method includes the following steps:
  • S302 Receive a task execution request sent by a task execution server.
  • the task execution request includes the Internet Protocol (IP) address of the task execution server and the number of tasks.
  • for steps S301 to S302, please refer to the relevant contents in the above embodiments, which will not be repeated here.
  • S303 Determine a pre-execution subtask of the task execution server from the plurality of subtasks according to the number of tasks.
  • the task scheduling server may select a corresponding number of pre-execution subtasks from the multiple subtasks split from the total task according to the number of tasks in the task execution request.
  • for example, if the total task is split into 400 subtasks and the number of tasks in the task execution request is 200, the task scheduling server selects 200 pre-execution subtasks from the 400 subtasks and sends them to the task execution server.
  • S304 Send the pre-execution subtask to the task execution server according to the IP address.
  • the task scheduling server can send the pre-execution subtask to the corresponding task execution server according to the IP address in the task execution request.
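Steps S303–S304 amount to popping a request-sized batch from the pending subtask list and addressing it to the requester's IP. A sketch, with the shape of the request dictionary assumed:

```python
def assign_subtasks(pending, request):
    """request is assumed to look like {'ip': '10.0.0.1', 'task_count': 200};
    returns (ip, assigned_batch) and removes the batch from `pending`."""
    count = request["task_count"]
    assigned = pending[:count]        # S303: pick a request-sized batch
    del pending[:count]
    return request["ip"], assigned    # S304: address the batch to this IP
```

Mutating `pending` in place keeps the remaining subtasks available for the next requesting server.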
  • the task execution server executes the pre-execution subtask and sends the first execution result of the pre-execution subtask to the task scheduling server.
  • S305 Receive a first execution result of the pre-execution subtask sent by the task execution server, and generate a second execution result of the overall task based on the first execution result.
  • for step S305, please refer to the relevant contents in the above embodiments, which will not be repeated here.
  • the total task issued by the task issuing server is obtained and split into multiple subtasks; a task execution request sent by the task execution server is received; the pre-execution subtask of the task execution server is determined from the multiple subtasks according to the number of tasks, and sent to the task execution server according to the IP address; the first execution result of the pre-execution subtask sent by the task execution server is received, and the second execution result of the total task is generated based on the first execution result.
  • a corresponding number of tasks are allocated to the task execution server for execution, which can give full play to the task execution capability of the task execution server.
  • FIG4 is a flow chart of a task processing method provided by an embodiment of the present disclosure. As shown in FIG4 , the method includes the following steps:
  • S402 Receive a task execution request sent by a task execution server.
  • S403 Based on the task execution request, determine a pre-execution subtask of the task execution server from multiple subtasks, and send the pre-execution subtask to the task execution server.
  • for steps S401 to S403, please refer to the relevant contents in the above embodiments, which will not be repeated here.
  • after the task scheduling server determines the pre-execution subtask of the task execution server from the multiple subtasks and sends it to the task execution server, the pre-execution subtask is locked; while in the locked state it cannot be sent again, which avoids the pre-execution subtask being executed repeatedly.
  • S405 In response to receiving the execution failure information of the pre-execution subtask, releasing the locking state of the pre-execution subtask.
  • after the pre-execution subtask is locked, if the task scheduling server receives execution failure information for the pre-execution subtask, the locked state is released so that the pre-execution subtask can be rescheduled, ensuring that it is eventually executed by a task execution server; if the task scheduling server does not receive execution failure information, the pre-execution subtask remains locked.
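The lock-and-release behaviour of S404–S405 can be sketched with a simple lock set; the class and method names are illustrative.

```python
class SubtaskLockTable:
    """Tracks which pre-execution subtasks are locked (already handed out)."""

    def __init__(self):
        self._locked = set()

    def try_send(self, subtask_id):
        # S404: a subtask in the locked state cannot be sent again
        if subtask_id in self._locked:
            return False
        self._locked.add(subtask_id)
        return True

    def on_execution_failure(self, subtask_id):
        # S405: execution failure information releases the lock so the
        # subtask can be rescheduled; absent failure info, the lock is kept
        self._locked.discard(subtask_id)
```

Using `discard` rather than `remove` makes a late or duplicate failure report harmless.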
  • the total task issued by the task issuing server is obtained and split into multiple subtasks; a task execution request sent by the task execution server is received; based on the task execution request, the pre-execution subtask of the task execution server is determined from the multiple subtasks and sent to the task execution server, and the pre-execution subtask is locked, wherein a pre-execution subtask in the locked state cannot be sent again; in response to receiving execution failure information for the pre-execution subtask, the locked state is released, and in response to not receiving such information, the locked state is maintained.
  • locking the assigned tasks can avoid repeated assignment of tasks and waste of computing resources.
  • FIG5 is a flow chart of a task processing method provided by an embodiment of the present disclosure. As shown in FIG5 , the method includes the following steps:
  • the execution progress of the pre-execution subtask may be determined from whether the pre-execution subtask has been sent to the task execution server and from the execution result of the pre-execution subtask.
  • S502 Based on the current execution progress of the pre-execution subtask, update the current execution state of the pre-execution subtask.
  • in response to the pre-execution subtask not yet having been sent to the task execution server, the current state of the pre-execution subtask is determined to be an unexecuted state; in response to the pre-execution subtask having been sent to the task execution server, the current state is updated to an executing state; in response to receiving the first execution result of the pre-execution subtask, the current state is updated to a completed state.
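The three-state update rule above is a pure function of two observations. A sketch, with the state names taken from the text:

```python
def current_state(sent_to_executor: bool, first_result_received: bool) -> str:
    """Derive a pre-execution subtask's state from its progress."""
    if first_result_received:
        return "completed"    # first execution result received
    if sent_to_executor:
        return "executing"    # sent to a task execution server
    return "unexecuted"       # not yet sent
```

Checking the result flag first means a completed subtask stays completed even though it was also, at some point, sent.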
  • the execution progress of the pre-execution subtask is monitored, and the current execution status of the pre-execution subtask is updated based on the current execution progress of the pre-execution subtask.
  • the disclosed embodiment can automatically update the current status of the task according to the execution progress of the task, so as to facilitate the allocation and management of the task.
  • FIG6 is a flowchart of a task processing method provided by an embodiment of the present disclosure. As shown in FIG6 , the method includes the following steps:
  • S602 Receive a task execution request sent by a task execution server.
  • S603 Based on the task execution request, determine a pre-execution subtask of the task execution server from multiple subtasks, and send the pre-execution subtask to the task execution server.
  • for steps S601 to S604, please refer to the relevant contents in the above embodiments, which will not be repeated here.
  • the identification information may be a unique ID.
  • S606 Determine a target task issuing server for issuing the overall task from the candidate task issuing servers based on the identification information.
  • S607 Send the second execution result to the target task issuing server.
  • when the task publishing server sends the overall task to the task scheduling server, it can also send the identification information of the overall task; after receiving the identification information, the task scheduling server can store it in its own storage space, so that the execution result of the overall task can later be sent to the task publishing server based on the identification information.
  • the task scheduling server can retrieve the identification information of the total task from its own storage space, and based on the identification information, determine the target task publishing server that publishes the total task from the candidate task publishing servers, and send the second execution result of the total task to the target task publishing server.
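Steps S605–S607 reduce to a lookup of the stored identification information. The mapping structure below is an assumption for illustration; the disclosure only says the identification information is stored and used to pick the target server.

```python
def route_second_result(total_task_id, second_result, stored_ids):
    """stored_ids is assumed to map a total task's identification info to
    the candidate issuing server that published it, e.g. {task_id: server}."""
    # S605/S606: use the identification information to determine the
    # target task issuing server from the candidates
    target = stored_ids[total_task_id]
    # S607: the second execution result is sent to that server
    return target, second_result
```

Keying the routing table by the task's unique ID is what prevents the data disorder described below, where a result could reach the wrong issuing server.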
  • the above embodiment describes the situation where the task scheduling server actively feeds back the overall task's execution result to the task issuing server after obtaining the second execution result. Alternatively, the task scheduling server may not actively feed back the result; instead, the task issuing server may actively query the task scheduling server for the execution result of the overall task.
  • the task publishing server sends identification information of the total task to the task scheduling server.
  • the task scheduling server can use the identification information as a reference to query the current execution status of the total task published by the task publishing server. If the total task has been completed, the second execution result of the total task is sent to the task publishing server; if the total task has not been completed, the current execution status of the total task is sent to the task publishing server.
  • the two solutions described in the above two embodiments can be combined so that the task scheduling server can actively feedback the execution result of the total task, and the task issuing server can also actively query the execution result of the total task.
  • the specific process can be found in the relevant description of the above two embodiments, which will not be repeated here.
  • a total task issued by a task issuing server is obtained, and the total task is split into multiple subtasks; a task execution request sent by a task execution server is received; based on the task execution request, a pre-execution subtask of the task execution server is determined from the multiple subtasks and sent to the task execution server; a first execution result of the pre-execution subtask sent by the task execution server is received, and a second execution result of the total task is generated based on the first execution result; identification information of the total task is obtained, and based on the identification information, a target task issuing server that issued the total task is determined from the candidate task issuing servers.
  • sending the execution result of the total task to the task issuing server based on the identification information can avoid data disorder that would cause the task issuing server to obtain an erroneous execution result, ensuring the correctness of the task execution result obtained by the task issuing server.
  • FIG7 is a flowchart of a task processing method provided by an embodiment of the present disclosure. As shown in FIG7 , the method includes the following steps:
  • for a detailed description of step S701, please refer to the relevant contents in the above embodiments, which will not be repeated here.
  • S702 Send a task broadcast to each task execution server with task execution authority at a set interval, instructing the task execution servers to obtain pre-execution subtasks at the set interval.
  • the task execution authority of the task execution server can be set in advance, so that the task execution server without the task execution authority cannot execute the task.
  • the task execution authority can be set according to actual needs, and no limitation is made here.
  • the task scheduling server can send a task broadcast to each task execution server with task execution authority at a set interval. After receiving the task broadcast sent by the task scheduling server, the task execution server with task execution authority responds to the task broadcast, generates a task execution request, and sends the task execution request to the task scheduling server. After receiving the task execution request sent by the task execution server, the task scheduling server can determine the pre-execution subtask of the task execution server from multiple subtasks according to the task execution request, and send the pre-execution subtask to the task execution server, and the task execution server executes the pre-execution subtask.
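A real implementation would drive the broadcast with a timer; the sketch below simulates the fixed-interval rounds synchronously so the behaviour is easy to check, and it assumes a simple authority whitelist, which is one possible form of the pre-set task execution authority.

```python
def run_broadcast_rounds(servers, authorized, rounds):
    """Simulate sending a task broadcast once per interval to every server
    that holds task execution authority; returns the broadcast log."""
    log = []
    for round_no in range(rounds):
        for server in servers:
            if server in authorized:   # only authorized servers are woken
                log.append((round_no, server))
    return log
```

In production the outer loop would be replaced by a scheduler tick (e.g. a recurring timer), but the authority filter and per-round fan-out stay the same.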
  • the total task issued by the task issuing server is obtained, and the total task is divided into multiple subtasks.
  • a task broadcast is sent to each task execution server with task execution authority to instruct the task execution server to obtain the pre-execution subtask at the set interval.
  • the task scheduling server regularly schedules each task execution server through task broadcast, which can avoid the situation where some task execution servers are idle, so that the performance of each task execution server can be fully utilized and the efficiency of task execution can be improved.
  • Fig. 8 is a flowchart of a task processing method provided by an embodiment of the present disclosure. It should be noted that the execution subject of the embodiment of the present disclosure is a task execution server.
  • the method comprises the following steps:
  • the task execution request is used to instruct the task scheduling server to send the corresponding pre-execution subtask to the task execution server.
  • the task execution server can generate a task execution request and send the task execution request to the task scheduling server.
  • the task scheduling server determines a pre-execution subtask of the task execution server from multiple subtasks according to the task execution request, and sends the pre-execution subtask to the task execution server.
  • S802 Receive a pre-execution subtask sent by a task scheduling server.
  • after receiving the pre-execution subtask sent by the task scheduling server, the task execution server executes it and sends the first execution result of the pre-execution subtask to the task scheduling server; after all subtasks of the overall task are completed, the task scheduling server can generate the second execution result of the overall task based on the first execution results of all subtasks.
  • the pre-execution subtask is executed according to its set priority.
  • for example, assuming that the priority of pre-execution subtask a of overall task A is set to 1, and the priority of pre-execution subtask b of overall task B is set to 2, then pre-execution subtask a is executed first.
  • in Scenario 1, total task A and total task B are released at the same time; according to the set priorities, pre-execution subtask a of total task A is executed first, and then pre-execution subtask b of total task B is executed.
  • in Scenario 2, total task B is released first and total task A is released later.
  • to execute pre-execution subtask a first, the pre-execution subtask b that is being executed is paused, all pre-execution subtasks a of total task A jump the queue and are executed, and then pre-execution subtask b is executed.
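The priority rules in the two scenarios above (a lower number means higher priority, and a newly arrived higher-priority subtask preempts the one being executed) can be sketched with a heap-backed executor. The class is an illustrative sketch, not the disclosed implementation.

```python
import heapq

class PriorityExecutor:
    """Executes pre-execution subtasks by priority, with preemption."""

    def __init__(self):
        self.queue = []      # heap of (priority, seq, subtask)
        self.seq = 0         # tie-breaker preserving arrival order
        self.current = None  # (priority, seq, subtask) being executed

    def submit(self, priority, subtask):
        item = (priority, self.seq, subtask)
        self.seq += 1
        if self.current is None:
            self.current = item
        elif priority < self.current[0]:
            # Scenario 2: pause the running subtask, let the new one jump the queue
            heapq.heappush(self.queue, self.current)
            self.current = item
        else:
            heapq.heappush(self.queue, item)

    def finish_current(self):
        done = self.current
        self.current = heapq.heappop(self.queue) if self.queue else None
        return done[2] if done else None
```

Submitting subtask b (priority 2) and then subtask a (priority 1) reproduces Scenario 2: b is paused and re-queued, a runs to completion, then b resumes.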
  • a task execution request is generated and sent to a task scheduling server, wherein the task execution request is used to instruct the task scheduling server to send a corresponding pre-execution subtask to the task execution server; the pre-execution subtask sent by the task scheduling server is received; the pre-execution subtask is executed, and the first execution result of the pre-execution subtask is sent to the task scheduling server to generate a second execution result of the total task, wherein the total task is composed of multiple pre-execution subtasks.
  • the task execution server can request the task scheduling server to distribute tasks based on its current execution capability, thereby giving full play to the task execution capability of the task execution server and improving the efficiency of task execution.
  • FIG9 is a flow chart of a task processing method provided by an embodiment of the present disclosure. As shown in FIG9 , the method includes the following steps:
  • After the task scheduling server splits the total task issued by the task issuing server into multiple subtasks, it sends a task broadcast to the task execution server. After receiving the task broadcast sent by the task scheduling server, the task execution server obtains its current resource information.
  • S902 Determine whether the task execution server meets the task execution condition based on the resource information.
  • the resource information includes CPU usage and/or memory usage.
  • Based on the CPU usage and/or memory usage, it is determined whether the task execution server meets the task execution conditions: in response to the CPU usage and/or memory usage being greater than their respective thresholds, it is determined that the task execution server meets the task execution conditions; in response to the CPU usage and/or memory usage being less than or equal to their respective thresholds, it is determined that the task execution server does not meet the task execution conditions.
  • the number of tasks executable by the task execution server is determined based on CPU usage and/or memory usage, and a task execution request is generated based on the number of tasks and the IP address of the task execution server.
  • the current resource information of the task execution server is obtained, and based on the resource information, it is determined whether the task execution server meets the task execution condition, and in response to meeting the task execution condition, a task execution request is generated.
  • the number of tasks that can currently be executed by the task execution server can be accurately determined through the current CPU usage and/or memory usage of the task execution server, so as to give full play to the task execution capability of the task execution server.
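The request generation described above can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, the headroom formula, and the `max_slots` parameter are hypothetical, since the embodiment does not specify how the task count is derived from CPU and memory usage.

```python
def build_task_execution_request(ip, cpu_usage, mem_usage, max_slots=8):
    """Hypothetical sketch: derive an executable-task count from current
    resource headroom, then package it with the server's IP address as
    the task execution request."""
    # Assumed capacity measure: the scarcer of CPU and memory headroom.
    headroom = 1.0 - max(cpu_usage, mem_usage)
    task_count = max(0, int(max_slots * headroom))
    return {"ip": ip, "task_count": task_count}

req = build_task_execution_request("10.0.0.5", cpu_usage=0.5, mem_usage=0.25)
print(req)  # {'ip': '10.0.0.5', 'task_count': 4}
```

The scheduler then uses `task_count` to decide how many pending subtasks to lock and send, and `ip` to address the delivery, matching the request fields named in this embodiment.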
  • FIG10 is a flow chart of a task processing method provided by an embodiment of the present disclosure. As shown in FIG10 , the method includes the following steps:
  • the thread pool may be an asynchronous thread pool to asynchronously execute the pre-execution subtask.
  • S1002 In response to a submission failure, generate execution failure information of the pre-execution subtask.
  • S1003 Send execution failure information to the task scheduling server, wherein the execution failure information is used to instruct the task scheduling server to unlock the pre-execution subtask.
  • the task execution server can submit the pre-execution subtask to the thread pool and determine whether the submission is successful. If not, it generates execution failure information of the pre-execution subtask and sends the execution failure information to the task scheduling server. After receiving the execution failure information of the pre-execution subtask, the task scheduling server can unlock the pre-execution subtask to ensure that the pre-execution subtask can be acquired and executed again. If so, the task execution server can execute the pre-execution subtask.
  • the pre-execution subtask sent by the task scheduling server is submitted to the thread pool, and it is determined whether the submission is successful.
  • execution failure information of the pre-execution subtask is generated, and the execution failure information is sent to the task scheduling server, wherein the execution failure information is used to instruct the task scheduling server to unlock the pre-execution subtask.
  • the pre-execution subtask is executed.
  • the task execution server submits the task assigned by the task scheduling server to the thread pool, which can improve the execution efficiency of the task, and can unlock the task lock state when the task execution fails, so as to reallocate the task and ensure the execution of the task.
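The submit-or-report-failure step above can be sketched with Python's standard thread pool. This is an illustrative sketch only: the function name and the failure-info dictionary shape are hypothetical, and a pool shutdown stands in for whatever condition makes a real submission fail.

```python
from concurrent.futures import ThreadPoolExecutor

def submit_subtask(pool, subtask_id, fn):
    """Sketch: try to hand the subtask to the thread pool; on failure,
    produce execution-failure info that tells the scheduler to unlock
    the subtask so it can be dispatched again."""
    try:
        pool.submit(fn)
        return {"subtask": subtask_id, "status": "submitted"}
    except RuntimeError:  # e.g. pool already shut down, submission rejected
        return {"subtask": subtask_id, "status": "failed", "unlock": True}

pool = ThreadPoolExecutor(max_workers=2)
ok = submit_subtask(pool, "t1", lambda: None)
pool.shutdown()
failed = submit_subtask(pool, "t2", lambda: None)  # submission now rejected
print(ok["status"], failed["status"])
```

The `unlock` flag models the execution-failure information of this embodiment: on receiving it, the scheduler releases the subtask's locked state so the subtask can be acquired and executed again.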
  • Figure 11 is a flow chart showing an example of a task execution server executing a pre-execution subtask. As shown in Figure 11, the following steps are included:
  • the task scheduling server wakes up the task execution server.
  • the task execution server obtains current CPU usage and/or memory usage.
  • S1105 Generate a task execution request and send it to the task scheduling server.
  • In step S1107, it is determined whether the submission is successful. If yes, step S1109 is executed; if no, step S1108 is executed.
  • Fig. 12 is a flowchart of a task processing method provided by an embodiment of the present disclosure. It should be noted that the execution subject of the embodiment of the present disclosure is a task issuing server.
  • the task issuing server may generate an ABS filtering task.
  • After the task publishing server generates a total task based on the current business scenario, it can send the total task to the task scheduling server. After the task scheduling server receives the total task, it can split the total task into multiple subtasks and schedule the multiple subtasks to the task execution server for execution to obtain the execution result of the total task.
  • when the task publishing server sends the overall task to the task scheduling server, it can also send the identification information of the overall task to the task scheduling server. After receiving the identification information, the task scheduling server can store it in its own storage space, so that the execution result of the overall task can be sent to the task publishing server based on the identification information.
  • the task scheduling server can retrieve the identification information of the total task from its own storage space, and based on the identification information, determine the task publishing server that publishes the total task from multiple candidate task publishing servers, and send the execution result of the total task to the task publishing server.
  • the task issuing server sends identification information of the overall task to the task scheduling server, wherein the identification information is used to instruct the task scheduling server to query the execution result of the overall task and send the execution result to the task issuing server.
  • the task scheduling server can use the identification information as a reference to query the current execution status of the total task published by the task publishing server. If the total task has been completed, the execution result of the total task is sent to the task publishing server; if the total task is not completed, the current execution status of the total task is sent to the task publishing server.
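The identification-based result query described above can be sketched as follows. This is a minimal illustration: the store layout, field names, and `query_result` function are all hypothetical, standing in for the scheduler's own storage space keyed by the task's identification information.

```python
def query_result(task_store, task_id):
    """Sketch: look up the total task by its identification information;
    return the final result if the task is complete, otherwise its
    current execution status, addressed to the publishing server."""
    task = task_store[task_id]
    if task["done"]:
        return {"publisher": task["publisher"], "payload": task["result"]}
    return {"publisher": task["publisher"], "payload": task["status"]}

# Hypothetical scheduler-side store, keyed by task identification info.
store = {
    "job-1": {"publisher": "pub-A", "done": True,  "result": "ok", "status": None},
    "job-2": {"publisher": "pub-B", "done": False, "result": None, "status": "3/5 subtasks"},
}
r1 = query_result(store, "job-1")
r2 = query_result(store, "job-2")
print(r1["payload"], r2["payload"])
```

The `publisher` field models selecting the target task publishing server from the candidates based on the identification information, so the result (or interim status) is routed back to the server that published the task.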
  • a total task is generated based on the current business scenario, and the total task is sent to the task scheduling server, which splits and schedules the total task to obtain the execution result of the total task; the execution result sent by the task scheduling server is then received.
  • the task publishing server sends the total task to the task scheduling server, which splits and schedules it; this can give full play to the task execution capability of the task execution server and improve the efficiency of task execution.
  • Figure 13 is a schematic diagram of the interaction between the task publishing server, the task scheduling server and the task execution server. As shown in Figure 13, the functions of the task publishing server include: publishing the total task and querying results.
  • the functions of the task scheduling server include: task splitting, task broadcasting, task locking and task status updating.
  • task splitting includes a splitting strategy.
  • task broadcasting includes timed scheduling.
  • the functions of the task execution server include: obtaining tasks and executing tasks.
  • the task publishing server publishes the total task and sends it to the task scheduling server.
  • the task scheduling server splits the total task.
  • the task scheduling server sends a task broadcast to the task execution server.
  • the task execution server obtains the pre-execution subtask.
  • the task execution server locks the pre-execution subtask.
  • the task execution server executes the pre-execution subtask.
  • the task scheduling server updates the task status according to the execution progress of the pre-execution subtask.
  • the task publishing server queries the execution results of the total task.
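The interaction steps above (publish, split, request, assign, execute, aggregate) can be sketched end to end. This is a toy simulation under assumptions: the `Scheduler` class, its method names, and the one-subtask-per-payload splitting strategy are all hypothetical, and locking is modeled simply by removing assigned subtasks from the pending list.

```python
class Scheduler:
    """Hypothetical sketch of the task scheduling server's role in the
    interaction: split the total task, hand out subtasks on request,
    and aggregate first results into the second execution result."""

    def __init__(self):
        self.subtasks = []
        self.results = {}

    def receive_total_task(self, task_id, payloads):
        # Task splitting: one subtask per payload element (assumed strategy).
        self.task_id = task_id
        self.subtasks = list(enumerate(payloads))

    def assign(self, task_count):
        # Hand out (and implicitly lock) up to task_count pending subtasks.
        batch, self.subtasks = self.subtasks[:task_count], self.subtasks[task_count:]
        return batch

    def report(self, idx, result):
        # First execution result of one subtask, reported by the executor.
        self.results[idx] = result

sched = Scheduler()
sched.receive_total_task("A", [1, 2, 3, 4])   # published total task
while True:
    batch = sched.assign(task_count=2)         # executor requests 2 at a time
    if not batch:
        break
    for idx, payload in batch:
        sched.report(idx, payload * 10)        # executor runs the subtask
print(sched.results)  # second execution result assembled from first results
```

The loop plays the task execution server's part: it repeatedly sends a task execution request sized to its capacity, executes what it receives, and reports back until no pending subtasks remain.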
  • FIG14 is a schematic diagram of the structure of the task processing device of an embodiment of the present disclosure. As shown in FIG14, the task processing device 1400 includes:
  • An acquisition module 1410 is used to acquire a total task issued by a task issuing server and split the total task into multiple subtasks;
  • a receiving module 1420 is used to receive a task execution request sent by a task execution server
  • a first sending module 1430 is used to determine a pre-execution subtask of a task execution server from a plurality of subtasks based on the task execution request, and send the pre-execution subtask to the task execution server;
  • the generating module 1440 is used to receive the first execution result of the pre-execution subtask sent by the task execution server, and generate the second execution result of the overall task based on the first execution result.
  • the acquisition module 1410 is further used to determine a target splitting strategy for the overall task from candidate splitting strategies based on the current business scenario; and split the overall task into multiple subtasks based on the target splitting strategy.
  • the task processing device 1400 also includes: a second sending module, which is used to send a task broadcast to the task execution server before receiving the task execution request sent by the task execution server, wherein the task broadcast is used to instruct the task execution server to generate a task execution request.
  • the task execution request includes a network protocol IP address and a task quantity
  • the first sending module 1430 is also used to: determine a pre-execution subtask of the task execution server from multiple subtasks according to the task quantity; and send the pre-execution subtask to the task execution server according to the IP address.
  • the task processing device 1400 also includes: a locking module, which is used to lock the pre-execution subtask after sending the pre-execution subtask to the task execution server, wherein the pre-execution subtask cannot be sent again when it is in a locked state; and a release module, which is used to release the locking state of the pre-execution subtask in response to receiving execution failure information of the pre-execution subtask.
  • the task processing device 1400 further includes: an update module, which is used to monitor the execution progress of the pre-execution subtask; and update the current execution state of the pre-execution subtask based on the current execution progress of the pre-execution subtask.
  • the update module is also used to determine that the current state of the pre-execution subtask is a non-executed state in response to the pre-execution subtask not being sent to the task execution server, or receiving execution failure information of the pre-execution subtask; update the current state of the pre-execution subtask to an executing state in response to the pre-execution subtask having been sent to the task execution server; and update the current state of the pre-execution subtask to a completed state in response to receiving the first execution result of the pre-execution subtask.
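The status updates described above form a small state machine, which can be sketched as follows. This is an illustrative sketch: the state and event names are paraphrases of the embodiment's descriptions, not terms defined by the disclosure.

```python
def next_state(current, event):
    """Sketch of the subtask status updates described above:
    not-yet-sent or failed -> non-executed; sent -> executing;
    first result received -> completed (event names assumed)."""
    transitions = {
        ("non-executed", "sent"): "executing",
        ("executing", "first_result"): "completed",
        ("executing", "failure_info"): "non-executed",  # unlock, re-dispatch
    }
    return transitions.get((current, event), current)

s = "non-executed"
s = next_state(s, "sent")          # subtask sent to the execution server
s = next_state(s, "failure_info")  # execution failure information received
print(s)  # back to non-executed, so the subtask can be assigned again
```

The `failure_info` transition mirrors the release module: on execution failure the lock is released and the subtask returns to the non-executed state, making it eligible for reassignment.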
  • the task processing device 1400 also includes: a third sending module, used to obtain identification information of the overall task; based on the identification information, determine the target task publishing server that publishes the overall task from the candidate task publishing servers; and send the second execution result to the target task publishing server.
  • the task processing device 1400 also includes: a fourth sending module, which is used to send a task broadcast to each task execution server with task execution authority at a set interval time to instruct the task execution server to obtain a pre-execution subtask at the set interval time.
  • a total task issued by a task issuing server is obtained, and the total task is split into multiple subtasks, a task execution request sent by a task execution server is received, a pre-execution subtask of the task execution server is determined from multiple subtasks based on the task execution request, and the pre-execution subtask is sent to the task execution server, a first execution result of the pre-execution subtask sent by the task execution server is received, and based on the first execution result, a second execution result of the total task is generated.
  • the task scheduling server can know the task execution capability of the task execution server based on the task execution request sent by the task execution server, so that tasks can be distributed based on the task execution capability of the task execution server, thereby giving full play to the task execution capability of the task execution server and improving the efficiency of task execution.
  • FIG15 is a schematic diagram of the structure of the task processing device of an embodiment of the present disclosure. As shown in FIG15, the task processing device 1500 includes:
  • a generating module 1510 is used to generate a task execution request and send the task execution request to the task scheduling server, wherein the task execution request is used to instruct the task scheduling server to send the corresponding pre-execution subtask to the task execution server;
  • the receiving module 1520 is used to receive the pre-execution subtask sent by the task scheduling server;
  • the execution module 1530 is used to execute the pre-execution subtask and send the first execution result of the pre-execution subtask to the task scheduling server to generate a second execution result of the overall task, wherein the overall task is composed of multiple pre-execution subtasks.
  • the generation module 1510 is also used to obtain the current resource information of the task execution server in response to receiving a task broadcast sent by the task scheduling server; based on the resource information, determine whether the task execution server meets the task execution conditions; and generate a task execution request in response to meeting the task execution conditions.
  • the resource information includes CPU usage and/or memory usage
  • the generation module 1510 is also used to: determine whether the task execution server meets the task execution conditions based on the CPU usage and/or memory usage; in response to the CPU usage and/or memory usage being greater than their respective thresholds, determine that the task execution server meets the task execution conditions; in response to the CPU usage and/or memory usage being less than or equal to their respective thresholds, determine that the task execution server does not meet the task execution conditions.
  • the generation module 1510 is also used to: determine the number of tasks executable by the task execution server based on CPU usage and/or memory usage; and generate a task execution request based on the number of tasks and the IP address of the task execution server.
  • the task processing device 1500 also includes a submission module, which is used to submit the pre-execution subtask sent by the task scheduling server to the thread pool, and determine whether the submission is successful; in response to a submission failure, generate execution failure information of the pre-execution subtask; send the execution failure information to the task scheduling server, wherein the execution failure information is used to instruct the task scheduling server to unlock the pre-execution subtask; in response to a successful submission, execute the pre-execution subtask.
  • the execution module 1530 is further configured to execute the pre-execution subtask according to the set priority of the pre-execution subtask.
  • a task execution request is generated and sent to a task scheduling server, wherein the task execution request is used to instruct the task scheduling server to send a corresponding pre-execution subtask to the task execution server; the pre-execution subtask sent by the task scheduling server is received and executed, and the first execution result of the pre-execution subtask is sent to the task scheduling server to generate a second execution result of the total task, wherein the total task is composed of multiple pre-execution subtasks.
  • the task execution server can request the task scheduling server to distribute tasks based on its current execution capability, thereby giving full play to the task execution capability of the task execution server and improving the efficiency of task execution.
  • FIG16 is a schematic diagram of the structure of the task processing device of an embodiment of the present disclosure. As shown in FIG16, the task processing device 1600 includes:
  • a generation module 1610 is used to generate a general task based on the current business scenario
  • the sending module 1620 is used to send the overall task to the task scheduling server, and the task scheduling server splits and schedules the overall task to obtain the execution result of the overall task;
  • the receiving module 1630 is used to receive the execution result sent by the task scheduling server.
  • the sending module 1620 is also used to send identification information of the total task to the task scheduling server before receiving the execution result sent by the task scheduling server, wherein the identification information is used to instruct the task scheduling server to query the execution result of the total task and send the execution result to the task publishing server.
  • a total task is generated based on the current business scenario, and the total task is sent to the task scheduling server, which splits and schedules the total task to obtain the execution result of the total task; the execution result sent by the task scheduling server is then received.
  • the task publishing server sends the total task to the task scheduling server, which splits and schedules it; this can give full play to the task execution capability of the task execution server and improve the efficiency of task execution.
  • FIG. 17 is a block diagram of an electronic device for the task processing method of an embodiment of the present disclosure.
  • the electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • the electronic device can also represent various forms of mobile devices, such as intelligent voice interaction devices, personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are merely examples and are not intended to limit the implementation of the present disclosure described and/or required herein.
  • the electronic device includes: one or more processors 1701, memory 1702, and interfaces for connecting various components, including high-speed interfaces and low-speed interfaces.
  • the various components are connected to each other using different buses, and can be installed on a common mainboard or installed in other ways as needed.
  • the processor 1701 can process instructions executed within the electronic device, including instructions stored in or on the memory, to display graphical information of a GUI on an external input/output device (such as a display device coupled to the interface).
  • multiple processors and/or multiple buses can be used together with multiple memories, if desired.
  • multiple electronic devices can be connected, and each device provides some necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system).
  • In FIG. 17, one processor 1701 is taken as an example.
  • the memory 1702 is a non-transitory computer-readable storage medium provided by the present disclosure.
  • the non-transitory computer readable storage medium of the present disclosure stores computer instructions, which are used to cause a computer to execute the task processing method provided by the present disclosure.
  • the memory 1702 is a non-transitory computer-readable storage medium that can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as program instructions/modules corresponding to the task processing method in the embodiment of the present disclosure.
  • the processor 1701 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 1702, that is, implementing the task processing method in the above method embodiment.
  • the memory 1702 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required by at least one function; the data storage area may store data created according to the use of the electronic device of the task processing method, etc.
  • the memory 1702 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage device.
  • the memory 1702 may optionally include a memory remotely arranged relative to the processor 1701, and these remote memories may be connected to the electronic device of the task processing method via a network. Examples of the above-mentioned network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the electronic device of the task processing method may further include: an input device 1703 and an output device 1704.
  • the processor 1701, the memory 1702, the input device 1703 and the output device 1704 may be connected via a bus or other means, and FIG17 takes the bus connection as an example.
  • the input device 1703 can receive input digital or character information, and generate key signal input related to user settings and function control of the electronic device of the task processing method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, an indicator rod, one or more mouse buttons, a trackball, a joystick and other input devices.
  • the output device 1704 may include a display device, an auxiliary lighting device (e.g., an LED) and a tactile feedback device (e.g., a vibration motor), etc.
  • the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display and a plasma display. In some embodiments, the display device may be a touch screen.
  • the present disclosure also proposes a non-transitory computer-readable storage medium, on which a computer program is stored.
  • When the program is executed by a processor, it implements the task processing method proposed in the first aspect embodiment, the second aspect embodiment, or the third aspect embodiment of the present disclosure.
  • the present disclosure proposes a computer program product, including a computer program, which, when executed by a processor, implements the task processing method of the above-mentioned first aspect embodiment, second aspect embodiment, or third aspect embodiment of the present disclosure.
  • Various implementations of the systems and techniques described herein can be realized in digital electronic circuit systems, integrated circuit systems, dedicated ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include: being implemented in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, which can be a special purpose or general purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
  • Other types of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
  • the systems and techniques described herein may be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer with a graphical user interface or a web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end components, middleware components, or front-end components.
  • the components of the system may be interconnected by any form or medium of digital data communication (e.g., a communications network). Examples of communications networks include: a local area network (LAN), a wide area network (WAN), and the Internet.
  • a computer system may include a client and a server.
  • the client and the server are generally remote from each other and usually interact through a communication network.
  • the relationship between the client and the server is generated by computer programs running on the corresponding computers and having a client-server relationship with each other.
  • the server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system, intended to overcome the defects of difficult management and weak business scalability in traditional physical hosts and VPS ("Virtual Private Server") services.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, features defined with "first" and "second" may explicitly or implicitly include at least one of such features. In the description of the present disclosure, "plurality" means at least two, such as two or three, unless otherwise clearly and specifically defined.

Abstract

A task processing method includes: acquiring a total task published by a task publishing server, and splitting the total task into multiple subtasks; receiving a task execution request sent by a task execution server; determining, based on the task execution request, a pre-execution subtask of the task execution server from the multiple subtasks, and sending the pre-execution subtask to the task execution server; and receiving a first execution result of the pre-execution subtask sent by the task execution server, and generating a second execution result of the total task based on the first execution result.

Description

Task processing method, apparatus, electronic device and storage medium
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 202211172522.0, filed on September 26, 2022, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of computer technology, and in particular to a task processing method, apparatus, electronic device and storage medium.
Background
When the volume of processing tasks of a business is large, in order to improve processing efficiency, the total task of the business is usually split into multiple subtasks that are distributed to different servers for execution.
In the related art, the total task of a business is split into subtasks that are distributed to Message Queue (MQ) middleware, and the MQ pushes them to each task execution server for execution to obtain the execution result of the total task.
The related art cannot know the current task execution capability of each task execution server, and therefore cannot distribute tasks according to the current task execution capability of the task execution servers; it is thus difficult to give full play to the task execution capability of the task execution servers, resulting in low task execution efficiency.
Summary
The present disclosure proposes a task processing method, apparatus, electronic device, storage medium and computer program product.
An embodiment of the first aspect of the present disclosure proposes a task processing method, executed by a task scheduling server, the method including:
acquiring a total task published by a task publishing server, and splitting the total task into multiple subtasks; receiving a task execution request sent by a task execution server;
determining, based on the task execution request, a pre-execution subtask of the task execution server from the multiple subtasks, and sending the pre-execution subtask to the task execution server;
receiving a first execution result of the pre-execution subtask sent by the task execution server, and generating a second execution result of the total task based on the first execution result.
In the embodiments of the present disclosure, a total task published by a task publishing server is acquired and split into multiple subtasks; a task execution request sent by a task execution server is received; based on the task execution request, a pre-execution subtask of the task execution server is determined from the multiple subtasks, and the pre-execution subtask is sent to the task execution server; a first execution result of the pre-execution subtask sent by the task execution server is received, and a second execution result of the total task is generated based on the first execution result. In the embodiments of the present disclosure, the task scheduling server can know the task execution capability of the task execution server from the task execution request sent by the task execution server, so that tasks can be distributed according to the task execution capability of the task execution server, thereby giving full play to the task execution capability of the task execution server and improving task execution efficiency.
An embodiment of the second aspect of the present disclosure proposes a task processing method, executed by a task execution server, the method including:
generating a task execution request, and sending the task execution request to a task scheduling server, wherein the task execution request is used to instruct the task scheduling server to send a corresponding pre-execution subtask to the task execution server;
receiving the pre-execution subtask sent by the task scheduling server;
executing the pre-execution subtask, and sending a first execution result of the pre-execution subtask to the task scheduling server to generate a second execution result of a total task, wherein the total task is composed of multiple pre-execution subtasks.
In the embodiments of the present disclosure, a task execution request is generated and sent to a task scheduling server, wherein the task execution request is used to instruct the task scheduling server to send a corresponding pre-execution subtask to the task execution server; the pre-execution subtask sent by the task scheduling server is received and executed, and a first execution result of the pre-execution subtask is sent to the task scheduling server to generate a second execution result of the total task, wherein the total task is composed of multiple pre-execution subtasks. In the embodiments of the present disclosure, the task execution server can request the task scheduling server to distribute tasks according to its own current execution capability, thereby giving full play to the task execution capability of the task execution server and improving task execution efficiency.
An embodiment of the third aspect of the present disclosure proposes a task processing method, executed by a task publishing server, the method including:
generating a total task based on a current business scenario;
sending the total task to a task scheduling server, which splits and schedules the total task to obtain an execution result of the total task;
receiving the execution result sent by the task scheduling server.
In the embodiments of the present disclosure, a total task is generated based on the current business scenario and sent to the task scheduling server, which splits and schedules the total task to obtain the execution result of the total task; the execution result sent by the task scheduling server is then received. In the embodiments of the present disclosure, the task publishing server sends the total task to the task scheduling server, which splits and schedules it; this can give full play to the task execution capability of the task execution servers and improve task execution efficiency.
An embodiment of the fourth aspect of the present disclosure proposes a task processing apparatus, including: an acquisition module, configured to acquire a total task published by a task publishing server and split the total task into multiple subtasks; a receiving module, configured to receive a task execution request sent by a task execution server; a first sending module, configured to determine, based on the task execution request, a pre-execution subtask of the task execution server from the multiple subtasks and send the pre-execution subtask to the task execution server; and a generating module, configured to receive a first execution result of the pre-execution subtask sent by the task execution server and generate a second execution result of the total task based on the first execution result.
An embodiment of the fifth aspect of the present disclosure proposes a task processing apparatus, including: a generating module, configured to generate a task execution request and send the task execution request to a task scheduling server, wherein the task execution request is used to instruct the task scheduling server to send a corresponding pre-execution subtask to the task execution server; a receiving module, configured to receive the pre-execution subtask sent by the task scheduling server; and an execution module, configured to execute the pre-execution subtask and send a first execution result of the pre-execution subtask to the task scheduling server to generate a second execution result of a total task, wherein the total task is composed of multiple pre-execution subtasks.
An embodiment of the sixth aspect of the present disclosure proposes a task processing apparatus, including: a generating module, configured to generate a total task based on a current business scenario; a sending module, configured to send the total task to a task scheduling server, which splits and schedules the total task to obtain an execution result of the total task; and a receiving module, configured to receive the execution result sent by the task scheduling server.
An embodiment of the seventh aspect of the present disclosure proposes an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the task processing method of the embodiments of the first, second or third aspect above.
An embodiment of the eighth aspect of the present disclosure proposes a computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to execute the task processing method of the embodiments of the first, second or third aspect above.
An embodiment of the ninth aspect of the present disclosure proposes a computer program product, including a computer program, which, when executed by a processor, implements the task processing method of the embodiments of the first, second or third aspect of the present disclosure.
Additional aspects and advantages of the present disclosure will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the present disclosure.
附图说明
本公开上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本公开实施例提供的任务处理方法的流程示意图;
图2为本公开实施例提供的任务处理方法的流程示意图;
图3为本公开实施例提供的任务处理方法的流程示意图;
图4为本公开实施例提供的任务处理方法的流程示意图;
图5为本公开实施例提供的任务处理方法的流程示意图;
图6为本公开实施例提供的任务处理方法的流程示意图;
图7为本公开实施例提供的任务处理方法的流程示意图;
图8为本公开实施例提供的任务处理方法的流程示意图;
图9为本公开实施例提供的任务处理方法的流程示意图;
图10为本公开实施例提供的任务处理方法的流程示意图;
图11为任务执行服务器执行预执行子任务的流程示例图;
图12为本公开实施例提供的任务处理方法的流程示意图;
图13为任务发布服务器、任务调度服务器和任务执行服务器三者之间的交互示意图;
图14为本公开实施例提供的任务处理装置的结构示意图;
图15为本公开实施例提供的任务处理装置的结构示意图;
图16为本公开实施例提供的任务处理装置的结构示意图;
图17为本公开实施例提供的电子设备的框图。
具体实施方式
下面详细描述本公开的实施例,实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本公开,而不能理解为对本公开的限制。
下面参考附图描述本公开实施例的任务处理方法、装置、电子设备和存储介质。
图1为本公开一实施例提供的任务处理方法的流程示意图。需要说明的是,本公开实施例提供的任务处理方法的执行主体为任务调度服务器。
如图1所示,该方法包含以下步骤:
S101,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务。
任务发布服务器在发布总任务之后,将该总任务发送给任务调度服务器,任务调度服务器接收到该总任务之后将该总任务拆分成多个子任务。
S102,接收任务执行服务器发送的任务执行请求。
接收任务执行服务器发送的任务执行请求之前,还包括:向任务执行服务器发送任务广播,其中,任务广播用于指示任务执行服务器生成任务执行请求。
任务调度服务器将任务发布服务器发布的总任务拆分成多个子任务之后,可以向任务执行服务器发送任务广播,以唤醒该任务执行服务器。该任务执行服务器在接收到任务调度服务器发送的任务广播时,可以对该任务广播进行响应,基于自身的任务执行能力生成任务执行请求,并将该任务执行请求发送给任务调度服务器。
S103,基于任务执行请求,从多个子任务中确定任务执行服务器的预执行子任务,并向任务执行服务器发送预执行子任务。
任务调度服务器接收到任务执行服务器发送的任务执行请求后,根据该任务执行请求,确定该任务执行服务器当前的任务执行能力,并根据该任务执行服务器当前的任务执行能力,从总任务拆分的多个子任务中确定该任务执行服务器的预执行子任务,并将该预执行子任务发送给该任务执行服务器,由该任务执行服务器执行该预执行子任务。
S104,接收任务执行服务器发送的预执行子任务的第一执行结果,并基于第一执行结果,生成总任务的第二执行结果。
任务执行服务器执行完预执行子任务,得到该预执行子任务的第一执行结果后,将该第一执行结果发送给任务调度服务器,任务调度服务器接收到该总任务拆分的所有子任务的第一执行结果后,可以根据该第一执行结果得到该总任务的第二执行结果。
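上述"收齐全部子任务的第一执行结果后生成总任务的第二执行结果"的汇总逻辑,可以用如下Python示意代码表示(其中 aggregate_results 的函数名与返回结构均为本示例假设,并非本公开限定的实现):

```python
# 示意性代码:任务调度服务器收齐全部子任务的第一执行结果后,汇总生成总任务的第二执行结果。
def aggregate_results(first_results: dict, total_subtask_count: int):
    """first_results 为 {子任务ID: 第一执行结果};收齐后返回第二执行结果,否则返回 None 表示总任务未完成。"""
    if len(first_results) < total_subtask_count:
        return None  # 尚有子任务未完成,暂不生成第二执行结果
    return {"completed": total_subtask_count, "details": dict(first_results)}
```

例如,总任务拆分为2个子任务而仅收到1个第一执行结果时,汇总返回 None,表示第二执行结果尚不能生成。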
本公开实施例中,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务,接收任务执行服务器发送的任务执行请求,基于任务执行请求,从多个子任务中确定任务执行服务器的预执行子任务,并向任务执行服务器发送预执行子任务,接收任务执行服务器发送的预执行子任务的第一执行结果,并基于第一执行结果,生成总任务的第二执行结果。本公开实施例中,任务调度服务器可以根据任务执行服务器发送的任务执行请求,知晓任务执行服务器的任务执行能力,从而能够根据任务执行服务器的任务执行能力来分发任务,进而充分发挥任务执行服务器的任务执行能力,提高任务执行的效率。
图2为本公开一实施例提供的任务处理方法的流程示意图,在上述实施例的基础上,进一步结合图2,对总任务的拆分的过程进行解释说明,包含以下步骤:
S201,基于当前业务场景,从候选拆分策略中确定总任务的目标拆分策略。
其中,候选拆分策略为预先设定的拆分策略,该候选拆分策略可以包括:按照分表后缀拆分策略、按照数据身份标识(Identity Document,ID)范围拆分策略等,此处不做任何限定。
S202,基于目标拆分策略,将总任务拆分成多个子任务。
在不同的业务场景下,可以使用不同的拆分策略,对总任务进行拆分。
举例而言,若当前业务场景为资产证券化(Asset Backed Securities,ABS)业务,任务发布服务器发布的总任务为ABS过滤任务,底层具有400张分表,则目标拆分策略可以为按照分表后缀拆分策略,可以采用按照分表后缀拆分策略将ABS过滤任务拆分成400个子任务,每个子任务代表一个分表的资产过滤任务。当400个子任务完成时,全表扫描过滤了一遍,此时整体ABS过滤任务完成。
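以上述ABS场景为例,按照分表后缀拆分策略的拆分过程可以用如下Python示意代码表示(SubTask、split_by_table_suffix 等名称与分表命名方式均为本示例假设):

```python
# 示意性代码:按照分表后缀拆分策略,将总任务按底层分表拆分成多个子任务。
from dataclasses import dataclass

@dataclass
class SubTask:
    task_id: str     # 子任务标识
    table_name: str  # 子任务对应的分表名

def split_by_table_suffix(total_task_id: str, table_prefix: str, table_count: int):
    """每张分表对应一个子任务,共拆分出 table_count 个子任务。"""
    return [
        SubTask(task_id=f"{total_task_id}-{i}", table_name=f"{table_prefix}_{i}")
        for i in range(table_count)
    ]

# 底层具有400张分表时,将ABS过滤任务拆分成400个子任务
subtasks = split_by_table_suffix("abs-filter", "asset", 400)
```

当这400个子任务全部完成时,即完成了对全部分表的扫描过滤。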
本公开实施例中,基于当前业务场景,从候选拆分策略中确定总任务的目标拆分策略,基于目标拆分策略,将总任务拆分成多个子任务。本公开实施例中,对于不同业务的总任务,可以采用不同的拆分策略对总任务进行拆分,避免总任务拆分混乱,影响任务的执行。
图3为本公开一实施例提供的任务处理方法的流程示意图。如图3所示,该方法包含以下步骤:
S301,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务。
S302,接收任务执行服务器发送的任务执行请求。
其中,任务执行请求包括网络协议(Internet Protocol,IP)地址和任务数量。
关于步骤S301~S302的具体介绍可参见上述实施例中相关内容的记载,此处不再赘述。
S303,按照任务数量从多个子任务中,确定任务执行服务器的预执行子任务。
任务调度服务器接收到任务执行服务器发送的任务执行请求后,可以按照任务执行请求中的任务数量,从总任务拆分的多个子任务中,选取对应数量的预执行子任务。
举例而言,若任务调度服务器将任务发布服务器发布的总任务拆分成了400个子任务,任务执行服务器发送的任务执行请求指示任务数量为200个任务,则任务调度服务器从400个子任务中,选取200个预执行子任务发送给任务执行服务器。
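按任务数量选取预执行子任务的过程可以用如下Python示意代码表示(pick_pre_exec_subtasks 为本示例假设的函数名,选取顺序也仅为示例):

```python
# 示意性代码:按任务执行请求中的任务数量,从未分配的子任务中选取预执行子任务。
def pick_pre_exec_subtasks(pending_subtasks: list, task_count: int) -> list:
    """从待分配子任务中选取不超过 task_count 个预执行子任务,并将其移出待分配列表。"""
    picked = pending_subtasks[:task_count]
    del pending_subtasks[:task_count]
    return picked

pending = [f"subtask-{i}" for i in range(400)]
picked = pick_pre_exec_subtasks(pending, 200)  # 对应上例:从400个子任务中选取200个
```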
S304,按照IP地址将预执行子任务发送给任务执行服务器。
任务调度服务器在确定预执行子任务后,可以按照任务执行请求中IP地址将预执行子任务发送给对应的任务执行服务器,由该任务执行服务器执行该预执行子任务,并将该预执行子任务的第一执行结果发送给任务调度服务器。
S305,接收任务执行服务器发送的预执行子任务的第一执行结果,并基于第一执行结果,生成总任务的第二执行结果。
关于步骤S305的具体介绍可参见上述实施例中相关内容的记载,此处不再赘述。
本公开实施例中,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务,接收任务执行服务器发送的任务执行请求,按照任务数量从多个子任务中,确定任务执行服务器的预执行子任务,按照IP地址将预执行子任务发送给任务执行服务器,接收任务执行服务器发送的预执行子任务的第一执行结果,并基于第一执行结果,生成总任务的第二执行结果。本公开实施例中,根据任务执行服务器当前可执行的任务数量,向任务执行服务器分配对应数量的任务进行执行,能够充分发挥任务执行服务器的任务执行能力。
图4为本公开一实施例提供的任务处理方法的流程示意图。如图4所示,该方法包含以下步骤:
S401,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务。
S402,接收任务执行服务器发送的任务执行请求。
S403,基于任务执行请求,从多个子任务中确定任务执行服务器的预执行子任务,并向任务执行服务器发送预执行子任务。
关于步骤S401~S403的具体介绍可参见上述实施例中相关内容的记载,此处不再赘述。
S404,锁定预执行子任务,其中,预执行子任务处于锁定状态时无法被再次发送。
任务调度服务器从多个子任务中,确定任务执行服务器的预执行子任务并将该预执行子任务发送给任务执行服务器后,将该预执行子任务进行锁定,该预执行子任务处于锁定状态时无法被再次发送,以避免该预执行子任务被重复执行。
S405,响应于接收到预执行子任务的执行失败信息,则解除预执行子任务的锁定状态。
S406,响应于未接收到预执行子任务的执行失败信息,则保持锁定状态。
在预执行子任务被锁定之后,若任务调度服务器接收到了该预执行子任务的执行失败信息,则解除该预执行子任务的锁定状态,以重新对该预执行子任务进行调度,确保该预执行子任务被任务执行服务器执行;若任务调度服务器未接收到该预执行子任务的执行失败信息,则该预执行子任务继续保持锁定状态。
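锁定与解除锁定的逻辑可用如下Python示意代码表示(SubTaskLocker 为本示例假设的类名,并非本公开限定的实现):

```python
# 示意性代码:任务调度服务器对已分发的预执行子任务进行锁定管理。
class SubTaskLocker:
    """处于锁定状态的预执行子任务不会被再次发送;收到执行失败信息时解除锁定。"""

    def __init__(self):
        self._locked = set()

    def lock(self, task_id: str):
        # 向任务执行服务器发送预执行子任务之后,将其锁定
        self._locked.add(task_id)

    def on_exec_failed(self, task_id: str):
        # 接收到执行失败信息时解除锁定状态,使该子任务可被重新调度
        self._locked.discard(task_id)

    def is_dispatchable(self, task_id: str) -> bool:
        # 未被锁定的子任务才可以被发送
        return task_id not in self._locked
```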
本公开实施例中,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务,接收任务执行服务器发送的任务执行请求,基于任务执行请求,从多个子任务中确定任务执行服务器的预执行子任务,并向任务执行服务器发送预执行子任务,锁定预执行子任务,其中,预执行子任务处于锁定状态时无法被再次发送,响应于接收到预执行子任务的执行失败信息,则解除预执行子任务的锁定状态,响应于未接收到预执行子任务的执行失败信息,则保持锁定状态。本公开实施例中,将已分配的任务进行锁定,能够避免任务的重复分配,浪费算力资源。
图5为本公开一实施例提供的任务处理方法的流程示意图。如图5所示,该方法包含以下步骤:
S501,监控预执行子任务的执行进度。
可以通过确定预执行子任务是否被发送给任务执行服务器和预执行子任务的执行结果,确定该预执行子任务的执行进度。
S502,基于预执行子任务的当前执行进度,对预执行子任务的当前执行状态进行更新。
响应于预执行子任务未被发送给任务执行服务器,或者接收到预执行子任务的执行失败信息,则确定预执行子任务的当前状态为未被执行状态;响应于预执行子任务已被发送给任务执行服务器,则将预执行子任务的当前状态更新为执行中状态;响应于接收到预执行子任务的第一执行结果,则将预执行子任务的当前状态更新为已完成状态。
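上述状态更新规则可归纳为如下Python示意代码(状态名称、函数签名均为本示例假设):

```python
# 示意性代码:基于预执行子任务的当前执行进度,确定其当前执行状态。
NOT_EXECUTED, EXECUTING, COMPLETED = "未被执行", "执行中", "已完成"

def update_state(sent: bool, exec_failed: bool, has_first_result: bool) -> str:
    if has_first_result:
        return COMPLETED       # 已接收到第一执行结果
    if not sent or exec_failed:
        return NOT_EXECUTED    # 未被发送,或接收到执行失败信息
    return EXECUTING           # 已被发送且尚未返回结果
```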
本公开实施例中,监控预执行子任务的执行进度,基于预执行子任务的当前执行进度,对预执行子任务的当前执行状态进行更新。本公开实施例能够根据任务的执行进度,自动更新任务的当前状态,以便于任务的分配管理。
图6为本公开一实施例提供的任务处理方法的流程示意图。如图6所示,该方法包含以下步骤:
S601,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务。
S602,接收任务执行服务器发送的任务执行请求。
S603,基于任务执行请求,从多个子任务中确定任务执行服务器的预执行子任务,并向任务执行服务器发送预执行子任务。
S604,接收任务执行服务器发送的预执行子任务的第一执行结果,并基于第一执行结果,生成总任务的第二执行结果。
关于步骤S601~S604的具体介绍可参见上述实施例中相关内容的记载,此处不再赘述。
S605,获取总任务的标识信息。
在一些实施例中,标识信息可以为唯一ID。
S606,基于标识信息,从候选任务发布服务器中,确定发布总任务的目标任务发布服务器。
S607,将第二执行结果发送给目标任务发布服务器。
一些实施例中,任务发布服务器在将总任务发送给任务调度服务器时,可以一同将该总任务的标识信息发送给任务调度服务器,任务调度服务器接收到该标识信息后,可以将该标识信息存储于自身的存储空间中,以基于该标识信息将总任务的执行结果发送给任务发布服务器。
具体地,任务调度服务器得到总任务的第二执行结果之后,可以从自身的存储空间中,调取该总任务的标识信息,并基于该标识信息,从候选任务发布服务器中,确定发布该总任务的目标任务发布服务器,并将该总任务的第二执行结果发送给该目标任务发布服务器。
上述实施例为任务调度服务器得到总任务的第二执行结果后,主动向任务发布服务器反馈总任务执行结果的情况。任务调度服务器得到总任务的第二执行结果后,也可以不主动向任务发布服务器反馈执行结果,而是由任务发布服务器主动向任务调度服务器查询总任务的执行结果。
另一实施例中,任务发布服务器向任务调度服务器发送总任务的标识信息,任务调度服务器接收到该标识信息后,可以以该标识信息为索引,查询该任务发布服务器发布的总任务的当前执行状态,若该总任务已完成,则将该总任务的第二执行结果发送给该任务发布服务器;若该总任务未完成,则将该总任务的当前执行状态发送给任务发布服务器。
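以标识信息为索引查询总任务执行进展的逻辑,可用如下Python示意代码表示(query_total_task 的函数名与存储结构均为本示例假设):

```python
# 示意性代码:任务调度服务器以总任务的标识信息为索引,返回执行结果或当前执行状态。
def query_total_task(task_store: dict, task_sign: str) -> dict:
    """task_store 为 {标识信息: {"state": ..., "result": ...}} 形式的示例存储结构。"""
    record = task_store[task_sign]
    if record["state"] == "已完成":
        return {"type": "result", "data": record["result"]}  # 已完成:返回第二执行结果
    return {"type": "state", "data": record["state"]}        # 未完成:返回当前执行状态

store = {
    "t-001": {"state": "已完成", "result": "第二执行结果"},
    "t-002": {"state": "执行中", "result": None},
}
```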
进一步地,还可以将上述两个实施例所描述两种方案进行结合,使得任务调度服务器可以主动反馈总任务的执行结果,任务发布服务器也可以主动查询总任务的执行结果。其具体过程可以参见上述两个实施例的相关描述,此处不再赘述。
本公开实施例中,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务,接收任务执行服务器发送的任务执行请求,基于任务执行请求,从多个子任务中确定任务执行服务器的预执行子任务,并向任务执行服务器发送预执行子任务,接收任务执行服务器发送的预执行子任务的第一执行结果,并基于第一执行结果,生成总任务的第二执行结果,获取总任务的标识信息,基于标识信息,从候选任务发布服务器中,确定发布总任务的目标任务发布服务器,并将第二执行结果发送给目标任务发布服务器。本公开实施例中,基于标识信息将任务发布服务器发布的总任务的执行结果发送给该任务发布服务器,能够避免数据出现紊乱,导致任务发布服务器的执行结果错误,保证了任务发布服务器得到的任务执行结果的正确性。
图7为本公开一实施例提供的任务处理方法的流程示意图。如图7所示,该方法包含以下步骤:
S701,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务。
关于步骤S701的具体介绍可参见上述实施例中相关内容的记载,此处不再赘述。
S702,按照设定间隔时间,向每个具备任务执行权限的任务执行服务器发送任务广播,以指示任务执行服务器按照设定间隔时间获取预执行子任务。
本公开实施例中,可以预先对任务执行服务器的任务执行权限进行设置,使得不具备任务执行权限的任务执行服务器无法执行任务。其中,任务执行权限可以根据实际需求进行设置,此处不做任何限制。
任务调度服务器在将任务发布服务器发布的总任务拆分成多个子任务后,可以按照设定间隔时间,向每个具备任务执行权限的任务执行服务器发送任务广播,具备任务执行权限的任务执行服务器接收到任务调度服务器发送的任务广播后,对该任务广播进行响应,生成任务执行请求,并将该任务执行请求发送给任务调度服务器。任务调度服务器接收到任务执行服务器发送的任务执行请求后,可以根据该任务执行请求从多个子任务中确定该任务执行服务器的预执行子任务,并将该预执行子任务发送给该任务执行服务器,由该任务执行服务器执行该预执行子任务。
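按设定间隔时间向具备任务执行权限的任务执行服务器广播的过程,可用如下Python示意代码表示(服务器的数据结构、send_broadcast 回调均为本示例假设;演示时将间隔与轮数设为较小的值):

```python
# 示意性代码:按设定间隔时间,向每个具备任务执行权限的任务执行服务器发送任务广播。
import time

def select_broadcast_targets(servers):
    """仅保留具备任务执行权限的任务执行服务器。"""
    return [s for s in servers if s["has_permission"]]

def broadcast_loop(servers, send_broadcast, interval_seconds, rounds):
    """每隔 interval_seconds 秒广播一轮,共 rounds 轮(实际部署中可长期循环运行)。"""
    sent = []
    for _ in range(rounds):
        for server in select_broadcast_targets(servers):
            send_broadcast(server["ip"])
            sent.append(server["ip"])
        time.sleep(interval_seconds)
    return sent
```

不具备任务执行权限的任务执行服务器不会收到任务广播,因而不会被调度执行任务。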
本公开实施例中,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务,按照设定间隔时间,向每个具备任务执行权限的任务执行服务器发送任务广播,以指示任务执行服务器按照设定间隔时间获取预执行子任务。本公开实施例中,任务调度服务器通过任务广播定时调度各个任务执行服务器,能够避免出现部分任务执行服务器处于空闲状态,从而能够充分发挥每一个任务执行服务器的性能,提高任务执行的效率。
图8为本公开一实施例提供的任务处理方法的流程示意图。需要说明的是,本公开实施例的执行主体为任务执行服务器。
如图8所示,该方法包含以下步骤:
S801,生成任务执行请求,并向任务调度服务器发送任务执行请求。
其中,任务执行请求用于指示任务调度服务器向任务执行服务器发送对应的预执行子任务。
在任务调度服务器将任务发布服务器发布的总任务拆分成多个子任务之后,任务执行服务器可以生成任务执行请求,并将该任务执行请求发送给任务调度服务器,任务调度服务器接收到该任务执行请求之后,根据该任务执行请求从多个子任务中,确定任务执行服务器的预执行子任务,并将该预执行子任务发送给该任务执行服务器。
S802,接收任务调度服务器发送的预执行子任务。
S803,执行预执行子任务,并将预执行子任务的第一执行结果发送给任务调度服务器,以生成总任务的第二执行结果。其中,总任务由多个预执行子任务组成。
任务执行服务器接收到任务调度服务器发送预执行子任务后,执行该预执行子任务,并将该预执行子任务的第一执行结果发送给任务调度服务器。在总任务的全部子任务完成之后,任务调度服务器可以根据全部子任务的第一执行结果,生成总任务的第二执行结果。
在一些实施例中,按照预执行子任务的设定优先级执行预执行子任务。
举例而言,假设总任务A的预执行子任务a的设定优先级为1,总任务B的预执行子任务b的设定优先级为2,则优先执行预执行子任务a。
进一步地,由于业务需求也是影响任务执行顺序的因素,还可以结合业务需求来确定任务的执行顺序。
举例而言,仍假设总任务A的预执行子任务a的设定优先级为1,总任务B的预执行子任务b的设定优先级为2。
场景一:总任务A和总任务B同时发布。可以按照设定优先级先执行总任务A的预执行子任务a,再执行总任务B的预执行子任务b。
场景二:总任务B先发布,总任务A后发布。可根据业务场景需要,选择先执行当前正在执行中的预执行子任务b,直到把总任务B的所有预执行子任务b执行完毕,再执行预执行子任务a。或者根据场景需要,先执行预执行子任务a,暂停正在执行中的预执行子任务b,插队把总任务A的所有预执行子任务a执行完毕后,再执行预执行子任务b。
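按设定优先级执行预执行子任务的过程,可以用优先级队列示意如下(数值越小优先级越高为本示例的假设约定,并非本公开限定的实现):

```python
# 示意性代码:任务执行服务器按设定优先级依次执行预执行子任务。
import heapq

def execute_by_priority(subtasks, run):
    """subtasks 为 (优先级, 子任务ID) 列表;优先级数值小者先执行,同级按入队顺序执行。"""
    heap = [(priority, order, task_id) for order, (priority, task_id) in enumerate(subtasks)]
    heapq.heapify(heap)
    executed = []
    while heap:
        _, _, task_id = heapq.heappop(heap)
        run(task_id)  # 实际执行该预执行子任务
        executed.append(task_id)
    return executed
```

如需结合业务需求实现"插队执行"等策略,可在此基础上对队列中的优先级进行动态调整。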
本公开实施例中,生成任务执行请求,并向任务调度服务器发送任务执行请求,其中,任务执行请求用于指示任务调度服务器向任务执行服务器发送对应的预执行子任务,接收任务调度服务器发送的预执行子任务,执行预执行子任务,并将预执行子任务的第一执行结果发送给任务调度服务器,以生成总任务的第二执行结果,其中,总任务由多个预执行子任务组成。本公开实施例中,任务执行服务器可以根据自身当前的执行能力,请求任务调度服务器分发任务,从而充分发挥任务执行服务器的任务执行能力,提高任务执行的效率。
图9为本公开一实施例提供的任务处理方法的流程示意图。如图9所示,该方法包含以下步骤:
S901,响应于接收到任务调度服务器发送的任务广播,获取任务执行服务器当前的资源信息。
任务调度服务器将任务发布服务器发布的总任务拆分成多个子任务之后,向任务执行服务器发送任务广播,任务执行服务器接收到任务调度服务器发送的任务广播之后,获取自身当前的资源信息。
S902,基于资源信息,确定任务执行服务器是否满足任务执行条件。
在一些实施例中,资源信息包括CPU使用率和/或内存使用率。
在一些实施例中,基于CPU使用率和/或内存使用率,确定任务执行服务器是否满足任务执行条件,响应于CPU使用率和/或内存使用率大于各自的阈值,则确定任务执行服务器满足任务执行条件;响应于CPU使用率和/或内存使用率小于或等于各自的阈值,则确定任务执行服务器未满足任务执行条件。
S903,响应于满足任务执行条件,则生成任务执行请求。
在一些实施例中,基于CPU使用率和/或内存使用率,确定任务执行服务器可执行的任务数量,并基于任务数量和任务执行服务器的IP地址,生成任务执行请求。
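基于资源信息生成任务执行请求的过程可用如下Python示意代码表示。其中阈值取值与可执行任务数量的估算公式均为本示例假设;条件判断按本文实施例的描述实现:

```python
# 示意性代码:任务执行服务器基于CPU使用率和/或内存使用率生成任务执行请求。
# 阈值与任务数量的计算方式均为示例假设,并非本公开限定的实现。
def generate_task_request(ip, cpu_usage, mem_usage, cpu_threshold=0.2, mem_threshold=0.2):
    """满足任务执行条件时,返回包含IP地址与任务数量的任务执行请求;否则返回 None。"""
    if cpu_usage > cpu_threshold or mem_usage > mem_threshold:
        # 示例性估算:资源占用越高,可执行任务数量越少
        task_count = max(1, int((1 - max(cpu_usage, mem_usage)) * 100))
        return {"ip": ip, "task_count": task_count}
    return None  # 未满足任务执行条件,放弃本次任务获取
```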
本公开实施例中,响应于接收到任务调度服务器发送的任务广播,获取任务执行服务器当前的资源信息,基于资源信息,确定任务执行服务器是否满足任务执行条件,响应于满足任务执行条件,则生成任务执行请求。本公开实施例中,通过任务执行服务器当前的CPU使用率和/或内存使用率,能够准确地确定任务执行服务器当前可执行的任务数量,从而充分发挥任务执行服务器的任务执行能力。
图10为本公开一实施例提供的任务处理方法的流程示意图。如图10所示,该方法包含以下步骤:
S1001,将任务调度服务器发送的预执行子任务提交至线程池,并确定是否提交成功。
在一些实施例中,线程池可以为异步线程池,以异步执行预执行子任务。
S1002,响应于提交失败,则生成预执行子任务的执行失败信息。
S1003,将执行失败信息发送给任务调度服务器,其中,执行失败信息用于指示任务调度服务器将预执行子任务解除锁定状态。
S1004,响应于提交成功,则执行预执行子任务。
任务执行服务器接收到任务调度服务器发送的预执行子任务后,可以将该预执行子任务提交至线程池,并确定是否提交成功,若否,则生成该预执行子任务的执行失败信息,并将该预执行子任务的执行失败信息发送给任务调度服务器,任务调度服务器接收到该预执行子任务的执行失败信息后,可以将该预执行子任务解除锁定状态,以保证该预执行子任务能够被再次获取执行;若是,则任务执行服务器可以执行该预执行子任务。
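提交至线程池并在提交失败时上报的过程,可用如下Python示意代码表示(report_failure 回调与失败信息的字段均为本示例假设):

```python
# 示意性代码:任务执行服务器将预执行子任务提交至线程池,提交失败时生成执行失败信息。
from concurrent.futures import ThreadPoolExecutor

def submit_subtask(pool, run_subtask, subtask_id, report_failure):
    """提交成功则返回 Future 以异步执行;提交失败则上报执行失败信息并返回 None。"""
    try:
        return pool.submit(run_subtask, subtask_id)
    except RuntimeError:
        # 例如线程池已关闭等原因导致提交失败
        report_failure({"subtask_id": subtask_id, "status": "执行失败"})
        return None
```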
本公开实施例中,将任务调度服务器发送的预执行子任务提交至线程池,并确定是否提交成功,响应于提交失败,则生成预执行子任务的执行失败信息,将执行失败信息发送给任务调度服务器,其中,执行失败信息用于指示任务调度服务器将预执行子任务解除锁定状态,响应于提交成功,则执行预执行子任务。本公开实施例中,任务执行服务器将任务调度服务器分配的任务提交至线程池,能够提高任务的执行效率,且在任务执行失败时可以解除任务锁定状态,以将任务重新分配,确保任务的执行。
图11为任务执行服务器执行预执行子任务的流程示例图。如图11所示,包括以下步骤:
S1101,任务调度服务器唤醒任务执行服务器。
S1102,任务执行服务器获取当前CPU使用率和/或内存使用率。
S1103,判断CPU使用率和/或内存使用率是否大于各自的阈值。若是,则执行步骤S1105;若否,则执行步骤S1104。
S1104,放弃获取任务,等待自身资源释放,本次任务获取操作结束。
S1105,生成任务执行请求,并发送给任务调度服务器。
S1106,获取预执行子任务,并提交至异步线程池。
S1107,判断是否提交成功。若是,则执行步骤S1109;若否,则执行步骤S1108。
S1108,生成执行失败信息发送给任务调度服务器,本次任务获取操作结束。
S1109,执行预执行子任务。
S1110,将预执行子任务的第一执行结果发送给任务调度服务器。
图12为本公开一实施例提供的任务处理方法的流程示意图。需要说明的是,本公开实施例的执行主体为任务发布服务器。
如图12所示包含以下步骤:
S1201,基于当前业务场景生成总任务。
确定当前业务场景,并根据当前业务场景生成总任务。
举例而言,假设当前业务场景为ABS过滤的业务场景,则任务发布服务器可以生成ABS过滤任务。
S1202,将总任务发送给任务调度服务器,由任务调度服务器对总任务进行拆分调度,以得到总任务的执行结果。
任务发布服务器基于当前业务场景生成总任务之后,可以将该总任务发送给任务调度服务器,任务调度服务器接收到该总任务之后,可以将该总任务拆分成多个子任务,并将该多个子任务调度给任务执行服务器进行执行,以得到该总任务的执行结果。
需要说明的是,子任务的具体调度过程可以参见上述实施例中相关内容的记载,此处不再赘述。
S1203,接收任务调度服务器发送的执行结果。
一些实施例中,任务发布服务器在将总任务发送给任务调度服务器时,可以一同将该总任务的标识信息发送给任务调度服务器,任务调度服务器接收到该标识信息后,可以将该标识信息存储于自身的存储空间中,以基于该标识信息将总任务的执行结果发送给任务发布服务器。
具体地,任务调度服务器得到总任务的执行结果后,可以从自身的存储空间中调取该总任务的标识信息,并基于该标识信息,从多个候选任务发布服务器中,确定发布该总任务的任务发布服务器,并将该总任务的执行结果发送给该任务发布服务器。
另一实施例中,任务发布服务器向任务调度服务器发送总任务的标识信息,其中,该标识信息用于指示任务调度服务器查询总任务的执行结果,并将执行结果发送给所述任务发布服务器。
具体地,任务调度服务器接收到任务发布服务器发送的标识信息后,可以以该标识信息为索引,查询该任务发布服务器发布的总任务的当前执行状态,若该总任务已完成,则将该总任务的执行结果发送给该任务发布服务器;若该总任务未完成,则将该总任务的当前执行状态发送给任务发布服务器。
本公开实施例中,基于当前业务场景生成总任务,将总任务发送给任务调度服务器,由任务调度服务器对总任务进行拆分调度,以得到总任务的执行结果,接收任务调度服务器发送的执行结果。本公开实施例中,任务发布服务器将总任务发送给任务调度服务器,由任务调度服务器进行拆分调度,能够充分发挥任务执行服务器的任务执行能力,提高任务执行的效率。
图13为任务发布服务器、任务调度服务器和任务执行服务器三者之间的交互示意图。如图13所示,任务发布服务器的功能包括:发布总任务和查询结果。
任务调度服务器的功能包括:任务拆分、任务广播、任务锁定和任务状态更新。其中,任务拆分包括拆分策略,任务广播包括定时调度。
任务执行服务器的功能包括:获取任务和执行任务。
如图13所示,任务发布服务器、任务调度服务器和任务执行服务器三者之间的交互过程如下:
1、任务发布服务器发布总任务,并将总任务发送给任务调度服务器。
2、任务调度服务器拆分总任务。
3、任务调度服务器向任务执行服务器发送任务广播。
4、任务执行服务器获取预执行子任务。
5、任务调度服务器将预执行子任务锁定。
6、任务执行服务器执行预执行子任务。
7、任务调度服务器根据预执行子任务的执行进度更新任务状态。
8、任务发布服务器查询总任务的执行结果。
为了实现上述第一方面实施例的任务处理方法,本公开提出了一种任务处理装置,图14为本公开一实施例的任务处理装置的结构示意图。如图14所示,任务处理装置1400包括:
获取模块1410,用于获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务;
接收模块1420,用于接收任务执行服务器发送的任务执行请求;
第一发送模块1430,用于基于任务执行请求,从多个子任务中确定任务执行服务器的预执行子任务,并向任务执行服务器发送预执行子任务;
生成模块1440,用于接收任务执行服务器发送的预执行子任务的第一执行结果,并基于第一执行结果,生成总任务的第二执行结果。
在本公开的一个实施例中,获取模块1410,还用于基于当前业务场景,从候选拆分策略中确定总任务的目标拆分策略;基于目标拆分策略,将总任务拆分成多个子任务。
在本公开的一个实施例中,任务处理装置1400还包括:第二发送模块,用于接收任务执行服务器发送的任务执行请求之前,向任务执行服务器发送任务广播,其中,任务广播用于指示任务执行服务器生成任务执行请求。
在本公开的一个实施例中,任务执行请求包括网络协议IP地址和任务数量,第一发送模块1430还用于:按照任务数量从多个子任务中,确定任务执行服务器的预执行子任务;按照IP地址将预执行子任务发送给任务执行服务器。
在本公开的一个实施例中,任务处理装置1400还包括:锁定模块,用于向任务执行服务器发送预执行子任务之后,锁定预执行子任务,其中,预执行子任务处于锁定状态时无法被再次发送;解除模块,用于响应于接收到预执行子任务的执行失败信息,则解除预执行子任务的锁定状态。
在本公开的一个实施例中,任务处理装置1400还包括:更新模块,用于监控预执行子任务的执行进度;基于预执行子任务的当前执行进度,对预执行子任务的当前执行状态进行更新。
在本公开的一个实施例中,更新模块,还用于响应于预执行子任务未被发送给任务执行服务器,或者接收到预执行子任务的执行失败信息,则确定预执行子任务的当前状态为未被执行状态;响应于预执行子任务已被发送给任务执行服务器,则将预执行子任务的当前状态更新为执行中状态;响应于接收到预执行子任务的第一执行结果,则将预执行子任务的当前状态更新为已完成状态。
在本公开的一个实施例中,任务处理装置1400还包括:第三发送模块,用于获取总任务的标识信息;基于标识信息,从候选任务发布服务器中,确定发布总任务的目标任务发布服务器;将第二执行结果发送给目标任务发布服务器。
在本公开的一个实施例中,任务处理装置1400还包括:第四发送模块,用于按照设定间隔时间,向每个具备任务执行权限的任务执行服务器发送任务广播,以指示任务执行服务器按照设定间隔时间获取预执行子任务。
需要说明的是,上述对第一方面的任务处理方法实施例的解释说明,也适用于本公开实施例的任务处理装置,具体过程此处不再赘述。
本公开实施例中,获取任务发布服务器发布的总任务,并将总任务拆分成多个子任务,接收任务执行服务器发送的任务执行请求,基于任务执行请求,从多个子任务中确定任务执行服务器的预执行子任务,并向任务执行服务器发送预执行子任务,接收任务执行服务器发送的预执行子任务的第一执行结果,并基于第一执行结果,生成总任务的第二执行结果。本公开实施例中,任务调度服务器可以根据任务执行服务器发送的任务执行请求,知晓任务执行服务器的任务执行能力,从而能够根据任务执行服务器的任务执行能力来分发任务,进而充分发挥任务执行服务器的任务执行能力,提高任务执行的效率。
为了实现上述第二方面实施例的任务处理方法,本公开提出了一种任务处理装置,图15为本公开一实施例的任务处理装置的结构示意图。如图15所示,任务处理装置1500包括:
生成模块1510,用于生成任务执行请求,并向任务调度服务器发送任务执行请求,其中,任务执行请求用于指示任务调度服务器向任务执行服务器发送对应的预执行子任务;
接收模块1520,用于接收任务调度服务器发送的预执行子任务;
执行模块1530,用于执行预执行子任务,并将预执行子任务的第一执行结果发送给任务调度服务器,以生成总任务的第二执行结果,其中,总任务由多个预执行子任务组成。
在本公开的一个实施例中,生成模块1510,还用于响应于接收到任务调度服务器发送的任务广播,获取任务执行服务器当前的资源信息;基于资源信息,确定任务执行服务器是否满足任务执行条件;响应于满足任务执行条件,则生成任务执行请求。
在本公开的一个实施例中,资源信息包括CPU使用率和/或内存使用率,生成模块1510还用于:基于CPU使用率和/或内存使用率,确定任务执行服务器是否满足任务执行条件;响应于CPU使用率和/或内存使用率大于各自的阈值,则确定任务执行服务器满足任务执行条件;响应于CPU使用率和/或内存使用率小于或等于各自的阈值,则确定任务执行服务器未满足任务执行条件。
在本公开的一个实施例中,生成模块1510还用于:基于CPU使用率和/或内存使用率,确定任务执行服务器可执行的任务数量;基于任务数量和任务执行服务器的IP地址,生成任务执行请求。
在本公开的一个实施例中,任务处理装置1500还包括提交模块,用于将任务调度服务器发送的预执行子任务提交至线程池,并确定是否提交成功;响应于提交失败,则生成预执行子任务的执行失败信息;将执行失败信息发送给任务调度服务器,其中,执行失败信息用于指示任务调度服务器将预执行子任务解除锁定状态;响应于提交成功,则执行预执行子任务。
在本公开的一个实施例中,执行模块1530还用于按照预执行子任务的设定优先级执行预执行子任务。
需要说明的是,上述对第二方面的任务处理方法实施例的解释说明,也适用于本公开实施例的任务处理装置,具体过程此处不再赘述。
本公开实施例中,生成任务执行请求,并向任务调度服务器发送任务执行请求,其中,任务执行请求用于指示任务调度服务器向任务执行服务器发送对应的预执行子任务,接收任务调度服务器发送的预执行子任务,执行预执行子任务,并将预执行子任务的第一执行结果发送给任务调度服务器,以生成总任务的第二执行结果,其中,总任务由多个预执行子任务组成。本公开实施例中,任务执行服务器可以根据自身当前的执行能力,请求任务调度服务器分发任务,从而充分发挥任务执行服务器的任务执行能力,提高任务执行的效率。
为了实现上述第三方面实施例的任务处理方法,本公开提出了一种任务处理装置,图16为本公开一实施例的任务处理装置的结构示意图。如图16所示,任务处理装置1600包括:
生成模块1610,用于基于当前业务场景生成总任务;
发送模块1620,用于将总任务发送给任务调度服务器,由任务调度服务器对总任务进行拆分调度,以得到总任务的执行结果;
接收模块1630,用于接收任务调度服务器发送的执行结果。
在本公开的一个实施例中,发送模块1620还用于接收任务调度服务器发送的执行结果之前,向任务调度服务器发送总任务的标识信息,其中,标识信息用于指示任务调度服务器查询总任务的执行结果,并将执行结果发送给任务发布服务器。
需要说明的是,上述对第三方面的任务处理方法实施例的解释说明,也适用于本公开实施例的任务处理装置,具体过程此处不再赘述。
本公开实施例中,基于当前业务场景生成总任务,将总任务发送给任务调度服务器,由任务调度服务器对总任务进行拆分调度,以得到总任务的执行结果,接收任务调度服务器发送的执行结果。本公开实施例中,任务发布服务器将总任务发送给任务调度服务器,由任务调度服务器进行拆分调度,能够充分发挥任务执行服务器的任务执行能力,提高任务执行的效率。
如图17所示,是根据本公开实施例的任务处理方法的电子设备的框图。电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,智能语音交互设备、个人数字处理、蜂窝电话、智能电话、可穿戴设备和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本公开的实现。
如图17所示,该电子设备包括:一个或多个处理器1701、存储器1702,以及用于连接各部件的接口,包括高速接口和低速接口。各个部件利用不同的总线互相连接,并且可以被安装在公共主板上或者根据需要以其它方式安装。处理器1701可以对在电子设备内执行的指令进行处理,包括存储在存储器中或者存储器上以在外部输入/输出装置(诸如,耦合至接口的显示设备)上显示GUI的图形信息的指令。在其它实施方式中,若需要,可以将多个处理器和/或多条总线与多个存储器一起使用。同样,可以连接多个电子设备,各个设备提供部分必要的操作(例如,作为服务器阵列、一组刀片式服务器、或者多处理器系统)。图17中以一个处理器1701为例。
存储器1702即为本公开所提供的非瞬时计算机可读存储介质。其中,存储器存储有可由至少一个处理器执行的指令,以使至少一个处理器执行本公开所提供的任务处理方法。本公开的非瞬时计算机可读存储介质存储计算机指令,该计算机指令用于使计算机执行本公开所提供的任务处理方法。
存储器1702作为一种非瞬时计算机可读存储介质,可用于存储非瞬时软件程序、非瞬时计算机可执行程序以及模块,如本公开实施例中的任务处理方法对应的程序指令/模块。处理器1701通过运行存储在存储器1702中的非瞬时软件程序、指令以及模块,从而执行服务器的各种功能应用以及数据处理,即实现上述方法实施例中的任务处理方法。
存储器1702可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储根据任务处理方法的电子设备的使用所创建的数据等。此外,存储器1702可以包括高速随机存取存储器,还可以包括非瞬时存储器,例如至少一个磁盘存储器件、闪存器件、或其他非瞬时固态存储器件。在一些实施例中,存储器1702可选包括相对于处理器1701远程设置的存储器,这些远程存储器可以通过网络连接至任务处理方法的电子设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
任务处理方法的电子设备还可以包括:输入装置1703和输出装置1704。处理器1701、存储器1702、输入装置1703和输出装置1704可以通过总线或者其他方式连接,图17中以通过总线连接为例。
输入装置1703可接收输入的数字或字符信息,以及产生与任务处理方法的电子设备的用户设置以及功能控制有关的键信号输入,例如触摸屏、小键盘、鼠标、轨迹板、触摸板、指示杆、一个或者多个鼠标按钮、轨迹球、操纵杆等输入装置。输出装置1704可以包括显示设备、辅助照明装置(例如,LED)和触觉反馈装置(例如,振动电机)等。该显示设备可以包括但不限于,液晶显示器(LCD)、发光二极管(LED)显示器和等离子体显示器。在一些实施方式中,显示设备可以是触摸屏。
为了实现上述实施例,本公开还提出一种非临时性计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如本公开前述第一方面实施例或第二方面实施例或第三方面实施例提出的任务处理方法。
为了实现上述实施例,本公开提出了一种计算机程序产品,包括计算机程序,计算机程序在被处理器执行时实现本公开前述第一方面实施例或第二方面实施例或第三方面实施例的任务处理方法。
此处描述的系统和技术的各种实施方式可以在数字电子电路系统、集成电路系统、专用ASIC(专用集成电路)、计算机硬件、固件、软件、和/或它们的组合中实现。这些各种实施方式可以包括:实施在一个或者多个计算机程序中,该一个或者多个计算机程序可在包括至少一个可编程处理器的可编程系统上执行和/或解释,该可编程处理器可以是专用或者通用可编程处理器,可以从存储系统、至少一个输入装置、和至少一个输出装置接收数据和指令,并且将数据和指令传输至该存储系统、该至少一个输入装置、和该至少一个输出装置。
这些计算程序(也称作程序、软件、软件应用、或者代码)包括可编程处理器的机器指令,并且可以利用高级过程和/或面向对象的编程语言、和/或汇编/机器语言来实施这些计算程序。如本文使用的,术语“机器可读介质”和“计算机可读介质”指的是用于将机器指令和/或数据提供给可编程处理器的任何计算机程序产品、设备、和/或装置(例如,磁盘、光盘、存储器、可编程逻辑装置(PLD)),包括,接收作为机器可读信号的机器指令的机器可读介质。术语“机器可读信号”指的是用于将机器指令和/或数据提供给可编程处理器的任何信号。
为了提供与用户的交互,可以在计算机上实施此处描述的系统和技术,该计算机具有:用于向用户显示信息的显示装置(例如,CRT(阴极射线管)或者LCD(液晶显示器)监视器);以及键盘和指向装置(例如,鼠标或者轨迹球),用户可以通过该键盘和该指向装置来将输入提供给计算机。其它种类的装置还可以用于提供与用户的交互;例如,提供给用户的反馈可以是任何形式的传感反馈(例如,视觉反馈、听觉反馈、或者触觉反馈);并且可以用任何形式(包括声输入、语音输入或者触觉输入)来接收来自用户的输入。
可以将此处描述的系统和技术实施在包括后台部件的计算系统(例如,作为数据服务器)、或者包括中间件部件的计算系统(例如,应用服务器)、或者包括前端部件的计算系统(例如,具有图形用户界面或者网络浏览器的用户计算机,用户可以通过该图形用户界面或者该网络浏览器来与此处描述的系统和技术的实施方式交互)、或者包括这种后台部件、中间件部件、或者前端部件的任何组合的计算系统中。可以通过任何形式或者介质的数字数据通信(例如,通信网络)来将系统的部件相互连接。通信网络的示例包括:局域网(LAN)、广域网(WAN)和互联网。
计算机系统可以包括客户端和服务器。客户端和服务器一般远离彼此并且通常通过通信网络进行交互。通过在相应的计算机上运行并且彼此具有客户端-服务器关系的计算机程序来产生客户端和服务器的关系。服务器可以是云服务器,又称为云计算服务器或云主机,是云计算服务体系中的一项主机产品,以解决传统物理主机与VPS服务("Virtual Private Server",简称"VPS")中存在的管理难度大、业务扩展性弱的缺陷。
应该理解,可以使用上面所示的各种形式的流程,重新排序、增加或删除步骤。例如,本公开中记载的各步骤可以并行地执行也可以顺序地执行也可以不同的次序执行,只要能够实现本公开公开的技术方案所期望的结果,本文在此不进行限制。
在本说明书的描述中,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本公开的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
尽管上面已经示出和描述了本公开的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本公开的限制,本领域的普通技术人员在本公开的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (23)

  1. 一种任务处理方法,由任务调度服务器执行,所述方法包括:
    获取任务发布服务器发布的总任务,并将所述总任务拆分成多个子任务;
    接收任务执行服务器发送的任务执行请求;
    基于所述任务执行请求,从所述多个子任务中确定所述任务执行服务器的预执行子任务,并向所述任务执行服务器发送所述预执行子任务;
    接收所述任务执行服务器发送的所述预执行子任务的第一执行结果,并基于所述第一执行结果,生成所述总任务的第二执行结果。
  2. 根据权利要求1所述的方法,其中,所述将所述总任务拆分成多个子任务,包括:
    基于当前业务场景,从候选拆分策略中确定所述总任务的目标拆分策略;
    基于所述目标拆分策略,将所述总任务拆分成所述多个子任务。
  3. 根据权利要求1或2所述的方法,其中,还包括:
    向所述任务执行服务器发送任务广播,其中,所述任务广播用于指示所述任务执行服务器生成所述任务执行请求。
  4. 根据权利要求1至3中任一项所述的方法,其中,所述任务执行请求包括网络协议IP地址和任务数量,其中,所述基于所述任务执行请求,从所述多个子任务中确定所述任务执行服务器的预执行子任务,并向所述任务执行服务器发送所述预执行子任务,包括:
    按照所述任务数量从所述多个子任务中,确定所述任务执行服务器的预执行子任务;
    按照所述IP地址将所述预执行子任务发送给所述任务执行服务器。
  5. 根据权利要求1至4中任一项所述的方法,还包括:
    锁定所述预执行子任务,其中,所述预执行子任务处于锁定状态时无法被再次发送;
    响应于接收到所述预执行子任务的执行失败信息,则解除所述预执行子任务的锁定状态。
  6. 根据权利要求1至5中任一项所述的方法,还包括:
    监控所述预执行子任务的执行进度;
    基于所述预执行子任务的当前执行进度,对所述预执行子任务的当前执行状态进行更新。
  7. 根据权利要求6所述的方法,其中,所述基于所述预执行子任务的当前执行进度,对所述预执行子任务的当前执行状态进行更新,包括:
    响应于所述预执行子任务未被发送给所述任务执行服务器,或者接收到所述预执行子任务的执行失败信息,则确定所述预执行子任务的当前状态为未被执行状态;
    响应于所述预执行子任务已被发送给所述任务执行服务器,则将所述预执行子任务的当前状态更新为执行中状态;
    响应于接收到所述预执行子任务的第一执行结果,则将所述预执行子任务的当前状态更新为已完成状态。
  8. 根据权利要求1至7中任一项所述的方法,还包括:
    获取所述总任务的标识信息;
    基于所述标识信息,从候选任务发布服务器中,确定发布所述总任务的目标任务发布服务器;
    将所述第二执行结果发送给所述目标任务发布服务器。
  9. 根据权利要求1至8中任一项所述的方法,还包括:
    按照设定间隔时间,向每个具备任务执行权限的任务执行服务器发送任务广播,以指示所述任务执行服务器按照所述设定间隔时间获取所述预执行子任务。
  10. 一种任务处理方法,由任务执行服务器执行,所述方法包括:
    生成任务执行请求,并向任务调度服务器发送所述任务执行请求,其中,所述任务执行请求用于指示所述任务调度服务器向所述任务执行服务器发送对应的预执行子任务;
    接收所述任务调度服务器发送的所述预执行子任务;
    执行所述预执行子任务,并将所述预执行子任务的第一执行结果发送给所述任务调度服务器,以生成总任务的第二执行结果,其中,所述总任务由多个所述预执行子任务组成。
  11. 根据权利要求10所述的方法,还包括:
    响应于接收到所述任务调度服务器发送的任务广播,获取所述任务执行服务器当前的资源信息;
    基于所述资源信息,确定所述任务执行服务器是否满足任务执行条件;
    响应于满足所述任务执行条件,则生成所述任务执行请求。
  12. 根据权利要求11所述的方法,其中,所述资源信息包括CPU使用率和/或内存使用率,其中,所述基于所述资源信息,确定所述任务执行服务器是否满足任务执行条件,包括:
    基于所述CPU使用率和/或所述内存使用率,确定所述任务执行服务器是否满足所述任务执行条件;
    响应于所述CPU使用率和/或内存使用率大于各自的阈值,则确定所述任务执行服务器满足所述任务执行条件;
    响应于所述CPU使用率和/或内存使用率小于或等于各自的阈值,则确定所述任务执行服务器未满足所述任务执行条件。
  13. 根据权利要求12所述的方法,其中,所述生成任务执行请求,包括:
    基于所述CPU使用率和/或内存使用率,确定所述任务执行服务器可执行的任务数量;
    基于所述任务数量和所述任务执行服务器的IP地址,生成所述任务执行请求。
  14. 根据权利要求10至13中任一项所述的方法,还包括:
    将所述任务调度服务器发送的所述预执行子任务提交至线程池,并确定是否提交成功;
    响应于提交失败,则生成所述预执行子任务的执行失败信息;
    将所述执行失败信息发送给所述任务调度服务器,其中,所述执行失败信息用于指示所述任务调度服务器将所述预执行子任务解除锁定状态;
    响应于提交成功,则执行所述预执行子任务。
  15. 根据权利要求10至14中任一项所述的方法,其中,所述执行所述预执行子任务,包括:
    按照所述预执行子任务的设定优先级执行所述预执行子任务。
  16. 一种任务处理方法,由任务发布服务器执行,所述方法包括:
    基于当前业务场景生成总任务;
    将所述总任务发送给任务调度服务器,由所述任务调度服务器对所述总任务进行拆分调度,以得到所述总任务的执行结果;
    接收所述任务调度服务器发送的所述执行结果。
  17. 根据权利要求16所述的方法,还包括:
    向所述任务调度服务器发送所述总任务的标识信息,其中,所述标识信息用于指示所述任务调度服务器查询所述总任务的执行结果,并将所述执行结果发送给所述任务发布服务器。
  18. 一种任务处理装置,包括:
    获取模块,用于获取任务发布服务器发布的总任务,并将所述总任务拆分成多个子任务;
    接收模块,用于接收任务执行服务器发送的任务执行请求;
    第一发送模块,用于基于所述任务执行请求,从所述多个子任务中确定所述任务执行服务器的预执行子任务,并向所述任务执行服务器发送所述预执行子任务;
    生成模块,用于接收所述任务执行服务器发送的所述预执行子任务的第一执行结果,并基于所述第一执行结果,生成所述总任务的第二执行结果。
  19. 一种任务处理装置,包括:
    生成模块,用于生成任务执行请求,并向任务调度服务器发送所述任务执行请求,其中,所述任务执行请求用于指示所述任务调度服务器向所述任务执行服务器发送对应的预执行子任务;
    接收模块,用于接收所述任务调度服务器发送的所述预执行子任务;
    执行模块,用于执行所述预执行子任务,并将所述预执行子任务的第一执行结果发送给所述任务调度服务器,以生成总任务的第二执行结果,其中,所述总任务由多个所述预执行子任务组成。
  20. 一种任务处理装置,包括:
    生成模块,用于基于当前业务场景生成总任务;
    发送模块,用于将所述总任务发送给任务调度服务器,由所述任务调度服务器对所述总任务进行拆分调度,以得到所述总任务的执行结果;
    接收模块,用于接收所述任务调度服务器发送的所述执行结果。
  21. 一种电子设备,包括:
    至少一个处理器;以及
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行如权利要求1-9或权利要求10-15或权利要求16-17中任一项所述的方法。
  22. 一种存储有计算机指令的计算机可读存储介质,其中,所述计算机指令用于使所述计算机执行如权利要求1-9或权利要求10-15或权利要求16-17中任一项所述的方法。
  23. 一种计算机程序产品,包括计算机程序,所述计算机程序在被处理器执行时实现如权利要求1-9或权利要求10-15或权利要求16-17中任一项所述的方法。
PCT/CN2023/091271 2022-09-26 2023-04-27 任务处理方法、装置、电子设备及存储介质 WO2024066342A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211172522.0A CN115576684A (zh) 2022-09-26 2022-09-26 任务处理方法、装置、电子设备及存储介质
CN202211172522.0 2022-09-26

Publications (1)

Publication Number Publication Date
WO2024066342A1 true WO2024066342A1 (zh) 2024-04-04

Family

ID=84582262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/091271 WO2024066342A1 (zh) 2022-09-26 2023-04-27 任务处理方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN115576684A (zh)
WO (1) WO2024066342A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115576684A (zh) * 2022-09-26 2023-01-06 京东科技信息技术有限公司 任务处理方法、装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902587A (zh) * 2011-07-28 2013-01-30 中国移动通信集团四川有限公司 分布式任务调度方法、系统和装置
CN106095585A (zh) * 2016-06-22 2016-11-09 中国建设银行股份有限公司 任务请求处理方法、装置和企业信息系统
CN111459659A (zh) * 2020-03-10 2020-07-28 中国平安人寿保险股份有限公司 数据处理方法、装置、调度服务器及介质
CN112035258A (zh) * 2020-08-31 2020-12-04 中国平安财产保险股份有限公司 数据处理方法、装置、电子设备及介质
CN113687932A (zh) * 2021-08-30 2021-11-23 上海商汤科技开发有限公司 一种任务调度的方法、装置、系统、电子设备及存储介质
CN113821506A (zh) * 2020-12-23 2021-12-21 京东科技控股股份有限公司 用于任务系统的任务执行方法、装置、系统、服务器和介质
CN115576684A (zh) * 2022-09-26 2023-01-06 京东科技信息技术有限公司 任务处理方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN115576684A (zh) 2023-01-06
