CN113946417A - Distributed task execution method, related device and equipment - Google Patents


Info

Publication number
CN113946417A
CN113946417A (application CN202111112892.0A)
Authority
CN
China
Prior art keywords
task
scheduler
new
original
executed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111112892.0A
Other languages
Chinese (zh)
Inventor
黄湘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd
Priority claimed from CN202111112892.0A
Publication of CN113946417A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing

Abstract

The application discloses a distributed task execution method and a related apparatus and device. The method includes: in response to a timer starting, starting a new scheduler, and determining, via the new scheduler, whether the current task has been fully executed by the original scheduler; if the current task has not been fully executed by the original scheduler, scheduling the current task jointly via the new scheduler and the original scheduler; and if the current task has been fully executed by the original scheduler, determining, via the original scheduler, a new task from a plurality of distributed tasks using a distributed lock, and executing the new task via an executor. This scheme can improve the execution efficiency of distributed tasks.

Description

Distributed task execution method, related device and equipment
Technical Field
The present application relates to the field of distributed task technologies, and in particular, to a method for executing a distributed task, and a related apparatus and device.
Background
A distributed task may be composed of multiple independent or interdependent subtasks. Its completion state depends entirely on the completion states of those subtasks: the distributed task is complete only when all of its subtasks are complete.
At present, distributed tasks are mostly executed in a centralized manner; that is, they are scheduled by a centralized scheduling platform and are generally executed in the form of scripts.
Such a scheme cannot be combined with distributed services, which wastes resources, and the complex structure of the multiple deployment schemes required for the services reduces the execution efficiency of distributed tasks.
Disclosure of Invention
The application provides a distributed task execution method and a related apparatus and device, to solve the prior-art problem of low execution efficiency of distributed tasks.
The application provides a distributed task execution method, including: in response to a timer starting, starting a new scheduler, and determining, via the new scheduler, whether the current task has been fully executed by the original scheduler; if the current task has not been fully executed by the original scheduler, scheduling the current task jointly via the new scheduler and the original scheduler; and if the current task has been fully executed by the original scheduler, determining, via the original scheduler, a new task from a plurality of distributed tasks using a distributed lock, and executing the new task via an executor.
After the step of scheduling the current task jointly via the new scheduler and the original scheduler, the method further includes: in response to the number of schedulers exceeding a preset number, adding a new container service.
The distributed task execution method further includes: in response to the current task being fully executed before a preset time is reached, closing one scheduler and determining whether the number of remaining schedulers exceeds the preset number; if the number of remaining schedulers does not exceed the preset number, removing one container service.
The step of scheduling the current task jointly via the new scheduler and the original scheduler includes: scheduling, via the new scheduler and the original scheduler respectively, their corresponding executors to execute the current task.
The step of starting a new scheduler in response to the timer starting, and determining via the new scheduler whether the current task has been fully executed by the original scheduler, includes: starting a new scheduler in response to the timer starting, and determining, via the new scheduler based on a scheduling rule, whether any subtask that needs to be executed remains in the task queue of the current task; if no subtask that needs to be executed remains in the task queue of the current task, determining that the current task has been fully executed by the original scheduler; and if a subtask that needs to be executed remains in the task queue of the current task, determining that the current task has not been fully executed by the original scheduler.
The step of determining, via the original scheduler, a new task from the plurality of distributed tasks using a distributed lock includes: determining, via the original scheduler, the task that obtains the creation permission from the plurality of distributed tasks using a Redis distributed lock; and determining the task that obtains the creation permission as the new task.
The step of determining, via the original scheduler, a new task from the plurality of distributed tasks using a distributed lock and executing the new task via the executor further includes: writing each subtask corresponding to the new task into a task queue; and acquiring each subtask from the task queue via the original scheduler and invoking the executor corresponding to the original scheduler to execute each subtask, until no subtask that needs to be executed remains in the task queue.
The step of writing each subtask corresponding to the new task into the task queue includes: initializing the new task according to a preset rule to obtain the subtasks corresponding to the new task; writing each subtask into a task queue stored as a Redis list structure; and, in response to all subtasks having been written into the task queue, invoking an Event to fire the task-start event.
The step of acquiring each subtask from the task queue via the original scheduler and invoking the executor corresponding to the original scheduler to execute each subtask, until no subtask that needs to be executed remains in the task queue, includes: sequentially acquiring each subtask from the task queue under a FIFO rule via the original scheduler, invoking the executor to execute each subtask, and invoking an Event to fire the task-execution event.
The above step further includes: in response to the original scheduler acquiring a subtask from the task queue, determining via the Redis LLEN command whether any subtask that needs to be executed remains in the task queue; if so, continuing to acquire each subtask from the task queue via the original scheduler and invoking the corresponding executor to execute it, until no subtask that needs to be executed remains; and if not, invoking an Event to fire the task-end event after the currently acquired subtask has been executed.
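The dispatch loop described above can be sketched as follows. This is a minimal, illustrative Python sketch: an in-memory `deque` stands in for the Redis list, and the names `Dispatcher`, `executor`, and `fire_event` are assumptions for illustration, not terms from the patent's implementation.

```python
from collections import deque

class Dispatcher:
    """Illustrative dispatcher: drains a FIFO task queue and fires lifecycle events."""

    def __init__(self, queue, executor, fire_event):
        self.queue = queue            # stands in for a Redis list
        self.executor = executor      # callable that executes one subtask
        self.fire_event = fire_event  # callable that fires a named Event

    def run(self):
        self.fire_event("task_start")
        while len(self.queue) > 0:          # analogue of the Redis LLEN check
            subtask = self.queue.popleft()  # analogue of Redis LPOP (FIFO order)
            self.executor(subtask)
            self.fire_event("task_executed")
        self.fire_event("task_end")

# usage: execute three subtasks in FIFO order
executed, events = [], []
queue = deque(["sub1", "sub2", "sub3"])
Dispatcher(queue, executed.append, events.append).run()
```

A real implementation would issue LLEN/LPOP against a Redis server rather than a local deque, but the control flow (check length, pop, execute, fire the end event once the queue is drained) is the same.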
The present application further provides a distributed task execution apparatus, including: a judging module, configured to start a new scheduler in response to the timer starting, and to determine via the new scheduler whether the current task has been fully executed by the original scheduler; a scheduling module, configured to schedule the current task jointly via the new scheduler and the original scheduler if the current task has not been fully executed by the original scheduler; and an execution module, configured to determine, via the original scheduler, a new task from the plurality of distributed tasks using a distributed lock and to execute the new task via the executor, if the current task has been fully executed by the original scheduler.
The present application further provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement any of the above-mentioned methods for performing distributed tasks.
The present application also provides a computer-readable storage medium having stored thereon program instructions which, when executed by a processor, implement any of the above distributed task execution methods.
According to the above scheme, when the timer starts, a new scheduler is started and determines whether the current task has been fully executed by the original scheduler. If not, the new scheduler and the original scheduler schedule the current task together, and the added scheduler improves the execution efficiency and speed of the current task. If the current task has been fully executed, the original scheduler determines a new task from the plurality of distributed tasks using a distributed lock and the executor executes the new task. Because the distributed lock determines which task is created and executed, task creation and execution do not depend on any distributed micro-service architecture and become simpler, which further improves the execution efficiency of distributed tasks.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for performing distributed tasks according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a method for performing distributed tasks according to the present application;
FIG. 3 is a block diagram of an embodiment of a corresponding distributed system in the embodiment of FIG. 2;
FIG. 4 is a block diagram of an embodiment of a distributed task execution apparatus according to the present application;
FIG. 5 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 6 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. Further, "a plurality of" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a method for executing distributed tasks according to the present application.
Step S11: in response to the timer starting, starting a new scheduler, and determining via the new scheduler whether the current task has been fully executed by the original scheduler.
In response to the timer (Scheduler) starting, a new scheduler (Dispatcher) is started, and the new scheduler determines whether the current task has been fully executed by the original scheduler. The current task is the distributed task currently being executed by the original scheduler; only one distributed task can be executed at any given time. At least one original scheduler executes the current task, for example 1, 2, or 5, which is not limited herein.
The timer (Scheduler) may be configured with a start period or a specific start time for the scheduler, and the timer is started cyclically based on that configuration. For example, the timer may be configured to start once every 10 seconds, once every 20 seconds, or at a specific time such as 9:00 on August 18. This may be set based on actual requirements and is not limited herein.
The timer (Scheduler) of the present embodiment may be implemented with Spring's @Scheduled timer, a Java timer, or a timer from another technology.
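As a concrete illustration of the decision made in step S11 each time the timer fires, consider this minimal Python sketch (an in-memory queue stands in for the task queue of the current task; the function and return values are assumptions for illustration, not the patent's API):

```python
from collections import deque

def on_timer_tick(task_queue):
    """Each timer tick starts a new dispatcher, which first checks whether the
    original dispatchers have finished the current task (i.e. the queue is empty)."""
    if len(task_queue) > 0:
        return "schedule_together"  # step S12: new + original schedulers cooperate
    return "create_new_task"        # step S13: contend for the lock, create a new task

# usage
busy_queue = deque(["subtask"])
idle_queue = deque()
```

In a Spring deployment this decision would live inside a method annotated with @Scheduled; the sketch only shows the branch between steps S12 and S13.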
The scheduler (Dispatcher) may include the scheduling rules for tasks, for example: the task initialization method, the task content, the number of subtasks corresponding to the task, and the execution method of the task. It may determine the related scheduling of the task (Task), the task queue (Queue), the executor (Executor), and the event (Event).
Step S12: if the current task has not been fully executed by the original scheduler, scheduling the current task jointly via the new scheduler and the original scheduler.
If the new scheduler judges that the current task has not been fully executed by the original scheduler, the new scheduler and the original scheduler corresponding to the current task schedule the current task together. In other words, if the current task is not yet fully executed, a new scheduler is added to help schedule it.
In a specific application scenario, when the new scheduler and the original scheduler schedule the current task together, a modulo split may be performed on the current task to divide it into a plurality of subtasks. The subtasks are then allocated to the new scheduler and the original scheduler according to a FIFO rule, so that each scheduler schedules its corresponding executor to execute the allocated subtasks, and the current task is thereby scheduled jointly.
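The modulo splitting described above can be illustrated as follows. This is a hedged sketch; the function name and shard layout are assumptions, not the patent's exact algorithm:

```python
def split_by_modulo(subtasks, num_schedulers):
    """Assign subtask i to scheduler i % num_schedulers, preserving FIFO order
    within each scheduler's share."""
    shares = [[] for _ in range(num_schedulers)]
    for i, subtask in enumerate(subtasks):
        shares[i % num_schedulers].append(subtask)
    return shares

# usage: five subtasks split across an original and a new scheduler
shares = split_by_modulo(["s0", "s1", "s2", "s3", "s4"], 2)
```

Each scheduler then feeds its share to its own executor, so the two shares can run in parallel while the FIFO order within each share is preserved.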
In a specific application scenario, when the new scheduler and the original scheduler schedule the current task together, the current task may instead be divided at fixed positions based on marker data in the program data corresponding to the current task, so as to divide it into a plurality of subtasks. The subtasks are then allocated to the new scheduler and the original scheduler according to a FIFO rule, so that each scheduler schedules its corresponding executor to execute the allocated subtasks. The marker data in the program data may be a letter such as "a", a punctuation mark such as ";", or a word such as "in"; the specific marker data may be set based on the actual situation and is not limited herein.
In a specific application scenario, when the new scheduler and the original scheduler schedule the current task together, each of them may also execute the current task redundantly, with the fastest scheduler selected as the scheduler of the current task and the other schedulers stopped after the selection. Selecting the most efficient scheduler in this way improves the execution efficiency of the task.
In a specific application scenario, when the current task is currently executed by one scheduler, the timer starts, and the new scheduler is started and determines that the current task has not been fully executed, the new scheduler and the original scheduler schedule the current task together; that is, the current task is now scheduled by two schedulers. Likewise, when the current task is currently executed by two schedulers and the newly started scheduler determines that it has not been fully executed, three schedulers schedule the current task together.
In a specific application scenario, when a scheduler schedules the current task, the scheduler may acquire the current task and then send it to the executor, which specifically executes it.
In a specific application scenario, when a scheduler is added to schedule the current task, the executors required in the scheduling process can be added at the same time, so that the current task is executed by the original executor and the added executor based on the scheduling of their respective schedulers, improving the execution efficiency and speed of the current task.
In a specific application scenario, when a new scheduler is added to schedule the current task, a further set of scheduler scheduling logic is added to schedule the current task together with the scheduling logic of the original scheduler, improving the execution efficiency and speed of the current task.
Step S13: if the current task has been fully executed by the original scheduler, determining, via the original scheduler, a new task from the plurality of distributed tasks using a distributed lock, and executing the new task via the executor.
When the timer starts and the new scheduler is started, and the new scheduler judges that the current task has been fully executed, the new scheduler exits. The original scheduler that executed the original current task then determines a new task from the plurality of distributed tasks using a distributed lock, and the new task is executed by the executor corresponding to the original scheduler. A distributed lock is a mechanism for controlling synchronized access to shared resources among distributed systems; it prevents interference and ensures consistency. Here it ensures that only one and the same task can be executed at any time in the distributed application cluster.
When the original scheduler determines a new task from the plurality of distributed tasks using a distributed lock, there is no need to deploy a centralized scheduling platform to determine the new task. Task creation and execution therefore do not depend on any distributed micro-service architecture and become simpler.
In this way, when the timer starts, the distributed task execution method of this embodiment starts a new scheduler to judge whether the current task has been fully executed by the original scheduler. If not, the current task is scheduled by the new scheduler and the original scheduler together, and the added scheduler improves the execution efficiency and speed of the current task. If so, the original scheduler determines a new task from the plurality of distributed tasks using a distributed lock and the executor executes it; the creation and execution of the new task are thus determined by the distributed lock, do not depend on any distributed micro-service architecture, and become simpler, further improving the execution efficiency of distributed tasks.
Referring to fig. 2-3, fig. 2 is a schematic flow chart diagram illustrating another embodiment of a distributed task execution method according to the present application. Fig. 3 is a block diagram of an embodiment of a corresponding distributed system in the embodiment of fig. 2. The distributed system in this embodiment is described with two schedulers and their corresponding actuators, and when there are multiple or one scheduler, the framework is similar to that in this embodiment, and will not be described again here.
The distributed system 30 of the present embodiment is used to implement the distributed task execution method of any of the above embodiments, and includes a timer (Scheduler) 31, an executor (Executor) 32, a scheduler (Dispatcher) 36, an event (Event) 35, a task queue (Queue) 33, and a task (Task) 34.
The timer 31 may be configured with a start period or a specific start time for the scheduler 36, and is started cyclically based on that timing rule to start the scheduler 36. The scheduler 36 is configured with the scheduling rules of the tasks 34; it includes an original scheduler 361 and a new scheduler 362, each of which may be independently configured with all the scheduling rules of the scheduler 36 and can realize all of its functions. Each task 34 corresponds to one task queue 33. The task queue 33 may contain the specific content of the corresponding task 34, each value in the task queue 33 being a subtask of the task 34. The executor 32 includes an original executor 321 and a new executor 322, where the original executor 321 corresponds to the original scheduler 361 and the new executor 322 corresponds to the new scheduler 362; the executor 32 specifically executes the subtasks corresponding to the task 34. The event 35 handles events such as the creation, execution, and termination of the task 34.
Specifically, the timer 31 starts the scheduler 36 based on the timing rule, and the scheduler 36 initializes the task queue 33, fetches tasks 34, and fetches subtasks from the task queue 33. The task 34 generates the task queue 33 and the subtasks therein. The scheduler 36 also receives start information from the timer 31 and notifies the executor 32 to execute the task 34 by executing the subtasks in the task queue 33, and the scheduler 36 also notifies the event 35 to fire task events.
The specific implementation steps of the distributed system 30 are as follows:
step S21: and responding to the start of the timer, starting a new scheduler, and judging whether the current task is completely executed by the original scheduler through the new scheduler.
The timer 31 is started cyclically according to the configured timing rule. When the timer 31 starts, the new scheduler 362 is started and executes its corresponding scheduling rule, which includes the rule logic for creating and executing tasks.
In a specific application scenario, in response to the timer 31 starting, the new scheduler 362 is started and can determine, based on the scheduling rule, whether the current task has been fully executed by the original scheduler 361. The current task is the task currently being executed by the distributed system.
In one particular application scenario, a new scheduler 362 is started in response to the timer 31 starting. The new scheduler 362 may determine, based on the scheduling rule, whether any subtask that needs to be executed remains in the task queue 33 of the current task. If no such subtask remains in the task queue 33 of the current task 34, it is determined that the current task 34 has been fully executed by the original scheduler 361; if such a subtask remains, it is determined that the current task 34 has not been fully executed by the original scheduler 361.
Step S22: if the current task has not been fully executed by the original scheduler, scheduling the current task jointly via the new scheduler and the original scheduler, and, in response to the number of schedulers exceeding the preset number, adding a new container service.
When the new scheduler 362 determines that the current task 34 has not been fully executed by the original scheduler 361, the new scheduler 362 and the original scheduler 361 schedule the current task 34 together. In a specific application scenario, when the current task 34 is scheduled by both, a new executor 322 corresponding to the new scheduler 362 may be added at the same time, so that the original executor 321 and the new executor 322 execute the current task 34 together, improving its execution efficiency and speed.
In a specific application scenario, when the original scheduler 361 is currently scheduling the original executor 321 to execute the current task 34, the timer 31 starts, and the new scheduler 362 is started and determines that the current task 34 has not been fully executed by the original scheduler 361, the new scheduler 362 and the original scheduler 361 schedule the current task 34 together; that is, they schedule the new executor 322 and the original executor 321 to execute the current task 34 together.
When a plurality of schedulers schedule the current task together, each scheduler may schedule its corresponding executor to execute the current task. In a specific application scenario, each scheduler may fetch a subtask of the current task from the task queue and schedule its corresponding executor to execute it. The allocation rule by which each scheduler fetches subtasks from the task queue may be based on the execution state of the executor corresponding to each scheduler, the space required for executing the subtasks, and/or the order in which the subtasks are arranged. Please refer to step S25 for the specific allocation rule of the subtasks.
In response to a scheduler 36 being added, it is determined whether the number of schedulers 36 exceeds a preset number; if so, a container service is added. The preset number may be, for example, 3, 5, or 6, and may be set based on actual needs, which is not limited herein. The container service carries the schedulers 36; when the number of schedulers 36 on one container service exceeds the preset number, one container service is added to raise the upper limit on the number of schedulers 36 that can be started, so that the execution efficiency of the task can be further increased by adding schedulers 36.
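The scale-out rule above can be sketched as a ceiling division over a per-container scheduler limit. This is illustrative only; the limit value and the function name are assumptions, not from the patent:

```python
def containers_needed(num_schedulers, per_container_limit):
    """Add a container service whenever the scheduler count exceeds what the
    current container services can carry (ceiling division, minimum one)."""
    return max(1, -(-num_schedulers // per_container_limit))

# usage with an assumed preset limit of 3 schedulers per container service
limit = 3
before = containers_needed(3, limit)  # limit not exceeded
after = containers_needed(4, limit)   # a fourth scheduler triggers a new container
```

The same function drives scale-in (step S23): recompute the container count after a scheduler is closed and remove a container service if fewer are needed.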
Step S23: in response to the current task being fully executed before the preset time is reached, closing one scheduler and determining whether the number of remaining schedulers exceeds the preset number; if it does not, removing one container service.
When the current task has been fully executed and the current time has not reached the preset time, one scheduler is closed, and it is determined whether the number of remaining schedulers exceeds the preset number; if it does not, one container service is removed, avoiding resource waste. The preset time may be, for example, between 3/4 and (n-1)/n of the timer start interval, where n is any positive integer, or another time period, which may be set based on the actual situation and is not limited herein.
In a specific application scenario, when the subtask currently fetched from the task queue 33 has been executed by the executor 32, the scheduler 36 may check at a preset frequency whether the current task 34 has been fully executed. If it has, it may be determined whether the interval between the completion time of the current task 34 and the current time is less than the preset time; if so, one scheduler 36 is closed. The preset frequency may be once per second, once every 3 seconds, or once every 6 seconds, and may be set based on the actual situation, which is not limited herein.
In a specific application scenario, it may be determined whether the interval between the completion time of the current task 34 and the current time is less than 3/4 to (n-1)/n of the start interval of the timer 31; if so, one scheduler 36 is closed. The scheduler 36 to close may be selected according to the start time of each scheduler 36, for example: the scheduler 36 with the longest boot time is shut down.
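The scale-in choice (closing the scheduler with the longest boot time) can be sketched as follows; the mapping shape and names are illustrative assumptions:

```python
def pick_scheduler_to_close(start_times):
    """Choose the scheduler that has been running longest, i.e. the one with
    the earliest start timestamp. `start_times` maps scheduler id -> timestamp."""
    return min(start_times, key=start_times.get)

# usage: scheduler "b" started earliest, so it is the one shut down
chosen = pick_scheduler_to_close({"a": 30.0, "b": 5.0, "c": 20.0})
```

Closing the oldest scheduler first is one reasonable tie-breaking policy consistent with the example in the text; other selection rules over the start times would also fit the described mechanism.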
Step S24: if the current task has been fully executed, determining, via the original scheduler using a Redis distributed lock, the task that obtains the creation permission from the plurality of distributed tasks, and determining that task as the new task.
When the new scheduler 362 determines that the current task 34 has been fully executed, the new scheduler 362 exits. The original scheduler 361 then determines the task that obtains the creation permission from the plurality of distributed tasks using a Redis distributed lock, and that task is determined as the new task.
In a specific application scenario, when the new scheduler 362 determines that no subtask needing to be executed remains in the task queue 33 of the current task 34, it may further check whether there is a new task to be created and executed. If no new task has reached its creation time, the new scheduler 362 terminates and the flow waits for the timer 31 to start again. If at least one new task has reached its creation time, the new task can be created and scheduled jointly by the new scheduler 362 and the original scheduler 361.
Specifically, when a new task is to be created, the plurality of distributed tasks each initiate a request for the task-creation permission, and the task that obtains the creation permission is created and executed. In a specific application scenario, the distributed tasks may compete for the lock using the Redis SET/SETNX commands; locks provided by third-party clients such as Jedis, Redisson, and Lettuce may also be used to compete for the Redis distributed lock. When a distributed task wins the lock of the Redis distributed lock, it obtains the creation permission, and that distributed task is the new task to be created.
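The SETNX-based contention can be sketched as follows. In production this would be a redis-py call such as `r.set(key, value, nx=True, ex=ttl)` against a real Redis server; here a minimal in-memory stand-in (an assumption for the example, along with the key name `task:create:lock`) illustrates that exactly one contender acquires the creation permission.

```python
class FakeRedis:
    """In-memory stand-in for the single Redis key used as the lock;
    set(..., nx=True) mimics SETNX: write only if the key is absent."""
    def __init__(self):
        self.store = {}

    def set(self, key, value, nx=False):
        if nx and key in self.store:
            return None          # lock already held -> acquisition fails
        self.store[key] = value
        return True

def try_acquire_creation_permission(redis, task_id: str) -> bool:
    """Each distributed task calls this; only the first caller obtains the
    creation permission, as with SET key value NX in real Redis."""
    return redis.set("task:create:lock", task_id, nx=True) is True

r = FakeRedis()
winners = [t for t in ("task-1", "task-2", "task-3")
           if try_acquire_creation_permission(r, t)]
assert winners == ["task-1"]     # exactly one task wins the lock
```

A production version would also set an expiry (`ex=`) so a crashed holder cannot block creation forever, which is why the text mentions battle-tested clients such as Redisson as an alternative to hand-rolled SETNX.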
A distributed task that fails to acquire the lock waits for the timer 31 to start again, looping back to step S21.
Step S25: and writing each subtask corresponding to the new task into the task queue, acquiring each subtask from the task queue through the original scheduler, and calling an actuator corresponding to the original scheduler to execute each subtask until no subtask needing to be executed exists in the task queue.
After the new task to be created is determined, each subtask corresponding to the new task can be initialized according to a preset rule. Here, initialization is the process of dividing the new task into a corresponding plurality of subtasks. The preset rule may include modulo slicing of the task or a fixed subtask mapping (e.g., 1→c, 2→b, 3→e), and the like, and is not limited herein.
In a specific application scenario, the fixed-subtask manner may include dividing the current task at fixed positions based on symbolic data in the program data corresponding to the task, thereby dividing the current task into a plurality of subtasks. The modulo-slicing manner may include taking a modulus over the program data corresponding to the task and, when the modulus value is a fixed value or follows a certain rule, splitting the program data at the positions corresponding to that modulus value, thereby dividing the current task into a plurality of subtasks. A certain rule for the modulus value may be, for example, "1, 2, 3, ..., 9" or another rule.
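One common reading of "modulo slicing" can be sketched as follows: work items are assigned to subtasks by index modulo the subtask count. This is an illustrative interpretation, not the patent's exact rule; the function name and the index-based assignment are assumptions.

```python
def split_by_modulo(items, num_subtasks: int):
    """Divide a task's work items into subtasks by modulo:
    item at index i is assigned to subtask i % num_subtasks."""
    subtasks = [[] for _ in range(num_subtasks)]
    for i, item in enumerate(items):
        subtasks[i % num_subtasks].append(item)
    return subtasks

shards = split_by_modulo(list(range(10)), 3)
assert shards == [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

Modulo slicing spreads items evenly without inspecting their content, whereas the fixed-subtask manner described above splits on known markers in the program data.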
After initialization, each subtask is written into the task queue 33, which is a list structure of the Redis distributed lock; list is a storage structure of Redis. In response to all subtasks being written into the task queue 33, the scheduler 36 invokes the Event 35 to execute a task start event. The subtasks are pushed in sequence using the Rpush command of the Redis distributed lock and popped using the Lpop command when a subtask is acquired. The Rpush command inserts one or more subtasks at the tail of the task queue 33, and the Lpop command removes and returns the first subtask of the task queue 33.
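With redis-py these operations would be `r.rpush`, `r.lpop`, and `r.llen` on a list key; the following in-memory sketch (the class and method names mirror the Redis commands but are otherwise assumptions) shows the FIFO semantics the text relies on.

```python
from collections import deque

class ListQueue:
    """Mirrors the Redis list commands used for the task queue:
    rpush appends at the tail, lpop removes from the head (FIFO),
    llen returns the current length."""
    def __init__(self):
        self._q = deque()

    def rpush(self, *subtasks):
        self._q.extend(subtasks)
        return len(self._q)

    def lpop(self):
        return self._q.popleft() if self._q else None

    def llen(self):
        return len(self._q)

q = ListQueue()
q.rpush("sub-1", "sub-2", "sub-3")
assert q.lpop() == "sub-1"      # head element first: FIFO order
assert q.llen() == 2
```

Because pushes go to the tail and pops come from the head, subtasks are executed in exactly the order they were written, which is what makes the FIFO scheduling in the following paragraphs work.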
The scheduler 36 acquires each subtask from the task queue 33 and schedules the executor 32 to execute it, until no subtask needing to be executed remains in the task queue 33. In a specific application scenario, a subtask may be retrieved from the task queue 33 by the scheduler 36 based on the FIFO (first-in, first-out) rule, executed by the executor 32, and the task execution event 35 invoked.
In a specific application scenario, when the new scheduler 362 and the original scheduler 361 schedule the current task together, the subtasks corresponding to the current task have already been written into the task queue 33. The task queue 33 then allocates subtasks to the new scheduler 362 or the original scheduler 361 according to the FIFO (first-in, first-out) rule, and the timing at which either scheduler obtains a subtask from the task queue 33 is determined by the time that scheduler finished executing its previous subtask and by its own start time.
Specifically, once started, a scheduler acquires a subtask from the task queue: the task queue allocates one subtask to the scheduler according to the FIFO rule, the scheduler schedules the executor to execute it, and after that subtask is executed the scheduler acquires the next one from the task queue. In this way, each scheduler acquires subtasks from the task queue according to the time it finished executing the previous subtask and its own start time.
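The two-scheduler pull loop can be sketched as follows. This is a simplified model: in reality each scheduler pulls whenever its previous subtask finishes, so the interleaving depends on execution times; here every subtask is assumed to take equal time, which makes the pulls alternate. The function and scheduler names are illustrative.

```python
from collections import deque

def run_schedulers(subtasks, scheduler_names):
    """Each scheduler repeatedly takes the next subtask from the shared
    FIFO queue as soon as it finishes its previous one; with equal subtask
    durations the pulls simply alternate between the schedulers."""
    queue = deque(subtasks)
    assignment = []
    i = 0
    while queue:
        name = scheduler_names[i % len(scheduler_names)]
        assignment.append((name, queue.popleft()))
        i += 1
    return assignment

out = run_schedulers(["s1", "s2", "s3", "s4"], ["original", "new"])
assert out == [("original", "s1"), ("new", "s2"),
               ("original", "s3"), ("new", "s4")]
```

The key property is that the queue, not the schedulers, serializes assignment: no subtask is ever handed to two schedulers, so adding the new scheduler speeds up the task without coordination beyond the shared FIFO pop.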
In a specific application scenario, when subtasks to be executed exist in the task queue 33, each scheduler acquires subtasks from the task queue 33 using the Lpop command of the Redis distributed lock; if no subtask is acquired, the scheduler checks whether there is a new task to be created and executed, i.e., step S24.
During subtask execution, in response to a subtask being obtained from the task queue 33, whether any subtask still needing to be executed remains in the task queue 33 is judged via the Llen command of the Redis distributed lock. If such subtasks exist, the step of sequentially acquiring subtasks from the task queue 33 and executing them through the executor 32 continues until no subtask needing to be executed remains in the task queue 33. If no such subtask exists, then after the currently acquired subtask is executed, the event 35 is invoked to execute a task end event.
The Llen command of Redis returns the length of the task queue 33; if the list key does not exist, the key is interpreted as an empty list and 0 is returned. In a specific application scenario, in response to acquiring a subtask from the task queue 33, Llen is used to judge whether any subtask needing to be executed remains. If Llen returns 0, the acquired subtask is the last one needing to be executed in the task queue 33, and once it has been executed, the event 35 is invoked to execute the task end event. If the value returned by Llen is greater than 0, subtasks continue to be acquired from the task queue 33 via Lpop and executed by the executor 32.
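The Lpop/Llen drain loop just described can be sketched as follows, with a plain deque standing in for the Redis list and the callback names (`execute`, `on_end`) introduced for the example.

```python
from collections import deque

def drain(queue: deque, execute, on_end):
    """Scheduler loop per the description: pop a subtask (Lpop), check the
    remaining length (Llen); if it is 0, the popped subtask is the last one,
    so after executing it the task end event is fired."""
    while True:
        sub = queue.popleft() if queue else None   # Lpop: None when empty
        if sub is None:
            break
        remaining = len(queue)                     # Llen after the pop
        execute(sub)
        if remaining == 0:
            on_end()                               # task end event
            break

done, events = [], []
drain(deque(["a", "b"]), done.append, lambda: events.append("end"))
assert done == ["a", "b"] and events == ["end"]
```

Checking Llen immediately after the pop is what lets the last executor fire the end event exactly once: the worker that pops the final subtask sees a length of 0 while every earlier worker sees a positive length.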
When all the subtasks are completed, i.e., the new task is completed, the flow waits for the timer to start again, looping back to step S21.
Through the above steps, the distributed task execution method of this embodiment proceeds as follows. When the timer starts and the current task has not been completely executed, the new scheduler and the original scheduler schedule the current task together; adding the new scheduler improves the execution efficiency and speed of the current task, and when the number of schedulers exceeds the preset number, a new container service is added to raise the upper limit on the number of schedulers that can be started, further improving the execution efficiency of each executor. When the current task has been completely executed and the time for the timer to restart has not yet arrived, one scheduler is closed, and when the number of schedulers does not exceed the preset number, one container service is reduced. Capacity can therefore be expanded and contracted based on the execution state of the task, which strengthens the execution capability of the executors, accelerates task execution, and avoids low execution efficiency, long execution times, and wasted resources during peak periods. Because the distributed lock determines the creation and execution of new tasks, task creation and execution do not depend on any distributed microservice architecture, making them simpler and further improving the execution efficiency of distributed tasks; avoiding script-based execution of distributed tasks also reduces the waste of service deployment resources.
Referring to fig. 4, fig. 4 is a block diagram illustrating an embodiment of a distributed task execution apparatus according to the present application. The distributed task execution device 40 includes a judgment module 41, a scheduling module 42, and an execution module 43. A judging module 41, configured to start a new scheduler in response to the start of the timer, and judge whether the current task is executed by the original scheduler through the new scheduler; the scheduling module 42 is configured to schedule the current task together with the original scheduler through the new scheduler if the current task is not executed by the original scheduler; and the execution module 43 is used for determining a new task from the plurality of distributed tasks by using the distributed locks through the original scheduler and executing the new task through the executor if the current task is executed by the original scheduler.
The scheduling module 42 is further configured to add a new container service in response to the number of schedulers exceeding the preset number.
The scheduling module 42 is further configured to close one scheduler in response to the current task being completely executed and the preset time not being reached, and to judge whether the number of remaining schedulers exceeds the preset number; if the number of remaining schedulers does not exceed the preset number, one container service is reduced.
The scheduling module 42 is further configured to schedule the corresponding executors to execute the current task through the new scheduler and the original scheduler, respectively.
The judging module 41 is further configured to start a new scheduler in response to the start of the timer, and judge whether there is a subtask that needs to be executed in the task queue of the current task through the new scheduler based on the scheduling rule; if the subtask to be executed does not exist in the task queue of the current task, determining that the current task is completely executed by the original scheduler; and if the subtasks needing to be executed exist in the task queue of the current task, determining that the current task is not completely executed by the original scheduler.
The execution module 43 is further configured to determine, by the original scheduler, a task having a creation authority from among the plurality of distributed tasks using a Redis distributed lock; and determining the task with the creation authority as a new task.
The execution module 43 is further configured to write each subtask corresponding to the new task into the task queue; and acquiring each subtask from the task queue through the original scheduler, and calling an actuator corresponding to the original scheduler to execute each subtask until no subtask to be executed exists in the task queue.
The execution module 43 is further configured to initialize the new task according to a preset rule to obtain each subtask corresponding to the new task; writing each subtask into a task queue of a list structure of the Redis distributed lock; in response to all subtasks being written to the task queue, an Event is invoked to execute a task start Event.
The execution module 43 is further configured to sequentially obtain each subtask from the task queue through the original scheduler by using the FIFO rule, call the executor to execute each subtask, and call the Event to execute the task execution Event.
The execution module 43 is further configured to respond to that the original scheduler acquires a subtask from the task queue, and determine whether there is a subtask that needs to be executed in the task queue through a llen of the Redis distributed lock; if the task queue has subtasks to be executed, continuing to execute the steps of obtaining each subtask from the task queue through the original scheduler, and calling an actuator corresponding to the original scheduler to execute each subtask until no subtask to be executed exists in the task queue; and if the subtask which needs to be executed does not exist in the task queue, after the currently acquired subtask is executed, calling the Event to execute the task ending Event.
According to the scheme, the execution efficiency of the distributed tasks can be improved.
Referring to fig. 5, fig. 5 is a schematic diagram of a frame of an embodiment of an electronic device according to the present application. The electronic device 50 comprises a memory 51 and a processor 52 coupled to each other, and the processor 52 is configured to execute program instructions stored in the memory 51 to implement the steps of the method for executing distributed tasks according to any of the embodiments described above. In one particular implementation scenario, electronic device 50 may include, but is not limited to: a microcomputer, a server, and the electronic device 50 may also include a mobile device such as a notebook computer, a tablet computer, and the like, which is not limited herein.
In particular, the processor 52 is arranged to control itself and the memory 51 to implement the steps of any of the above described distributed task execution method embodiments. Processor 52 may also be referred to as a CPU (Central Processing Unit). Processor 52 may be an integrated circuit chip having signal processing capabilities. The Processor 52 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, the processor 52 may be commonly implemented by an integrated circuit chip.
According to the scheme, the execution efficiency of the distributed tasks can be improved.
Referring to fig. 6, fig. 6 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 60 stores program instructions 601 capable of being executed by the processor, the program instructions 601 being for implementing the steps of the method of performing the distributed tasks of any of the embodiments described above.
According to the scheme, the execution efficiency of the distributed tasks can be improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (13)

1. A method for executing a distributed task, the method comprising:
starting a new scheduler in response to the start of the timer, and judging whether the current task is completely executed by the original scheduler through the new scheduler;
if the current task is not executed by the original scheduler, scheduling the current task through the new scheduler and the original scheduler together;
and if the current task is completely executed by the original scheduler, determining a new task from a plurality of distributed tasks by using a distributed lock through the original scheduler, and executing the new task through an actuator.
2. The method of claim 1, wherein the step of scheduling the current task by the new scheduler in conjunction with the original scheduler further comprises:
and responding to the fact that the number of the schedulers exceeds the preset number, adding a new container service.
3. The method for executing a distributed task according to claim 1 or 2, further comprising:
in response to the current task being completely executed and the preset time not being reached, closing one scheduler, and judging whether the number of the remaining schedulers exceeds the preset number;
if the number of remaining schedulers does not exceed the preset number, decreasing one container service.
4. The method of claim 1, wherein the step of scheduling the current task by the new scheduler in conjunction with the original scheduler comprises:
and respectively scheduling corresponding actuators to execute the current task through the new scheduler and the original scheduler.
5. The method of claim 1, wherein the step of starting a new scheduler in response to the timer starting and determining whether the current task is completed by the original scheduler by the new scheduler comprises:
starting the new scheduler in response to the starting of the timer, and judging whether a subtask needing to be executed exists in a task queue of the current task through the new scheduler based on a scheduling rule;
if the subtask which needs to be executed does not exist in the task queue of the current task, the current task is determined to be executed by the original scheduler;
and if the subtasks needing to be executed exist in the task queue of the current task, determining that the current task is not executed by the original scheduler.
6. The method of claim 1, wherein the step of determining, by the original scheduler, a new task from a plurality of distributed tasks using a distributed lock comprises:
determining, by the original scheduler, a task having a creation authority from a plurality of distributed tasks using a Redis distributed lock;
determining the task with creation authority as the new task.
7. The method of claim 1, wherein the step of determining a new task from a plurality of distributed tasks by the original scheduler using a distributed lock and executing the new task by an executor further comprises:
writing each subtask corresponding to the new task into a task queue;
and acquiring each subtask from the task queue through the original scheduler, and calling an actuator corresponding to the original scheduler to execute each subtask until no subtask needing to be executed exists in the task queue.
8. The method for executing a distributed task according to claim 7, wherein the step of writing each subtask corresponding to the new task into a task queue comprises:
initializing the new task through a preset rule to obtain each subtask corresponding to the new task;
writing the subtasks into a task queue of a list structure of the Redis distributed lock;
and responding to all the subtasks written into the task queue, and calling an Event to execute a task starting Event.
9. The method for executing distributed tasks according to claim 7, wherein the step of obtaining each subtask from the task queue through the original scheduler and calling an actuator corresponding to the original scheduler to execute each subtask until no subtask to be executed exists in the task queue includes:
and sequentially acquiring each subtask from the task queue by using an FIFO rule through the original scheduler, calling the actuator to execute each subtask, and calling the Event to execute the task execution Event.
10. The method for executing distributed tasks according to claim 7, wherein the step of obtaining each subtask from the task queue through the original scheduler and invoking an actuator corresponding to the original scheduler to execute each subtask until no subtask to be executed exists in the task queue further comprises:
responding to the original scheduler acquiring the subtasks from the task queue, and judging whether the subtasks needing to be executed exist in the task queue through the Llen of the Redis distributed lock;
if the task queue also has subtasks to be executed, continuing to execute the steps of obtaining each subtask from the task queue through the original scheduler, and calling an actuator corresponding to the original scheduler to execute each subtask until no subtask to be executed exists in the task queue;
and if the subtask which needs to be executed does not exist in the task queue, calling an Event to execute the task ending Event after the currently acquired subtask is executed.
11. An apparatus for executing a distributed task, the apparatus comprising:
the judging module is used for responding to the starting of the timer, starting a new scheduler and judging whether the current task is completely executed by the original scheduler or not through the new scheduler;
the scheduling module is used for scheduling the current task through the new scheduler and the original scheduler together if the current task is not executed by the original scheduler;
and the execution module is used for determining a new task from a plurality of distributed tasks by using a distributed lock through the original scheduler and executing the new task through an actuator if the current task is executed by the original scheduler.
12. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement a method of performing a distributed task according to any one of claims 1 to 10.
13. A computer-readable storage medium on which program instructions are stored, which program instructions, when executed by a processor, implement a method of performing a distributed task according to any one of claims 1 to 10.
CN202111112892.0A 2021-09-18 2021-09-18 Distributed task execution method, related device and equipment Pending CN113946417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111112892.0A CN113946417A (en) 2021-09-18 2021-09-18 Distributed task execution method, related device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111112892.0A CN113946417A (en) 2021-09-18 2021-09-18 Distributed task execution method, related device and equipment

Publications (1)

Publication Number Publication Date
CN113946417A true CN113946417A (en) 2022-01-18

Family

ID=79328962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111112892.0A Pending CN113946417A (en) 2021-09-18 2021-09-18 Distributed task execution method, related device and equipment

Country Status (1)

Country Link
CN (1) CN113946417A (en)

Similar Documents

Publication Publication Date Title
Delgado et al. Job-aware scheduling in eagle: Divide and stick to your probes
Buttazzo Rate monotonic vs. EDF: Judgment day
US7925869B2 (en) Instruction-level multithreading according to a predetermined fixed schedule in an embedded processor using zero-time context switching
JP6199477B2 (en) System and method for using a hypervisor with a guest operating system and virtual processor
US8793695B2 (en) Information processing device and information processing method
EP3425502A1 (en) Task scheduling method and device
KR20040068600A (en) A method and a system for executing operating system functions, as well as an electronic device
CN106569891B (en) Method and device for scheduling and executing tasks in storage system
Masrur et al. VM-based real-time services for automotive control applications
CN110888743A (en) GPU resource using method, device and storage medium
Zuberi et al. EMERALDS: a small-memory real-time microkernel
TWI460659B (en) Lock windows for reducing contention
EP1923784A1 (en) Scheduling method, and scheduling device
US9218201B2 (en) Multicore system and activating method
CN111897637B (en) Job scheduling method, device, host and storage medium
KR100791296B1 (en) Apparatus and method for providing cooperative scheduling on multi-core system
KR20150114444A (en) Method and system for providing stack memory management in real-time operating systems
CN115237556A (en) Scheduling method and device, chip, electronic equipment and storage medium
CN114168271A (en) Task scheduling method, electronic device and storage medium
US8225320B2 (en) Processing data using continuous processing task and binary routine
Abeni et al. Adaptive partitioning of real-time tasks on multiple processors
Wu et al. Abp scheduler: Speeding up service spread in docker swarm
US11301304B2 (en) Method and apparatus for managing kernel services in multi-core system
CN113946417A (en) Distributed task execution method, related device and equipment
JP3644042B2 (en) Multitask processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination