CN111290842A - Task execution method and device - Google Patents

Task execution method and device

Info

Publication number
CN111290842A
CN111290842A (application CN201811502731.0A)
Authority
CN
China
Prior art keywords
task
queue
tasks
queues
main
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811502731.0A
Other languages
Chinese (zh)
Inventor
郑红阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201811502731.0A priority Critical patent/CN111290842A/en
Publication of CN111290842A publication Critical patent/CN111290842A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/526Mutual exclusion algorithms

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a task execution method and device, and relates to the field of computer technology. One embodiment of the method comprises: generating task queues according to the dependency relationships among tasks, and ordering the task queues in a main queue, wherein the dependency relationships among the tasks determine the execution order of the tasks within each task queue, and the main queue determines the execution order of the task queues; and cyclically reusing the threads in a thread pool to execute the tasks in the task queues according to the order of the task queues in the main queue. This embodiment resolves the coupling among tasks and thereby the problem of thread blocking: threads are well reused and need not be switched frequently, memory and CPU utilization are high, and threads are no longer created and destroyed frequently, which improves the overall stability of the system and reduces the risk of memory leaks.

Description

Task execution method and device
Technical Field
The invention relates to the technical field of computers, in particular to a task execution method and a task execution device.
Background
In current multi-core program design, either a thread pool or multithreading is used to achieve concurrent execution of programs.
A thread pool suits scenarios where the coupling among tasks is low. If several tasks are causally related, one thread in the pool must wait for the execution results of other threads, so that thread blocks and program execution efficiency drops. For example, if a session is established for each user, the user's operation steps are causally related, and implementing concurrency with a thread pool will leave many threads blocked.
Multithreading adopts the strategy of dedicating one thread to each session. Although this avoids thread blocking under concurrency, it has the following drawbacks:
threads are poorly reused: if a session currently has no task, its thread sits idle and cannot be reused by other sessions;
a large program typically has to handle thousands of sessions simultaneously, so tens of thousands of threads are created; the operating system allocates at least 8 MB of memory (mostly thread stack) for each thread, so the total memory consumption becomes prohibitive;
a typical CPU (central processing unit) currently has at most a few dozen cores; with thousands of threads, the threads must take turns acquiring CPU time slices, so they are switched frequently and trap into the kernel, the cache hit rate drops, and program execution performance degrades;
and frequent creation and destruction of threads reduces the overall stability of the system and increases the risk of memory leaks.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
the thread-pool scheme leads to thread blocking; the multithreading scheme cannot reuse threads, and with too many threads memory resources are wasted, CPU utilization is low because of frequent switching among threads, and frequent creation and destruction of threads reduces the overall stability of the system and increases the risk of memory leaks.
Disclosure of Invention
In view of this, embodiments of the present invention provide a task execution method and apparatus that resolve the coupling among tasks and thereby the problem of thread blocking: threads are well reused and need not be switched frequently, memory and CPU utilization are high, and threads are no longer created and destroyed frequently, which improves the overall stability and execution performance of the system and reduces the risk of memory leaks.
To achieve the above object, according to an aspect of an embodiment of the present invention, a task execution method is provided.
A task execution method, comprising: generating task queues according to the dependency relationships among tasks, and ordering the task queues in a main queue, wherein the dependency relationships among the tasks determine the execution order of the tasks within each task queue, and the main queue determines the execution order of the task queues; and cyclically reusing the threads in a thread pool to execute the tasks in the task queues according to the order of the task queues in the main queue.
Optionally, the step of generating task queues according to the dependency relationship between tasks, and sorting each task queue in the main queue includes: judging whether the input task has a corresponding task queue according to the dependency relationship among the tasks; if yes, adding the input task to a corresponding task queue; otherwise, generating a new task queue for the input task, and arranging the new task queue in the main queue.
Optionally, the step of determining whether the input task has a corresponding task queue according to the dependency relationships among tasks includes: judging, according to the dependency relationships among tasks, whether the queue on which the input task depends is currently an empty queue; if so, judging whether that empty queue is being executed in a thread: if it is, the input task has a corresponding task queue, and if it is not, the input task has no corresponding task queue; and if the queue on which the input task depends is not currently an empty queue, the input task has a corresponding task queue.
Optionally, before the step of generating a new task queue for the input task, the method includes: locking the empty queue that is not executing in the thread; and, the step of generating a new task queue for the input task includes: locking the main queue and adding the input task to the empty queue that is not executed in the thread to generate the new task queue.
Optionally, the step of cyclically reusing the threads in the thread pool to execute the tasks in the task queues according to the order of the task queues in the main queue includes: when there are idle threads in the thread pool, acquiring as many task queues from the main queue as there are idle threads, in the order of the task queues in the main queue; allocating one idle thread to each acquired task queue to execute each task in that queue; and releasing the thread occupied by a task queue after all of its tasks have been executed, so that the released idle thread goes on to execute the tasks of the remaining task queues in the main queue, until the tasks of all task queues in the main queue have been executed.
According to another aspect of the embodiments of the present invention, there is provided a task performing apparatus.
A task execution device comprising: the queue generating module is used for generating task queues according to the dependency relationship among the tasks and sequencing the task queues in a main queue, wherein the dependency relationship among the tasks determines the execution sequence of the tasks in the task queues, and the main queue determines the execution sequence of the task queues; and the task execution module is used for circularly utilizing the threads in the thread pool and executing the tasks in the task queues according to the sequence of the task queues in the main queue.
Optionally, the queue generating module is further configured to: judging whether the input task has a corresponding task queue according to the dependency relationship among the tasks; if yes, adding the input task to a corresponding task queue; otherwise, generating a new task queue for the input task, and arranging the new task queue in the main queue.
Optionally, the queue generating module includes a determining submodule, configured to: judging whether the queue on which the input task depends is an empty queue at present according to the dependency relationship among the tasks; if so, judging whether the empty queue is being executed in the thread, if so, determining that the input task has a corresponding task queue, and if not, determining that the input task does not have a corresponding task queue; and if the queue on which the input task depends is not an empty queue currently, the input task has a corresponding task queue.
Optionally, the queue generating module includes a locking sub-module, configured to: locking the empty queue that is not executing in the thread; and, the queue generating module further comprises a task queue generating submodule, configured to: locking the main queue and adding the input task to the empty queue that is not executed in the thread to generate the new task queue.
Optionally, the task execution module is further configured to: when idle threads exist in the thread pool, acquiring task queues with the number of the idle threads from the main queue according to the number of the idle threads and the sequence of the task queues in the main queue; allocating an idle thread for each acquired task queue to execute each task in the task queue; and releasing the threads occupied by the task queue after all tasks in one task queue are executed, so that the released idle threads are used for continuously executing the tasks of the rest task queues in the main queue until the tasks of all task queues in the main queue are executed.
According to yet another aspect of an embodiment of the present invention, an electronic device is provided.
An electronic device, comprising: one or more processors; a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the task execution method provided by the present invention.
According to yet another aspect of an embodiment of the present invention, a computer-readable medium is provided.
A computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the task execution method provided by the present invention.
One embodiment of the above invention has the following advantages or beneficial effects: task queues are generated according to the dependency relationships among tasks and ordered in the main queue; the threads in the thread pool are cyclically reused to execute the tasks in the task queues according to the order of the task queues in the main queue. The invention uses dual queues to achieve concurrent execution of coupled tasks in a thread-pool environment: a main queue records all task queues to be executed, and each task queue records its own queued tasks. This resolves the coupling among tasks and thereby the problem of thread blocking: threads are well reused and need not be switched frequently, memory and CPU utilization are high, and threads are no longer created and destroyed frequently, which improves the overall stability and execution performance of the system and reduces the risk of memory leaks.
Further effects of the above optional implementations will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a task execution method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a session processing flow according to one embodiment of the invention;
FIG. 3 is a thread execution flow diagram based on a dual queue mechanism according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a task enqueue process for a non-empty task queue, according to an embodiment of the invention;
FIG. 5 is a diagram illustrating a task enqueue process for an empty queue according to an embodiment of the invention;
FIG. 6 is a task enqueue lock flow diagram according to an embodiment of the invention;
FIG. 7 is a schematic diagram of the main blocks of a task performing device according to an embodiment of the present invention;
FIG. 8 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 9 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
Fig. 1 is a schematic diagram of main steps of a task execution method according to an embodiment of the present invention.
As shown in fig. 1, the task execution method according to the embodiment of the present invention mainly includes steps S101 to S102 as follows.
Step S101: and generating task queues according to the dependency relationship among the tasks, and sequencing the task queues in the main queue.
The dependency relationship among the tasks determines the execution sequence of the tasks in the task queue.
As an example of a dependency relationship between tasks, suppose the execution of task a and task b has the following causal relationship: the execution of task a depends on the execution result of task b; task a and task b are then two tasks with a dependency relationship. The dependency between tasks may be the causal dependency described above, or another kind of dependency, for example a temporal one: task b may only be executed after task a has completely finished. When an embodiment of the invention generates task queues, two or more tasks with any such dependency relationship are placed into the same task queue.
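As an illustrative sketch (not the patent's implementation), this grouping step can be expressed in a few lines of Python. The `key` function here is an assumed helper that maps a task to its dependency group, e.g. its session id; tasks sharing a key are mutually dependent and must run in arrival order:

```python
from collections import deque

def build_queues(tasks, key):
    """Group tasks into per-dependency task queues ordered in a main queue."""
    main_queue = deque()   # determines the execution order of the task queues
    task_queues = {}       # group key -> task queue (a deque of tasks)
    for task in tasks:
        k = key(task)
        if k not in task_queues:      # first task of this group:
            task_queues[k] = deque()  # create a new task queue and
            main_queue.append(k)      # mount it at the end of the main queue
        task_queues[k].append(task)   # dependency order = arrival order
    return main_queue, task_queues
```

Because a queue is mounted in the main queue exactly when its first task arrives, the main queue naturally orders task queues by their generation time.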
The main queue determines an execution order of each task queue, and specifically, each task queue is sorted in the main queue, so that each task queue can be sequentially executed in the order in which it is arranged in the main queue when each task queue is executed.
Step S101 specifically includes: judging whether an input task has a corresponding task queue according to the dependency relationships among tasks; if so, adding the input task to its corresponding task queue; otherwise, generating a new task queue for the input task and placing the generated new task queue in the main queue.
The step of judging whether the input task has a corresponding task queue according to the dependency relationship among the tasks may specifically include:
judging, according to the dependency relationships among tasks, whether the queue on which the input task depends is currently an empty queue; if so, judging whether that empty queue is being executed in a thread: if it is, the input task has a corresponding task queue, and if it is not, the input task has no corresponding task queue; and if the queue on which the input task depends is not currently an empty queue, the input task has a corresponding task queue.
Both a queue to which no task has been added since its creation and an emptied task queue are referred to as empty queues. An emptied task queue is a queue, in a thread, whose tasks have all finished executing, leaving no tasks (at this moment the queue still occupies a thread).
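The three-way judgment above can be condensed into a small predicate. The following is a minimal sketch under the assumption that the queue's in-thread state is available as a boolean flag (the patent's atomicity flag):

```python
from collections import deque

def has_corresponding_task_queue(depended_queue, executing_in_thread):
    """Decide whether an input task already has a corresponding task queue.

    depended_queue      -- the queue the input task depends on
    executing_in_thread -- assumed boolean flag: True while a thread still
                           holds this queue
    """
    if len(depended_queue) == 0:
        # An empty queue only counts while its thread is not yet released
        # (an "emptied task queue"); a never-used empty queue does not.
        return executing_in_thread
    # A non-empty queue always corresponds to the input task.
    return True
```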
Taking a session processing scenario as an example, an input task is a task of a certain session, a task queue corresponding to the input task is a task queue of the session, and the task queue of the session may be a non-empty task queue (i.e., there is a task that has not been executed yet in the task queue), or may be an empty task queue.
The queue on which the input task depends may be the queue to which a task having a dependency relationship with the input task belongs. Before that queue enters thread execution it is a non-empty task queue; once it is in a thread, it may be a non-empty task queue or an emptied task queue, depending on how far its tasks have executed. For example, the task queue of the session to which an input task belongs may serve as the queue on which that input task depends.
If no task (including tasks already executed in threads) has a dependency relationship with the input task, the input task corresponds to an empty queue (one to which no task has been added), and that empty queue is where the input task will be placed. An empty queue to which no task has been added may also be called the queue on which the input task depends; for example, the empty queue created when a session is established, before any of the session's tasks have been added, may serve as the queue on which the input task depends.
Combining with the session processing scenario above: if the queue on which the input task depends is currently an empty queue and is executing in a thread, the input task belongs to that session's task queue, the empty queue is an emptied task queue, and the thread has not yet been released. The input task is therefore added to the emptied task queue so that it continues to execute in that thread; when it finishes, if the session has no other newly added task, the thread occupied by the session is released.
If the queue on which the input task depends is currently an empty queue but is not executing in a thread, no task has yet been added under the session to which the input task belongs. The input task is added to the empty queue, generating the session's task queue, and that task queue is placed in the main queue to wait to enter a thread for execution.
If the queue on which the input task depends is not currently an empty queue, the input task belongs to the session's task queue, which is either executing in a thread with tasks still unfinished, or waiting in the main queue to enter a thread. In this case the input task is added to the session's task queue and waits to be executed.
In addition, an atomicity flag must be added to each task queue; this flag identifies whether the task queue is currently being executed in a thread or is currently idle (i.e., waiting to enter a thread for execution).
Before an input task is added to its corresponding task queue, that task queue must be locked. The lock is a mutual exclusion lock, used to guarantee the uniqueness of the task queue's execution. Similarly, before a new task queue is generated for an input task, the empty queue that is not executing in a thread is locked, so that the newly generated task queue holds the mutex and the uniqueness of task queue execution is guaranteed.
The step of generating a new task queue for the input task may specifically include: locking the main queue (with a mutual exclusion lock) and adding the input task to the empty queue that is not executing in a thread, to generate the new task queue. The mutex on the main queue guarantees the uniqueness of the main queue.
The task queues are ordered in the main queue by generation time (the enqueue time of their first task). For example, after a session is established it corresponds to an empty queue (no task added); when the session's first task is received, the task is added to the empty queue (task enqueuing), producing the session's task queue, and the session node is mounted in the main queue, so the session's task queue is placed at the current end of the main queue. If another session node is mounted later, that session's task queue is placed after the current session's task queue.
Step S102: and circularly utilizing the threads in the thread pool, and executing the tasks in the task queues according to the sequence of the task queues in the main queue.
Specifically, when there are idle threads in the thread pool:
according to the number of idle threads and the order of the task queues in the main queue, as many task queues as there are idle threads are taken from the main queue; one idle thread is allocated to each acquired task queue to execute each task in that queue; and the thread occupied by a task queue is released after all tasks in that queue have been executed, so that the released idle thread goes on to execute the tasks of the remaining task queues in the main queue (again taking as many task queues from the main queue, in order, as there are idle threads), until the tasks of all task queues in the main queue have been executed.
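Step S102 might be sketched as follows; this is an illustrative Python sketch, not the patent's implementation. A fixed pool of worker threads repeatedly takes the next task queue from the main queue, drains it completely (so one session's coupled tasks never interleave across threads), and then returns for another queue:

```python
import threading
from collections import deque

def execute_main_queue(main_queue, pool_size, results):
    """Drain every task queue in `main_queue`, in order, using `pool_size`
    threads. Each task is a zero-argument callable; its return value is
    appended to the shared `results` list."""
    main_lock = threading.Lock()   # guards the main queue

    def worker():
        while True:
            with main_lock:
                if not main_queue:
                    return                         # no queues left: thread exits
                task_queue = main_queue.popleft()  # take queues in main-queue order
            while task_queue:                  # one thread drains one whole queue,
                task = task_queue.popleft()    # preserving the dependency order
                results.append(task())
            # thread is now "released" and loops back for the next queue

    threads = [threading.Thread(target=worker) for _ in range(pool_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Note that tasks within one queue always run sequentially on a single thread, while distinct queues may run concurrently on different threads.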
Fig. 2 is a schematic diagram of a session processing flow according to an embodiment of the invention.
In a session processing scenario, there is usually a dependency relationship between tasks in each session, and the task execution method according to the embodiment of the present invention is further described below according to a session processing flow.
As shown in fig. 2, a thread pool is created for processing tasks, and there are a fixed number of threads in the thread pool, and this example includes four threads. A main queue is allocated to record all pending sessions.
Each session is assigned a queue, such as queue 1 through queue 9 in the figure. In fig. 2, queues 1 to 7 are the task queues of sessions 1 to 7, each recording the tasks queued by its session; queues 1 to 4 are task queues being executed in the thread pool, and queues 5 to 7 are task queues to be executed, i.e., the task queues of the sessions queued in the main queue. Queue 8 and queue 9 are empty queues (to which no tasks have yet been added) corresponding to session 8 and session 9, respectively.
Each thread in the thread pool executes the task queue of one session, and a thread is released only after all tasks in that session's task queue have been executed. Once released, an idle thread acquires the next task queue to be executed from the main queue. For example, in fig. 2 the task queues of sessions 5 to 7 are waiting to be executed, and idle threads take them out of the main queue in order and execute them. Assuming there is currently one idle thread, it fetches the task queue of session 5 from the main queue and executes the tasks in queue 5.
The embodiment of the invention adopts a thread execution flow with a dual-queue mechanism: after all tasks of a session have been executed, the thread also checks the main session queue and takes out another session from it for execution. The thread execution flow based on the dual-queue mechanism is shown in fig. 3. As shown in fig. 3, the thread first judges whether there is a session to be executed in the main session queue (i.e., the main queue); if so, it removes that session from the main queue and starts executing it, otherwise it blocks waiting on the main session queue. While executing the tasks in the task queue of the session removed from the main queue, each time a task finishes the thread judges whether any unexecuted task remains in the queue; if so, it takes the next task out of the session queue (i.e., the task queue of the session) and executes it. If the session's task queue has no unexecuted task (i.e., all tasks in the session queue have finished), the thread returns to the step of judging whether the main session queue holds a session to be executed, so as to execute the tasks in the task queues of other sessions.
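The loop of fig. 3, including the branch that blocks waiting on the main session queue, might be sketched with a condition variable. The class and method names below are assumptions for illustration, not taken from the patent:

```python
import threading
from collections import deque

class DualQueueExecutor:
    def __init__(self):
        self.main_queue = deque()          # sessions waiting for a thread
        self.cond = threading.Condition()  # workers block on this when idle
        self.stopped = False

    def submit_queue(self, task_queue):
        with self.cond:
            self.main_queue.append(task_queue)
            self.cond.notify()             # wake one blocked worker

    def stop(self):
        with self.cond:
            self.stopped = True
            self.cond.notify_all()

    def worker(self, results):
        while True:
            with self.cond:
                # fig. 3: no session to execute -> block on the main queue
                while not self.main_queue and not self.stopped:
                    self.cond.wait()
                if not self.main_queue:
                    return                 # stopped and fully drained
                task_queue = self.main_queue.popleft()
            while task_queue:              # execute the session's tasks in order
                results.append(task_queue.popleft()())
            # all tasks done: go back and check the main session queue again
```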
When an input task is received, it is first judged whether any task is already mounted under the task's session, i.e., whether the session already has a non-empty task queue. For example, queues 1 to 7 in fig. 2 are the non-empty task queues of sessions 1 to 7, and the tasks in queues 1 to 7 are the tasks mounted under sessions 1 to 7, respectively. If tasks are already mounted under the session, the input task is simply appended to the session's task queue. Fig. 4 illustrates the task enqueuing process for a non-empty task queue: when the input task is task 4 and task 4 belongs to session 5, task 4 is mounted in the task queue of session 5 and placed after task 3.
If no task is mounted under the session to which the input task belongs, i.e., the queue on which the input task depends is currently an empty queue to which no task has been added, then besides mounting the task into the session's empty queue, the session node must also be mounted into the main queue so that an idle thread can fetch and execute it. Fig. 5 illustrates the task enqueuing process for an empty queue: session 5 corresponds to an empty queue; when task 1 of session 5 is received, task 1 is added to the empty queue to generate the task queue of session 5, and session 5 is then mounted into the main queue, so that its task queue queues in the main queue waiting for an idle thread to fetch and execute it.
If the queue on which the input task depends is currently an emptied task queue, the enqueuing process is simply: the input task is added to the emptied task queue so that it can be executed before the thread occupied by that queue is released.
The dual-queue task processing flow of the embodiment requires a dedicated design of concurrency locks. Specifically, two sets of mutual exclusion locks respectively guarantee the uniqueness of the total session queue (the main queue) and of the task queue of each session. In addition, each session needs an atomicity flag recording whether it is currently being executed in a thread or currently idle (i.e., waiting to enter a thread for execution).
The task enqueue lock flow of the embodiment is shown in fig. 6. First the current queue is locked, where the current queue is the task queue of the session to which the input task belongs, or the empty queue (no task added yet) corresponding to that session; fig. 6 takes the current queue to be the task queue of the session to which the input task belongs, i.e., the current session queue (also called the current task queue). It is then judged whether the current queue is an empty queue. If not, the input task is added to the current queue and enqueuing ends. If the current queue is empty, it is judged whether the queue is being executed in a thread (i.e., in fig. 6, whether the session is executing in a thread). If it is, the current queue is an emptied task queue: the input task is added to it and enqueuing ends. If the current queue is not executing in a thread, it is an empty queue to which no task has been added: the main queue (also called the total session queue) is locked, the input task is added to the empty queue to generate the session's task queue, and the session is then placed in the main queue to wait for an idle thread in the thread pool to take out its task queue and execute its tasks; enqueuing then ends.
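The lock flow of fig. 6 can be sketched as follows, assuming (for illustration only) that the atomicity flag is a boolean guarded by the per-queue mutex:

```python
import threading
from collections import deque

main_lock = threading.Lock()   # mutex guaranteeing uniqueness of the main queue
main_queue = deque()           # the total session queue

class SessionQueue:
    """One session's task queue, with its own mutex and atomicity flag."""
    def __init__(self):
        self.tasks = deque()
        self.lock = threading.Lock()   # mutex for this session's queue
        self.in_thread = False         # atomicity flag: executing in a thread?

def enqueue(session_queue, task):
    with session_queue.lock:                 # 1. lock the current queue
        if session_queue.tasks:              # 2. non-empty queue: just append
            session_queue.tasks.append(task)
            return
        if session_queue.in_thread:          # 3. emptied queue still held by a
            session_queue.tasks.append(task) #    thread: append so the task runs
            return                           #    before the thread is released
        with main_lock:                      # 4. fresh empty queue: also lock the
            session_queue.tasks.append(task) #    main queue, add the task, and
            main_queue.append(session_queue) #    mount the session in the main queue
```

The per-queue lock is always taken before the main-queue lock, so the two mutexes are acquired in a fixed order and cannot deadlock against each other.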
The session processing flow of the embodiment of the invention adopts double queues to achieve concurrency of session-coupled tasks in a thread pool environment: a main queue records all sessions that currently need to be executed, and each session's task queue records the tasks queued for that session. This schedules tasks per session and dispatches threads per session, which brings several benefits. Thread reuse is achieved and each thread is kept running efficiently. The problem of excessive threads is avoided: the number of threads in a program generally only needs to reach about twice the number of hardware cores, which in turn avoids frequent switching among many threads and low CPU utilization. Frequent creation and destruction of threads is likewise avoided: the thread pool is fully and cyclically utilized while the coupling among tasks remains isolated. Finally, thread blocking is alleviated, both by reducing the probability of blocking and by greatly shortening blocking time, which improves program concurrency and working efficiency.
Fig. 7 is a schematic diagram of main blocks of a task performing device according to an embodiment of the present invention.
The task execution device 700 according to the embodiment of the present invention mainly includes: a queue generating module 701 and a task executing module 702.
The queue generating module 701 is configured to generate task queues according to dependency relationships among the tasks, and sort the task queues in the main queue. The dependency relationship among the tasks determines the execution sequence of the tasks in the task queue, and the main queue determines the execution sequence of each task queue.
The queue generating module 701 may specifically be configured to: judging whether the input task has a corresponding task queue according to the dependency relationship among the tasks; if yes, adding the input task to the corresponding task queue; otherwise, a new task queue is generated for the input task and arranged in the main queue.
The queue generating module 701 may include a determining sub-module configured to: judge, according to the dependency relationships among the tasks, whether the queue on which the input task depends is currently an empty queue; if so, judge whether that empty queue is being executed in a thread: if it is, the input task has a corresponding task queue, and if it is not, the input task has no corresponding task queue; if the queue on which the input task depends is not currently an empty queue, the input task has a corresponding task queue.
The queue generating module 701 may further include a locking sub-module configured to lock an empty queue that is not being executed in a thread.
The queue generating module 701 may further include a task queue generating sub-module configured to: lock the main queue and add the input task to the empty queue that is not being executed in a thread, so as to generate a new task queue.
The task execution module 702 is configured to cyclically utilize the threads in the thread pool and to execute the tasks in the task queues according to the order of the task queues in the main queue.
The task execution module 702 may specifically be configured to: when there are idle threads in the thread pool, acquire from the main queue as many task queues as there are idle threads, following the order of the task queues in the main queue; allocate an idle thread to each acquired task queue so as to execute each task in that queue; and, after all the tasks in a task queue have been executed, release the thread occupied by that queue, so that the freed idle thread continues executing the tasks of the remaining task queues in the main queue until the tasks of all task queues in the main queue have been executed.
In addition, the specific implementation of the task execution device in the embodiment of the present invention has been described in detail in the above task execution method, and therefore, the repeated content herein is not described again.
Fig. 8 illustrates an exemplary system architecture 800 to which a task execution method or a task execution device of an embodiment of the present invention may be applied.
As shown in fig. 8, the system architecture 800 may include terminal devices 801, 802, 803, a network 804, and a server 805. The network 804 serves to provide a medium for communication links between the terminal devices 801, 802, 803 and the server 805. Network 804 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 801, 802, 803 to interact with a server 805 over a network 804 to receive or send messages or the like. The terminal devices 801, 802, 803 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 801, 802, 803 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 805 may be a server that provides various services, such as a backend management server (for example only) supporting shopping websites browsed by users of the terminal devices 801, 802, 803. The backend management server may analyze and otherwise process received data, such as a product information query request, and feed back a processing result (for example, target push information or product information, for example only) to the terminal device.
It should be noted that the task execution method provided by the embodiment of the present invention is generally executed by the server 805, and accordingly, the task execution device is generally disposed in the server 805.
It should be understood that the number of terminal devices, networks, and servers in fig. 8 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 9, shown is a block diagram of a computer system 900 suitable for use in implementing a terminal device or server of an embodiment of the present application. The terminal device or the server shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU)901 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the system 900 are also stored. The CPU 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
The following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output section 907 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the internet. The drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 910 as necessary, so that a computer program read out therefrom is mounted into the storage section 908 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909, and/or installed from the removable medium 911. The above-described functions defined in the system of the present application are executed when the computer program is executed by a Central Processing Unit (CPU) 901.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a queue generation module and a task execution module. The names of these modules do not form a limitation on the module itself in some cases, for example, the queue generating module may also be described as a "module for generating task queues according to dependencies among tasks and ordering the task queues in the main queue".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: generating task queues according to the dependency relationship among tasks, and sequencing the task queues in a main queue, wherein the dependency relationship among the tasks determines the execution sequence of the tasks in the task queues, and the main queue determines the execution sequence of the task queues; and circularly utilizing the threads in the thread pool, and executing the tasks in the task queues according to the sequence of the task queues in the main queue.
According to the technical scheme of the embodiment of the invention, task queues are generated according to the dependency relationships among tasks and are ordered in a main queue; the threads in the thread pool are then cyclically utilized to execute the tasks in the task queues according to the order of the task queues in the main queue. The invention adopts double queues to achieve concurrent execution of coupled tasks in a thread pool environment: the main queue records all task queues to be executed, and each task queue records its own queued tasks. This resolves the coupling among tasks and thereby alleviates thread blocking; threads are well reused and need not be switched frequently; memory and CPU utilization are high; frequent creation and destruction of threads is avoided; the overall stability of the system is improved; and the risk of memory leakage is reduced.
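Putting the enqueue flow and the execution flow together, the behavior claimed above can be illustrated with a condensed, hypothetical end-to-end sketch (all names invented for illustration): tasks within one session run in submission order, while different sessions' queues are executed concurrently by the pool threads.

```python
import threading

class Session:
    def __init__(self, name):
        self.name = name
        self.tasks = []
        self.lock = threading.Lock()
        self.executing = False

class DualQueueExecutor:
    """Hypothetical dual-queue executor: main queue of sessions + per-session task queues."""
    def __init__(self, n_threads=2):
        self.main_queue = []
        self.main_lock = threading.Lock()
        self.n_threads = n_threads
        self.results = []                      # (session name, task) in execution order

    def submit(self, session, task):
        """Enqueue flow: register the session only if its queue is empty and idle."""
        with session.lock:
            if not session.tasks and not session.executing:
                with self.main_lock:
                    session.tasks.append(task)
                    self.main_queue.append(session)
            else:
                session.tasks.append(task)

    def _worker(self):
        """Execution flow: drain one session's queue at a time, then take the next."""
        while True:
            with self.main_lock:
                if not self.main_queue:
                    return
                session = self.main_queue.pop(0)
            with session.lock:
                session.executing = True
            while True:
                with session.lock:
                    if not session.tasks:
                        session.executing = False
                        break
                    task = session.tasks.pop(0)
                self.results.append((session.name, task))

    def run(self):
        threads = [threading.Thread(target=self._worker) for _ in range(self.n_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
```

In this sketch, coupled (same-session) tasks never overlap because a single thread owns the session's queue while draining it, yet the pool threads stay busy across independent sessions, matching the reuse and isolation properties described above.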
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A method of task execution, comprising:
generating task queues according to the dependency relationship among tasks, and sequencing the task queues in a main queue, wherein the dependency relationship among the tasks determines the execution sequence of the tasks in the task queues, and the main queue determines the execution sequence of the task queues;
and circularly utilizing the threads in the thread pool, and executing the tasks in the task queues according to the sequence of the task queues in the main queue.
2. The method of claim 1, wherein the step of generating task queues according to dependencies between tasks and ordering the task queues in the main queue comprises:
judging whether the input task has a corresponding task queue according to the dependency relationship among the tasks;
if yes, adding the input task to a corresponding task queue;
otherwise, generating a new task queue for the input task, and arranging the new task queue in the main queue.
3. The method according to claim 2, wherein the step of determining whether the input task has a corresponding task queue according to the dependency relationship between tasks comprises:
judging whether the queue on which the input task depends is an empty queue at present according to the dependency relationship among the tasks;
if so, judging whether the empty queue is being executed in the thread, if so, determining that the input task has a corresponding task queue, and if not, determining that the input task does not have a corresponding task queue;
and if the queue on which the input task depends is not an empty queue currently, the input task has a corresponding task queue.
4. The method of claim 3, wherein the step of generating a new task queue for the incoming task is preceded by:
locking the empty queue that is not executing in the thread; and,
a step of generating a new task queue for the input task, comprising:
locking the main queue and adding the input task to the empty queue that is not executed in the thread to generate the new task queue.
5. The method of claim 1, wherein the step of executing the tasks in the task queues in the order of the task queues in the main queue by recycling the threads in the thread pool comprises:
when there are idle threads in the thread pool,
according to the number of the idle threads and the sequence of the task queues in the main queue, acquiring the task queues with the number of the idle threads from the main queue;
allocating an idle thread for each acquired task queue to execute each task in the task queue;
and releasing the threads occupied by the task queue after all tasks in one task queue are executed, so that the released idle threads are used for continuously executing the tasks of the rest task queues in the main queue until the tasks of all task queues in the main queue are executed.
6. A task execution apparatus, comprising:
the queue generating module is used for generating task queues according to the dependency relationship among the tasks and sequencing the task queues in a main queue, wherein the dependency relationship among the tasks determines the execution sequence of the tasks in the task queues, and the main queue determines the execution sequence of the task queues;
and the task execution module is used for circularly utilizing the threads in the thread pool and executing the tasks in the task queues according to the sequence of the task queues in the main queue.
7. The apparatus of claim 6, wherein the queue generating module is further configured to:
judging whether the input task has a corresponding task queue according to the dependency relationship among the tasks;
if yes, adding the input task to a corresponding task queue;
otherwise, generating a new task queue for the input task, and arranging the new task queue in the main queue.
8. The apparatus of claim 7, wherein the queue generating module comprises a determining submodule configured to:
judging whether the queue on which the input task depends is an empty queue at present according to the dependency relationship among the tasks;
if so, judging whether the empty queue is being executed in the thread, if so, determining that the input task has a corresponding task queue, and if not, determining that the input task does not have a corresponding task queue;
and if the queue on which the input task depends is not an empty queue currently, the input task has a corresponding task queue.
9. The apparatus of claim 8, wherein the queue generating module comprises a locking sub-module configured to:
locking the empty queue that is not executing in the thread; and,
the queue generating module further comprises a task queue generating submodule for:
locking the main queue and adding the input task to the empty queue that is not executed in the thread to generate the new task queue.
10. The apparatus of claim 6, wherein the task execution module is further configured to:
when there are idle threads in the thread pool,
according to the number of the idle threads and the sequence of the task queues in the main queue, acquiring the task queues with the number of the idle threads from the main queue;
allocating an idle thread for each acquired task queue to execute each task in the task queue;
and releasing the threads occupied by the task queue after all tasks in one task queue are executed, so that the released idle threads are used for continuously executing the tasks of the rest task queues in the main queue until the tasks of all task queues in the main queue are executed.
11. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201811502731.0A 2018-12-10 2018-12-10 Task execution method and device Pending CN111290842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811502731.0A CN111290842A (en) 2018-12-10 2018-12-10 Task execution method and device

Publications (1)

Publication Number Publication Date
CN111290842A true CN111290842A (en) 2020-06-16

Family

ID=71028920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811502731.0A Pending CN111290842A (en) 2018-12-10 2018-12-10 Task execution method and device

Country Status (1)

Country Link
CN (1) CN111290842A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101565A (en) * 2020-09-08 2020-12-18 支付宝(杭州)信息技术有限公司 Model iteration realization method and device based on acceleration chip
CN112101565B (en) * 2020-09-08 2023-07-11 支付宝(杭州)信息技术有限公司 Model iteration implementation method and device based on acceleration chip
CN113268325A (en) * 2021-05-21 2021-08-17 北京达佳互联信息技术有限公司 Method, device and storage medium for scheduling task
CN114489867A (en) * 2022-04-19 2022-05-13 浙江大华技术股份有限公司 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination