WO2013091219A1 - Method and apparatus for processing concurrent tasks - Google Patents

Method and apparatus for processing concurrent tasks (并发任务的处理方法及装置)

Info

Publication number
WO2013091219A1
Authority
WO
WIPO (PCT)
Prior art keywords
task
thread
subtask
reached
tasks
Prior art date
Application number
PCT/CN2011/084451
Other languages
English (en)
French (fr)
Inventor
袁学文
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN201180003339.1A priority Critical patent/CN102630316B/zh
Priority to PCT/CN2011/084451 priority patent/WO2013091219A1/zh
Publication of WO2013091219A1 publication Critical patent/WO2013091219A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Definitions

  • the present invention relates to communications technologies, and in particular, to a method and an apparatus for processing concurrent tasks. Background Art
  • Concurrent processing of concurrent tasks is generally implemented with multiple threads. For example, when concurrent tasks are received, each task is handed to an idle thread; after the thread starts, it processes the task request and is not released until the task ends.
  • This way of concurrent processing requires creating many threads, and an excessive number of threads makes system scheduling busy and inefficient.
  • if the first task has a subtask, a second thread is allocated to the subtask when the subtask reaches a predetermined running condition, the processing progress of the first task is recorded, and the first thread is released.
  • Another aspect of the present invention provides a processing apparatus for concurrent tasks, including:
  • An allocating module configured to allocate a first thread to a first task that has reached a predetermined running condition, so that the first thread processes the first task, where the first task is one of the recorded concurrent tasks;
  • FIG. 1 is a flowchart of a method for processing concurrent tasks according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic diagram of multiple-subtask scheduling according to Embodiment 1 of the present invention;
  • FIG. 3 is a flowchart of a method for processing concurrent tasks according to Embodiment 2 of the present invention;
  • FIG. 4 is a flowchart of a method for processing concurrent tasks according to Embodiment 3 of the present invention;
  • FIG. 5 is a flowchart of a method for processing concurrent tasks according to Embodiment 4 of the present invention;
  • FIG. 6 is a flowchart of a method for processing concurrent tasks according to Embodiment 5 of the present invention;
  • FIG. 7 is a schematic structural diagram of an apparatus for processing concurrent tasks according to Embodiment 6 of the present invention. Detailed Description
  • FIG. 1 is a flowchart of a processing method of a concurrent task according to Embodiment 1 of the present invention. As shown in FIG. 1, the method includes:
  • Step 101: The task scheduler allocates a first thread to the first task that has reached the predetermined running condition, so that the first thread processes the first task, the first task being one of the recorded concurrent tasks.
  • the task scheduler periodically traverses all the recorded concurrent tasks; once a task is found to have reached that task's predetermined running condition, an idle thread is allocated to the task.
  • "first task" stands for any one of the recorded concurrent tasks, and "first thread" likewise stands for any one of the idle threads; neither is intended to limit the protection scope of this application.
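The periodic traversal just described can be sketched concretely. The following Python fragment is only an illustration of the idea, not the patent's implementation; the table layout, the thread names, and the `assign` callback are all invented for the example:

```python
import time

def scheduler_loop(table, idle_threads, assign, poll_interval=0.1, rounds=1):
    """Periodically traverse all recorded tasks; hand any task whose status
    is 'Ready' (predetermined running condition reached) to an idle thread."""
    for _ in range(rounds):                      # a real scheduler would loop forever
        for task_id, status in list(table.items()):
            if status == "Ready" and idle_threads:
                worker = idle_threads.pop()      # take any idle thread
                table[task_id] = "Running"
                assign(worker, task_id)
        time.sleep(poll_interval)

assignments = []
table = {"Task001": "Ready", "Task002": "Preprocess"}
scheduler_loop(table, idle_threads=["T1", "T2"],
               assign=lambda w, t: assignments.append((w, t)))
```

Only `Task001` is handed out; `Task002` stays in `Preprocess` until its own condition is met.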
  • all tasks can be recorded in the task data record table.
  • This task data record table can be created at system initialization and then persisted to the local database.
  • the task data record table can be, but is not limited to, as described in Table 1 below:
  • Each task recorded in the task data record table may include, but is not limited to, the following information: task identifier, task status, parent task identifier, subtask identifier, and subtask status.
  • the task identifier is used to uniquely identify a task, for example a task ID such as Task001 shown in Table 1 above.
  • the task status is used to identify whether the task has reached the predetermined running condition, is running, is waiting for a subtask's result, or has completed.
  • for example, the task status can be set to Preprocess before the predetermined running condition is met, Ready once it is met, Running while the task runs, Waiting while it waits for a subtask's result, and Done when it has completed.
  • both the parent task identifier and the subtask identifier are used to represent the relationships between tasks.
  • the parent task identifier is used, after the current task is processed, to find the parent task and then activate it to continue processing.
  • the subtask status is used to identify whether the subtask has reached the predetermined running condition, is running, is waiting for a subtask's result, or has completed.
  • a task can have both a parent task and a subtask (such as Task004 in Table 1), only a subtask and no parent task (Task001), only a parent task and no subtask (Task002), or neither a parent task nor a subtask (not shown in Table 1). When there is no subtask or parent task, the field can be marked NULL.
  • the predetermined running condition may be a flag bit, where the condition is reached when the flag representing it is set, or a data bit, where the condition is reached when the data bit representing it (for example, the task status) is "Ready". Both are merely illustrations of possible predetermined running conditions; any form of predetermined running condition that a person skilled in the art derives from the above description falls within the protection scope of this application.
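The fields of the task data record table map naturally onto a small data structure. The sketch below is our own illustration (class and field names are invented, not from the patent), showing the record layout and the "Ready" data-bit check:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    PREPROCESS = "Preprocess"  # predetermined running condition not yet met
    READY = "Ready"            # predetermined running condition reached
    RUNNING = "Running"
    WAITING = "Waiting"        # waiting for a subtask's result
    DONE = "Done"

@dataclass
class TaskRecord:
    task_id: str
    status: Status = Status.PREPROCESS
    parent_id: Optional[str] = None       # None plays the role of NULL (no parent task)
    subtask_id: Optional[str] = None      # None plays the role of NULL (no subtask)
    subtask_status: Optional[Status] = None

# In-memory stand-in for the task data record table (cf. Table 1).
table = {
    "Task003": TaskRecord("Task003", Status.WAITING, None, "Task004", Status.READY),
    "Task004": TaskRecord("Task004", Status.WAITING, "Task003", "Task005", Status.WAITING),
    "Task006": TaskRecord("Task006", Status.READY),
}

def ready_tasks(table):
    """Tasks whose 'data bit' says the predetermined running condition is reached."""
    return [t.task_id for t in table.values() if t.status is Status.READY]
```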
  • Step 102: If the first task has a subtask, assign a second thread to the subtask when the subtask reaches the predetermined running condition, record the processing progress of the first task, and release the first thread.
  • Parent tasks and subtasks are described in detail here. In general, if the execution of one task depends on the execution result of another, the former can be regarded as the parent task of the latter, and the latter as a subtask of the former. That is, while the parent task is executing, it starts the subtask; the subtask thereby reaches the predetermined running condition, and an idle thread (here called the second thread) is allocated to it. Normally, the parent task's thread is not released; it waits for the subtask's execution result and then derives the parent task's result from it.
  • In this embodiment, however, as soon as the second thread (an idle thread) is allocated to the subtask, the processing progress of the parent task is recorded, and then the parent task's thread is released.
  • the advantage of this is that thread occupation is reduced, so more threads can serve concurrent requests.
  • the subtask can also act as the parent task of other tasks, chaining the processing link by link; this makes subtask handling modular, i.e., each subtask can independently design its own subtasks while the task scheduler performs unified scheduling.
  • FIG. 2 is a schematic diagram of multiple-subtask scheduling according to this embodiment of the present invention: task 5 is a subtask of task 4, task 4 of task 3, task 3 of task 2, and task 2 of task 1.
  • While task 5 runs, tasks 4, 3, 2, and 1 have no corresponding threads. Only when a task acting as a subtask completes is the corresponding parent task triggered again, so that processing of the parent task continues on the basis of the recorded processing progress.
  • the relationship between parent tasks and subtasks may be set in advance; tasks that can be processed within milliseconds need not be given subtasks.
  • FIG. 3 is a flowchart of a method for processing a concurrent task according to Embodiment 2 of the present invention. As shown in FIG. 3, step 301 and step 302 are the same as step 101 and step 102 in FIG. 1, and details are not described herein. After step 302, the method may further include:
  • Step 303: Receive the execution result of the subtask and allocate a new thread to the first task, so that the new thread continues to process the first task according to the subtask's execution result, on the basis of the recorded processing progress of the first task.
  • reallocating a thread to the parent task only after the subtask's execution result is obtained achieves fast thread turnover and helps process concurrent requests better.
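To make the record-progress, release, and resume cycle concrete, here is a minimal Python sketch. It is our own illustration, not the patent's implementation; the task names, the `progress` dictionary, and the use of a thread pool and queue are all assumptions made for the example:

```python
from concurrent.futures import ThreadPoolExecutor
import queue

results = queue.Queue()   # execution results handed back by subtasks
progress = {}             # recorded processing progress, keyed by parent task id

def run_subtask(sub_id, parent_id):
    # the second thread: produce the subtask's execution result
    results.put((parent_id, f"{sub_id}-result"))

def run_parent(task_id, pool):
    # the first thread: work until the subtask reaches its running condition,
    # then record progress and return (returning releases this thread)
    progress[task_id] = "step-2"
    pool.submit(run_subtask, "Task002", task_id)

def resume_parent(task_id, sub_result):
    # a NEW thread continues the parent from the recorded progress
    return (progress[task_id], sub_result)

with ThreadPoolExecutor(max_workers=2) as pool:
    pool.submit(run_parent, "Task001", pool)
    parent_id, sub_result = results.get(timeout=5)          # subtask finished
    final = pool.submit(resume_parent, parent_id, sub_result).result()

print(final)  # ('step-2', 'Task002-result')
```

The key point mirrors the text: between `run_parent` returning and `resume_parent` starting, no thread is held on behalf of the parent task.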
  • FIG. 4 is a flowchart of a method for processing concurrent tasks according to Embodiment 3 of the present invention. As shown in FIG. 4, steps 401 to 403 are the same as steps 301 to 303 in FIG. 3 and are not described herein.
  • Before step 401, the method may further include:
  • Step 400: The server receives concurrent task requests and allocates an idle thread (here called a third thread) to a task request, so that the third thread records the task corresponding to the task request.
  • the third thread likewise stands for any one of the idle threads and is not intended to limit the protection scope of this application.
  • the information related to the task may be recorded in the task data record table shown in Table 1, including but not limited to the following: task identifier, task status, parent task identifier, subtask identifier, and subtask status.
  • FIG. 5 is a flowchart of a method for processing a concurrent task according to Embodiment 4 of the present invention. As shown in FIG. 5, steps 500 to 503 are the same as steps 400 to 403 in FIG. 4, and are not described herein. After step 500, the method may further include:
  • Step 500a: Release the third thread used in step 500 to record the task.
  • the third thread used in step 500 to record the task can be released as soon as the recording is completed, further improving fast thread turnover.
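The short life of the third thread (record the task, then be released) can be illustrated as follows. This is a sketch with invented request and field names, not the patent's code:

```python
import threading

task_table = {}                 # the task data record table
lock = threading.Lock()         # records may arrive concurrently

def record_task(request):
    """The 'third thread': record the task for one incoming request, then exit.
    Exiting immediately is the release; this thread does no other work."""
    with lock:
        task_table[request["task_id"]] = {
            "status": "Preprocess",
            "parent": request.get("parent"),    # None plays the role of NULL
            "subtask": request.get("subtask"),
        }

requests = [{"task_id": "Task001"}, {"task_id": "Task002", "parent": "Task001"}]
threads = [threading.Thread(target=record_task, args=(r,)) for r in requests]
for t in threads: t.start()
for t in threads: t.join()      # after join, every recorder thread is gone
```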
  • FIG. 6 is a flowchart of a method for processing a concurrent task according to Embodiment 5 of the present invention. As shown in FIG. 6, steps 600 to 603 are the same as steps 500 to 503 in FIG. 5, and are not described herein. Before step 601, the method may further include:
  • Step 600b Create a query task, and allocate an idle thread (referred to as a fourth thread) for the query task.
  • the fourth thread is also a proxy for any one of the idle threads and is not intended to limit the scope of protection of this application.
  • the main purpose of creating this query task is to effectively control the number of tasks that reach the predetermined running condition at the same time, temporarily delaying some of those tasks to ensure that the number of simultaneously allocated threads does not exceed the maximum number of threads allowed to be allocated simultaneously.
  • the maximum number of threads allowed to be allocated simultaneously is the maximum number of threads the system is allowed to process at once; it can generally be set empirically, and no limit is imposed here.
  • this query task can effectively prevent excessive CPU fluctuation and system instability caused by too many threads at a given moment.
  • Step 600c: The fourth thread queries the number of tasks that have reached the predetermined running condition; if it exceeds the maximum number of threads allowed to be allocated simultaneously, step 600d is performed; otherwise, step 601 is performed.
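One way to picture the query task's gate is the hedged sketch below. The cap of 4 and all the names are invented; the description only requires that ready tasks beyond the maximum be delayed rather than scheduled:

```python
MAX_CONCURRENT = 4   # maximum number of simultaneously allocated threads (set empirically)

def query_gate(statuses, running_count):
    """The 'fourth thread': split Ready tasks into those that may be scheduled
    now and those to be delayed (step 600d), given current thread usage."""
    ready = [tid for tid, s in statuses.items() if s == "Ready"]
    budget = max(0, MAX_CONCURRENT - running_count)
    return ready[:budget], ready[budget:]    # (schedule now, delay)

statuses = {"Task001": "Ready", "Task002": "Ready", "Task003": "Running",
            "Task004": "Ready", "Task005": "Ready"}
now, deferred = query_gate(statuses, running_count=3)
print(now, deferred)  # ['Task001'] ['Task002', 'Task004', 'Task005']
```

Deferred tasks are not dropped; they simply wait until a later traversal finds free budget.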
  • each different task is assigned a different delay time. For tasks whose I/O processing takes a long time, such as creating a virtual volume or a virtual machine image in cloud computing, the predetermined wait time can generally be set to 500 ms; that is, tasks whose I/O processing takes 500 ms or more are classed as long-running tasks.
  • such long-running tasks can be regarded as tasks meeting the preset condition, i.e., tasks whose thread allocation can be delayed for a certain time.
  • the saved task data record table can be, but is not limited to, as shown in Table 2 below:
  • a predetermined wait time is newly added in Table 2 to characterize the delay time assigned to each different task.
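The "predetermined wait time" column can be read as a simple classification rule. The function below is our own sketch; the description fixes only the 500 ms threshold, so the exact mapping is an assumption:

```python
LONG_TASK_MS = 500   # I/O time at or above this marks a long-running task

def wait_time(estimated_io_ms):
    """Predetermined wait time for a task (the extra column of Table 2):
    long I/O tasks (e.g. creating a virtual volume or a VM image) are
    given a delay before a thread is allocated; short tasks are not."""
    return 500 if estimated_io_ms >= LONG_TASK_MS else 0
```

For example, `wait_time(1200)` yields 500 for a virtual-volume creation, while `wait_time(50)` yields 0.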
  • in the method for processing concurrent requests provided by this embodiment of the present invention, if one task is the parent task of another, then while the parent task is processed, if the subtask meets the predetermined running condition, the processing progress of the parent task is recorded and the thread processing it is released. After the subtask is processed, a new thread is assigned to the parent task again, and processing of the parent task continues according to the recorded progress. This achieves fast thread turnover, effectively increases the number of concurrent task requests the system can process, and enhances the system's processing capability.
  • FIG. 7 is a schematic structural diagram of a processing device for a concurrent task according to Embodiment 6 of the present invention.
  • the device is the specific execution body of the foregoing method embodiments; for the specific execution process, refer to the description in the foregoing method embodiments, which is not repeated here.
  • the processing device of the concurrent task includes: an allocation module 701 and a release module 702.
  • the allocating module 701 is configured to allocate a first thread to the first task that has reached the predetermined running condition, so that the first thread processes the first task, the first task being one of the recorded concurrent tasks.
  • the release module 702 is configured to, if the first task has a subtask, allocate a second thread to the subtask when the subtask reaches the predetermined running condition, record the processing progress of the first task, and release the first thread.
  • the device may further include: a receiving module.
  • the receiving module is configured to receive the execution result of the subtask and allocate a new thread to the first task, so that the new thread continues to process the first task according to the subtask's execution result, on the basis of the first task's processing progress.
  • the device may further include: a recording module.
  • the recording module is configured to receive a concurrent task request, and allocate a third thread to the task request, so that the third thread records the task corresponding to the task request.
  • the releasing module 702 is further configured to release the third thread after the recording module allocates an idle thread to the task request so that the third thread records the task corresponding to the task request.
  • the apparatus may further include a creating module, configured to create a query task and allocate a fourth thread to the query task, so that the fourth thread queries the number of tasks that have reached the predetermined running condition; if the number of tasks that have reached the predetermined running condition does not exceed the maximum number of threads allowed to be allocated simultaneously, the allocation module 701 allocates the first thread to the first task that has reached the predetermined running condition.
  • the creating module can also be used to: if the number of tasks that have reached the predetermined running condition exceeds the maximum number of threads allowed to be allocated simultaneously, delay the tasks that meet the preset condition by a preset time.
  • for each task in the recorded concurrent tasks, at least the following information is recorded: a task identifier, a parent task identifier, a subtask identifier, a subtask status, and other such parameters.
  • in the processing device for concurrent requests provided by this embodiment of the present invention, if one task is the parent task of another, then while the parent task is processed, if the subtask meets the predetermined running condition, the processing progress of the parent task is recorded and the thread processing it is released. After the subtask is processed, a new thread is assigned to the parent task again, and processing of the parent task continues according to the recorded progress. This achieves fast thread turnover, effectively increases the number of concurrent task requests the system can process, and enhances the system's processing capability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present invention provides a method and an apparatus for processing concurrent tasks. The method includes: allocating a first thread to a first task that has reached a predetermined running condition, so that the first thread processes the first task, the first task being one of the recorded concurrent tasks; and, if the first task has a subtask, allocating a second thread to the subtask when the subtask reaches the predetermined running condition, recording the processing progress of the first task, and then releasing the first thread. The apparatus includes an allocating module and a releasing module.

Description

Method and Apparatus for Processing Concurrent Tasks

Technical Field

The present invention relates to communications technologies, and in particular, to a method and an apparatus for processing concurrent tasks.

Background Art

In service-oriented architecture (SOA) applications in the Internet technology (IT) field, and especially in Web Service applications, large numbers of concurrent tasks arise. To improve user experience and system response speed, service providers usually process these tasks concurrently.

Concurrent processing of concurrent tasks is generally implemented with multiple threads. For example, when concurrent tasks are received, each task is handed to an idle thread; after the thread starts, it processes the task request and is not released until the task ends. This approach requires creating many threads, and an excessive number of threads makes system scheduling busy and inefficient.

Summary of the Invention

A first aspect of the present invention provides a method for processing concurrent tasks, including:

allocating a first thread to a first task that has reached a predetermined running condition, so that the first thread processes the first task, the first task being one of the recorded concurrent tasks;

if the first task has a subtask, allocating a second thread to the subtask when the subtask reaches the predetermined running condition, recording the processing progress of the first task, and releasing the first thread.

Another aspect of the present invention provides an apparatus for processing concurrent tasks, including:

an allocating module, configured to allocate a first thread to a first task that has reached a predetermined running condition, so that the first thread processes the first task, the first task being one of the recorded concurrent tasks;

a releasing module, configured to, if the first task has a subtask, allocate a second thread to the subtask when the subtask reaches the predetermined running condition, record the processing progress of the first task, and release the first thread.

The technical effect of the present invention is as follows: when one task is the parent task of another, then while the parent task is being processed, if the subtask meets the predetermined running condition, the parent task's processing progress is recorded and the thread processing it is released. This achieves fast thread turnover, effectively increases the number of concurrent task requests the system can handle, and enhances the system's processing capability.

Brief Description of the Drawings

FIG. 1 is a flowchart of a method for processing concurrent tasks according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of multiple-subtask scheduling according to Embodiment 1 of the present invention;

FIG. 3 is a flowchart of a method for processing concurrent tasks according to Embodiment 2 of the present invention;

FIG. 4 is a flowchart of a method for processing concurrent tasks according to Embodiment 3 of the present invention;

FIG. 5 is a flowchart of a method for processing concurrent tasks according to Embodiment 4 of the present invention;

FIG. 6 is a flowchart of a method for processing concurrent tasks according to Embodiment 5 of the present invention;

FIG. 7 is a schematic structural diagram of an apparatus for processing concurrent tasks according to Embodiment 6 of the present invention.

Detailed Description

FIG. 1 is a flowchart of a method for processing concurrent tasks according to Embodiment 1 of the present invention. As shown in FIG. 1, the method includes:

Step 101: A task scheduler allocates a first thread to a first task that has reached a predetermined running condition, so that the first thread processes the first task, the first task being one of the recorded concurrent tasks.

It should be noted here that the task scheduler periodically traverses all recorded concurrent tasks; as soon as it finds that a task has reached that task's predetermined running condition, it allocates an idle thread to the task. "First task" stands for any one of the recorded concurrent tasks, and "first thread" for any one of the idle threads; neither limits the protection scope of this application. In this embodiment of the present invention, all tasks may be recorded in a task data record table, which can be created at system initialization and then persisted to a local database. The task data record table may be, but is not limited to, as shown in Table 1 below:
Table 1

Task ID    Task status    Parent task ID    Subtask ID    Subtask status
(first data row not recoverable from the source image)
Task002    Running        Task001           NULL
Task003    Waiting        NULL              Task004       Ready
Task004    Waiting        Task003           Task005       Waiting
Task005    Running        Task004           NULL
Each task recorded in the task data record table may include, but is not limited to, the following information: a task identifier, a task status, a parent task identifier, a subtask identifier, and a subtask status. The task identifier uniquely identifies a task, for example a task ID such as Task001 in Table 1 above. The task status indicates whether the task has reached the predetermined running condition, is running, is waiting for a subtask's result, or has completed. For example, the task status may be set to Preprocess before the predetermined running condition is met, Ready once it is met, Running while the task runs, Waiting while it waits for a subtask's result, and Done when it has completed. The parent task identifier and subtask identifier both represent the relationships between tasks; the parent task identifier is used, after the current task is processed, to find the parent task and activate it to continue processing. The subtask status indicates whether the subtask has reached the predetermined running condition, is running, is waiting for a subtask's result, or has completed. A task may have both a parent task and a subtask (for example, Task004 in Table 1), only a subtask and no parent task (Task001), only a parent task and no subtask (Task002), or neither (not shown in Table 1). Where there is no subtask or parent task, the field may be marked NULL.

The predetermined running condition may be a flag bit, where the condition is reached when the flag representing it is set, or a data bit, where the condition is reached when the data bit representing it (for example, the task status) is "Ready". Both are merely illustrations of possible predetermined running conditions; any form of predetermined running condition that a person skilled in the art derives from the above description falls within the protection scope of this application.

Step 102: If the first task has a subtask, allocate a second thread to the subtask when the subtask reaches the predetermined running condition, record the processing progress of the first task, and release the first thread.

Parent tasks and subtasks are explained in detail here. In general, if the execution of one task depends on the execution result of another, the former can be regarded as the parent task of the latter, and the latter as a subtask of the former. That is, while the parent task executes, it starts the subtask; the subtask reaches the predetermined running condition, and an idle thread (here called the second thread) is allocated to it. Normally the parent task's thread is not released; it waits for the subtask's execution result and then derives the parent task's result from it. In this embodiment of the present invention, however, as soon as an idle thread is allocated to the subtask, the parent task's processing progress is recorded and its thread is released. The advantage is that thread occupation is reduced, so more threads can serve concurrent requests. Note that a subtask may itself act as the parent task of other tasks, chaining the processing link by link; this makes subtask handling modular, i.e., each subtask can independently design its own subtasks while the task scheduler performs unified scheduling. In the multiple-subtask scheduling diagram of FIG. 2, provided by Embodiment 1 of the present invention, task 5 is a subtask of task 4, task 4 of task 3, task 3 of task 2, and task 2 of task 1. While task 5 runs, tasks 4, 3, 2, and 1 have no corresponding threads. Only when a subtask completes is its parent task triggered again, so that the parent task is processed further on the basis of the recorded processing progress.

It should also be noted that the relationship between parent tasks and subtasks may be set in advance; tasks that can be processed within milliseconds need not be given subtasks.

FIG. 3 is a flowchart of a method for processing concurrent tasks according to Embodiment 2 of the present invention. As shown in FIG. 3, steps 301 and 302 are the same as steps 101 and 102 in FIG. 1 and are not repeated here. After step 302, the method may further include:

Step 303: On receiving the subtask's execution result, allocate a new thread to the first task, so that the new thread continues processing the first task according to the subtask's execution result, on the basis of the recorded processing progress of the first task.

Reallocating a thread to the parent task only after the subtask's execution result is available achieves fast thread turnover and helps handle concurrent requests better.

FIG. 4 is a flowchart of a method for processing concurrent tasks according to Embodiment 3 of the present invention. As shown in FIG. 4, steps 401 to 403 are the same as steps 301 to 303 in FIG. 3 and are not repeated here. Before step 401, the method may further include:

Step 400: A server receives concurrent task requests and allocates an idle thread (here called a third thread) to a task request, so that the third thread records the task corresponding to the request. "Third thread" likewise stands for any one of the idle threads and does not limit the protection scope of this application.

Specifically, the information related to the task may be recorded in the task data record table shown in Table 1, including but not limited to: a task identifier, a task status, a parent task identifier, a subtask identifier, and a subtask status.
FIG. 5 is a flowchart of a method for processing concurrent tasks according to Embodiment 4 of the present invention. As shown in FIG. 5, steps 500 to 503 are the same as steps 400 to 403 in FIG. 4 and are not repeated here. After step 500, the method may further include:

Step 500a: Release the third thread used in step 500 to record the task.

The third thread used in step 500 to record the task can be released as soon as the recording is finished, which further improves fast thread turnover.

FIG. 6 is a flowchart of a method for processing concurrent tasks according to Embodiment 5 of the present invention. As shown in FIG. 6, steps 600 to 603 are the same as steps 500 to 503 in FIG. 5 and are not repeated here. Before step 601, the method may further include:

Step 600b: Create a query task and allocate an idle thread (here called a fourth thread) to the query task.

"Fourth thread" likewise stands for any one of the idle threads and does not limit the protection scope of this application.

The main purpose of creating the query task is to effectively control the number of tasks that reach the predetermined running condition at the same time, temporarily delaying some of them so that the number of simultaneously allocated threads does not exceed the maximum number of threads allowed to be allocated simultaneously. That maximum is the largest number of threads the system is allowed to process at once; it is generally set empirically and is not limited here. The query task effectively prevents excessive CPU fluctuation and system instability caused by too many threads at a given moment.

Step 600c: The fourth thread queries the number of tasks that have reached the predetermined running condition;

if it exceeds the maximum number of threads allowed to be allocated simultaneously, perform step 600d;

if it does not, perform step 601. Each different task is assigned a different delay time. For tasks whose I/O processing takes a long time, such as creating a virtual volume or a virtual machine image in cloud computing, the predetermined wait time can generally be set to 500 ms; that is, tasks whose I/O processing takes 500 ms or more are classed as long-running tasks. Such long-running tasks can be regarded as tasks meeting the preset condition, i.e., tasks whose thread allocation can be delayed for a certain time.

When the server receives a task request, the above content can be recorded and saved in the task data record table. The saved task data record table may be, but is not limited to, as shown in Table 2 below:
Table 2

[Table image not extracted in this copy; Table 2 matches Table 1 with an added "predetermined wait time" column.]
Compared with the task data record table shown in Table 1, Table 2 adds a predetermined wait time, which characterizes the delay time assigned to each different task.

In the method for processing concurrent requests provided by this embodiment of the present invention, if one task is the parent task of another, then while the parent task is processed, if the subtask meets the predetermined running condition, the parent task's processing progress is recorded and the thread processing it is released; when the subtask finishes, a new thread is allocated to the parent task again and processing continues according to the recorded progress. This achieves fast thread turnover, effectively increases the number of concurrent task requests the system can process, and enhances the system's processing capability.

A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware driven by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

FIG. 7 is a schematic structural diagram of an apparatus for processing concurrent tasks according to Embodiment 6 of the present invention. The apparatus is the specific execution body of the foregoing method embodiments; for the specific execution process, refer to the description in those embodiments, which is not repeated here. As shown in FIG. 7, the apparatus includes an allocating module 701 and a releasing module 702. The allocating module 701 is configured to allocate a first thread to a first task that has reached the predetermined running condition, so that the first thread processes the first task, the first task being one of the recorded concurrent tasks. The releasing module 702 is configured to, if the first task has a subtask, allocate a second thread to the subtask when the subtask reaches the predetermined running condition, record the processing progress of the first task, and release the first thread.

On this basis, the apparatus may further include a receiving module, configured to receive the subtask's execution result and allocate a new thread to the first task, so that the new thread continues processing the first task according to the subtask's execution result, on the basis of the first task's processing progress.

The apparatus may further include a recording module, configured to receive concurrent task requests and allocate a third thread to a task request, so that the third thread records the task corresponding to the request.

Further, the releasing module 702 is also configured to release the third thread after the recording module has allocated it to the task request and the third thread has recorded the corresponding task.

On the basis of the above implementations, the apparatus may further include a creating module, configured to create a query task and allocate a fourth thread to it, so that the fourth thread queries the number of tasks that have reached the predetermined running condition; if that number does not exceed the maximum number of threads allowed to be allocated simultaneously, the allocating module 701 allocates the first thread to the first task that has reached the predetermined running condition.

Further, the creating module may also be configured to: if the number of tasks that have reached the predetermined running condition exceeds the maximum, delay the tasks that meet the preset condition by a preset time.

In the above implementations, for each of the recorded concurrent tasks, at least the following information is recorded: a task identifier, a parent task identifier, a subtask identifier, a subtask status, and other such parameters.

In the apparatus for processing concurrent requests provided by this embodiment of the present invention, if one task is the parent task of another, then while the parent task is processed, if the subtask meets the predetermined running condition, the parent task's processing progress is recorded and the thread processing it is released; when the subtask finishes, a new thread is allocated to the parent task again and processing continues according to the recorded progress. This achieves fast thread turnover, effectively increases the number of concurrent task requests the system can process, and enhances the system's processing capability.

Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims

CLAIMS

1. A method for processing concurrent tasks, comprising:

allocating a first thread to a first task that has reached a predetermined running condition, so that the first thread processes the first task, the first task being one of recorded concurrent tasks;

if the first task has a subtask, allocating a second thread to the subtask when the subtask reaches a predetermined running condition, recording the processing progress of the first task, and releasing the first thread.

2. The method according to claim 1, wherein after the releasing the first thread, the method further comprises:

receiving an execution result of the subtask, and allocating a new thread to the first task, so that the new thread continues processing the first task according to the execution result of the subtask on the basis of the processing progress of the first task.

3. The method according to claim 1 or 2, wherein before the allocating a first thread to a first task that has reached a predetermined running condition, the method further comprises:

receiving concurrent task requests;

allocating a third thread to a task request, so that the third thread records a task corresponding to the task request.

4. The method according to claim 3, wherein after the allocating a third thread to the task request so that the third thread records the task corresponding to the task request, the method further comprises:

releasing the third thread.

5. The method according to any one of claims 1 to 4, wherein the allocating a first thread to a first task that has reached a predetermined running condition specifically comprises:

creating a query task, and allocating a fourth thread to the query task, so that the fourth thread queries the number of tasks that have reached the predetermined running condition; and if the number of tasks that have reached the predetermined running condition does not exceed a preset value, allocating the first thread to the first task that has reached the predetermined running condition.

6. The method according to claim 5, wherein if the number of tasks that have reached the predetermined running condition exceeds the preset value, the method further comprises:

delaying a task that meets a preset condition by a preset time.

7. The method according to any one of claims 1 to 6, wherein for each of the recorded concurrent tasks, at least the following information is recorded: a task identifier, a task status, a parent task identifier, a subtask identifier, and a subtask status.

8. An apparatus for processing concurrent tasks, comprising:

an allocating module, configured to allocate a first thread to a first task that has reached a predetermined running condition, so that the first thread processes the first task, the first task being one of recorded concurrent tasks; and

a releasing module, configured to, if the first task has a subtask, allocate a second thread to the subtask when the subtask reaches a predetermined running condition, record the processing progress of the first task, and then release the first thread.

9. The apparatus according to claim 8, further comprising: a receiving module, configured to receive an execution result of the subtask and allocate a new thread to the first task, so that the new thread continues processing the first task according to the execution result of the subtask on the basis of the processing progress of the first task.

10. The apparatus according to claim 8 or 9, further comprising: a recording module, configured to receive concurrent task requests and allocate a third thread to a task request, so that the third thread records a task corresponding to the task request.

11. The apparatus according to claim 10, wherein the releasing module is further configured to release the third thread after the recording module allocates the third thread to the task request so that the third thread records the task corresponding to the task request.

12. The apparatus according to any one of claims 8 to 11, further comprising: a creating module, configured to create a query task and allocate a fourth thread to the query task, so that the fourth thread queries the number of tasks that have reached the predetermined running condition; and if the number of tasks that have reached the predetermined running condition does not exceed a preset value, the allocating module allocates the first thread to the first task that has reached the predetermined running condition.

13. The apparatus according to claim 12, wherein the creating module is further configured to: if the number of tasks that have reached the predetermined running condition exceeds the preset value, delay a task that meets a preset condition by a preset time.

14. The apparatus according to any one of claims 8 to 13, wherein for each of the recorded concurrent tasks, at least the following information is recorded: a task identifier, a task status, a parent task identifier, a subtask identifier, and a subtask status.
PCT/CN2011/084451 2011-12-22 2011-12-22 并发任务的处理方法及装置 WO2013091219A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201180003339.1A CN102630316B (zh) 2011-12-22 2011-12-22 并发任务的处理方法及装置
PCT/CN2011/084451 WO2013091219A1 (zh) 2011-12-22 2011-12-22 并发任务的处理方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/084451 WO2013091219A1 (zh) 2011-12-22 2011-12-22 并发任务的处理方法及装置

Publications (1)

Publication Number Publication Date
WO2013091219A1 true WO2013091219A1 (zh) 2013-06-27

Family

ID=46588266

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/084451 WO2013091219A1 (zh) 2011-12-22 2011-12-22 并发任务的处理方法及装置

Country Status (2)

Country Link
CN (1) CN102630316B (zh)
WO (1) WO2013091219A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110333915A (zh) * 2019-05-31 2019-10-15 深圳壹账通智能科技有限公司 定时执行任务的方法、装置、计算机设备及存储介质

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942098A (zh) * 2014-04-29 2014-07-23 国家电网公司 一种任务处理系统和方法
CN105630588B (zh) * 2014-11-06 2019-03-01 卓望数码技术(深圳)有限公司 一种分布式作业调度方法和系统
CN106325989A (zh) * 2016-08-17 2017-01-11 东软集团股份有限公司 任务执行方法及装置
CN106531168B (zh) * 2016-11-18 2020-04-28 北京云知声信息技术有限公司 一种语音识别方法及装置
CN108319495B (zh) * 2017-01-16 2023-01-31 阿里巴巴集团控股有限公司 任务处理方法及装置
CN107256180B (zh) * 2017-05-19 2019-04-26 腾讯科技(深圳)有限公司 数据处理方法、装置及终端
CN107341054B (zh) * 2017-06-29 2020-06-16 广州市百果园信息技术有限公司 任务执行方法、装置及计算机可读存储介质
CN109582396B (zh) * 2017-09-25 2022-02-18 北京国双科技有限公司 一种任务状态处理方法、装置、系统及存储介质
CN107678838B (zh) * 2017-10-19 2021-07-02 郑州云海信息技术有限公司 一种跟踪虚拟机操作的方法、装置及虚拟机管理平台
CN108196882A (zh) * 2017-12-29 2018-06-22 普强信息技术(北京)有限公司 一种针对神经网络计算的加速方法及装置
CN117290113B (zh) * 2023-11-22 2024-02-13 太平金融科技服务(上海)有限公司 一种任务处理方法、装置、系统和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090070773A1 (en) * 2007-09-10 2009-03-12 Novell, Inc. Method for efficient thread usage for hierarchically structured tasks
US20090172361A1 (en) * 2007-12-31 2009-07-02 Freescale Semiconductor, Inc. Completion continue on thread switch mechanism for a microprocessor
CN101596113A (zh) * 2008-06-06 2009-12-09 中国科学院过程工程研究所 一种ct并行重建系统及成像方法
US20100251257A1 (en) * 2009-03-30 2010-09-30 Wooyoung Kim Method and system to perform load balancing of a task-based multi-threaded application

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090070773A1 (en) * 2007-09-10 2009-03-12 Novell, Inc. Method for efficient thread usage for hierarchically structured tasks
US20090172361A1 (en) * 2007-12-31 2009-07-02 Freescale Semiconductor, Inc. Completion continue on thread switch mechanism for a microprocessor
CN101596113A (zh) * 2008-06-06 2009-12-09 中国科学院过程工程研究所 一种ct并行重建系统及成像方法
US20100251257A1 (en) * 2009-03-30 2010-09-30 Wooyoung Kim Method and system to perform load balancing of a task-based multi-threaded application

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110333915A (zh) * 2019-05-31 2019-10-15 深圳壹账通智能科技有限公司 定时执行任务的方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
CN102630316A (zh) 2012-08-08
CN102630316B (zh) 2015-05-06

Similar Documents

Publication Publication Date Title
WO2013091219A1 (zh) Method and apparatus for processing concurrent tasks
US9507631B2 (en) Migrating a running, preempted workload in a grid computing system
US10223165B2 (en) Scheduling homogeneous and heterogeneous workloads with runtime elasticity in a parallel processing environment
EP2698711B1 (en) Method for dispatching central processing unit of hotspot domain virtual machine and virtual machine system
EP3786793B1 (en) Task processing method and device, and computer system
EP3073374B1 (en) Thread creation method, service request processing method and related device
US9792051B2 (en) System and method of application aware efficient IO scheduler
WO2017016421A1 (zh) 一种集群中的任务执行方法及装置
CN109697122B (zh) 任务处理方法、设备及计算机存储介质
CN109564528B (zh) 分布式计算中计算资源分配的系统和方法
WO2016078178A1 (zh) 一种虚拟cpu调度方法
US20080229319A1 (en) Global Resource Allocation Control
KR101474872B1 (ko) 클라우드 상에 가상 클러스터들의 효율적 구축을 위한 탄력적 가상 클러스터 관리 방법, 이를 이용한 가상 클러스터 관리 장치 및 클라우드 시스템
WO2016061935A1 (zh) 一种资源调度方法、装置及计算机存储介质
US9448862B1 (en) Listening for externally initiated requests
Kaur et al. Performance evaluation of task scheduling algorithms in virtual cloud environment to minimize makespan
KR20140070231A (ko) 맵리듀스 워크플로우 처리 장치와 방법 및 이를 저장한 기록 매체
JP4912927B2 (ja) タスク割当装置、及びタスク割当方法
US10013288B2 (en) Data staging management system
JP3215264B2 (ja) スケジュール制御装置とその方法
US20090133029A1 (en) Methods and systems for transparent stateful preemption of software system
KR20130033020A (ko) 매니코어 시스템에서의 파티션 스케줄링 장치 및 방법
CN113886089A (zh) 一种任务处理方法、装置、系统、设备及介质
US20180341524A1 (en) Task packing scheduling process for long running applications
CN112181661B (zh) 一种任务调度方法

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180003339.1

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11878072

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11878072

Country of ref document: EP

Kind code of ref document: A1