CN111400010A - Task scheduling method and device - Google Patents

Task scheduling method and device

Info

Publication number
CN111400010A
Authority
CN
China
Prior art keywords
priority
executed
parallel
tasks
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010189939.2A
Other languages
Chinese (zh)
Inventor
曹德然
安兴朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN202010189939.2A priority Critical patent/CN111400010A/en
Publication of CN111400010A publication Critical patent/CN111400010A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/48Indexing scheme relating to G06F9/48
    • G06F2209/484Precedence

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a task scheduling method and apparatus. The method comprises: determining the priority of each task in a batch of tasks to be executed; determining a weight coefficient for each priority according to the at least two priorities of the batch tasks; determining the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel; and scheduling the tasks of each priority for parallel execution accordingly. Because the number of tasks executable in parallel at each priority is derived from its weight coefficient, low-priority tasks are guaranteed a share of system resources, tasks of all priorities can execute concurrently, and the overall execution efficiency of the tasks is improved.

Description

Task scheduling method and device
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a task scheduling method and apparatus.
Background
Currently, many applications in business processing systems execute batch tasks. When executing a batch of tasks, some methods ignore task priority, so important tasks are not processed in time. Other methods do consider task priority, but high-priority tasks always acquire all of the resources, and low-priority tasks can only be executed after every high-priority task has finished. If high-priority tasks keep being added, low-priority tasks may wait for a long time because they cannot acquire resources, so the overall execution efficiency of the tasks is low.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a task scheduling method, which is used for improving the execution efficiency of tasks and comprises the following steps:
determining the priority of each task in a batch of tasks to be executed;
determining a weight coefficient for each priority according to the at least two priorities of the batch tasks;
determining the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel; and
scheduling the tasks of each priority for parallel execution according to the number of tasks that can be executed in parallel at each priority.
The embodiment of the invention provides a task scheduling device, which is used for improving the execution efficiency of tasks and comprises the following components:
a priority determining module, configured to determine the priority of each task in a batch of tasks to be executed;
a weight coefficient determining module, configured to determine a weight coefficient for each priority according to the at least two priorities of the batch tasks;
a task quantity determining module, configured to determine the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel; and
an execution module, configured to schedule the tasks of each priority for parallel execution according to the number of tasks that can be executed in parallel at each priority.
An embodiment of the invention also provides a computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above task scheduling method when executing the computer program.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program for executing the above task scheduling method.
In the embodiment of the invention: the priority of each task in a batch of tasks to be executed is determined; a weight coefficient is determined for each priority according to the at least two priorities of the batch tasks; the number of tasks that can be executed in parallel at each priority is determined according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel; and the tasks of each priority are scheduled for parallel execution accordingly. Because the number of tasks executable in parallel at each priority is determined from its weight coefficient, low-priority tasks obtain a certain share of system resources, tasks of all priorities are executed concurrently, and the overall execution efficiency of the tasks is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 is a diagram illustrating a task scheduling method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating adjustment of priority task volume according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a task scheduler architecture according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating another structure of a task scheduler according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the structure of an adjustment module according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Before describing embodiments of the present invention, terms of art to which embodiments of the present invention relate will be described.
Starved task: a task that remains in a waiting state throughout batch task execution;
Asynchronous batch task: abbreviated as Aschn_Tsk in the data dictionary and generally called Task in the framework code. It is a collection of jobs, the minimum schedulable units, and can be divided into periodic tasks, real-time tasks, and timed tasks according to when they are scheduled.
Job: abbreviated as Job in the data dictionary and generally called Job in code. It is the minimum schedulable unit within a batch task; multiple jobs form a task in the manner of a sequential, branching, or combined job flow, and a job is a functional module that completes a specific operation.
Instance: abbreviated as Exmp in the data dictionary and generally called Exmp or Instance in code. Instances include task instances and job instances, which are stored in the task instance table (S08T1_ASCHN_TSK_EXMP) and the job instance table (S08T1_JOB_EXMP), respectively.
The technical problem identified by the inventor and the resulting idea behind the task scheduling method are described next.
When tasks are executed in batches, each task has a different priority. In existing batch execution, high-priority tasks always acquire system resources first, so low-priority tasks keep waiting to be executed and end up starved. Having identified this problem, the inventor provides a task scheduling method that mainly uses a weighting algorithm so that, even before all high-priority tasks have finished, low-priority tasks can acquire a certain proportion of system resources and be executed. High-priority tasks are still executed preferentially, while low-priority tasks are not starved by being unable to acquire resources. The task scheduling method provided by the embodiment of the present invention is described in detail below.
An embodiment of the present invention provides a task scheduling method for improving task execution efficiency. FIG. 1 is a schematic flow diagram of the task scheduling method in an embodiment of the present invention. As shown in FIG. 1, the method includes:
step 101: determining the priority of each task in a batch of tasks to be executed;
step 102: determining a weight coefficient for each priority according to the at least two priorities of the batch tasks;
step 103: determining the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel;
step 104: scheduling the tasks of each priority for parallel execution according to the number of tasks that can be executed in parallel at each priority.
As shown in FIG. 1, in the embodiment of the present invention: the priority of each task in a batch of tasks to be executed is determined; a weight coefficient is determined for each priority according to the at least two priorities of the batch tasks; the number of tasks that can be executed in parallel at each priority is determined according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel; and the tasks of each priority are scheduled for parallel execution accordingly. Because the number of tasks executable in parallel at each priority is determined from its weight coefficient, low-priority tasks obtain a certain share of system resources, tasks of all priorities are executed concurrently, and the overall execution efficiency of the tasks is improved.
In a specific implementation of step 101, after the batch of tasks to be executed is obtained, the importance and urgency of each task may be determined from the task information in the batch, and the priority of each task (that is, the priority of each job) may be determined from its importance and urgency. The priority of a task may, for example, comprise three levels (high, medium, and low) or more levels; the invention is not limited in this respect. A task instance table and a job instance table may also be generated from task information such as task number, batch number, state, and scheduled start time in the batch, to facilitate subsequent scheduling, as shown in Tables 1 and 2.
TABLE 1 Task instance table
Name                  Description               Field type
SGL_TSK_ID            Task instance number      VARCHAR2(80), primary key
APL_BTCH_NO           Application batch number  VARCHAR2(600)
JFRM_TSK_EXEC_ST_CD   Task instance state       CHAR(1)
PLN_STTM              Scheduled start time      CHAR(17)
TABLE 2 Job instance table (the table content is provided as an image in the original publication)
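For illustration only, the task instance record of Table 1 could be modeled as follows (a minimal Python sketch; the field names follow Table 1, and the priority field is a hypothetical addition representing the priority assigned in step 101):

```python
from dataclasses import dataclass

@dataclass
class TaskInstance:
    """Minimal sketch of a row in the task instance table (Table 1)."""
    sgl_tsk_id: str           # SGL_TSK_ID: task instance number, primary key (VARCHAR2(80))
    apl_btch_no: str          # APL_BTCH_NO: application batch number (VARCHAR2(600))
    jfrm_tsk_exec_st_cd: str  # JFRM_TSK_EXEC_ST_CD: task instance state (CHAR(1))
    pln_sttm: str             # PLN_STTM: scheduled start time (CHAR(17))
    priority: str = "C"       # hypothetical field: priority label assigned in step 101
```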
In a specific implementation of step 102, a weight coefficient may be calculated for each priority according to the at least two priorities of the batch tasks determined in step 101. For example, suppose the batch tasks comprise three priorities, high (A), medium (B), and low (C), the degrees of which are represented by different numbers: A is 0, B is 10, and C is 20, with a priority value range of 0 to 100. Then the weight coefficient of priority A may be 100 - 0 = 100, the weight coefficient of priority B may be 100 - 10 = 90, and the weight coefficient of priority C may be 100 - 20 = 80. The weight coefficients may also be calculated in other ways; the invention is not limited in this respect. It should be noted that, because new tasks may be added to the task pool of the batch at any time, there may be cases where the tasks in the batch have only one priority; in that case, the corresponding number of tasks may be obtained directly from the batch of tasks to be executed according to the total number of tasks that can be executed in parallel.
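A minimal sketch of this weight-coefficient calculation, assuming the example values above (priority values A = 0, B = 10, C = 20 and a priority range of 0 to 100); as noted, other calculation methods are equally possible:

```python
PRIORITY_RANGE_MAX = 100  # upper bound of the priority value range used in the example

def weight_coefficients(priority_values: dict[str, int]) -> dict[str, int]:
    """Weight coefficient = range maximum minus the priority value (smaller value = higher priority)."""
    return {name: PRIORITY_RANGE_MAX - value for name, value in priority_values.items()}

# Example from the text: A=0, B=10, C=20 -> weights A=100, B=90, C=80
print(weight_coefficients({"A": 0, "B": 10, "C": 20}))
```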
In one embodiment, the step 103 of determining the amount of tasks that can be executed in parallel by each priority according to the weight coefficient of each priority and the total number of the tasks that can be executed in parallel may include:
when the total number of tasks that can be executed in parallel is evenly divisible by the sum of the weight coefficients of all priorities, the number of tasks that can be executed in parallel at each priority is determined as follows:
number of tasks executable in parallel at a priority = (total number of tasks executable in parallel / sum of the weight coefficients of all priorities) × weight coefficient of that priority (1)
In a specific implementation, the total number of tasks that can be executed in parallel may be the maximum number of tasks the system can execute in parallel at one time. When this total is evenly divisible by the sum of the weight coefficients of all priorities, the number of tasks that can be executed in parallel at each priority may be calculated according to equation (1). For example, if the total number of tasks that can be executed in parallel is 270, the weight coefficient of priority A is 100, the weight coefficient of priority B is 90, the weight coefficient of priority C is 80, and the sum of the weight coefficients is 100 + 90 + 80 = 270, then the number of tasks executable in parallel at priority A is 270/270 × 100 = 100, at priority B it is 270/270 × 90 = 90, and at priority C it is 270/270 × 80 = 80.
In an embodiment, step 103, determining the amount of tasks that can be executed in parallel by each priority according to the weight coefficient of each priority and the total number of the tasks that can be executed in parallel, may further include:
when the total number of tasks that can be executed in parallel is not evenly divisible by the sum of the weight coefficients of all priorities, the number of tasks that can be executed in parallel at each priority is determined as follows:
number of tasks executable in parallel at a priority = (total number of tasks executable in parallel / sum of the weight coefficients of all priorities, rounded down to an integer) × weight coefficient of that priority (2)
the remaining number of tasks that can be executed in parallel is assigned to the priority with the largest weight coefficient.
In a specific implementation, when the total number of tasks that can be executed in parallel is not evenly divisible by the sum of the weight coefficients of all priorities, the number of tasks that can be executed in parallel at each priority may first be calculated according to equation (2). For example, if the total number of tasks that can be executed in parallel is 300, the weight coefficient of priority A is 100, the weight coefficient of priority B is 90, the weight coefficient of priority C is 80, and the sum of the weight coefficients is 100 + 90 + 80 = 270, then 300/270 rounded down is 1, so the number of tasks executable in parallel at priority A is 1 × 100 = 100, at priority B it is 1 × 90 = 90, and at priority C it is 1 × 80 = 80. At this point, the remaining number of tasks that can be executed in parallel is 300 - 100 - 90 - 80 = 30; these are all allocated to priority A, so the final number of tasks executable in parallel at priority A is 130.
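The allocation of equations (1) and (2) can be sketched as follows (a minimal Python illustration, not the patent's reference implementation); integer division handles the divisible and non-divisible cases uniformly, and any remainder goes to the priority with the largest weight coefficient:

```python
def parallel_quota(total: int, weights: dict[str, int]) -> dict[str, int]:
    """Split the total number of parallel execution slots across priorities by weight coefficient.

    Equations (1)/(2): quota_i = floor(total / sum(weights)) * weight_i; any remaining
    slots are assigned to the priority with the largest weight coefficient.
    """
    weight_sum = sum(weights.values())
    base = total // weight_sum                        # exact when total is divisible by weight_sum
    quotas = {name: base * weight for name, weight in weights.items()}
    remainder = total - sum(quotas.values())
    if remainder:
        top = max(weights, key=weights.get)           # priority with the largest weight coefficient
        quotas[top] += remainder
    return quotas

# Examples from the text:
print(parallel_quota(270, {"A": 100, "B": 90, "C": 80}))  # {'A': 100, 'B': 90, 'C': 80}
print(parallel_quota(300, {"A": 100, "B": 90, "C": 80}))  # {'A': 130, 'B': 90, 'C': 80}
```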
In step 104, the tasks of each priority are scheduled for parallel execution according to the number of tasks that can be executed in parallel at that priority, so that low-priority tasks obtain a certain share of system resources and tasks of all priorities are guaranteed to be executed concurrently.
To further conserve resources, in one embodiment, the method may further comprise:
step 105: when the remaining number of tasks at any priority is less than the number of tasks that can be executed in parallel at that priority, adjusting the number of tasks that can be executed in parallel at that priority and at the next lower priority.
FIG. 2 is a schematic diagram of adjusting the per-priority task quantity according to an embodiment of the present invention. As shown in FIG. 2, in an embodiment, step 105 may specifically include:
step 1051: determining the difference between the remaining number of tasks at the priority and the number of tasks that can be executed in parallel at that priority;
step 1052: adding the difference to the number of tasks that can be executed in parallel at the next lower priority, and taking the remaining number of tasks at the priority as the number of tasks that can be executed in parallel at that priority.
In a specific implementation of step 105, while the tasks of each priority are being executed in parallel, if the remaining number of tasks at any priority is less than the number of tasks that can be executed in parallel at that priority, that priority has idle resources. The difference between the remaining number of tasks and the number of tasks executable in parallel at that priority can be added to the number of tasks executable in parallel at the next lower priority, and the remaining number of tasks at that priority becomes its new parallel-execution quota, so that the idle resources are utilized and system resources are saved.
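A minimal sketch of the adjustment in step 105, assuming per-priority quotas and remaining task counts are tracked and that priorities are ordered from high to low (the function and variable names are illustrative):

```python
def adjust_quotas(quotas: dict[str, int], remaining: dict[str, int],
                  order: list[str]) -> dict[str, int]:
    """If a priority has fewer remaining tasks than its quota, pass the surplus
    down to the next lower priority (step 105 / FIG. 2)."""
    adjusted = dict(quotas)
    for i, level in enumerate(order):
        surplus = adjusted[level] - remaining[level]
        if surplus > 0 and i + 1 < len(order):
            adjusted[order[i + 1]] += surplus        # add the difference to the next lower priority
            adjusted[level] = remaining[level]       # remaining tasks become this priority's quota
    return adjusted

# Hypothetical numbers: priority A has only 70 tasks left but a quota of 130
print(adjust_quotas({"A": 130, "B": 90, "C": 80},
                    {"A": 70, "B": 200, "C": 200},
                    ["A", "B", "C"]))
# -> {'A': 70, 'B': 150, 'C': 80}
```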
The following is a specific example to facilitate an understanding of how the invention may be practiced:
The first step: after the batch of tasks to be executed is obtained, determine the importance and urgency of each task from the task information in the batch, and determine the priority of each task from its importance and urgency;
The second step: calculate a weight coefficient for each priority according to the at least two priorities of the batch tasks;
The third step: when the total number of tasks that can be executed in parallel is evenly divisible by the sum of the weight coefficients of all priorities, calculate the number of tasks executable in parallel at each priority according to equation (1); when it is not evenly divisible, first calculate the number of tasks executable in parallel at each priority according to equation (2), then allocate the remaining number of tasks that can be executed in parallel to the priority with the largest weight coefficient;
The fourth step: schedule the tasks of each priority for parallel execution according to the number of tasks that can be executed in parallel at each priority;
The fifth step: when the remaining number of tasks at any priority is less than the number of tasks that can be executed in parallel at that priority, add the difference between the two to the number of tasks executable in parallel at the next lower priority, and take the remaining number of tasks at the priority as the number of tasks executable in parallel at that priority.
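Tying the five steps together, the following minimal sketch reuses the weight_coefficients, parallel_quota, and adjust_quotas helpers sketched above; the task representation, the priority labels A/B/C, and the function names are illustrative assumptions rather than the patent's actual implementation:

```python
def schedule_batch(tasks: list[dict], total_parallel: int) -> dict[str, list[dict]]:
    """Select the tasks of each priority to dispatch for parallel execution in one round."""
    # Step 1: group tasks by the priority determined from importance and urgency
    by_priority: dict[str, list[dict]] = {}
    for task in tasks:
        by_priority.setdefault(task["priority"], []).append(task)

    # Only one priority present: take tasks directly up to the total (see the note under step 102)
    if len(by_priority) == 1:
        level, queue = next(iter(by_priority.items()))
        return {level: queue[:total_parallel]}

    # Step 2: weight coefficients (priority values assumed encoded as A=0, B=10, C=20)
    values = {"A": 0, "B": 10, "C": 20}
    weights = weight_coefficients({level: values[level] for level in by_priority})

    # Step 3: per-priority parallel quotas; the remainder goes to the largest weight coefficient
    quotas = parallel_quota(total_parallel, weights)

    # Step 5: hand surplus slots down when a priority has fewer tasks than its quota
    remaining = {level: len(queue) for level, queue in by_priority.items()}
    order = sorted(by_priority, key=lambda level: values[level])  # high to low priority
    quotas = adjust_quotas(quotas, remaining, order)

    # Step 4: dispatch up to each priority's quota for parallel execution
    return {level: by_priority[level][:quotas[level]] for level in by_priority}
```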
Based on the same inventive concept, an embodiment of the present invention further provides a task scheduling apparatus, as described in the following embodiments. Because the principle by which the task scheduling apparatus solves the problem is similar to that of the task scheduling method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
An embodiment of the present invention provides a task scheduling device for improving task execution efficiency. FIG. 3 is a schematic diagram of the structure of the task scheduling device in an embodiment of the present invention. As shown in FIG. 3, the task scheduling device includes:
the priority determining module 01 is configured to determine the priority of each task in a batch of tasks to be executed;
the weight coefficient determining module 02 is configured to determine a weight coefficient for each priority according to the at least two priorities of the batch tasks;
the task quantity determining module 03 is configured to determine the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel;
and the execution module 04 is configured to schedule the tasks of each priority for parallel execution according to the number of tasks that can be executed in parallel at each priority.
In one embodiment, the task amount determining module 03 is specifically configured to:
when the total number of tasks that can be executed in parallel is evenly divisible by the sum of the weight coefficients of all priorities, determine the number of tasks that can be executed in parallel at each priority as follows:
number of tasks executable in parallel at a priority = (total number of tasks executable in parallel / sum of the weight coefficients of all priorities) × weight coefficient of that priority (1)
in one embodiment, the task amount determining module 03 is specifically configured to:
when the total number of tasks that can be executed in parallel is not evenly divisible by the sum of the weight coefficients of all priorities, determine the number of tasks that can be executed in parallel at each priority as follows:
number of tasks executable in parallel at a priority = (total number of tasks executable in parallel / sum of the weight coefficients of all priorities, rounded down to an integer) × weight coefficient of that priority (2)
the remaining number of tasks that can be executed in parallel is assigned to the priority with the largest weight coefficient.
FIG. 4 is another schematic diagram of the structure of a task scheduling device according to an embodiment of the present invention. As shown in FIG. 4, in an embodiment, the device may further include:
an adjustment module 05, configured to: when the remaining number of tasks at any priority is less than the number of tasks that can be executed in parallel at that priority, adjust the number of tasks that can be executed in parallel at that priority and at the next lower priority.
FIG. 5 is a schematic diagram of the structure of the adjustment module in an embodiment of the present invention. As shown in FIG. 5, the adjustment module 05 may include:
a difference calculation unit 051, configured to determine the difference between the remaining number of tasks at the priority and the number of tasks that can be executed in parallel at that priority;
a priority task amount adjusting unit 052, configured to add the difference to the number of tasks that can be executed in parallel at the next lower priority, and to take the remaining number of tasks at the priority as the number of tasks that can be executed in parallel at that priority.
An embodiment of the invention also provides a computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above task scheduling method when executing the computer program.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program for executing the above task scheduling method.
In summary, in the embodiment of the present invention: the priority of each task in a batch of tasks to be executed is determined; a weight coefficient is determined for each priority according to the at least two priorities of the batch tasks; the number of tasks that can be executed in parallel at each priority is determined according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel; and the tasks of each priority are scheduled for parallel execution accordingly. Because the number of tasks executable in parallel at each priority is determined from its weight coefficient, low-priority tasks obtain a certain share of system resources, tasks of all priorities are executed concurrently, and the overall execution efficiency of the tasks is improved.
Furthermore, when the remaining number of tasks at any priority is less than the number of tasks that can be executed in parallel at that priority, the difference between the two is added to the number of tasks that can be executed in parallel at the next lower priority, so that idle resources are utilized and system resources are saved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and variations of the embodiment of the present invention may occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for task scheduling, comprising:
determining the priority of each task in a batch of tasks to be executed;
determining a weight coefficient for each priority according to the at least two priorities of the batch tasks;
determining the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel; and
scheduling the tasks of each priority for parallel execution according to the number of tasks that can be executed in parallel at each priority.
2. The method of claim 1, further comprising:
when the remaining number of tasks at any priority is less than the number of tasks that can be executed in parallel at that priority, adjusting the number of tasks that can be executed in parallel at that priority and at the next lower priority.
3. The method of claim 2, wherein adjusting the number of tasks that can be executed in parallel at the priority and at the next lower priority when the remaining number of tasks at any priority is less than the number of tasks that can be executed in parallel at that priority comprises:
determining the difference between the remaining number of tasks at the priority and the number of tasks that can be executed in parallel at that priority;
adding the difference to the number of tasks that can be executed in parallel at the next lower priority; and
taking the remaining number of tasks at the priority as the number of tasks that can be executed in parallel at that priority.
4. The method of claim 1, wherein determining the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel comprises:
when the total number of tasks that can be executed in parallel is evenly divisible by the sum of the weight coefficients of all priorities, determining the number of tasks that can be executed in parallel at each priority as follows:
number of tasks executable in parallel at a priority = (total number of tasks executable in parallel / sum of the weight coefficients of all priorities) × weight coefficient of that priority.
5. The method of claim 1, wherein determining the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel further comprises:
when the total number of tasks that can be executed in parallel is not evenly divisible by the sum of the weight coefficients of all priorities, determining the number of tasks that can be executed in parallel at each priority as follows:
number of tasks executable in parallel at a priority = (total number of tasks executable in parallel / sum of the weight coefficients of all priorities, rounded down to an integer) × weight coefficient of that priority.
The remaining number of tasks that can be executed in parallel is assigned to the priority with the largest weight coefficient.
6. A task scheduling apparatus, comprising:
a priority determining module, configured to determine the priority of each task in a batch of tasks to be executed;
a weight coefficient determining module, configured to determine a weight coefficient for each priority according to the at least two priorities of the batch tasks;
a task quantity determining module, configured to determine the number of tasks that can be executed in parallel at each priority according to the weight coefficient of each priority and the total number of tasks that can be executed in parallel; and
an execution module, configured to schedule the tasks of each priority for parallel execution according to the number of tasks that can be executed in parallel at each priority.
7. The apparatus of claim 6, further comprising: an adjustment module to:
when the remaining number of tasks at any priority is less than the number of tasks that can be executed in parallel at that priority, adjust the number of tasks that can be executed in parallel at that priority and at the next lower priority.
8. The apparatus of claim 6, wherein the task quantity determination module is specifically configured to:
when the total number of tasks that can be executed in parallel is evenly divisible by the sum of the weight coefficients of all priorities, determine the number of tasks that can be executed in parallel at each priority as follows:
number of tasks executable in parallel at a priority = (total number of tasks executable in parallel / sum of the weight coefficients of all priorities) × weight coefficient of that priority.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 5.
CN202010189939.2A 2020-03-18 2020-03-18 Task scheduling method and device Pending CN111400010A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010189939.2A CN111400010A (en) 2020-03-18 2020-03-18 Task scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010189939.2A CN111400010A (en) 2020-03-18 2020-03-18 Task scheduling method and device

Publications (1)

Publication Number Publication Date
CN111400010A true CN111400010A (en) 2020-07-10

Family

ID=71434275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010189939.2A Pending CN111400010A (en) 2020-03-18 2020-03-18 Task scheduling method and device

Country Status (1)

Country Link
CN (1) CN111400010A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113760991A (en) * 2021-03-25 2021-12-07 北京京东拓先科技有限公司 Data operation method and device, electronic equipment and computer readable medium
CN115185685A (en) * 2022-07-06 2022-10-14 重庆软江图灵人工智能科技有限公司 Artificial intelligence task scheduling method and device based on deep learning and storage medium
CN117676901A (en) * 2023-12-06 2024-03-08 武汉天宝莱信息技术有限公司 FPGA-based 5G signal processing method and system
CN117676901B (en) * 2023-12-06 2024-05-24 武汉天宝莱信息技术有限公司 FPGA-based 5G signal processing method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105511950A (en) * 2015-12-10 2016-04-20 天津海量信息技术有限公司 Dispatching management method for task queue priority of large data set
CN107391243A (en) * 2017-06-30 2017-11-24 广东神马搜索科技有限公司 Thread task processing equipment, device and method
CN107423120A (en) * 2017-04-13 2017-12-01 阿里巴巴集团控股有限公司 Method for scheduling task and device
CN107943686A (en) * 2017-10-30 2018-04-20 北京奇虎科技有限公司 A kind of test dispatching method, apparatus, server and storage medium
CN108762896A (en) * 2018-03-26 2018-11-06 福建星瑞格软件有限公司 One kind being based on Hadoop cluster tasks dispatching method and computer equipment
CN110515713A (en) * 2019-08-13 2019-11-29 北京安盟信息技术股份有限公司 A kind of method for scheduling task, equipment and computer storage medium

Similar Documents

Publication Publication Date Title
US9852005B2 (en) Multi-core processor systems and methods for assigning tasks in a multi-core processor system
JP4185103B2 (en) System and method for scheduling executable programs
US20110209153A1 (en) Schedule decision device, parallel execution device, schedule decision method, and program
US10216530B2 (en) Method for mapping between virtual CPU and physical CPU and electronic device
CN114610474B (en) Multi-strategy job scheduling method and system under heterogeneous supercomputing environment
CN111400010A (en) Task scheduling method and device
KR20120082598A (en) Cost based scheduling algorithm for multiple workflow in cloud computing and system of the same
CN103748559A (en) Method and system for work partitioning between processors with work demand feedback
CN111984426B (en) Task scheduling method and device, electronic equipment and storage medium
US20110023044A1 (en) Scheduling highly parallel jobs having global interdependencies
US8281313B1 (en) Scheduling computer processing jobs that have stages and precedence constraints among the stages
CN108897625B (en) Parallel scheduling method based on DAG model
CN114911613A (en) Cross-cluster resource high-availability scheduling method and system in inter-cloud computing environment
CN108958919B (en) Multi-DAG task scheduling cost fairness evaluation method with deadline constraint in cloud computing
Moulik et al. A deadline-partition oriented heterogeneous multi-core scheduler for periodic tasks
CN111736959B (en) Spark task scheduling method considering data affinity under heterogeneous cluster
CN110413393B (en) Cluster resource management method and device, computer cluster and readable storage medium
CN111274024B (en) CFS scheduler-based ready queue average load optimization method and data structure
CN116880994A (en) Multiprocessor task scheduling method, device and equipment based on dynamic DAG
CN116701001A (en) Target task allocation method and device, electronic equipment and storage medium
CN115098240B (en) Multiprocessor application scheduling method and system and storage medium
CN109189581B (en) Job scheduling method and device
CN111143210A (en) Test task scheduling method and system
CN112506640B (en) Multiprocessor architecture for encryption operation chip and allocation method
CN111475297A (en) Flexible operation configuration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220914

Address after: 25 Financial Street, Xicheng District, Beijing 100033

Applicant after: CHINA CONSTRUCTION BANK Corp.

Address before: 25 Financial Street, Xicheng District, Beijing 100033

Applicant before: CHINA CONSTRUCTION BANK Corp.

Applicant before: Jianxin Financial Science and Technology Co.,Ltd.