CN112685181A - Push task scheduling method and system for balancing CPU resources - Google Patents
Push task scheduling method and system for balancing CPU resources
- Publication number: CN112685181A (application number CN202011592637.6A)
- Authority: CN (China)
- Prior art keywords: task, push, lock, time, free queue
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Debugging And Monitoring (AREA)
Abstract
The invention provides a push task scheduling method and system for balancing CPU resources. The method comprises the following steps: S1, predicting the execution time of each push task using a min-heap mechanism to obtain the task's expected time; S2, when a push task reaches its expected time, placing it on an unpredicted-task linked list; S3, placing the push tasks on the unpredicted-task linked list into the lock-free queue of the corresponding time-consumption level in a worker thread, according to each task's time consumption; S4, each worker thread traversing the push tasks in its lock-free queues from highest priority to lowest priority and executing them; and S5, placing executed push tasks into the completed-task lock-free queue, from which the main push framework recycles them into the min-heap group and updates their expected times. The invention refines tasks and makes full use of CPU resources while ensuring that time-consuming and non-time-consuming tasks do not affect each other, thereby improving push performance.
Description
Technical Field
The invention relates to the technical field of network performance, and in particular to a push task scheduling method and system for balancing CPU resources.
Background
An existing method for scheduling push tasks across CPU resources is to classify the push tasks and assign different CPU cores to different classes for pushing. Because push tasks include both time-consuming and non-time-consuming tasks, the tasks are classified to avoid mutual interference, and each class of tasks is assigned its own CPU cores. The problem with this scheme is that some CPU cores may sit idle for long periods while others run at 99% utilization for long periods; push tasks are therefore greatly delayed and CPU resources are not fully utilized.
Disclosure of Invention
The invention aims to provide a push task scheduling method and system for balancing CPU resources, so as to refine tasks, make full use of CPU resources, and improve push performance while ensuring that time-consuming and non-time-consuming tasks do not affect each other.
The invention provides a push task scheduling method for balancing CPU resources, comprising the following steps:
S1, predicting the execution time of each push task using a min-heap mechanism to obtain the task's expected time;
S2, traversing all push tasks for arrival of their expected times and, when a push task reaches its expected time, placing it on an unpredicted-task linked list;
S3, setting a time-consumption level for each lock-free queue of each CPU core's worker thread, and placing the push tasks on the unpredicted-task linked list into the lock-free queue of the corresponding time-consumption level according to each task's time consumption;
S4, each worker thread traversing the push tasks in its lock-free queues from highest to lowest dynamically assigned priority and executing them;
and S5, placing executed push tasks into the completed-task lock-free queue, from which the main push framework recycles them into the min-heap group and updates their expected times.
Further, predicting the execution time of a push task using the min-heap mechanism in step S1 comprises:
S11, splitting a push task into push tasks of minimum granularity, which enter the new-task lock-free queue upon a configuration update;
S12, the main push framework polling the new-task lock-free queue to determine whether a new push task has arrived and, if so, adding it to the min-heap group;
and S13, classifying the min-heap group into heaps by link, the top of each min-heap being the expected time of the next push task.
Further, in step S12, the main push framework updates a version number when a new push task is added to the min-heap group; when the version number found while recycling an executed push task to the min-heap group in step S5 is inconsistent, the task in the completed-task lock-free queue is eliminated and is no longer pushed.
Further, in step S3 the lock-free queues of the worker threads of the same CPU core have similar time-consumption levels.
Further, each worker thread in step S4 is also assigned a corresponding execution level according to time-consumption level; that is, upon traversing to a certain time-consumption level, the worker does not traverse the subsequent lock-free queues.
Further, when the CPU cores configured by the user are insufficient and the push task data volume is too large, so that push tasks become delayed, time periods with large delays are skipped.
The invention also provides a push task scheduling system for balancing CPU resources, which uses the above push task scheduling method to schedule push tasks.
In summary, by adopting the above technical scheme, the invention has the following beneficial effects:
1. By setting time-consumption levels, the invention greatly reduces the mutual time-consumption impact of push tasks.
2. By having worker threads preempt push tasks from the lock-free queues, the invention makes full use of CPU resources.
3. By predicting push task execution times with a min-heap mechanism, the invention greatly reduces the traversal-check overhead of push tasks.
Therefore, the push task scheduling method refines tasks and makes full use of CPU resources while ensuring that time-consuming and non-time-consuming tasks do not affect each other, thereby improving push performance.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be considered limiting of its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic diagram of the data flow of the push architecture in the push task scheduling method for balancing CPU resources according to the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently some, but not all, embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention as presented in the figures is not intended to limit the claimed scope of the invention, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Examples
This embodiment provides a push task scheduling method for balancing CPU resources. As shown in Fig. 1, the method comprises the following steps:
S1, predicting the execution time of each push task using a min-heap mechanism to obtain the task's expected time; specifically:
S11, splitting a push task into push tasks of minimum granularity, which enter the new-task lock-free queue upon a configuration update;
S12, the main push framework polling the new-task lock-free queue to determine whether a new push task has arrived and, if so, adding it to the min-heap group;
and S13, classifying the min-heap group into heaps by link, the top of each min-heap being the expected time of the next push task.
Through the min-heap mechanism of step S1, push task traversal is greatly reduced to merely checking whether the task at the top of the min-heap has reached its expected time, which greatly reduces the traversal-check overhead of push tasks.
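The heap-top check described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the class and task names are assumptions, and `heapq` stands in for the min-heap group.

```python
import heapq

# Hypothetical sketch of the S1/S13 min-heap check: each push task carries its
# next expected execution time, and each tick only the heap top is inspected
# instead of traversing every task.
class TaskHeap:
    def __init__(self):
        self._heap = []  # entries are (expected_time, task_name)

    def add(self, expected_time, task_name):
        heapq.heappush(self._heap, (expected_time, task_name))

    def pop_due(self, now):
        """Pop every task whose expected time has arrived (heap-top check only)."""
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[1])
        return due

heap = TaskHeap()
heap.add(10.0, "daily-report")
heap.add(1.0, "heartbeat")
heap.add(5.0, "stats-roll-up")
print(heap.pop_due(5.0))  # ['heartbeat', 'stats-roll-up']
```

Checking due tasks thus costs O(1) per tick when nothing is due, rather than O(n) over all registered push tasks.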
S2, traversing all push tasks for arrival of their expected times and, when a push task reaches its expected time, placing it on an unpredicted-task linked list;
the push tasks are divided into second tasks, minute tasks, hour tasks and day tasks, aggregation of time tasks depends forward, backward tasks are delayed slightly, and then the push tasks are put on an unpredictable task chain table when the push tasks reach the expected time, and then the push tasks on the unpredictable task chain table are mainly checked in a short time through the following steps. In addition, when the CPU core configured by the user is insufficient and the data volume of the push task is too large, the push task is delayed, and a time period with large delay is skipped at this time, so that the push task in the latest period can be normally executed.
S3, setting a time-consumption level for each lock-free queue of each CPU core's worker thread, and placing the push tasks on the unpredicted-task linked list into the lock-free queue of the corresponding time-consumption level according to each task's time consumption. In some embodiments, the lock-free queues of the worker threads on the same CPU core are given similar time-consumption levels; for example, worker-thread lock-free queues with the same or adjacent time-consumption levels belong to the same CPU core. Thus, when allocating CPU cores, push tasks with similar time-consumption levels can be assigned to the same core according to the number of CPU cores, the number of threads, and the time-consumption levels, keeping the threads of the CPU cores as balanced as possible and greatly reducing the mutual time-consumption impact of push tasks.
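The level-to-core mapping in S3 might look like the following minimal sketch. The queue layout and mapping rule are assumptions; `collections.deque` stands in for a lock-free queue.

```python
from collections import deque

# Minimal sketch (structure is an assumption): each core owns one queue per
# cost level, and contiguous cost levels map onto the same core so that
# similarly expensive tasks share a core, as described in S3.
NUM_CORES = 2
COST_LEVELS = 4  # e.g. 0 = cheapest ... 3 = most expensive

def core_for_level(level):
    """Map adjacent cost levels onto the same CPU core."""
    levels_per_core = COST_LEVELS // NUM_CORES
    return min(level // levels_per_core, NUM_CORES - 1)

# One queue per (core, cost level); deque is a stand-in for the lock-free queue.
queues = {(c, l): deque() for c in range(NUM_CORES) for l in range(COST_LEVELS)}

def dispatch(task_name, cost_level):
    core = core_for_level(cost_level)
    queues[(core, cost_level)].append(task_name)
    return core

print(dispatch("tiny-ping", 0))      # 0 — cheap tasks land on core 0
print(dispatch("heavy-report", 3))   # 1 — expensive tasks land on core 1
```

With this mapping, a core never hosts both the cheapest and the most expensive levels, which is what keeps time-consuming and non-time-consuming tasks from interfering.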
S4, each worker thread traversing the push tasks in its lock-free queues from highest to lowest dynamically assigned priority and executing them. Worker threads thus preempt push tasks from the lock-free queues, making full use of CPU resources. Further, to prevent a time-consuming task from occupying worker threads for a long time and thereby affecting the execution progress of non-time-consuming tasks, each worker thread in step S4 is also assigned a corresponding execution level according to time-consumption level; that is, upon traversing to a certain time-consumption level it does not traverse the subsequent lock-free queues, ensuring that non-time-consuming tasks are not stranded while a highly time-consuming task is being pushed.
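The capped traversal of S4 can be sketched as follows. This is a hedged sketch, not the patent's code: the single-threaded loop and queue list are assumptions, and `deque` again stands in for the lock-free queue.

```python
from collections import deque

# Sketch of S4: a worker scans its queues from highest to lowest priority but
# stops at its assigned execution level, so a worker meant for cheap tasks
# never picks up long-running ones (queue layout is an assumption).
def run_worker(queues_by_priority, execution_level):
    """queues_by_priority[0] is highest priority; stop past execution_level."""
    executed = []
    for level, queue in enumerate(queues_by_priority):
        if level > execution_level:
            break  # do not traverse lock-free queues beyond this worker's level
        while queue:
            executed.append(queue.popleft())  # "execute" the push task
    return executed

queues = [deque(["urgent-a"]), deque(["normal-b"]), deque(["slow-c"])]
print(run_worker(queues, execution_level=1))  # ['urgent-a', 'normal-b']; 'slow-c' is untouched
```

A worker with a higher execution level would then drain the remaining expensive queue, so every level is eventually served without cheap workers ever blocking on it.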
And S5, placing executed push tasks into the completed-task lock-free queue, from which the main push framework recycles them into the min-heap group and updates their expected times. In some embodiments, the main push framework updates a version number when a new push task is added to the min-heap group in step S12; when the version number found while recycling an executed push task to the min-heap group in step S5 is inconsistent, the task in the completed-task lock-free queue is eliminated and is no longer pushed. Push task scheduling is thereby completed through steps S1 to S5.
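The recycle path with its version check might be sketched like this. The class and field names are assumptions, and the in-process heap is a stand-in for the main push framework's min-heap group.

```python
import heapq

# Sketch of the S5 recycle path with the S12 version check (names are
# assumptions): a completed task is re-queued into the min-heap only if its
# version still matches; a stale version means its configuration changed,
# so the old instance is eliminated instead of being pushed again.
class PushFramework:
    def __init__(self):
        self.heap = []
        self.versions = {}  # task name -> current version number

    def add_task(self, name, expected_time):
        self.versions[name] = self.versions.get(name, 0) + 1  # config update bumps version
        heapq.heappush(self.heap, (expected_time, name, self.versions[name]))

    def recycle(self, name, version, next_expected):
        if self.versions.get(name) != version:
            return False  # stale version: eliminate, do not push again
        heapq.heappush(self.heap, (next_expected, name, version))
        return True

fw = PushFramework()
fw.add_task("report", expected_time=10)
print(fw.recycle("report", version=1, next_expected=20))  # True: re-queued
fw.add_task("report", expected_time=30)                   # config updated -> version 2
print(fw.recycle("report", version=1, next_expected=40))  # False: stale copy dropped
```

The version number is what lets a reconfigured task replace its old schedule without any locking between the worker threads and the main push framework.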
In some embodiments, a push task scheduling system for balancing CPU resources is also provided, which performs push task scheduling using the above push task scheduling method; the detailed process is not repeated here.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in its protection scope.
Claims (7)
1. A push task scheduling method for balancing CPU resources, characterized by comprising the following steps:
S1, predicting the execution time of each push task using a min-heap mechanism to obtain the task's expected time;
S2, traversing all push tasks for arrival of their expected times and, when a push task reaches its expected time, placing it on an unpredicted-task linked list;
S3, setting a time-consumption level for each lock-free queue of each CPU core's worker thread, and placing the push tasks on the unpredicted-task linked list into the lock-free queue of the corresponding time-consumption level according to each task's time consumption;
S4, each worker thread traversing the push tasks in its lock-free queues from highest to lowest dynamically assigned priority and executing them;
and S5, placing executed push tasks into the completed-task lock-free queue, from which the main push framework recycles them into the min-heap group and updates their expected times.
2. The push task scheduling method for balancing CPU resources according to claim 1, characterized in that predicting the execution time of a push task using the min-heap mechanism in step S1 comprises:
S11, splitting a push task into push tasks of minimum granularity, which enter the new-task lock-free queue upon a configuration update;
S12, the main push framework polling the new-task lock-free queue to determine whether a new push task has arrived and, if so, adding it to the min-heap group;
and S13, classifying the min-heap group into heaps by link, the top of each min-heap being the expected time of the next push task.
3. The push task scheduling method for balancing CPU resources according to claim 2, characterized in that in step S12, the main push framework updates a version number when a new push task is added to the min-heap group; when the version number found while recycling an executed push task to the min-heap group in step S5 is inconsistent, the task in the completed-task lock-free queue is eliminated and is no longer pushed.
4. The push task scheduling method for balancing CPU resources according to claim 1, characterized in that in step S3 the lock-free queues of the worker threads of the same CPU core have similar time-consumption levels.
5. The push task scheduling method for balancing CPU resources according to claim 1, characterized in that each worker thread in step S4 is also assigned a corresponding execution level according to time-consumption level; that is, upon traversing to a certain time-consumption level, the worker does not traverse the subsequent lock-free queues.
6. The push task scheduling method for balancing CPU resources according to claim 1, characterized in that when the CPU cores configured by the user are insufficient and the push task data volume is too large, so that push tasks become delayed, time periods with large delays are skipped.
7. A push task scheduling system for balancing CPU resources, characterized in that the push task scheduling system uses the push task scheduling method according to any one of claims 1 to 6 to schedule push tasks.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011592637.6A CN112685181B (en) | 2020-12-29 | 2020-12-29 | Push task scheduling method and system for balancing CPU resources |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112685181A true CN112685181A (en) | 2021-04-20 |
CN112685181B CN112685181B (en) | 2024-06-04 |
Family
ID=75455050
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011592637.6A Active CN112685181B (en) | 2020-12-29 | 2020-12-29 | Push task scheduling method and system for balancing CPU resources |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112685181B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020118706A1 (en) * | 2001-02-26 | 2002-08-29 | Maple Optical Systems, Inc. | Data packet transmission scheduling based on anticipated finish times |
US20090271789A1 (en) * | 2008-04-28 | 2009-10-29 | Babich Alan F | Method, apparatus and article of manufacture for timeout waits on locks |
CN104391754A (en) * | 2014-10-13 | 2015-03-04 | 北京星网锐捷网络技术有限公司 | Method and device for processing task exception |
CN104915253A (en) * | 2014-03-12 | 2015-09-16 | 中国移动通信集团河北有限公司 | Work scheduling method and work processor |
CN107301085A (en) * | 2017-05-31 | 2017-10-27 | 深圳市神云科技有限公司 | A kind of cloud platform method for allocating tasks based on queue |
CN109716297A (en) * | 2016-09-16 | 2019-05-03 | 华为技术有限公司 | Optimization is directed to the operating system timer of high activity ratio |
CN109756565A (en) * | 2018-12-26 | 2019-05-14 | 成都科来软件有限公司 | A kind of Multitask Data method for pushing based on statistical form |
CN110096340A (en) * | 2018-01-29 | 2019-08-06 | 北京世纪好未来教育科技有限公司 | Timed task processing method and processing device |
CN110311965A (en) * | 2019-06-21 | 2019-10-08 | 长沙学院 | Method for scheduling task and system under a kind of cloud computing environment |
CN111506438A (en) * | 2020-04-03 | 2020-08-07 | 华夏龙晖(北京)汽车电子科技股份有限公司 | Shared resource access method and device |
CN111813515A (en) * | 2020-06-29 | 2020-10-23 | 中国平安人寿保险股份有限公司 | Multi-process-based task scheduling method and device, computer equipment and medium |
Non-Patent Citations (1)
Title |
---|
Yao Chonghua et al., "Timer management algorithm in multi-threaded applications" (多线程应用中的定时器管理算法), Computer Engineering (《计算机工程》), no. 02, 20 January 2010 (2010-01-20), pages 81-83 *
Also Published As
Publication number | Publication date |
---|---|
CN112685181B (en) | 2024-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210334135A1 (en) | Computing node job assignment using multiple schedulers | |
EP3770774B1 (en) | Control method for household appliance, and household appliance | |
CN110489217A (en) | A kind of method for scheduling task and system | |
Thinakaran et al. | Phoenix: A constraint-aware scheduler for heterogeneous datacenters | |
US8090974B1 (en) | State machine controlled dynamic distributed computing | |
CN112162865A (en) | Server scheduling method and device and server | |
CN104199739B (en) | A kind of speculating type Hadoop dispatching methods based on load balancing | |
CN110362391B (en) | Resource scheduling method and device, electronic equipment and storage medium | |
CN102541642A (en) | Task management method for enhancing real-time performance | |
WO2024021489A1 (en) | Task scheduling method and apparatus, and kubernetes scheduler | |
Petrov et al. | Adaptive performance model for dynamic scaling Apache Spark Streaming | |
CN110737485A (en) | workflow configuration system and method based on cloud architecture | |
CN113032102A (en) | Resource rescheduling method, device, equipment and medium | |
CN116010064A (en) | DAG job scheduling and cluster management method, system and device | |
CN111857990B (en) | Method and system for enhancing YARN long-type service scheduling | |
CN110928666A (en) | Method and system for optimizing task parallelism based on memory in Spark environment | |
US9424078B2 (en) | Managing high performance computing resources using job preemption | |
CN112860401A (en) | Task scheduling method and device, electronic equipment and storage medium | |
CN112395052A (en) | Container-based cluster resource management method and system for mixed load | |
CN112685181B (en) | Push task scheduling method and system for balancing CPU resources | |
CN112416520A (en) | Intelligent resource scheduling method based on vSphere | |
CN104731662B (en) | A kind of resource allocation methods of variable concurrent job | |
CN113051064A (en) | Task scheduling method, device, equipment and storage medium | |
CN113688053B (en) | Queuing using method and queuing using system for cloud testing tool | |
CN111158896A (en) | Distributed process scheduling method and system |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | CB02 | Change of applicant information | Address after: 610041, 12th, 13th and 14th floors, unit 1, building 4, No. 966, north section of Tianfu Avenue, Chengdu hi-tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan. Applicant after: Kelai Network Technology Co.,Ltd. Address before: 41401-41406, 14th floor, unit 1, building 4, No. 966, north section of Tianfu Avenue, Chengdu hi-tech Zone, Chengdu Free Trade Zone, Sichuan 610041. Applicant before: Chengdu Kelai Network Technology Co.,Ltd.
 | GR01 | Patent grant | 