WO2020108337A1 - CPU resource scheduling method and electronic device - Google Patents

CPU resource scheduling method and electronic device

Info

Publication number
WO2020108337A1
Authority
WO
WIPO (PCT)
Prior art keywords
cpu
exclusive
application
node
shared
Prior art date
Application number
PCT/CN2019/119125
Other languages
English (en)
Chinese (zh)
Inventor
姚军利
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2020108337A1 publication Critical patent/WO2020108337A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • Embodiments of the present disclosure relate to the field of network technology, and in particular, to a CPU resource scheduling method and electronic equipment.
  • a distributed streaming data processing system is a system that converts real-time streaming data processing into multiple small jobs and executes them in parallel on multiple processing machines.
  • The distributed stream data processing system based on small batch operations divides the real-time stream data into a series of small batches at time intervals and then processes these small batches. In this way, this type of system can provide low-latency, high-throughput real-time data processing services.
  • With the development of cloud computing technology, it has become a trend to deploy such complex applications into cloud environment clusters.
  • In a cloud environment cluster, each node often needs to configure exclusive CPU cores and shared CPU cores in advance, which makes configuration more complicated; moreover, the shared CPU cores and exclusive CPU cores need to be scheduled as two separate resources.
  • As a result, the scheduling dimension is high; and because the threshold for using exclusive CPU cores is high, one type of resource is often insufficient while the other type sits idle and wasted, resulting in inefficient utilization of CPU resources. At the same time, the exclusive CPU cores are fixed once the node operating system starts: if the allocation ratio of shared and exclusive CPU cores on a node is modified to improve CPU resource utilization, the node operating system must be restarted for the change to take effect. In a cloud environment cluster, restarting a node's operating system means migrating or interrupting the services carried on that node, which cannot be done at high frequency and greatly affects the node's service effect.
  • The purpose of the embodiments of the present disclosure is to provide a CPU resource scheduling method and an electronic device, so that the complexity of configuring nodes is reduced without affecting the service effect of the nodes, the scheduling dimension of CPU resources and the usage threshold of exclusive CPU resources are also reduced, flexible scheduling of shared and exclusive CPU resources is achieved, and the CPU resource utilization efficiency of the nodes is improved.
  • The embodiments of the present disclosure provide a CPU resource scheduling method, including: configuring all CPU cores on each node as shared CPU cores, and taking the number of shared CPU cores of each node as the node's number of available CPU cores; and receiving and parsing the number of CPU cores required by the application.
  • the number of CPU cores required includes the number of exclusive CPU cores and the number of shared CPU cores.
  • An embodiment of the present disclosure also provides an electronic device including at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to implement the above CPU resource scheduling method.
  • FIG. 1 is a schematic flowchart of a CPU resource scheduling method according to the first embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a CPU resource scheduling method according to a second embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of a CPU resource scheduling method according to a third embodiment of the present disclosure.
  • FIG. 4 is a schematic flow chart of dynamic conversion of shared CPU cores and exclusive CPU cores according to the third embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present disclosure.
  • the first embodiment of the present disclosure relates to a CPU resource scheduling method.
  • the specific flow diagram is shown in FIG. 1 and specifically includes:
  • Step 101 All CPU cores on each node are configured as shared CPU cores, and the number of shared CPU cores of each node is used as the number of available CPU cores of the node.
  • the existing cluster usually needs to plan exclusive CPU cores in advance, and in the embodiment of the present disclosure, all CPU cores on each node in the cluster are configured as shared CPU cores.
  • all CPU cores on each node are shared CPU cores, and there is only one type of CPU resource in the cluster scheduler, that is, shared CPU resources.
  • Each node reports its own number of shared CPU cores to the cluster resource scheduler. In this embodiment, it is not necessary to plan exclusive CPU cores in advance, which reduces the configuration complexity of the nodes and lowers the threshold for using exclusive CPU resources.
  • Step 102 Receive and analyze the number of CPU core requirements of the application.
  • the cluster scheduler needs to analyze the application's CPU core requirements, determine the application's exclusive CPU core requirements and shared CPU core requirements, so as to subsequently deploy nodes for the application based on the application's exclusive CPU core requirements and shared CPU core requirements.
  • Step 103 Select a node whose available CPU core number is greater than or equal to the required number of CPU cores as a deployment node, and deploy the task of executing the application to the deployment node.
  • Each node in the cluster reports its own number of shared CPU cores to the cluster resource scheduler, so the cluster resource scheduler knows the number of available CPU cores of each node in the cluster. After receiving the application's CPU core requirement, a node that meets that requirement can be selected as the deployment node according to the required number of CPU cores. The task of executing the application is then deployed to the deployment node, and the deployment node executes the application's task, as sketched below.
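  • As an illustration only (not the wording of the embodiments), the selection step can be sketched as follows. The `Node` record, its field names, and the tie-breaking policy of preferring the node with the most spare capacity are assumptions made for the example; the embodiment only requires that the selected node's available core count be greater than or equal to the application's requirement.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    available_cores: int  # shared CPU cores the node reported to the cluster scheduler

def select_deployment_node(nodes: List[Node], required_cores: int) -> Optional[Node]:
    """Return a node whose available CPU core count covers the application's
    total requirement (exclusive + shared), or None if no node fits."""
    candidates = [n for n in nodes if n.available_cores >= required_cores]
    if not candidates:
        return None
    # Illustrative tie-breaking policy: prefer the node with the most spare capacity.
    return max(candidates, key=lambda n: n.available_cores)

# Example using the node sizes from the description: NODE1 reports 32 cores,
# NODE2 reports 48; an application needing 40 cores can only be placed on NODE2.
nodes = [Node("NODE1", 32), Node("NODE2", 48)]
assert select_deployment_node(nodes, 40).name == "NODE2"
```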
  • Step 104 Monitor the application startup event and exit event of the deployment node. If the startup event of the application is monitored, step 105 is entered; if the exit event is monitored, step 106 is entered.
  • Step 105 Select shared CPU cores equal to the number of exclusive CPU requirements from the shared CPU cores of the deployment node and convert them into exclusive CPU cores, and allocate the converted exclusive CPU cores to the application for use.
  • Step 106 When the application exit event of the deployment node is monitored, the exclusive CPU core allocated to the application is converted into a shared CPU core.
  • The cluster resource scheduler monitors the application startup and exit events of the deployment node. If the startup event of the application on the deployment node is monitored, shared CPU cores equal in number to the application's exclusive CPU core requirement are selected from the deployment node's shared CPU cores and converted into exclusive CPU cores, and the converted exclusive CPU cores are allocated to the application for use.
  • When the application exit event of the deployment node is monitored, the node's exclusive CPU cores no longer need to execute the application's tasks; at this time, the exclusive CPU cores allocated to the application are converted back into shared CPU cores (see the sketch below).
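  • The following is a minimal bookkeeping sketch of this start/exit handling. The `NodeCpuPool` class and its method names are hypothetical; how events are actually delivered to the cluster resource scheduler is left open by the embodiments.

```python
class NodeCpuPool:
    """Per-node view: all cores start shared; on an application start event some
    are converted to exclusive cores, and on the exit event they are returned."""

    def __init__(self, core_ids):
        self.shared = set(core_ids)   # IDs of shared CPU cores
        self.exclusive = {}           # application name -> set of exclusive core IDs

    def on_app_start(self, app, exclusive_needed):
        if exclusive_needed > len(self.shared):
            raise RuntimeError("not enough shared cores to convert")
        converted = {self.shared.pop() for _ in range(exclusive_needed)}
        self.exclusive[app] = converted
        return converted  # these IDs are now allocated to the application

    def on_app_exit(self, app):
        # Convert the application's exclusive cores back into shared cores.
        self.shared |= self.exclusive.pop(app, set())

# Example: a 4-core node and an application needing 2 exclusive cores.
pool = NodeCpuPool(range(4))
pool.on_app_start("CONTAINER1", 2)
pool.on_app_exit("CONTAINER1")
assert len(pool.shared) == 4
```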
  • In one example, the method further includes: updating the number of available CPU cores of the deployment node, so that the cluster resource scheduler can obtain the real-time actual number of available CPU cores of each node after the application task is deployed, which facilitates subsequent deployment of application tasks to each node.
  • In one example, the method further includes: monitoring the task execution status of the deployment node, and updating the number of available CPU cores of the deployment node after the task execution of the deployment node is completed or abnormally terminated, so that the cluster resource scheduler can obtain the real-time actual number of available CPU cores of each node, which facilitates subsequent deployment of application tasks to each node.
  • In one example, the method further includes the step of: updating the number of available CPU cores of the deployment node. Specifically, the difference between the deployment node's number of available CPU cores before deployment and the number of CPU cores required is used as the deployment node's updated number of available CPU cores. In this way, after the application is deployed to the deployment node, its number of available CPU cores is updated in time, so that the cluster resource scheduler can obtain the real-time actual number of available CPU cores of each node, which facilitates subsequent deployment of applications to each node.
  • In one example, the method further includes the step of: updating the number of available CPU cores of the deployment node. Specifically, the sum of the deployment node's number of available CPU cores after deployment and the number of CPU cores required is used as the deployment node's updated number of available CPU cores. That is, when the application exit event of the deployment node is monitored, the node's exclusive CPU cores no longer need to execute the application's tasks, so the number of available CPU cores is updated again: the sum of the current available CPU core count and the required CPU core count becomes the updated number of available CPU cores. In this way, after the current task on the deployment node ends, its number of available CPU cores is updated in time, which facilitates subsequent deployment of applications to each node. Both update rules are sketched below.
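  • A trivial sketch of the two update rules; the function names are illustrative only.

```python
def available_after_deploy(available_cores: int, required_cores: int) -> int:
    # Difference rule: deployment consumes the application's full CPU core requirement.
    return available_cores - required_cores

def available_after_exit(available_cores: int, required_cores: int) -> int:
    # Sum rule: when the application exits or its task ends, the requirement is returned.
    return available_cores + required_cores

# Example: a node with 48 available cores hosting an application that needs 6 cores
# drops to 42 on deployment and returns to 48 after the application exits.
assert available_after_exit(available_after_deploy(48, 6), 6) == 48
```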
  • the scheduling process of the cluster application under the static exclusive CPU solution is as follows:
  • MASTER establishes a connection with NODE1 and NODE2; 20 of the 32 CPUs on the NODE1 node are used as shared CPUs, and 12 are reported as exclusive CPUs to the MASTER node. Of the 48 CPUs on the NODE2 node, 30 are shared CPUs and 18 are reported to MASTER as exclusive CPUs. From the perspective of MASTER, there are 50 shared CPUs and 30 exclusive CPUs in the cluster.
  • the initialization state table is shown in Table 2 below:
  • CONTAINER1 and CONTAINER2 have no exclusive CPU requirements, and all the pre-planned exclusive CPU resources in this cluster are idle.
  • CONTAINER3 and CONTAINER4 are ultimately limited by the exclusive CPU resources, resulting in a situation where exclusive resources are tight while a large amount of shared resources in the cluster remains idle.
  • the scheduling process of the application container in the cluster under the dynamic exclusive CPU solution is as follows:
  • MASTER establishes a connection with NODE1 and NODE2; all 32 of the 32 CPUs on the NODE1 node are reported to the MASTER node as shared CPUs, and all 48 of the 48 CPUs on the NODE2 node are reported to MASTER as shared CPUs. From MASTER's perspective, the cluster has a total of 80 shared CPUs and 0 exclusive CPU cores.
  • the state table in the initial state is shown in Table 7:
  • CONTAINER1 and CONTAINER2 do not have exclusive CPU requirements. All CPU resources in this cluster are used as shared CPUs, and there is no idle scenario.
  • With the dynamic exclusive CPU core method, the scheduling layer does not need to care whether a container needs shared CPUs or exclusive CPUs: both are scheduled simply as CPU requirements, thereby improving the utilization efficiency of CPU resources in the cluster at the scheduling layer and avoiding the previously unavoidable problem of idle CPU resources.
  • In summary, this embodiment provides a CPU resource scheduling method in which all CPU cores on each node are configured as shared CPU cores, and the number of shared CPU cores of each node is used as the node's number of available CPU cores, so there is no need to configure exclusive and shared CPUs separately, reducing the complexity of configuring nodes. At the same time, CPU resources are scheduled only in the shared CPU dimension rather than in both the shared and exclusive CPU dimensions, which reduces the scheduling dimension of CPU resources. A node whose number of available CPU cores is greater than or equal to the application's required number of CPU cores is then selected as the deployment node, and the application is deployed to it to meet its CPU core requirement. When the application's startup event occurs, shared CPU cores equal in number to the application's exclusive CPU requirement are selected from the deployment node's shared CPU cores and converted into exclusive CPU cores for the application's use; when the application's exit event occurs, the exclusive CPU cores allocated to the application are converted back into shared CPU cores. In this way, exclusive and shared CPU cores are converted dynamically, so the scheduling layer does not need to care whether a container needs shared or exclusive CPU cores; both are merged and scheduled as ordinary CPU core requirements. This avoids the situation that arises when shared and exclusive CPU cores are scheduled as two kinds of resources, where "one resource is insufficient while the other resource is largely idle and wasted", and greatly improves the utilization efficiency of CPU resources and the flexibility of scheduling. Furthermore, there is no need to restart the node operating system in order to modify the allocation of shared and exclusive CPU cores on a node, which lowers the threshold for using exclusive CPU resources and avoids migration or interruption of the services carried on the node, so the service effect of the node is not affected.
  • the second embodiment of the present disclosure relates to a CPU resource scheduling method.
  • the second embodiment is an improvement on the first embodiment.
  • the main improvement is that: in this embodiment, a specific implementation manner for obtaining the number of CPU core requirements of an application is provided.
  • The specific flowchart of the CPU resource scheduling method in this embodiment is shown in FIG. 2, and the method specifically includes:
  • Step 201 All CPU cores on each node are configured as shared CPU cores, and the number of shared CPU cores of each node is used as the number of available CPU cores of the node.
  • step 201 is substantially the same as step 101 in the first embodiment, and will not be repeated here.
  • Step 202 Query the configuration information of the application.
  • Step 203 Determine the number of exclusive CPU requirements according to the configuration information.
  • The resource scheduler in the cluster queries the configuration information of the application before deploying it, and the configuration information of the application includes the application's CPU core requirements.
  • the cluster resource scheduler can determine the number of exclusive CPU cores and the number of shared CPU cores of the application according to the description in the configuration information, which is convenient for the subsequent dynamic allocation of exclusive CPU cores for the application.
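  • As a hedged illustration of steps 202 and 203, the sketch below reads the two counts from a dictionary-style configuration. The key names 'exclusive_cpus' and 'shared_cpus' are assumptions made for the example; the embodiments do not fix a configuration format.

```python
def parse_cpu_requirements(app_config: dict) -> tuple:
    """Return (exclusive, shared) CPU core counts described in the application's
    configuration information. The key names used here are illustrative only."""
    exclusive = int(app_config.get("exclusive_cpus", 0))
    shared = int(app_config.get("shared_cpus", 0))
    return exclusive, shared

# Example: an application describing 2 exclusive and 4 shared cores has a total
# requirement of 6 cores for the node-selection step.
exclusive, shared = parse_cpu_requirements({"exclusive_cpus": 2, "shared_cpus": 4})
total_required = exclusive + shared
assert total_required == 6
```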
  • Step 204 Select a node whose available CPU core number is greater than or equal to the required number of CPU cores as the deployment node, and deploy the task of executing the application to the deployment node.
  • step 204 is substantially the same as step 103 in the first embodiment, and will not be repeated here.
  • Step 205 Hand over tasks that do not require exclusive CPU cores in the application to the operating system of the deployment node for scheduling.
  • In one example, the resource scheduler hands the tasks in the application that require exclusive CPU cores to the converted exclusive CPU cores for processing, while the tasks in the application that do not require exclusive CPU cores are handed directly to the operating system of the application's deployment node for scheduling; the operating system of the deployment node chooses which of the deployment node's shared CPU cores process those tasks, realizing flexible use of the shared CPU cores on the deployment node. One way to illustrate this split is sketched below.
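  • The embodiments do not name a specific operating-system mechanism for this division of work; one way to illustrate it on Linux is to pin only the exclusive-core tasks with the standard-library call os.sched_setaffinity and leave every other task unpinned, so the node's operating system scheduler places them on the shared cores.

```python
import os

def pin_to_exclusive_cores(pid: int, exclusive_core_ids: set) -> None:
    """Restrict a process that requires exclusive cores to exactly those cores
    (Linux only). Processes that are never pinned remain schedulable by the
    deployment node's operating system on any shared core."""
    os.sched_setaffinity(pid, exclusive_core_ids)

# Example (illustrative): pin the current process to cores 2 and 3.
# pin_to_exclusive_cores(os.getpid(), {2, 3})
```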
  • Step 206 Monitor the application startup event and exit event of the deployment node. If the startup event of the application is monitored, step 207 is entered; if the exit event is monitored, step 208 is entered.
  • Step 207 When the application start event of the deployment node is monitored, select shared CPU cores equal in number to the exclusive CPU requirement from the deployment node's shared CPU cores, convert them into exclusive CPU cores, and allocate the converted exclusive CPU cores to the application for use.
  • Step 208 When the application exit event of the deployment node is monitored, convert the exclusive CPU core allocated to the application to a shared CPU core.
  • In summary, this embodiment provides a CPU resource scheduling method and proposes a specific implementation for obtaining the application's required number of CPU cores, which is obtained from the application's configuration information. Tasks in the application that do not require exclusive CPU cores are handed directly to the operating system of the application's deployment node for scheduling, and the operating system of the deployment node chooses which of its shared CPU cores process those tasks, realizing flexible use of the shared CPU cores on the deployment node.
  • the third embodiment of the present disclosure relates to a CPU resource scheduling method.
  • the third embodiment is an improvement on the first embodiment, and the main improvement lies in: a specific implementation manner for determining and converting an exclusive CPU core is proposed in this embodiment.
  • A specific flow schematic diagram of the CPU resource scheduling method in this embodiment is shown in FIG. 3, and the method specifically includes:
  • Step 301 All CPU cores on each node are configured as shared CPU cores, and the number of shared CPU cores of each node is used as the number of available CPU cores of the node.
  • Step 302 Receive and analyze the number of CPU core requirements of the application.
  • Step 303 Select a node whose available CPU core number is greater than or equal to the required number of CPU cores as the deployment node, and deploy the task of executing the application to the deployment node.
  • Steps 301 to 303 are substantially the same as steps 101 to 103 in the first embodiment, and will not be repeated here.
  • Step 304 Monitor the application startup event and exit event of the deployment node. If the startup event of the application is monitored, step 305 is entered; if the exit event is monitored, step 309 is entered.
  • Step 305 Determine the number of CPU cores on the deployment node that need to be converted into exclusive CPU cores, which is equal to the number of exclusive CPU cores required.
  • Step 306 According to the number of exclusive CPU requirements, determine the ID of the shared CPU core of the deployed node equal to the number of exclusive CPU requirements as the ID of the CPU core to be converted.
  • Step 307 Convert the shared CPU core corresponding to the CPU core ID to be converted on the deployment node into an exclusive CPU core.
  • Step 308 Assign the converted exclusive CPU core to the application.
  • the processing for the application startup event in the cluster is as follows: (1) Query application configuration information, determine the number of exclusive CPU requirements of the application, and hand over tasks that do not require exclusive CPU to the operating system Perform scheduling; (2) Determine the number of CPU cores in the deployment node that need to be converted to exclusive CPUs, and the ID of the CPU cores of the deployment nodes that need to be converted to exclusive CPU cores; (3) Convert the CPU cores of specific IDs on the deployment nodes to exclusive CPU cores (the number is the number of exclusive CPUs required by the application on the deployment node); (4) Assign the converted exclusive CPU cores to the application.
  • the ID of the shared CPU core with the lightest load on the deployment node may be directly used as the ID of the CPU core to be converted.
  • In this way, the shared CPU core with the fewest tasks and the lightest load directly handles the tasks of the application on the deployment node that require an exclusive CPU core. This provides a selection strategy for CPU conversion targets, which both speeds up the scheduling of CPU cores and improves task processing efficiency. One possible selection sketch appears below.
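  • A sketch of this selection strategy, assuming a per-core load measurement is already available (for example, derived from periodic /proc/stat samples); how the load is measured is outside the scope of the embodiment.

```python
def pick_cores_to_convert(load_by_core: dict, count: int) -> list:
    """Choose the IDs of the `count` shared cores with the lightest load as the
    conversion targets."""
    return sorted(load_by_core, key=load_by_core.get)[:count]

# Example: with per-core loads {0: 0.7, 1: 0.1, 2: 0.4, 3: 0.05}, converting two
# cores selects IDs [3, 1].
assert pick_cores_to_convert({0: 0.7, 1: 0.1, 2: 0.4, 3: 0.05}, 2) == [3, 1]
```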
  • In one example, the method further includes: migrating the existing load on the CPU core being converted to exclusive use onto other shared CPU cores, so that the converted exclusive CPU core is free for the application.
  • In one example, the converted exclusive CPU core information is passed to the application, and a CONTROL GROUP is used to ensure that only the owning application can use the converted exclusive CPU cores; this specifically includes: binding the application to the converted exclusive CPU cores according to the application's exclusive CPU description.
  • When the converted exclusive CPU core information is transferred to the application, it may be transferred using an environment variable or a configuration file containing the exclusive CPU core information.
  • Using a CONTROL GROUP (control group), the application is bound to the converted exclusive CPU cores. This provides a specific implementation that uses the CONTROL GROUP to ensure CPU exclusivity, and the implementation depends on the binding strategy specified in the application information. A hedged sketch of such a binding follows.
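  • The sketch below uses the Linux cpuset controller with cgroup v1 paths; the embodiments do not prescribe a cgroup version, mount point, or group naming scheme, so all of those details here are assumptions.

```python
from pathlib import Path

def bind_app_to_exclusive_cores(app_name: str, pid: int, core_ids: list) -> None:
    """Create a cpuset control group for the application, allow it to use only the
    converted exclusive cores, and move the application's process into the group.
    Requires root and a cpuset hierarchy mounted at /sys/fs/cgroup/cpuset."""
    group = Path("/sys/fs/cgroup/cpuset") / f"exclusive-{app_name}"
    group.mkdir(exist_ok=True)
    (group / "cpuset.cpus").write_text(",".join(str(c) for c in core_ids))
    (group / "cpuset.mems").write_text("0")        # single memory node assumed
    (group / "cgroup.procs").write_text(str(pid))  # bind the application's process

# Example (illustrative): bind_app_to_exclusive_cores("CONTAINER3", 12345, [4, 5])
```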
  • Step 309 Convert the exclusive CPU core allocated to the application to a shared CPU core.
  • In one example, the system kernel reads the updated exclusive CPU core ID list and sets the CPU cores corresponding to the IDs in the list as exclusive CPU cores in the system, so that the operating system can schedule according to the updated CPU core list;
  • CONTAINER2 is deployed on the NODE1 node, and the CPU layout on NODE1 is adjusted to Table 15:
  • CONTAINER3 is deployed on the NODE1 node, then the CPU layout on NODE1 is adjusted to Table 17:
  • CONTAINER4 is deployed on the NODE1 node, then the CPU layout on NODE1 is adjusted to Table 19:
  • the CPU resources on the node are dynamically converted between the shared CPU and the exclusive CPU according to the actual needs of the application.
  • the application ownership of the exclusive CPU core needs to be clearly defined, and through the configuration in the CONTROL GROUP, the exclusive CPU core can only be used by the belonging application.
  • this embodiment provides a specific implementation method for determining and converting an exclusive CPU core.
  • In this embodiment, the CPU core ID is used to determine which shared CPU cores need to be converted into exclusive CPU cores, the conversion from shared CPU core to exclusive CPU core is realized according to the CPU core ID, and the converted exclusive CPU cores are allocated to the application.
  • the shared CPU core with the fewest tasks and the lightest load directly handles the task of monopolizing the CPU core required by the application on the deployment node.
  • In addition, the task processes originally running on the CPU core that is converted to exclusive use are migrated to other shared CPU cores and processed by those shared CPU cores, making the scheduling of CPU cores within the node more flexible.
  • The fourth embodiment of the present disclosure relates to an electronic device, as shown in FIG. 5, including: at least one processor 501; and a memory 502 communicatively connected to the at least one processor 501, wherein the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501 so that the at least one processor 501 can execute the CPU resource scheduling method in any of the foregoing embodiments.
  • the bus may include any number of interconnected buses and bridges.
  • the bus connects one or more processors 501 and various circuits of the memory 502 together.
  • the bus can also connect various other circuits such as peripheral devices, voltage regulators, and power management circuits, etc., which are well known in the art, and therefore, they will not be described further herein.
  • the bus interface provides an interface between the bus and the transceiver.
  • the transceiver can be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices on the transmission medium.
  • the data processed by the processor 501 is transmitted on the wireless medium through the antenna. Further, the antenna also receives the data and transmits the data to the processor 501.
  • the processor 501 is responsible for managing the bus and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions.
  • the memory 502 may be used to store data used by the processor 501 when performing operations.
  • the fifth embodiment of the present disclosure relates to a computer-readable storage medium storing a computer program.
  • the computer program is executed by the processor, the above method embodiments are implemented.
  • All or part of the steps of the methods in the above embodiments may be implemented by a program, which is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present disclosure.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
  • This embodiment provides a CPU resource scheduling method and electronic device, which reduces the complexity of configuring nodes without affecting the service effect of nodes, and reduces the scheduling dimension of CPU resources and the threshold for using exclusive CPU resources. It also realizes flexible scheduling of shared CPU and exclusive CPU resources, which improves the CPU resource utilization efficiency of nodes.
  • The embodiments of the present disclosure configure all CPU cores on each node as shared CPU cores and use the number of shared CPU cores of each node as the node's number of available CPU cores, so there is no need to configure exclusive and shared CPUs separately, which reduces the complexity of configuring nodes. At the same time, CPU resources are scheduled only in the shared CPU dimension rather than in both the shared and exclusive CPU dimensions, which reduces the scheduling dimension of CPU resources. A node whose number of available CPU cores is greater than or equal to the application's required number of CPU cores is then selected as the deployment node, and the application is deployed to it to meet its CPU core requirement. When the application starts, shared CPU cores equal in number to the application's exclusive CPU requirement are selected from the deployment node's shared CPU cores and converted into exclusive CPU cores for the application's use; when the application exits, the exclusive CPU cores allocated to the application are converted back into shared CPU cores, realizing dynamic conversion between exclusive and shared CPU cores according to the actual needs of the application. Therefore, at the scheduling level there is no need to care whether a container needs shared or exclusive CPU cores; both are merged and scheduled as ordinary CPU core requirements, avoiding the situation that arises when shared and exclusive CPU cores are scheduled as two resources, where "one resource is insufficient while another resource is largely idle and wasted", which greatly improves the utilization efficiency of CPU resources and the flexibility of scheduling. There is also no need to restart the node operating system in order to modify the allocation of shared and exclusive CPU cores on a node, which lowers the threshold for using exclusive CPU resources, avoids migration or interruption of the services carried on the node, and does not affect the service effect of the node.

Abstract

The present invention relates to the technical field of networks. According to one embodiment, the invention provides a CPU resource scheduling method and device. The method comprises: configuring all CPU cores on the various nodes as shared CPU cores; selecting nodes satisfying the number of CPU cores required by an application as deployment nodes and deploying the application to the deployment nodes; when an application startup event is handled, converting a number of shared CPU cores equal to the number of exclusive CPU cores required by the application into exclusive CPU cores for use by the application; and when an application exit event is handled, converting the exclusive CPU cores allocated to the application into shared CPU cores.
PCT/CN2019/119125 2018-11-29 2019-11-18 Procédé de programmation de ressources cpu, et équipement électronique WO2020108337A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811442355.0A CN111240824B (zh) 2018-11-29 2018-11-29 一种cpu资源调度方法及电子设备
CN201811442355.0 2018-11-29

Publications (1)

Publication Number Publication Date
WO2020108337A1 true WO2020108337A1 (fr) 2020-06-04

Family

ID=70852452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/119125 WO2020108337A1 (fr) 2018-11-29 2019-11-18 Procédé de programmation de ressources cpu, et équipement électronique

Country Status (2)

Country Link
CN (1) CN111240824B (fr)
WO (1) WO2020108337A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112039963B (zh) * 2020-08-21 2023-04-07 广州虎牙科技有限公司 一种处理器的绑定方法、装置、计算机设备和存储介质
CN112231067B (zh) * 2020-12-11 2021-03-30 广东睿江云计算股份有限公司 一种虚拟cpu的优化调度方法及其系统
CN116431357B (zh) * 2023-06-13 2023-12-01 阿里巴巴(中国)有限公司 内核分配方法、分配组件、工作节点和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508696A (zh) * 2011-10-30 2012-06-20 北京方物软件有限公司 一种不对称的资源调度方法及装置
CN103019853A (zh) * 2012-11-19 2013-04-03 北京亿赞普网络技术有限公司 一种作业任务的调度方法和装置
CN105988872A (zh) * 2015-02-03 2016-10-05 阿里巴巴集团控股有限公司 一种cpu资源分配的方法、装置及电子设备
CN108153583A (zh) * 2016-12-06 2018-06-12 阿里巴巴集团控股有限公司 任务分配方法及装置、实时计算框架系统
EP3376399A1 (fr) * 2015-12-31 2018-09-19 Huawei Technologies Co., Ltd. Procédé, appareil et système de traitement de données

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007257257A (ja) * 2006-03-23 2007-10-04 Matsushita Electric Ind Co Ltd マルチタスクシステムにおけるタスク実行環境切替え方法
JP4705051B2 (ja) * 2007-01-29 2011-06-22 株式会社日立製作所 計算機システム
US20090158299A1 (en) * 2007-10-31 2009-06-18 Carter Ernst B System for and method of uniform synchronization between multiple kernels running on single computer systems with multiple CPUs installed
US9075609B2 (en) * 2011-12-15 2015-07-07 Advanced Micro Devices, Inc. Power controller, processor and method of power management
US9244738B2 (en) * 2013-10-24 2016-01-26 International Business Machines Corporation Conditional serialization to improve work effort
WO2016084237A1 (fr) * 2014-11-28 2016-06-02 株式会社日立製作所 Procédé de commande pour système de machines virtuelles et système de machines virtuelles
US9830187B1 (en) * 2015-06-05 2017-11-28 Apple Inc. Scheduler and CPU performance controller cooperation

Also Published As

Publication number Publication date
CN111240824B (zh) 2023-05-02
CN111240824A (zh) 2020-06-05

Similar Documents

Publication Publication Date Title
US10003500B2 (en) Systems and methods for resource sharing between two resource allocation systems
CN108337109B (zh) 一种资源分配方法及装置和资源分配系统
CN109564528B (zh) 分布式计算中计算资源分配的系统和方法
WO2019001092A1 (fr) Moteur d'équilibrage de charge, client, système informatique distribué, et procédé d'équilibrage de charge
CN113064712B (zh) 基于云边环境的微服务优化部署控制方法、系统及集群
WO2020108337A1 (fr) Procédé de programmation de ressources cpu, et équipement électronique
WO2021227999A1 (fr) Système et procédé de service en nuage
CN106933664B (zh) 一种Hadoop集群的资源调度方法及装置
CN110221920B (zh) 部署方法、装置、存储介质及系统
WO2016095535A1 (fr) Procédé et appareil d'attribution de ressources, et serveur
CN110166507B (zh) 多资源调度方法和装置
CN110990154B (zh) 一种大数据应用优化方法、装置及存储介质
WO2022105337A1 (fr) Procédé et système de planification de tâche
WO2024021489A1 (fr) Procédé et appareil de planification de tâches, et planificateur kubernetes
US11438271B2 (en) Method, electronic device and computer program product of load balancing
CN109002364A (zh) 进程间通信的优化方法、电子装置以及可读存储介质
Wu et al. Abp scheduler: Speeding up service spread in docker swarm
JP4862056B2 (ja) 仮想計算機管理機構及び仮想計算機システムにおけるcpu時間割り当て制御方法
CN116680078A (zh) 云计算资源调度方法、装置、设备以及计算机存储介质
WO2022111466A1 (fr) Procédé de planification de tâches, procédé de commande, dispositif électronique et support lisible par ordinateur
US20170269968A1 (en) Operating system support for game mode
CN110399206B (zh) 一种基于云计算环境下idc虚拟化调度节能系统
CN112346853A (zh) 用于分布应用的方法和设备
WO2024087663A1 (fr) Procédé et appareil de planification de tâche, et puce
TWI826137B (zh) 電腦系統、應用於電腦系統的資源分配方法及執行資源分配方法的電腦程式產品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19890610

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 031121)

122 Ep: pct application non-entry in european phase

Ref document number: 19890610

Country of ref document: EP

Kind code of ref document: A1