CN108427604B - Cluster resource adjustment method and device and cloud platform


Info

Publication number: CN108427604B
Application number: CN201810119092.3A
Authority: CN (China)
Legal status: Active (granted)
Prior art keywords: resource, partition, cluster, information, resources
Other languages: Chinese (zh)
Other versions: CN108427604A
Inventor: 单海军
Original assignee: Huawei Technologies Co., Ltd.
Current assignee: Shenzhen Huawei Cloud Computing Technology Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority applications: CN201810119092.3A; PCT/CN2018/100552 (WO2019153697A1)

Classifications

    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/505 Allocation of resources to service a request, considering the load
    • G06F9/5072 Grid computing
    • G06F9/5077 Logical partitioning of resources; management or configuration of virtualized resources
    • G06F2209/508 Indexing scheme relating to G06F9/50: monitor

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application provides a cluster resource adjustment method and apparatus, and a cloud platform, in the field of cloud computing. The cluster comprises a plurality of resource partitions, each resource partition comprises at least one virtual machine (VM), and each resource partition corresponds to one scheduler. The method comprises the following steps: acquiring VM information of each VM in the cluster; adjusting the VMs included in at least one resource partition according to the acquired VM information; and updating partition information of the cluster according to the adjustment result, wherein the partition information indicates the VMs included in each resource partition, and each scheduler executes its scheduling tasks according to the partition information. Because each scheduler executes scheduling tasks independently within its corresponding resource partition, scheduling failures caused by conflicts between schedulers are avoided; and because the resource partitions of the cluster can be adjusted dynamically based on the VM information, the resource usage of the partitions is effectively balanced and the utilization of cluster resources is improved.

Description

Cluster resource adjustment method and device and cloud platform
Technical Field
The present application relates to the field of cloud computing, and in particular, to a method and an apparatus for adjusting resources of a cluster, and a cloud platform.
Background
Platform as a Service (PaaS) is a cloud computing technology that provides the runtime and development environment of applications to users as a service. The platform providing this environment is called a cloud platform. A cloud platform generally includes a scheduler and a cluster composed of a large number of virtual machines (VMs); the scheduler deploys an application submitted by a user on one or more VMs according to the user's requirements and preset scheduling rules, thereby scheduling the application.
In the related art, to improve scheduling efficiency, a plurality of schedulers may be deployed in the cloud platform and share the resources of the cluster: each scheduler obtains the resource information of every virtual machine in the cluster in real time and schedules applications according to that information. Cluster resources here refer to resources such as the CPU, memory, and disk occupied by each virtual machine in the cluster.
However, when the cluster load is high and few resources remain, multiple schedulers may have scheduling tasks to execute at the same time and all schedule applications onto the same virtual machine with little remaining capacity; the resulting scheduling conflict causes the scheduling to fail.
Disclosure of Invention
The application provides a cluster resource adjustment method and apparatus, and a cloud platform, which can solve the problem of scheduling failures caused by scheduling conflicts in the related art. The technical solution is as follows:
in one aspect, a method for adjusting resources of a cluster is provided, where the method may be applied to a master node of a cloud platform, the cluster includes a plurality of resource partitions, each resource partition includes at least one virtual machine VM, and each resource partition corresponds to a scheduler, and the method may include: the main node acquires VM information of each VM in the cluster, adjusts the VMs included in at least one resource partition according to the acquired VM information, and can update partition information of the cluster according to an adjustment result, wherein the partition information is used for indicating the VMs included in each resource partition, and each scheduler is used for executing scheduling tasks in the corresponding resource partition according to the partition information.
Each scheduler can independently execute the scheduling task in the corresponding resource partition, so that the problem of scheduling failure caused by scheduling conflict of each scheduler can be avoided; and because the resources of each resource partition in the cluster can be dynamically adjusted based on the VM information, the balanced distribution of the cluster resources can be realized, the resource utilization rate of each resource partition can be effectively balanced, and the utilization rate of the cluster resources can be further improved.
Optionally, the VM information includes: resource information; the adjusting, by the host node according to the obtained VM information, a process of the VM included in at least one resource partition may include:
determining the remaining resource amount of each VM according to the resource information of each VM in the cluster, and determining the total remaining resource amount of the cluster; and adjusting the partition membership of the VMs in at least one resource partition based on the remaining resource amount of each VM and the total remaining resource amount of the cluster, so that the amount of remaining resources held by each resource partition satisfies a preset resource ratio.
The preset resource ratio may be an equal split, or may be determined based on the historical adjustments of each scheduler. Adjusting the amount of resources included in each resource partition according to this ratio ensures a reasonable distribution of cluster resources and improves resource utilization.
Optionally, the process of the master node adjusting the VMs included in the at least one resource partition based on the remaining resource amount of each VM and the total remaining resource amount may include:
dividing the remaining resources of the cluster into N shares according to the preset resource ratio, where each share is provided by at least one VM and corresponds to one resource partition, and N is the number of resource partitions included in the cluster;
assigning the at least one VM providing each share to the corresponding resource partition.
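The N-way split described above can be sketched as a greedy assignment. This is one illustrative reading, not the patent's actual algorithm, and all names are invented:

```python
# Hypothetical sketch: split the cluster's remaining resources into N
# shares according to a preset ratio by greedily assigning each VM to
# the partition that is currently furthest below its target share.

def partition_vms(free_by_vm, ratio):
    """free_by_vm: {vm_id: remaining amount}; ratio: list of N weights.
    Returns {partition index: [vm_ids]} approximating the target split."""
    total = sum(free_by_vm.values())
    targets = [total * w / sum(ratio) for w in ratio]
    filled = [0.0] * len(ratio)
    assignment = {i: [] for i in range(len(ratio))}
    # Place large VMs first so the shares can be filled evenly.
    for vm, amount in sorted(free_by_vm.items(), key=lambda kv: -kv[1]):
        i = min(range(len(ratio)), key=lambda j: filled[j] - targets[j])
        assignment[i].append(vm)
        filled[i] += amount
    return assignment
```

With four VMs holding 4, 4, 2, and 2 units of free resources and an equal 1:1 ratio, the sketch yields two shares of 6 units each, each provided by two VMs.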
Further, the VM information may further include: type information of the VM; the process of determining the total amount of resources remaining for the cluster may include:
dividing the plurality of VMs included in the cluster into at least two resource groups according to the type information of each VM, where the at least one VM in each resource group is of the same type;
determining, for each resource group, the total remaining resources of the at least one VM it includes;
correspondingly, the process of dividing the remaining resources of the cluster into N shares according to the preset resource ratio may include:
dividing the remaining resources of each resource group into N sub-shares according to the preset resource ratio, where each sub-share is provided by at least one VM and corresponds to one resource partition;
determining the at least two sub-shares that correspond to the same resource partition as one share.
The cluster resources are adjusted based on the types of the VMs, so that the balanced distribution of different types of resources in the cluster can be ensured, and the balance of resource distribution in the cluster is further improved.
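A minimal sketch of this type-aware split, with invented names: VMs are first grouped by heterogeneous (architecture) type, each group is split the same way, and the per-group sub-shares of a partition are merged.

```python
# Hypothetical sketch of the type-aware split; not the patent's
# actual implementation.

def split_group(free_by_vm, n):
    """Greedy near-equal split of one type group into n sub-shares."""
    shares = [[] for _ in range(n)]
    filled = [0.0] * n
    for vm, amt in sorted(free_by_vm.items(), key=lambda kv: -kv[1]):
        i = filled.index(min(filled))      # least-filled sub-share
        shares[i].append(vm)
        filled[i] += amt
    return shares

def partition_by_type(vm_info, n):
    """vm_info: {vm_id: (vm_type, free_amount)} -> {partition: [vm_ids]}."""
    groups = {}
    for vm, (vtype, free) in vm_info.items():
        groups.setdefault(vtype, {})[vm] = free
    merged = {i: [] for i in range(n)}
    for group in groups.values():
        for i, vms in enumerate(split_group(group, n)):
            merged[i].extend(vms)          # merge sub-shares per partition
    return merged
```

Splitting per type first guarantees that each partition receives a proportional amount of every resource type, which a single global split cannot.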
Optionally, before adjusting the VM included in the at least one resource partition, the method may further include:
determining a physical location at which each VM is deployed;
accordingly, adjusting the VMs included in at least one resource partition based on the remaining resource amount of each VM and the total remaining resource amount may include:
adjusting the VMs included in at least one resource partition based on the remaining resource amount of each VM, the total remaining resource amount and the physical location where each VM is deployed;
where, for any first VM and second VM that have equal remaining resources but are adjusted into different resource partitions, the average physical distance between the first VM and the VMs of the first resource partition to which the first VM is assigned is smaller than the average physical distance between the second VM and the VMs of that first resource partition.
The method provided by the application can divide the VMs with the closer physical positions into the same resource partition as much as possible, so that the communication time delay among the VMs in the same resource partition is reduced, and the communication efficiency is improved.
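The location rule can be read as a placement tiebreak. The sketch below uses invented one-dimensional rack positions and places a VM into the partition whose current members are closest on average; it is illustrative only:

```python
# Hypothetical sketch of location-aware placement; positions are
# invented 1-D rack indices, not the patent's location encoding.

def avg_distance(pos, members, positions):
    """Average distance from a position to a partition's members."""
    return sum(abs(pos - positions[m]) for m in members) / len(members)

def place(vm, partitions, positions):
    """Assign vm to the partition with the smallest average distance."""
    best = min(partitions,
               key=lambda p: avg_distance(positions[vm], partitions[p], positions))
    partitions[best].append(vm)
    return best
```

Keeping physically adjacent VMs in one partition shortens intra-partition communication paths, which is the stated motivation for the rule.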
Optionally, determining the remaining resource amount of each VM according to the resource information of each VM in the cluster, and determining the total remaining resource amount of the cluster may include:
determining the residual resource amount of each VM according to the resource information of each VM in the cluster;
determining at least one target VM based on the residual resource amount of each VM, wherein the residual resource amount of each target VM is larger than a preset threshold value;
determining the sum of the residual resource amount of the at least one target VM as the total residual resource amount of the cluster;
accordingly, adjusting the VMs included in at least one resource partition based on the remaining resource amount of each VM and the total remaining resource amount may include:
and adjusting the target VM included in at least one resource partition based on the residual resource amount of each target VM and the total residual resource amount.
According to the method, only the resource partition to which the at least one target VM belongs can be adjusted, and the VM with the residual resource amount smaller than the preset threshold does not need to be adjusted, so that the change degree of the resource partition can be reduced as much as possible, and the adjustment efficiency of the resource partition is improved.
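The threshold filter described above amounts to selecting target VMs and summing only their remaining resources; a hedged sketch with invented names:

```python
# Hypothetical sketch: only VMs whose remaining resources exceed a
# preset threshold become "target VMs" and take part in repartitioning;
# the rest keep their current partition.

def select_targets(free_by_vm, threshold):
    """Returns (target VMs, total remaining resources over targets)."""
    targets = {vm: amt for vm, amt in free_by_vm.items() if amt > threshold}
    total = sum(targets.values())
    return targets, total
```

Because nearly-full VMs are excluded, the adjustment touches fewer VMs and the partitions change less between rounds.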
Optionally, the VM information may include: resource information; before adjusting the VM included in the at least one resource partition, the method may further include:
obtaining the partition information of the cluster; detecting whether the cluster meets a partition adjusting condition or not according to the resource information of each VM in the cluster and the partition information;
correspondingly, the process of adjusting the partition information of the cluster according to the obtained VM information may include:
and when detecting that the cluster meets the partition adjusting condition, adjusting the VM included in each resource partition according to the obtained VM information.
The process of detecting whether the cluster meets the partition adjusting condition may include:
determining the resource usage of each resource partition according to the resource information of each VM in the cluster and the partition information, where the resource usage is the ratio of the amount of resources already used in a resource partition to the total amount of resources the partition holds;
when detecting that the number of the resource partitions with the resource utilization rate larger than the utilization rate threshold is larger than the number threshold, determining that the cluster meets partition adjustment conditions;
and when detecting that the number of the resource partitions with the resource utilization rate larger than the utilization rate threshold is not larger than the number threshold, determining that the cluster does not meet the partition adjustment condition.
When the number of the resource partitions with the resource utilization rate larger than the utilization rate threshold is larger than the number threshold, the resources of the cluster are readjusted, so that the timeliness of cluster resource adjustment can be ensured, and the problem of scheduling failure of the scheduler corresponding to the resource partition with the higher resource utilization rate is effectively solved.
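The trigger rule above can be expressed as a single check; the thresholds below are illustrative, not values from the patent:

```python
# Hypothetical sketch of the partition adjustment condition: repartition
# only when the number of partitions whose usage exceeds a usage
# threshold itself exceeds a count threshold.

def needs_adjustment(partition_usage, usage_threshold, count_threshold):
    """partition_usage: {partition id: used/total ratio in [0, 1]}."""
    overloaded = sum(1 for u in partition_usage.values() if u > usage_threshold)
    return overloaded > count_threshold
```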
Optionally, the resource information may include at least one of processor resource information, memory resource information, and storage resource information. The resource usage being greater than the usage threshold may mean:
the average usage across the resources corresponding to the at least one type of information is greater than the usage threshold; or, among the at least one type of information, the number of resource types whose usage is greater than the usage threshold is greater than a count threshold.
Optionally, the process of acquiring VM information of each VM in the cluster may include:
periodically acquiring VM information of each VM in the cluster according to a preset adjusting period;
or when detecting that the number of schedulers arranged in the cloud platform changes, acquiring VM information of each VM in the cluster.
According to the method provided by the application, the main node can periodically adjust the cluster resources according to a preset adjustment period, or can timely adjust the resource partition of the cluster when the number of the schedulers changes, and the resource adjustment method is high in flexibility.
In another aspect, an apparatus for adjusting resources of a cluster is provided, where the cluster includes a plurality of resource partitions, each resource partition includes at least one VM, and each resource partition corresponds to a scheduler, and the apparatus may include: at least one module, configured to implement the resource adjustment method for a cluster provided in the foregoing aspect.
In yet another aspect, a cloud platform is provided, the cloud platform comprising: a cluster, a plurality of schedulers, and a resource adjustment apparatus of a cluster as provided in the above aspect.
In yet another aspect, a computer-readable storage medium is provided, having instructions stored therein, which when run on a computer, cause the computer to perform the method of resource adjustment of a cluster as provided in the above aspect.
In a further aspect, a computer program product containing instructions is provided, which when run on a computer can cause the computer to perform the method for resource adjustment of a cluster provided in the above aspect.
In summary, the present application provides a method and an apparatus for adjusting resources of a cluster, and a cloud platform, where for a cluster including multiple resource partitions, the method provided in the present application may obtain VM information of each VM in the cluster, adjust a VM included in at least one resource partition according to the obtained VM information, and update partition information of the cluster according to an adjustment result, so that each scheduler may execute a scheduling task in a corresponding resource partition according to the adjusted partition information. According to the method provided by the application, each scheduler can independently execute the scheduling task in the corresponding resource partition, so that the problem of scheduling failure caused by scheduling conflict can be effectively avoided; and because the resources of the cluster can be dynamically adjusted, the balanced distribution of the cluster resources in each resource partition can be ensured, the resource utilization rate of each resource partition is effectively balanced, and the utilization rate of the cluster resources is improved.
Drawings
Fig. 1A is an architecture diagram of a cloud platform according to a resource adjustment method for a cluster provided in an embodiment of the present invention;
fig. 1B is a schematic diagram of a resource partitioning situation of a cluster provided in an embodiment of the present invention;
fig. 1C is an architecture diagram of a cloud platform according to another resource adjustment method for a cluster provided in the embodiment of the present invention;
fig. 2 is a flowchart of a method for adjusting resources of a cluster according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for detecting whether a cluster meets a partition adjustment condition according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for adjusting a VM included in at least one resource partition, provided in an embodiment of the present invention;
FIG. 5 is a diagram illustrating another example of resource partitioning for a cluster according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a resource partitioning of another cluster provided in this embodiment of the present invention;
fig. 7 is a flowchart of another cluster resource adjustment method provided in the embodiment of the present invention;
fig. 8 is a flowchart of a resource adjustment method for a cluster provided in the embodiment of the present invention;
fig. 9 is a flowchart of a resource adjustment method for a cluster according to another embodiment of the present invention;
fig. 10 is a schematic structural diagram of a resource adjustment apparatus of a cluster according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an adjusting module according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of another cluster resource adjustment apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a further apparatus for adjusting resources of a cluster according to an embodiment of the present invention.
Detailed Description
In the related art, in order to improve scheduling efficiency, a cluster may be further divided into a plurality of resource partitions according to different computing frameworks, where each resource partition includes a plurality of VMs for supporting one computing framework. Moreover, a scheduler may be correspondingly set for each resource partition, and each scheduler may perform task scheduling in the corresponding resource partition, that is, after receiving an application program submitted by a user, each scheduler may select a suitable VM from multiple VMs included in the corresponding resource partition to deploy the application program, so that an installation package or an image file of the application program is started and run on the virtual machine. The plurality of schedulers work in parallel, and the scheduling efficiency can be effectively improved.
However, as the running time of the cloud platform increases, the resources of some resource partitions in the cluster may run short while those of other partitions sit idle, so that the resource utilization across the resource partitions becomes unbalanced.
Please refer to fig. 1A, which illustrates an architecture diagram of a cloud platform to which the cluster resource adjustment method provided in an embodiment of the present invention applies. The method can be applied to a master node (Master) 00 of a cluster management system in the cloud platform. Referring to fig. 1A, the cloud platform further includes a cluster composed of a plurality of VMs, a plurality of schedulers, and a database 10; for example, fig. 1A shows three schedulers: S0, S1, and S2. The plurality of VMs included in the cluster may be divided into a plurality of resource partitions, each resource partition including at least one VM. Each of the plurality of schedulers may correspond to one resource partition, and after receiving an application program submitted by a user, each scheduler may select a suitable VM from the at least one VM in its corresponding resource partition to deploy the application, thereby avoiding the scheduling conflicts that may occur when multiple schedulers schedule in parallel. For example, referring to fig. 1B, the cluster may include three resource partitions S00, S10, and S20, each including a plurality of VMs, where the resource partition S00 corresponds to the scheduler S0, S10 corresponds to S1, and S20 corresponds to S2. When the scheduler S0 receives an application submitted by a user, it may select a suitable VM from the at least one VM in its resource partition S00 to deploy the application. The database 10 may be configured to store the partition information of each resource partition in the cluster, the partition information indicating the VMs included in each resource partition; the database 10 may also store the VM information of each VM (e.g., type information and location information of the VM) for the management module 01 and the policy module 03 to read.
Referring to fig. 1A, the master node 00 establishes communication connections with each scheduler and each VM, and the master node 00 can receive VM information sent by each VM and adjust the VMs included in at least one resource partition based on the received VM information, so that each scheduler can implement scheduling of an application according to an adjusted result, thereby implementing dynamic adjustment of cluster resources and improving the utilization rate of resources.
As shown in fig. 1A, the master node 00 may include a management module 01, a collection module 02, a policy module 03, and a plurality of caches corresponding to the schedulers, each cache being configured to store partition information of a resource partition corresponding to one scheduler, for example, cache 0 may store partition information of scheduler S0. The collection module 02 may be configured to obtain VM information (e.g., an identifier and resource information of a VM) of each VM in the cluster, and send the obtained VM information to the policy module 03; the policy module 03 may adjust the VMs included in the at least one resource partition according to the VM information of each VM, update the partition information stored in the database according to the adjustment result, and send the updated partition information to the management module 01; the management module 01 may update the partition information stored in each cache based on the partition information. The partition information stored in each cache may include, in addition to the identifier of the VM included in the resource partition, resource information of each VM in the resource partition, and each scheduler may schedule the application based on the partition information stored in its corresponding cache.
It should be noted that, in the embodiment of the present invention, the plurality of VMs included in the cluster of the cloud platform may be divided into two groups: management plane VMs and data plane VMs. The management plane VMs are used to deploy the components of the cluster management system, such as the master node 00, each scheduler, and the database 10; the data plane VMs are used to deploy the application programs submitted by users. Therefore, the cluster resources adjusted by the method provided in the embodiment of the present invention are the resources occupied by the data plane VMs.
It should be further noted that, referring to fig. 1C, in the embodiment of the present invention, the cloud platform may support a plurality of different computing frameworks; for example, fig. 1C shows three computing frameworks: computing framework 0, computing framework 1, and computing framework 2. Each scheduler in the cloud platform may belong to one computing framework and may schedule the application programs of that framework (i.e., applications developed using it). For example, the scheduler S0 corresponds to computing framework 0 and may schedule applications within computing framework 0. As a concrete case, the cloud platform may be provided with a Mesos framework (an open-source distributed resource management framework), whose upper layer can interface with a plurality of independently developed computing frameworks, such as Hadoop, MPI, and Kubernetes; through a common resource sharing layer, the Mesos framework enables the plurality of computing frameworks to share the resources of the cluster.
As can also be seen with reference to fig. 1C, multiple executors (executors) may be included in each VM, through which each VM may implement the deployment of tasks (i.e., applications).
Fig. 2 is a flowchart of a method for adjusting resources of a cluster according to an embodiment of the present invention, where the method may be applied to the master node 00 shown in fig. 1A or fig. 1C. In the cloud platform shown in fig. 1A or fig. 1C, the cluster may include a plurality of resource partitions, each resource partition includes at least one virtual machine VM, and each resource partition corresponds to one scheduler. Referring to fig. 2, the resource adjustment method of the cluster may include:
Step 101, obtaining VM information of each VM in the cluster.
In the embodiment of the present invention, the master node 00 may acquire the VM information of each VM in the cluster on demand or periodically. For example, the master node 00 may acquire the VM information of each VM in the cluster every 30 minutes through the collection module 02, and may update the VM information of each VM stored in the database 10 based on the acquired information. The VM information of each VM may include at least an identifier of the VM and resource information of the VM, and may further include at least one of state information, type information, location information, and information on the partition to which the VM belongs.
The identification of the VM can be a character string capable of uniquely identifying the VM, and the character string can be randomly generated by the cloud platform; the resource information may be used to indicate the amount of resources currently used by the VM and the amount of remaining resources, for example, the resource information may include the total amount of resources of the VM and the amount of used resources, where the resources may refer to CPU resources, memory resources, storage resources, and the like; the state information may be used to indicate a current working state of the VM, where the working state may be a normal state or a downtime state; the type information may be used to indicate heterogeneous types of VMs (also referred to as architecture types), where different types of VMs may refer to VMs that employ processors or memories of different architectures; the location information may be used to indicate a physical location where the VM is deployed, for example, the location information may include at least one of a rack, a machine room, a Data Center (DC), an Available Zone (AZ), and a Region (Region) where the VM is deployed; the information of the partition to which the VM belongs may then be used to indicate the resource partition to which the VM currently belongs.
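One possible shape for the per-VM record described above; the field names, units, and encodings are assumptions for illustration, not the patent's actual schema:

```python
# Hypothetical per-VM record matching the fields described above.
from dataclasses import dataclass

@dataclass
class VMInfo:
    vm_id: str              # unique random string identifying the VM
    total: dict             # e.g. {"cpu": 8, "mem_gb": 16, "disk_gb": 200}
    used: dict              # amounts currently consumed, same keys
    state: str = "normal"   # "normal" or "down"
    vm_type: str = "x86"    # heterogeneous (architecture) type
    location: tuple = ()    # e.g. (region, az, data_center, room, rack)
    partition: str = ""     # resource partition the VM currently belongs to

    def free(self):
        """Remaining amount per resource = total minus used."""
        return {k: self.total[k] - self.used.get(k, 0) for k in self.total}
```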
Step 102, acquiring the partition information of the cluster.
The master node 00 may obtain the partition information from the database 10, for example, the policy module 03 in the master node 00 may obtain the partition information from the database 10 after receiving the VM information of each VM sent by the collection module 02. The partition information is used to indicate the VMs included in the resource partition, for example, an identifier of each resource partition and an identifier of the VM included in each resource partition may be recorded in the partition information.
For example, assume that three schedulers S0, S1, and S2 are provided in the cloud platform as shown in fig. 1B, where the resource partition corresponding to the scheduler S0 is S00, the resource partition corresponding to the scheduler S1 is S10, and the resource partition corresponding to the scheduler S2 is S20. As can be seen from fig. 1B, the resource partition S20 includes more VMs and the resource partition S00 includes fewer VMs. Accordingly, the partition information acquired by the master node 00 may be as shown in table 1. As can be seen from table 1, the resource partition S00 includes 10 VMs, whose identifiers are VM1 to VM10 in sequence; the resource partition S10 includes 12 VMs, whose identifiers are VM11 to VM22 in sequence; and the resource partition S20 includes 26 VMs, whose identifiers are VM23 to VM48 in sequence.
TABLE 1

Resource partition    VMs included
S00                   VM1-VM10
S10                   VM11-VM22
S20                   VM23-VM48
Step 103, detecting whether the cluster meets the partition adjustment condition according to the resource information of each VM in the cluster and the partition information.
When the master node detects that the cluster meets the partition adjustment condition, the master node may adjust the resource partitions, that is, execute step 104; when it detects that the cluster does not satisfy the partition adjustment condition, step 101 may continue to be performed, that is, the VM information of each VM in the cluster continues to be obtained.
In this embodiment of the present invention, as shown in fig. 3, the process of the master node detecting whether the cluster meets the partition adjustment condition may include:
step 1031, determining the resource utilization rate of each resource partition according to the resource information of each VM in the cluster and the partition information.
The resource usage rate of each resource partition may refer to the ratio of the amount of resources used by the resource partition to the total amount of resources occupied by the resource partition. Assume that the cluster includes N resource partitions (N is an integer greater than 1), and that the nth resource partition includes S_n VMs. The usage rate r_n of the nth resource partition may satisfy:

r_n = (Σ_{i=1}^{S_n} U_i) / (Σ_{i=1}^{S_n} T_i)

where U_i is the amount of resources currently used by the ith VM, T_i is the total resource amount of the ith VM, n is a positive integer not greater than N, and i is a positive integer not greater than S_n.
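As a minimal sketch of this per-partition computation (Python; the `vms` list and its `used`/`total` field names are illustrative, not taken from the patent):

```python
def partition_usage(vms):
    """r_n: ratio of resources used to total resources across a partition's VMs."""
    used = sum(vm["used"] for vm in vms)
    total = sum(vm["total"] for vm in vms)
    return used / total if total else 0.0

# A partition with two VMs: (6 + 2) / (10 + 10) = 0.4
rate = partition_usage([{"used": 6, "total": 10}, {"used": 2, "total": 10}])
```

Note that the ratio of sums differs from the average of per-VM ratios; the formula above weights larger VMs more heavily.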
Step 1032, when it is detected that the number of resource partitions whose resource utilization rates are greater than the utilization rate threshold is greater than the number threshold, it is determined that the cluster meets the partition adjustment condition.
In the embodiment of the invention, the usage threshold and the number threshold may be set manually by operation and maintenance personnel of the cloud platform. Alternatively, the usage threshold may be derived by the master node from historical statistics; for example, the master node may analyze the performance of each virtual machine at different resource usage rates and take the usage rate at which virtual machine performance begins to degrade sharply as the usage threshold. The number threshold may also be determined by the master node according to the number of current resource partitions; for example, the number threshold may be 10% or 30% of the number of current resource partitions. When the number threshold is calculated from the number of current resource partitions, the result should be rounded to an integer.
For example, assuming that the usage threshold is 80% and the number threshold is 1, when the master node 00 detects that the resource usage of more than one of the three resource partitions S00, S10, and S20 is greater than 80%, it may be determined that the cluster satisfies the partition adjustment condition. Alternatively, if the number of resource partitions in the current cluster is 10 and the number threshold is 30% of that number, the number threshold is 3; accordingly, the master node 00 may determine that the cluster satisfies the partition adjustment condition when it detects that the resource usage of more than 3 resource partitions is greater than 80%.
Step 1033, when it is detected that the number of resource partitions whose resource usage is greater than the usage threshold is not greater than the number threshold, determining that the cluster does not meet the partition adjustment condition.
For example, when the master node 00 detects that the resource utilization of each resource partition is not greater than 80%, it may be determined that the cluster does not satisfy the partition adjustment condition.
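The detection in steps 1032 and 1033 reduces to counting overloaded partitions (a sketch in Python; the rates and thresholds follow the 10-partition example above):

```python
def cluster_needs_adjustment(partition_rates, usage_threshold, count_threshold):
    """Steps 1032/1033: adjustment is required when the number of partitions
    whose usage exceeds the usage threshold is greater than the count threshold."""
    overloaded = sum(1 for r in partition_rates if r > usage_threshold)
    return overloaded > count_threshold

# 10 partitions, 4 of them above 80%, count threshold 3 -> adjust.
rates = [0.85, 0.90, 0.82, 0.81, 0.50, 0.40, 0.30, 0.20, 0.10, 0.60]
cluster_needs_adjustment(rates, 0.80, 3)  # True
```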
It should be noted that, since the resource of each VM may include at least one of a CPU resource, a memory resource, and a storage resource, the resource information of each VM may also include: at least one of CPU resource information, memory resource information and storage resource information. Accordingly, in step 1031, when the master node calculates the resource utilization rates, the master node may calculate the resource utilization rates corresponding to each type of information. For example, the CPU resource usage, the memory resource usage, and the storage resource usage of each resource partition may be calculated separately.
Further, the fact that the resource utilization is greater than the utilization threshold in step 1032 and step 1033 may refer to: the average value of the utilization rates of the resources corresponding to the information is larger than the utilization rate threshold; or, in the at least one type of information, the number of information items for which the usage rate of the corresponding resource is greater than the usage rate threshold is greater than the number threshold. The number threshold may be a preset fixed value, or may be determined by the master node according to the number of information included in the resource information, for example, the number threshold may be one third or two thirds of the number of information included in the resource information, and the number threshold should be an integer.
In addition, each resource may also correspond to one usage threshold, and the usage thresholds corresponding to the various resources may be different; accordingly, in step 1032 and step 1033, the resource usage rate of each resource may be compared with its corresponding usage rate threshold.
For example, assuming that the usage threshold is 80%, and the resource usage is greater than the usage threshold, it means: the resource information includes at least one type of information, and the utilization rate of the resource corresponding to any type of information is greater than the utilization rate threshold (i.e., the number threshold is 1). If the resource information of each VM includes CPU resource information, memory resource information, and storage resource information, and the CPU resource utilization of the resource partition S00 calculated by the master node is 85%, the memory resource utilization is 75%, and the storage resource utilization is 50%, then the master node 00 may determine that the resource utilization of the resource partition S00 is greater than the utilization threshold because the CPU resource utilization is greater than 80%.
Or, assuming that the usage threshold corresponding to the CPU resource is 80%, the usage threshold corresponding to the memory resource is 85%, the usage threshold corresponding to the storage resource is 90%, and the resource usage is greater than the usage threshold, it means that: the utilization rate of the resource corresponding to each information is greater than the utilization rate threshold corresponding to the information (i.e. the quantity threshold is 3). Then, when the master node calculates that the CPU resource utilization of the resource partition S00 is 85%, the memory resource utilization is 88%, and the storage resource utilization is 92%, then the master node 00 may determine that the resource utilization of the resource partition S00 is greater than the utilization threshold value because the utilization of the resource corresponding to each type of information is greater than the corresponding utilization threshold value.
It should be further noted that, in the embodiment of the present invention, when detecting whether the cluster meets the partition adjustment condition, the master node 00 may detect whether the resource usage rate of each resource partition is greater than the usage rate threshold, and may also determine whether the cluster meets the partition adjustment condition by detecting a balance degree of the resource usage rates of each resource partition. For example, the master node may calculate a variance of the resource usage rates of the resource partitions, and when the variance is greater than a preset variance threshold, it may be determined that the resource usage rates of the resource partitions are unbalanced, and it may be further determined that the cluster satisfies a partition adjustment condition; when the variance is not greater than the preset variance threshold, it can be determined that the resource utilization rates of the resource partitions are relatively balanced, and the cluster can be determined not to meet the partition adjustment condition without adjusting the resource partitions of the cluster.
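The variance-based balance check can be sketched as follows (Python; the variance threshold of 0.05 is an assumed example value, not from the patent):

```python
def usage_variance(rates):
    """Population variance of the per-partition resource usage rates."""
    mean = sum(rates) / len(rates)
    return sum((r - mean) ** 2 for r in rates) / len(rates)

def is_unbalanced(rates, variance_threshold):
    """Cluster satisfies the adjustment condition when usage is too spread out."""
    return usage_variance(rates) > variance_threshold

# Usage rates of 0.2 and 0.8 have variance 0.09; with a threshold of 0.05
# the partitions would be judged unbalanced.
```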
When the number of resource partitions whose resource usage is greater than the usage threshold exceeds the number threshold, the cluster resources are readjusted. This ensures the timeliness of cluster resource adjustment, effectively alleviates scheduling failures of the schedulers corresponding to overloaded resource partitions, and improves the scheduling effect of the schedulers.
Step 104, determining the remaining resource amount of each VM according to the resource information of each VM in the cluster, and determining the total amount of remaining resources of the cluster.
After the master node determines that the cluster meets the partition adjustment condition, it may begin readjusting the cluster's resources to balance the resource usage of each resource partition and thereby improve the utilization of cluster resources. Before performing the adjustment, the master node may determine the total amount of resources remaining in the cluster.
Since the resource information of each VM may include the total amount of resources of the VM and the amount of used resources, the master node 00 may calculate the remaining amount of resources of each VM based on the total amount of resources and the amount of used resources, and may further accumulate the remaining amount of resources of each VM to determine the total amount of remaining resources of the cluster.
Alternatively, the resource information reported by each VM to the master node 00 may be the remaining resource amount of the VM, and the master node 00 may directly calculate the total amount of remaining resources of the cluster based on the resource information reported by each VM.
Alternatively, the resource information reported by each VM to the master node 00 may only be the amount of resources currently used by the VM; in this case, the master node 00 may obtain the total resource amount of each VM from the database 10, and then calculate the remaining resource amount of each VM and the total remaining resources of the cluster.
It should be noted that, since the resource of each VM may include at least one of a CPU resource, a memory resource, and a storage resource, when the master node calculates the total amount of the remaining resources of the cluster, the master node may calculate the total amount of the remaining resources of each resource respectively. For example, the master node may calculate the total amount of the remaining resources of the CPU resources, the total amount of the remaining resources of the memory resources, and the total amount of the remaining resources of the storage resources of all the VMs in the cluster, respectively.
For example, as shown in fig. 1B, if 48 VMs are included in the cluster, the master node may calculate the total amount of remaining resources of the CPU resources, the total amount of remaining resources of the memory resources, and the total amount of remaining resources of the storage resources of the 48 VMs, respectively.
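The per-type accumulation can be sketched as follows (Python; the nested dictionary layout and the three resource names are assumptions for illustration):

```python
def remaining_totals(vms):
    """Per-type remaining totals: sum over all VMs of (total - used)."""
    totals = {"cpu": 0, "memory": 0, "storage": 0}
    for vm in vms:
        for res in totals:
            totals[res] += vm[res]["total"] - vm[res]["used"]
    return totals

cluster = [
    {"cpu": {"total": 8, "used": 6},
     "memory": {"total": 32, "used": 16},
     "storage": {"total": 100, "used": 40}},
    {"cpu": {"total": 4, "used": 1},
     "memory": {"total": 16, "used": 8},
     "storage": {"total": 200, "used": 50}},
]
remaining_totals(cluster)  # {'cpu': 5, 'memory': 24, 'storage': 210}
```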
Step 105, determining the physical location where each VM is deployed.
In the embodiment of the present invention, the VM information of each VM received by the master node may include location information of the VM, so the master node may determine the physical location where each VM is deployed based on the obtained VM information. Alternatively, the master node 00 may directly obtain the location information of each VM from the database, and then determine the physical location where each VM is deployed.
Step 106, adjusting the VMs included in at least one resource partition based on the remaining resource amount of each VM, the total remaining resources of the cluster, and the physical location where each VM is deployed.
Further, the master node may adjust the VMs included in at least one of the plurality of resource partitions based on the principle of balanced resource allocation, so that the remaining resource amount occupied by each resource partition meets a preset resource ratio, thereby ensuring balanced allocation of cluster resources. During adjustment, the master node may also take the physical location of each VM into account: when a first VM and a second VM have equal remaining resource amounts and either could be placed into a first resource partition, the master node may prefer the VM whose average physical distance to the VMs already in the first resource partition is smaller. That is, VMs that are physically closer may be placed into the same resource partition as far as possible, so as to reduce the communication delay between VMs in the same resource partition and, in turn, the communication delay of applications or application components.
The preset resource ratio may be an equal split; that is, the master node 00 may adjust the VMs included in at least one resource partition so that the remaining resource amounts occupied by the resource partitions are equal. Alternatively, the preset resource ratio may be determined according to the historical allocation amount of each scheduler. For example, at every preset time interval the master node may count the historical allocation amount of each scheduler within that interval, and may determine the resource ratio of the resource partition corresponding to each scheduler based on the counted amounts. The resource ratio may be positively correlated with the proportion of each scheduler's historical allocation amount; that is, the resource partition corresponding to a scheduler with a higher historical allocation amount may be allocated a higher proportion of the total remaining resources, so as to ensure the rationality of cluster resource allocation and improve resource utilization.
For example, assuming that three schedulers S0, S1, and S2 are provided in the cloud platform, and the master node 00 counts the historical allocation amounts of the schedulers once a week, if the ratio of the historical allocation amounts of the three schedulers counted most recently is 1:2:3, the master node 00 may determine that the resource ratio of the three resource partitions corresponding to the three schedulers is 1:2:3.
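Deriving the resource ratio from historical allocation amounts can be sketched as follows (Python; scheduler names and amounts are illustrative):

```python
def resource_ratio(history):
    """Share of the remaining resources for each scheduler's partition,
    proportional to that scheduler's historical allocation amount."""
    total = sum(history.values())
    return {scheduler: amount / total for scheduler, amount in history.items()}

# A historical allocation ratio of 1:2:3 yields shares 1/6, 1/3, 1/2.
shares = resource_ratio({"S0": 100, "S1": 200, "S2": 300})
```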
In an optional implementation manner of the embodiment of the present invention, the master node may determine the remaining resource amount that each resource partition should occupy according to the current total remaining resource amount of the cluster and the preset resource ratio; further, the master node may determine a resource amount difference value of each resource partition based on the remaining resource amount actually occupied by each resource partition at present, and may further adjust VMs included in each resource partition based on the resource amount difference value, the remaining resource amount of each VM, and the physical location where each VM is deployed, so that the ratio of the resource amounts of each resource partition satisfies the preset resource ratio (that is, the resource amount difference value of each resource partition is 0). Of course, for a resource partition with a resource amount difference of 0, the master node may not need to adjust the VMs included in the resource partition.
In another optional implementation manner of the embodiment of the present invention, referring to fig. 4, the method for adjusting the VMs included in at least one resource partition based on the remaining resource amount of each VM, the remaining resource total amount of the cluster, and the physical location where each VM is deployed may include:
step 1061, dividing the remaining resources of the cluster into N resources according to a preset resource ratio.
Here N is the number of resource partitions included in the cluster, and each resource share corresponds to one resource partition; that is, each share can be allocated to its corresponding resource partition. In the embodiment of the present invention, the master node may determine the amount of each resource share according to the current total remaining resources of the cluster and the preset resource ratio. Further, for any share, the master node may select, according to the remaining resource amount of each VM in the cluster, at least one candidate group of VMs whose total remaining resources equal the amount of that share (or differ from it by less than a preset difference threshold), where each group includes at least one VM. Finally, the master node may determine the group with the shortest average physical distance between its VMs as the VMs providing that share.
For example, the master node 00 may divide the current remaining resources in the cluster into three shares at a ratio of 1:2:3. If the amount of the first share, corresponding to the resource partition S00, is P0, the amount of the second share, corresponding to the resource partition S10, is P1, and the amount of the third share, corresponding to the resource partition S20, is P2, then the three amounts satisfy P0:P1:P2 = 1:2:3. If the 48 VMs in the cluster comprise 6 first VMs and 40 second VMs, where the remaining resource amount of each first VM is P0/6 and that of each second VM is P0/8, the master node may select the 6 first VMs to provide the first share, 16 second VMs to provide the second share, and 24 second VMs to provide the third share. Equally, 8 second VMs could provide the first share, the 6 first VMs plus 8 second VMs the second share, and 24 second VMs the third share.
In addition, in the selection process, the master node may make VMs with closer physical locations provide the same resource as much as possible. For example, if 16 second VMs of the 40 second VMs are deployed in the same machine room, and the remaining 24 second VMs are deployed in another machine room, the master node may select the 16 second VMs deployed in the same machine room to provide the second resource, and select the 24 second VMs deployed in another machine room to provide the third resource.
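A greedy approximation of this location-aware selection might look as follows (Python; the `room` field and the sort-then-fill strategy are simplifications of the patent's average-physical-distance comparison, and all names are assumptions):

```python
def assign_shares(vms, share_amounts):
    """Order VMs so that co-located ones are adjacent, then fill each
    resource share in turn, so VMs in the same machine room tend to
    end up providing the same share."""
    ordered = sorted(vms, key=lambda v: v["room"])
    shares, idx = [], 0
    for amount in share_amounts:
        group, acc = [], 0
        while idx < len(ordered) and acc < amount:
            group.append(ordered[idx]["id"])
            acc += ordered[idx]["remaining"]
            idx += 1
        shares.append(group)
    return shares

vms = [
    {"id": "VM1", "room": "A", "remaining": 1},
    {"id": "VM2", "room": "B", "remaining": 1},
    {"id": "VM3", "room": "A", "remaining": 1},
    {"id": "VM4", "room": "B", "remaining": 1},
]
assign_shares(vms, [2, 2])  # [['VM1', 'VM3'], ['VM2', 'VM4']]
```

The two VMs in room A land in the first share and the two in room B in the second, matching the machine-room example above.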
Step 1062, partitioning at least one VM for providing each resource into corresponding resource partitions.
Further, the master node 00 may divide at least one VM for providing each resource into corresponding resource partitions according to the division result of the remaining resources in the cluster, so as to adjust the VM included in at least one resource partition of the plurality of resource partitions.
For example, the master node 00 may partition 6 first VMs for providing a first share of resources to resource partition S00, 16 second VMs for providing a second share of resources to resource partition S10, and 24 second VMs for providing the third share of resources to resource partition S20.
It should be noted that, since the VM information acquired by the master node may further include the state information of each VM, the master node may first detect, before performing resource adjustment, whether each VM is in the normal state according to the acquired state information. The master node may then adjust only the resource partitions to which VMs in the normal state belong, without adjusting VMs in the downtime state. That is, the VMs referred to in steps 103 to 106 above may all be VMs in the normal state.
It should also be noted that, in step 104, the master node may separately calculate the total remaining amount of each resource type among the at least one type included in the cluster resources. Therefore, in step 106, when adjusting the cluster resources, as one implementation the master node may perform the adjustment with the total remaining amount of a designated resource type as the reference. The designated type may be chosen arbitrarily from the at least one type, for example the CPU resource. Alternatively, the master node may calculate the balance degree of each resource type across the resource partitions and designate the least balanced type; for example, the master node may compute, for each resource type, the variance of its remaining amounts across the resource partitions, and designate the type with the largest variance.
As another implementation manner, the master node may calculate the average of the total remaining amounts of the at least one resource type, as well as the average remaining amount of the at least one resource type on each VM, and perform the cluster resource adjustment based on these average values.
Step 107, updating the partition information of the cluster according to the adjustment result.
Further, the master node 00 may update the partition information of the cluster according to the result of partition adjustment, so that each scheduler may execute the scheduling task in the corresponding resource partition according to the updated partition information. As shown in fig. 1A and 1C, after completing the readjustment of the cluster resource, the policy module 03 may update the partition information stored in the database 10, and may send the updated partition information to the management module 01. The management module 01 may obtain VM information of each VM from the database 10 after receiving the updated partition information, and may further update the partition information stored in each cache according to the updated partition information and the VM information of each VM. The partition information stored in each cache may include an identifier of a VM included in the resource partition corresponding to the cache, and may further include VM information of each VM included in the resource partition, for example, may include resource information and state information of the VM, and the like. Each scheduler may execute a scheduling task in the corresponding resource partition according to the updated partition information in the cache.
For example, assuming that after the cluster resources are readjusted as shown in fig. 5, the resource partition S00 corresponding to the scheduler S0 includes 16 VMs, the resource partition S10 corresponding to the scheduler S1 includes 17 VMs, and the resource partition S20 corresponding to the scheduler S2 includes 15 VMs, each scheduler may execute scheduling tasks in its corresponding resource partition.
According to the method provided by the embodiment of the invention, each scheduler can independently execute the scheduling task in the corresponding resource partition, so that the problem of scheduling failure caused by scheduling conflict can be avoided; and because the main node can dynamically adjust the cluster resources based on the acquired VM information, the balanced distribution of the cluster resources can be ensured, the resource utilization rate is effectively improved, and the scheduling effect of the scheduler is further improved.
As an optional implementation manner, the VM information of each VM acquired by the master node 00 may further include: type information of the VM. Then, in step 104, the process of determining the total amount of the remaining resources of the cluster by the master node may include:
step 1041a, according to the type information of each VM, dividing the plurality of VMs included in the cluster into at least two groups of resource groups.
And the types of at least one VM included in each group of resource groups are consistent. Assuming that the cluster includes K (K is an integer greater than 1) types of VMs, the master node may divide VMs of the same type among the plurality of VMs in the cluster into a set of resource groups, and thus K sets of resource groups may be obtained.
Step 1042a, respectively determining the total amount of the remaining resources of at least one VM included in each group of resource groups.
Further, when determining the total amount of the remaining resources of the cluster, the master node 00 may calculate the total amount of the remaining resources of each resource group in the K resource groups, respectively.
Accordingly, in step 1061, the process of adjusting the resources by the master node may include:
step 1061a, dividing the remaining resources of each group of resource groups into N sub-resources according to the preset resource ratio.
Wherein each share of sub-resources may be provided by at least one VM and each share of sub-resources corresponds to one resource partition.
Step 1061b, determining at least two sub-resources corresponding to the same resource partition as one resource.
If the plurality of VMs in the cluster are divided into K resource groups, then after the remaining resources of each resource group are divided into N sub-shares, each resource partition is allocated K sub-shares, and these K sub-shares together constitute the resource share allocated to that partition. The amount L_n of the resource share allocated to the nth resource partition may satisfy:

L_n = Σ_{k=1}^{K} l_{k,n}

where l_{k,n} is the amount of the sub-share allocated by the master node to the nth resource partition from the kth resource group, k is a positive integer not greater than K, and n is a positive integer not greater than N.
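The two-level split of steps 1061a and 1061b can be sketched as follows (Python; the group amounts and the ratio are illustrative values):

```python
def partition_allocations(group_remaining, ratio):
    """Split each heterogeneous resource group's remaining amount into N
    sub-shares by the preset ratio, then sum the sub-shares destined for
    the same partition: L_n = sum over k of l_{k,n}."""
    total_ratio = sum(ratio)
    allocations = [0.0] * len(ratio)
    for amount in group_remaining:          # one entry per VM-type group k
        for n, weight in enumerate(ratio):  # sub-share l_{k,n}
            allocations[n] += amount * weight / total_ratio
    return allocations

# Two VM-type groups with 60 and 120 units remaining, ratio 1:2:3.
partition_allocations([60, 120], [1, 2, 3])  # [30.0, 60.0, 90.0]
```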
In the embodiment of the invention, the cluster resources are adjusted based on the types of the VMs, so that the balanced distribution of the resources of different heterogeneous types in the cluster can be ensured, and the balance of the resource distribution in the cluster is further improved.
Optionally, as another optional implementation manner, the step 104 may include:
and 1041b, determining the residual resource amount of each VM according to the resource information of each VM in the cluster.
Step 1042b, determining at least one target VM based on the remaining resource amount of each VM.
The remaining resource amount of each target VM is greater than a preset threshold. The preset threshold may be a fixed value preconfigured in the master node; alternatively, it may be determined by the master node according to the total resource amount of each VM, for example, 10% of the VM's total resource amount; or it may be adjusted manually by operation and maintenance personnel of the cloud platform.
For example, assuming that the preset threshold is 0, the master node 00 may determine a VM having a remaining resource in the cluster as a target VM.
Step 1043b, determining the sum of the remaining resource amounts of the at least one target VM as the total remaining resources of the cluster.
Further, the master node may calculate a sum of the remaining resource amounts of the at least one target VM and determine the sum of the remaining resource amounts of the at least one target VM as a total remaining resource amount of the cluster.
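The target-VM filter of steps 1041b to 1043b can be sketched as follows (Python; the 10%-of-total threshold is one of the options mentioned above, and all field names are assumptions):

```python
def select_targets(vms, fraction=0.1):
    """Keep only VMs whose remaining amount exceeds a threshold (here assumed
    to be 10% of each VM's total); the remaining-resource pool is summed over
    the targets only."""
    targets = [vm for vm in vms
               if vm["total"] - vm["used"] > fraction * vm["total"]]
    pool = sum(vm["total"] - vm["used"] for vm in targets)
    return targets, pool

vms = [
    {"id": "VM1", "total": 10, "used": 2},    # 8 remaining -> target
    {"id": "VM2", "total": 10, "used": 9.5},  # 0.5 remaining -> skipped
]
targets, pool = select_targets(vms)
```

Only VM1 participates in the adjustment; VM2's partition membership is left untouched, which is exactly how the patent limits the scope of repartitioning.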
Accordingly, in step 105, the master node only needs to determine the physical location of each target VM; in step 106, the process of adjusting the resource by the master node may include:
and adjusting the target VM included in at least one resource partition based on the residual resource amount of each target VM, the total residual resource amount of the cluster and the physical position of each target VM.
In addition, the methods shown in steps 1041b to 1043b may be performed before step 1041 a. Correspondingly, in step 1041a, the master node may divide the multiple target VMs included in the cluster into at least two groups of resource groups according to the type information of each target VM; in step 1042a, the master node may then determine a total amount of remaining resources of at least one target VM included in each set of resource groups.
In the embodiment of the present invention, the master node may only adjust the resource partition to which the at least one target VM belongs, and for the VM whose remaining resource amount is less than the preset threshold, it may not need to adjust the partition to which the VM belongs, so that the change degree of the resource partition may be reduced as much as possible, and the adjustment efficiency of the resource partition is improved.
It should be noted that, in the embodiment of the present invention, the master node may trigger the adjustment of cluster resources based on the resource usage of each resource partition, and may also trigger the adjustment in the following manners:
an alternative triggering method: the master node may periodically adjust the resources of the cluster based on a preset adjustment period. Accordingly, in step 101, the master node may periodically obtain VM information of each VM in the cluster according to a preset adjustment period. Thereafter, the master node may perform the methods shown in steps 102 to 107 in sequence to adjust the cluster resources.
The adjustment period may be a preset fixed value, or may be set by an operation and maintenance worker of the cloud platform, for example, the adjustment period may be 12 hours or one week. Assuming that the adjustment period is one week, the master node may perform adjustment on the cluster resources once every other week by the methods shown in the above steps 101 to 107. After the master node 00 adjusts the cluster resources once based on the resource division shown in fig. 5, the resource division of the cluster may be as shown in fig. 6.
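The periodic trigger could be sketched as follows; `adjust_fn` stands in for steps 101 to 107, and all names are illustrative rather than taken from the patent:

```python
import time

def run_periodic_adjustment(adjust_fn, period_seconds, max_cycles=None):
    """Run the cluster adjustment routine once per adjustment period.

    `period_seconds` stands in for the configured adjustment period
    (e.g. 12 hours or one week); `max_cycles` exists only so the loop
    can terminate in tests, a real master node would loop forever.
    """
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        adjust_fn()                 # steps 101 to 107
        cycles += 1
        time.sleep(period_seconds)  # wait for the next adjustment period
```

In a real deployment the period would more likely come from a timer or scheduler service than a sleeping loop; the sketch only shows the control flow.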
Another optional triggering manner: the master node may also adjust the resources of the cluster when detecting that the number of schedulers set in the cloud platform changes. Correspondingly, before step 101, the master node may monitor the number of schedulers set in the cloud platform in real time; then, in step 101, the master node may obtain the VM information of each VM in the cluster when detecting that the number of schedulers set in the cloud platform changes. Thereafter, the master node may perform the methods shown in steps 102 to 107 in sequence to adjust the cluster resources.
It should be noted that, after detecting that the number of schedulers has increased, the master node may also create a corresponding cache for each newly added scheduler; accordingly, after detecting that the number of schedulers has decreased, the master node may also delete the cache corresponding to each removed scheduler.
For the two triggering manners above, step 103 in the foregoing embodiment may also be omitted; that is, after acquiring the VM information and the partition information, the master node may directly adjust the cluster resources by the methods shown in steps 104 to 107.
Of course, the master node may also combine the foregoing triggering manners; that is, when detecting that the cloud platform meets any one of the triggering conditions, the master node may trigger the adjustment of the cluster resources. In this case, when entering each new adjustment period, the master node may first detect whether an adjustment of the cluster resources was already triggered in the previous adjustment period in another manner (for example, by a change in the resource utilization rate or in the number of schedulers). If no resource adjustment operation triggered in another manner was executed in the previous adjustment period, the master node may adjust the resources of the cluster by the methods shown in steps 101 to 107 (where the operation shown in step 103 may be omitted); if at least one resource adjustment operation triggered in another manner was executed in the previous adjustment period, the master node may skip the current resource adjustment operation and wait for the next adjustment period.
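One way to sketch this combined triggering logic, including the skip-if-already-adjusted rule, with purely hypothetical names:

```python
class AdjustmentCoordinator:
    """Combine event-driven and periodic triggers (illustrative sketch).

    Event-driven triggers (utilization change, scheduler-count change)
    run the adjustment immediately and mark the current period as done;
    the periodic trigger then skips its turn if any other trigger
    already fired during the previous adjustment period.
    """

    def __init__(self, adjust_fn):
        self.adjust_fn = adjust_fn          # stands in for steps 101-107
        self.adjusted_this_period = False

    def on_event_trigger(self):
        # e.g. utilization exceeded a threshold, or scheduler count changed
        self.adjust_fn()
        self.adjusted_this_period = True

    def on_period_start(self):
        # Called when a new adjustment period begins. Returns True if
        # the periodic adjustment actually ran.
        if self.adjusted_this_period:
            self.adjusted_this_period = False  # skip; wait for next period
            return False
        self.adjust_fn()
        return True
```

The flag-reset placement means one event-driven adjustment suppresses exactly one periodic adjustment, matching the "skip the current operation and wait for the next period" behavior described above.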
Further taking the architectures shown in fig. 1A and fig. 1C as an example to introduce the cluster resource adjustment method provided in the embodiment of the present invention, referring to fig. 7, when the master node determines whether to trigger resource adjustment according to the resource utilization rate of each resource partition in the cluster, the method may include:
step 201, a collection module obtains VM information of each VM in a cluster.
Step 202, the collection module sends the VM information to the policy module.
Step 203, the collection module sends the VM information to the database.
The collection module may also send the obtained VM information to a database so that the database updates the VM information for each VM it stores.
Step 204, the policy module acquires the current partition information of the cluster from the database.
Step 205, the policy module detects whether the cluster meets the partition adjustment condition.
When the policy module detects that the cluster satisfies the partition adjustment condition, step 206 may be executed; otherwise, no operation may be performed, or an instruction indicating not to adjust the resource partition may be sent to the management module.
Step 206, the policy module adjusts the VM included in at least one resource partition according to the obtained VM information.
Step 207, the policy module updates the partition information stored in the database.
Step 208, the policy module sends the adjusted partition information to the management module.
Step 209, the management module obtains VM information of each VM from the database.
Step 210, the management module updates the partition information stored in the at least one cache.
The implementation process of step 201 to step 210 may refer to corresponding steps in the embodiments shown in fig. 2 to fig. 4, and details are not repeated here.
Referring to fig. 8, when the master node triggers resource adjustment according to a preset adjustment period, the method may include:
step 301, a timer in the policy module counts time.
In an embodiment of the present invention, the timer may be a countdown timer whose countdown duration is the preset adjustment period; when the timer expires (that is, the countdown reaches 0), step 302 may be executed.
Step 302, the policy module sends an adjustment instruction to the collection module.
Step 303, the collection module obtains VM information of each VM in the cluster according to the adjustment instruction.
Step 304, the collection module sends the VM information to the policy module.
Step 305, the collection module sends the VM information to the database.
The database may update the VM information of each VM it stores based on the received VM information of each VM.
Step 306, the policy module obtains the current partition information of the cluster from the database.
Step 307, the policy module adjusts the VM included in at least one resource partition according to the obtained VM information.
Step 308, the policy module updates the partition information stored in the database.
Step 309, the policy module sends the adjusted partition information to the management module.
Step 310, the management module obtains VM information of each VM from the database.
Step 311, the management module updates the partition information stored in the at least one cache.
The implementation process of steps 301 to 311 may refer to corresponding steps in the embodiments shown in fig. 2 to 4, and details are not repeated here.
Referring to fig. 9, when the master node triggers resource adjustment according to a change in the number of schedulers, the method may include:
step 401, the management module detects whether the number of schedulers in the cloud platform changes.
When a change in the number of schedulers is detected, step 402 may be executed; otherwise, the management module may continue to monitor the number of schedulers, that is, continue to perform step 401. Moreover, when the number of schedulers increases, the management module may also create a corresponding cache for each newly added scheduler; when the number of schedulers decreases, the management module may delete the cache corresponding to each removed scheduler.
Step 402, the management module sends an adjustment instruction to the policy module.
Step 403, the policy module sends an adjustment instruction to the collection module.
Step 404, the collection module acquires the VM information of each VM in the cluster according to the adjustment instruction.
Step 405, the collection module sends the VM information to the policy module.
Step 406, the collection module sends the VM information to the database.
The database may update the VM information of each VM it stores based on the received VM information of each VM.
Step 407, the policy module obtains the current partition information of the cluster from the database.
Step 408, the policy module adjusts the VM included in at least one resource partition according to the obtained VM information.
Step 409, the policy module updates the partition information stored in the database.
Step 410, the policy module sends the adjusted partition information to the management module.
Step 411, the management module obtains VM information of each VM from the database.
Step 412, the management module updates the partition information stored in the at least one cache.
The implementation process of steps 401 to 412 may refer to corresponding steps in the embodiments shown in fig. 2 to fig. 4, and details are not repeated here.
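The cache bookkeeping that accompanies a scheduler-count change (step 401 and the create/delete behavior described above) might be sketched as follows; all names here are illustrative, not the patent's:

```python
def sync_scheduler_caches(caches, scheduler_ids, trigger_adjustment):
    """Reconcile per-scheduler caches with the current scheduler set.

    `caches` maps scheduler id -> cache (here just a dict). When the
    set of schedulers changes, a cache is created for each newly added
    scheduler, the cache of each removed scheduler is deleted, and a
    resource adjustment is triggered. Returns whether anything changed.
    """
    current = set(scheduler_ids)
    known = set(caches)
    changed = current != known
    for sid in current - known:   # newly added schedulers
        caches[sid] = {}          # create a corresponding cache
    for sid in known - current:   # removed schedulers
        del caches[sid]           # delete the corresponding cache
    if changed:
        trigger_adjustment()      # kicks off steps 402 onward
    return changed
```

A management module could call this each time it polls the scheduler configuration; unchanged scheduler sets cost only two set comparisons.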
It should be noted that the order of the steps of the cluster resource adjustment method provided in the embodiment of the present invention may be appropriately adjusted, and steps may also be added or removed as required. For example, step 102 may be omitted as appropriate; that is, the master node may not consider the current partition information when performing resource adjustment, and may directly adjust the VM included in at least one resource partition according to the VM information of each VM. Alternatively, step 103 may be omitted as appropriate; that is, the master node may directly adjust the cluster resources after acquiring the VM information and the partition information. Alternatively, step 105 may be omitted as appropriate; that is, in step 106, the master node may adjust the VMs included in at least one resource partition based only on the remaining resource amount of each VM and the total remaining resource amount of the cluster. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and details are not described here again.
In summary, embodiments of the present invention provide a method for adjusting resources of a cluster, where for a cluster including multiple resource partitions, the method provided in the embodiments of the present invention may obtain VM information of each VM in the cluster, adjust a VM included in at least one resource partition according to the obtained VM information, and update partition information of the cluster according to an adjustment result, so that each scheduler may execute a scheduling task in a corresponding resource partition according to the adjusted partition information. In the method provided by the embodiment of the invention, each scheduler can independently execute the scheduling task in the corresponding resource partition, so that the problem of scheduling failure caused by scheduling conflict can be effectively avoided; and because the resources of the cluster can be dynamically adjusted, the balanced distribution of the cluster resources in each resource partition can be ensured, the resource utilization rate of each resource partition is effectively balanced, and the utilization rate of the cluster resources is improved.
Fig. 10 is a schematic structural diagram of a resource adjustment apparatus of a cluster according to an embodiment of the present invention, where the apparatus may be configured in a master node 00 in the cloud platform shown in fig. 1A or fig. 1C, the cluster includes a plurality of resource partitions, each resource partition includes at least one virtual machine VM, and each resource partition corresponds to one scheduler. Referring to fig. 10, the apparatus may include:
a first obtaining module 501, configured to implement the method in step 101 in the embodiment shown in fig. 2.
An adjusting module 502, configured to adjust a VM included in at least one resource partition according to the obtained VM information.
An updating module 503, configured to implement the method in step 107 in the embodiment shown in fig. 2.
Optionally, the VM information may include: resource information; fig. 11 is a schematic structural diagram of an adjusting module 502 according to an embodiment of the present invention, and referring to fig. 11, the adjusting module 502 may include:
the first determining submodule 5021 is used for implementing the method in step 104 in the embodiment shown in fig. 2.
The adjusting submodule 5022 is configured to adjust the VMs included in at least one resource partition based on the remaining resource amount of each VM and the total remaining resource amount, so that the remaining resource amount occupied by each resource partition meets a preset resource ratio.
Optionally, the adjusting sub-module 5022 may be used to implement the methods from step 1061 to step 1062 in the embodiment shown in fig. 4.
Optionally, the VM information may further include: type information of the VM;
the first determining submodule 5021 is configured to:
dividing a plurality of VMs included in the cluster into at least two groups of resource groups according to the type information of each VM, wherein the types of at least one VM included in each group of resource groups are consistent;
respectively determining the total amount of the residual resources of at least one VM included in each group of resource groups;
accordingly, the adjustment submodule 5022 may be configured to:
dividing the residual resources of each group of resources into N parts of sub-resources according to the preset resource ratio, wherein each part of sub-resources is provided by at least one VM and corresponds to one resource partition;
at least two sub-resources corresponding to the same resource partition are determined as one resource.
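The group-then-split scheme above might be sketched as follows; the greedy assignment is an assumption standing in for whatever split actually satisfies the preset resource ratio, and every name is illustrative:

```python
from collections import defaultdict

def partition_by_type(vms, ratios):
    """Split each type group's remaining resources across N partitions.

    `vms` is a list of (vm_id, vm_type, remaining); `ratios` is a list
    of N positive weights (the preset resource ratio). Within each type
    group, VMs are greedily assigned to the partition currently furthest
    below its target share, so each partition receives a sub-resource
    of every type; the sub-resources of one partition together form its
    share of the cluster's remaining resources.
    """
    groups = defaultdict(list)
    for vm_id, vm_type, remaining in vms:
        groups[vm_type].append((vm_id, remaining))

    n = len(ratios)
    assignment = defaultdict(list)  # partition index -> vm ids
    for vm_list in groups.values():
        filled = [0.0] * n          # resources assigned per partition, this type
        # Place larger VMs first so the shares converge toward the ratio.
        for vm_id, remaining in sorted(vm_list, key=lambda x: -x[1]):
            target = min(range(n), key=lambda i: filled[i] / ratios[i])
            assignment[target].append(vm_id)
            filled[target] += remaining
    return dict(assignment)
```

With two equal-weight partitions and two VMs of each of two types, each partition ends up with one VM of each type, i.e. the per-type sub-resources are balanced before being merged per partition.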
Optionally, as shown in fig. 11, the adjusting module 502 may further include:
a second determination submodule 5023 is used to implement the method of step 105 in the embodiment shown in fig. 2.
Accordingly, the adjustment submodule 5022 may be used to implement the method of step 106 in the embodiment shown in fig. 2.
Optionally, the first determining submodule 5021 may be configured to:
determining the residual resource amount of each VM according to the resource information of each VM in the cluster;
determining at least one target VM based on the residual resource amount of each VM, wherein the residual resource amount of each target VM is larger than a preset threshold value;
and determining the sum of the residual resource amount of the at least one target VM as the total residual resource amount of the cluster.
Accordingly, the adjustment submodule 5022 may be configured to:
and adjusting the target VM included in at least one resource partition based on the residual resource amount of each target VM and the total residual resource amount.
Optionally, the VM information includes: resource information; referring to fig. 12, the apparatus may further include:
a second obtaining module 504, configured to implement the method in step 102 in the embodiment shown in fig. 2.
A detection module 505, configured to implement the method in step 103 in the embodiment shown in fig. 2.
Accordingly, the adjustment module 502 may be configured to: and when detecting that the cluster meets the partition adjusting condition, adjusting the VM included in each resource partition according to the obtained VM information.
Optionally, the detection module 505 may be configured to implement the method from step 1031 to step 1033 in the embodiment shown in fig. 3.
Optionally, the resource information includes: at least one of processor resource information, memory resource information, and storage resource information. That the resource utilization rate is greater than the utilization rate threshold means: the average value of the utilization rates of the resources corresponding to the at least one type of information is greater than the utilization rate threshold; or, among the at least one type of information, the number of information items whose corresponding resource utilization rate is greater than the utilization rate threshold is greater than the number threshold.
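The two criteria above can be sketched as a single check; the resource kinds, names, and dict representation are illustrative assumptions:

```python
def usage_exceeds_threshold(usages, usage_threshold, count_threshold):
    """Decide whether a resource partition's usage counts as 'high'.

    `usages` maps a resource kind (e.g. 'cpu', 'memory', 'storage') to
    its utilization ratio in [0, 1]. Usage is high if either the mean
    utilization across kinds exceeds the threshold, or more kinds than
    `count_threshold` individually exceed it.
    """
    values = list(usages.values())
    mean_high = sum(values) / len(values) > usage_threshold
    count_high = sum(v > usage_threshold for v in values) > count_threshold
    return mean_high or count_high
```

The master node would run this per partition and compare the count of "high" partitions against the number threshold to decide whether the partition adjustment condition is met.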
Optionally, the first obtaining module 501 may be configured to:
periodically acquiring VM information of each VM in the cluster according to a preset adjusting period;
or when detecting that the number of schedulers arranged in the cloud platform changes, acquiring VM information of each VM in the cluster.
It should be noted that the functions of the first obtaining module 501 in the above apparatus embodiment may be the same as the functions of the collecting module 02 in the master node 00 shown in fig. 1A or fig. 1C, and the functions of the adjusting module 502, the updating module 503, the second obtaining module 504, and the detecting module 505 may be the same as the functions of the policy module 03 in the master node 00 shown in fig. 1A or fig. 1C.
In summary, the present invention provides a resource adjustment apparatus for a cluster, where for a cluster including multiple resource partitions, the apparatus provided in this embodiment of the present invention may obtain VM information of each VM in the cluster, adjust a VM included in at least one resource partition according to the obtained VM information, and update partition information of the cluster according to an adjustment result, so that each scheduler may execute a scheduling task in a corresponding resource partition according to the adjusted partition information. Because each scheduler can independently execute the scheduling task in the corresponding resource partition, the problem of scheduling failure caused by scheduling conflict can be effectively avoided; and because the resources of the cluster can be dynamically adjusted, the balanced distribution of the cluster resources in each resource partition can be ensured, the resource utilization rate of each resource partition is effectively balanced, and the utilization rate of the cluster resources is further improved.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs operations has been described in detail in the method embodiments, and details are not described here again.
Fig. 13 is a schematic structural diagram of a cluster resource adjustment apparatus 600 provided in an embodiment of the present application. Referring to fig. 13, the cluster resource adjustment apparatus 600 may include: a processor 610, a communication interface 620, and a memory 630, where the communication interface 620 and the memory 630 are respectively connected to the processor 610; for example, as shown in fig. 13, the communication interface 620 and the memory 630 are connected to the processor 610 through a bus 640.
The processor 610 may be a Central Processing Unit (CPU), and the processor 610 includes one or more processing cores. The processor 610 executes various functional applications and data processing by running software programs.
There may be multiple communication interfaces 620; the communication interface 620 is used for the cluster resource adjustment apparatus 600 to communicate with external devices, such as a display or a third-party device (e.g., a storage device or a mobile terminal).
The memory 630 may include, but is not limited to: random Access Memory (RAM), Read Only Memory (ROM), Erasable Programmable Read Only Memory (EPROM), flash memory, optical memory. The memory 630 is responsible for information storage, e.g., the memory 630 is used to store software programs.
Optionally, the resource adjusting apparatus 600 of the cluster may further include: an input/output (I/O) interface (not shown in FIG. 13). The I/O interface is coupled to the processor 610, the communication interface 620, and the memory 630. The I/O interface may be, for example, a Universal Serial Bus (USB).
In the embodiment of the present application, the processor 610 is configured to execute the instructions stored in the memory 630; by executing the instructions, the processor 610 implements the cluster resource adjustment method provided in the foregoing method embodiments.
An embodiment of the present invention provides a cloud platform, and as shown in fig. 1A and 1C, the cloud platform may include: a cluster, a plurality of schedulers, and a resource adjustment apparatus of the cluster as shown in fig. 10, fig. 12 or fig. 13, which may be deployed in the master node 00.
An embodiment of the present invention provides a computer-readable storage medium having instructions stored therein; when the instructions run on a computer, the computer is caused to execute the cluster resource adjustment method provided in the foregoing method embodiments.
The embodiment of the present invention further provides a computer program product containing instructions, and when the computer program product runs on a computer, the computer is enabled to execute the method for adjusting resources of a cluster provided in the foregoing method embodiment.

Claims (19)

1. A method for adjusting resources of a cluster, characterized in that the cluster comprises a plurality of resource partitions, each resource partition comprises at least one virtual machine (VM), each resource partition corresponds to one scheduler, and the method is applied to a master node that is communicatively connected to each scheduler and each VM; the method comprises the following steps:
obtaining VM information of each VM in the cluster, wherein the VM information at least comprises: the identification of the VM, the resource information of the VM and the position information of the VM;
adjusting the VM included in at least one resource partition according to the acquired VM information;
updating partition information of the cluster according to an adjustment result, wherein the partition information is used for indicating a VM (virtual machine) included by each resource partition, and each scheduler is used for executing a scheduling task in the corresponding resource partition according to the partition information;
wherein the adjusting the VM included in at least one resource partition according to the obtained VM information includes:
determining the residual resource amount of each VM according to the resource information of each VM in the cluster, and determining the total residual resource amount of the cluster;
determining the physical position deployed by each VM according to the position information of each VM;
adjusting the VMs included in at least one resource partition based on the residual resource amount of each VM, the total amount of the residual resources, and the physical position at which each VM is deployed, so that the residual resource amount occupied by each resource partition meets a preset resource ratio, wherein the preset resource ratio is positively correlated with the historical adjustment amount of each scheduler;
wherein, for a first VM and a second VM that have equal residual resource amounts and are adjusted to different resource partitions, the average physical distance between the first VM and each VM in the first resource partition to which the first VM belongs is smaller than the average physical distance between the second VM and each VM in the first resource partition.
2. The method of claim 1, wherein the adjusting the VMs included in at least one resource partition based on the residual resource amount of each VM and the total residual resource amount comprises:
dividing the remaining resources of the cluster into N parts of resources according to the preset resource ratio, wherein each part of resources is provided by at least one VM, each part of resources corresponds to one resource partition, and N is the number of resource partitions included in the cluster;
and dividing the at least one VM that provides each part of resources into the corresponding resource partition.
3. The method of claim 2, wherein the VM information further comprises: type information of the VM;
the determining the total amount of remaining resources of the cluster comprises:
dividing a plurality of VMs included in the cluster into at least two groups of resource groups according to the type information of each VM, wherein the types of at least one VM included in each group of resource groups are consistent;
respectively determining the total amount of the residual resources of at least one VM included in each group of resource groups;
dividing the remaining resources of the cluster into N parts of resources according to the preset resource ratio, including:
dividing the residual resources of each group of resources into N parts of sub-resources according to the preset resource ratio, wherein each part of sub-resources is provided by at least one VM and corresponds to one resource partition;
at least two sub-resources corresponding to the same resource partition are determined as one resource.
4. The method of claim 1, wherein determining the amount of resources remaining for each VM and determining the total amount of resources remaining for the cluster based on the resource information for each VM in the cluster comprises:
determining the residual resource amount of each VM according to the resource information of each VM in the cluster;
determining at least one target VM based on the residual resource amount of each VM, wherein the residual resource amount of each target VM is larger than a preset threshold value;
determining the sum of the residual resource amount of the at least one target VM as the total residual resource amount of the cluster;
the adjusting the VM included in at least one resource partition based on the remaining resource amount and the total remaining resource amount of each VM includes:
and adjusting the target VM included in at least one resource partition based on the residual resource amount of each target VM and the total residual resource amount.
5. The method according to any of claims 1 to 4, wherein prior to said adjusting the VM comprised by the at least one resource partition, the method further comprises:
acquiring the partition information of the cluster;
detecting whether the cluster meets a partition adjustment condition or not according to the resource information of each VM in the cluster and the partition information;
the adjusting the partition information of the cluster according to the acquired VM information includes:
and when the cluster is detected to meet the partition adjusting conditions, adjusting the VM included in each resource partition according to the obtained VM information.
6. The method of claim 5, wherein the detecting whether the cluster satisfies a partition adjustment condition comprises:
determining the resource utilization rate of each resource partition according to the resource information of each VM in the cluster and the partition information, wherein the resource utilization rate is the ratio of the used resource amount of the resource partition to the occupied resource total amount;
when detecting that the number of resource partitions with the resource utilization rate larger than the utilization rate threshold is larger than the number threshold, determining that the cluster meets partition adjustment conditions;
and when detecting that the number of the resource partitions with the resource utilization rate larger than the utilization rate threshold is not larger than the number threshold, determining that the cluster does not meet the partition adjustment condition.
7. The method of claim 6, wherein the resource information comprises: at least one of processor resource information, memory resource information and storage resource information;
the resource usage being greater than a usage threshold comprises:
the average value of the utilization rates of the resources corresponding to the information is larger than the utilization rate threshold; or, in the at least one type of information, the number of information items of which the utilization rate of the corresponding resource is greater than the utilization rate threshold is greater than a number threshold.
8. The method of claim 5, wherein the detecting whether the cluster satisfies a partition adjustment condition comprises:
determining the resource utilization rate of each resource partition according to the resource information of each VM in the cluster and the partition information, wherein the resource utilization rate is the ratio of the used resource amount of the resource partition to the occupied resource total amount;
when detecting that the resource utilization rates of all resource partitions are not balanced, determining that the cluster meets partition adjustment conditions;
and when detecting that the resource utilization rates of all the resource partitions are balanced, determining that the cluster does not meet the partition adjustment condition.
9. The method of any of claims 1 to 4, wherein the obtaining VM information for each VM in the cluster comprises:
periodically acquiring VM information of each VM in the cluster according to a preset adjusting period;
or when detecting that the number of schedulers arranged in the cloud platform changes, acquiring VM information of each VM in the cluster.
10. A resource adjustment apparatus for a cluster, characterized in that the cluster comprises a plurality of resource partitions, each resource partition comprises at least one virtual machine (VM), each resource partition corresponds to one scheduler, and the apparatus is applied to a master node that is communicatively connected to each scheduler and each VM; the apparatus comprises:
a first obtaining module, configured to obtain VM information of each VM in the cluster, where the VM information at least includes: the identification of the VM, the resource information of the VM and the position information of the VM;
the adjusting module is used for adjusting the VM included in at least one resource partition according to the acquired VM information;
the updating module is used for updating partition information of the cluster according to an adjustment result, the partition information is used for indicating a VM (virtual machine) included by each resource partition, and each scheduler is used for executing a scheduling task in the corresponding resource partition according to the partition information;
wherein, the adjusting module comprises:
the first determining submodule is used for determining the residual resource amount of each VM according to the resource information of each VM in the cluster and determining the total residual resource amount of the cluster;
the second determining submodule is used for determining the physical position deployed by each VM according to the position information of each VM;
the adjusting submodule is configured to adjust the VMs included in at least one resource partition based on the residual resource amount of each VM, the total amount of the residual resources, and the physical position at which each VM is deployed, so that the residual resource amount occupied by each resource partition meets a preset resource ratio, wherein the preset resource ratio is positively correlated with the historical adjustment amount of each scheduler;
wherein, for a first VM and a second VM that have equal residual resource amounts and are adjusted to different resource partitions, the average physical distance between the first VM and each VM in the first resource partition to which the first VM belongs is smaller than the average physical distance between the second VM and each VM in the first resource partition.
11. The apparatus of claim 10, wherein the adjustment submodule is configured to:
dividing the remaining resources of the cluster into N parts of resources according to the preset resource ratio, wherein each part of resources is provided by at least one VM, each part of resources corresponds to one resource partition, and N is the number of resource partitions included in the cluster;
and dividing the at least one VM that provides each part of resources into the corresponding resource partition.
12. The apparatus of claim 11, wherein the VM information further comprises type information of the VM;
the first determining submodule is configured to:
divide the plurality of VMs included in the cluster into at least two resource groups according to the type information of each VM, wherein the at least one VM included in each resource group is of the same type; and
determine the total remaining resource amount of the at least one VM included in each resource group;
and the adjusting submodule is configured to:
divide the remaining resources of each resource group into N sub-shares according to the preset resource ratio, wherein each sub-share is provided by at least one VM and corresponds to one resource partition; and
determine the at least two sub-shares corresponding to the same resource partition as one share.
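One reading of claim 12: split each VM type group separately so every partition receives a balanced mix of types, then merge the per-type sub-shares that land on the same partition index. A minimal sketch assuming an equal preset ratio; all names are hypothetical:

```python
from collections import defaultdict

def divide_by_type(vms, n):
    """vms maps name -> (vm_type, free). Each type group is split into
    n sub-shares; sub-shares with the same partition index are merged
    into that partition's final share."""
    groups = defaultdict(list)
    for name, (vm_type, free) in vms.items():
        groups[vm_type].append((free, name))
    merged = [[] for _ in range(n)]  # one merged share per partition
    for members in groups.values():
        # Round-robin over descending size keeps each partition's slice
        # of this VM type roughly balanced.
        for i, (_, name) in enumerate(sorted(members, reverse=True)):
            merged[i % n].append(name)
    return merged
```

With two CPU-type and two GPU-type VMs and two partitions, each partition receives one VM of each type, so both schedulers see the same resource mix.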
13. The apparatus of claim 10, wherein the first determining submodule is configured to:
determine the remaining resource amount of each VM according to the resource information of each VM in the cluster;
determine at least one target VM based on the remaining resource amount of each VM, wherein the remaining resource amount of each target VM is greater than a preset threshold; and
determine the sum of the remaining resource amounts of the at least one target VM as the total remaining resource amount of the cluster;
and the adjusting submodule is configured to:
adjust the target VMs included in at least one resource partition based on the remaining resource amount of each target VM and the total remaining resource amount.
14. The apparatus of any one of claims 10 to 13, further comprising:
a second obtaining module, configured to obtain the partition information of the cluster before the adjusting module adjusts the VMs included in the at least one resource partition; and
a detection module, configured to detect, according to the resource information of each VM in the cluster and the partition information, whether the cluster satisfies a partition adjustment condition;
wherein the adjusting module is configured to adjust the VMs included in each resource partition according to the obtained VM information when it is detected that the cluster satisfies the partition adjustment condition.
15. The apparatus of claim 14, wherein the detection module is configured to:
determine the resource usage of each resource partition according to the resource information of each VM in the cluster and the partition information, wherein the resource usage is the ratio of the amount of used resources in a resource partition to the total amount of resources occupied by that partition;
determine that the cluster satisfies the partition adjustment condition when it is detected that the number of resource partitions whose resource usage is greater than a usage threshold is greater than a number threshold; and
determine that the cluster does not satisfy the partition adjustment condition when it is detected that the number of resource partitions whose resource usage is greater than the usage threshold is not greater than the number threshold.
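The detection rule in claim 15 amounts to counting over-loaded partitions against two thresholds. A minimal sketch under assumed data shapes (the function name and the `(used, total)` pair representation are illustrative only):

```python
def needs_adjustment(partitions, usage_threshold, count_threshold):
    """partitions: list of (used, total_occupied) pairs, one per
    resource partition. Returns True when the number of partitions
    whose usage ratio exceeds usage_threshold is itself greater
    than count_threshold."""
    hot = sum(1 for used, total in partitions
              if total > 0 and used / total > usage_threshold)
    return hot > count_threshold
```

So with partitions at 90%, 80%, and 20% usage, a 0.7 usage threshold, and a count threshold of 1, the cluster qualifies for re-partitioning; raising the count threshold to 2 does not trigger it.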
16. The apparatus of claim 15, wherein the resource information comprises at least one of processor resource information, memory resource information, and storage resource information;
and the resource usage being greater than the usage threshold comprises:
the average usage of the resources corresponding to the at least one type of information being greater than the usage threshold; or, among the at least one type of information, the number of information items whose corresponding resource usage is greater than the usage threshold being greater than a number threshold.
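Claim 16 gives two alternative readings of "usage greater than the threshold" when several resource types (CPU, memory, storage) are tracked per partition. A hedged sketch of both branches, with hypothetical names:

```python
def over_threshold(usages, threshold, item_threshold=None):
    """usages: per-resource-type utilisation, e.g. {'cpu': 0.9, 'mem': 0.6}.
    Without item_threshold, the mean utilisation must exceed threshold;
    with it, more than item_threshold individual types must exceed it."""
    if item_threshold is None:
        return sum(usages.values()) / len(usages) > threshold
    return sum(1 for u in usages.values() if u > threshold) > item_threshold
```

The averaging branch smooths over a single hot resource type, while the counting branch fires only when several types are simultaneously hot.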
17. The apparatus of any one of claims 10 to 13, wherein the first obtaining module is configured to:
periodically acquire the VM information of each VM in the cluster according to a preset adjustment period;
or acquire the VM information of each VM in the cluster when a change in the number of schedulers deployed on the cloud platform is detected.
18. A cloud platform, comprising: a cluster, a plurality of schedulers, and the cluster resource adjustment apparatus according to any one of claims 10 to 17.
19. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the cluster resource adjustment method according to any one of claims 1 to 9.
CN201810119092.3A 2018-02-06 2018-02-06 Cluster resource adjustment method and device and cloud platform Active CN108427604B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810119092.3A CN108427604B (en) 2018-02-06 2018-02-06 Cluster resource adjustment method and device and cloud platform
PCT/CN2018/100552 WO2019153697A1 (en) 2018-02-06 2018-08-15 Cluster resource adjustment method and device, and cloud platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810119092.3A CN108427604B (en) 2018-02-06 2018-02-06 Cluster resource adjustment method and device and cloud platform

Publications (2)

Publication Number Publication Date
CN108427604A CN108427604A (en) 2018-08-21
CN108427604B true CN108427604B (en) 2020-06-26

Family

ID=63156694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810119092.3A Active CN108427604B (en) 2018-02-06 2018-02-06 Cluster resource adjustment method and device and cloud platform

Country Status (2)

Country Link
CN (1) CN108427604B (en)
WO (1) WO2019153697A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110888733B (en) * 2018-09-11 2023-12-26 三六零科技集团有限公司 Cluster resource use condition processing method and device and electronic equipment
CN110968416A (en) * 2018-09-29 2020-04-07 中兴通讯股份有限公司 Resource allocation method, device, equipment and computer readable storage medium
CN109614236B (en) * 2018-12-07 2023-04-18 深圳前海微众银行股份有限公司 Cluster resource dynamic adjustment method, device and equipment and readable storage medium
CN109586970B (en) * 2018-12-13 2022-07-08 新华三大数据技术有限公司 Resource allocation method, device and system
CN110209166B (en) * 2019-05-22 2020-07-24 重庆大学 Cooperative control method and device for multiple mobile service robots and storage medium
CN110138883B (en) * 2019-06-10 2021-08-31 北京贝斯平云科技有限公司 Hybrid cloud resource allocation method and device
CN110912967A (en) * 2019-10-31 2020-03-24 北京浪潮数据技术有限公司 Service node scheduling method, device, equipment and storage medium
CN112965828B (en) * 2021-02-03 2024-03-19 北京轻松怡康信息技术有限公司 Multithreading data processing method, device, equipment and storage medium
CN116661979B (en) * 2023-08-02 2023-11-28 之江实验室 Heterogeneous job scheduling system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060069761A1 (en) * 2004-09-14 2006-03-30 Dell Products L.P. System and method for load balancing virtual machines in a computer network
CN101504620A (en) * 2009-03-03 2009-08-12 华为技术有限公司 Load balancing method, apparatus and system of virtual cluster system
TWI595760B (en) * 2015-12-01 2017-08-11 廣達電腦股份有限公司 Management systems for managing resources of servers and management methods thereof

Also Published As

Publication number Publication date
WO2019153697A1 (en) 2019-08-15
CN108427604A (en) 2018-08-21

Similar Documents

Publication Publication Date Title
CN108427604B (en) Cluster resource adjustment method and device and cloud platform
CN109857518B (en) Method and equipment for distributing network resources
US10055244B2 (en) Boot control program, boot control method, and boot control device
US9571561B2 (en) System and method for dynamically expanding virtual cluster and recording medium on which program for executing the method is recorded
US10255091B2 (en) Adaptive CPU NUMA scheduling
US7437730B2 (en) System and method for providing a scalable on demand hosting system
US20170244784A1 (en) Method and system for multi-tenant resource distribution
US20190155655A1 (en) Resource allocation method and resource manager
KR101733117B1 (en) Task distribution method on multicore system and apparatus thereof
US9535740B1 (en) Implementing dynamic adjustment of resources allocated to SRIOV remote direct memory access adapter (RDMA) virtual functions based on usage patterns
US20160196157A1 (en) Information processing system, management device, and method of controlling information processing system
JP2008191949A (en) Multi-core system, and method for distributing load of the same
CN110221920B (en) Deployment method, device, storage medium and system
CN107291544B (en) Task scheduling method and device and distributed task execution system
CN111078363A (en) NUMA node scheduling method, device, equipment and medium for virtual machine
US10169102B2 (en) Load calculation method, load calculation program, and load calculation apparatus
WO2016101996A1 (en) Allocating cloud computing resources in a cloud computing environment
KR101587579B1 (en) Memory balancing method for virtual system
CN105389211A (en) Memory allocation method and delay perception-memory allocation apparatus suitable for memory access delay balance among multiple nodes in NUMA construction
CN109558216B (en) Single root I/O virtualization optimization method and system based on online migration
CN106878389B (en) Method and device for resource scheduling in cloud system
US10733022B2 (en) Method of managing dedicated processing resources, server system and computer program product
JP2013125548A (en) Virtual machine allocation system and method for using the same
CN114625500A (en) Method and application for scheduling micro-service application based on topology perception in cloud environment
CN113032102A (en) Resource rescheduling method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220211

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Patentee after: Huawei Cloud Computing Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20221205

Address after: 518129 Huawei Headquarters Office Building 101, Wankecheng Community, Bantian Street, Longgang District, Shenzhen, Guangdong

Patentee after: Shenzhen Huawei Cloud Computing Technology Co.,Ltd.

Address before: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Patentee before: Huawei Cloud Computing Technology Co.,Ltd.