WO2013139037A1 - Method and apparatus for scheduling resources - Google Patents

Method and apparatus for scheduling resources

Info

Publication number
WO2013139037A1
WO2013139037A1 (PCT/CN2012/072939)
Authority
WO
WIPO (PCT)
Prior art keywords
process group
resource scheduling
scheduling policy
type
group
Prior art date
Application number
PCT/CN2012/072939
Other languages
English (en)
French (fr)
Inventor
王烽
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201280000704.8A (CN103503412B)
Priority to PCT/CN2012/072939
Publication of WO2013139037A1


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 — Network services
    • H04L 67/60 — Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Definitions

  • The present invention relates to the field of information technology, and in particular, to a method and apparatus for scheduling resources. Background Art
  • Cloud computing is an Internet-based computing method in which shared hardware and software resources and information can be provided to computers and other devices as needed.
  • the core idea of cloud computing is to manage and schedule a large number of computing resources connected by networks to form a computing resource pool to provide on-demand services to users.
  • The network that provides the resources is called the "cloud."
  • cloud computing is generally divided into two categories: public cloud and private cloud.
  • Cloud-based applications want to consume as many cloud resources as possible to ensure application quality of service (QoS), but using cloud resources has costs, such as cloud resource rent (for public clouds) and operating costs (for private clouds). Therefore, a cloud application dynamically changes its usage of cloud resources according to factors such as real-time workload, so as to improve the efficiency of cloud resource usage and strike a balance between application QoS and resource costs.
  • An embodiment of the present invention provides a method and apparatus for scheduling resources, which adopt, for different process groups of a cloud application, resource scheduling policies corresponding to those process groups, thereby providing a more flexible and effective manner of resource scheduling.
  • An embodiment of the present invention provides a method for scheduling resources, including: acquiring process group information, where the process group information includes information indicating a process group obtained by grouping processes in a cloud application; and
  • performing resource scheduling on the process group by using a resource scheduling policy corresponding to the process group.
  • An embodiment of the present invention provides an apparatus for scheduling resources, including: an obtaining module, configured to acquire process group information, where the process group information includes information indicating a process group obtained by grouping processes in a cloud application;
  • and a scheduling module, configured to perform resource scheduling on the process group by using a resource scheduling policy corresponding to the process group.
  • An embodiment of the present invention provides an apparatus for scheduling resources, including: a memory for storing instructions; and a processor coupled to the memory and configured to execute the instructions stored in the memory, wherein the processor is configured to: acquire process group information, where the process group information includes information indicating a process group obtained by grouping processes in a cloud application; and perform resource scheduling on the process group by using a resource scheduling policy corresponding to the process group.
  • An embodiment of the present invention further provides a machine-readable storage medium storing machine-executable instructions that, when executed, cause the machine to perform the following steps: acquiring process group information, where the process group information includes information indicating a process group obtained by grouping processes in a cloud application; and performing resource scheduling on the process group by using a resource scheduling policy corresponding to the process group.
  • In the embodiments of the present invention, a process group is obtained by grouping processes, and when resource scheduling is performed on the process group, a resource scheduling policy corresponding to the process group is used, thereby implementing process-group-based resource scheduling.
  • FIG. 1 is a schematic flow chart of a method for scheduling resources according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an application environment of a method for scheduling resources according to an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of a method for scheduling resources according to another embodiment of the present invention
  • FIG. 4 is a schematic flowchart of a method for scheduling resources according to another embodiment of the present invention;
  • FIG. 5 is a schematic flowchart of a method for scheduling resources according to another embodiment of the present invention
  • FIG. 6 is a schematic diagram of an apparatus for scheduling resources according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of an apparatus for scheduling resources according to another embodiment of the present invention. Detailed Description
  • FIG. 1 is a schematic flow chart of a method for scheduling resources according to an embodiment of the present invention.
  • The method of FIG. 1 includes: Step 110: Obtain process group information, where the process group information includes information indicating a process group obtained by grouping processes in a cloud application; Step 120: Perform resource scheduling on the process group by using a resource scheduling policy corresponding to the process group.
  • In this way, a process group is obtained by grouping processes, and when resource scheduling is performed on the process group, a resource scheduling policy corresponding to the process group is used, thereby implementing process-group-based resource scheduling.
  • The cloud application 210 refers to an application running on a cloud platform; at runtime it includes one or more (for example, N, where N is an integer) processes, and these processes may be distributed across one or more virtual machines (VM, Virtual Machine) of the cloud platform.
  • Processes of the cloud application may be grouped, for example, according to the types of the cloud application's processes, or according to the functions of the cloud application.
  • Each of the process groups obtained by grouping the processes may correspond to one or more resource scheduling policies.
  • Process groups can be classified into different types depending on the characteristics of the process group. Different types of process groups can adopt different categories of resource scheduling policies.
  • The number of processes in a static process group is preset and fixed during running. Therefore, a static process group cannot share its work by dynamically adding identical processes.
  • Accordingly, the resource scheduling policy of adding or deleting processes is not suitable for static process groups.
  • The number of processes in a dynamic process group can change dynamically during running. Generally, the processes in this type of process group have the same function, so the work can be shared by dynamically adding new processes.
  • Processes in a migratable process group can be migrated from one virtual machine to another. Processes in a resident process group cannot be migrated, so the scheduling policy of migrating processes is not suitable for resident process groups. Process groups of the same type but different functions can adopt the same scheduling policy or different scheduling policies.
  • The resource scheduling apparatus 220 is configured to perform process-group-based allocation and scheduling of virtual resources according to the resource scheduling policy corresponding to each process group.
  • the resource scheduling policy may be predefined.
  • The resource scheduling policy includes a trigger condition and a decision algorithm. The decision algorithm determines how, and by how much, the resources used by the cloud application are added or reduced.
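The trigger-condition-plus-decision-algorithm structure described above can be sketched as follows. This is a minimal illustration: the names `SchedulingPolicy`, `run_policy`, and `Metrics` are assumptions, not from the patent; the 80% threshold and 5% increment echo figures that appear later in the document.

```python
import math
from dataclasses import dataclass
from typing import Any, Callable, Dict

Metrics = Dict[str, float]  # e.g. {"avg_cpu_load": 85.0, "process_count": 40}

@dataclass
class SchedulingPolicy:
    trigger: Callable[[Metrics], bool]            # trigger condition
    decide: Callable[[Metrics], Dict[str, Any]]   # decision algorithm

def run_policy(policy: SchedulingPolicy, metrics: Metrics):
    """Call the decision algorithm only when the trigger condition is met."""
    if policy.trigger(metrics):
        return policy.decide(metrics)
    return None

# Illustrative policy: trigger when the group's average CPU load exceeds 80%,
# then add 5% more processes (rounded up, at least one).
policy = SchedulingPolicy(
    trigger=lambda m: m["avg_cpu_load"] > 80,
    decide=lambda m: {"action": "add_processes",
                      "count": max(1, math.ceil(0.05 * m["process_count"]))},
)
```

Here `run_policy(policy, {"avg_cpu_load": 85.0, "process_count": 40})` would decide to add 2 processes, while a load of 50% would trigger nothing.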
  • The monitoring device 230 is configured to monitor parameters related to the process groups, including the load of the cloud application and the usage state of the virtual resources, such as the average CPU utilization of a process group, the number of VMs used, the number of processes in the process group, and so on.
  • the virtual resource management platform 240 virtualizes physical resources and provides external virtual resources such as virtual machines, virtual volumes, and virtual networks.
  • FIG. 3 is a schematic flowchart diagram of a method for scheduling resources according to another embodiment of the present invention. The method in Figure 3 includes the following steps:
  • Step 310: Obtain process group information, where the process group information includes: information indicating process groups obtained by grouping processes in the cloud application, where the information indicates which process groups the cloud application is divided into and which processes each process group includes; and information indicating the type of each process group.
  • the types of process groups can include: static process groups, dynamic process groups, migratable process groups, and resident process groups.
  • The process group can be of at least one of the above types.
  • For example, a process group can be one of a static process group and a dynamic process group, and at the same time one of a migratable process group and a resident process group.
  • Step 320 Determine a resource scheduling policy according to the type of the process group.
  • The determined resource scheduling policy is a resource scheduling policy corresponding to the type of the process group, and the adopted policy matches the characteristics of that type.
  • The process group may adopt a scaling resource scheduling policy, where the scaling resource scheduling policy performs resource scheduling by changing the specification of the virtual machine where the process group is located.
  • For a scaling resource scheduling policy, it is necessary to determine the identifier of the virtual machine whose specification is to be changed and the new specification of that virtual machine.
  • the process group can adopt the resource scheduling policy of adding or deleting processes.
  • The resource scheduling policy may use a Min-load algorithm, in which each newly added process is placed on the VM with the lowest CPU and memory utilization in the VM cluster.
  • the scheduling policy may also have some restrictions.
  • the limiting condition may be to limit the utilization of the VM to a certain threshold.
  • The scheduling policy may optionally include failure handling.
  • The failure handling may be: when no existing VM satisfying the restriction can start the process, create a VM using a VM specification that satisfies a preset condition and start the process on it.
  • The VM specification satisfying the preset condition may be the VM specification most used by the process group.
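A minimal sketch of the Min-load placement with the utilization restriction and failure handling described above. The `VM` record, the 80% cap, and the field names are illustrative assumptions.

```python
from dataclasses import dataclass
from collections import Counter
from typing import List

@dataclass
class VM:
    vm_id: str
    spec: str
    cpu_util: float  # percent
    mem_util: float  # percent

def min_load_place(vms: List[VM], group_specs: List[str],
                   util_cap: float = 80.0) -> VM:
    # Restriction: only consider VMs whose CPU and memory utilization stay under the cap.
    candidates = [v for v in vms if v.cpu_util < util_cap and v.mem_util < util_cap]
    if candidates:
        # Min-load: pick the VM with the lowest combined CPU + memory utilization.
        return min(candidates, key=lambda v: v.cpu_util + v.mem_util)
    # Failure handling: no qualifying VM, so create one with the specification
    # most used by this process group.
    spec = Counter(group_specs).most_common(1)[0][0]
    return VM(vm_id="vm-new", spec=spec, cpu_util=0.0, mem_util=0.0)
```

The returned VM is where the new process would be started; in a real scheduler the "create" branch would call the virtual resource management platform.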
  • The selection algorithm of the deletion process decides from which VM which process is deleted, and may also decide whether to delete the corresponding VM after the process is deleted.
  • The selection algorithm may be: first select a process on a VM running only this process group's processes; secondly, select a process on the VM with the fewest processes; and, after the process is deleted, a VM left empty may be deleted.
  • Dynamic process groups can also adopt the scaling resource scheduling policy described above.
  • The process group can adopt a rearrangement resource scheduling policy, where the rearrangement resource scheduling policy performs resource scheduling by changing the mapping relationship between processes and virtual machines.
  • The rearrangement scheduling policy may specifically adopt a balanced rearrangement scheduling policy or a centralized rearrangement scheduling policy.
  • A balanced rearrangement scheduling policy is used to distribute processes evenly across different VMs.
  • A centralized rearrangement scheduling policy is used to place processes centrally on one or a few VMs, to ensure that VM utilization is not too low, for example below a preset utilization threshold.
  • The rearrangement resource scheduling policy usually needs to decide the source VM where a to-be-migrated process is located and the target VM to which it is to be migrated.
  • For example, processes on the VM with the lowest utilization can be migrated one by one to the VM with the highest utilization.
  • If necessary, migration restrictions can also be set in the rearrangement resource scheduling policy.
  • The migration restriction can be: the estimated utilization of the target VM after the migration does not exceed a preset utilization threshold.
  • For example, processes on the VM with the lowest utilization are migrated one by one to the VM with the highest utilization; if, after a process is migrated, the CPU utilization of the VM with the highest utilization is expected to exceed the CPU utilization threshold, while that of the VM with the second-highest utilization is not, the processes on the VM with the lowest utilization can instead be migrated one by one to the VM with the second-highest utilization.
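A sketch of the centralized rearrangement with the threshold fallback described above. The assumption that each migrated process adds a fixed `proc_cost` to the target VM's CPU utilization is a simplification; function and parameter names are illustrative.

```python
from typing import Dict, List, Tuple

def centralize(cpu_util: Dict[str, float], procs: Dict[str, List[str]],
               proc_cost: float, threshold: float = 80.0) -> List[Tuple[str, str, str]]:
    """Migrate processes off the lowest-utilization VM, preferring the
    highest-utilization target and falling back to the next-highest when
    the projected utilization would exceed the threshold."""
    moves = []
    src = min(cpu_util, key=cpu_util.get)
    # Candidate targets, highest utilization first (excluding the source).
    targets = sorted((v for v in cpu_util if v != src),
                     key=cpu_util.get, reverse=True)
    for proc in list(procs[src]):
        for dst in targets:
            if cpu_util[dst] + proc_cost <= threshold:
                moves.append((proc, src, dst))
                cpu_util[dst] += proc_cost
                cpu_util[src] -= proc_cost
                procs[src].remove(proc)
                break
    return moves
```

With VMs at 10%, 70%, and 50% utilization and a per-process cost of 8%, the first process goes to the 70% VM (projected 78%), while the second falls back to the 50% VM because the first target would exceed 80%.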
  • If the type of the process group is a resident process group, the process group cannot adopt the rearrangement resource scheduling policy; it can adopt the above-mentioned scaling resource scheduling policy, and may also adopt the above-mentioned resource scheduling policy of adding or deleting processes.
  • The process group can adopt at least one of the above-mentioned scaling resource scheduling policy and rearrangement resource scheduling policy. If the type of the process group is a dynamic process group and a migratable process group, the process group can adopt at least one of the above-mentioned add/delete resource scheduling policy and rearrangement resource scheduling policy.
  • When a process group adopts a combination of multiple resource scheduling policies, the policy to apply can be determined according to priority. For example, the combination may be a rearrangement resource scheduling policy combined with an add/delete resource scheduling policy.
  • For example, the centralized rearrangement resource scheduling policy may be combined with the process-deletion resource scheduling policy, with the centralized rearrangement policy taking precedence over the deletion policy.
  • Specifically, processes may first be centralized, and if the desired scheduling effect is not achieved, a certain number of processes are then deleted.
  • Conversely, the balanced rearrangement resource scheduling policy may be combined with the process-addition resource scheduling policy, with the addition policy taking precedence over the balanced rearrangement policy. Specifically, processes may first be placed on newly added VMs, and then the processes are balanced across the VMs.
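The priority-ordered combination of policies might be sketched as follows. The callables are placeholders for the concrete policies above, and the convention that each returns `True` once the scheduling goal is achieved is an assumption for illustration.

```python
from typing import Callable, List

def apply_by_priority(policies: List[Callable[[], bool]]) -> int:
    """Try policies in priority order; each returns True once the scheduling
    goal is achieved. Returns the index of the policy that succeeded, or -1
    if none did."""
    for i, policy in enumerate(policies):
        if policy():
            return i
    return -1

# e.g. centralize processes first, delete processes only if that is not enough:
#   apply_by_priority([centralize_processes, delete_processes])
```

This mirrors the example in the text: the higher-priority policy runs first, and the lower-priority one is invoked only when the scheduling effect is still unmet.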
  • Step 330 Perform resource scheduling on the process group according to the determined resource scheduling policy.
  • A resource scheduling policy typically includes a trigger condition and a decision algorithm. When a parameter related to the process group meets the trigger condition of the resource scheduling policy, resource scheduling on the process group is triggered, and the corresponding decision algorithm is called to schedule resources.
  • The parameter related to the process group may be at least one of the following: the average CPU utilization of the process group, the number of virtual machines used by the process group, the number of processes in the process group, the utilization of the virtual machine where the process group is located, the communication bandwidth corresponding to the process group, the network speed corresponding to the process group, and so on.
  • The utilization of a virtual machine refers to the utilization of the resources occupied by the VM, such as CPU utilization, memory utilization, disk utilization, disk input/output operations per second (IOPS), and/or network IOPS.
  • Each resource scheduling policy may be predefined and pre-stored in the apparatus for scheduling resources, or the resource scheduling policies may be acquired at the same time as the process group type information.
  • the foregoing obtaining may be implemented by receiving information input by a user, for example, receiving type information of a process group and resource scheduling policy information input by a user through a policy template.
  • The correspondence between each process group and each resource scheduling policy may be preset, and the resource scheduling policy required by a process group is determined according to this correspondence.
  • Alternatively, the correspondence between the type of each process group and each resource scheduling policy may be preset; after the type of a process group is obtained, the resource scheduling policy required by the process group is determined according to this correspondence.
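A hypothetical preset mapping from process-group types to applicable policy categories, following the constraints stated earlier (static groups cannot add or delete processes; resident groups cannot be rearranged). The table keys and category names are illustrative assumptions.

```python
# Each group has one of {static, dynamic} and one of {migratable, resident};
# its allowed policy categories are the union over its types.
TYPE_POLICIES = {
    "static":     {"scale"},               # change VM specification only
    "dynamic":    {"scale", "add_delete"}, # may also add or delete processes
    "migratable": {"rearrange"},           # may migrate processes between VMs
    "resident":   set(),                   # no migration-based policies
}

def allowed_policies(group_types):
    """Return the policy categories a group of the given types may adopt."""
    allowed = set()
    for t in group_types:
        allowed |= TYPE_POLICIES[t]
    return allowed
```

For example, a static resident group is limited to scaling, while a dynamic migratable group may scale, add/delete, and rearrange.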
  • In this way, a process group is obtained by grouping processes, and when resource scheduling is performed on the process group, a resource scheduling policy corresponding to the process group is used, thereby implementing process-group-based resource scheduling.
  • Resource scheduling is performed for different process groups by using scheduling policies compatible with the types of the process groups, so that resource scheduling for the cloud application is more flexible and effective.
  • FIG. 4 is a schematic flow chart of a method for scheduling resources according to another embodiment of the present invention.
  • the method shown in Figure 4 includes the following steps:
  • Step 410: Obtain process group information, where the process group information includes: information indicating process groups obtained by grouping processes in the cloud application, where the information indicates which process groups the cloud application is divided into and which processes each process group includes; and information indicating the resource scheduling policy corresponding to each process group.
  • Step 420 Perform resource scheduling on the process group by using the acquired resource scheduling policy corresponding to the process group.
  • the resource scheduling policy corresponding to the process group may be a resource scheduling policy corresponding to the type of the process group.
  • the process group information may further include: process group type information indicating the type of the process group.
  • The types of process groups may include: a static process group, a dynamic process group, a migratable process group, and a resident process group; for different types of process groups, resource scheduling policies of the corresponding categories described above may be adopted, and details are not repeated here.
  • obtaining process group information may include: receiving a configuration file and parsing the configuration file to obtain process group information.
  • Receiving the configuration file may be receiving a policy template input by a user, and parsing the configuration file may be parsing the policy template.
  • the policy template can include the required process group information.
  • The resource scheduling policy corresponding to the process group may also be a resource scheduling policy corresponding to the function of the process group, or a resource scheduling policy corresponding to the function of the process group on the premise that the constraints of the process group's type are satisfied.
  • Process groups with the same type but different functions can be divided into different process groups.
  • the resource scheduling policies of the same type but different function groups can be the same or different.
  • the process group information may include function information of the process group indicating the function of the process group.
  • The function information of the process group may be transmitted to the resource scheduling apparatus as part of the process group information, or it may not be transmitted as part of the process group information but instead be reflected in the scheduling policy adopted by the process group.
  • The above functions refer to the responsibilities and capabilities of a process group in completing the application's business; they are divided according to the application's business process and design architecture.
  • For example, the functions of process groups can be divided into: a database function for persisting data; a logic layer function for processing data; and a presentation layer function for visualizing data, for example in text, table, or graphical form.
  • As another example, in a scientific computing application, functions may be divided into: a control function, where control processes monitor, start, and stop work processes; a distribution function, where distribution processes accept, review, and distribute computing tasks; and a computation function, where work processes execute specific scientific calculations.
  • the above functional divisions are exemplary only and are not intended to be limiting.
  • the deployer can have other ways of partitioning.
  • When a process group corresponds to multiple resource scheduling policies, resource scheduling for the process group is performed according to their priorities.
  • performing resource scheduling on the process group by using the resource scheduling policy includes: when the parameter related to the process group meets a trigger condition of the resource scheduling policy, triggering resource scheduling on the process group.
  • FIG. 5 is a schematic flowchart diagram of a method for scheduling resources according to still another embodiment of the present invention. The method shown in Figure 5 includes the following steps:
  • Step 510 Receive a policy template submitted by a user.
  • the policy template may include, but is not limited to, the following: information about the application, information about the process group, information about the resource scheduling policy.
  • the information about the application includes information about the process group included in the application, for example, which process groups are included in the application.
  • the information about the process group includes the specific information of each process group, for example, the type of each process group, the process included in each process group, the identifier of the resource scheduling policy corresponding to each process group, and the information to be counted for each process group.
  • Information about the resource scheduling policy includes information on the trigger condition and the decision algorithm, where the decision algorithm describes how scheduling is performed and can be identified by an algorithm name, a script path, a function name, or the like.
  • The decision algorithm may have a parameter list, for example the input parameters of the decision algorithm; the specific content of the parameters may vary with the algorithm.
  • The scaling algorithm adds or reduces resources by changing the specification of the virtual machine where the process is located, that is, its host virtual machine.
  • The add/delete algorithm adds or reduces resources by increasing or decreasing the number of processes, thereby occupying or releasing resources of existing or newly created VMs.
  • The rearrangement algorithm schedules resources by changing the mapping of processes to VMs, which allows processes to be distributed evenly across VMs, or to be placed centrally so that VM utilization is not too low.
  • The scaling resource scheduling policy uses the scaling algorithm; the add/delete resource scheduling policy uses the add/delete algorithm; and the rearrangement resource scheduling policy uses the rearrangement algorithm.
  • Step 520 Parse the policy template, obtain the above information about the application, information about the process group, and information about the resource scheduling policy.
  • Step 530 Determine whether there is an unconfigured resource scheduling policy; if yes, go to step 540; otherwise, go to step 550.
  • Step 540 Receive a set resource scheduling policy.
  • The set resource scheduling policy includes, but is not limited to: the trigger condition, the decision algorithm identifier, and the parameter list.
  • The trigger condition may be based on a parameter related to the process group, such as a statistical parameter of the process group, for example the CPU utilization of the process group, the number of processes, and the like.
  • The trigger condition can be set using a rules engine or a script.
  • The decision algorithm identifier can be a script name, a function name, another type of module identifier, or the like. The parameter list can be recorded in a database, in memory, or in a file.
  • the scheduling policy can be set and matched using a rules engine or script.
  • Step 550 Obtain real-time status of the process group and the virtual machine.
  • Step 560 Trigger resource scheduling when the predetermined condition is met.
  • Step 570 Select a resource scheduling policy according to the type of the process group.
  • If the type of the process group is a static process group, step 571 is executed to select a scaling algorithm that changes the specification of the VM where the process group is located, and in step 572 the new VM specification is decided and returned; then the flow proceeds to step 580. The input parameter is the algorithm identifier.
  • If the type of the process group is a dynamic process group, step 573 is executed to select an add/delete algorithm for adding or deleting processes, and in step 574 the addition/deletion plan is decided; then the flow proceeds to step 580.
  • Algorithm identifiers are used to distinguish different add/delete algorithms; in a specific selection, the identifier of the algorithm to be selected may be input. The following four add/delete algorithms are taken as examples.
  • Addition algorithm 1 includes:
    Number to add: 5% of the total number of processes in the group each time;
    VM mapping algorithm (Min-load): each newly added process is placed on the VM with the lowest CPU and memory utilization in the VM cluster.
  • Addition algorithm 2 includes:
    Number to add: 2 processes at a time;
    VM mapping algorithm: start the same number of new VMs as new processes, start each process on a new VM, and use the VM configuration given in the template.
  • Deletion algorithm 1 includes:
    Process selection algorithm (the VM may be deleted after its process is deleted): first select a process on a VM running only this process group's processes; secondly select a process on the VM with the fewest processes; delete any VM left empty.
  • Deletion algorithm 2 includes:
    Process selection algorithm: first select a process on a VM running only this process group's processes; delete the 2 VMs with the lowest utilization together with all processes on them.
  • If the type of the process group is a migratable process group, step 575 is executed to select a rearrangement algorithm that changes the mapping relationship between processes and VMs; the algorithm decides and returns the rearranged process-to-VM mapping scheme, and then the flow proceeds to step 580. In this example, there may be multiple rearrangement algorithms, with algorithm identifiers used to distinguish them; in a specific selection, the identifier of the algorithm to be selected may be input.
  • Rearrangement algorithm 1 is a balanced rearrangement algorithm; its specific content may be: migrate processes on the VM with the highest utilization to the VM with the lowest utilization.
  • Rearrangement algorithm 2 is a centralized rearrangement algorithm; its specific content may include: migrate processes on the VM with the lowest utilization one by one to the VM with the highest utilization, with the restriction that the expected utilization after migration does not exceed a threshold.
  • Step 580 Execute the selected decision algorithm to schedule resources.
  • In this embodiment, a resource scheduling algorithm suited to the type characteristics of each process group is selected; however, on the premise that the number of processes in a static process group is not changed and processes in a resident process group are not migrated, each process group can also adopt other resource scheduling algorithms.
  • In addition, the configured or set resource scheduling policy may be verified using a preset correspondence between resource scheduling policies and process group types; if the verification succeeds, the flow proceeds to step 550; if the verification fails, a resource scheduling policy corresponding to the type of the process group may be selected from preset default resource scheduling policies to perform scheduling.
  • an application App1 including three process groups is taken as an example for description.
  • The three process groups are Process Group 1 (ProcGroup1), Process Group 2 (ProcGroup2), and Process Group 3 (ProcGroup3).
  • the types of process group 1 are static process groups and resident process groups.
  • The resource scheduling policy adopted by process group 1 is scheduling policy 1.
  • the types of process group 2 are dynamic process groups and resident process groups.
  • the resource scheduling policy adopted by process group 2 is a combination of scheduling policy 2 and scheduling policy 3.
  • the types of process group 3 are dynamic process groups and migratable process groups.
  • the resource scheduling policy adopted by process group 3 is the scheduling policy 4.
  • a policy template sent by the user to the device for scheduling the resource is received.
  • the policy template includes the following:
  • Process group ID list [ProcGroupl, ProcGroup2, ProcGroup3];
  • Process ID list [CtrlProcl, CtrlProc2]; indicates that ProcGroupl includes two processes. The process IDs of these two processes are CtrlProc 1 and CtrlProc2 respectively.
  • Scheduling policy identifier list Scheduling policy 1 (SchedPolicyl); indicates that the scheduling policy adopted by process group 1 is scheduling policy 1;
  • Process group statistics Avarage_CPU_Load; indicates the average CPU utilization of the process group.
  • Process ID list WorkerProc#; indicates that the process IDs of the processes included in ProcGroup2 are WorkerProc, WorkerProc1, WorkerProc2, ...
  • Scheduling policy identifier list [Scheduling Policy 2, Scheduling Policy 3] ([SchedPolicy2, SchedPolicy3]);
  • the scheduling policy used by Process Group 2 is the combination of scheduling policy 2 and scheduling policy 3.
  • Process group statistics Avarage_CPU_Load; indicates the average CPU utilization of the process group.
  • Process ID list Procname#; indicates that the process IDs of the processes included in ProcGroup3 are Procname, Procname1, Procname2, ...
  • Scheduling policy identifier list Scheduling policy 4 (SchedPolicy4); indicates that the scheduling policy adopted by process group 3 is scheduling policy 4;
  • Process group statistics Avarage_CPU_Load; indicates the average CPU utilization of the process group.
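As a rough illustration, the policy template fields listed above can be pictured as a nested in-memory structure. The sketch below is a hypothetical representation, not the patent's actual template format; all field names other than the identifiers quoted above are assumptions.

```python
# Hypothetical in-memory form of the policy template described above.
# Field names mirror the template text; they are illustrative, not an API.
policy_template = {
    "app_id": "App1",
    "process_groups": {
        "ProcGroup1": {
            "static": True, "migratable": False,
            "processes": ["CtrlProc1", "CtrlProc2"],
            "policies": ["SchedPolicy1"],
            "stats": ["Avarage_CPU_Load"],  # spelling kept from the template
        },
        "ProcGroup2": {
            "static": False, "migratable": False,
            "processes": "WorkerProc#",  # prefix pattern: WorkerProc, WorkerProc1, ...
            "policies": ["SchedPolicy2", "SchedPolicy3"],
            "stats": ["Avarage_CPU_Load"],
        },
        "ProcGroup3": {
            "static": False, "migratable": True,
            "processes": "Procname#",
            "policies": ["SchedPolicy4"],
            "stats": ["Avarage_CPU_Load"],
        },
    },
}

def policies_for(template, group_id):
    """Return the scheduling-policy identifiers configured for a group."""
    return template["process_groups"][group_id]["policies"]
```
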
  • Scheduling policy ID: SchedPolicy1
  • Trigger condition "ProcGroupl::Avarage_CPU_Load > 80"; trigger condition for scheduling policy 1; indicates that resource scheduling is triggered when the average CPU utilization of process group ProcGroup1 is greater than 80%.
  • Decision algorithm identifier "ScaleUpDown::ScaleUpAlgo1"; the identifier of the decision algorithm used in scheduling policy 1; it denotes expansion algorithm 1 (ScaleUpAlgo1) in the scaling algorithm class, which expands the VM where the process group is located.
  • Decision parameter table { "vmspeclist", "vmSpec1,vmSpec2,vmSpec3" }; the parameters of the adopted algorithm; the parameters used here are virtual machine specifications.
  • Virtual machine specifications can typically include: small, medium, large, and very large. Compared with a small-sized virtual machine, a virtual machine with a larger size includes more or more virtual CPUs, virtual memory capacity, disk capacity, and/or number of network card blocks.
  • vmSpec1, vmSpec2, and vmSpec3 are specific specifications, where vmSpec1 is smaller than vmSpec2 and vmSpec2 is smaller than vmSpec3.
  • If the VM specification in the current group is vmSpec1, then after scheduling policy 1 is executed, the VM specification is expanded to vmSpec2; if the VM specification in the current group is vmSpec2, it is expanded to vmSpec3. It is assumed here that the VM supports hot expansion.
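The vmSpec1 → vmSpec2 → vmSpec3 expansion ladder can be sketched as a small decision function over the ordered specification list; the function name and the behavior at the largest specification are illustrative assumptions.

```python
def scale_up_spec(current_spec, spec_list):
    """Return the next larger VM specification from the ordered list,
    or the current one if it is already the largest (nothing to expand to)."""
    i = spec_list.index(current_spec)
    return spec_list[min(i + 1, len(spec_list) - 1)]
```
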
  • Scheduling policy ID: SchedPolicy2
  • Trigger condition "ProcGroup2::Avarage_CPU_Load > 80"; trigger condition for scheduling policy 2; indicates that resource scheduling is triggered when the average CPU utilization of process group ProcGroup2 is greater than 80%.
  • Scheduling policy ID: SchedPolicy3
  • Trigger condition "ProcGroup2::Avarage_CPU_Load < 20"; trigger condition for scheduling policy 3; indicates that resource scheduling is triggered when the average CPU utilization of process group ProcGroup2 is less than 20%.
  • Scheduling policy ID: SchedPolicy4; Trigger condition "ProcGroup3::Avarage_CPU_Load < 20"; the trigger condition of scheduling policy 4; indicates that resource scheduling is triggered when the average CPU utilization of process group ProcGroup3 is less than 20%.
  • The decision algorithm of scheduling policy 4 is a consolidating rearrangement: if the expected CPU utilization of the currently highest-utilization VM after migration does not exceed cpu_load_upper, the processes on the lowest-utilization VM can be migrated onto it one by one; if it would exceed cpu_load_upper while the expected utilization of the second-highest-utilization VM would not, the processes on the lowest-utilization VM can be migrated one by one onto the second-highest-utilization VM.
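The target-selection rule just described (prefer the busiest VM whose projected load stays under cpu_load_upper) might be sketched as follows; the function and parameter names are illustrative assumptions.

```python
def choose_target_vm(vm_util, source_vm, proc_load, cpu_load_upper):
    """Pick the migration target for one process from the least-utilized VM:
    the busiest other VM whose projected utilization stays within
    cpu_load_upper, or None if no feasible target exists."""
    candidates = sorted((vm for vm in vm_util if vm != source_vm),
                        key=lambda vm: vm_util[vm], reverse=True)
    for vm in candidates:
        if vm_util[vm] + proc_load <= cpu_load_upper:
            return vm
    return None  # no feasible target; leave the process where it is
```

For instance, with VM2 at 75% and VM3 at 50% and a cap of 80%, a process adding 10% of load would go to VM3 (the second-highest VM), while a process adding only 5% would fit on VM2.
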
  • After receiving the policy template, the apparatus for scheduling resources may parse it to obtain the foregoing information for the cloud application. During parsing, the validity of the template can be verified first. If the template is valid, the cloud application is queried, according to the process identifiers in the provided process ID lists, for the VM identifier of each process given by the operating system and the local identity of the process within the VM.
  • The average CPU utilization of all processes in each process group is periodically calculated, and the average CPU utilization of each process group is monitored to determine whether it meets the corresponding trigger condition.
  • the corresponding resource scheduling policy is triggered to schedule the resource.
  • the resource is scheduled by invoking a decision algorithm corresponding to the resource scheduling policy corresponding to the satisfied trigger condition.
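The trigger-evaluation step can be sketched as a function that checks each policy's condition against the monitored per-group averages; representing policies as (id, group, predicate) triples is an assumption for illustration.

```python
def evaluate_triggers(group_cpu_loads, policies):
    """Given per-group average CPU utilization and a list of
    (policy_id, group_id, predicate) triples, return the policy IDs whose
    trigger condition is satisfied."""
    fired = []
    for policy_id, group_id, predicate in policies:
        if predicate(group_cpu_loads[group_id]):
            fired.append(policy_id)
    return fired

# The four trigger conditions from the example above.
policies = [
    ("SchedPolicy1", "ProcGroup1", lambda load: load > 80),
    ("SchedPolicy2", "ProcGroup2", lambda load: load > 80),
    ("SchedPolicy3", "ProcGroup2", lambda load: load < 20),
    ("SchedPolicy4", "ProcGroup3", lambda load: load < 20),
]
```
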
  • the expansion algorithm 1 corresponding to scheduling policy 1 is invoked to schedule resources.
  • For example, if the specification of the VM where process group 1 currently resides is vmSpec2, expansion algorithm 1 expands the VM specification to vmSpec3. It is assumed here that hot expansion is supported.
  • The add-process algorithm 1 corresponding to scheduling policy 2 is invoked to schedule resources. For example, the algorithm decides to add 5 new processes.
  • the deployment scenario is to start 3 on VM1 and 2 on VM2.
  • the delete process algorithm 2 corresponding to the scheduling policy 3 is called to delete the 2 VMs with the lowest utilization and all the processes on the 2 VMs.
  • the VMs with the lowest utilization rate are VM1 and VM2.
  • The rearrangement algorithm 2 corresponding to scheduling policy 4 is called to migrate the processes in process group 3 from VM1, which has the lowest utilization, to VM2, and then delete VM1.
  • In this example, process group 1, whose type is static process group and resident process group, is scaled by changing the specification of its host virtual machine; process group 2, whose type is dynamic process group and resident process group, adds processes to increase resources and deletes processes to reduce resources; and process group 3, whose type is dynamic process group and migratable process group, migrates processes to VMs with higher utilization to optimize resource utilization.
  • The cloud application may include more or fewer process groups; the type of a process group may be another type or combination of types, for example a static process group and a migratable process group; and a scheduling policy may adopt other trigger conditions and/or decision algorithms.
  • The trigger condition may be defined by other statistical information corresponding to the process group, and the decision algorithm may be another algorithm corresponding to the type characteristics of the process group.
  • Other statistical information corresponding to the process group that may be used in the trigger condition may include at least one of the following: the number of virtual machines used by the process group; the number of processes in the process group; the utilization of the virtual machines where the process group is located; the communication bandwidth corresponding to the process group; the network speed corresponding to the process group, and the like.
  • The scaling algorithm is applicable to static process groups; it increases or decreases resources by changing the specification of the VM hosting the processes.
  • the decision content of the algorithm includes: determining the VM identifier that needs to be expanded; and determining the changed VM specification.
  • the addition and deletion algorithm is applicable to the dynamic process group, which can use one of the following methods to schedule resources, such as adding or deleting processes, occupying or releasing existing resources, and occupying or releasing resources of the newly created VM;
  • The algorithm for adding processes may include the following: deciding the number of processes to add; for example, 5% of the total number of processes in the group may be added each time;
  • The algorithm also selects a VM mapping algorithm, which may be, for example, a minimum-load (Min-load) mapping algorithm: each newly added process is placed on the VM in the VM cluster with the lowest combined CPU and memory utilization.
  • the algorithm for increasing the process may further include a constraint condition and a failure process.
  • Constraints are used to restrict how processes are added.
  • For example, a constraint can be that VM utilization cannot exceed a certain percentage, such as 60%, or that critical processes cannot be assigned to the same VM.
  • The failure handling can be: when no VM satisfying the constraints can start the process, create a VM using the VM specification most used in the process group and start the process on it.
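A minimal sketch of the add-process flow described above, combining Min-load placement, a utilization-cap constraint, and the create-a-new-VM failure handling; the per-process load increment of 5, the default cap of 60, and the VM naming are all illustrative assumptions (the text creates the fallback VM with the group's most-used specification).

```python
def place_new_processes(vm_load, n_new, load_cap=60, per_proc_load=5):
    """Min-load placement: each new process goes on the least-loaded VM,
    subject to a utilization cap; if the cap would be exceeded, a new VM
    is 'created' (failure handling) and the process is started there."""
    placements = []
    load = dict(vm_load)
    for _ in range(n_new):
        vm = min(load, key=load.get)
        if load[vm] + per_proc_load > load_cap:
            # Failure handling: no existing VM satisfies the constraint,
            # so create a new VM (in the text, of the group's most-used spec).
            vm = "newVM%d" % (len(load) + 1)
            load[vm] = 0
        load[vm] += per_proc_load
        placements.append(vm)
    return placements
```
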
  • The delete-process algorithm includes the following:
  • deciding the number of processes to delete, for example 5% of the total number of processes in the group each time;
  • a selection algorithm for choosing which processes to delete, which may also decide whether to delete a VM after its processes are deleted. For example, processes on VMs hosting only this process group's processes are deleted first; next, processes on the VM with the fewest processes are deleted.
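The selection order described above (VMs hosting only this group's processes first, then the VM with the fewest processes) might be sketched as follows; the vm_procs representation as VM → list of (process, group) pairs is an assumption.

```python
def pick_deletions(vm_procs, group, n_delete):
    """Choose up to n_delete processes of `group` to delete, preferring
    processes on VMs that host only this group's processes, then VMs
    with the fewest processes. vm_procs maps VM -> [(proc, group), ...]."""
    def rank(vm):
        procs = vm_procs[vm]
        only_ours = all(g == group for _, g in procs)
        return (0 if only_ours else 1, len(procs))
    victims = []
    for vm in sorted(vm_procs, key=rank):
        for proc, g in vm_procs[vm]:
            if g == group and len(victims) < n_delete:
                victims.append(proc)
    return victims
```
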
  • The rearrangement algorithm is applicable to migratable process groups; by migrating processes it changes the mapping between processes and VMs, so that processes are balanced across VMs, or are consolidated onto one or a few VMs to keep VM utilization from being too low.
  • The algorithm can include deciding the source processes and the target VM; for example, the processes on the lowest-utilization VM are migrated one by one to a VM with utilization as high as possible.
  • the rearrangement algorithm may further include a constraint condition, for example, the constraint condition may be: the estimated utilization rate of the VM after the migration merge process does not exceed a predetermined threshold.
  • Each process group can have multiple scheduling policies, and multiple scheduling policies can be combined according to priority. Illustratively, they can be combined as follows:
  • a consolidating rearrangement policy combined with a deletion policy: processes are first consolidated; if the scheduling effect is not achieved, a certain number of processes are then deleted;
  • an addition policy combined with a balancing rearrangement policy: processes are first placed on the newly added VMs, and then the processes are balanced across the VMs.
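The priority-based combination can be sketched as running the policies in priority order until the scheduling goal is met, as in "consolidate first, then delete"; the callback-based representation is an illustrative assumption.

```python
def run_combined(policies):
    """Apply priority-ordered policies: each entry is
    (name, apply_fn, goal_met_fn). Stop as soon as the goal is met;
    otherwise fall through to the next policy."""
    applied = []
    for name, apply_fn, goal_met in policies:
        apply_fn()
        applied.append(name)
        if goal_met():
            break
    return applied

# Example: consolidation raises utilization by 3 points; if the goal
# (>= 20) is still not met, deletion raises it by a further 10 points.
state = {"util": 15}
order = run_combined([
    ("rearrange", lambda: state.update(util=state["util"] + 3),
     lambda: state["util"] >= 20),
    ("delete", lambda: state.update(util=state["util"] + 10),
     lambda: state["util"] >= 20),
])
```
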
  • The apparatus 600 for scheduling resources of this embodiment includes: an obtaining module 610, configured to acquire process group information, where the process group information includes information indicating process groups obtained by grouping processes in a cloud application; and a scheduling module 620, configured to perform resource scheduling on a process group by using a resource scheduling policy corresponding to the process group.
  • the apparatus 600 can perform the steps of the method for scheduling resources described above, and details are not described herein again.
  • the process group information may further include: information indicating a resource scheduling policy corresponding to the process group.
  • The process group information may further include information indicating the type of the process group, and the scheduling module 620 is configured to perform resource scheduling on the process group by using a resource scheduling policy corresponding to the type of the process group.
  • the scheduling module is used to:
  • perform resource scheduling by using a scaling resource scheduling policy, where the scaling resource scheduling policy changes the specification of the virtual machine where the process group is located;
  • perform resource scheduling by using an addition/deletion resource scheduling policy, where the addition/deletion resource scheduling policy adds or deletes processes;
  • perform resource scheduling by using a rearrangement resource scheduling policy, where the rearrangement resource scheduling policy changes the mapping relationship between processes and virtual machines.
  • the scheduling module can also be used to:
  • when the type of the process group is a static process group and a migratable process group, use at least one of the scaling resource scheduling policy and the rearrangement resource scheduling policy for resource scheduling;
  • when the type of the process group is a dynamic process group and a migratable process group, use at least one of the addition/deletion resource scheduling policy and the rearrangement resource scheduling policy for resource scheduling.
  • the acquiring module may include: a receiving unit, configured to receive a configuration file; and a parsing unit, configured to parse the configuration file to obtain the process group information.
  • The scheduling module may further include a selecting unit, configured to select, according to priority, the resource scheduling policy to adopt when multiple resource scheduling policies correspond to the process group.
  • FIG. 7 is a schematic diagram of an apparatus 700 for scheduling resources according to another embodiment of the present invention, which corresponds to the method embodiment described above.
  • Compared with the apparatus in FIG. 6, in the apparatus 700 for scheduling resources, the process group information acquired by the obtaining module 710 further includes information indicating the type of the process group.
  • The apparatus of this embodiment further includes a determining module 730, configured to determine, according to the type of the process group, the resource scheduling policy corresponding to the process group; the scheduling module 720 then performs resource scheduling on the process group by using the determined resource scheduling policy.
  • The method and apparatus for scheduling resources according to the embodiments of the present invention can adopt, for each process group, a scheduling policy adapted to its function or type, thereby scheduling more flexibly and efficiently, dynamically adapting to multiple types of application processes, and being applicable to different types of cloud applications.
  • Because scheduling is performed on a per-process-group basis, and resources can be added, deleted, and/or rearranged at the level of processes, finer-grained dynamic resource adjustment can be achieved.
  • the means for scheduling resources described in connection with the examples disclosed herein may be embodied directly in hardware, as a software module executed by a processor, or in a combination of both.
  • a device for scheduling resources may include:
  • a memory for storing instructions
  • processor coupled to the memory, the processor being configured to execute instructions stored in the memory, wherein the processor is configured to:
  • acquire process group information, where the process group information includes information indicating process groups obtained by grouping processes in a cloud application;
  • Resource scheduling is performed on the process group by using a resource scheduling policy corresponding to the process group.
  • Optionally, the process group information further includes information indicating the type of the process group, and the processor is further configured to perform resource scheduling on the process group by using a resource scheduling policy corresponding to the type of the process group.
  • The processor may be further configured to:
  • perform resource scheduling by using a scaling resource scheduling policy, where the scaling resource scheduling policy changes the specification of the virtual machine where the process group is located;
  • perform resource scheduling by using an addition/deletion resource scheduling policy, where the addition/deletion resource scheduling policy adds or deletes processes;
  • the processor may be further configured to:
  • when the type of the process group is a static process group and a migratable process group, use at least one of the scaling resource scheduling policy and the rearrangement resource scheduling policy for resource scheduling;
  • when the type of the process group is a dynamic process group and a migratable process group, use at least one of the addition/deletion resource scheduling policy and the rearrangement resource scheduling policy for resource scheduling.
  • the processor may be further configured to:
  • select, according to priority, the resource scheduling policy to adopt when multiple resource scheduling policies correspond to the process group.
  • the software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
  • a storage medium is coupled to the processor such that the processor can read information from the storage medium and can write information to the storage medium.
  • the storage medium can be an integral part of the processor.
  • the processor and storage medium can be located in an ASIC.
  • the software module can be stored in the memory of the mobile terminal or in a memory card that can be inserted into the mobile terminal. For example: If the mobile terminal uses a larger capacity MEGA-SIM card or a large-capacity flash memory device, the software module can be stored in the MEGA-SIM card or a large-capacity flash memory device.
  • One or more of the functional blocks described with respect to FIGS. 6 and 7, and/or one or more combinations of functional blocks, may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof for performing the functions described herein.
  • One or more of the functional blocks described with respect to FIGS. 6 and 7, and/or one or more combinations of functional blocks, may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.
  • The resource scheduling policy information corresponding to a process group in the embodiments of the present invention is not limited to being provided by the user of the cloud resources; it may also be provided by the provider of the cloud resources and invoked when resources are scheduled.
  • The user may also provide it by other means, for example by directly providing a configuration file that records the required information.
  • the apparatus for scheduling resources may further include: a monitoring module, configured to monitor parameters related to the process group, to determine whether the parameter meets a trigger of the corresponding scheduling policy. condition.
  • The monitoring function may also be performed by a separate monitoring device, independent of the device for scheduling resources; the device for scheduling resources then only needs to decide whether to trigger scheduling according to whether the information provided by the monitoring device meets the trigger condition.
  • the related parameter information monitored by the monitoring module or the monitoring device may include at least one of information of the CPU, information of the memory, information of the disk, and/or information of the network.
  • the monitored statistics may be: CPU time occupied by the process, CPU utilization of the process, memory usage occupied by the process, and disk input/output per second occupied by the process. (IOPS, Input/Output Per Second), the input/output of the network occupied by the process per second; for the VM, the monitored statistics can be: CPU utilization occupied by the VM, memory utilization occupied by the VM The disk IOPS occupied by the VM and the IOPS of the network occupied by the VM.
  • For a process group, the monitored information may be a statistical value over the related information of all processes in the group, such as the sum, average, maximum, or minimum of the per-process statistics above, or the number of processes. For example, it can be the average CPU utilization of all processes in the group, the total memory occupied by all processes in the group, or the total disk IOPS of all processes in the group, where IOPS may be read and/or write IOPS.
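Aggregating per-process statistics into the group-level statistics mentioned above might look like the following sketch; the dictionary representation is an assumption.

```python
def group_stats(proc_cpu):
    """Aggregate per-process CPU utilization into the process-group
    statistics mentioned above: sum, average, maximum, minimum, and count."""
    values = list(proc_cpu.values())
    return {
        "sum": sum(values),
        "avg": sum(values) / len(values),
        "max": max(values),
        "min": min(values),
        "count": len(values),
    }
```
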

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Stored Programmes (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a method and apparatus for scheduling resources. The method includes: acquiring process group information, where the process group information includes information indicating process groups obtained by grouping processes in a cloud application; and performing resource scheduling on a process group by using a resource scheduling policy corresponding to the process group. With this method or apparatus, resources can be scheduled more flexibly and effectively.

Description

Technical Field

The present invention relates to the field of information technology, and in particular to a method and apparatus for scheduling resources.

Background Art

Cloud computing is an Internet-based computing model in which shared hardware and software resources and information can be provided to computers and other devices on demand. The core idea of cloud computing is to manage and schedule, in a unified way, a large number of network-connected computing resources, forming a computing resource pool that serves users on demand. The network that provides the resources is called the "cloud".

According to the objects served, cloud computing is generally divided into two categories: public clouds and private clouds. Typically, a cloud-based application wishes to occupy as many cloud resources as possible in order to guarantee its quality of service (QoS); however, using cloud resources has a cost, for example the rent of cloud resources (for a public cloud) and operating costs (for a private cloud). A cloud application therefore dynamically changes the amount of cloud resources it uses according to factors such as the real-time workload, so as to improve the usage efficiency of cloud resources and strike a balance between application QoS and resource cost.

The scheduling policies that prior-art resource scheduling methods can adopt are limited in kind and number, and such methods therefore have considerable limitations.

Summary of the Invention
In view of the above drawbacks of the prior art, embodiments of the present invention provide a method and apparatus for scheduling resources, in which, for the different process groups of a cloud application, resource scheduling policies corresponding to the process groups are adopted, so as to provide more flexible and effective resource scheduling.

An embodiment of the present invention provides a method for scheduling resources, including: acquiring process group information, where the process group information includes information indicating process groups obtained by grouping processes in a cloud application; and performing resource scheduling on a process group by using a resource scheduling policy corresponding to the process group.

An embodiment of the present invention provides an apparatus for scheduling resources, including: an obtaining module, configured to acquire process group information, where the process group information includes information indicating process groups obtained by grouping processes in a cloud application; and a scheduling module, configured to perform resource scheduling on a process group by using a resource scheduling policy corresponding to the process group.

An embodiment of the present invention provides an apparatus for scheduling resources, including: a memory, configured to store instructions; and a processor coupled to the memory, the processor being configured to execute the instructions stored in the memory, where the processor is configured to: acquire process group information, where the process group information includes information indicating process groups obtained by grouping processes in a cloud application; and perform resource scheduling on a process group by using a resource scheduling policy corresponding to the process group.

An embodiment of the present invention further provides a machine-readable storage medium storing machine-executable instructions that, when executed, cause a machine to perform the following steps: acquiring process group information, where the process group information includes information indicating process groups obtained by grouping processes in a cloud application; and performing resource scheduling on a process group by using a resource scheduling policy corresponding to the process group.

In the embodiments of the present invention, process groups are obtained by grouping processes, and when resource scheduling is performed on a process group, the resource scheduling policy corresponding to the process group is used for scheduling, thereby realizing process-group-based resource scheduling.

Brief Description of the Drawings
The objects, features, characteristics and advantages of the present invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of a method for scheduling resources according to an embodiment of the present invention;

FIG. 2 shows an application environment of a method for scheduling resources according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of a method for scheduling resources according to another embodiment of the present invention;

FIG. 4 is a schematic flowchart of a method for scheduling resources according to another embodiment of the present invention;

FIG. 5 is a schematic flowchart of a method for scheduling resources according to yet another embodiment of the present invention;

FIG. 6 is a schematic diagram of an apparatus for scheduling resources according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of an apparatus for scheduling resources according to another embodiment of the present invention.

Detailed Description
Various embodiments of the method and apparatus for scheduling resources of the present invention will be described in detail below with reference to the accompanying drawings. FIG. 1 is a schematic flowchart of a method for scheduling resources according to an embodiment of the present invention. The method in FIG. 1 includes: step 110, acquiring process group information, where the process group information includes information indicating process groups obtained by grouping processes in a cloud application; and step 120, performing resource scheduling on a process group by using a resource scheduling policy corresponding to the process group.

In this embodiment of the present invention, process groups are obtained by grouping processes, and when resource scheduling is performed on a process group, the resource scheduling policy corresponding to the process group is used for scheduling, thereby realizing process-group-based resource scheduling.
FIG. 2 shows an application environment of the method for scheduling resources according to an embodiment of the present invention. In FIG. 2, cloud application 210 refers to an application program running on a cloud platform; at run time it contains one or more (for example N, N being an integer) processes, which may be distributed over one or more virtual machines (VMs) of the cloud platform.

In the method for scheduling resources of the embodiments of the present invention, the processes of the cloud application can be grouped, for example according to the type of the cloud application's processes, or according to the functions of the cloud application. Each of the process groups obtained by grouping the processes may correspond to one or more resource scheduling policies.

Process groups can be divided into different types according to their characteristics, and different types of process groups are suited to different classes of resource scheduling policies. The number of processes in a static process group is preset and fixed at run time, so a static process group cannot share its workload by dynamically adding identical processes; resource scheduling policies that add or delete processes are not suitable for static process groups. The number of processes in a dynamic process group can change dynamically at run time; generally the processes in such a group have the same function, so the workload can be shared by dynamically adding new processes. The processes in a migratable process group can be migrated, that is, moved from one virtual machine to another. The processes in a resident process group cannot be migrated, so scheduling policies that migrate processes are not suitable for resident process groups. Process groups of the same type but with different functions may adopt the same scheduling policy or different scheduling policies.
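The compatibility constraints between group types and policy classes described above can be sketched as a small derivation function. The class labels ('scale', 'add_delete', 'rearrange') are illustrative, and treating scaling as always available follows the text's remark that dynamic groups may also adopt the scaling policy.

```python
def allowed_policy_classes(is_static, is_migratable):
    """Derive the scheduling-policy classes a process group may use from
    its type flags: static groups cannot add/delete processes, resident
    (non-migratable) groups cannot be rearranged; scaling the host VM's
    specification is available to all groups."""
    classes = {"scale"}
    if not is_static:
        classes.add("add_delete")
    if is_migratable:
        classes.add("rearrange")
    return classes
```
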
The resource scheduling apparatus 220 is configured to allocate and schedule virtual resources on a per-process-group basis according to the resource scheduling policies corresponding to the process groups. The resource scheduling policies may be predefined. A resource scheduling policy includes a trigger condition and a decision algorithm; the decision algorithm decides in what manner and by what amount the cloud application increases or decreases resources.

The monitoring apparatus 230 is configured to monitor parameters related to the process groups, including the load of the cloud application and the usage state of the virtual resources, for example the average CPU utilization of a process group, the number of VMs used, the number of processes in the process group, and so on.

The virtual resource management platform 240 virtualizes physical resources and provides virtual resources such as virtual machines, virtual volumes, and virtual networks.

The user/administrator is an operator who submits policy templates and configures resource scheduling policies. FIG. 3 is a schematic flowchart of a method for scheduling resources according to another embodiment of the present invention. The method in FIG. 3 includes the following steps:
Step 310: acquire process group information, where the process group information includes: information indicating the process groups obtained by grouping processes in the cloud application, which indicates into which process groups the cloud application is divided and which processes each process group includes; and information indicating the type of each process group. Illustratively, the types of process groups may include: static process group, dynamic process group, migratable process group, and resident process group. A process group may be of at least one of the above types; it may be both one of static process group and dynamic process group, and one of migratable process group and resident process group.

Step 320: determine a resource scheduling policy according to the type of the process group. In this step, the determined resource scheduling policy is the resource scheduling policy corresponding to the type of the process group. Specifically, since different types of process groups have different characteristics, the adopted resource scheduling policy corresponds to the characteristics of the type.
For example, if the type of the process group is static process group, then because the number of processes in a static process group is preset and fixed at run time, it can be determined that the group cannot adopt a resource scheduling policy that adds or deletes processes; however, the group can adopt a scaling resource scheduling policy, which performs resource scheduling by changing the specification of the virtual machine where the process group is located. Typically, when the scaling resource scheduling policy is used, the identifier of the virtual machine to be scaled and the specification of the virtual machine after scaling need to be determined.

If the type of the process group is dynamic process group, then because the number of processes in a dynamic process group can change dynamically while the application runs, it can be determined that the group can adopt an addition/deletion resource scheduling policy that adds or deletes processes. Typically, when using an addition policy that adds processes, the number of processes to add must be decided, along with a VM mapping algorithm that decides on which VMs the newly added processes are distributed. Illustratively, the VM mapping algorithm may be a minimum-load (Min-load) algorithm, which places each newly added process on the VM in the VM cluster with the lowest combined CPU and memory utilization. Where necessary, but not mandatorily, the scheduling policy may also have constraints; illustratively, a constraint may be that the utilization of a VM must not exceed a certain threshold, or that critical processes must not be assigned to the same VM. Where necessary but not mandatorily, the scheduling policy may also have failure handling; illustratively, when no VM satisfying the conditions can start the process, a VM is created using a VM specification that satisfies a preset condition and the process is started on it. Illustratively, the VM specification satisfying the preset condition may be the VM specification most used by the process group.

When using a deletion resource scheduling policy, the number of processes to delete must be decided, along with a selection algorithm for the processes to delete. The selection algorithm can decide on which VM or VMs the processes to delete reside, and can also decide whether the corresponding VM is to be deleted after its processes are deleted. Illustratively but not restrictively, the selection algorithm may first select processes on VMs that host only this process group's processes, and next select processes on the VM with the fewest processes; and after the processes are deleted, the VM where the deleted processes resided may be deleted. Of course, a dynamic process group may also adopt the scaling resource scheduling policy.

If the type of the process group is migratable process group, the group can adopt a rearrangement resource scheduling policy, which performs resource scheduling by changing the mapping relationship between processes and virtual machines. The rearrangement algorithms specifically adopted by the rearrangement scheduling policy may include a balancing rearrangement policy and a consolidating rearrangement policy. The balancing rearrangement policy is used to balance processes across different VMs. The consolidating rearrangement policy is used to place processes together on one or a few VMs to ensure that VM utilization does not become too low, for example lower than a preset utilization threshold. A rearrangement resource scheduling policy usually needs to decide the source VM where a process resides before migration and the target VM to which it is to be migrated. Illustratively, the processes on the VM with the lowest utilization can be migrated one by one to a VM with utilization as high as possible. Where necessary but not mandatorily, migration constraints may also be set in the rearrangement resource scheduling policy; for example, a migration constraint may be that the expected utilization after the migration merge does not exceed a preset utilization threshold. In one example of a rearrangement resource scheduling policy, if it is expected that, after processes are migrated into the currently highest-utilization VM, that VM's CPU utilization does not exceed a preset CPU utilization threshold, the processes on the lowest-utilization VM can be migrated one by one onto the highest-utilization VM; whereas if migrating the processes into the currently highest-utilization VM is expected to push that VM's CPU utilization over the threshold, while migrating them into the VM with the second-highest utilization is expected not to, the processes on the lowest-utilization VM can be migrated one by one onto the second-highest-utilization VM.

If the type of the process group is resident process group, then because a resident process group cannot be migrated, it can be determined that the group cannot adopt a rearrangement resource scheduling policy. A static process group that is also a resident process group can adopt the scaling resource scheduling policy described above; a dynamic process group that is also a resident process group can adopt the addition/deletion resource scheduling policy described above, and of course may also adopt the scaling resource scheduling policy.
If the type of the process group is static process group and migratable process group, the group can adopt at least one of the scaling resource scheduling policy and the rearrangement resource scheduling policy described above. If the type of the process group is dynamic process group and migratable process group, the group can adopt at least one of the addition/deletion resource scheduling policy and the rearrangement resource scheduling policy described above.

When a process group can adopt multiple resource scheduling policies, the adopted resource scheduling policy can be determined according to priority. Illustratively, a rearrangement resource scheduling policy and an addition/deletion resource scheduling policy may be combined. In one example, a consolidating rearrangement policy is combined with a deletion policy, with the consolidating rearrangement policy taking priority over the deletion policy: processes are first consolidated, and if the scheduling effect is not achieved, a certain number of processes are then deleted. In another example, a balancing rearrangement policy is combined with an addition policy, with the addition policy taking priority over the balancing rearrangement policy: processes are first placed on the newly added VMs, and then the processes are balanced across the VMs.
Step 330: perform resource scheduling on the process group according to the determined resource scheduling policy. A resource scheduling policy usually includes a trigger condition and a decision algorithm. When a parameter related to the process group satisfies the trigger condition of the resource scheduling policy, resource scheduling of the process group is triggered and the corresponding decision algorithm is invoked to schedule resources. The parameter related to the process group may be at least one of the following: the average CPU utilization of the process group, the number of virtual machines used by the process group, the number of processes in the process group, the utilization of the virtual machines where the process group is located, the communication bandwidth corresponding to the process group, the network speed corresponding to the process group, and so on. The utilization of a virtual machine mentioned here is the utilization of the resources occupied by the VM, for example CPU utilization, memory utilization, disk utilization, disk input/output operations per second (IOPS, Input/Output Per Second), and/or network IOPS.

Optionally, the resource scheduling policies may be predefined and stored in advance in the apparatus used for scheduling, or the resource scheduling policies may be acquired at the same time as the process group type information. For example, the acquisition may be realized by receiving information input by the user, such as the process group type information and resource scheduling policy information input by the user through a policy template. Optionally, a correspondence between process groups and resource scheduling policies may be preset, and the resource scheduling policy to adopt for a process group is determined according to this correspondence.

Optionally, a correspondence between process group types and resource scheduling policies may be preset; after the type of a process group is acquired, the resource scheduling policy to adopt for the process group is determined according to this correspondence.

In this embodiment of the present invention, process groups are obtained by grouping processes, and when resource scheduling is performed on a process group, the resource scheduling policy corresponding to the process group is used for scheduling, thereby realizing process-group-based resource scheduling.

Further, when processes are grouped according to their types and, for each process group, a scheduling policy adapted to the type of the group is used for resource scheduling, resource scheduling for the cloud application becomes more flexible and effective.
FIG. 4 is a schematic flowchart of a method for scheduling resources according to another embodiment of the present invention. The method shown in FIG. 4 includes the following steps:

Step 410: acquire process group information, where the process group information includes: information indicating the process groups obtained by grouping processes in the cloud application, which indicates into which process groups the cloud application is divided and which processes each process group includes; and information indicating the resource scheduling policies corresponding to the process groups.

Step 420: perform resource scheduling on the process group by using the acquired resource scheduling policy corresponding to the process group.
Illustratively, the resource scheduling policy corresponding to the process group may be a resource scheduling policy corresponding to the type of the process group. The process group information may further include process group type information indicating the type of the process group. As described above, the types of process groups may include: static process group, dynamic process group, migratable process group, and resident process group; and different types of process groups may adopt the corresponding classes of resource scheduling policies described above, which are not repeated here.

Optionally, acquiring the process group information may include receiving a configuration file and parsing the configuration file to obtain the process group information. Receiving the configuration file may be receiving a policy template input by the user, and parsing the configuration file may be parsing the policy template; the policy template may include the required process group information.

Those skilled in the art will appreciate that, besides the type of a process group, processes may also be grouped in other ways. Processes with different functions may be placed in different process groups. The resource scheduling policy corresponding to a process group may also be a resource scheduling policy corresponding to the function of the process group, or a resource scheduling policy corresponding to the function of the group while remaining suited to its type, or a resource scheduling policy corresponding to other attributes of the process group.

Process groups of the same type but different functions may be divided into different process groups, and the resource scheduling policies corresponding to groups of the same type but different functions may be the same or different. The process group information may include function information indicating the function of a process group. This function information may be passed to the resource scheduling apparatus as part of the process group information, or it may not be passed as part of the process group information but instead be reflected in the scheduling policy adopted by the group. The function mentioned here refers to the duties and capabilities of a process group used by the application to complete its business; it is divided according to the business flow and design architecture of the application. For example, for a web-type application, the functions of process groups may be divided into: a database function, for persisting, i.e. storing, data; a logic-layer function, for processing data; and a presentation-layer function, for visualizing data, for example presenting it as text, tables, or graphics. As another example, a scientific-computing application may be divided by function into: control processes, for monitoring, starting, and stopping worker processes; dispatch worker processes, for accepting, reviewing, and distributing computing requests; and computing worker processes, for executing the actual scientific computations. The above functional divisions are merely illustrative and not limiting; when deploying an application onto a cloud, the deployer may divide functions in other ways according to the application's business flow and design architecture.

Optionally, when multiple resource scheduling policies correspond to a process group, resource scheduling of the group is performed according to priority.

Optionally, performing resource scheduling on the process group by using a resource scheduling policy includes: when a parameter related to the process group satisfies the trigger condition of the resource scheduling policy, triggering resource scheduling of the process group.
FIG. 5 is a schematic flowchart of a method for scheduling resources according to yet another embodiment of the present invention. The method shown in FIG. 5 includes the following steps:

Step 510: receive a policy template submitted by the user. The policy template may include, but is not limited to, the following: information about the application, information about the process groups, and information about the resource scheduling policies. The information about the application includes information on the process groups the application comprises, for example which process groups the application includes. The information about the process groups includes the specific information of each group, for example its type, the processes it includes, the identifiers of the resource scheduling policies corresponding to it, the information to be collected as statistics for it, and so on. The information about the resource scheduling policies includes trigger conditions and decision algorithm information. The decision algorithm specifies how scheduling is performed and can be identified by an algorithm name, a script path, a function name, and so on. A decision algorithm may carry a parameter table of the corresponding algorithm, for example the input parameters of the algorithm; the specific content of the parameters may vary from algorithm to algorithm.

According to their mode of operation, decision algorithms can be divided into three broad classes: scaling algorithms, addition/deletion algorithms, and rearrangement algorithms. A scaling algorithm increases or decreases resources by changing the specification of the virtual machine where a process resides, i.e. its host virtual machine. An addition/deletion algorithm increases or decreases resources by adding or deleting processes, occupying or releasing the resources of existing and newly created VMs. A rearrangement algorithm schedules resources by changing the mapping of processes to VMs; it can balance processes across VMs or consolidate processes to ensure VM utilization. Correspondingly, the scaling resource scheduling policy uses a scaling algorithm; the addition/deletion resource scheduling policy uses an addition/deletion algorithm; and the rearrangement resource scheduling policy uses a rearrangement algorithm.
Step 520: parse the policy template to obtain the above information about the application, about the process groups, and about the resource scheduling policies.

Step 530: determine whether there is an unconfigured resource scheduling policy; if so, go to step 540; otherwise, go to step 550.

Step 540: receive the set resource scheduling policy. When setting a policy, the input includes, but is not limited to: the trigger condition, the decision algorithm identifier, and the parameter table. The trigger condition may involve parameters related to the process group, for example statistics of the group such as its CPU utilization or its number of processes. A rule engine or script can be used to implement the setting of trigger conditions. The decision algorithm identifier may be a script name, a function name, another type of module identifier, and so on. The parameter table may be recorded in a database, in memory, or in a file. The setting and matching of scheduling policies can be implemented with a rule engine or a script.

Step 550: acquire the real-time state of the process groups and virtual machines.

Step 560: when a predetermined condition is satisfied, trigger resource scheduling.
Step 570: select a resource scheduling policy according to the type of the process group.

In this embodiment, if the type of the process group is static process group, step 571 is executed to select a scaling algorithm that changes the specification of the VM where the process group is located, and in step 572 the scaled VM specification is decided and returned; then go to step 580. Multiple scaling algorithms may be available and can be selected according to the algorithm identifier in the scheduling policy; the input parameter is the algorithm identifier.

If the type of the process group is dynamic process group, step 573 is executed to select an addition/deletion algorithm that adds or deletes processes, and in step 574 the addition/deletion plan is decided and returned; then go to step 580. In this example there may be multiple addition/deletion algorithms, distinguished by algorithm identifiers; a specific algorithm is selected by inputting the identifier of the algorithm to select. Specifically, take the following four addition/deletion algorithms as examples.

Addition algorithm 1 includes: number to add: 5% of the total number of processes in the group each time;

VM mapping algorithm: Min-load; each newly added process goes on the VM in the VM cluster with the lowest combined CPU and memory utilization;

constraint: critical processes are not assigned to the same VM;

failure handling: when no VM satisfying the constraints can start the process, create one VM using the VM specification most used by this process group and start the process.

Addition algorithm 2 includes:

number to add: 2 processes each time;

VM mapping algorithm: start the same number of VMs as new processes and start the processes on the newly added VMs, using the specification configured in the template.
Deletion algorithm 1 includes:

number to delete: 5% of the total number of processes in the group each time;

deletion selection algorithm (which may delete a VM after its processes are deleted): first select processes on VMs hosting only this process group's processes; next select processes on the VM with the fewest processes; delete empty VMs.

Deletion algorithm 2 includes:

number to delete: not specified;

deletion selection algorithm: first select processes on VMs hosting only this process group's processes; delete the 2 VMs with the lowest utilization together with all processes on them.

If the process group is a migratable process group, step 575 is executed to select a rearrangement algorithm that changes the mapping between processes and the VMs where they reside, and in step 576 the rearranged process mapping plan is decided and returned; then go to step 580. In this example there may be multiple rearrangement algorithms, distinguished by algorithm identifiers; a specific algorithm is selected by inputting the identifier of the algorithm to select. For example, rearrangement algorithm 1 is a balancing rearrangement algorithm, which may migrate processes from the VM with the highest utilization to the VM with the lowest utilization; rearrangement algorithm 2 is a consolidating rearrangement algorithm, which may migrate the processes on the lowest-utilization VM one by one to a VM with utilization as high as possible, subject to the constraint that the expected utilization after the migration merge does not exceed a threshold.
Step 580: execute the selected decision algorithm to schedule the resources.

In this example, a resource scheduling algorithm suited to the type characteristics of each process group was selected for each process type; however, provided that the number of processes in a static process group is not changed and the processes in a resident process group are not migrated, each process group may also adopt other resource scheduling algorithms.

Optionally, in this embodiment, the configured or set resource scheduling policies may also be verified using a preset correspondence between resource scheduling policies and process group types; if the verification passes, go to step 550; if the verification fails, a resource scheduling policy corresponding to the type of the process group may be selected from preset default resource scheduling policies for scheduling.
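The verification-with-fallback step can be sketched as follows: keep a configured policy if its class is compatible with the group type, otherwise fall back to a preset default. The encoding of defaults keyed by type flags is an assumption for illustration.

```python
def resolve_policy(configured_class, defaults, is_static, is_migratable):
    """Keep the configured policy class if it is compatible with the group
    type; otherwise fall back to the preset default for that type.
    Static groups cannot add/delete processes; non-migratable (resident)
    groups cannot be rearranged."""
    compatible = not ((is_static and configured_class == "add_delete")
                      or (not is_migratable and configured_class == "rearrange"))
    if compatible:
        return configured_class
    return defaults[(is_static, is_migratable)]

# Hypothetical default policy classes keyed by (is_static, is_migratable).
DEFAULTS = {
    (True, False): "scale",
    (True, True): "scale",
    (False, False): "add_delete",
    (False, True): "rearrange",
}
```
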
A specific example is used below to explain how resource scheduling is performed using the method for scheduling resources of the embodiments of the present invention. In this example, an application App1 including three process groups is used for illustration. The three process groups are process group 1 (ProcGroup1), process group 2 (ProcGroup2), and process group 3 (ProcGroup3). The type of process group 1 is static process group and resident process group; the resource scheduling policy adopted by process group 1 is scheduling policy 1. The type of process group 2 is dynamic process group and resident process group; the resource scheduling policy adopted by process group 2 is a combination of scheduling policy 2 and scheduling policy 3. The type of process group 3 is dynamic process group and migratable process group; the resource scheduling policy adopted by process group 3 is scheduling policy 4. In this example, a policy template sent by the user to the apparatus for scheduling resources is received. Illustratively, the policy template includes the following content:

1) Application information:

Application identifier: App1

Process group identifier list: [ProcGroup1, ProcGroup2, ProcGroup3];

2) Information of process group 1 (ProcGroup1):

Process group identifier: ProcGroup1

Static process group: yes; indicates that process group 1 is a static process group

Migratable process group: no; indicates that process group 1 is a resident process group

Process identifier list: [CtrlProc1, CtrlProc2]; indicates that ProcGroup1 includes two processes, whose process identifiers are CtrlProc1 and CtrlProc2

Scheduling policy identifier list: scheduling policy 1 (SchedPolicy1); indicates that the scheduling policy adopted by process group 1 is scheduling policy 1;

Process group statistics: Avarage_CPU_Load; indicates that the average CPU utilization of the group is collected;
3 ) 进程组 2 (ProcGroup2 ) 的信息:
进程组标识: ProcGroup2
是否静态进程组: 否; 表示进程组 2是动态进程组 是否可迁移进程组: 否; 表示进程组 2是驻留进程组
进程标识列表: WorkerProc#; 表示 ProcGroup2中包括的进程的进程标 识为 WorkerProc, WorkerProcl, WorkerProc2,…
调度策略标识列表: [调度策略 2, 调度策略 3] ( [SchedPolicy2, SchedPolicy3] ) ; 表示进程组 2采用的调度策略为调度策略 2和调度策略 3的 组合;
进程组统计信息: Avarage_CPU_Load; 表示统计该进程组的平均 CPU 利用率;
4) 进程组 3 (ProcGroup3 ) 的信息:
进程组标识: ProcGroup3
是否静态进程组: 否; 表示进程组 3是动态进程组
是否可迁移进程组: 是; 表示进程组 3是可迁移进程组
进程标识列表: Procname#; 表示 ProcGroup3中所包括的进程的进程标 识为 Procname, Procnamel, Procname2,***
调度策略标识列表: 调度策略 4 (SchedPolicy4) ; 表示进程组 3所采用 的调度策略为调度策略 4;
进程组统计信息: Avarage_CPU_Load; 表示统计该进程组的平均 CPU 利用率;
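The template above is structured data that the apparatus later parses. Purely as an illustrative sketch (the patent does not mandate a concrete file format, and the field names below are ours), the application-and-group portion could be held in memory as:

```python
# Hypothetical in-memory form of the policy template; field names mirror
# the example above, not any concrete format mandated by the patent.
template = {
    "app_id": "App1",
    "process_groups": [
        {"id": "ProcGroup1", "static": True,  "migratable": False,
         "processes": ["CtrlProc1", "CtrlProc2"],
         "policies": ["SchedPolicy1"],
         "stats": ["Avarage_CPU_Load"]},
        {"id": "ProcGroup2", "static": False, "migratable": False,
         "processes": "WorkerProc#",   # wildcard: WorkerProc, WorkerProc1, ...
         "policies": ["SchedPolicy2", "SchedPolicy3"],
         "stats": ["Avarage_CPU_Load"]},
        {"id": "ProcGroup3", "static": False, "migratable": True,
         "processes": "Procname#",     # wildcard: Procname, Procname1, ...
         "policies": ["SchedPolicy4"],
         "stats": ["Avarage_CPU_Load"]},
    ],
}
```

A parser would walk this structure to obtain, for each process group, its type flags, its process identifiers, and the scheduling policies bound to it.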
5) Scheduling policy 1 (SchedPolicy1)
Scheduling policy identifier: SchedPolicy1
Trigger condition: "ProcGroup1::Avarage_CPU_Load > 80"; the trigger condition of scheduling policy 1, indicating that resource scheduling is triggered when the average CPU utilization of process group ProcGroup1 exceeds 80%.
Decision algorithm identifier: "ScaleUpDown::ScaleUpAlgo1"; the identifier of the decision algorithm used by scheduling policy 1, indicating that scale-up algorithm 1 (ScaleUpAlgo1) of the scale-up/down algorithm class is used; this algorithm scales up the VMs hosting the process group.
Decision parameter table: { "vmspeclist", "vmSpec1,vmSpec2,vmSpec3" }; the parameters of the algorithm used, here the virtual machine specifications. Virtual machine specifications may typically include small, medium, large, and extra-large. Compared with a VM of a smaller specification, a VM of a larger specification includes more virtual CPUs, a larger virtual memory capacity, a larger disk capacity, and/or more network interface cards. vmSpec1, vmSpec2, and vmSpec3 are concrete specifications, where vmSpec1 is smaller than vmSpec2 and vmSpec2 is smaller than vmSpec3. If the VMs in the current group use specification vmSpec1, executing scheduling policy 1 scales them up to vmSpec2; if the VMs in the current group use specification vmSpec2, executing scheduling policy 1 scales them up to vmSpec3. It is assumed here that the VMs support hot scale-up.
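The specification progression described for scheduling policy 1 amounts to stepping through the ordered specification list; a minimal sketch (the function name is ours, and the already-largest case is handled by staying put):

```python
def next_spec(current, spec_list):
    """Return the next-larger VM spec from the ordered list, or the
    current one if it is already the largest (no further scale-up)."""
    i = spec_list.index(current)
    return spec_list[min(i + 1, len(spec_list) - 1)]

specs = ["vmSpec1", "vmSpec2", "vmSpec3"]
```

For example, `next_spec("vmSpec1", specs)` yields "vmSpec2", matching the behavior of scheduling policy 1 described above.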
6) Scheduling policy 2 (SchedPolicy2)
Scheduling policy identifier: SchedPolicy2
Trigger condition: "ProcGroup2::Avarage_CPU_Load > 80"; the trigger condition of scheduling policy 2, indicating that resource scheduling is triggered when the average CPU utilization of process group ProcGroup2 exceeds 80%.
Decision algorithm identifier: "ScaleOutIn::ScaleOutAlgo1"; indicates that process-adding algorithm 1 (ScaleOutAlgo1) of the add/delete algorithm class is used; this algorithm adds new processes, and the new processes are assigned by a minimum-load (Min-load) algorithm to the VM with the lowest CPU utilization.
Decision parameter table: { "rate", "5%" }; the parameter of process-adding algorithm 1, here indicating that the number of processes added equals 5% of the total number of processes in ProcGroup2. That is, when the process group currently has 100 processes, 5 new processes are added.
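ScaleOutAlgo1's decision (5% more processes, each placed by minimum load) can be sketched as follows; the fixed estimated load increment per new process is an assumption made only so the placement loop has something to update:

```python
import math

def scale_out(group_size, rate, vm_cpu_load):
    """Decide how many processes to add (rate x group size, rounded up)
    and place each one on the VM with the lowest CPU utilization,
    using an assumed per-process load increment for illustration."""
    n_new = math.ceil(group_size * rate)
    per_proc = 5.0                      # assumed load added per new process
    plan = []
    load = dict(vm_cpu_load)            # work on a copy of the estimates
    for _ in range(n_new):
        target = min(load, key=load.get)  # Min-load placement
        plan.append(target)
        load[target] += per_proc
    return plan
```

With a group of 100 processes and a 5% rate, five placements are produced, each going to whichever VM is currently estimated to be least loaded.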
7) Scheduling policy 3 (SchedPolicy3)
Scheduling policy identifier: SchedPolicy3
Trigger condition: "ProcGroup2::Avarage_CPU_Load < 20"; the trigger condition of scheduling policy 3, indicating that resource scheduling is triggered when the average CPU utilization of process group ProcGroup2 falls below 20%.
Decision algorithm identifier: "ScaleOutIn::ScaleInAlgo2"; indicates that process-deleting algorithm 2 (ScaleInAlgo2) of the add/delete algorithm class is used; this algorithm deletes some processes. Specifically, it first finds the VMs that host only processes of this process group, and then deletes the vm_number VMs with the lowest utilization together with all processes on those vm_number VMs (vm_number specifies the number of VMs to delete; its value can be set in the decision parameter table). The VM utilization mentioned here may be the utilization of the resources occupied by the VM, for example CPU utilization, memory utilization, disk utilization, disk IOPS, and/or network IOPS;
Decision parameter table: { "vm_number", "2" }; indicates the number of virtual machines to be deleted by deletion algorithm 2; in this example, vm_number = 2.
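ScaleInAlgo2's VM selection can be sketched as below; the set of VMs that host only this group's processes is assumed to be precomputed, and the names are ours:

```python
def scale_in(vm_load, group_only_vms, vm_number=2):
    """Among VMs that host only this group's processes, pick the
    `vm_number` with the lowest utilization; deleting them implies
    deleting every process on them."""
    candidates = sorted(group_only_vms, key=lambda vm: vm_load[vm])
    return candidates[:vm_number]
```

With `vm_number = 2`, as in the decision parameter table above, the two least-utilized eligible VMs are returned for deletion.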
8) Scheduling policy 4 (SchedPolicy4)
Scheduling policy identifier: SchedPolicy4. Trigger condition: "ProcGroup3::Avarage_CPU_Load < 20"; the trigger condition of scheduling policy 4, indicating that resource scheduling is triggered when the average CPU utilization of process group ProcGroup3 falls below 20%.
Decision algorithm identifier: "Reallocate::ReallocateAlgo2"; indicates that reallocation algorithm 2 (ReallocateAlgo2) of the reallocation (Reallocate) algorithm class is used. Reallocation algorithm 2 corresponds to consolidating reallocation algorithm 1: it migrates processes one by one from the VM with the lowest utilization to VMs with utilization as high as possible, subject to the constraint that the estimated CPU utilization of the target VM does not exceed the CPU utilization upper limit cpu_load_upper. For example, if the CPU utilization of the VM with the currently highest utilization does not exceed cpu_load_upper, the processes on the VM with the lowest utilization can be migrated one by one to that VM; if it does exceed cpu_load_upper but the estimated CPU utilization of the VM with the second-highest utilization does not, the processes can instead be migrated one by one to that second-highest VM.
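A minimal sketch of ReallocateAlgo2's consolidation step, assuming a fixed estimated per-process load increment (`per_proc`) and in-memory maps of VM load and VM-to-process assignments (all names are ours):

```python
def consolidate(vm_load, vm_procs, cpu_load_upper=70.0, per_proc=5.0):
    """Move processes one by one off the lowest-utilization VM onto the
    busiest VM whose estimated load stays within cpu_load_upper."""
    src = min(vm_load, key=vm_load.get)
    moves = []
    for proc in list(vm_procs[src]):
        # Try targets from busiest to idlest, skipping the source itself.
        for dst in sorted(vm_load, key=vm_load.get, reverse=True):
            if dst != src and vm_load[dst] + per_proc <= cpu_load_upper:
                vm_procs[src].remove(proc)
                vm_procs[dst].append(proc)
                vm_load[dst] += per_proc
                moves.append((proc, dst))
                break
    return moves
```

With cpu_load_upper = 70%, a busiest VM already near the limit is skipped and the next-busiest eligible VM receives the migrated processes, mirroring the description above.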
Decision parameter table: { "cpu_load_upper", "70%" }; indicates cpu_load_upper = 70%. After receiving the above policy template, the apparatus for scheduling resources may parse the template to obtain the above information for the cloud application. During parsing, the validity of the template may be verified first. If the template is valid, the cloud application is queried, according to the process identifier information in the provided process identifier list, for the identifier of the VM on which each process runs and the local identifier of the process within that VM, both assigned by the operating system.
Then, according to the preset trigger conditions above, the average CPU utilization of all processes in each process group is computed periodically, and whether the average CPU utilization of each process group satisfies the corresponding trigger condition is monitored.
When a trigger condition is satisfied, the corresponding resource scheduling policy is triggered to schedule resources, for example by invoking the decision algorithm of the resource scheduling policy that corresponds to the satisfied trigger condition.
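The periodic monitoring step can be sketched as a trigger check over the collected statistics; the `(metric key, operator, threshold)` policy shape below is an assumption made for illustration, not the template's literal encoding:

```python
import operator

def check_triggers(stats, policies):
    """Return the identifiers of policies whose trigger condition holds.
    `stats` maps "Group::Metric" keys to current values; each policy
    carries a metric key, a comparison operator, and a threshold."""
    ops = {">": operator.gt, "<": operator.lt}
    fired = []
    for pid, (key, op, threshold) in policies.items():
        if ops[op](stats[key], threshold):
            fired.append(pid)
    return fired

policies = {
    "SchedPolicy1": ("ProcGroup1::Avarage_CPU_Load", ">", 80),
    "SchedPolicy3": ("ProcGroup2::Avarage_CPU_Load", "<", 20),
}
stats = {"ProcGroup1::Avarage_CPU_Load": 85,
         "ProcGroup2::Avarage_CPU_Load": 45}
```

Run periodically, `check_triggers(stats, policies)` names the policies whose decision algorithms should be invoked in this cycle.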
For example, when the average CPU utilization of process group 1 exceeds 80%, scale-up algorithm 1 corresponding to scheduling policy 1 is invoked to schedule resources. Assuming the VMs currently hosting process group 1 use specification vmSpec2, scale-up algorithm 1 scales them up to vmSpec3. Hot scale-up is assumed to be possible here.
When the average CPU utilization of process group 2 exceeds 80%, process-adding algorithm 1 corresponding to scheduling policy 2 is invoked to schedule resources. For example, the algorithm decides to add 5 new processes, deployed by starting 3 on VM1 and 2 on VM2. When the average CPU utilization of process group 2 falls below 20%, process-deleting algorithm 2 corresponding to scheduling policy 3 is invoked to delete the 2 VMs with the lowest utilization and all processes on those 2 VMs.
In this example, the VMs with the lowest utilization are taken to be VM1 and VM2. When the average CPU utilization of process group 3 falls below 20%, reallocation algorithm 2 corresponding to scheduling policy 4 is invoked to migrate the processes of process group 3 from VM1, which has the lowest utilization, to VM2, and then VM1 is deleted.
The scheduling requirement is as follows: when the load of the application rises, new resources are added to guarantee the application's quality of service (QoS, Quality of Service); when the load falls, the amount of resources used by the application is reduced and the utilization of the applied resources is optimized to lower costs. In this embodiment, for process group 1, whose type is static process group and resident process group, resources are added by scaling up the specification of the virtual machines hosting process group 1; for process group 2, whose type is dynamic process group and resident process group, resources are added by adding processes and reduced by deleting processes; for process group 3, whose type is dynamic process group and migratable process group, resource utilization is optimized by migrating processes to VMs with higher utilization. Of course, the method for scheduling resources in this embodiment is only exemplary: a cloud application may include more or fewer process groups; the types of the process groups may be other types or type combinations, for example static process group and migratable process group; and the scheduling policies may use other trigger conditions and/or decision algorithms, for example a trigger condition may be defined by other statistics corresponding to the process group, and a decision algorithm may be another algorithm matched to the characteristics of the process group's type. In addition to the average CPU utilization of the process group, other statistics corresponding to the process group that can be used in a trigger condition may include at least one of the following: the number of virtual machines used by the process group; the number of processes in the process group; the utilization of the virtual machines hosting the process group; the communication bandwidth corresponding to the process group; the network speed corresponding to the process group; and so on.
In accordance with the characteristics of static process groups, the scale-up/down algorithm is suitable for static process groups; it adds or removes resources by changing the specification of the VMs hosting the processes. The decision content of this algorithm includes: determining the identifiers of the VMs to be scaled, and determining the VM specification after the change.
In accordance with the characteristics of dynamic process groups, the add/delete algorithm is suitable for dynamic process groups; it may schedule resources in one of the following ways, for example increasing or decreasing the number of processes, occupying or releasing existing resources, or occupying or releasing the resources of newly created VMs. The algorithm for adding processes may include the following: deciding the number of processes to add, for example 5% of the total number of processes in the process group each time;
and the VM mapping algorithm used, for example a minimum-load (Min-load) mapping algorithm; with this mapping algorithm, the newly added processes can be placed on the VM in the VM cluster with the lowest combined CPU and memory utilization.
Optionally, the algorithm for adding processes may also include constraints and failure handling. Constraints restrict the addition of processes. For example, a constraint may be that VM utilization must not exceed a set percentage, for example 60%, or that key processes may not be assigned to the same VM. Failure handling may be, for example: when no VM satisfying the constraints is available to start the process, a VM is created using the VM specification most commonly used in this process group, and the process is started on it.
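The constraint check and failure handling just described can be sketched together; the returned plan tuple and the 60% limit default are illustrative assumptions:

```python
from collections import Counter

def place_new_process(vm_load, vm_specs, limit=60.0):
    """Start the process on the least-loaded VM whose utilization is
    below `limit`; if no VM qualifies, plan a new VM using the group's
    most common specification (failure handling)."""
    eligible = [vm for vm, load in vm_load.items() if load < limit]
    if eligible:
        return ("existing", min(eligible, key=vm_load.get))
    common_spec = Counter(vm_specs.values()).most_common(1)[0][0]
    return ("new_vm", common_spec)
```

When every VM is above the limit, the plan falls back to creating a VM of the specification most frequently used in the group, as in the failure-handling rule above.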
The algorithm for deleting processes includes the following:
deciding the number to delete, for example 5% of the total number of processes in the process group each time;
and a selection algorithm for the processes to delete, that is, one that selects processes whose deletion allows the hosting VM to be deleted; for example, first select for deletion the processes on VMs that host only processes of this process group, then select the processes on the VM with the fewest processes.
In accordance with the characteristics of migratable process groups, the reallocation algorithm is suitable for migratable process groups. Migrating processes with the reallocation algorithm can change the mapping of processes to VMs, so that processes are balanced across VMs, or so that VM utilization is guaranteed by consolidating processes, that is, placing them together on one or a few VMs. The algorithm may include the following: deciding the source processes and target VMs, for example migrating processes one by one from the VM with the lowest utilization to a VM with high utilization.
Optionally, the reallocation algorithm may also include a constraint, for example: after migration and consolidation, the estimated utilization of the VM must not exceed a predetermined threshold.
Each process group may have multiple scheduling policies, and multiple scheduling policies may be combined according to priority. Exemplarily, they may be combined as follows:
reallocation policy 1 or scale policy 1: may mean that processes are first consolidated, and only if the scheduling effect is not achieved is a certain number of processes then deleted;
scale policy 2 and reallocation policy 2: may mean that processes are first placed on newly added VMs, and the processes are then balanced across the VMs.
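A hedged sketch of priority-based combination: an "or" combination stops at the first policy that achieves the scheduling effect, while an "and" combination runs the policies in order. The callable-step shape, returning (success, new_state), is an assumption for illustration:

```python
def run_combined(policies, state):
    """Apply a priority-ordered policy combination. `policies` is
    (mode, steps): "or" stops at the first successful step, "and"
    runs every step in order."""
    mode, steps = policies
    for step in steps:
        ok, state = step(state)
        if mode == "or" and ok:
            break
    return state

# Toy steps: consolidation succeeds only when there are idle VMs to free.
def consolidate_step(s):
    freed = s["idle_vms"] > 0
    return freed, {**s, "idle_vms": 0}

def delete_step(s):
    return True, {**s, "procs": s["procs"] - s["procs"] // 20}

# "reallocation policy 1 or scale policy 1": try consolidation first,
# delete processes only if consolidation achieved nothing.
state = run_combined(("or", [consolidate_step, delete_step]),
                     {"idle_vms": 2, "procs": 100})
```

Here consolidation frees the idle VMs, so the "or" chain stops before any processes are deleted.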
Fig. 6 is a schematic diagram of an apparatus for scheduling resources according to an embodiment of the present invention; the apparatus of this embodiment corresponds to the method shown in Fig. 1. As shown in Fig. 6, the apparatus 600 for scheduling resources of this embodiment includes: an obtaining module 610, configured to obtain process group information, the process group information including information indicating a process group obtained by grouping processes in a cloud application; and a scheduling module 620, configured to perform resource scheduling on the process group by using a resource scheduling policy corresponding to the process group. The apparatus 600 can perform the steps of the method for scheduling resources described above, which are not repeated here. Optionally, the process group information may further include: information indicating the resource scheduling policy corresponding to the process group.
Optionally, the process group information may further include: information indicating the type of the process group; in that case the scheduling module 620 is configured to perform resource scheduling on the process group by using a resource scheduling policy corresponding to the type of the process group.
The scheduling module is configured to:
if the type of the process group is static process group, perform resource scheduling by using a scale-up/down resource scheduling policy, the scale-up/down resource scheduling policy being used to change the specification of the virtual machines hosting the process group;
if the type of the process group is dynamic process group, perform resource scheduling by using an add/delete resource scheduling policy, the add/delete resource scheduling policy being used to add or delete processes;
if the type of the process group is migratable process group, perform resource scheduling by using a reallocation resource scheduling policy, the reallocation resource scheduling policy being used to change the mapping of processes to virtual machines.
The scheduling module may further be configured to:
if the type of the process group is static process group and migratable process group, perform resource scheduling by using at least one of the scale-up/down resource scheduling policy and the reallocation resource scheduling policy;
if the type of the process group is dynamic process group and migratable process group, perform resource scheduling by using at least one of the add/delete resource scheduling policy and the reallocation resource scheduling policy.
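The type-to-policy dispatch implemented by the scheduling module can be sketched as follows; the policy-family names are ours, chosen to mirror the three policy classes above:

```python
def policies_for(static, migratable):
    """Map a process group's type combination to the applicable
    resource scheduling policy families."""
    policies = set()
    # Static groups keep their process count, so resources change via
    # VM specification; dynamic groups change via adding/deleting processes.
    policies.add("scale_up_down" if static else "add_delete")
    if migratable:
        policies.add("reallocate")   # migratable groups may also be reallocated
    return policies
```

For a static and migratable group, at least one of the returned families is used; the same holds for a dynamic and migratable group, matching the combined cases above.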
Optionally, in the apparatus of this embodiment of the present invention, the obtaining module may include: a receiving unit, configured to receive a configuration file; and a parsing unit, configured to parse the configuration file to obtain the process group information.
Optionally, in the apparatus of this embodiment of the present invention, the scheduling module may further include: a selecting unit, configured to select, according to priority, the resource scheduling policy to use when there are multiple resource scheduling policies corresponding to the process group.
Fig. 7 is a schematic diagram of an apparatus 700 for scheduling resources according to another embodiment of the present invention; the apparatus of this embodiment corresponds to the method shown in Fig. 3. In this embodiment, compared with the apparatus in Fig. 6, in the apparatus 700 for scheduling resources, the process group information obtained by the obtaining module 710 further includes information indicating the type of the process group. In addition to the obtaining module 710 and the scheduling module 720, the apparatus of this embodiment further includes a determining module 730, configured to determine, according to the type of the process group, the resource scheduling policy corresponding to the process group; the scheduling module 720 then performs resource scheduling on the process group by using the determined resource scheduling policy.
According to the method and apparatus for scheduling resources of the embodiments of the present invention, based on the grouping of cloud application processes and the use of scheduling policies corresponding to the process groups, a scheduling policy matched to the function or type of a process group can be adopted for that group, enabling more flexible and more effective scheduling, providing dynamic resource scaling suited to many types of application processes, and making the method applicable to different types of cloud applications. Moreover, because scheduling is performed on a per-process-group basis, and resources can be added, deleted, and/or reallocated on a per-process basis, finer-grained dynamic resource adjustment can be achieved.
The apparatus for scheduling resources described with reference to the examples disclosed in this application may be embodied directly as hardware, as a software module executed by a processor, or as a combination of the two.
For example, an apparatus for scheduling resources may include:
a memory, configured to store instructions; and
a processor, coupled to the memory and configured to execute the instructions stored in the memory, where the processor is configured to:
obtain process group information, the process group information including information indicating a process group obtained by grouping processes in a cloud application; and
perform resource scheduling on the process group by using a resource scheduling policy corresponding to the process group. Optionally, the process group information further includes: information indicating the type of the process group; in that case the processor is further configured to:
determine, according to the type of the process group, the resource scheduling policy corresponding to the process group; and perform resource scheduling on the process group by using the determined resource scheduling policy.
Optionally, the processor is further configured to:
receive a configuration file; and
parse the configuration file to obtain the process group information.
Optionally, the processor may be further configured to:
if the type of the process group is static process group, perform resource scheduling by using a scale-up/down resource scheduling policy, the scale-up/down resource scheduling policy being used to change the specification of the virtual machines hosting the process group;
if the type of the process group is dynamic process group, perform resource scheduling by using an add/delete resource scheduling policy, the add/delete resource scheduling policy being used to add or delete processes;
if the type of the process group is migratable process group, perform resource scheduling by using a reallocation resource scheduling policy, the reallocation resource scheduling policy being used to change the mapping of processes to virtual machines. Optionally, the processor may be further configured to:
if the type of the process group is static process group and migratable process group, perform resource scheduling by using at least one of the scale-up/down resource scheduling policy and the reallocation resource scheduling policy;
if the type of the process group is dynamic process group and migratable process group, perform resource scheduling by using at least one of the add/delete resource scheduling policy and the reallocation resource scheduling policy.
Optionally, the processor may be further configured to:
when there are multiple resource scheduling policies corresponding to the process group, select, according to priority, the resource scheduling policy to use.
A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium; alternatively, the storage medium may be an integral part of the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in the memory of a mobile terminal, or in a memory card insertable into the mobile terminal. For example, if the mobile terminal uses a large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module may be stored in that MEGA-SIM card or large-capacity flash memory device.
One or more of the functional blocks described with reference to Fig. 6 and Fig. 7, and/or one or more combinations of the functional blocks (for example, the obtaining module 610/710, the determining module 730, and the scheduling module 620/720), may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof, for performing the functions described in this application. One or more of the functional blocks described with reference to Fig. 6 and Fig. 7, and/or one or more combinations of the functional blocks, may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP, or any other such configuration.
Although this application describes specific examples of the present invention, a person of ordinary skill in the art can devise variations of the present invention without departing from the inventive concept. For example, the resource scheduling policy information corresponding to a process group in the embodiments of the present invention is not limited to being provided by the user of the cloud resources; it may also be provided by the provider of the cloud resources and invoked when resources are scheduled. Furthermore, when the information needed for resource scheduling is provided by the user, the user may provide it in ways other than submitting a policy template, for example by directly providing a configuration file in which the needed information is recorded.
As another example, in the method for scheduling resources of the embodiments of the present invention, the apparatus for scheduling resources may further include a monitoring module, configured to monitor parameters related to a process group in order to determine whether the parameters satisfy the trigger condition of the corresponding scheduling policy. The monitoring function may also be performed by a separate monitoring apparatus, independent of the apparatus for scheduling resources; the apparatus for scheduling resources then only needs to decide, according to the information provided by the monitoring apparatus on whether the trigger condition is satisfied, whether to trigger the decision algorithm corresponding to the scheduling policy. The related parameter information monitored by the monitoring module or monitoring apparatus may include at least one of CPU information, memory information, disk information, and/or network information. Specifically, for a process in a process group, the monitored statistics may be: the CPU time occupied by the process, the CPU utilization of the process, the memory usage of the process, the disk input/output operations per second (IOPS, Input/Output Per Second) of the process, and the network input/output per second of the process. For a VM, the monitored statistics may be: the CPU utilization of the VM, the memory utilization of the VM, the disk IOPS of the VM, and the network IOPS of the VM. For a process group, the monitored information may be statistics over the related information of all processes in the process group, such as the sum, average, maximum, minimum, or count of the above per-process statistics; for example, the average CPU utilization of all processes in the process group, the total memory occupied by all processes in the process group, the maximum disk IOPS among all processes in the process group, and so on. The IOPS mentioned above may be read and/or write IOPS. When scheduling resources by the method of the embodiments of the present invention, appropriate monitoring information can be selected as needed.
A person skilled in the art should understand that various modifications and changes can be made to the methods and apparatuses disclosed in the embodiments of the present invention without departing from the essence of the invention, and such modifications and changes shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention is defined by the appended claims.
In the claims, the term "comprising" does not exclude the presence of other elements or steps. Furthermore, although listed individually, multiple means, elements, or method steps may be implemented by, for example, a single unit or processor. In addition, although individual features may be included in different claims, these features may also be advantageously combined, and their inclusion in different claims does not imply that a combination of features is not feasible and/or not advantageous.

Claims

1. A method for scheduling resources, comprising:
obtaining process group information, the process group information comprising: information indicating a process group obtained by grouping processes in a cloud application; and
performing resource scheduling on the process group by using a resource scheduling policy corresponding to the process group.
2. The method according to claim 1, wherein the process group information further comprises: information indicating the resource scheduling policy corresponding to the process group.
3. The method according to claim 2, wherein
the process group information further comprises: information indicating the type of the process group; and
the resource scheduling policy corresponding to the process group is: a resource scheduling policy corresponding to the type of the process group.
4. The method according to claim 1, wherein
the process group information further comprises: information indicating the type of the process group; and, before the resource scheduling is performed on the process group by using the resource scheduling policy corresponding to the process group, the method further comprises: determining, according to the type of the process group, the resource scheduling policy corresponding to the process group;
and, when resource scheduling is performed on the process group, the determined resource scheduling policy is used.
5. The method according to claim 1, wherein the obtaining process group information comprises:
receiving a configuration file; and
parsing the configuration file to obtain the process group information.
6. The method according to claim 3 or 4, wherein the performing resource scheduling on the process group by using the resource scheduling policy corresponding to the process group comprises: if the type of the process group is static process group, performing resource scheduling by using a scale-up/down resource scheduling policy, the scale-up/down resource scheduling policy being used to change the specification of the virtual machines hosting the process group;
if the type of the process group is dynamic process group, performing resource scheduling by using an add/delete resource scheduling policy, the add/delete resource scheduling policy being used to add or delete processes;
if the type of the process group is migratable process group, performing resource scheduling by using a reallocation resource scheduling policy, the reallocation resource scheduling policy being used to change the mapping of processes to virtual machines.
7. The method according to claim 6, wherein the performing resource scheduling on the process group by using the resource scheduling policy corresponding to the process group further comprises:
if the type of the process group is static process group and migratable process group, performing resource scheduling by using at least one of the scale-up/down resource scheduling policy and the reallocation resource scheduling policy;
if the type of the process group is dynamic process group and migratable process group, performing resource scheduling by using at least one of the add/delete resource scheduling policy and the reallocation resource scheduling policy.
8. The method according to claim 6, wherein the performing resource scheduling on the process group by using the resource scheduling policy corresponding to the process group comprises:
when there are multiple resource scheduling policies corresponding to the process group, selecting, according to priority, the resource scheduling policy to use.
9. An apparatus for scheduling resources, comprising:
an obtaining module, configured to obtain process group information, the process group information comprising: information indicating a process group obtained by grouping processes in a cloud application; and
a scheduling module, configured to perform resource scheduling on the process group by using a resource scheduling policy corresponding to the process group.
10. The apparatus according to claim 9, wherein the process group information further comprises: information indicating the resource scheduling policy corresponding to the process group.
11. The apparatus according to claim 10, wherein
the process group information further comprises: information indicating the type of the process group; and the scheduling module is configured to perform resource scheduling on the process group by using a resource scheduling policy corresponding to the type of the process group.
12. The apparatus according to claim 9, wherein
the process group information further comprises: information indicating the type of the process group; the apparatus further comprises: a determining module, configured to determine, according to the type of the process group, the resource scheduling policy corresponding to the process group;
and the scheduling module performs resource scheduling on the process group by using the determined resource scheduling policy.
13. The apparatus according to claim 9, wherein the obtaining module comprises: a receiving unit, configured to receive a configuration file; and
a parsing unit, configured to parse the configuration file to obtain the process group information.
14. The apparatus according to claim 11 or 12, wherein the scheduling module is configured to:
if the type of the process group is static process group, perform resource scheduling by using a scale-up/down resource scheduling policy, the scale-up/down resource scheduling policy being used to change the specification of the virtual machines hosting the process group;
if the type of the process group is dynamic process group, perform resource scheduling by using an add/delete resource scheduling policy, the add/delete resource scheduling policy being used to add or delete processes;
if the type of the process group is migratable process group, perform resource scheduling by using a reallocation resource scheduling policy, the reallocation resource scheduling policy being used to change the mapping of processes to virtual machines.
15. The apparatus according to claim 14, wherein the scheduling module is configured to: if the type of the process group is static process group and migratable process group, perform resource scheduling by using at least one of the scale-up/down resource scheduling policy and the reallocation resource scheduling policy;
if the type of the process group is dynamic process group and migratable process group, perform resource scheduling by using at least one of the add/delete resource scheduling policy and the reallocation resource scheduling policy.
16. The apparatus according to claim 14, wherein the scheduling module comprises: a selecting unit, configured to select, according to priority, the resource scheduling policy to use when there are multiple resource scheduling policies corresponding to the process group.
17. An apparatus for scheduling resources, comprising:
a memory, configured to store instructions; and
a processor, coupled to the memory and configured to execute the instructions stored in the memory, wherein the processor is configured to:
obtain process group information, the process group information comprising: information indicating a process group obtained by grouping processes in a cloud application; and
perform resource scheduling on the process group by using a resource scheduling policy corresponding to the process group.
18. The apparatus according to claim 17, wherein the process group information further comprises: information indicating the resource scheduling policy corresponding to the process group.
19. The apparatus according to claim 18, wherein
the process group information further comprises: information indicating the type of the process group; and the resource scheduling policy corresponding to the process group is: a resource scheduling policy corresponding to the type of the process group.
20. The apparatus according to claim 17, wherein
the process group information further comprises: information indicating the type of the process group; and the processor is further configured to:
determine, according to the type of the process group, the resource scheduling policy corresponding to the process group; and perform resource scheduling on the process group by using the determined resource scheduling policy.
21. The apparatus according to claim 17, wherein the processor is further configured to: receive a configuration file; and
parse the configuration file to obtain the process group information.
22. The apparatus according to claim 19 or 20, wherein the processor is further configured to:
if the type of the process group is static process group, perform resource scheduling by using a scale-up/down resource scheduling policy, the scale-up/down resource scheduling policy being used to change the specification of the virtual machines hosting the process group;
if the type of the process group is dynamic process group, perform resource scheduling by using an add/delete resource scheduling policy, the add/delete resource scheduling policy being used to add or delete processes;
if the type of the process group is migratable process group, perform resource scheduling by using a reallocation resource scheduling policy, the reallocation resource scheduling policy being used to change the mapping of processes to virtual machines.
23. The apparatus according to claim 22, wherein the processor is further configured to:
if the type of the process group is static process group and migratable process group, perform resource scheduling by using at least one of the scale-up/down resource scheduling policy and the reallocation resource scheduling policy;
if the type of the process group is dynamic process group and migratable process group, perform resource scheduling by using at least one of the add/delete resource scheduling policy and the reallocation resource scheduling policy.
24. The apparatus according to claim 22, wherein the processor is further configured to:
when there are multiple resource scheduling policies corresponding to the process group, select, according to priority, the resource scheduling policy to use.
25. A machine-readable storage medium storing machine-executable instructions which, when executed, cause a machine to perform the steps of any one of claims 1 to 8.
PCT/CN2012/072939 WO2013139037A1 (zh) 2012-03-23 2012-03-23 Method and apparatus for scheduling resources

Publications (1)

Publication Number Publication Date
WO2013139037A1 true WO2013139037A1 (zh) 2013-09-26


Also Published As

Publication number Publication date
CN103503412A (zh) 2014-01-08
CN103503412B (zh) 2017-06-20

