WO2016011953A1 - Scheduling of service resource - Google Patents

Scheduling of service resource Download PDF

Info

Publication number
WO2016011953A1
Authority
WO
WIPO (PCT)
Prior art keywords
resource
allocable
nodes
node
user
Prior art date
Application number
PCT/CN2015/084851
Other languages
French (fr)
Inventor
Zhenfeng Lv
Songer SUN
Original Assignee
Hangzhou H3C Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou H3C Technologies Co., Ltd. filed Critical Hangzhou H3C Technologies Co., Ltd.
Publication of WO2016011953A1 publication Critical patent/WO2016011953A1/en

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 - Indexing scheme relating to G06F9/00
    • G06F2209/50 - Indexing scheme relating to G06F9/50
    • G06F2209/501 - Performance criteria

Abstract

An OpenStack controller periodically detects performance parameters of multiple resource nodes and determines allocable resource nodes among the resource nodes according to the detected performance parameters. The controller allocates, from the allocable resource nodes, an optimum resource node for a user according to a preset scheduling policy upon reception of a resource acquiring request of the user, and returns the optimum resource node to the user.

Description

SCHEDULING OF SERVICE RESOURCE
Field
Background
OpenStack is an emerging open-source software architecture that provides a cloud computing management solution and aims to build an open, extensible framework with which the various resources in a cloud environment are managed. Owing to its open-source nature and accessibility, the OpenStack architecture is supported by most major vendors and has become a widely applied open-source cloud computing solution.
Summary
Brief Description of the Drawings
Fig. 1 illustrates a schematic diagram of a hardware structure of a system in an example;
Fig. 2 illustrates a framework diagram of a controller in an example;
Fig. 3 illustrates a framework diagram of another system in an example; and
Fig. 4 illustrates a flow chart of a method of scheduling service resources in an example.
Detailed Description of the Embodiments
The present disclosure relates to a cloud computing system which enables computing, storage and other resources to be provided to users over the Internet. In some cases a cloud computing system may be used by a plurality of users, and resources may be managed and dynamically reallocated among users by the system controller according to the demand for resources at any one time. The following description refers to the OpenStack system, but the principles disclosed herein may also be applied to other cloud computing systems and controllers. The modular architecture provided by OpenStack typically covers the various resources in a cloud environment, such as computing, storage and network, to provide an integral framework solution. In the cloud environment, a cloud computing data center typically provides a user with a virtual host lease service, and since there is no physical boundary for the virtual host lease service, it is difficult for the user to deploy a physical network security device on her or his own. Hence, the cloud computing data center needs to provide the user with network security solutions such as Firewall as a Service (FWaaS).
For example, the user can operate directly on a user interface provided by OpenStack to purchase a virtual firewall, and after the user purchases the virtual firewall, an OpenStack console may automatically connect with a physical firewall to create the virtual firewall for the user and to configure it correspondingly. After the virtual firewall is configured, the user can perform service management on her or his own virtual firewall.
Reference can be made to Fig. 1, which illustrates an OpenStack system framework in an example of this disclosure. In this framework, service modules such as an FWaaS 101, a Virtual Private Network as a Service (VPNaaS) 102, and a Load Balance as a Service (LBaaS) 103 are preliminarily defined at present, and these service modules can be controlled centrally to provide the user with a relatively complete network security solution.
As illustrated in Fig. 1, various security services, e.g., Firewall, LB, VPN, etc., are all managed and distributed centrally by the OpenStack controller as service resources in the OpenStack framework. However, when a pool of service resources is formed of multiple resource nodes, the OpenStack framework provides no mechanism to select and allocate an optimum resource from the pool. Thus, the resources may be allocated inappropriately and consequently underutilized; for example, some resource nodes may be overloaded while other resource nodes stay idle.
In an example of the disclosure, in addition to the above OpenStack framework, a logic for scheduling the service resources, which operates on the OpenStack controller, determines allocable resource nodes among the respective resource nodes by periodically detecting their performance parameters, and allocates an optimum resource node from the allocable resource nodes for the user according to a preset scheduling policy. This optimizes the allocation of the service resources in the OpenStack framework and schedules the resources flexibly and dynamically, so as to improve the utilization ratio of the resources.
The OpenStack controller may be embodied in a hardware structure as illustrated in Fig. 2. As shown in Fig. 2, the OpenStack controller 20 may include a processor (CPU) 210, a machine-readable storage medium 220, and a network interface 230, all of which are connected with each other through an internal bus 240.
The machine-readable storage medium 220 is configured to store instruction codes. When the instruction codes are executed by the CPU 210, operations generally including a function of scheduling service resources are performed.
The CPU 210 communicates with the machine-readable storage medium 220 to read and execute the instruction codes stored in the machine-readable storage medium 220 to achieve the function of scheduling the service resources.
The machine-readable storage medium 220 may be any electronic, magnetic, optical or other physical storage device which can contain or store information such as executable instructions, data, etc. For example, the machine-readable storage medium 220 may be a Random Access Memory (RAM), a volatile memory, a nonvolatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid-state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.) or the like, or any combination thereof. Moreover, any machine-readable storage medium described in the disclosure can be non-transitory.
Reference can be made to Fig. 3, which illustrates an OpenStack framework in another example of the disclosure. In addition to the OpenStack framework illustrated in Fig. 1, the OpenStack framework illustrated in this example includes a logic for scheduling service resources (abbreviated as a scheduling logic hereinafter) 301, which is arranged in the OpenStack controller and is for receiving an upper-layer Application Programming Interface (API) call and allocating the most appropriate resource node for the user flexibly and dynamically according to a preset scheduling policy.
Particularly, the CPU of the OpenStack controller reads and executes the logic for scheduling service resources 301 in the machine-readable storage medium in an operational flow as illustrated in Fig. 4.
Block 410 is for periodically detecting performance parameters of respective resource nodes.
Block 420 is for determining allocable resource nodes among the respective resource nodes according to the detected performance parameters.
Block 430 is for allocating an optimum resource node from the allocable resource nodes for a user according to a preset scheduling policy upon reception of a resource acquiring request of the user, and returning the optimum resource node to the user.
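Read together, blocks 410 to 430 describe one scheduling cycle. The snippet below is a minimal Python sketch of such a cycle; the dict-based node representation, the field names and the 0.7 threshold are assumptions made purely for illustration and are not part of any OpenStack API.

    # Minimal sketch of blocks 410-430; node layout, field names and threshold are assumed.
    def scheduling_cycle(nodes, threshold=0.7):
        # Block 410: here each node is a plain dict already carrying its latest samples;
        # a real controller would poll the nodes periodically instead.
        metrics = list(nodes)

        # Block 420: a node is allocable only if both utilization ratios stay within the threshold.
        allocable = [n for n in metrics
                     if n["cpu"] <= threshold and n["mem"] <= threshold]
        if not allocable:
            return None  # no node can currently serve the request

        # Block 430: pick the optimum node (a smaller priority value means a higher priority,
        # then lower service traffic, then shorter response time) and return it to the caller.
        return min(allocable,
                   key=lambda n: (n["priority"], n["traffic"], n["response_ms"]))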
In the example, the OpenStack controller executing the scheduling logic can receive the resource acquiring request from the user. In an implementation, the resource acquiring request may be a resource acquiring request message sent by the user operating on a user interface provided by OpenStack. The resource acquiring request message may include a service creation request and even a performance requirement set by the user for a service. For example, the user can operate on the user interface provided by OpenStack to send a resource acquiring request message, to request the OpenStack controller for a firewall service with a required processing capacity of 1 Gbps.
The OpenStack controller executing the scheduling logic can create a corresponding service and allocate an appropriate resource node for the user upon reception of the resource acquiring request message sent by the user operating on the user interface provided by OpenStack, and return a determination result to the user once the optimum resource node for the user has been determined by the OpenStack controller.
In the example, the OpenStack controller executing the scheduling logic can periodically detect different performance parameters of the respective resource nodes in the pool of resources, so as to monitor in real time the loads and operating states of the respective resource nodes in the pool of resources. For example, the OpenStack framework typically includes service modules adapted to provide the user with a network security solution, such as FWaaS, VPNaaS, LBaaS, etc., so the OpenStack controller executing the scheduling logic can periodically detect the different performance parameters of the respective resource nodes in the pool of resources and thereby monitor their loads and operating states in real time.
Particularly, when the OpenStack controller executing the scheduling logic periodically detects the different performance parameters of the respective resource nodes in the pool of resources, the performance parameters may be preconfigured performance parameters, or may be performance parameters determined according to a particular performance requirement which is required by the user for the service resource and is carried in the resource acquiring request message; this is not specifically limited in this embodiment.
For example, in an example of this disclosure, the performance parameters may include the CPU utilization ratios and memory utilization ratios of the resource nodes, and other service performance parameters which can characterize the resource availability of the resource nodes; the service performance parameters may include parameters such as service traffic, service response time, etc., where a resource node with lower service traffic or a shorter service response time has higher resource availability.
In the example, the OpenStack controller executing the scheduling logic may determine the allocable resource nodes among the respective resource nodes according to the result of periodically detecting the different performance parameters of the respective resource nodes in the pool of resources, and allocate an appropriate resource node from the allocable resource nodes for the user according to a preset resource scheduling policy after determining the allocable resource nodes.
In the example, the OpenStack controller executing the scheduling logic may allocate the appropriate resource node from the allocable resource nodes for the user according to the preset resource scheduling policy, by comparing at least one of the CPU utilization ratio and the memory utilization ratio of each resource node with a threshold and categorizing the respective resource nodes into allocable resource nodes and non-allocable resource nodes. For example, the OpenStack controller executing the scheduling logic may firstly determine whether at least one of the CPU utilization ratio and the memory utilization ratio of each resource node is above the preset threshold; if at least one of the CPU utilization ratio and the memory utilization ratio of a resource node is above the preset threshold, the OpenStack controller categorizes that resource node as a non-allocable resource node; otherwise, the OpenStack controller categorizes the resource node as an allocable resource node.
The preset threshold is not specifically limited in this disclosure and may be preset according to the actual needs of the user. For example, if the user intends to request a firewall service with a processing capacity of 1 Gbps, and the processing capacity of a resource node cannot satisfy the requirement of 1 Gbps when the CPU utilization ratio of the resource node is above 70%, the threshold can be set to 70%.
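A small sketch of this categorization step is given below, reusing the assumed dict-based node representation; the 70% figure mirrors the example above and the helper name is illustrative only.

    # Categorize nodes into allocable and non-allocable sets (threshold assumed, e.g. 0.7).
    def categorize(nodes, threshold=0.7):
        allocable, non_allocable = [], []
        for node in nodes:
            # A node is non-allocable if either the CPU or the memory ratio exceeds the threshold.
            if node["cpu"] > threshold or node["mem"] > threshold:
                non_allocable.append(node)
            else:
                allocable.append(node)
        return allocable, non_allocable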
In the example, the non-allocable resource nodes may not be allocated; and upon reception of the resource acquiring request message from the user, the OpenStack controller executing the scheduling logic dynamically and flexibly allocates one of the allocable resource nodes for the user according to the priorities and the service performance parameters of the respective allocable resource nodes, or other performance parameters directly influencing the service.
The OpenStack controller executing the scheduling logic may allocate from the allocable resource nodes by firstly comparing the priorities of the respective allocable resource nodes and then allocating the resource node having the highest priority for the user as the optimum resource node. If there are multiple resource nodes having the highest priority among the allocable resource nodes, the service performance parameters of those resource nodes may be further compared and their respective resource availabilities acquired, to determine the optimum resource node to be allocated for the user.
For example, in an example of the disclosure, if there are multiple resource nodes having the highest priority among the allocable resource nodes, the OpenStack controller executing the scheduling logic may further compare the service traffics of the multiple resource nodes having the highest priority, select the resource node having the lowest service traffic among them, and allocate the selected resource node for the user.
If the multiple resource nodes having the highest priority have the same service traffic, the OpenStack controller executing the scheduling logic may further compare the service response times of those resource nodes, select the resource node having the shortest response time from the resource nodes having the highest priority and the same service traffic, and allocate the selected node for the user.
If the multiple resource nodes having the highest priority still have the same service response time, the OpenStack controller executing the scheduling logic may allocate one of those resource nodes for the user at random, or may further compare other performance parameters which can characterize the availabilities of the service resources until an appropriate resource node is selected and allocated for the user.
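The priority, traffic and response-time tie-breaking described above can be sketched as follows, again with assumed field names; a smaller priority value is taken to mean a higher priority, as in the worked example later in this description.

    import random

    # Select the optimum node: highest priority first, then lowest service traffic,
    # then shortest response time; nodes that are still tied are chosen at random.
    def select_optimum(allocable):
        key = lambda n: (n["priority"], n["traffic"], n["response_ms"])
        best_key = min(key(n) for n in allocable)
        tied = [n for n in allocable if key(n) == best_key]
        return random.choice(tied)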
In the above examples, the OpenStack controller executing the scheduling logic monitors in real time the performance parameters of the respective resource nodes, and allocates the appropriate resource node for the user according to the preset scheduling policy, thereby optimizing the allocation of the service resources in the OpenStack architecture and scheduling the resources flexibly and dynamically, so as to improve the utilization ratio of the resources without causing a situation where some resource nodes are overloaded while other resource nodes stay idle.
In another example of the disclosure, there is further illustrated a particular application.
Reference is still made to Fig. 3. If there are four firewall resource nodes, denoted as A, B, C and D respectively, in the current pool of resources in the OpenStack architecture illustrated in Fig. 3, a user requests a firewall service with a processing capacity of 1 Gbps by inputting the following message:
Firewalls Request (requesting a firewall service):
GET /v2.0/fw/firewalls.json
User-Agent: python-neutronclient
Accept: application/json
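For illustration only, an equivalent request could be issued programmatically. The sketch below uses the Python requests library rather than python-neutronclient; the endpoint address and token are placeholders, and only the path and headers are taken from the message above.

    import requests

    NEUTRON_URL = "http://controller:9696"   # placeholder Neutron endpoint
    TOKEN = "<keystone-token>"                # placeholder authentication token

    resp = requests.get(
        NEUTRON_URL + "/v2.0/fw/firewalls.json",
        headers={
            "Accept": "application/json",
            "User-Agent": "python-neutronclient",
            "X-Auth-Token": TOKEN,            # OpenStack APIs expect a Keystone token here
        },
    )
    print(resp.status_code, resp.json())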
After the OpenStack controller executing the above-mentioned scheduling logic obtains the input message, the OpenStack controller may allocate an appropriate resource node for the user according to a preset scheduling policy. The OpenStack controller may allocate the appropriate resource node for the user by monitoring in real time performance parameters such as the CPU utilization ratios, memory utilization ratios, service traffics, service response times, etc., of the respective firewall resource nodes, determining allocable resource nodes according to the monitoring result, and then allocating an appropriate resource node, from the allocable resource nodes, for the user.
If both the CPU utilization ratio and the memory utilization ratio of a firewall resource node are above 70%, the processing capacity of that firewall resource node cannot satisfy the requirement of 1 Gbps; hence, the respective firewall resource nodes can be categorized into allocable firewall resource nodes and non-allocable firewall resource nodes by using a threshold of 70%.
If both the CPU utilization ratio and the memory utilization ratio of the firewall resource node A in the pool of resources are above 70%, and both the CPU utilization ratios and the memory utilization ratios of the firewall resource nodes B, C and D are below 70%, the firewall resource node A is a non-allocable firewall resource node, and the firewall resource nodes B, C and D are allocable firewall resource nodes.
After the firewall resource nodes are categorized, the OpenStack controller allocates an optimum firewall resource node, from the allocable firewall resource nodes B, C and D, for the user. The OpenStack controller may allocate the optimum firewall resource node by firstly comparing priorities of the firewall resource nodes B, C and D, and selecting and allocating a firewall resource node having the highest priority for the user.
If the priorities of the firewall resource nodes B, C and D are 1, 2 and 2 respectively, the OpenStack controller may allocate the firewall resource node B for the user. Here, a smaller value of the priority indicates a higher priority.
If the priorities of the firewall resource nodes B, C and D are 1, 1 and 2 respectively, then the service traffics of the resource nodes B and C are further compared; and if the resource node B has lower service traffic than the resource node C, the OpenStack controller may allocate the resource node B for the user.
If the resource nodes B and C have the same service traffic, a further comparison is performed on the service response times of the resource nodes B and C; and if the resource node B has a shorter service response time than the resource node C, then the OpenStack controller may allocate the resource node B for the user.
If the resource nodes B and C still have the same service response time, the OpenStack controller may allocate one resource node from the resource nodes B and C for the user at random, or may further compare another performance parameter which can characterize the availabilities of the service resources until an appropriate resource node is selected and allocated for the user.
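Under assumed figures for the four firewall resource nodes (the numbers below are invented purely to mirror this narrative: node A exceeds the 70% threshold, nodes B and C share the highest priority, and node B carries less traffic than node C), the selection resolves to node B:

    # Illustrative figures only, chosen to reproduce the worked example above.
    nodes = [
        {"name": "A", "cpu": 0.85, "mem": 0.80, "priority": 1, "traffic": 300, "response_ms": 5},
        {"name": "B", "cpu": 0.40, "mem": 0.35, "priority": 1, "traffic": 120, "response_ms": 4},
        {"name": "C", "cpu": 0.55, "mem": 0.50, "priority": 1, "traffic": 260, "response_ms": 6},
        {"name": "D", "cpu": 0.60, "mem": 0.45, "priority": 2, "traffic": 100, "response_ms": 3},
    ]
    allocable = [n for n in nodes if n["cpu"] <= 0.7 and n["mem"] <= 0.7]   # node A is excluded
    chosen = min(allocable, key=lambda n: (n["priority"], n["traffic"], n["response_ms"]))
    print(chosen["name"])   # prints "B" with these assumed figures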
The OpenStack controller may return information about the optimum resource node to the user when determining the optimum resource node for the user.
For example, if the OpenStack controller finally allocates the resource node B for the user, returned information about the firewall service may be reflected as follows:
Firewalls: Response
"firewalls" :
"admin_state_up" : true,
"description" : "**" ,
"firewall_policy_id" : "c69933c1-b472-44f9-8226-30dc4ffd454c" ,
"id" : "3b0ef8f4-82c7-44d4-a4fb-6177f9a21977" ,
"name" : "B" ,
"status" : "ACTIVE" ,
"tenant_id" : "45977fa2dbd7482098dd68d0d8970117" .
The returned information about the firewall service includes a service ID, a firewall node ID, a firewall state, a firewall description, a user ID, and other information.
In the examples above, the OpenStack controller executing the scheduling logic monitors in real time the performance parameters of the respective resource nodes, and allocates the appropriate resource node for the user according to the preset scheduling policy, thereby optimizing the allocation of the service resources in the OpenStack architecture and scheduling the resources flexibly and dynamically, so as to improve the utilization ratio of the resources without causing a situation where some resource nodes are overloaded while other resource nodes stay idle.

Claims (13)

  1. A method for scheduling service resources, applied to a controller, wherein the method comprises:
    periodically detecting performance parameters of a plurality of resource nodes;
    determining allocable resource nodes among the resource nodes according to the detected performance parameters; and
    allocating, from the allocable resource nodes, an optimum resource node for a user according to a preset scheduling policy upon reception of a resource acquiring request of the user, and returning the optimum resource node to the user.
  2. The method according to claim 1, wherein the performance parameters include parameters selected from the list comprising: CPU utilization ratio, memory utilization ratio, service traffic, service response time and current resource availability.
  3. The method according to claim 2, wherein said determining allocable resource nodes among the resource nodes according to the detected performance parameters comprises:
    determining whether the CPU utilization ratio or the memory utilization ratio of each resource node is above a preset threshold; and
    determining that a resource node is a non-allocable resource node if it is determined that the CPU utilization ratio or the memory utilization ratio of the resource node is above the preset threshold; otherwise, determining the resource node as an allocable resource node.
  4. The method according to claim 2, wherein said allocating, from the allocable resource nodes, an optimum resource node for a user according to a preset scheduling policy comprises:
    comparing priorities of the allocable resource nodes; and
    allocating an allocable resource node having a highest priority for the user as the optimum resource node.
  5. The method according to claim 4, wherein said allocating, from the allocable resource nodes,  an optimum resource node for a user according to a preset scheduling policy comprises:
    if there are a plurality of allocable resource nodes having the same highest priority, comparing the service performance parameters of the plurality of allocable resource nodes having the highest priority; and
    selecting an allocable resource node having a highest current resource availability, from the plurality of allocable resource nodes having the highest priority, as the optimum resource node, and allocating the optimum resource node for the user.
  6. The method according to claim 2, wherein, for any one of the resource nodes, the lower the service traffic, the higher the resource availability; and the shorter the service response time, the higher the resource availability.
  7. The method according to claim 1, wherein the controller is an OpenStack controller.
  8. A machine-readable storage medium, storing thereon machine readable instructions, which are executable by a processor to:
    periodically detect performance parameters of a plurality of resource nodes;
    determine allocable resource nodes among the resource nodes according to the detected performance parameters; and
    allocate, from the allocable resource nodes, an optimum resource node for a user according to a preset scheduling policy upon reception of a resource acquiring request of the user, and return the optimum resource node to the user.
  9. The machine-readable storage medium according to claim 8, wherein the performance parameters include parameters selected from the list comprising: CPU utilization ratio, memory utilization ratio, service traffic, service response time and current resource availability.
  10. The machine-readable storage medium according to claim 9, wherein the instructions to determine allocable resource nodes among the resource nodes according to the detected performance parameters include instructions to:
    determine whether at least one of the CPU utilization ratio and the memory utilization ratio  of each resource node is above a preset threshold; and
    determine one resource node as a non-allocable resource node if it is determined that at least one of the CPU utilization ratio and the memory utilization ratio of the resource node is above the preset threshold; otherwise, determine the resource node as an allocable resource node.
  11. The machine-readable storage medium according to claim 9, wherein the instructions to allocate, from the allocable resource nodes, an optimum resource node for a user according to a preset scheduling policy include instructions to:
    compare priorities of the respective allocable resource nodes; and
    allocate an allocable resource node having a highest priority for the user as the optimum resource node.
  12. The machine-readable storage medium according to claim 11, wherein the instructions to allocate, from the allocable resource nodes, an optimum resource node for a user according to a preset scheduling policy include instructions to:
    if there are a plurality of allocable resource nodes having the same highest priority, compare the service performance parameters of the plurality of allocable resource nodes having the highest priority; and
    select an allocable resource node having a highest current resource availability, from the plurality of allocable resource nodes having the highest priority, as the optimum resource node, and allocate the optimum resource node for the user.
  13. The machine-readable storage medium according to claim 9, wherein, for any one of the resource nodes, the lower the service traffic, the higher the resource availability; and the shorter the service response time, the higher the resource availability.
PCT/CN2015/084851 2014-07-25 2015-07-22 Scheduling of service resource WO2016011953A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410359705.2A CN105335229B (en) 2014-07-25 2014-07-25 Scheduling method and device of service resources
CN201410359705.2 2014-07-25

Publications (1)

Publication Number Publication Date
WO2016011953A1 true WO2016011953A1 (en) 2016-01-28

Family

ID=55162523

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/084851 WO2016011953A1 (en) 2014-07-25 2015-07-22 Scheduling of service resource

Country Status (2)

Country Link
CN (1) CN105335229B (en)
WO (1) WO2016011953A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109005051A (en) * 2018-06-27 2018-12-14 中国铁路信息科技有限责任公司 Routing high availability method and system based on OpenStack

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105871750A (en) * 2016-03-25 2016-08-17 乐视控股(北京)有限公司 Resource scheduling method and server
CN105827448A (en) * 2016-03-31 2016-08-03 乐视控股(北京)有限公司 Resource distribution method and apparatus
CN107818013A (en) * 2016-09-13 2018-03-20 华为技术有限公司 A kind of application scheduling method thereof and device
CN106254154B (en) * 2016-09-19 2020-01-03 新华三技术有限公司 Resource sharing method and device
CN106686081B (en) * 2016-12-29 2020-08-28 北京奇虎科技有限公司 Resource allocation method and device for database service system
CN108429704B (en) * 2017-02-14 2022-01-25 中国移动通信集团吉林有限公司 Node resource allocation method and device
CN108920269B (en) * 2018-07-19 2021-03-19 中国联合网络通信集团有限公司 Scheduling method and device for I/O transmission task of container
CN109189578B (en) * 2018-09-06 2022-04-12 北京京东尚科信息技术有限公司 Storage server allocation method, device, management server and storage system
CN109684065B (en) * 2018-12-26 2020-11-03 北京云联万维技术有限公司 Resource scheduling method, device and system
CN109783236B (en) * 2019-01-16 2021-08-24 北京百度网讯科技有限公司 Method and apparatus for outputting information
CN110134104A (en) * 2019-03-19 2019-08-16 北京车和家信息技术有限公司 Cpu load calculation method, cpu load computing system and the vehicle of VCU
CN111064641B (en) * 2019-12-31 2021-07-02 上海焜耀网络科技有限公司 Node performance detection system and method for decentralized storage network
CN111586134A (en) * 2020-04-29 2020-08-25 新浪网技术(中国)有限公司 CDN node overload scheduling method and system
CN111858458B (en) * 2020-06-19 2022-05-24 苏州浪潮智能科技有限公司 Method, device, system, equipment and medium for adjusting interconnection channel

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103338241A (en) * 2013-06-19 2013-10-02 合肥工业大学 Novel public cloud architecture and virtualized resource self-adaption configuration method thereof
CN103576827A (en) * 2012-07-25 2014-02-12 田文洪 Method and device of online energy-saving dispatching in cloud computing data center
US20140108639A1 (en) * 2012-10-11 2014-04-17 International Business Machines Corporation Transparently enforcing policies in hadoop-style processing infrastructures

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102232282B (en) * 2010-10-29 2014-03-26 华为技术有限公司 Method and apparatus for realizing load balance of resources in data center
CN102195886B (en) * 2011-05-30 2014-02-05 北京航空航天大学 Service scheduling method on cloud platform
KR20130064906A (en) * 2011-12-09 2013-06-19 삼성전자주식회사 Method and apparatus for load balancing in communication system
CN103780646B (en) * 2012-10-22 2017-04-12 中国长城计算机深圳股份有限公司 Cloud resource scheduling method and system
CN103179217B (en) * 2013-04-19 2016-01-13 中国建设银行股份有限公司 A kind of load-balancing method for WEB application server farm and device
CN103617086B (en) * 2013-11-20 2017-02-08 东软集团股份有限公司 Parallel computation method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576827A (en) * 2012-07-25 2014-02-12 田文洪 Method and device of online energy-saving dispatching in cloud computing data center
US20140108639A1 (en) * 2012-10-11 2014-04-17 International Business Machines Corporation Transparently enforcing policies in hadoop-style processing infrastructures
CN103338241A (en) * 2013-06-19 2013-10-02 合肥工业大学 Novel public cloud architecture and virtualized resource self-adaption configuration method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, NAN: "The dynamic scheduling and management of computing resource in cloud platform based on OpenStack", CHINA'S OUTSTANDING MASTER'S DEGREE DISSERTATION FULL-TEXT DATABASE, 15 September 2013 (2013-09-15), pages 13-14, 21-36, 42 *

Also Published As

Publication number Publication date
CN105335229B (en) 2020-07-07
CN105335229A (en) 2016-02-17

Similar Documents

Publication Publication Date Title
WO2016011953A1 (en) Scheduling of service resource
US9588789B2 (en) Management apparatus and workload distribution management method
US11128530B2 (en) Container cluster management
US9392050B2 (en) Automatic configuration of external services based upon network activity
US20150106805A1 (en) Accelerated instantiation of cloud resource
US10284489B1 (en) Scalable and secure interconnectivity in server cluster environments
US20140181839A1 (en) Capacity-based multi-task scheduling method, apparatus and system
US10027596B1 (en) Hierarchical mapping of applications, services and resources for enhanced orchestration in converged infrastructure
CN112181585A (en) Resource allocation method and device for virtual machine
US10630600B2 (en) Adaptive network input-output control in virtual environments
US20110173319A1 (en) Apparatus and method for operating server using virtualization technique
CN106133693A (en) The moving method of virtual machine, device and equipment
WO2018107945A1 (en) Method and device for implementing allocation of hardware resources, and storage medium
JP2011015196A (en) Load assignment control method and load distribution system
US20140201371A1 (en) Balancing the allocation of virtual machines in cloud systems
US20220070099A1 (en) Method, electronic device and computer program product of load balancing
US20210019160A1 (en) Quality of service scheduling with workload profiles
US10536394B2 (en) Resource allocation
US11726833B2 (en) Dynamically provisioning virtual machines from remote, multi-tier pool
WO2016118164A1 (en) Scheduler-assigned processor resource groups
GB2549773A (en) Configuring host devices
Younis et al. Hybrid load balancing algorithm in heterogeneous cloud environment
US20140047454A1 (en) Load balancing in an sap system
WO2016017161A1 (en) Virtual machine system, scheduling method, and program storage medium
JP4743904B2 (en) Resource over-distribution prevention system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15824289

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15824289

Country of ref document: EP

Kind code of ref document: A1