CN112433841A - Resource pool scheduling method, system, server and storage medium - Google Patents


Info

Publication number
CN112433841A
CN112433841A (application CN201910792343.9A)
Authority
CN
China
Prior art keywords
numa
node
resource pool
numa node
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910792343.9A
Other languages
Chinese (zh)
Other versions
CN112433841B (en)
Inventor
陈琪
郭岳
钟储建
金天骄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Zhejiang Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201910792343.9A
Publication of CN112433841A
Application granted
Publication of CN112433841B
Legal status: Active (anticipated expiration tracked)

Classifications

    • G06F9/00 Arrangements for program control, e.g. control units (G Physics; G06 Computing, calculating or counting; G06F Electric digital data processing)
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5077 Logical partitioning of resources; management or configuration of virtualized resources
    • G06F9/5088 Techniques for rebalancing the load in a distributed system involving task migration
    • G06F2209/5011 Indexing scheme relating to resource allocation: pool
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D Climate change mitigation technologies in ICT)

Abstract

The invention discloses a resource pool scheduling method, system, server and storage medium. The resource pool scheduling method is based on the Numa model and schedules using Numa indices, so that the same Numa node can be reused by multiple virtual resources. Combining Numa-model servers with Openstack resource pool scheduling consolidates the underlying resources, uses data center resources to the fullest extent, reduces resource waste and redundancy, and improves the data center's resource utilization rate, giving the method good promotion value in data center Openstack resource pool environments. In addition, the unified Numa view enables unified management of Numa nodes and, combined with the live migration capability of virtual machines, fine-grained live migration of resources, which has strong practical value in large-scale production environments. The unified view of physical machine CPU frequency, Numa node allocation rate, and Numa node utilization rate is generated on the basis of monitoring the Numa node load of each physical machine.

Description

Resource pool scheduling method, system, server and storage medium
Technical Field
The invention relates to the technical field of computer virtualization resource scheduling, in particular to a resource pool scheduling method, a resource pool scheduling system, a resource pool scheduling server and a storage medium.
Background
OpenStack is an open-source cloud computing management platform project composed of several main components, each completing specific work. The Openstack resource pool completes scheduling tasks through the Nova-Scheduler, mainly in two steps: filtering (Filter) and weight calculation (Weighing). Filtering removes the host machines that do not meet the conditions; weight calculation sorts the remaining hosts by a certain value and determines the host most suitable for booting or migrating the virtual machine.
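The filter-then-weigh flow can be sketched as follows. This is a simplified illustration only, not Nova-Scheduler's actual code; `Host`, `filter_hosts`, `weigh`, and `schedule` are hypothetical names.

```python
import random
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_ram_mb: int
    free_vcpus: int

def filter_hosts(hosts, req_ram_mb, req_vcpus):
    # Filtering step: drop hosts that cannot satisfy the request.
    return [h for h in hosts
            if h.free_ram_mb >= req_ram_mb and h.free_vcpus >= req_vcpus]

def weigh(host):
    # Weighing step: rank the surviving hosts by some value,
    # here simply the amount of free RAM.
    return host.free_ram_mb

def schedule(hosts, req_ram_mb, req_vcpus):
    candidates = filter_hosts(hosts, req_ram_mb, req_vcpus)
    if not candidates:
        return None
    best = max(weigh(h) for h in candidates)
    # Equally weighted hosts are tie-broken at random.
    return random.choice([h for h in candidates if weigh(h) == best])
```

In a real deployment Nova applies a configurable chain of filters and weighers; the two-phase shape, however, is exactly the filter-then-weigh pipeline described above.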
Currently, as processor counts grow, NUMA (Non-Uniform Memory Access) technology has been introduced to improve performance and prevent memory access from becoming a bottleneck. NUMA enables many servers to operate like a single system while retaining the convenient programming and management of a small system. The higher demands that e-commerce applications place on memory access also make NUMA a complex architecture to design for. NUMA addresses the problem by giving each processor its own separate memory, avoiding the performance penalty incurred when multiple processors access the same memory. For applications with scattered data accesses (common in server and server-like applications), NUMA can improve performance roughly n-fold over a single shared memory, where n is approximately the number of processors (or separate memories). In the prior art, the system's related hardware resources (such as CPU and memory) are divided into several nodes. In this model, a processor accesses its local node faster than a remote node, by approximately 30-40%. For this Numa model, the Openstack resource pool has two solutions: one uses a NUMA filter (such as Nova's NUMATopologyFilter) so that virtual machines are deployed on hosts with a matching Numa node layout; the other forces the Dedicated/Prefer mode to pin a virtual machine onto the same Numa node.
However, the existing Openstack resource pool scheduling scheme has the following disadvantages:
1) The characteristics of the Numa model are insufficiently considered, so resource efficiency is low.
The defining characteristic of the Numa model is that a CPU accesses its local node 30%-40% faster than a remote node. The filtering condition of the existing NUMA filter in resource pool scheduling only selects computing nodes whose Numa structure matches that inside the virtual machine; it does not exploit this model characteristic or place the virtual machine onto a specific Numa node, so resource efficiency is low.
2) Numa node indices are missing from weight calculation, so the resource scheduling matching degree is low.
Weight calculation sorts the filtered hosts by a certain value, and the current indices are based on a computing node's available memory size, remaining disk size, IO load, and some basic metric values (such as CPU utilization). The weight calculation does not match against the Numa structure or the technical indices of the Numa nodes. As a result, the matching degree of resource scheduling is low and a suitable computing node cannot be matched.
3) Forcing the Dedicated/Prefer mode has usage limitations and a low resource reuse rate.
The Dedicated/Prefer mode pins virtual machines onto the same Numa node, but it currently has a usage limitation: virtual CPUs must correspond one-to-one with physical CPUs, so multiple virtual machines cannot share one physical CPU in this mode. This results in a low resource reuse rate.
Disclosure of Invention
In view of the above, the present invention is proposed to provide a resource pool scheduling system and a corresponding resource pool scheduling method that overcome or at least partially solve the above problems.
According to an aspect of the present invention, there is provided a resource pool scheduling method, including the steps of:
acquiring Numa information of the computing nodes;
after each scheduling by the Openstack resource pool is completed, unifying the scheduling result into a Numa node view;
initializing, according to the Numa node view, the Numa nodes that have not yet been allocated, and calculating the allocation rate after a Numa node is allocated and used by the Openstack resource pool;
periodically updating the memory and CPU utilization rate of each Numa node;
when scheduling resources, counting the Numa node information and filtering out suitable Numa nodes;
calculating node weights for the filtered Numa nodes, selecting the Numa node with the largest weight, and scheduling the virtual machine onto that Numa node; if the weight values are equal, randomly selecting one of the Numa nodes for scheduling;
and after scheduling is finished, updating the Numa node view information and the allocation rate information.
Optionally, the Numa information of a computing node includes a computing node number, a Numa node number, a CPU frequency, a Core number, a Core thread number, an L1 Cache size, an L2 Cache size, and an L3 Cache size.
Optionally, the step of initializing, according to the Numa node view, the Numa nodes that have not yet been allocated, and calculating the allocation rate after a Numa node is allocated and used by the Openstack resource pool, further includes:
initializing the not-yet-allocated Numa nodes according to the Numa node view;
setting the allocation rate of a not-yet-allocated Numa node to 0 and its usage value to 0.001;
and when a Numa node is allocated and used by the Openstack resource pool, calculating its allocation rate according to a preset formula.
Optionally, when scheduling resources, the Numa node information is counted and suitable Numa nodes are filtered out, where the filtering rule is:
(total Core number of virtual machines on the Numa node) < (Core number × Core thread number).
According to another aspect of the present invention, there is provided a resource pool scheduling system, including:
the static processing module, used for acquiring Numa information of the computing nodes;
the dynamic processing module, used for unifying the scheduling result into the Numa node view after each scheduling by the Openstack resource pool is completed;
the node allocation submodule, used for initializing, according to the Numa node view, the Numa nodes that have not yet been allocated, and calculating the allocation rate after a Numa node is allocated and used by the Openstack resource pool;
the utilization rate calculation submodule, used for periodically updating the memory and CPU utilization rate of each Numa node;
the Openstack resource pool filtering module, used for counting the Numa node information and filtering out suitable Numa nodes when resources are scheduled;
the weight calculation module, used for calculating node weights for the filtered Numa nodes, selecting the Numa node with the largest weight, and scheduling the virtual machine onto that Numa node; if the weight values are equal, randomly selecting one of the Numa nodes for scheduling;
and the updating module, used for updating the Numa node view information and the allocation rate information after scheduling is finished.
Optionally, the Numa information of a computing node includes a computing node number, a Numa node number, a CPU frequency, a Core number, a Core thread number, an L1 Cache size, an L2 Cache size, and an L3 Cache size.
Optionally, the node allocation submodule is further configured to:
initialize the not-yet-allocated Numa nodes according to the Numa node view;
set the allocation rate of a not-yet-allocated Numa node to 0 and its usage value to 0.001;
and, when a Numa node is allocated and used by the Openstack resource pool, calculate its allocation rate according to a preset formula.
Optionally, the filtering rule of the Openstack resource pool filtering module is:
(total Core number of virtual machines on the Numa node) < (Core number × Core thread number).
According to still another aspect of the present invention, there is provided a server including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the resource pool scheduling method.
According to still another aspect of the present invention, a computer storage medium has at least one executable instruction stored therein, and the executable instruction causes a processor to perform operations corresponding to the resource pool scheduling method as described above.
The resource pool scheduling method and system are based on the Numa model and schedule using Numa indices, so that the same Numa node can be reused by multiple virtual resources. Combining Numa-model servers with Openstack resource pool scheduling consolidates the underlying resources, uses data center resources to the fullest extent, reduces resource waste and redundancy, and improves the data center's resource utilization rate, giving the method good promotion value in data center Openstack resource pool environments. In addition, the unified Numa view enables unified management of Numa nodes and, combined with the live migration capability of virtual machines, fine-grained live migration of resources, which has strong practical value in large-scale production environments. The unified view of physical machine CPU frequency, Numa node allocation rate, and Numa node utilization rate is generated on the basis of monitoring the Numa node load of each physical machine.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
Fig. 1 is a schematic flow chart illustrating a resource pool scheduling method according to an embodiment of the present invention;
Fig. 2 is a block diagram of a resource pool scheduling system according to an embodiment of the present invention;
Fig. 3 shows a block diagram of a server according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
OpenStack is an open-source cloud computing management platform project composed of several main components, each completing specific work.
NUMA (Non-Uniform Memory Access) technology allows many servers to behave as a single system while retaining the ease of programming and management of small systems. The higher demands that e-commerce applications place on memory access also make NUMA a complex architecture to design for. NUMA addresses the problem by giving each processor its own separate memory, avoiding the performance penalty incurred when multiple processors access the same memory. For applications with scattered data accesses (common in server and server-like applications), NUMA can improve performance roughly n-fold over a single shared memory, where n is approximately the number of processors (or separate memories).
Example one
As shown in fig. 1, an exemplary embodiment of the present disclosure provides a resource pool scheduling method, which includes the following steps:
S11: acquiring Numa information of a computing node;
In this step, the Numa information of the computing node includes, but is not limited to, a computing node number, a Numa node number, a CPU frequency, a Core number, a Core thread number, an L1 Cache size, an L2 Cache size, and an L3 Cache size.
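The static Numa information listed here can be modeled as a small per-node record. This is an illustrative sketch only; the class and field names are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NumaNodeInfo:
    compute_node_id: str    # which computing node this Numa node belongs to
    numa_node_id: int
    cpu_freq_mhz: float
    cores: int              # physical Core number on this Numa node
    threads_per_core: int   # Core thread number (hardware threads per core)
    l1_cache_kb: int
    l2_cache_kb: int
    l3_cache_kb: int

    @property
    def logical_cpus(self) -> int:
        # Total schedulable threads = Core number x Core thread number,
        # the capacity N used by the filtering rule and allocation rate.
        return self.cores * self.threads_per_core
```

A record like this would be collected once per Numa node (its contents are static), while memory and CPU utilization are refreshed periodically as described in step S14.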
S12: after each scheduling of the Openstack resource pool is completed, unifying scheduling results to a Numa node view;
Specifically, after each scheduling by the Openstack resource pool is completed, the scheduling result is unified into the Numa node view, providing a basis for future resource scheduling.
S13: initializing Numa nodes which are not called and distributed according to the Numa node view, and calculating the distribution rate after the Numa nodes are distributed and used by an Openstack resource pool;
Specifically, the step of initializing, according to the Numa node view, the Numa nodes that have not yet been allocated, and calculating the allocation rate after a Numa node is allocated and used by the Openstack resource pool, further includes:
initializing the not-yet-allocated Numa nodes according to the Numa node view;
setting the allocation rate of a not-yet-allocated Numa node to 0 and its usage value to 0.001;
and when a Numa node is allocated and used by the Openstack resource pool, calculating its allocation rate according to a preset formula.
The preset formula is:
allocation rate = (Σi Ci) / N
where i indexes the virtual machines using the Numa node, Ci is the number of cores used by virtual machine i on the Numa node, and N is the Numa node Core number × Core thread number.
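Under the definitions above, the allocation rate works out to the total cores placed on a Numa node divided by its logical CPU count. A minimal sketch of that computation follows; the function name is illustrative.

```python
def allocation_rate(vm_core_counts, cores, threads_per_core):
    """Fraction of a Numa node's logical CPUs already handed out.

    vm_core_counts: list of C_i, the cores used by each virtual machine
    placed on this Numa node. N = cores * threads_per_core is the node's
    logical CPU count (Core number x Core thread number).
    """
    n_logical = cores * threads_per_core
    return sum(vm_core_counts) / n_logical
```

For example, three virtual machines using 2, 4, and 2 cores on a node with 8 cores and 2 threads per core give an allocation rate of 8/16 = 0.5.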
S14: periodically updating the memory and CPU utilization rate of each Numa node;
In this step, the memory and CPU utilization statistics of each Numa node are periodically updated, and the Numa node utilization rate is taken as max{CPU utilization rate, memory utilization rate}.
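The utilization update described here reduces to taking the larger of the two utilization figures, so that neither resource is over-committed. A one-line sketch (names are illustrative):

```python
def numa_node_utilization(cpu_util, mem_util):
    # The node's effective utilization is the worse (higher) of its
    # CPU and memory utilization, per max{CPU, memory} in the text.
    return max(cpu_util, mem_util)
```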
S15: when scheduling resources, the Numa node information is counted and suitable Numa nodes are filtered out;
Optionally, the filtering rule for this step is:
(total Core number of virtual machines on the Numa node) < (Core number × Core thread number).
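The filtering rule above can be checked as follows; a minimal sketch, assuming the rule compares the virtual-machine cores already placed on the node against its logical CPU count (names are illustrative):

```python
def passes_filter(vm_cores_on_node, cores, threads_per_core):
    """Filter rule: total VM cores on the Numa node must stay strictly
    below Core number x Core thread number (the node's logical CPUs)."""
    return sum(vm_cores_on_node) < cores * threads_per_core
```

So a node with 4 cores and 2 threads per core (8 logical CPUs) still passes while 7 cores are placed, but is filtered out once all 8 are taken.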
S16: calculating node weights of the filtered Numa nodes, screening the Numa node with the largest weight, and scheduling the virtual machine to the Numa node;
Specifically, the weight of each filtered Numa node is calculated according to a weight formula (published only as an image in the original patent), in which Li = Li Cache size / Min(Li Cache size) for each cache level i.
If the weighted values are equal to each other, a Numa node is randomly selected for scheduling;
S17: and after the scheduling is finished, updating the Numa node view information and the allocation rate information.
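Steps S15-S17 can be strung together into a single scheduling pass. This is a minimal end-to-end sketch under the same assumptions as above: `weight_fn` is a placeholder standing in for the patent's weight formula (which is published only as an image), and all names are illustrative.

```python
import random

def schedule_vm(numa_nodes, requested_cores, weight_fn):
    """One scheduling pass over a list of Numa-node dicts:
    filter (S15), weigh and pick (S16), then update the view (S17)."""
    # S15: keep nodes whose placed cores stay under logical capacity.
    fits = [n for n in numa_nodes
            if n["allocated_cores"] < n["cores"] * n["threads_per_core"]]
    if not fits:
        return None
    # S16: select the largest weight, breaking exact ties randomly.
    best = max(weight_fn(n) for n in fits)
    chosen = random.choice([n for n in fits if weight_fn(n) == best])
    # S17: record the placement and refresh the allocation rate.
    chosen["allocated_cores"] += requested_cores
    n_logical = chosen["cores"] * chosen["threads_per_core"]
    chosen["allocation_rate"] = chosen["allocated_cores"] / n_logical
    return chosen
```

With a weight function that simply prefers emptier nodes (1 minus the allocation rate), a fully allocated node is filtered out and the request lands on the node with spare capacity, whose view entry is then updated in place.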
By adopting the method provided by this embodiment, scheduling is performed using Numa indices based on the Numa model, and the same Numa node can be reused by multiple virtual resources. Combining Numa-model servers with Openstack resource pool scheduling consolidates the underlying resources, uses data center resources to the fullest extent, reduces resource waste and redundancy, and improves the data center's resource utilization rate, giving the method good promotion value in data center Openstack resource pool environments. In addition, the unified Numa view enables unified management of Numa nodes and, combined with the live migration capability of virtual machines, fine-grained live migration of resources, which has strong practical value in large-scale production environments. The unified view of physical machine CPU frequency, Numa node allocation rate, and Numa node utilization rate is generated on the basis of monitoring the Numa node load of each physical machine.
Example two
Fig. 2 is a resource pool scheduling system according to an exemplary embodiment of the present invention, as shown in fig. 2, including:
the static processing module 21 is configured to obtain Numa information of the computing node;
Optionally, the Numa information of a computing node includes a computing node number, a Numa node number, a CPU frequency, a Core number, a Core thread number, an L1 Cache size, an L2 Cache size, and an L3 Cache size.
The dynamic processing module 22 is configured to unify the scheduling result to the Numa node view after each scheduling of the Openstack resource pool is completed;
the node allocation submodule 221, configured to initialize, according to the Numa node view, the Numa nodes that have not yet been allocated, and to calculate the allocation rate after a Numa node is allocated and used by the Openstack resource pool;
a utilization rate calculation submodule 222, configured to periodically update the memory and CPU utilization rate of each Numa node;
Specifically, the node allocation submodule and the utilization rate calculation submodule are submodules of the dynamic processing module. After each scheduling by the Openstack resource pool is completed, the scheduling result is unified into the Numa node view, providing a basis for future resource scheduling.
The node allocation submodule is further configured to:
initialize the not-yet-allocated Numa nodes according to the Numa node view;
set the allocation rate of a not-yet-allocated Numa node to 0 and its usage value to 0.001;
and, when a Numa node is allocated and used by the Openstack resource pool, calculate its allocation rate according to a preset formula.
The preset formula is:
allocation rate = (Σi Ci) / N
where i indexes the virtual machines on the Numa node, Ci is the Core number of virtual machine i, and N is the Numa node Core number × Core thread number.
The Openstack resource pool filtering module 23, configured to count the Numa node information and filter out suitable Numa nodes when resources are scheduled;
Optionally, the filtering rule of the Openstack resource pool filtering module is:
(total Core number of virtual machines on the Numa node) < (Core number × Core thread number).
The weight calculation module 24 is used for calculating the node weight of the filtered Numa nodes, screening the Numa node with the maximum weight, and scheduling the virtual machine to the Numa node;
The filtered computing nodes calculate the Numa node weight according to a weight formula (published only as an image in the original patent), in which Li = Li Cache size / Min(Li Cache size) for each cache level i.
If the weighted values are equal to each other, a Numa node is randomly selected for scheduling;
and the updating module 25 is configured to update the Numa node view information and the allocation rate information after the scheduling is completed.
EXAMPLE III
A third embodiment of the present application provides a non-volatile computer storage medium storing at least one executable instruction, which can cause a processor to execute the resource pool scheduling method in any of the above method embodiments.
Example four
Fig. 3 is a schematic structural diagram of a server according to an embodiment of the present invention; the specific embodiment of the present invention does not limit the specific implementation of the server.
As shown in fig. 3, the server may include: a processor (processor), a Communications Interface (Communications Interface), a memory (memory), and a Communications bus.
Wherein:
the processor, the communication interface, and the memory communicate with each other via a communication bus.
A communication interface for communicating with network elements of other devices, such as clients or other servers.
The processor is configured to execute a program, and may specifically execute relevant steps in the foregoing resource pool scheduling method embodiment.
In particular, the program may include program code comprising computer operating instructions.
The processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The server comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And the memory is used for storing programs. The memory may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program may specifically be adapted to cause a processor to perform the following operations: acquiring Numa information of a computing node;
after each scheduling of the Openstack resource pool is completed, unifying scheduling results to a Numa node view;
initializing, according to the Numa node view, the Numa nodes that have not yet been allocated, and calculating the allocation rate after a Numa node is allocated and used by the Openstack resource pool;
periodically updating the memory and CPU utilization rate of each Numa node;
when the resources are scheduled, the Numa node information is counted, and a proper Numa node is filtered;
calculating node weights of the filtered Numa nodes, screening the Numa node with the largest weight, and scheduling the virtual machine to the Numa node;
if the weighted values are equal to each other, a Numa node is randomly selected for scheduling;
and after scheduling is finished, updating the Numa node view information and the allocation rate information.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a resource pool scheduling system according to an embodiment of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A resource pool scheduling method, characterized by comprising the following steps:
acquiring Numa information of a compute node;
after each scheduling of the Openstack resource pool is completed, unifying the scheduling result into a Numa node view;
initializing, according to the Numa node view, Numa nodes that have not yet been allocated by scheduling, and calculating an allocation rate after a Numa node is allocated and used by the Openstack resource pool;
periodically updating the memory and CPU utilization of each Numa node;
during resource scheduling, collecting statistics on the Numa node information and filtering out suitable Numa nodes;
calculating node weights for the filtered Numa nodes, selecting the Numa node with the largest weight, and scheduling the virtual machine to that Numa node;
if the weight values are equal, randomly selecting one of the Numa nodes for scheduling;
and after scheduling is completed, updating the Numa node view information and the allocation rate information.
2. The method of claim 1, wherein the compute node Numa information comprises a compute node number, a Numa node number, a CPU frequency, a Core number, a Core thread number, an L1 cache size, an L2 cache size and an L3 cache size.
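The static Numa information enumerated in claim 2 maps naturally onto a small record type. A minimal sketch, in which the field names, types, and units (MHz, KB) are illustrative assumptions not specified by the patent:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NumaInfo:
    compute_node_id: str   # compute node number
    numa_node_id: int      # Numa node number
    cpu_freq_mhz: int      # CPU frequency
    cores: int             # Core number
    threads_per_core: int  # Core thread number
    l1_cache_kb: int       # L1 cache size
    l2_cache_kb: int       # L2 cache size
    l3_cache_kb: int       # L3 cache size
```

This static record is collected once per Numa node; the dynamic quantities (memory and CPU utilization) are refreshed periodically and kept separately in the node view.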
3. The method of claim 1, wherein the step of initializing, according to the Numa node view, Numa nodes that have not yet been allocated by scheduling, and calculating an allocation rate after a Numa node is allocated and used by the Openstack resource pool further comprises:
initializing, according to the Numa node view, the Numa nodes that have not yet been allocated by scheduling;
setting the allocation rate of a Numa node that has not been allocated to 0, and setting its use value to 0.001;
and when the Numa node is allocated and used by the Openstack resource pool, calculating the allocation rate according to a preset formula.
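The initialization step of claim 3 can be sketched as below. The 0 allocation rate and 0.001 use value come from the claim; the dictionary-based node view and the allocated/total ratio are assumptions, since the patent does not disclose its "preset formula".

```python
def init_allocation(numa_view):
    # Claim 3: nodes never allocated by scheduling start with allocation
    # rate 0 and a small non-zero "use value" (0.001), which keeps later
    # weight calculations from degenerating to exact zeros.
    for node in numa_view:
        if not node.get("allocated"):
            node["alloc_rate"] = 0.0
            node["use_value"] = 0.001


def allocation_rate(allocated_cores, total_cores):
    # The patent's "preset formula" is not disclosed; a simple
    # allocated/total ratio is assumed here for illustration.
    return allocated_cores / total_cores
```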
4. The method according to claim 1, wherein during resource scheduling, the Numa node information is counted and a suitable Numa node is filtered out according to the following filtering rule:
the number of virtual machine Cores on the Numa node < the Core number × the Core thread number.
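Claim 4's filtering rule is a single strict inequality: a Numa node remains schedulable while the cores consumed by virtual machines stay below its hardware thread capacity. Expressed as a predicate (parameter names are illustrative):

```python
def node_qualifies(vm_cores_on_node: int, cores: int, threads_per_core: int) -> bool:
    # Filtering rule of claim 4: VM cores on the node must stay strictly
    # below the node's total hardware threads (cores x threads per core).
    return vm_cores_on_node < cores * threads_per_core
```

For an 8-core node with 2 threads per core, the capacity bound is 16: a node carrying 15 VM cores still qualifies, one carrying 16 does not.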
5. A resource pool scheduling system, characterized by comprising:
a static processing module, configured to acquire Numa information of a compute node;
a dynamic processing module, configured to unify the scheduling result into a Numa node view after each scheduling of the Openstack resource pool is completed;
a node allocation submodule, configured to initialize, according to the Numa node view, Numa nodes that have not yet been allocated by scheduling, and to calculate an allocation rate after a Numa node is allocated and used by the Openstack resource pool;
a utilization calculation submodule, configured to periodically update the memory and CPU utilization of each Numa node;
an Openstack resource pool filtering module, configured to collect statistics on the Numa node information during resource scheduling and to filter out suitable Numa nodes;
a weight calculation module, configured to calculate node weights for the filtered Numa nodes, select the Numa node with the largest weight, and schedule the virtual machine to that Numa node,
wherein if the weight values are equal, one of the Numa nodes is randomly selected for scheduling;
and an updating module, configured to update the Numa node view information and the allocation rate information after scheduling is completed.
6. The system of claim 5, wherein the compute node Numa information comprises a compute node number, a Numa node number, a CPU frequency, a Core number, a Core thread number, an L1 cache size, an L2 cache size and an L3 cache size.
7. The system of claim 5, wherein the node allocation submodule is further configured to:
initialize, according to the Numa node view, the Numa nodes that have not yet been allocated by scheduling;
set the allocation rate of a Numa node that has not been allocated to 0, and set its use value to 0.001;
and calculate, when a Numa node is allocated and used by the Openstack resource pool, the allocation rate according to a preset formula.
8. The system of claim 5, wherein the Openstack resource pool filtering module filters Numa nodes according to the following filtering rule:
the number of virtual machine Cores on the Numa node < the Core number × the Core thread number.
9. A server, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another via the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations corresponding to the resource pool scheduling method according to any one of claims 1-4.
10. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform the operations corresponding to the resource pool scheduling method according to any one of claims 1-4.
CN201910792343.9A 2019-08-26 2019-08-26 Resource pool scheduling method, system, server and storage medium Active CN112433841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910792343.9A CN112433841B (en) 2019-08-26 2019-08-26 Resource pool scheduling method, system, server and storage medium


Publications (2)

Publication Number Publication Date
CN112433841A true CN112433841A (en) 2021-03-02
CN112433841B CN112433841B (en) 2023-08-01

Family

ID=74690303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910792343.9A Active CN112433841B (en) 2019-08-26 2019-08-26 Resource pool scheduling method, system, server and storage medium

Country Status (1)

Country Link
CN (1) CN112433841B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201607439D0 (en) * 2016-04-28 2016-06-15 Metaswitch Networks Ltd Configuring host devices
CN108196958A (en) * 2017-12-29 2018-06-22 北京泽塔云科技股份有限公司 Scheduling of resource distribution method, computer system and super fusion architecture system
EP3382543A1 (en) * 2017-03-29 2018-10-03 Juniper Networks, Inc. Micro-level monitoring, visibility and control of shared resources internal to a processor of a host machine for a virtual environment
CN108694071A (en) * 2017-03-29 2018-10-23 瞻博网络公司 More cluster panels for distributed virtualization infrastructure elements monitoring and policy control
CN109885377A (en) * 2018-11-23 2019-06-14 中国银联股份有限公司 The method of unified resource scheduling coordinator and its creation virtual machine and/or container, unified resource dispatch system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LUBANSEVEN: "NUMA Architecture" (NUMA 体系架构), https://www.cnblogs.com/xingzheanan/p/10547387.html, pages 1-8 *
ZU LIJUN (祖立军): "Research on a Container-Based Platform Converging Big Data and Virtualization", Information Technology & Standardization (信息技术与标准化), vol. 2019, no. 06, pages 27-30 *

Also Published As

Publication number Publication date
CN112433841B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
US11775354B2 (en) Reducing overlay network overhead across container hosts
CN108614726B (en) Virtual machine creation method and device
CN109684065B (en) Resource scheduling method, device and system
CN106371894B (en) Configuration method and device and data processing server
US10754704B2 (en) Cluster load balancing based on assessment of future loading
CN114741207B (en) GPU resource scheduling method and system based on multi-dimensional combination parallelism
CN103763346B (en) A kind of distributed resource scheduling method and device
CN109191287B (en) Block chain intelligent contract fragmentation method and device and electronic equipment
CN113641457A (en) Container creation method, device, apparatus, medium, and program product
US20190272189A1 (en) Scheduling framework for tightly coupled jobs
CN107295090A (en) A kind of method and apparatus of scheduling of resource
US10360065B2 (en) Smart reduce task scheduler
US10949368B2 (en) Input/output command rebalancing in a virtualized computer system
CN114356587B (en) Calculation power task cross-region scheduling method, system and equipment
US20230037293A1 (en) Systems and methods of hybrid centralized distributive scheduling on shared physical hosts
CN115292016A (en) Task scheduling method based on artificial intelligence and related equipment
CN113626173B (en) Scheduling method, scheduling device and storage medium
CN108897858B (en) Distributed cluster index fragmentation evaluation method and device and electronic equipment
CN111831408A (en) Asynchronous task processing method and device, electronic equipment and medium
CN116360994A (en) Scheduling method, device, server and storage medium of distributed heterogeneous resource pool
CN112433841B (en) Resource pool scheduling method, system, server and storage medium
CN116010020A (en) Container pool management
CN115150268A (en) Network configuration method and device of Kubernetes cluster and electronic equipment
CN116841720A (en) Resource allocation method, apparatus, computer device, storage medium and program product
CN109558214B (en) Host machine resource management method and device in heterogeneous environment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant