CN108255598A - Virtualization management platform resource allocation system and method with performance guarantee - Google Patents


Info

Publication number
CN108255598A
Authority
CN
China
Prior art keywords
virtual machine
virtual
cpu
resource
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611238450.XA
Other languages
Chinese (zh)
Inventor
王力
缪金成
穆立超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ARRAY NETWORKS (BEIJING) Inc
Original Assignee
ARRAY NETWORKS (BEIJING) Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ARRAY NETWORKS (BEIJING) Inc filed Critical ARRAY NETWORKS (BEIJING) Inc
Priority to CN201611238450.XA
Publication of CN108255598A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45579 I/O management, e.g. providing access to device drivers or storage
    • G06F 2009/45583 Memory management, e.g. access or allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The present invention relates to a virtualization management platform resource allocation system and method with performance guarantees. The resource allocation system comprises a hardware layer, a Linux operating system, Docker containers, KVM virtual machines, a virtual machine management module and a user interface; the virtual machine management module comprises at least an image management module and a virtualization management platform resource allocation module. The method of the present invention includes enabling NUMA, binding physical CPU cores/threads to virtual machine vCPUs according to a resource allocation policy, preallocating memory with huge pages, and enabling single-root I/O virtualization (SR-IOV). The present invention gives each virtual machine an isolated running environment and exclusive system resources, so that virtual machines do not interfere with one another and virtual machine performance approaches that of the physical machine as closely as possible.

Description

Virtualization management platform resource allocation system and method with performance guarantee
Technical field
The present invention relates to the field of application delivery and network control, and in particular to a virtualization management platform resource allocation system and method that can guarantee virtual machine performance.
Background technology
Virtualization has developed rapidly thanks to its high resource utilization, low management cost, flexibility and extensibility. In particular, the emergence of hyper-converged infrastructure (HCI) in recent years has integrated virtualized compute and storage onto a single system platform, giving administrators a choice among multiple virtualized storage solutions. However, prior-art hyper-converged architectures mainly address the selection of a better storage solution for an existing virtualized environment, for example the "hyper-converged system and vertical scaling method thereof" disclosed in Chinese Patent Application No. 201610259698.8.
On today's common virtualization platforms, such as KVM virtual machine platforms, the open-source virtualization management platform oVirt and the cloud platform OpenStack, the factors that limit virtual machine performance are mainly the following:
■ The allocation between physical CPUs and virtual machine vCPUs floats, causing CPU preemption and increased CPU switching overhead
Prior-art virtualization platforms generally use hardware-assisted CPU virtualization to virtualize the host's physical CPUs for scheduling and use by virtual machines. When multiple virtual machines run simultaneously on a hyper-converged platform, scheduling virtual machine vCPUs onto physical CPUs has the following problems:
CPU preemption: to support more virtual machines, a hyper-converged platform often virtualizes the physical CPUs into more vCPUs, that is, one physical CPU core/thread corresponds to multiple vCPUs. When multiple vCPUs are scheduled onto the same physical CPU core/thread at the same time, the vCPUs preempt the compute resources of that core/thread according to priority, so the vCPUs interfere with one another and virtual machine performance fluctuates.
Increased CPU switching overhead: each vCPU of a virtual machine may be attached to multiple physical CPU cores. The affinity between a specified vCPU and the physical CPUs can be checked by executing "virsh vcpuinfo" with root privileges, for example:
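A representative output of "virsh vcpuinfo" for one vCPU is sketched below. The domain name, timings and affinity mask are illustrative only and are not taken from the original example; the "CPU Affinity" string has one character ("y" or "-") per host CPU.
virsh vcpuinfo VM01
VCPU:           0
CPU:            1
State:          running
CPU time:       125.3s
CPU Affinity:   yyyyyyyy--------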
In the above example, vCPU0 may be attached to any of 8 physical CPU threads and is currently attached to physical CPU 1. However, vCPU0 can float among CPU0-CPU7, and switching between physical CPUs adds overhead and affects virtual machine performance.
■ Virtual machine memory is elastic but uncontrollable
On common virtualization platforms, a memory ballooning mechanism (virtio balloon) is generally used in order to make the most of memory resources. The mechanism can give a virtual machine more memory, or reclaim memory from a virtual machine back to the platform, without pausing or restarting the virtual machine; it is entirely dynamic. The virtual machine kernel contains a driver called virtio_balloon, whose effect is that the guest's memory usage can be expanded or shrunk (usable memory can be reduced to almost nothing). When the balloon inflates, applications running normally in the virtual machine suddenly have less memory available. When memory becomes insufficient because of the memory occupied by the balloon driver, the virtual machine resorts to swap space and may even kill some processes in the hope of freeing a little memory. When this happens, normal business processing is affected and virtual machine performance suffers. The drawbacks of this mechanism are: the guest OS (guest operating system) must periodically report the guest's memory usage, wasting guest CPU resources; and the host may be interrupted at any time by guest memory allocation requests in order to allocate memory dynamically.
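For illustration, this dynamic resizing is typically driven from the host through the balloon driver. On a libvirt/KVM host with the virtio_balloon driver loaded in the guest, a command of roughly the following form (domain name and size are illustrative; the size is given in KiB) shrinks a running guest to 4 GiB:
virsh setmem VM01 4194304 --live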
■ Performance bottleneck of virtual network interface cards (NICs)
The virtual NICs provided by a virtualization platform have performance bottlenecks. Taking a virtualization platform that uses KVM as the hypervisor as an example, it usually supports two kinds of virtual NIC:
QEMU-emulated NICs: QEMU emulates I/O devices purely in software, including the keyboard, mouse, display, hard disk and NIC. Common QEMU-emulated NICs are e1000 (emulating the Intel E1000 NIC) and rtl8139 (emulating the Realtek 8139 NIC). An emulated NIC exists only in software, so the path of every I/O operation is long, requiring multiple context switches and multiple data copies; performance is therefore poor.
Para-virtualized NICs: para-virtualization lets a para-virtualized driver optimize I/O performance. KVM currently uses virtio, the standard device-driver framework on Linux, which provides an I/O framework for interaction between host and guest. The virtio implementation of KVM/QEMU installs a front-end driver in the guest OS kernel and implements the back-end driver in QEMU.
However, the virtio back-end processing program (backend) on the host is usually provided by QEMU in user space; that is, the back-end processing of network I/O requests is completed in user space, which requires switching between kernel space and user space and hurts performance. In addition, on a 10G NIC the maximum throughput achievable with virtio is only about 7.4G, and this approach also occupies one CPU hyper-thread of the host.
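For illustration only, the two NIC types are typically selected on the QEMU command line with fragments of the following form (standard QEMU/KVM options, not taken from the patent; the tap back end and domain details are assumptions of a typical setup):
qemu-system-x86_64 ... -netdev tap,id=net0 -device e1000,netdev=net0                       # fully emulated NIC
qemu-system-x86_64 ... -netdev tap,id=net0,vhost=on -device virtio-net-pci,netdev=net0      # virtio NIC with in-kernel vhost back end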
As virtualization technology has become widespread, besides integrating virtualized compute and storage onto one system platform as described above, scenarios have also appeared in which several virtual machine models are deployed on one system platform at the same time, for example a hyper-converged platform running both KVM (Kernel-based Virtual Machine) virtual machines and Docker container virtual machines. How to overcome the above bottlenecks on such a virtualization management platform and provide virtual machine resource configuration with performance guarantees has become one of the urgent problems to be solved.
Summary of the invention
To solve the above problems, the purpose of the present invention is to provide, by integrating CPU, memory, I/O devices and so on on a virtualization management platform, a virtualization management platform resource allocation system and method with performance guarantees, so that virtual machine performance approaches physical machine performance.
A virtualization management platform resource allocation system with performance guarantees comprises a hardware layer, a Linux operating system, Docker containers, KVM virtual machines, a virtual machine management module and a user interface; the virtual machine management module comprises at least an image management module and a virtualization management platform resource allocation module, the image management module being used for unified management of the Docker container and KVM virtual machine mechanisms; the virtualization management platform resource allocation module comprises:
A virtual machine creation request module: for a user to request creation of a virtual machine;
A NUMA management module: for checking whether the NUMA domain specified by the user has sufficient system resources;
A CPU binding module: for binding the allocated physical CPU threads to the virtual machine vCPUs according to the specification designated by the user;
An exclusive memory allocation module: for allocating exclusive memory to the virtual machine according to the specification designated by the user;
An I/O resource management module: for allocating SR-IOV VFs to the virtual machine according to the specification and starting port of the NIC designated by the user;
And a virtual machine creation module.
The resources of the present invention include CPU resources, memory resources, I/O resources and so on, collectively referred to as virtual machine resources when described together.
A virtualization management platform resource allocation method with performance guarantees comprises the following steps:
Step 1: enable NUMA, divide the system resources into two or more NUMA domains, and correspondingly divide the virtual machine resources among the two or more NUMA domains;
Step 2: bind physical CPU cores/threads to virtual machine vCPUs according to a resource allocation policy, to guarantee that the virtual machine vCPUs have exclusive use of the allocated physical CPU resources and to avoid the performance overhead caused by CPU switching;
Further, several virtual machine resource levels are defined as needed;
Further, the virtual machine resources are divided into Entry, Small, Medium and Large levels, and the resource allocation policy is:
Two successively created Entry virtual machines occupy one physical core, each occupying one thread;
One Small virtual machine occupies one physical core, i.e. two threads;
One Medium virtual machine occupies two physical cores, i.e. four threads;
One Large virtual machine occupies four physical cores, i.e. eight threads;
The sub-steps of binding physical CPU cores/threads to virtual machine vCPUs in step 2 may be:
Request allocation of physical CPU resources;
Judge whether the specification of the virtual machine is the Entry type;
If it is the Entry type, then judge whether there is a physical CPU core that has not been fully allocated; if so, allocate that not-fully-allocated physical CPU core to the virtual machine, and the allocation succeeds;
If the above judgment finds that it is not the Entry type, then search for enough physical CPU cores; if there are enough, allocate the physical CPU resources to the virtual machine, and the allocation succeeds;
In the above search for enough physical CPU cores, if there are not enough, the allocation fails;
In the above judgment of whether there is a physical CPU core that has not been fully allocated, if there is none, then search for an idle physical CPU core; if there is one, allocate one physical CPU core to the virtual machine, and the allocation succeeds;
In the above search for an idle physical CPU core, if there is none, the allocation fails.
Step 3: use huge-page memory preallocation, allocating the memory resources to the virtual machine in one pass according to the virtual machine specification;
Step 4: enable single-root I/O virtualization (SR-IOV), so that the created devices allow virtual machines to connect directly to the I/O device; a single I/O device's resources can be shared by multiple virtual machines, and each virtual machine has its own PCIe configuration space.
Further, a VF is added to or removed from a virtual machine with the following commands, to facilitate managing VF allocation:
va port <va_name> <port_name> <vf_index>
no va port <va_name> <port_name> <vf_index>
The present system uses CPU binding (CPU pinning), exclusive preallocated virtual machine memory, single-root I/O virtualization (SR-IOV) and the non-uniform memory access (NUMA) architecture, so that virtual machines on the virtual machine platform have mutually isolated running environments and exclusive system resources and do not interfere with one another, bringing virtual machine performance as close as possible to that of the physical machine.
By specifying a NUMA domain, the present invention ensures that CPU, memory and NIC resources are allocated to a virtual machine from the same NUMA domain. This guarantees that the virtual machine's business processing is completed entirely within one NUMA domain and avoids the performance problems caused by the limited QPI (Intel QuickPath Interconnect) bandwidth between different NUMA domains. CPU binding guarantees that a virtual machine has exclusive use of the allocated CPU resources and avoids the performance overhead caused by CPU switching. Huge-page memory preallocation, which allocates memory to the virtual machine in one pass according to the virtual machine specification, greatly increases the amount of memory covered by the page table entries cached in the CPU, thereby improving the TLB hit rate. Enabling SR-IOV yields I/O device performance comparable to native performance.
Description of the drawings
Fig. 1 is a schematic structural diagram of the virtualization management platform of the present invention;
Fig. 2 is a schematic structural diagram of the virtualization management platform resource allocation system of the present invention;
Fig. 3 is a schematic diagram of the system architecture of a preferred embodiment in which the resource allocation system of the present invention creates a virtual machine in a NUMA domain;
Fig. 4 is a block diagram of the steps of a preferred embodiment of creating a virtual machine in a NUMA domain according to the present invention;
Fig. 5 is a schematic diagram of a preferred example of binding physical CPU cores/threads to virtual machine vCPUs according to a resource allocation policy in the present invention;
Fig. 6 is a flowchart of a preferred embodiment of binding physical CPU cores/threads to virtual machine vCPUs according to the present invention;
Fig. 7 is a schematic diagram of a preferred embodiment of SR-IOV according to the present invention;
Fig. 8 is a schematic diagram of an embodiment in which virtual machines of the present invention share a traffic port.
Specific embodiment
In the following description, many technical details are set forth so that the reader can better understand the application. However, a person of ordinary skill in the art will understand that the technical solutions claimed in the claims of this application can be implemented even without these technical details, and with various changes and modifications based on the following embodiments.
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the virtualization management platform resource allocation system with performance guarantees of the present invention comprises a hardware layer 100, a Linux operating system 200, Docker containers 300, KVM virtual machines 400, a virtual machine management module 500 and a user interface 600, wherein the virtual machine management module comprises at least an image management module 520 and a virtualization management platform resource allocation module 510, and the image management module is used for unified management of the Docker container and KVM virtual machine mechanisms.
As shown in Fig. 2, the virtualization management platform resource allocation module comprises a virtual machine (VM) creation request module 511, a NUMA management module 512, a CPU binding module 513 with its included vCPU-to-physical-CPU allocation policy module 514, an exclusive memory allocation module 515, an I/O resource management module 516 and a virtual machine (VM) creation module 517, wherein:
the VM creation request module is for a user to request creation of a VM;
the NUMA management module is used to check whether the NUMA domain specified by the user has sufficient system resources;
the CPU binding module is used to bind the allocated physical CPU threads to the virtual machine vCPUs according to the specification designated by the user, and includes the vCPU-to-physical-CPU allocation policy;
the I/O resource management module is used to allocate SR-IOV VFs to the virtual machine according to the specification and starting port of the NIC designated by the user.
A virtualization management platform resource allocation method with performance guarantees of the present invention comprises the following steps:
Step 1: enable NUMA, divide the system resources into at least two NUMA domains, and correspondingly divide the virtual machine resources, for example CPU resources, memory resources and I/O resources (such as NIC resources), between the two NUMA domains. For example, when a virtual machine is created on the "Array virtualization management platform" developed by Huayao as shown in Fig. 3, the system resources are divided into two NUMA domains, each containing one 12-thread CPU (of which 8 threads can be allocated to virtual machines), 24 GB of memory (of which 16 GB can be allocated to virtual machines) and four 10G NICs. Step 1 can be implemented through the sub-steps shown in Fig. 4. After NUMA is enabled, a NUMA domain can be specified when creating a virtual machine on the Array virtualization management platform, ensuring that the system allocates system resources to the virtual machine from that NUMA domain and preventing the virtual machine's system resources from spanning domains. For example, in the following command for creating a virtual machine, the domain where the virtual machine resides is designated by the parameter "domain_id", whose value may be 1 or 2:
va instance <va_name> <va_size> <starting_port> [domain_id] [image_name]
For example, the following command can be executed to create a virtual machine VM01 of the Medium specification in NUMA domain 1.
va instance VM01 medium port1 1 image1
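For reference, the NUMA layout that such a command relies on can be inspected on a generic Linux/libvirt host with standard tools (a minimal sketch; node numbering and resource counts depend on the hardware):
numactl --hardware        # lists the NUMA nodes with their CPUs and memory sizes
virsh nodeinfo            # libvirt's summary of sockets, cores, threads and NUMA cells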
Step 2: bind physical CPU cores/threads to virtual machine vCPUs according to a resource allocation policy, to guarantee that the virtual machine vCPUs have exclusive use of the allocated physical CPU resources and to avoid the performance overhead caused by CPU switching.
Further, several virtual machine resource levels are defined as needed. For example, in the example described in step 1, the Array virtualization management platform developed by Huayao (China) Technology Co., Ltd. provides the virtual machine resource levels Entry, Small, Medium and Large, and the resource allocation policy may be:
Two successively created Entry virtual machines occupy one physical core, each occupying one thread;
One Small virtual machine occupies one physical core, i.e. two threads;
One Medium virtual machine occupies two physical cores, i.e. four threads;
One Large virtual machine occupies four physical cores, i.e. eight threads;
Apart from the CPUs occupied by virtual machines, the remaining CPUs are occupied by the hypervisor's processes.
Fig. 5 gives an example of the binding relationship between the vCPUs of two Large-level virtual machines and physical CPU cores, taking the AVX 7600 device developed by Huayao (China) Technology Co., Ltd. as an example. The device provides two physical CPUs (sockets); each physical CPU provides 6 CPU cores, i.e. 12 threads, for a total of 24 CPU threads. Each physical CPU contributes 4 threads to the virtualization management platform resource allocation system itself, and the remaining 16 threads in total can be allocated to virtual machines. In the example of Fig. 5, virtual machine VM1 is allocated 8 vCPUs, corresponding respectively to core threads 0-7 in physical CPU socket 1; virtual machine VM2 is allocated 8 vCPUs, corresponding respectively to core threads 12-19 in physical CPU socket 2.
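With libvirt-based tooling, a binding of this kind can be expressed with "virsh vcpupin"; the lines below simply mirror the Fig. 5 example (domain names and host thread numbers are illustrative):
virsh vcpupin VM1 0 0          # pin vCPU 0 of VM1 to host thread 0
virsh vcpupin VM1 1 1          # ... repeated for vCPUs 1-7 onto host threads 1-7
virsh vcpupin VM2 0 12         # pin vCPU 0 of VM2 to host thread 12
virsh vcpupin VM2 1 13         # ... repeated for vCPUs 1-7 onto host threads 13-19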
Further, as shown in Fig. 6, the steps of binding physical CPU cores/threads to virtual machine vCPUs may be:
Request allocation of physical CPU resources;
Judge whether the specification of the virtual machine matches (e.g. is the Entry type);
If it matches, then judge whether there is a physical CPU core that has not been fully allocated; if so, allocate that not-fully-allocated physical CPU core to the virtual machine, and the allocation succeeds;
If the specification of the virtual machine does not match, then search for enough physical CPU cores; if there are enough, allocate the physical CPU resources to the virtual machine, and the allocation succeeds;
In the above search for enough physical CPU cores, if there are not enough, the allocation fails;
In the above judgment of whether there is a physical CPU core that has not been fully allocated, if there is none, then search for an idle physical CPU core; if there is one, allocate one physical CPU core to the virtual machine, and the allocation succeeds;
In the above search for an idle physical CPU core, if there is none, the allocation fails.
Step 3: use huge-page memory preallocation, allocating memory to the virtual machine in one pass according to the virtual machine specification. For example, the resource allocation system of the "Array virtualization management platform" developed by Huayao adopts huge-page memory preallocation and allocates memory to the virtual machine in one pass according to the virtual machine specification; for the different virtual machine resource levels, the allocated memory resources may respectively be:
· Large: 16 GB
· Medium: 8 GB
· Small: 4 GB
· Entry: 2 GB
The above resource allocation system allows the amount of memory allocated to a virtual machine to be changed by changing the virtual machine specification, so that virtual machine performance is dynamically controllable. This memory allocation scheme greatly increases the amount of memory covered by the page table entries cached in the CPU cache, thereby improving the hit rate of the TLB (Translation Lookaside Buffer, the cache of virtual-address-to-physical-address translations; the mapping itself is referred to below as the "page table"). A process's virtual memory addresses are first mapped by the page table and then to physical memory, so a memory access must first consult the page table to obtain the virtual-to-physical mapping and only then access physical memory. To speed up this translation, the CPU caches part of the page table in the TLB. Because each huge page covers more memory, a TLB of the same size covers a larger amount of memory, which raises the TLB hit rate, i.e. speeds up address translation. Moreover, the memory that the Array virtualization management platform resource allocation system allocates to a virtual machine is contiguous, which also greatly improves the virtual machine's memory read/write rate and thus its performance.
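As an illustration of huge-page preallocation on a generic Linux/KVM host (a sketch only; the page count, mount point and libvirt element are standard usage, with sizes chosen as an example for a 16 GB guest):
echo 8192 > /proc/sys/vm/nr_hugepages          # reserve 8192 x 2 MiB huge pages (16 GiB)
mount -t hugetlbfs hugetlbfs /dev/hugepages    # expose them to QEMU/libvirt
# back the guest memory with the reserved huge pages in its libvirt domain XML:
#   <memoryBacking><hugepages/></memoryBacking>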
Step 4: enable SR-IOV (single-root I/O virtualization), so that the created devices allow virtual machines to connect directly to the I/O device; a single I/O device's resources can be shared by many virtual machines, and each virtual machine has its own PCIe (Peripheral Component Interconnect Express) configuration space. For example, Fig. 7 shows a preferred embodiment in which the resource allocation system of the "Array virtualization management platform" developed by Huayao enables SR-IOV: the resource allocation system enables SR-IOV on each traffic port (PF) and creates 8 VFs for each PF. Each VF can be given to a different virtual machine, so that the virtual machines share the performance of the same traffic port. Because SR-IOV is implemented in hardware, a virtual machine accesses the hardware directly without consuming the CPU and memory resources of the AVX device, and the aggregate performance of the 8 VFs is very close to the performance of the physical traffic port. As shown in Fig. 8, virtual machines VM01 and VM02 are respectively allocated VF1 and VF2 of traffic port port1; VM01 and VM02 access traffic port port1 directly through their VFs, and the overall performance of port1 is shared by the two virtual machines. When no traffic passes through virtual machine VM01, the performance of VM02 can approach the overall performance of port1.
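On a generic Linux host, enabling a comparable number of VFs on a PF and verifying them can be sketched as follows (the interface name ens1f0 is an assumption for illustration):
echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs    # create 8 VFs on the physical function
lspci | grep -i "Virtual Function"                    # the new VF PCIe devices are listed here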
Further, a VF is added to or removed from a virtual machine with the following commands, to facilitate managing VF allocation:
va port <va_name> <port_name> <vf_index>
no va port <va_name> <port_name> <vf_index>
For example:
va port VM01 port2 1
no va port VM02 port1 2
It should be noted that each unit mentioned in the device embodiments of the present invention is a logical unit. Physically, a logical unit may be a physical unit, a part of a physical unit, or a combination of several physical units; the physical implementation of these logical units is not itself the most important thing, and it is the combination of functions realized by these logical units that is the key to solving the technical problem addressed by the present invention. In addition, to highlight the innovative part of the present invention, the above device embodiments do not introduce units that are only loosely related to solving the technical problem addressed by the present invention, but this does not mean that the above device embodiments contain no other units.
Although the present invention has been shown and described with reference to certain preferred embodiments thereof, those skilled in the art will understand that various changes may be made to it in form and detail without departing from the spirit and scope of the present invention.

Claims (7)

1. A virtualization management platform resource allocation system with performance guarantees, comprising a hardware layer, a Linux operating system, Docker containers, KVM virtual machines, a virtual machine management module and a user interface; wherein the virtual machine management module comprises at least an image management module and a virtualization management platform resource allocation module, and the image management module is used for unified management of the Docker container and KVM virtual machine mechanisms; characterized in that the virtualization management platform resource allocation module comprises:
a virtual machine creation request module: for a user to request creation of a virtual machine;
a NUMA management module: for checking whether the NUMA domain specified by the user has sufficient system resources;
a CPU binding module: for binding the allocated physical CPU threads to the virtual machine vCPUs according to the specification designated by the user;
an exclusive memory allocation module: for allocating exclusive memory to the virtual machine according to the specification designated by the user;
an I/O resource management module: for allocating SR-IOV VFs to the virtual machine according to the specification and starting port of the NIC designated by the user;
and a virtual machine creation module.
2. The virtualization management platform resource allocation system with performance guarantees according to claim 1, characterized in that the CPU binding module comprises a vCPU-to-physical-CPU allocation policy module.
3. A virtualization management platform resource allocation method with performance guarantees, comprising the following steps:
Step 1: enable NUMA, divide the system resources into two or more NUMA domains, and correspondingly divide the virtual machine resources (CPU resources, memory resources and I/O resources) among the two or more NUMA domains;
Step 2: bind physical CPU cores/threads to virtual machine vCPUs according to a resource allocation policy, to guarantee that the virtual machine vCPUs have exclusive use of the allocated physical CPU resources and to avoid the performance overhead caused by CPU switching;
Step 3: use huge-page memory preallocation, allocating memory to the virtual machine in one pass according to the virtual machine specification;
Step 4: enable single-root I/O virtualization, so that the created devices allow virtual machines to connect directly to the I/O device; a single I/O device's resources can be shared by multiple virtual machines, and each virtual machine has its own PCIe configuration space.
4. The virtualization management platform resource allocation method with performance guarantees according to claim 3, characterized in that step 1 further comprises the following sub-step: defining several virtual machine resource levels as needed.
5. The virtualization management platform resource allocation method with performance guarantees according to claim 4, characterized in that the virtual machine resources are divided into Entry, Small, Medium and Large levels, and the resource allocation policy is:
Two successively created Entry virtual machines occupy one physical core, each occupying one thread;
One Small virtual machine occupies one physical core, i.e. two threads;
One Medium virtual machine occupies two physical cores, i.e. four threads;
One Large virtual machine occupies four physical cores, i.e. eight threads.
6. The virtualization management platform resource allocation method with performance guarantees according to claim 3, characterized in that the sub-steps of binding physical CPU cores/threads to virtual machine vCPUs in step 2 comprise:
Requesting allocation of physical CPU resources;
Judging whether the specification of the virtual machine is the Entry type;
If it is the Entry type, then judging whether there is a physical CPU core that has not been fully allocated; if so, allocating that not-fully-allocated physical CPU core to the virtual machine, whereupon the allocation succeeds;
If the above judgment finds that it is not the Entry type, then searching for enough physical CPU cores; if there are enough, allocating the physical CPU resources to the virtual machine, whereupon the allocation succeeds;
In the above search for enough physical CPU cores, if there are not enough, the allocation fails;
In the above judgment of whether there is a physical CPU core that has not been fully allocated, if there is none, then searching for an idle physical CPU core; if there is one, allocating one physical CPU core to the virtual machine, whereupon the allocation succeeds;
In the above search for an idle physical CPU core, if there is none, the allocation fails.
7. The virtualization management platform resource allocation method with performance guarantees according to claim 3, characterized in that a VF is added to or removed from a virtual machine with the following commands, to facilitate managing VF allocation:
va port <va_name> <port_name> <vf_index>
no va port <va_name> <port_name> <vf_index>
CN201611238450.XA 2016-12-28 2016-12-28 The virtual management platform resource distribution system and method for performance guarantee Pending CN108255598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611238450.XA CN108255598A (en) 2016-12-28 2016-12-28 The virtual management platform resource distribution system and method for performance guarantee

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611238450.XA CN108255598A (en) 2016-12-28 2016-12-28 The virtual management platform resource distribution system and method for performance guarantee

Publications (1)

Publication Number Publication Date
CN108255598A true CN108255598A (en) 2018-07-06

Family

ID=62720515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611238450.XA Pending CN108255598A (en) 2016-12-28 2016-12-28 The virtual management platform resource distribution system and method for performance guarantee

Country Status (1)

Country Link
CN (1) CN108255598A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347657A (en) * 2018-09-12 2019-02-15 石家庄铁道大学 The virtual data domain construction method of scientific and technological business is supported under SDN mode
CN109857513A (en) * 2018-11-01 2019-06-07 晓白科技(吉林)有限公司 A kind of virtual machine system using open source linux KVM kernel virtualization technology
CN110058946A (en) * 2019-04-26 2019-07-26 上海燧原科技有限公司 Device virtualization method, apparatus, equipment and storage medium
CN110780817A (en) * 2019-10-18 2020-02-11 腾讯科技(深圳)有限公司 Data recording method and apparatus, storage medium, and electronic apparatus
CN110990114A (en) * 2019-11-08 2020-04-10 浪潮电子信息产业股份有限公司 Virtual machine resource allocation method, device, equipment and readable storage medium
CN111158846A (en) * 2019-11-22 2020-05-15 中国船舶工业系统工程研究院 Real-time virtual computing-oriented resource management method
TWI697786B (en) * 2019-05-24 2020-07-01 威聯通科技股份有限公司 Virtual machine building method based on hyper converged infrastructure
CN112925604A (en) * 2019-11-20 2021-06-08 北京华耀科技有限公司 Virtualization management platform and implementation method
WO2021120841A1 (en) * 2020-07-20 2021-06-24 平安科技(深圳)有限公司 Method, apparatus, and device for creating virtual machine and allocating cpu resources
CN113419820A (en) * 2021-07-02 2021-09-21 广州市品高软件股份有限公司 Deployment method of real-time virtual machine and cloud platform
CN114697215A (en) * 2022-03-31 2022-07-01 西安超越申泰信息科技有限公司 Method, system, equipment and medium for improving performance of virtualization network
CN117311910A (en) * 2023-11-29 2023-12-29 中安网脉(北京)技术股份有限公司 High-performance virtual password machine operation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1955948A (en) * 2005-10-26 2007-05-02 国际商业机器公司 Digital data processing device and method for managing cache data
US20080046895A1 (en) * 2006-08-15 2008-02-21 International Business Machines Corporation Affinity dispatching load balancer with precise CPU consumption data
CN103870314A (en) * 2014-03-06 2014-06-18 中国科学院信息工程研究所 Method and system for simultaneously operating different types of virtual machines by single node

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
STEVE GORDON: "Driving in the Fast Lane - CPU Pinning and NUMA Topology Awareness in OpenStack Compute", 《HTTPS://WWW.REDHAT.COM/EN/BLOG/DRIVING-FAST-LANE-CPU-PINNING-AND-NUMA-TOPOLOGY-AWARENESS-OPENSTACK-COMPUTE》 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347657A (en) * 2018-09-12 2019-02-15 石家庄铁道大学 The virtual data domain construction method of scientific and technological business is supported under SDN mode
CN109857513A (en) * 2018-11-01 2019-06-07 晓白科技(吉林)有限公司 A kind of virtual machine system using open source linux KVM kernel virtualization technology
CN110058946A (en) * 2019-04-26 2019-07-26 上海燧原科技有限公司 Device virtualization method, apparatus, equipment and storage medium
TWI697786B (en) * 2019-05-24 2020-07-01 威聯通科技股份有限公司 Virtual machine building method based on hyper converged infrastructure
CN110780817A (en) * 2019-10-18 2020-02-11 腾讯科技(深圳)有限公司 Data recording method and apparatus, storage medium, and electronic apparatus
CN110780817B (en) * 2019-10-18 2021-12-07 腾讯科技(深圳)有限公司 Data recording method and apparatus, storage medium, and electronic apparatus
CN110990114A (en) * 2019-11-08 2020-04-10 浪潮电子信息产业股份有限公司 Virtual machine resource allocation method, device, equipment and readable storage medium
CN112925604A (en) * 2019-11-20 2021-06-08 北京华耀科技有限公司 Virtualization management platform and implementation method
CN112925604B (en) * 2019-11-20 2024-04-19 北京华耀科技有限公司 Virtualization management platform and implementation method
CN111158846A (en) * 2019-11-22 2020-05-15 中国船舶工业系统工程研究院 Real-time virtual computing-oriented resource management method
WO2021120841A1 (en) * 2020-07-20 2021-06-24 平安科技(深圳)有限公司 Method, apparatus, and device for creating virtual machine and allocating cpu resources
CN113419820A (en) * 2021-07-02 2021-09-21 广州市品高软件股份有限公司 Deployment method of real-time virtual machine and cloud platform
CN114697215A (en) * 2022-03-31 2022-07-01 西安超越申泰信息科技有限公司 Method, system, equipment and medium for improving performance of virtualization network
CN117311910A (en) * 2023-11-29 2023-12-29 中安网脉(北京)技术股份有限公司 High-performance virtual password machine operation method
CN117311910B (en) * 2023-11-29 2024-02-27 中安网脉(北京)技术股份有限公司 High-performance virtual password machine operation method

Similar Documents

Publication Publication Date Title
CN108255598A (en) The virtual management platform resource distribution system and method for performance guarantee
US10691363B2 (en) Virtual machine trigger
US10162658B2 (en) Virtual processor allocation techniques
US10191759B2 (en) Apparatus and method for scheduling graphics processing unit workloads from virtual machines
US10223162B2 (en) Mechanism for resource utilization metering in a computer system
Peng et al. MDev-NVMe: A NVMe Storage Virtualization Solution with Mediated Pass-Through
US8443376B2 (en) Hypervisor scheduler
CN115344521A (en) Virtual device composition in extensible input/output (I/O) virtualization (S-IOV) architecture
WO2018119952A1 (en) Device virtualization method, apparatus, system, and electronic device, and computer program product
JP6029550B2 (en) Computer control method and computer
Zhang et al. Is singularity-based container technology ready for running MPI applications on HPC clouds?
BR112016014367B1 (en) RESOURCE PROCESSING METHOD, OPERATING SYSTEM, AND DEVICE
US10853259B2 (en) Exitless extended page table switching for nested hypervisors
US11989416B2 (en) Computing device with independently coherent nodes
US20230051825A1 (en) System supporting virtualization of sr-iov capable devices
US11922072B2 (en) System supporting virtualization of SR-IOV capable devices
US11698737B2 (en) Low-latency shared memory channel across address spaces without system call overhead in a computing system
US10983832B2 (en) Managing heterogeneous memory resource within a computing system
CN113568734A (en) Virtualization method and system based on multi-core processor, multi-core processor and electronic equipment
WO2022083158A1 (en) Data processing method, instances and system
US11650835B1 (en) Multiple port emulation
Chen et al. GaaS workload characterization under NUMA architecture for virtualized GPU
CN109032510B (en) Method and device for processing data based on distributed structure
Koutsovasilis et al. A holistic system software integration of disaggregated memory for next-generation cloud infrastructures
CN113326118A (en) Virtualization method and system based on multi-core processor, multi-core processor and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100125 Beijing city Chaoyang District Liangmaqiao Road No. 40 building 10 room 1001, twenty-first Century

Applicant after: Beijing Huayao Technology Co.,Ltd.

Address before: 100125 Beijing city Chaoyang District Liangmaqiao Road No. 40 building 10 room 1001, twenty-first Century

Applicant before: ARRAY NETWORKS, Inc.

CB02 Change of applicant information
RJ01 Rejection of invention patent application after publication

Application publication date: 20180706

RJ01 Rejection of invention patent application after publication