WO2018040845A1 - Computing resource scheduling method and apparatus - Google Patents

Computing resource scheduling method and apparatus - Download PDF

Info

Publication number
WO2018040845A1
Authority
WO
WIPO (PCT)
Prior art keywords
application
virtual machine
vcpu
physical
cpu
Prior art date
Application number
PCT/CN2017/095879
Other languages
English (en)
French (fr)
Inventor
李瑞联
梦迪特保罗
刘力力
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2018040845A1 publication Critical patent/WO2018040845A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a computing resource scheduling method and apparatus.
  • Server virtualization is a key infrastructure-layer technology in cloud computing: a physical server is virtualized so that multiple virtual machines (VMs) can be deployed on a single physical server, which improves the resource utilization of the physical server and reduces cost.
  • VMs virtual machines
  • VMM virtual machine monitor
  • GuestOS Guest Operating System
  • Which central processing unit (CPU) an application (APP) running in a virtual machine is scheduled onto has a large impact on the application's final performance, especially under a Non-Uniform Memory Access (NUMA) architecture.
  • CPU Central Processing Unit
  • NUMA Non-Uniform Memory Access
  • Because deploying the VMM adds a virtual hardware layer, scheduling computing resources for such an application requires dual scheduling: kernel scheduling by the guest operating system inside the virtual machine plus VMM scheduling. Testing shows that the performance overhead caused by this dual resource scheduling is not stable and may cause the application's performance to drop by more than 50%.
  • For applications with modest performance requirements, the performance overhead introduced by server virtualization is acceptable and has little impact on the user experience. For applications with strict performance requirements, however, this overhead can seriously affect the user experience. For example, a video communication application may need the virtual machine to forward data very quickly; because the overhead is unstable, forwarding may occasionally be late, data packets are lost, the video communication becomes intermittent, and the customer experience suffers.
  • The present invention provides a computing resource scheduling method and apparatus, which are used to solve the prior-art problem that the large performance overhead introduced by server virtualization causes data loss when a virtual machine processes the data of some applications, thereby affecting the customer experience.
  • In one aspect, an embodiment of the present invention provides a computing resource scheduling method applied to a physical server, where the physical CPUs in the physical server include physical CPUs that can be passed through directly to a VCPU and physical CPUs that cannot. The method includes the following steps: when a virtual machine of the physical server starts a first application, the virtual machine determines whether the first application is included in a set application set; when the virtual machine determines that the set application set includes the first application, the virtual machine configures a first VCPU that can pass through directly to a first physical CPU to the first application; and when the virtual machine determines that the set application set does not include the first application, the virtual machine configures a second VCPU that cannot pass through to a physical CPU to the first application, so that the VMM in the physical server subsequently schedules the second VCPU onto a second physical CPU that cannot be passed through to a VCPU.
  • With this method, the virtual machine can place applications with strict performance requirements in the set application set for maintenance. When the virtual machine starts such an application, computing resource scheduling need not be performed by the VMM in the physical server: the virtual machine directly configures the first VCPU, which passes through to the first physical CPU, to the first application without VMM scheduling. This reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
  • In a possible design, before the virtual machine starts the first application, the virtual machine determines the first physical CPU that is configured in the physical server as capable of VCPU passthrough, and binds the first physical CPU to the first VCPU. Because the virtual machine binds the first physical CPU to the first VCPU in the virtual machine, once the virtual machine configures the first VCPU to the first application it can directly determine that the hardware computing resource of the first application is the first physical CPU; hardware computing resource scheduling is then performed not by the VMM in the physical server but by the kernel scheduling of the guest operating system of the virtual machine.
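  • The binding step above is, in effect, CPU pinning: the first VCPU ends up with a dedicated physical CPU. The patent does not publish code; the following is a minimal sketch, assuming a Linux host on which each VCPU is backed by a host thread (as in KVM/QEMU), of how such a thread could be pinned to one physical CPU with the standard sched_setaffinity(2) call. The thread id and CPU number are placeholders, and this is only an illustration of the pinning idea, not the patent's mechanism.

```c
/* Minimal illustration (not the patent's implementation): pin the host thread
 * that backs a VCPU to one dedicated physical CPU using Linux CPU affinity. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the thread identified by `tid` (0 = the calling thread) to `cpu`. */
static int pin_to_physical_cpu(pid_t tid, int cpu)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);            /* only this physical CPU is allowed */
    return sched_setaffinity(tid, sizeof(mask), &mask);
}

int main(void)
{
    /* 0 means "the calling thread"; a VMM would pass the VCPU thread's TID. */
    if (pin_to_physical_cpu(0, 3) != 0) {   /* CPU 3 is an arbitrary example */
        perror("sched_setaffinity");
        return 1;
    }
    printf("thread pinned to physical CPU 3\n");
    return 0;
}
```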
  • In a possible design, the scheduling priority of any one of the multiple applications included in the set application set is higher than the scheduling priority of an application that does not belong to the set. In this way, when multiple applications are started at the same time, the virtual machine schedules computing resources for the applications in the set application set first, guaranteeing their performance requirements. In a possible design, the virtual machine may determine an application's scheduling priority from the Quality of Service (QoS) parameter corresponding to the application.
  • QoS Quality of Service
  • In a possible design, before the virtual machine starts the first application, the virtual machine adds the first VCPU to a non-virtualized scheduling list; in that case, the virtual machine configures the first VCPU to the first application by selecting the first VCPU from the non-virtualized scheduling list and configuring it to the first application. Because the non-virtualized scheduling list contains only VCPUs that can pass through directly to a physical CPU, whenever the virtual machine starts an application with strict performance requirements it assigns a VCPU taken from this list to that application.
  • In a possible design, before the virtual machine configures the first VCPU to the first application, a kernel of the virtual machine creates a kernel thread corresponding to the first application; this kernel thread is executed by the first VCPU when the service of the first application is carried out. After the virtual machine configures the first VCPU to the first application, the first VCPU can therefore execute the service of the first application. This ensures that the first physical CPU, which is passed through to the first VCPU, has a corresponding kernel thread maintaining the running environment of the first application, and that the first application runs in an environment that does not go through the virtualization layer.
  • In a possible design, after the virtual machine configures the first VCPU to the first application, if an exception switch occurs while the first VCPU is executing the kernel thread, the kernel of the virtual machine executes the kernel thread until the exception switch has been handled and then hands execution back to the first VCPU, which continues executing the kernel thread. In this way, when an exception switch occurs while the first physical CPU is running the first application, the kernel of the virtual machine can skip the VMM and handle it directly, ensuring the security of running the first application.
  • In a possible design, the virtual machine may release the binding between the first VCPU and the first physical CPU, that is, restore the first physical CPU to a physical CPU that cannot be passed through to a VCPU. The virtual machine can thus dynamically switch physical CPUs between the passthrough-capable and non-passthrough types, preserving the flexibility of the physical server's computing resource scheduling.
  • In another aspect, an embodiment of the present invention further provides a computing resource scheduling apparatus that has the function of implementing the virtual machine behavior in the foregoing method examples. The function may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the function. In a possible design, the structure of the apparatus includes a judging unit and a processing unit, and these units can perform the corresponding functions in the foregoing method examples; for details, refer to the detailed description in the method examples, which is not repeated here.
  • In still another aspect, an embodiment of the present invention further provides a physical server that includes multiple physical central processing units (CPUs), a virtual machine monitor (VMM), and at least one virtual machine, each virtual machine including multiple virtual central processing units (VCPUs). Each virtual machine maintains a set application set that includes multiple applications, and each of the at least one virtual machine has the functions of the virtual machine in the foregoing method examples. With the provided method, the virtual machine can place applications with strict performance requirements in the set application set for maintenance; when the virtual machine starts such an application, computing resource scheduling need not be performed by the VMM in the physical server, that is, the virtual machine directly configures the first VCPU, which passes through to the first physical CPU, to the first application without VMM scheduling. This reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
  • FIG. 1 is a schematic architectural diagram of a physical server according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of a computing resource scheduling method according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a computing resource scheduling apparatus according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a physical server according to an embodiment of the present invention;
  • FIG. 5 is a detailed flowchart of a computing resource scheduling method according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of another physical server according to an embodiment of the present invention.
  • The embodiments of the present invention provide a computing resource scheduling method and apparatus, which are used to solve the problem that the large performance overhead introduced by server virtualization causes data loss when a virtual machine processes an application's data, thereby affecting the customer experience. The method and the apparatus are based on the same inventive concept; because the principles by which they solve the problem are similar, the implementations of the apparatus and the method may refer to each other, and repeated descriptions are omitted.
  • In the embodiments of the present invention, the physical CPUs in the physical server include physical CPUs that can be passed through directly to a virtual CPU (VCPU) and physical CPUs that cannot. After a virtual machine starts a first application, the virtual machine determines whether the first application is included in the set application set. If it is included, the virtual machine directly configures a first VCPU, which passes through to a first physical CPU, to the first application; if it is not included, the virtual machine performs hardware computing resource scheduling for the first application through the VMM. In this way, the virtual machine can place applications with strict performance requirements in the set application set for maintenance: when the virtual machine starts an application and determines that the application exists in the set application set (that is, the application is determined to be performance-critical), computing resource scheduling need not be performed by the VMM in the physical server. The virtual machine configures the first VCPU, which passes through to the first physical CPU, to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
  • A physical CPU, as opposed to a VCPU, is a hardware computing resource in the physical server. In the embodiments of the present invention, a multi-core physical server that implements server virtualization contains multiple physical CPUs, and these include physical CPUs that can be passed through directly to a VCPU and physical CPUs that cannot. A passthrough-capable physical CPU is a hardware computing resource that the physical server assigns to applications with strict performance requirements; once a physical CPU can be passed through to a VCPU, it is no longer visible to the other virtual machines managed by the VMM in the physical server. A physical CPU that cannot be passed through is a hardware computing resource assigned to ordinary applications without strict performance requirements; when a virtual machine schedules computing resources for an ordinary application, the traditional scheduling mode is used, that is, the virtual machine combines its own kernel scheduling with the VMM scheduling of the physical server.
  • A VCPU is a CPU in a virtual machine. When a virtual machine executes an application through a VCPU, the application still actually needs to be scheduled to run on the physical CPU bound to that VCPU.
  • The scheduling priority of any one of the multiple applications included in the set application set is higher than the scheduling priority of an application that does not belong to the set. The scheduling priority indicates the order in which the virtual machine schedules computing resources for applications; in the embodiments of the present invention, the virtual machine schedules computing resources for the applications in the set application set first.
  • The guest operating system, that is, the virtual operating system, is the operating system of the virtual machine.
  • The host system (HostSystem) is the operating system of the physical server.
  • The virtual machine management system is deployed in the host system of the physical server and is used to manage and control the virtual machines in the physical server, for example to create, start, shut down, and delete virtual machines.
  • Multiple means two or more.
  • "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean that A exists alone, that A and B exist at the same time, or that B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
  • As shown in FIG. 1, the physical server includes multiple cores, that is, multiple physical CPUs, and these include physical CPUs that can be passed through directly to a VCPU (physical CPU0 and physical CPUq in the figure) and physical CPUs that cannot (physical CPU1, physical CPU2, and physical CPUp in the figure). Because the physical server is virtualized, multiple VMs, namely VM1 and VM2 in the figure, can be deployed on it; to implement server virtualization, the physical server further includes a VMM, which implements hardware resource management and computing resource scheduling for the physical server.
  • In the physical server provided by the embodiments of the present invention, the way computing resources are scheduled when an application starts depends on the application's performance requirements. Each of the VMs can maintain a set application set whose applications are all performance-critical. When a VM starts an application, the VM checks whether the application is included in the set application set it maintains; if so, the VM's kernel scheduling directly configures a VCPU that passes through to a physical CPU to the application; otherwise, the traditional scheduling mode, that is, the combination of the VM's kernel scheduling and VMM scheduling, performs hardware computing resource scheduling for the application.
  • In this way, while still implementing server virtualization, the physical server ensures that when a virtual machine starts an application and determines that it is an application with strict performance requirements, the virtual machine directly configures a VCPU that passes through to a physical CPU to the application, without VMM scheduling. This reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the application, and improves the customer experience.
  • Further, because all VMs in a traditional physical server schedule computing resources through the VMM, once the VMM is attacked, every virtual machine in the physical server can be controlled, and the security of the virtual machines and of the applications in them is poor. In the embodiments of the present invention, the computing resource scheduling of some performance-critical applications in a VM is not implemented through the VMM, which also protects the security of these applications.
  • For example, as shown in the figure, when VM1 starts APP1 and determines that APP1 is included in the set application set maintained by VM1, VM1 determines VCPU1, which can pass through to physical CPU0, and configures VCPU1 to APP1; the hardware computing resource allocated by VM1 to APP1 is therefore physical CPU0. When VM1 starts APP2 and determines that APP2 is not included in the set application set maintained by VM1, VM1 uses the traditional scheduling mode: it first configures VCPU2, which cannot pass through to a physical CPU, to APP2 through kernel scheduling, and the VMM in the physical server then schedules VCPU2 onto physical CPU1; the hardware computing resource allocated by VM1 to APP2 is therefore physical CPU1.
  • a computing resource scheduling method provided by an embodiment of the present invention may be applied to a physical server as shown in FIG. 1.
  • the processing flow of the method includes:
  • Step 201: When the virtual machine starts a first application, the virtual machine determines whether the first application is included in a set application set, where the set application set includes multiple applications.
  • The set application set is maintained in the virtual machine, and the multiple applications in it are applications with strict performance requirements. As described for the physical server in FIG. 1, the virtual machine has two ways of scheduling computing resources, and which one is used when an application starts depends on the application's performance requirements. Therefore, when the virtual machine starts the first application, it needs to determine whether the first application is performance-critical, that is, whether the locally maintained set application set includes the first application.
  • Optionally, the virtual machine also maintains an application management module that provides an application registration interface to the user; through this interface the user can register applications into the set application set to guarantee their performance requirements.
  • Optionally, before the virtual machine starts the first application, the method further includes: the virtual machine determines the first physical CPU in the physical server to which the virtual machine belongs, and binds the first physical CPU to the first VCPU.
  • Whether a physical CPU in the physical server can be passed through to a VCPU is configured by the physical server. Optionally, the host system of the physical server includes a physical CPU resource management module that divides the multiple physical CPUs managed by the VMM into the two types described above. When the virtual machine management system in the physical server creates the virtual machine, it determines, through the interface of the physical CPU resource management module, the first physical CPU that is configured as passthrough-capable, and the virtual machine binds the determined first physical CPU to a VCPU in the virtual machine. The first physical CPU may be bound to at least one VCPU of the virtual machine (including the first VCPU), and it may also be bound to VCPUs in one or more virtual machines; the present invention does not limit this.
  • Because the virtual machine binds the first physical CPU to the first VCPU, once the virtual machine configures the first VCPU to the first application it can directly determine that the hardware computing resource of the first application is the first physical CPU; hardware computing resource scheduling is then performed not by the VMM in the physical server but by the kernel scheduling of the guest operating system of the virtual machine.
  • Optionally, when creating the virtual machine, the virtual machine management system in the physical server may attach a non-virtualization label to the virtual machine to indicate that the virtual machine can schedule computing resources without going through the VMM.
  • Optionally, the virtual machine may use scheduling priorities to decide the order in which computing resources are scheduled for multiple applications that start at the same time. In this case, the scheduling priority of any one of the applications included in the set application set is higher than the scheduling priority of an application that does not belong to the set, so that when multiple applications start at the same time, the virtual machine schedules computing resources for the applications in the set application set first and guarantees their performance requirements.
  • Optionally, an application's scheduling priority may be set by the user for that application, or determined by the virtual machine from the Quality of Service (QoS) parameter corresponding to the application; the present invention does not limit this.
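  • To make the priority rule concrete, the sketch below (an illustration constructed for this text, not taken from the patent) orders a batch of simultaneously started applications: membership in the set application set ranks first, and a hypothetical numeric QoS value breaks ties.

```c
/* Illustrative ordering of simultaneously started applications: applications in
 * the set application set are scheduled first; a QoS value (larger = more
 * demanding, an assumption) breaks ties. Not taken from the patent. */
#include <stdio.h>
#include <stdlib.h>

struct app {
    const char *name;
    int in_set_application_set;  /* 1 = member of the performance-critical set */
    int qos;                     /* hypothetical QoS parameter */
};

static int by_scheduling_priority(const void *a, const void *b)
{
    const struct app *x = a, *y = b;
    if (x->in_set_application_set != y->in_set_application_set)
        return y->in_set_application_set - x->in_set_application_set;
    return y->qos - x->qos;
}

int main(void)
{
    struct app apps[] = {
        { "APP2", 0, 10 }, { "APP1", 1, 50 }, { "APP3", 1, 20 },
    };
    qsort(apps, 3, sizeof(apps[0]), by_scheduling_priority);
    for (int i = 0; i < 3; i++)          /* prints APP1, APP3, APP2 */
        printf("%d: %s\n", i, apps[i].name);
    return 0;
}
```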
  • QoS quality of service
  • Step 202: When the virtual machine determines that the set application set includes the first application, the virtual machine configures a first virtual central processing unit (VCPU) to the first application, where the first VCPU can pass through directly to a first physical CPU; when the virtual machine determines that the set application set does not include the first application, the virtual machine configures a second VCPU to the first application and schedules the second VCPU onto a second physical CPU through the VMM in the physical server to which the virtual machine belongs, where the second VCPU cannot pass through directly to the second physical CPU.
  • Both the first physical CPU and the second physical CPU are physical CPUs in the physical server to which the virtual machine belongs.
  • As discussed above, how the virtual machine schedules computing resources when an application starts depends on the application's performance requirements. Because the applications in the set application set maintained in the virtual machine are all performance-critical, when the virtual machine starts the first application it checks whether the first application is included in that set: if so, the virtual machine's kernel scheduling directly configures a VCPU bound to a passthrough-capable physical CPU to the application; otherwise, the traditional scheduling mode, that is, the combination of the virtual machine's kernel scheduling and VMM scheduling, performs hardware computing resource scheduling for the first application.
  • Optionally, before the virtual machine starts the first application, the method further includes: the virtual machine adds the first VCPU to a non-virtualized scheduling list; optionally, this operation may be performed when the virtual machine starts.
  • After the virtual machine starts, it therefore contains two scheduling lists: the traditional normal scheduling list, which is the virtual machine's default and contains the VCPUs that cannot pass through to a physical CPU, and the non-virtualized scheduling list, which contains the VCPUs that can. When the virtual machine starts an ordinary application, it configures a VCPU from the normal scheduling list to it; when the virtual machine starts an application with strict performance requirements, it configures a VCPU from the non-virtualized scheduling list to it.
  • Accordingly, configuring the first VCPU to the first application includes: the virtual machine selects the first VCPU in the non-virtualized scheduling list and configures the first VCPU to the first application. Optionally, when the non-virtualized scheduling list contains multiple VCPUs, the virtual machine may perform load balancing across them and select the first VCPU among the multiple VCPUs, as the sketch below illustrates.
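  • The following user-space model puts steps 201 and 202 together: the guest keeps two VCPU lists, chooses the target list by set membership, and picks the least-loaded VCPU inside the non-virtualized list. The data structures and the fall-back "request VMM scheduling" step are hypothetical; this is a simplified sketch of the described decision, not the patent's code.

```c
/* Simplified model of the dual scheduling path described above (illustration
 * only). The structures and the VMM fall-back step are hypothetical. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_VCPUS 8

struct vcpu { int id; int load; bool can_passthrough; };

struct guest {
    const char *set_application_set[4];                     /* critical apps   */
    struct vcpu nonvirt_list[MAX_VCPUS]; int nonvirt_cnt;    /* passthrough     */
    struct vcpu normal_list[MAX_VCPUS];  int normal_cnt;     /* ordinary VCPUs  */
};

static bool in_set(const struct guest *g, const char *app)
{
    for (int i = 0; i < 4 && g->set_application_set[i]; i++)
        if (strcmp(g->set_application_set[i], app) == 0)
            return true;
    return false;
}

static struct vcpu *least_loaded(struct vcpu *list, int n)
{
    struct vcpu *best = &list[0];
    for (int i = 1; i < n; i++)
        if (list[i].load < best->load)
            best = &list[i];
    return best;
}

/* Returns the VCPU configured to the application that is being started. */
static struct vcpu *configure_vcpu(struct guest *g, const char *app)
{
    if (in_set(g, app))                     /* performance-critical path */
        return least_loaded(g->nonvirt_list, g->nonvirt_cnt);
    /* ordinary path: pick a normal VCPU; the VMM would then schedule it onto
     * a non-passthrough physical CPU (double scheduling). */
    printf("%s: VMM scheduling still required\n", app);
    return least_loaded(g->normal_list, g->normal_cnt);
}

int main(void)
{
    struct guest g = {
        .set_application_set = { "video_forwarder", NULL },
        .nonvirt_list = { { 1, 0, true } },  .nonvirt_cnt = 1,
        .normal_list  = { { 2, 0, false } }, .normal_cnt  = 1,
    };
    printf("critical app -> VCPU%d\n", configure_vcpu(&g, "video_forwarder")->id);
    printf("ordinary app -> VCPU%d\n", configure_vcpu(&g, "text_editor")->id);
    return 0;
}
```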
  • Optionally, before the virtual machine configures the first VCPU to the first application, the method further includes: the kernel of the virtual machine creates a kernel thread corresponding to the first application, and the kernel thread is executed by the first VCPU when the service of the first application is carried out. Because the kernel of the virtual machine creates this kernel thread, after the virtual machine configures the first VCPU to the first application the first VCPU can execute the service of the first application. This ensures that the first physical CPU, which is passed through to the first VCPU, has a corresponding kernel thread maintaining the running environment of the first application, and that the first application runs in an environment that does not go through the virtualization layer.
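  • For a concrete picture of "a kernel thread corresponding to the first application", the fragment below sketches how a Linux guest kernel module could create such a thread and bind it to the guest CPU that corresponds to the first VCPU, using the standard kthread API. This is an assumption-laden illustration, not the patent's local CPU control microkernel; the VCPU number and the worker body are placeholders.

```c
/* Illustrative guest-kernel-module fragment (not the patent's microkernel):
 * create a kernel thread for the critical application and bind it to the
 * guest CPU that corresponds to the passthrough first VCPU. */
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/module.h>

#define FIRST_VCPU_ID 1   /* assumption: VCPU1 is the passthrough VCPU */

static struct task_struct *critical_worker;

static int critical_app_fn(void *data)
{
    while (!kthread_should_stop()) {
        /* placeholder for the first application's service work */
        msleep(10);
    }
    return 0;
}

static int __init critical_init(void)
{
    critical_worker = kthread_create(critical_app_fn, NULL, "crit_app_worker");
    if (IS_ERR(critical_worker))
        return PTR_ERR(critical_worker);
    kthread_bind(critical_worker, FIRST_VCPU_ID);  /* run only on the first VCPU */
    wake_up_process(critical_worker);
    return 0;
}

static void __exit critical_exit(void)
{
    kthread_stop(critical_worker);
}

module_init(critical_init);
module_exit(critical_exit);
MODULE_LICENSE("GPL");
```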
  • Optionally, after the virtual machine configures the first VCPU to the first application, the method further includes: when an exception switch (for example, a page fault or an interrupt) occurs while the first VCPU is executing the kernel thread, the kernel of the virtual machine executes the kernel thread until the exception switch has been handled, and execution then returns to the first VCPU. In this way, when an exception switch occurs while the first physical CPU runs the first application, the kernel of the virtual machine can skip the VMM and handle it directly, ensuring the security of running the first application.
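  • As a way to visualize this hand-off, here is a toy user-space model (entirely hypothetical, with made-up names) in which execution of the critical thread alternates between the first VCPU and the guest kernel whenever a simulated exception occurs, never involving the VMM.

```c
/* Toy model of the exception hand-off described above (hypothetical, not the
 * patent's mechanism): the guest kernel takes over the kernel thread while an
 * exception is handled and then gives execution back to the first VCPU. */
#include <stdbool.h>
#include <stdio.h>

enum runner { FIRST_VCPU, GUEST_KERNEL };

struct thread_state { int steps_left; bool pending_exception; };

static void run_one_step(struct thread_state *t, enum runner who)
{
    printf("step %d executed by %s\n", t->steps_left,
           who == FIRST_VCPU ? "first VCPU" : "guest kernel (VMM bypassed)");
    t->steps_left--;
    /* simulate an exception switch (e.g. page fault) every third step */
    t->pending_exception = (t->steps_left % 3 == 0 && t->steps_left > 0);
}

int main(void)
{
    struct thread_state t = { .steps_left = 6, .pending_exception = false };
    enum runner who = FIRST_VCPU;

    while (t.steps_left > 0) {
        run_one_step(&t, who);
        /* guest kernel handles the exception, then hands back to the VCPU */
        who = t.pending_exception ? GUEST_KERNEL : FIRST_VCPU;
    }
    return 0;
}
```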
  • In the method provided by this embodiment of the present invention, the method further includes: the virtual machine releases the binding between the first VCPU and the first physical CPU, that is, the virtual machine restores the first physical CPU to a physical CPU that cannot be passed through to a VCPU. The virtual machine can thus dynamically switch physical CPUs between the passthrough-capable and non-passthrough types, preserving the flexibility of the physical server's computing resource scheduling.
  • With the computing resource scheduling method of the foregoing embodiment, the physical CPUs in the physical server include physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. After the virtual machine starts the first application, the virtual machine determines whether the first application is included in the set application set; if it is included, the virtual machine directly configures the first VCPU, which passes through to the first physical CPU, to the first application; if it is not included, the virtual machine performs hardware computing resource scheduling for the first application through the VMM. The virtual machine can therefore keep performance-critical applications in the set application set, and once it determines at start-up that an application belongs to that set, computing resource scheduling need not be performed by the VMM in the physical server: the first VCPU that passes through to the first physical CPU is configured to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
  • Based on the foregoing embodiments, the present invention further provides a computing resource scheduling apparatus. Referring to FIG. 3, the computing resource scheduling apparatus 300 includes a determining unit 301 and a processing unit 302.
  • The determining unit 301 is configured to determine, when the apparatus 300 starts a first application, whether the first application is included in a set application set, where the set application set includes multiple applications.
  • The processing unit 302 is configured to: when the determining unit 301 determines that the set application set includes the first application, configure a first virtual central processing unit (VCPU) to the first application, where the first VCPU can pass through directly to a first physical CPU; and when the determining unit 301 determines that the set application set does not include the first application, configure a second VCPU to the first application and schedule the second VCPU onto a second physical CPU through the virtual machine monitor (VMM) in the physical server where the apparatus 300 is located, where the second VCPU cannot pass through directly to the second physical CPU. Both the first physical CPU and the second physical CPU are physical CPUs in the physical server.
  • Optionally, the processing unit 302 is further configured to: before the apparatus 300 starts the first application, determine the first physical CPU in the physical server, and bind the first physical CPU to the first VCPU.
  • Optionally, the scheduling priority of any one of the multiple applications included in the set application set is higher than the scheduling priority of an application that does not belong to the set.
  • Optionally, the processing unit 302 is further configured to: before the apparatus 300 starts the first application, add the first VCPU to a non-virtualized scheduling list; and when configuring the first VCPU to the first application, the processing unit 302 is specifically configured to select the first VCPU in the non-virtualized scheduling list and configure the first VCPU to the first application.
  • Optionally, the processing unit 302 is further configured to: before configuring the first VCPU to the first application, create a kernel thread corresponding to the first application, where the kernel thread is executed by the first VCPU when the service of the first application is carried out.
  • Optionally, the processing unit 302 is further configured to: after configuring the first VCPU to the first application, when an exception switch occurs while the first VCPU is executing the kernel thread, execute the kernel thread until the exception switch has been handled.
  • Optionally, the apparatus 300 further includes a management unit configured to release the binding between the first VCPU and the first physical CPU.
  • With the computing resource scheduling apparatus provided by this embodiment, a virtual machine runs in the apparatus, and the physical CPUs in the physical server where the apparatus is located include physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. After the apparatus starts the first application, it determines whether the first application is included in the set application set; if it is included, the apparatus directly configures the first VCPU, which passes through to the first physical CPU, to the first application; if it is not included, the apparatus performs hardware computing resource scheduling for the first application through the VMM. The apparatus can therefore keep performance-critical applications in the set application set, and when it determines at start-up that an application belongs to that set, computing resource scheduling need not be performed by the VMM in the physical server: the first VCPU that passes through to the first physical CPU is configured to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
  • Based on the foregoing embodiments, the present invention further provides a physical server. As shown in FIG. 4, the physical server includes a VM, a VMM, and the host system of the physical server.
  • The VMM is configured to implement hardware resource management and computing resource scheduling for the physical server.
  • The host system includes a physical CPU resource management module configured to set each physical CPU in the physical server as either a physical CPU that can be passed through to a VCPU or a physical CPU that cannot, and to manage the two types; for example, a passthrough-capable physical CPU can be converted into a non-passthrough one, and a non-passthrough physical CPU can be converted into a passthrough-capable one.
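  • These management operations amount to keeping a registry of which physical CPUs are currently reserved for VCPU passthrough and moving CPUs between the two pools. The sketch below is a hypothetical in-memory version of such a registry; the patent does not specify the module's data structures, and a real implementation would also have to hide or expose the CPU to the VMM scheduler.

```c
/* Hypothetical in-memory registry for the physical CPU resource management
 * module: tracks which physical CPUs are reserved for VCPU passthrough and
 * converts CPUs between the two types. Illustration only. */
#include <stdbool.h>
#include <stdio.h>

#define NR_PHYS_CPUS 8

static bool passthrough_capable[NR_PHYS_CPUS];   /* true = usable for passthrough */

/* Convert a CPU that cannot be passed through into one that can, or back. */
static int set_passthrough(int cpu, bool enable)
{
    if (cpu < 0 || cpu >= NR_PHYS_CPUS)
        return -1;
    passthrough_capable[cpu] = enable;
    /* a real module would also hide/expose the CPU to the VMM scheduler here */
    return 0;
}

static int find_passthrough_cpu(void)
{
    for (int cpu = 0; cpu < NR_PHYS_CPUS; cpu++)
        if (passthrough_capable[cpu])
            return cpu;
    return -1;
}

int main(void)
{
    set_passthrough(0, true);                 /* physical CPU0 -> passthrough  */
    set_passthrough(0, false);                /* restore it to the normal pool */
    set_passthrough(5, true);
    printf("first passthrough-capable CPU: %d\n", find_passthrough_cpu());
    return 0;
}
```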
  • It should be noted that in practice the deployment of the physical CPU resource management module differs with the underlying platform; the embodiment of the present invention provides only one possible example. For example, under a Kernel-based Virtual Machine (KVM) virtualization platform, the physical CPU resource management module can be deployed in the host system. Under a Xen or VMware ESXi virtualization platform, where the VMM sits between the host system and the hardware and manages the hardware resources, the physical CPU resource management module is split into a physical CPU virtualization management module and a physical CPU resource management interface, deployed in the VMM and in the host system respectively. The physical CPU virtualization management module deployed in the VMM isolates the two types of physical CPUs from each other to ensure that the VCPUs of a virtual machine do not run on a passthrough-capable physical CPU, and the physical CPU resource management interface deployed in the host system implements the management of the two types of physical CPUs.
  • When the virtual machine management system creates a VM, it determines, through the interface of the physical CPU resource management module, a physical CPU in the physical server that can be passed through to a VCPU, and the VM binds the determined physical CPU to a VCPU in the VM.
  • The VM also includes a local resource adaptation apparatus used to schedule computing resources for the critical applications (applications with strict performance requirements) in the VM, that is, to schedule the critical applications to run on physical CPUs that support VCPU passthrough. The local resource adaptation apparatus includes an application management module together with a local CPU control microkernel, a kernel scheduling adaptation module, and a hybrid resource driver in the kernel of the guest operating system, where:
  • the application management module provides an application registration interface to the user, so that the user can register applications into the set application set maintained by the VM through this interface and thereby guarantee their performance requirements;
  • the local CPU control microkernel creates the kernel threads of critical applications and handles the exception switches that occur while the first VCPU executes such a kernel thread, to guarantee the running environment and running security of the critical applications;
  • the kernel scheduling adaptation module schedules a critical application to run on a physical CPU that supports VCPU passthrough, that is, it configures the VCPU that passes through to the physical CPU to the critical application; and
  • the hybrid resource driver determines the two types of physical CPUs by calling the interface of the physical CPU resource management module and presents the list of each type of physical CPU to the VM.
  • Referring to FIG. 5, the process by which the VM schedules computing resources for a critical application includes:
  • Step 501: The VM starts, and the kernel of the guest operating system of the VM loads the local resource adaptation apparatus.
  • Step 502: The hybrid resource driver invokes the physical CPU resource management module, determines a first physical CPU that can be passed through to a VCPU, and determines a first VCPU of the VM that is bound to the first physical CPU.
  • Step 503: The VM determines that it carries a non-virtualization label and loads the local CPU control microkernel and the kernel scheduling adaptation module.
  • Step 504: The local CPU control microkernel sets the operating mode of the first physical CPU to a non-virtualized mode, that is, it disables the hardware virtualization support of the first physical CPU, and it starts a group of kernel threads used to serve the memory of the first physical CPU.
  • Step 505: The kernel scheduling adaptation module removes the VCPUs bound to the first physical CPU from the kernel's default normal scheduling list and creates a non-virtualized scheduling list, which contains all of the VM's VCPUs bound to the first physical CPU (including the first VCPU).
  • Step 506: After the VM starts an application, the VM determines that the application is a critical application. The VM may do so by determining whether the application exists in the locally maintained set application set.
  • Step 507: The local CPU control microkernel creates the kernel thread and the resources corresponding to the critical application.
  • Step 508: The kernel scheduling adaptation module selects a target VCPU (the first VCPU) in the non-virtualized scheduling list and configures the selected first VCPU to the critical application.
  • Step 509: The kernel thread corresponding to the critical application is executed on the first physical CPU bound to the first VCPU.
  • Step 510: When an exception switch occurs while the first VCPU is executing the kernel thread, the local CPU control microkernel executes the kernel thread until the exception switch has been handled; after the exception switch is handled, the right to run the kernel thread switches back to the first VCPU.
  • With the computing resource scheduling method of the foregoing embodiment of the present invention, after the virtual machine starts a critical application, the virtual machine need not let the VMM in the physical server schedule the computing resources; instead, it configures a VCPU that passes through to a physical CPU to the application, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the application, and improves the customer experience.
  • The division of units in the embodiments of the present invention is schematic and is merely a division by logical function; other division manners are possible in actual implementations. The functional units in the embodiments of the present application may be integrated into one processing unit, may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer readable storage medium. The computer readable storage medium contains a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • the embodiment of the present invention further provides a physical server.
  • the physical server 600 includes: a multi-core processor 601, a communication bus 602, and a memory 603, where
  • the multi-core processor 601 and the memory 603 are connected to each other through the communication bus 602.
  • the communication bus 602 can be a peripheral component interconnect (PCI) communication bus or an extended industry standard architecture (EISA) communication bus.
  • PCI peripheral component interconnect
  • EISA extended industry standard architecture
  • the communication bus can be divided into an address communication bus, a data communication bus, a control communication bus, and the like. For ease of representation, only one thick line is shown in Figure 6, but it does not mean that there is only one communication bus or one type of communication bus.
  • The physical server adopts server virtualization technology, so the physical server 600 further includes a VMM 604 and at least one virtual machine. Each virtual machine includes multiple VCPUs and maintains a set application set, and the set application set contains multiple applications.
  • The multi-core processor 601 includes multiple physical CPUs, and these physical CPUs can be marked as two types: physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. At least one physical CPU of the multi-core processor 601 is configured to implement the computing resource scheduling method shown in FIG. 2 of the embodiment of the present invention, which includes configuring a first VCPU among the multiple VCPUs in the virtual machine to the first application, where the first VCPU can pass through directly to a first physical CPU among the multiple physical CPUs.
  • the physical server 600 further includes a communication interface 605 for performing communication interaction with other devices connected to the physical server 600.
  • The memory 603 is configured to store programs and the like. Specifically, a program may include program code, and the program code includes computer operating instructions. The memory 603 may include a random access memory (RAM) and may also include a non-volatile memory, for example at least one magnetic disk storage. At least one physical CPU of the multi-core processor 601 executes the application programs stored in the memory 603 to implement the above functions, thereby implementing the computing resource scheduling method shown in FIG. 2.
  • With the physical server provided by this embodiment, the physical CPUs in the physical server include physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. After a virtual machine starts the first application, the physical server determines whether the first application is included in the set application set stored for that virtual machine. If it is included, the physical server directly configures a VCPU of the virtual machine that passes through to a physical CPU to the first application; if it is not included, the physical server performs hardware computing resource scheduling for the first application through the VMM. The physical server can therefore keep applications with strict performance requirements in the set application set corresponding to each virtual machine, and when a virtual machine starts an application that exists in the corresponding set application set (that is, the application is determined to be performance-critical), computing resource scheduling need not be performed by the VMM in the physical server: the first VCPU that passes through to the physical CPU is configured to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
  • In summary, the embodiments of the present invention provide a computing resource scheduling method and apparatus. The physical CPUs in the physical server include physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. After the virtual machine starts the first application, the virtual machine determines whether the first application is included in the set application set; if it is included, the virtual machine directly configures the first VCPU, which passes through to the first physical CPU, to the first application; if it is not included, the virtual machine performs hardware computing resource scheduling for the first application through the VMM. The virtual machine can therefore keep performance-critical applications in the set application set, and for such applications computing resource scheduling need not be performed by the VMM in the physical server: the first VCPU that passes through to the first physical CPU is configured to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
  • A person skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) that contain computer-usable program code.
  • These computer program instructions may also be stored in a computer readable memory that can direct a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Abstract

A computing resource scheduling method and apparatus, used to solve the problem that the large performance overhead introduced by server virtualization causes data packet loss when a virtual machine processes the data of some applications. The method is as follows: after a virtual machine starts a first application, the virtual machine determines whether the first application is included in a set application set; if it is included, the virtual machine directly configures a first VCPU that can pass through to a first physical CPU to the first application; if it is not included, the virtual machine performs hardware computing resource scheduling for the first application through the VMM. In this way, when the virtual machine starts a first application that is in the set application set, the virtual machine can directly configure the first VCPU, which passes through to the physical CPU, to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization and guarantees the virtual machine's data processing efficiency for the first application.

Description

Computing resource scheduling method and apparatus
This application claims priority to Chinese Patent Application No. 201610793375.7, filed with the Chinese Patent Office on August 31, 2016 and entitled "Computing resource scheduling method and apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a computing resource scheduling method and apparatus.
Background
With the rapid development of computer technology, the energy consumption and resource utilization of computers have drawn the attention of developers. Cloud computing, a key computing model in computer technology, abstracts all computers into specific computing resources and then provides these resources to users, instead of directly providing users with one or more computers as earlier computing models did. The greatest benefit of this model is that users can request resources according to their own needs, avoiding unnecessary waste of resources and improving resource utilization.
Server virtualization is a key infrastructure-layer technology in cloud computing: a physical server is virtualized so that multiple virtual machines (Virtual Machine, VM) can be deployed on a single physical server, which improves the resource utilization of the physical server and reduces cost.
Because server virtualization is implemented on the basis of a virtual machine monitor (Virtual Machine Monitor, VMM), an abstract hardware layer has to be inserted between the hardware of the physical server and the virtual operating system of each virtual machine (that is, the guest operating system (Guest Operating System, GuestOS)), which inevitably brings a certain performance overhead.
For example, in an environment where the physical server runs a multi-core operating system, which central processing unit (Central Processing Unit, CPU) an application (Application, APP) running in a virtual machine is scheduled onto has a very large impact on the application's final performance (especially under a Non-Uniform Memory Access (NUMA) architecture). Because deploying the VMM adds a virtual hardware layer, scheduling computing resources for such an application requires dual scheduling: kernel scheduling by the guest operating system in the virtual machine plus VMM scheduling. Testing shows that the performance overhead caused by this resource scheduling is not stable and may cause the application's performance to drop by more than 50%.
For applications with modest performance requirements, the performance overhead introduced by server virtualization is acceptable and has little impact on the user experience. For applications with strict performance requirements, however, this overhead can seriously affect the user experience. For example, for a video communication application, the virtual machine may need to forward data very quickly; because the overhead is unstable, forwarding may occasionally be late, data packets are lost, the video communication becomes intermittent, and the customer experience is seriously affected.
Summary of the Invention
The present invention provides a computing resource scheduling method and apparatus, which are used to solve the prior-art problem that the large performance overhead introduced by server virtualization causes data packet loss when a virtual machine processes the data of some applications, thereby affecting the customer experience.
The specific technical solutions provided by the present invention are as follows:
In one aspect, an embodiment of the present invention provides a computing resource scheduling method applied to a physical server, where the physical CPUs in the physical server include physical CPUs that can be passed through directly to a VCPU and physical CPUs that cannot. The method includes the following steps: when a virtual machine in the physical server starts a first application, the virtual machine determines whether the first application is included in a set application set; when the virtual machine determines that the set application set includes the first application, the virtual machine configures a first VCPU that can pass through directly to a first physical CPU to the first application; when the virtual machine determines that the set application set does not include the first application, the virtual machine configures a second VCPU that cannot pass through to a physical CPU to the first application, so that the VMM in the physical server subsequently schedules the second VCPU onto a second physical CPU that cannot be passed through to a VCPU.
With the above method, the virtual machine can place applications with strict performance requirements in the set application set for maintenance. When the virtual machine starts an application and determines that the application exists in the set application set (that is, the application is determined to be performance-critical), computing resource scheduling need not be performed by the VMM in the physical server: the virtual machine configures the first VCPU, which passes through to the first physical CPU, to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
In a possible design, before the virtual machine starts the first application, the virtual machine determines the first physical CPU that is configured in the physical server as capable of VCPU passthrough, and binds the first physical CPU to the first VCPU.
In this method, the virtual machine binds the first physical CPU to the first VCPU in the virtual machine, so that after the virtual machine configures the first VCPU to the first application, the virtual machine can directly determine that the hardware computing resource of the first application is the first physical CPU; hardware computing resource scheduling is then performed not by the VMM in the physical server but by the kernel scheduling of the guest operating system of the virtual machine.
In a possible design, the scheduling priority of any one of the multiple applications included in the set application set is higher than the scheduling priority of an application that does not belong to the set. In this way, when multiple applications are started at the same time, the virtual machine schedules computing resources for the applications in the set application set first, guaranteeing their performance requirements.
In a possible design, the virtual machine may determine an application's scheduling priority from the Quality of Service (QoS) parameter corresponding to the application.
In a possible design, before the virtual machine starts the first application, the virtual machine adds the first VCPU to a non-virtualized scheduling list; in that case, the virtual machine configures the first VCPU to the first application by selecting the first VCPU from the non-virtualized scheduling list and configuring the first VCPU to the first application.
With the above method, the non-virtualized scheduling list maintained by the virtual machine contains the VCPUs that can pass through directly to a physical CPU, so that when the virtual machine starts an application with strict performance requirements, it configures a VCPU contained in the non-virtualized scheduling list to the first application.
In a possible design, before the virtual machine configures the first VCPU to the first application, the kernel of the virtual machine creates a kernel thread corresponding to the first application, and the kernel thread is executed by the first VCPU when the service of the first application is carried out.
With the above method, after the virtual machine subsequently configures the first VCPU to the first application, the first VCPU can execute the service of the first application. This ensures that the first physical CPU, which is passed through to the first VCPU, has a corresponding kernel thread maintaining the running environment of the first application, and that the first application runs in an environment that does not go through the virtualization layer.
In a possible design, after the virtual machine configures the first VCPU to the first application, when an exception switch occurs while the first VCPU is executing the kernel thread, the kernel of the virtual machine executes the kernel thread until the exception switch has been handled, and execution then returns to the first VCPU, which continues executing the kernel thread.
With the above method, when an exception switch occurs while the first physical CPU runs the first application, the kernel of the virtual machine can skip the VMM and handle it directly, ensuring the security of running the first application.
In a possible design, the virtual machine may release the binding between the first VCPU and the first physical CPU, that is, restore the first physical CPU to a physical CPU that cannot be passed through to a VCPU. With the above method, the virtual machine can dynamically switch physical CPUs between the passthrough-capable and non-passthrough types, preserving the flexibility of the physical server's computing resource scheduling.
In another aspect, an embodiment of the present invention further provides a computing resource scheduling apparatus that has the function of implementing the virtual machine behavior in the foregoing method examples. The function may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the function.
In a possible design, the structure of the apparatus includes a judging unit and a processing unit, and these units can perform the corresponding functions in the foregoing method examples; for details, refer to the detailed description in the method examples, which is not repeated here.
In still another aspect, an embodiment of the present invention further provides a physical server that includes multiple physical central processing units (CPUs), a virtual machine monitor (VMM), and at least one virtual machine, each virtual machine including multiple virtual central processing units (VCPUs). Each virtual machine maintains a set application set that includes multiple applications, and each of the at least one virtual machine has the functions of the virtual machine in the foregoing method examples; for details, refer to the detailed description in the method examples.
With the computing resource scheduling method provided by the present invention, the virtual machine can place applications with strict performance requirements in the set application set for maintenance. When the virtual machine starts an application and determines that the application exists in the set application set (that is, the application is determined to be performance-critical), computing resource scheduling need not be performed by the VMM in the physical server: the virtual machine configures the first VCPU, which passes through to the first physical CPU, to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
Brief Description of the Drawings
FIG. 1 is a schematic architectural diagram of a physical server according to an embodiment of the present invention;
FIG. 2 is a flowchart of a computing resource scheduling method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a computing resource scheduling apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a physical server according to an embodiment of the present invention;
FIG. 5 is a detailed flowchart of a computing resource scheduling method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another physical server according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present invention provide a computing resource scheduling method and apparatus, which are used to solve the prior-art problem that the large performance overhead introduced by server virtualization causes data packet loss when a virtual machine processes the data of some applications, thereby affecting the customer experience. The method and the apparatus of the present invention are based on the same inventive concept; because the principles by which they solve the problem are similar, the implementations of the apparatus and the method may refer to each other, and repeated descriptions are omitted.
In the embodiments of the present invention, the physical CPUs in the physical server include physical CPUs that can be passed through directly to a virtual CPU (Virtual CPU, VCPU) and physical CPUs that cannot. After a virtual machine starts a first application, the virtual machine determines whether the first application is included in a set application set. If it is included, the virtual machine directly configures a first VCPU that can pass through to a first physical CPU to the first application; if it is not included, the virtual machine performs hardware computing resource scheduling for the first application through the VMM. In this way, the virtual machine can place applications with strict performance requirements in the set application set for maintenance; when the virtual machine starts an application and determines that the application exists in the set application set (that is, the application is determined to be performance-critical), computing resource scheduling need not be performed by the VMM in the physical server: the virtual machine configures the first VCPU, which passes through to the first physical CPU, to the first application without VMM scheduling, which reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the first application, and improves the customer experience.
Some terms used in this application are explained below for the understanding of a person skilled in the art.
A physical CPU, as opposed to a VCPU, is a hardware computing resource in the physical server. In the embodiments of the present invention, a multi-core physical server that implements server virtualization contains multiple physical CPUs, and these include physical CPUs that can be passed through directly to a VCPU and physical CPUs that cannot. A passthrough-capable physical CPU is a hardware computing resource that the physical server assigns to applications with strict performance requirements; once a physical CPU can be passed through to a VCPU, it is no longer visible to the other virtual machines managed by the VMM in the physical server. A physical CPU that cannot be passed through to a VCPU is a hardware computing resource assigned to ordinary applications without strict performance requirements; when a virtual machine schedules computing resources for an ordinary application, the traditional scheduling mode has to be used, that is, the virtual machine combines its own kernel scheduling with the VMM scheduling of the physical server to schedule computing resources for the ordinary application.
A VCPU is a CPU in a virtual machine. When a virtual machine executes an application through a VCPU, the application actually still needs to be scheduled to run on the physical CPU bound to that VCPU.
A set application set is maintained in a virtual machine and contains multiple applications, all of which, in the embodiments of the present invention, are performance-critical. The scheduling priority of any one of the applications in the set application set is higher than the scheduling priority of an application that does not belong to the set. The scheduling priority indicates the order in which the virtual machine schedules computing resources for applications; in the embodiments of the present invention, the virtual machine schedules computing resources for the applications in the set application set first.
The guest operating system, that is, the virtual operating system, is the operating system of the virtual machine.
The host system (HostSystem) is the operating system of the physical server.
The virtual machine management system is deployed in the host system of the physical server and is used to manage and control the virtual machines in the physical server, for example to create, start, shut down, and delete virtual machines.
"Multiple" means two or more.
"And/or" describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean that A exists alone, that A and B exist at the same time, or that B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
In addition, it should be understood that in the description of this application, words such as "first" and "second" are used only to distinguish one item from another and cannot be understood as indicating or implying relative importance or order.
To describe the technical solutions of the embodiments of the present invention more clearly, a possible architecture of a physical server according to an embodiment of the present invention is described below with reference to FIG. 1.
As shown in the figure, the physical server includes multiple cores, that is, multiple physical CPUs, and these include physical CPUs that can be passed through directly to a VCPU (physical CPU0 and physical CPUq in the figure) and physical CPUs that cannot (physical CPU1, physical CPU2, and physical CPUp in the figure).
Because the physical server is virtualized, multiple VMs, namely VM1 and VM2 in the figure, can be deployed on it; to implement server virtualization, the physical server further includes a VMM, which implements hardware resource management and computing resource scheduling for the physical server.
In the physical server provided by this embodiment of the present invention, the way computing resources are scheduled when an application starts depends on the application's performance requirements. Each of the VMs can maintain a set application set whose applications are all performance-critical. When a VM starts an application, the VM checks whether the application is included in the set application set it maintains; if so, the VM's kernel scheduling directly configures a VCPU that passes through to a physical CPU to the application; otherwise, the traditional scheduling mode, that is, the combination of the VM's kernel scheduling and VMM scheduling, performs hardware computing resource scheduling for the application.
In this way, while still implementing server virtualization, the physical server ensures that when a virtual machine in the physical server starts an application and determines that it is an application with strict performance requirements, the virtual machine directly configures a VCPU that passes through to a physical CPU to the application, without VMM scheduling. This reduces the performance overhead introduced by server virtualization, guarantees the virtual machine's data processing efficiency for the application, and improves the customer experience.
Further, because all VMs in a traditional physical server schedule computing resources through the VMM, once the VMM is attacked, all the virtual machines in the physical server can be controlled, and the security of the virtual machines and of the applications in them is poor. In the embodiments of the present invention, the computing resource scheduling of some performance-critical applications in a VM is not implemented through the VMM, which also protects the security of these applications.
For example, as shown in the figure, when VM1 starts APP1, VM1 determines that APP1 is included in the set application set maintained by VM1, determines VCPU1, which can pass through to physical CPU0, and configures VCPU1 to APP1; the hardware computing resource allocated by VM1 to APP1 is therefore physical CPU0.
When VM1 starts APP2, VM1 determines that APP2 is not included in the set application set maintained by VM1. VM1 then uses the traditional scheduling mode: it first configures VCPU2, which cannot pass through to a physical CPU, to APP2 through kernel scheduling, and the VMM in the physical server then schedules VCPU2 onto physical CPU1; the hardware computing resource allocated by VM1 to APP2 is therefore physical CPU1.
下面将基于上面所述的本发明实施例涉及的共性方面,并结合附图,对本发明实施例进一步详细说明。
参阅图2所示,本发明实施例提供的一种计算资源调度方法,该方法可以应用于如图1所示的物理服务器中。该方法的处理流程包括:
步骤201:在虚拟机启动第一应用时,所述虚拟机判断在设定应用集合中是否包含所述第一应用,其中,所述设定应用集合中包括多个应用。
其中,所述虚拟机中维护有设定应用集合,所述设定应用集合中的多个应用为对性能要求严苛的应用。通过对图1所示的物理服务器描述可知,虚拟机中对计算资源的调度方式有两种,针对应用对性能要求的不同,该应用启动时,虚拟机使用的调度方式也不同,因此,在虚拟机启动所述第一应用时,需要判定所述第一应用是否为对性能要求严苛的应用,即需要判定在本地维护的设定应用集合中是否包含所述第一应用。
可选的,所述虚拟机中还维护有应用管理模块,该应用管理模块可以对用户提供应用注册接口,用户可以通过该注册接口将一些应用注册到所述设定应用集合中,以保证这些应用的性能要求。
可选的,在所述虚拟机启动所述第一应用之前,所述方法还包括:
所述虚拟机确定所述虚拟机所属的物理服务器中的所述第一物理CPU;
所述虚拟机将所述第一物理CPU与所述第一VCPU绑定。
其中,所述物理服务器中的物理CPU是否能够与VCPU直通,为所述物理服务器设置的,可选的,所述物理服务器的主系统中包含物理CPU资源管理模块,用于将所述物理服务器中VMM所管理的多个物理CPU划分为上述两种类型。
所述物理服务器中的虚拟机管理系统在创建所述虚拟机时,会通过所述物理CPU资源管理模块的接口,确定该物理服务器中设置为能够与VCPU直通的所述第一物理CPU,所述虚拟机将确定的所述第一物理CPU与该虚拟机中的VCPU进行绑定,其中,所述第一物理CPU可以绑定该虚拟机中的至少一个VCPU(包含所述第一VCPU),同时所述第一物理CPU还可以绑定一个或多个虚拟机中的VCPU,本发明对此不做限定。
通过上述方法,所述虚拟机将所述第一物理CPU与所述虚拟机中的第一VCPU绑定,这样,当所述虚拟机将所述第一VCPU配置给所述第一应用后,所述虚拟机可以直接确定所述第一应用的硬件计算资源为所述第一物理CPU,所述虚拟机可以不通过所述物理机服务器中的VMM进行硬件计算资源调度,而是通过所述虚拟机的客户操作系统的内核调度,实现硬件计算资源调度。
可选的,所述物理服务器中的虚拟机管理系统在创建所述虚拟机时,可以对该虚拟机添加一个非虚拟化标签,用于指示该虚拟机可以实现不通过VMM进行计算资源调度。
可选的,所述虚拟机可以通过调度优先级,确定对同时启动的多个应用进行计算资源调度的顺序。在这种情况下,所述设定应用集合中包含的所述多个应用中任一个应用的调度优先级高于不属于所述设定集合的应用的调度优先级。通过这种方法,所述虚拟机可以确保在同时启动多个应用时,优先为所述设定应用集合中的应用进行计算资源调度,保证所述设定应用集合中的应用的性能要求。
可选的,应用的调度优先级可以是用户针对该应用设置的,也可以是所述虚拟机通过该应用对应的服务质量(Quality of Service,QoS)参数确定的,对此本发明不做限定。
Step 202: When the virtual machine determines that the designated application set contains the first application, the virtual machine assigns a first virtual central processing unit (VCPU) to the first application, where the first VCPU is passed through to a first physical CPU; when the virtual machine determines that the designated application set does not contain the first application, the virtual machine assigns a second VCPU to the first application and schedules the second VCPU onto a second physical CPU through the VMM in the physical server to which the virtual machine belongs, where the second VCPU is not passed through to the second physical CPU.
The first physical CPU and the second physical CPU are both physical CPUs in the physical server to which the virtual machine belongs.
As discussed above, the way the virtual machine schedules computing resources when an application starts differs according to the application's performance requirements. The applications in the designated application set maintained by the virtual machine all have stringent performance requirements. Therefore, when the virtual machine starts the first application, it checks whether the first application is contained in the designated application set; if so, it directly assigns, through the virtual machine's kernel scheduling, a VCPU that is bound to a passthrough physical CPU to the application; otherwise, it schedules hardware computing resources for the first application in the conventional way, that is, by combining the virtual machine's kernel scheduling with VMM scheduling in a two-level scheduling process.
Optionally, before the virtual machine starts the first application, the method further includes: the virtual machine adding the first VCPU to a non-virtualized scheduling list. Optionally, this operation may be performed when the virtual machine starts.
It follows from the above that after the virtual machine starts, it contains two scheduling lists: one is the conventional ordinary scheduling list used by the virtual machine by default, and the other is the non-virtualized scheduling list. The ordinary scheduling list contains the VCPUs in the virtual machine that are not passed through to physical CPUs, while the non-virtualized scheduling list contains the VCPUs in the virtual machine that are passed through to physical CPUs. When the virtual machine starts an application and determines that it is an ordinary application, the virtual machine assigns a VCPU from the ordinary scheduling list to it; when the virtual machine starts an application with stringent performance requirements, it assigns a VCPU contained in the non-virtualized scheduling list to the first application.
It follows from the above discussion that the virtual machine assigning the first VCPU to the first application includes:
the virtual machine selecting the first VCPU from the non-virtualized scheduling list and assigning the first VCPU to the first application.
Optionally, when the non-virtualized scheduling list contains multiple VCPUs, the virtual machine may perform load balancing over the multiple VCPUs and select the first VCPU from among them.
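The two scheduling lists and the load-balanced selection can be modeled by the following sketch; the per-VCPU load counter and the least-loaded selection rule are assumptions introduced only for illustration:

```python
# Assumed model: two scheduling lists kept by the guest, plus a simple
# least-loaded selection rule for the non-virtualized list.

ordinary_list = {"VCPU2": 3, "VCPU3": 1}          # VCPU -> current load (assumed metric)
non_virtualized_list = {"VCPU1": 2, "VCPU4": 0}   # passthrough VCPUs only

def pick_vcpu(app_is_critical: bool) -> str:
    pool = non_virtualized_list if app_is_critical else ordinary_list
    # Load balancing: choose the VCPU with the lowest current load.
    vcpu = min(pool, key=pool.get)
    pool[vcpu] += 1
    return vcpu

print(pick_vcpu(app_is_critical=True))    # -> VCPU4
print(pick_vcpu(app_is_critical=False))   # -> VCPU3
```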
Optionally, before the virtual machine assigns the first VCPU to the first application, the method further includes:
the kernel of the virtual machine creating a kernel thread corresponding to the first application, where the kernel thread is executed by the first VCPU when the first VCPU carries out the services of the first application.
The kernel of the virtual machine creates this kernel thread so that, after the virtual machine subsequently assigns the first VCPU to the first application, the first VCPU can execute the services of the first application. This ensures that the first physical CPU passed through to the first VCPU has a corresponding kernel thread to maintain the running environment of the first application, and guarantees a running environment in which the first application does not run through the virtualization layer.
Optionally, after the virtual machine assigns the first VCPU to the first application, the method further includes:
when an exception switch (for example, a page fault or an interrupt) occurs while the first VCPU is executing the kernel thread, the kernel of the virtual machine executing the kernel thread until the exception switch has been handled, after which execution is handed back to the first VCPU and continues there.
With the above method, when an exception switch occurs while the first physical CPU is running the first application, the kernel of the virtual machine can bypass the VMM and handle it directly, guaranteeing the security of the virtual machine when running the first application.
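A minimal sketch of this hand-off is given below purely as a control-flow model; the event names and the run_on_vcpu and guest_kernel_handle functions are assumptions, not the actual guest-kernel implementation:

```python
# Assumed control-flow model: the guest kernel, not the VMM, handles exception
# switches raised while the passthrough VCPU runs the application's kernel thread.

def guest_kernel_handle(event: str) -> None:
    # Stand-in for page-fault / interrupt handling inside the guest kernel.
    print(f"guest kernel handled {event}, VMM bypassed")

def run_on_vcpu(thread_events: list) -> None:
    for event in thread_events:
        if event in ("page_fault", "interrupt"):
            guest_kernel_handle(event)      # the kernel takes over ...
            # ... and execution returns to the first VCPU afterwards.
        else:
            print(f"VCPU1 executes {event}")

run_on_vcpu(["compute", "page_fault", "compute", "interrupt", "compute"])
```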
In the method provided by this embodiment of the present invention, the method further includes:
the virtual machine unbinding the first VCPU from the first physical CPU, that is, the virtual machine restoring the first physical CPU to a physical CPU that is not passed through to a VCPU. The virtual machine can dynamically switch and migrate physical CPUs between the two types (passed through to a VCPU and not passed through to a VCPU), guaranteeing the flexibility of computing-resource scheduling in the physical server.
With the computing resource scheduling method in the above embodiments of the present invention, the physical CPUs in the physical server include physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. After the virtual machine starts the first application, the virtual machine determines whether the designated application set contains the first application; if it does, the virtual machine directly assigns the first VCPU, which is passed through to the first physical CPU, to the first application; if it does not, the virtual machine schedules hardware computing resources for the first application through the VMM. In this way, the virtual machine can keep applications with stringent performance requirements in the designated application set. When the virtual machine starts an application and determines that the application exists in the designated application set (i.e., that it is an application with stringent performance requirements), it can schedule computing resources without going through the VMM in the physical server, that is, the virtual machine assigns the first VCPU, which is passed through to the first physical CPU, to the first application without VMM scheduling. This reduces the performance overhead introduced by server virtualization technology, guarantees the virtual machine's data-processing efficiency for the first application, and improves the customer experience.
Based on the above embodiments, the present invention further provides a computing resource scheduling apparatus. Referring to FIG. 3, the computing resource scheduling apparatus 300 includes a judging unit 301 and a processing unit 302, where
the judging unit 301 is configured to, when the apparatus 300 starts a first application, determine whether a designated application set contains the first application, where the designated application set includes multiple applications;
the processing unit 302 is configured to, when the judging unit 301 determines that the designated application set contains the first application, assign a first virtual central processing unit (VCPU) to the first application, where the first VCPU is passed through to a first physical CPU; and
when the judging unit 301 determines that the designated application set does not contain the first application, assign a second VCPU to the first application and schedule the second VCPU onto a second physical CPU through the virtual machine monitor (VMM) in the physical server where the apparatus 300 resides, where the second VCPU is not passed through to the second physical CPU;
where the first physical CPU and the second physical CPU are both physical CPUs in the physical server.
Optionally, the processing unit 302 is further configured to:
before the apparatus 300 starts the first application, determine the first physical CPU in the physical server; and
bind the first physical CPU to the first VCPU.
Optionally, the scheduling priority of any application in the designated application set is higher than that of applications not belonging to the designated application set.
Optionally, the processing unit 302 is further configured to:
before the apparatus 300 starts the first application, add the first VCPU to a non-virtualized scheduling list;
when assigning the first VCPU to the first application, the processing unit 302 is specifically configured to:
select the first VCPU from the non-virtualized scheduling list and assign the first VCPU to the first application.
Optionally, the processing unit 302 is further configured to:
before assigning the first VCPU to the first application, create a kernel thread corresponding to the first application, where the kernel thread is executed by the first VCPU when the first VCPU carries out the services of the first application.
Optionally, the processing unit 302 is further configured to:
after assigning the first VCPU to the first application, when an exception switch occurs while the first VCPU is executing the kernel thread, execute the kernel thread until the exception switch has been handled.
Optionally, the apparatus 300 further includes a management unit configured to:
unbind the first VCPU from the first physical CPU.
With the computing resource scheduling apparatus provided by this embodiment of the present invention, a virtual machine runs in the apparatus, and the physical CPUs in the physical server where the apparatus resides include physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. After the apparatus starts the first application, the apparatus determines whether the designated application set contains the first application; if it does, the apparatus directly assigns the first VCPU, which is passed through to the first physical CPU, to the first application; if it does not, the apparatus schedules hardware computing resources for the first application through the VMM. In this way, the apparatus can keep applications with stringent performance requirements in the designated application set. When the apparatus starts an application and determines that the application exists in the designated application set (i.e., that it is an application with stringent performance requirements), it can schedule computing resources without going through the VMM in the physical server, that is, the apparatus assigns the first VCPU, which is passed through to the first physical CPU, to the first application without VMM scheduling. This reduces the performance overhead introduced by server virtualization technology, guarantees the virtual machine's data-processing efficiency for the first application, and improves the customer experience.
Based on the above embodiments, the present invention further provides a physical server. As shown in FIG. 4, the physical server includes a VM, a VMM, and the host system of the physical server, where
the VMM is used to manage the hardware resources of the physical server and to schedule computing resources;
the host system contains a physical CPU resource management module, which is used to set the physical CPUs in the physical server as either physical CPUs that can be passed through to a VCPU or physical CPUs that cannot, and to manage these two types of physical CPUs; for example, it can convert a physical CPU that can be passed through to a VCPU into one that cannot, or convert a physical CPU that cannot be passed through to a VCPU into one that can.
It should be noted that in practical applications the physical CPU resource management module is deployed differently depending on the actual running platform; this embodiment of the present invention merely provides one possible example. For example, on a Kernel-based Virtual Machine (KVM) virtualization platform, the physical CPU resource management module can be deployed in the host system. As another example, on the Xen virtualization platform or the VMware ESXi virtualization platform, the VMM sits between the host system and the hardware and is responsible for managing the hardware resources; therefore, on these two platforms, the physical CPU resource management module is split into a physical CPU virtualization management module and a physical CPU resource management interface, deployed in the VMM and in the host system respectively. The physical CPU virtualization management module deployed in the VMM is used to isolate the two types of physical CPUs from each other, ensuring that the VCPUs of a virtual machine do not run on physical CPUs that can be passed through to a VCPU; the physical CPU resource management interface deployed in the host system is used to manage the two types of physical CPUs.
When creating a VM, the virtual machine management system determines, through the interface of the physical CPU resource management module, the physical CPUs in the physical server that can be passed through to a VCPU, and the VM then binds the determined physical CPUs to VCPUs in the VM.
The VM further contains a local resource adaptation apparatus, which is used to schedule computing resources for the key applications (applications with stringent performance requirements) in the VM, that is, to schedule the key applications to run on physical CPUs that can be passed through to a VCPU. The local resource adaptation apparatus contains an application management module, as well as a local CPU control micro-kernel, a kernel scheduling adaptation module, and a hybrid resource driver in the kernel of the guest operating system, where
the application management module is used to provide users with an application registration interface through which a user can register an application into the designated application set maintained by the VM, thereby guaranteeing the performance requirements of those applications;
the local CPU control micro-kernel is used to create the kernel threads of key applications and to handle the exception switches that occur while the first VCPU is executing those kernel threads, guaranteeing the running environment and running security of the key applications;
the kernel scheduling adaptation module is used to schedule key applications to run on physical CPUs that can be passed through to a VCPU, that is, to assign the VCPUs passed through to those physical CPUs to the key applications;
the hybrid resource driver determines the two types of physical CPUs by calling the interface of the physical CPU resource management module and presents the list of each type of physical CPU to the VM.
In the physical server, referring to FIG. 5, the flow in which the VM schedules computing resources for a key application includes the following steps.
Step 501: The VM starts, and the kernel of the VM's guest operating system loads the local resource adaptation apparatus.
Step 502: The hybrid resource driver calls the physical CPU resource management module to determine the first physical CPU, which can be passed through to a VCPU, and to determine the first VCPU in the VM that is bound to the first physical CPU.
Step 503: The VM determines that it has the non-virtualization label and loads the local CPU control micro-kernel and the kernel scheduling adaptation module.
Step 504: The local CPU control micro-kernel sets the running mode of the first physical CPU to the non-virtualized mode, that is, it turns off the hardware virtualization support of the first physical CPU, and starts a group of kernel threads used to serve the first physical CPU.
Step 505: The kernel scheduling adaptation module removes the VCPUs bound to the first physical CPU from the kernel's default ordinary scheduling list and creates a non-virtualized scheduling list, which contains all the VCPUs in the VM that are bound to the first physical CPU (including the first VCPU).
Step 506: After the VM starts an application, the VM determines that the application is a key application.
The VM can judge whether the application is a key application by determining whether the locally maintained designated application set contains the application.
Step 507: The local CPU control micro-kernel creates the kernel thread and the resources corresponding to the key application.
Step 508: The kernel scheduling adaptation module selects a target VCPU (the first VCPU) from the non-virtualized scheduling list and assigns the selected first VCPU to the key application.
Step 509: The kernel thread corresponding to the key application is executed on the first physical CPU bound to the first VCPU.
Step 510: When an exception switch occurs while the first VCPU is executing the kernel thread, the local CPU control micro-kernel executes the kernel thread until the exception switch has been handled; after the exception switch is handled, the right to run the kernel thread is switched back to the first VCPU.
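Steps 501 to 510 can be summarized by the hedged sketch below, which compresses the flow into a VM-boot phase and a key-application-launch phase; every name in it is an assumption introduced only for illustration:

```python
# Assumed two-phase model of the flow in steps 501-510.

PASSTHROUGH_PCPUS = {"CPU0"}          # determined via the physical CPU resource manager
NON_VIRT_LIST = []                    # non-virtualized scheduling list (step 505)
ORDINARY_LIST = ["VCPU1", "VCPU2"]    # kernel's default scheduling list

def vm_boot(bindings: dict) -> None:
    # Steps 501-505: find passthrough bindings and split the scheduling lists.
    for vcpu, pcpu in bindings.items():
        if pcpu in PASSTHROUGH_PCPUS:
            ORDINARY_LIST.remove(vcpu)
            NON_VIRT_LIST.append(vcpu)

def launch_key_app(app: str) -> str:
    # Steps 506-509: create the app's kernel thread (omitted here) and pick a
    # VCPU from the non-virtualized list so the app runs on the bound physical CPU.
    vcpu = NON_VIRT_LIST[0]
    return f"{app} runs on {vcpu} (pinned to a passthrough physical CPU)"

vm_boot({"VCPU1": "CPU0", "VCPU2": "CPU1"})
print(NON_VIRT_LIST)                  # ['VCPU1']
print(launch_key_app("key-app"))
```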
With the computing resource scheduling method in the above embodiments of the present invention, after a virtual machine starts a key application, the virtual machine can schedule computing resources without going through the VMM in the physical server and instead assigns a VCPU that is passed through to a physical CPU to the first application. This reduces the performance overhead introduced by server virtualization technology, guarantees the virtual machine's data-processing efficiency for the first application, and improves the customer experience.
It should be noted that the division into units in the embodiments of the present invention is illustrative and is merely a division by logical function; other divisions are possible in actual implementation. The functional units in the embodiments of this application may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Based on the above embodiments, an embodiment of the present invention further provides a physical server. Referring to FIG. 6, the physical server 600 includes a multi-core processor 601, a communication bus 602, and a memory 603, where
the multi-core processor 601 and the memory 603 are connected to each other through the communication bus 602. The communication bus 602 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 6, but this does not mean that there is only one bus or only one type of bus.
The physical server uses server virtualization technology; therefore, the physical server 600 further contains a VMM 604 and at least one virtual machine, where each virtual machine contains multiple VCPUs, and each virtual machine maintains a designated application set that contains multiple applications.
The multi-core processor 601 includes multiple physical CPUs, and the multiple physical CPUs can be marked as one of two types: physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. At least one physical CPU in the multi-core processor 601 is used to implement the computing resource scheduling method shown in FIG. 2 of the embodiments of the present invention, including:
when one of the at least one virtual machine starts a first application, determining whether the designated application set maintained for that virtual machine contains the first application;
when it is determined that the designated application set contains the first application, assigning a first VCPU among the multiple VCPUs in the virtual machine to the first application, where the first VCPU is passed through to a first physical CPU among the multiple physical CPUs;
when it is determined that the designated application set does not contain the first application, assigning a second VCPU among the multiple VCPUs in the virtual machine to the first application, where the second VCPU is not passed through to a second physical CPU, and controlling the VMM 604 to schedule the second VCPU onto the second physical CPU among the multiple physical CPUs.
Optionally, the physical server 600 further includes a communication interface 605, which is used to communicate and interact with other devices connected to the physical server 600.
The memory 603 is used to store programs and the like. Specifically, a program may include program code, and the program code includes computer operation instructions. The memory 603 may contain a random access memory (RAM) and may also include a non-volatile memory, for example, at least one disk memory. The at least one physical CPU in the multi-core processor 601 executes the application program stored in the memory 603 to implement the above functions, thereby implementing the computing resource scheduling method shown in FIG. 2.
With the physical server provided by this embodiment of the present invention, the physical CPUs in the physical server include physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. After a virtual machine in the physical server starts a first application, the physical server determines whether the designated application set stored for that virtual machine contains the first application; if it does, the physical server directly assigns the first VCPU in the virtual machine, which is passed through to a physical CPU, to the first application; if it does not, the physical server schedules hardware computing resources for the first application through the VMM. In this way, the physical server can keep applications with stringent performance requirements in the designated application set corresponding to each virtual machine. When a virtual machine starts an application and it is determined that the designated application set corresponding to that virtual machine contains the application (i.e., that it is an application with stringent performance requirements), computing resources can be scheduled without going through the VMM in the physical server, that is, the physical server assigns the first VCPU, which is passed through to a physical CPU, to the first application without VMM scheduling. This reduces the performance overhead introduced by server virtualization technology, guarantees the virtual machine's data-processing efficiency for the first application, and improves the customer experience.
In summary, the embodiments of the present invention provide a computing resource scheduling method and apparatus. In the method, the physical CPUs in a physical server include physical CPUs that can be passed through to a VCPU and physical CPUs that cannot. After a virtual machine starts a first application, the virtual machine determines whether a designated application set contains the first application; if it does, the virtual machine directly assigns a first VCPU, which is passed through to a first physical CPU, to the first application; if it does not, the virtual machine schedules hardware computing resources for the first application through the VMM. In this way, the virtual machine can keep applications with stringent performance requirements in the designated application set. When the virtual machine starts an application and determines that the application exists in the designated application set (i.e., that it is an application with stringent performance requirements), it can schedule computing resources without going through the VMM in the physical server, that is, the virtual machine assigns the first VCPU, which is passed through to the first physical CPU, to the first application without VMM scheduling. This reduces the performance overhead introduced by server virtualization technology, guarantees the virtual machine's data-processing efficiency for the first application, and improves the customer experience.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, and the like) containing computer-usable program code.
The present invention is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present invention. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. If these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these changes and variations.

Claims (15)

  1. A computing resource scheduling method, comprising:
    when a virtual machine starts a first application, determining, by the virtual machine, whether a designated application set contains the first application, wherein the designated application set contains multiple applications;
    when the virtual machine determines that the designated application set contains the first application, assigning, by the virtual machine, a first virtual central processing unit (VCPU) to the first application, wherein the first VCPU is passed through to a first physical central processing unit (CPU);
    when the virtual machine determines that the designated application set does not contain the first application, assigning, by the virtual machine, a second VCPU to the first application, and scheduling the second VCPU onto a second physical CPU through a virtual machine monitor (VMM) in a physical server to which the virtual machine belongs, wherein the second VCPU is not passed through to the second physical CPU;
    wherein the first physical CPU and the second physical CPU are both physical CPUs in the physical server.
  2. The method according to claim 1, wherein before the virtual machine starts the first application, the method further comprises:
    determining, by the virtual machine, the first physical CPU in the physical server; and
    binding, by the virtual machine, the first physical CPU to the first VCPU.
  3. The method according to claim 1 or 2, wherein a scheduling priority of any application among the multiple applications contained in the designated application set is higher than a scheduling priority of an application not belonging to the designated application set.
  4. The method according to any one of claims 1 to 3, wherein before the virtual machine starts the first application, the method further comprises: adding, by the virtual machine, the first VCPU to a non-virtualized scheduling list;
    the assigning, by the virtual machine, the first VCPU to the first application comprises:
    selecting, by the virtual machine, the first VCPU from the non-virtualized scheduling list, and assigning the first VCPU to the first application.
  5. The method according to any one of claims 1 to 4, wherein before the virtual machine assigns the first VCPU to the first application, the method further comprises:
    creating, by a kernel of the virtual machine, a kernel thread corresponding to the first application, wherein the kernel thread is executed by the first VCPU when the first VCPU carries out services of the first application.
  6. The method according to claim 5, wherein after the virtual machine assigns the first VCPU to the first application, the method further comprises:
    when an exception switch occurs while the first VCPU is executing the kernel thread, executing, by the kernel of the virtual machine, the kernel thread until the exception switch has been handled.
  7. The method according to claim 2, further comprising:
    unbinding, by the virtual machine, the first VCPU from the first physical CPU.
  8. A computing resource scheduling apparatus, comprising:
    a judging unit, configured to, when the apparatus starts a first application, determine whether a designated application set contains the first application, wherein the designated application set comprises multiple applications;
    a processing unit, configured to, when the judging unit determines that the designated application set contains the first application, assign a first virtual central processing unit (VCPU) to the first application, wherein the first VCPU is passed through to a first physical central processing unit (CPU); and
    when the judging unit determines that the designated application set does not contain the first application, assign a second VCPU to the first application and schedule the second VCPU onto a second physical CPU through a virtual machine monitor (VMM) in a physical server where the apparatus resides, wherein the second VCPU is not passed through to the second physical CPU;
    wherein the first physical CPU and the second physical CPU are both physical CPUs in the physical server.
  9. The apparatus according to claim 8, wherein the processing unit is further configured to:
    before the apparatus starts the first application, determine the first physical CPU in the physical server; and
    bind the first physical CPU to the first VCPU.
  10. The apparatus according to claim 8 or 9, wherein a scheduling priority of any application among the multiple applications contained in the designated application set is higher than a scheduling priority of an application not belonging to the designated application set.
  11. The apparatus according to any one of claims 8 to 10, wherein the processing unit is further configured to:
    before the apparatus starts the first application, add the first VCPU to a non-virtualized scheduling list;
    when assigning the first VCPU to the first application, the processing unit is specifically configured to:
    select the first VCPU from the non-virtualized scheduling list and assign the first VCPU to the first application.
  12. The apparatus according to any one of claims 8 to 11, wherein the processing unit is further configured to:
    before assigning the first VCPU to the first application, create a kernel thread corresponding to the first application, wherein the kernel thread is executed by the first VCPU when the first VCPU carries out services of the first application.
  13. The apparatus according to claim 12, wherein the processing unit is further configured to:
    after assigning the first VCPU to the first application, when an exception switch occurs while the first VCPU is executing the kernel thread, execute the kernel thread until the exception switch has been handled.
  14. The apparatus according to claim 9, further comprising a management unit configured to:
    unbind the first VCPU from the first physical CPU.
  15. A physical server, comprising multiple physical central processing units (CPUs), a virtual machine monitor (VMM), and at least one virtual machine, wherein each virtual machine contains multiple virtual central processing units (VCPUs), each virtual machine maintains a designated application set, and the designated application set comprises multiple applications, wherein
    when one of the at least one virtual machine starts a first application, the virtual machine determines whether the designated application set it maintains contains the first application; when the virtual machine determines that the designated application set contains the first application, the virtual machine assigns a first VCPU among the multiple VCPUs in the virtual machine to the first application, wherein the first VCPU is passed through to a first physical CPU among the multiple physical CPUs; when the virtual machine determines that the designated application set does not contain the first application, the virtual machine assigns a second VCPU among the multiple VCPUs in the virtual machine to the first application, wherein the second VCPU is not passed through to a second physical CPU; and the VMM schedules the second VCPU onto the second physical CPU among the multiple physical CPUs.
PCT/CN2017/095879 2016-08-31 2017-08-03 Computing resource scheduling method and apparatus WO2018040845A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610793375.7 2016-08-31
CN201610793375.7A CN106383747A (zh) 2016-08-31 2016-08-31 Computing resource scheduling method and apparatus

Publications (1)

Publication Number Publication Date
WO2018040845A1 true WO2018040845A1 (zh) 2018-03-08

Family

ID=57938150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/095879 WO2018040845A1 (zh) 2016-08-31 2017-08-03 一种计算资源调度方法及装置

Country Status (2)

Country Link
CN (1) CN106383747A (zh)
WO (1) WO2018040845A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106383747A (zh) * 2016-08-31 2017-02-08 华为技术有限公司 一种计算资源调度方法及装置
CN108459906B (zh) * 2017-02-20 2021-06-29 华为技术有限公司 一种vcpu线程的调度方法及装置
CN107273188B (zh) * 2017-07-19 2020-08-18 苏州浪潮智能科技有限公司 一种虚拟机中央处理单元cpu绑定方法及装置
CN107479945B (zh) * 2017-08-15 2021-06-22 爱普(福建)科技有限公司 一种虚拟机资源调度方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101169731A (zh) * 2007-12-05 2008-04-30 华为技术有限公司 多路多核服务器及其cpu的虚拟化处理方法
CN101354663A (zh) * 2007-07-25 2009-01-28 联想(北京)有限公司 应用于虚拟机系统的真实cpu资源的调度方法及调度装置
US20090150896A1 (en) * 2007-12-05 2009-06-11 Yuji Tsushima Power control method for virtual machine and virtual computer system
CN103617071A (zh) * 2013-12-02 2014-03-05 北京华胜天成科技股份有限公司 一种资源独占及排它的提升虚拟机计算能力的方法及装置
CN103699428A (zh) * 2013-12-20 2014-04-02 华为技术有限公司 一种虚拟网卡中断亲和性绑定的方法和计算机设备
CN106383747A (zh) * 2016-08-31 2017-02-08 华为技术有限公司 一种计算资源调度方法及装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470635B (zh) * 2007-12-24 2012-01-25 联想(北京)有限公司 一种多虚拟处理器同步调度的方法及计算机
US8739179B2 (en) * 2008-06-30 2014-05-27 Oracle America Inc. Method and system for low-overhead data transfer
CN105589751B (zh) * 2015-11-27 2019-03-15 新华三技术有限公司 一种物理资源调度方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354663A (zh) * 2007-07-25 2009-01-28 联想(北京)有限公司 应用于虚拟机系统的真实cpu资源的调度方法及调度装置
CN101169731A (zh) * 2007-12-05 2008-04-30 华为技术有限公司 多路多核服务器及其cpu的虚拟化处理方法
US20090150896A1 (en) * 2007-12-05 2009-06-11 Yuji Tsushima Power control method for virtual machine and virtual computer system
CN103617071A (zh) * 2013-12-02 2014-03-05 北京华胜天成科技股份有限公司 一种资源独占及排它的提升虚拟机计算能力的方法及装置
CN103699428A (zh) * 2013-12-20 2014-04-02 华为技术有限公司 一种虚拟网卡中断亲和性绑定的方法和计算机设备
CN106383747A (zh) * 2016-08-31 2017-02-08 华为技术有限公司 一种计算资源调度方法及装置

Also Published As

Publication number Publication date
CN106383747A (zh) 2017-02-08

Similar Documents

Publication Publication Date Title
JP6646114B2 (ja) 動的仮想マシンサイジング
EP3039540B1 (en) Virtual machine monitor configured to support latency sensitive virtual machines
Suzuki et al. {GPUvm}: Why Not Virtualizing {GPUs} at the Hypervisor?
EP3073373B1 (en) Method for interruption affinity binding of virtual network interface card, and computer device
EP2519877B1 (en) Hypervisor-based isolation of processor cores
CN108037994B (zh) 一种支持异构环境下多核并行处理的调度机制
US9697029B2 (en) Guest idle based VM request completion processing
CN104598294B (zh) 用于移动设备的高效安全的虚拟化方法及其设备
US9959134B2 (en) Request processing using VM functions
WO2018040845A1 (zh) 一种计算资源调度方法及装置
KR101847518B1 (ko) 가상화 플랫폼에 의한 중단을 처리하는 및 관련 장치
JP2008527506A (ja) Os隔離シーケンサー上のユーザーレベルのマルチスレッド化をエミュレートする機構
KR20160033517A (ko) 인터럽트 컨트롤러를 위한 하이브리드 가상화 방법
CN103744716A (zh) 一种基于当前vcpu调度状态的动态中断均衡映射方法
Yu et al. Real-time enhancement for Xen hypervisor
US9600314B2 (en) Scheduler limited virtual device polling
CN110447012B (zh) 协作虚拟处理器调度
US20220156103A1 (en) Securing virtual machines in computer systems
CN103473135A (zh) 虚拟化环境下自旋锁lhp现象的处理方法
US11169837B2 (en) Fast thread execution transition
US10387178B2 (en) Idle based latency reduction for coalesced interrupts
US9766917B2 (en) Limited virtual device polling based on virtual CPU pre-emption
Gao et al. Building a virtual machine-based network storage system for transparent computing
Fornaeus Device hypervisors
Hamayun et al. Towards hard real-time control and infotainment applications in automotive platforms

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17845147

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17845147

Country of ref document: EP

Kind code of ref document: A1