WO2018112855A1 - Virtualisation method and device, electronic device, and computer program product - Google Patents

Virtualisation method and device, electronic device, and computer program product

Info

Publication number
WO2018112855A1
Authority
WO
WIPO (PCT)
Prior art keywords
thread
operating system
virtual cpu
end thread
cpu
Prior art date
Application number
PCT/CN2016/111590
Other languages
French (fr)
Chinese (zh)
Inventor
温燕飞
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2016/111590 priority Critical patent/WO2018112855A1/en
Priority to CN201680002851.7A priority patent/CN106796530B/en
Publication of WO2018112855A1 publication Critical patent/WO2018112855A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects

Definitions

  • The present application relates to computer technology, and in particular to a virtualization method, apparatus, electronic device, and computer program product.
  • A virtualization architecture based on Qemu/KVM (Kernel-based Virtual Machine) technology is shown in FIG. 1.
  • The virtualization architecture based on Qemu/KVM technology consists of one primary Host operating system and several virtualized Guest operating systems.
  • The Host operating system includes multiple Host user-space programs and the Host Linux kernel.
  • Each Guest operating system includes its own user space, a Guest Linux kernel, and the emulator Qemu. These operating systems run on the same set of hardware processor chips, sharing the processor and peripheral resources.
  • An ARM processor supporting this virtualization architecture provides at least the EL2, EL1, and EL0 modes: the virtual machine manager (Hypervisor) runs in EL2 mode, the Linux kernel programs run in EL1 mode, and the user-space programs run in EL0 mode.
  • The Hypervisor layer manages hardware resources such as the CPU, memory, timers, and interrupts, and, through the virtualized CPU, memory, timer, and interrupt resources, can load different operating systems onto the physical processor in a time-shared manner, thereby implementing system virtualization.
  • KVM/Hypervisor spans the Host Linux kernel and the Hypervisor layers. On one hand, it provides a driver node for Qemu, allowing Qemu to create virtual CPUs through the KVM node and to manage virtualized resources; on the other hand, KVM/Hypervisor can also switch the Host Linux system off the physical CPU, load the Guest Linux system onto the physical processor to run, and handle the follow-up work when the Guest Linux system exits abnormally.
  • Qemu runs as an application on Host Linux and provides virtual hardware device resources for running Guest Linux; through the KVM device node of the KVM/Hypervisor module, it creates virtual CPUs, allocates physical hardware resources, and loads an unmodified Guest Linux onto the physical hardware to run.
  • In prior-art virtualization solutions, when a user operation is received, the front-end thread in the Guest operating system usually calls the corresponding API (Application Program Interface) and generates a corresponding processing instruction; the processing instruction is then passed to the back-end thread, corresponding to that front-end thread, in the Backend Server running in the Host operating system, the back-end thread drives the corresponding hardware device or module to perform the operation, and the execution result is returned as the response to the application interface call instruction.
  • With this approach, the response path for an API call instruction is relatively long, so responding to a user operation takes a long time and degrades the user experience.
  • The embodiments of the present application provide a virtualization method, apparatus, electronic device, and computer program product, mainly intended to shorten the response path for API call instructions in a virtualization system.
  • According to a first aspect, a virtualization method is provided, applied to a multi-core processor, including: binding a front-end thread and a back-end thread to the same physical central processing unit (CPU), wherein the front-end thread is configured to determine, according to an application interface call instruction received at a first operating system, a processing instruction corresponding to the application interface call instruction and send the processing instruction to the corresponding back-end thread at a second operating system; and the back-end thread is configured to receive and execute the processing instruction at the second operating system and return the processing result as the response to the application interface call instruction, or return it to the front-end thread.
  • According to a second aspect, a virtualization apparatus is provided, applied to a multi-core processor, including a binding module configured to bind a front-end thread and a back-end thread to the same physical central processing unit (CPU), wherein the front-end thread is configured to determine, according to an application interface call instruction received at the first operating system, a processing instruction corresponding to the application interface call instruction and send the processing instruction to the corresponding back-end thread at the second operating system; and the back-end thread is configured to receive and execute the processing instruction at the second operating system and return the processing result as the response to the application interface call instruction, or return it to the front-end thread.
  • According to a third aspect, an electronic device is provided, comprising: a display, a memory, one or more processors, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the virtualization method of the first aspect of the embodiments of the present application.
  • According to a fourth aspect, a computer program product is provided, which encodes instructions for executing a process, the process comprising the virtualization method of the first aspect of the embodiments of the present application.
  • The virtualization method, apparatus, electronic device, and computer program product of the embodiments of the present application bind a front-end thread and its corresponding back-end thread to the same physical CPU, so that during virtualization both the switching between the Guest operating system and the Host operating system and the switching between the front-end thread and the back-end thread take place on the same physical CPU, thereby shortening the switching time, reducing the response time to application interface call instructions, and improving the user experience.
  • FIG. 1 is a schematic diagram of a virtualization architecture based on Qemu/KVM technology;
  • FIG. 2 illustrates a system architecture for implementing a virtualization method in an embodiment of the present application
  • FIG. 3 is a flowchart of a virtualization method according to Embodiment 1 of the present application.
  • FIG. 4 is a flowchart of a virtualization method according to Embodiment 2 of the present application.
  • FIG. 5 is a flowchart of a virtualization method according to Embodiment 3 of the present application.
  • FIG. 6 is a schematic structural diagram of a virtualization device according to Embodiment 4 of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to Embodiment 5 of the present application.
  • In the process of implementing the present application, the inventor found that in prior-art virtualization solutions, when a thread is created on a multi-core processor, the Linux kernel schedules the virtual CPU on which that thread runs according to a preset policy, for example a load-balancing or round-robin policy; therefore, which virtual CPU a given thread runs on is indeterminate. Likewise, when Qemu creates a virtual CPU, the Linux kernel also schedules the physical CPU on which that virtual CPU runs according to a preset policy, so the correspondence between virtual CPUs and physical CPUs is also indeterminate. As a result, which physical CPU a given thread in the system runs on is indeterminate.
  • To address this, the embodiments of the present application provide a virtualization method, apparatus, system, electronic device, and computer program product that bind a front-end thread and its corresponding back-end thread to the same physical CPU, so that during virtualization both the switching between the Guest operating system and the Host operating system and the switching between the front-end thread and the back-end thread take place on the same physical CPU, thereby shortening the switching time, reducing the response time to application interface call instructions, and improving the user experience.
  • The solution in the embodiments of the present application can be applied to various scenarios, for example, multi-core, multi-operating-system intelligent terminals using a virtualization architecture based on Qemu/KVM technology, Android emulators, and the like.
  • The solution in the embodiments of the present application can be implemented in various computer languages, for example, an object-oriented programming language such as Java.
  • FIG. 2 illustrates a system architecture for implementing a virtualization method in an embodiment of the present application.
  • As shown in FIG. 2, the virtualization system according to the embodiments of the present application is applied to an electronic device including a plurality of physical CPUs 201a, 201b, 201c, and 201d; in the electronic device, a plurality of virtual CPUs 202a, 202b, 202c, and 202d are created, and a first operating system 203 and a second operating system 204 are created based on Qemu/KVM technology.
  • Specifically, the first operating system may be a Guest operating system and the second operating system may be a Host operating system. It should be understood that, in a specific implementation, the first operating system may also be a Host operating system and the second operating system a Guest operating system, which is not limited in this application.
  • For the purpose of example, FIG. 2 shows only the case of a quad-core processor with four virtual CPUs respectively corresponding to the four cores. It should be understood, however, that in a specific implementation there may be any other plural number of physical CPUs, such as 2, 3, 5, or 8, and the number of virtual CPUs may also be any other plural number, such as 5, 6, 8, or 10.
  • The ratio of virtual CPUs to physical CPUs may be set to 1:1, 2:1, and so on; in general, the number of virtual CPUs can be set greater than or equal to the number of physical CPUs.
  • For the purpose of example, FIG. 2 shows only one Guest operating system and one Host operating system. It should be understood, however, that in a specific implementation there may be one or more Guest operating systems and one or more Host operating systems; that is, the Guest and Host operating systems may each be of any number, which is not limited in this application.
  • In the following, the specific implementation of the present application is described in detail taking as an example four physical CPUs, a 1:1 ratio between the virtual CPUs to be created and the physical CPUs, the first operating system being the Guest operating system, and the second operating system being the Host operating system.
  • Specifically, the Guest operating system 203 may include a user space 2031, a Guest Linux Kernel 2032, and Qemu 2033. In the user space of the Guest operating system, interfaces to multiple virtual hardware devices or modules may be provided; specifically, these interfaces may include a graphics program interface, a multimedia program interface, a codec interface, and the like. More specifically, for example, the graphics program interface may be an OpenGL (Open Graphics Library) API, Direct 3D, Quick Draw 3D, or another graphics program interface, and the multimedia/video program interface may be an OpenMAX (Open Media Acceleration) interface or the like, which is not limited in this application.
  • Specifically, the Host operating system 204 may include a user space 2041 and a Host Linux Kernel 2042. In the user space of the Host operating system, a back-end server (Backend Server) corresponding to each interface in the Guest operating system may be provided. For example, when the graphics program interface in the Guest operating system is the OpenGL API, the back-end server may be an OpenGL Backend Server, which operates the GPU device through the GPU driver in the Host Linux Kernel; when the multimedia/video program interface in the Guest operating system is the OpenMAX API, the back-end server may be an OpenMAX Backend Server, which operates the corresponding multimedia/video device through the multimedia/video driver in the Host Linux Kernel.
  • FIG. 3 shows a flow chart of a virtualization method according to Embodiment 1 of the present application.
  • Embodiment 1 describes the steps of the virtualization method with the Guest operating system as the executing entity.
  • As shown in FIG. 3, the GPU virtualization method according to the embodiment of the present application includes the following steps:
  • In a specific implementation, when Qemu creates the virtual CPUs, each thread that maintains a virtual CPU may be bound to a corresponding physical CPU.
  • For example, the first virtual CPU created is bound to physical CPU 201a, the second to physical CPU 201b, the third to physical CPU 201c, and the fourth to physical CPU 201d.
  • The specific implementation of binding a virtual CPU to a given physical CPU may use common technical means known to those skilled in the art, which are not described in detail in this application.
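  • As an illustrative sketch only (not part of the patent text), on a Linux host the per-vCPU thread affinity described above could be set with pthread_setaffinity_np; the 1:1 vCPU-to-physical-CPU mapping and the helper name below are assumptions made for illustration.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin one vCPU-maintaining thread to a single physical CPU.
 * The mapping is assumed to be 1:1 here, e.g. vCPU i -> physical CPU i. */
static int bind_vcpu_thread(pthread_t vcpu_thread, int physical_cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(physical_cpu, &set);   /* allow exactly this one CPU */

    int err = pthread_setaffinity_np(vcpu_thread, sizeof(set), &set);
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
    return err;
}
```

  • In a Qemu-style implementation this would be called once per vCPU thread at creation time, for example bind_vcpu_thread(vcpu_threads[i], i) for the 1:1 mapping in the example above.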
  • In a specific implementation, the user may perform a user operation on a thread in the Guest operating system; for example, in a thread such as WeChat or QQ, the user may perform operations such as opening a new window, opening a new page, or playing multimedia/video.
  • In a specific implementation, when a user operation is received, the thread generates an API call instruction according to the user operation to invoke the corresponding front-end thread. For example, when the user opens a new window or a new page, the corresponding graphics processing interface may be called; when the user plays multimedia/video, the corresponding multimedia/video interface may be called.
  • Specifically, after the corresponding front-end thread is invoked, the processing instruction may be further determined according to the specific content of the user operation. The processing instruction may be, for example, an instruction to create a new window or page, or to encode a segment of video; specifically, it may include one or more of the following: a processing function, parameters, synchronization information, content data, and the like, which is not limited in this application.
  • In a specific implementation, the front-end thread can be bound to the virtual CPU on which the thread that invokes the front-end thread runs. For example, if the thread that calls the front-end thread is WeChat, the front-end thread can be bound to the virtual CPU on which WeChat runs.
  • Specifically, when the front-end thread starts, that is, when the Guest operating system initializes the API call, the identifier of the virtual CPU on which the thread that invokes the API runs may be obtained; the identifier may be the number of the virtual CPU, and the front-end thread is then bound to that virtual CPU.
  • Obtaining the number of the virtual CPU of the calling thread may use common technical means known to those skilled in the art; for example, the calling thread may detect the number of the virtual CPU it is running on, which is not described in detail herein.
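  • As a hedged sketch of how the Guest side might do this on Linux, sched_getcpu() reports the (virtual) CPU the calling thread is currently on, and sched_setaffinity() with pid 0 pins the calling thread to it; the function name below is an assumption for illustration.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Inside the Guest, sched_getcpu() reports the vCPU the caller runs on.
 * Pinning the front-end thread to that vCPU keeps it there from now on. */
static int pin_frontend_to_current_vcpu(void)
{
    int vcpu = sched_getcpu();                 /* vCPU number seen by Guest Linux */
    if (vcpu < 0) {
        perror("sched_getcpu");
        return -1;
    }

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(vcpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {   /* 0 = calling thread */
        perror("sched_setaffinity");
        return -1;
    }
    return vcpu;   /* caller forwards this identifier to the Host (S304) */
}
```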
  • In a specific implementation, the identifier of the virtual CPU may be sent to the second operating system on its own, or it may be carried in another message sent to the second operating system.
  • Specifically, when the front-end thread is invoked, the Host operating system is usually triggered to create a back-end thread corresponding to the front-end thread.
  • In order to bind the back-end thread and the front-end thread to the same physical CPU, the identifier of the virtual CPU may be carried in the channel initialization information between the front-end thread and the back-end thread, and the channel initialization information is sent to the Host operating system, so that the Host operating system can create the back-end thread corresponding to the front-end thread in the back-end server of the corresponding interface and further bind that back-end thread to the virtual CPU.
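  • Purely as an illustration, the channel initialization information might carry the vCPU identifier in a small fixed header; the message layout, the field names, and the assumption of a file-descriptor-based channel (for example a socket or virtio-serial port toward Qemu/the Backend Server) are hypothetical, not taken from the patent.

```c
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical channel-initialization message; all field names are illustrative. */
struct channel_init_msg {
    uint32_t magic;      /* identifies this as a channel-init message        */
    uint32_t vcpu_id;    /* vCPU the front-end thread has been bound to      */
    uint32_t api_id;     /* which interface this channel serves (GL, OMX...) */
};

/* Guest side: send the init message over an already-open channel descriptor. */
static int send_channel_init(int channel_fd, uint32_t vcpu_id, uint32_t api_id)
{
    struct channel_init_msg msg;
    memset(&msg, 0, sizeof(msg));
    msg.magic   = 0x494E4954u;   /* "INIT", arbitrary constant */
    msg.vcpu_id = vcpu_id;
    msg.api_id  = api_id;

    ssize_t n = write(channel_fd, &msg, sizeof(msg));
    return (n == (ssize_t)sizeof(msg)) ? 0 : -1;
}
```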
  • After the front-end thread and the back-end thread have been bound, steps S303 and S304 need not be repeated for subsequent application interface call instructions.
  • The processing instruction may be sent using any of a number of conventional sending methods known to those skilled in the art, which are not described herein.
  • The switching from the front-end thread to the back-end thread, and the switching between the first operating system and the second operating system, may use common technical means known to those skilled in the art and are not repeated here.
  • In a specific implementation, the back-end thread drives the corresponding hardware device/module to execute the processing instruction and obtains the processing result.
  • Steps S304 and S305 do not have a strict ordering: S304 may be executed before S305, S305 may be executed before S304, or the two may be executed at the same time.
  • In a specific implementation, the back-end thread may feed the processing result back directly as the response to the application interface call instruction, or return the processing result to the front-end thread, which then responds to the user operation.
  • In this way, remote operation of the hardware device/module by the user program in the Guest operating system is realized; that is, a virtualization scheme in which the front-end thread and the back-end thread run on the same physical CPU is realized.
  • The virtualization method in the embodiment of the present application binds the front-end thread and the corresponding back-end thread to the same physical CPU, so that during virtualization both the switching between the Guest operating system and the Host operating system and the switching between the front-end thread and the back-end thread are performed on the same physical CPU, thereby shortening the switching time, reducing the response time to application interface call instructions, and improving the user experience.
  • FIG. 4 shows a flow chart of a virtualization method according to Embodiment 2 of the present application.
  • Embodiment 2 describes the steps of the virtualization method with the Host operating system as the executing entity.
  • the system architecture in the embodiment of the present application refer to the system architecture shown in FIG. 2 in the first embodiment, and details are not described herein again.
  • the virtualization method includes the following steps:
  • The second operating system determines the virtual CPU bound to the front-end thread and binds the back-end thread to that virtual CPU, where the virtual CPU has a correspondence with a physical CPU.
  • For the establishment of the correspondence between virtual CPUs and physical CPUs, reference may be made to the implementation of S301 in Embodiment 1 of the present application, which is not repeated here.
  • In a specific implementation, when the Host operating system creates the corresponding back-end thread in the back-end server, it determines the virtual CPU bound to the front-end thread and binds the back-end thread to that virtual CPU. Specifically, if the Guest operating system sends the identifier of the virtual CPU bound to the front-end thread, for example its number, carried in the channel initialization information of the front-end thread, the Host operating system may extract the number of the virtual CPU from the channel initialization information and, after creating the back-end thread, bind the back-end thread to that virtual CPU.
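  • The Host-side counterpart might look like the sketch below: read the hypothetical channel-init message defined earlier, create the back-end thread, and pin it to the reported CPU index. A 1:1 vCPU-to-physical-CPU mapping is assumed, and backend_thread_main is a placeholder, not the patent's implementation.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <unistd.h>

struct channel_init_msg {          /* must match the Guest-side layout above */
    uint32_t magic, vcpu_id, api_id;
};

/* Placeholder for the real back-end service routine of one channel. */
static void *backend_thread_main(void *arg)
{
    int channel_fd = (int)(long)arg;
    (void)channel_fd;   /* real code: serve processing instructions on this fd */
    return NULL;
}

/* Create the back-end thread for a new channel and pin it to the CPU index
 * reported by the front end (1:1 vCPU-to-physical-CPU mapping assumed). */
static int spawn_backend_for_channel(int channel_fd)
{
    struct channel_init_msg msg;
    if (read(channel_fd, &msg, sizeof(msg)) != (ssize_t)sizeof(msg))
        return -1;

    pthread_t tid;
    if (pthread_create(&tid, NULL, backend_thread_main,
                       (void *)(long)channel_fd) != 0)
        return -1;

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET((int)msg.vcpu_id, &set);
    return pthread_setaffinity_np(tid, sizeof(set), &set);
}
```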
  • The back-end thread at the second operating system acquires the processing instruction from the front-end thread at the first operating system.
  • The switching from the front-end thread to the back-end thread, and the switching between the first operating system and the second operating system, may use common technical means known to those skilled in the art, which are not described in this application.
  • The processing instruction may be, for example, an instruction to create a new window or page, or to encode a segment of video; specifically, it may include one or more of the following: a processing function, parameters, synchronization information, content data, and the like, which is not limited in this application.
  • The processing instruction can be obtained in any of a number of conventional ways, which are not described herein.
  • The processing result is the response to the processing operation, where the processing operation corresponds to the processing instruction.
  • The back-end thread in the back-end server drives the corresponding hardware device/module to execute the processing instruction and obtains the processing result as the response to the processing operation.
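  • A minimal sketch of such a back-end service loop is shown below; the wire format, the drive_device stub, and the fixed payload buffer are all assumptions made for illustration rather than the patent's protocol.

```c
#include <stdint.h>
#include <unistd.h>

/* Hypothetical wire format of one processing instruction. */
struct processing_instruction {
    uint32_t func_id;       /* which operation: create window, encode frame, ... */
    uint32_t payload_len;   /* bytes of parameters/content data that follow      */
};

/* Stand-in for the driver call the Backend Server would actually make. */
static int64_t drive_device(uint32_t func_id, const uint8_t *payload, uint32_t len)
{
    (void)func_id; (void)payload; (void)len;
    return 0;               /* result of the hardware operation */
}

/* Serve one front-end channel: read instructions, execute, return each result. */
static void backend_serve(int channel_fd)
{
    struct processing_instruction hdr;
    uint8_t payload[4096];

    while (read(channel_fd, &hdr, sizeof(hdr)) == (ssize_t)sizeof(hdr)) {
        if (hdr.payload_len > sizeof(payload))
            break;                                        /* oversized request */
        if (read(channel_fd, payload, hdr.payload_len) != (ssize_t)hdr.payload_len)
            break;

        int64_t result = drive_device(hdr.func_id, payload, hdr.payload_len);
        if (write(channel_fd, &result, sizeof(result)) != (ssize_t)sizeof(result))
            break;                                        /* response to the call */
    }
}
```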
  • For subsequent calls, step S401 need not be repeated; S402 and S403 may be executed directly.
  • In this way, the Host operating system, in conjunction with the user program in the Guest operating system, implements remote operation of the hardware device/module; that is, a virtualization scheme in which the front-end thread and the back-end thread run on the same physical CPU is realized.
  • The virtualization method in the embodiment of the present application binds the front-end thread and the corresponding back-end thread to the same physical CPU, so that during virtualization both the switching between the Guest operating system and the Host operating system and the switching between the front-end thread and the back-end thread are performed on the same physical CPU, thereby shortening the switching time, reducing the response time to application interface call instructions, and improving the user experience.
  • FIG. 5 is a flow chart showing a virtualization method according to Embodiment 3 of the present application.
  • Embodiment 3 describes the steps by which the Guest operating system and the Host operating system cooperate to implement the virtualization method.
  • the system architecture in the embodiment of the present application refer to the system architecture shown in FIG. 2 in the first embodiment, and details are not described herein again.
  • In this embodiment, the initiator of the remote API function call is the Guest operating system, and the party that executes the function is the Host operating system; the downlink synchronization path from the Guest operating system to the Host operating system goes through the Guest Linux kernel and Qemu to the Backend Server, and the uplink synchronization path from the Host operating system to the Guest operating system starts from the Backend Server and reaches the emulated API through Qemu and the Guest Linux kernel.
  • the virtualization method according to Embodiment 3 of the present application includes the following steps:
  • In a specific implementation, the Qemu thread that creates and maintains virtual CPU0 can be bound to physical CPU0, and so on, with the thread for virtual CPU3 bound to physical CPU3.
  • The Guest operating system receives an application interface call instruction.
  • In a specific implementation, the Guest operating system may receive user operations through the operating system itself or through a thread in the operating system.
  • In a specific implementation, the user may perform a user operation on a thread in the Guest operating system, for example, in a thread such as WeChat or QQ, performing operations such as opening a new window, opening a new page, or playing multimedia/video.
  • These threads usually generate API call instructions after receiving a user action.
  • the thread that invokes the front-end thread may be the thread that receives the user operation.
  • the thread that receives the user operation may be a user program such as WeChat or QQ.
  • The number of the virtual CPU of the thread that invokes the API may be detected by common technical means known to those skilled in the art, which are not described herein.
  • The Guest operating system binds the front-end thread to that virtual CPU and transmits the virtual CPU number to the corresponding Backend Server in the Host operating system.
  • The virtual CPU number can be sent to the Backend Server on its own, or it can be carried in another message, for example the channel initialization message, sent to the Backend Server.
  • The Backend Server here is the back-end server corresponding to the front-end thread: for example, if the front-end thread is a graphics program interface, the corresponding Backend Server is the graphics back-end server; if the front-end thread is a multimedia/video interface, the corresponding Backend Server is the multimedia/video back-end server.
  • The Backend Server obtains the virtual CPU number when creating the back-end thread; for example, if the Guest operating system carries the virtual CPU number in the channel initialization message sent to the Backend Server, the Backend Server extracts the CPU number from the channel initialization message and binds the back-end thread accordingly.
  • For subsequent calls, the remote API call can be performed directly without repeating the binding; that is, the steps of binding the front-end thread and the back-end thread to the same physical CPU, namely S503-S505, need not be performed again.
  • The front-end thread sends the processing instruction to the back-end thread.
  • For the implementation of this step, reference may be made to the implementation of step S403 in Embodiment 2 of the present application, and the description is not repeated here.
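  • For symmetry with the back-end sketch above, a front-end remote call might be packed and sent as follows; the header layout and the blocking read of the result are illustrative assumptions, not the patent's protocol.

```c
#include <stdint.h>
#include <unistd.h>

struct processing_instruction {    /* same hypothetical layout as the back end */
    uint32_t func_id;
    uint32_t payload_len;
};

/* Front end: marshal one API call into a processing instruction, send it to
 * the back-end thread over the shared channel, and wait for the result. */
static int remote_call(int channel_fd, uint32_t func_id,
                       const void *params, uint32_t params_len, int64_t *result)
{
    struct processing_instruction hdr = { func_id, params_len };

    if (write(channel_fd, &hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr))
        return -1;
    if (params_len > 0 &&
        write(channel_fd, params, params_len) != (ssize_t)params_len)
        return -1;

    /* Because the front-end and back-end threads are pinned to the same physical
     * CPU, the switch to the back end and back avoids a cross-CPU migration. */
    if (read(channel_fd, result, sizeof(*result)) != (ssize_t)sizeof(*result))
        return -1;
    return 0;
}
```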
  • In this way, remote invocation of the hardware device/module by the user program across the two operating systems is realized; that is, a virtualization scheme in which the front-end thread and the back-end thread run on the same physical CPU is realized.
  • The virtualization method in the embodiment of the present application binds the front-end thread and the corresponding back-end thread to the same physical CPU, so that during virtualization both the switching between the Guest operating system and the Host operating system and the switching between the front-end thread and the back-end thread are performed on the same physical CPU, thereby shortening the switching time, reducing the response time to application interface call instructions, and improving the user experience.
  • The embodiment of the present application further provides a virtualization apparatus applied to a multi-core processor. Since the principle by which the apparatus solves the problem is similar to that of the virtualization method provided in Embodiment 1 of the present application, for its implementation reference may be made to the implementation of the method, and repeated description is omitted.
  • FIG. 6 is a schematic structural diagram of a virtualization device according to Embodiment 4 of the present application.
  • As shown in FIG. 6, the virtualization apparatus 600 includes a binding module 601 configured to bind a front-end thread and a back-end thread to the same physical central processing unit (CPU), wherein the front-end thread is configured to determine, according to an application interface call instruction received at the first operating system, a processing instruction corresponding to the application interface call instruction and send the processing instruction to the corresponding back-end thread at the second operating system; and the back-end thread is configured to receive and execute the processing instruction at the second operating system and return the processing result as the response to the application interface call instruction, or return it to the front-end thread.
  • Specifically, the binding module includes: a correspondence establishing sub-module, configured to establish, at the emulator Qemu, the correspondence between each virtual CPU and each physical CPU; a first binding sub-module, configured to, at the first operating system, bind the front-end thread to a virtual CPU and send the identifier of the bound virtual CPU to the second operating system; and a second binding sub-module, configured to, at the second operating system, receive the identifier of the virtual CPU from the first operating system and bind the back-end thread to that virtual CPU.
  • Specifically, the correspondence establishing sub-module is configured to bind each thread that maintains a virtual CPU to a corresponding physical CPU when the virtual CPUs are created in Qemu.
  • Specifically, the first binding sub-module is configured to: obtain the identifier of the virtual CPU on which the thread that invokes the front-end thread runs, the identifier being used to identify that virtual CPU; bind the front-end thread to the virtual CPU indicated by the identifier; and send the identifier of the bound virtual CPU to the second operating system.
  • Further, the first binding sub-module is configured to: when the front-end thread starts, acquire the identifier of the virtual CPU on which the thread that invokes the front-end thread runs; carry the identifier of the virtual CPU in the channel initialization information between the front-end thread and the back-end thread; and send the channel initialization information to the second operating system.
  • Specifically, the second binding sub-module is configured to: receive, from the first operating system, the channel initialization information between the front-end thread and the back-end thread, the channel initialization information carrying the identifier of the virtual CPU; extract the identifier of the virtual CPU from the channel initialization information; and bind the back-end thread to that virtual CPU.
  • The virtualization apparatus in the embodiment of the present application binds the front-end thread and the corresponding back-end thread to the same physical CPU, so that during virtualization both the switching between the Guest operating system and the Host operating system and the switching between the front-end thread and the back-end thread are performed on the same physical CPU, thereby shortening the switching time, reducing the response time to application interface call instructions, and improving the user experience.
  • an electronic device 700 as shown in FIG. 7 is also provided in the embodiment of the present application.
  • As shown in FIG. 7, the electronic device 700 includes: a display 701, a memory 702, one or more processors 703, a bus 704, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of any of the methods of Embodiments 1 to 3 of the present application.
  • A computer program product is also provided in the embodiments of the present application; the computer program product encodes instructions for executing a process, the process comprising the virtualization method with the steps of Embodiment 1 of the present application.
  • the computer program product can be used in conjunction with electronic device 700.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

Provided are a virtualisation method and device, an electronic device and a computer program product, wherein same are applied to a multi-core processor, the method comprising: binding a front-end thread and a back-end thread to the same physical central processing unit (CPU) (S303, S401), wherein the front-end thread is used for determining, according to an application interface invoking instruction received at a first operating system, a processing instruction corresponding to the application interface invoking instruction (S302), and sending the processing instruction to a corresponding back-end thread at a second operating system (S305); and the back-end thread is used for receiving and executing the processing instruction at the second operation system, and taking a processing result as a response to the application interface invoking instruction or returning same to the front-end thread (S402, S403). Using this method can shorten the time used for system switching and decrease the response time for the application interface invoking instruction, thereby improving the user experience.

Description

Virtualization method, apparatus, electronic device, and computer program product

Technical Field

The present application relates to computer technology, and in particular to a virtualization method, apparatus, electronic device, and computer program product.

Background

A virtualization architecture based on Qemu/KVM (Kernel-based Virtual Machine) technology is shown in FIG. 1.

As shown in FIG. 1, the virtualization architecture based on Qemu/KVM technology consists of one primary Host operating system and several virtualized Guest operating systems. The Host operating system includes multiple Host user-space programs and the Host Linux kernel. Each Guest operating system includes its own user space, a Guest Linux kernel, and the emulator Qemu. These operating systems run on the same set of hardware processor chips, sharing the processor and peripheral resources. An ARM processor supporting this virtualization architecture provides at least the EL2, EL1, and EL0 modes: the virtual machine manager (Hypervisor) runs in EL2 mode, the Linux kernel programs run in EL1 mode, and the user-space programs run in EL0 mode. The Hypervisor layer manages hardware resources such as the CPU, memory, timers, and interrupts, and, through the virtualized CPU, memory, timer, and interrupt resources, can load different operating systems onto the physical processor in a time-shared manner, thereby implementing system virtualization.

KVM/Hypervisor spans the Host Linux kernel and the Hypervisor layers. On one hand, it provides a driver node for Qemu, allowing Qemu to create virtual CPUs through the KVM node and to manage virtualized resources; on the other hand, KVM/Hypervisor can also switch the Host Linux system off the physical CPU, load the Guest Linux system onto the physical processor to run, and handle the follow-up work when the Guest Linux system exits abnormally.

Qemu runs as an application on Host Linux and provides virtual hardware device resources for running Guest Linux; through the KVM device node of the KVM/Hypervisor module, it creates virtual CPUs, allocates physical hardware resources, and loads an unmodified Guest Linux onto the physical hardware to run.

To implement the above virtualization architecture on a terminal device such as a mobile phone or tablet, the virtualization of all hardware devices must be addressed, so that the virtualized operating systems can also use the real hardware devices.

In prior-art virtualization solutions, when a user operation is received, the front-end thread in the Guest operating system usually calls the corresponding API (Application Program Interface) and generates a corresponding processing instruction; the processing instruction is then passed to the back-end thread, corresponding to that front-end thread, in the Backend Server associated with the API and running in the Host operating system. The back-end thread drives the corresponding hardware device or module to perform the operation, and the execution result is returned as the response to the application interface call instruction.

With the prior-art virtualization solution, the response path for an API call instruction is relatively long, so responding to a user operation takes a long time and degrades the user experience.
Summary of the Invention

The embodiments of the present application provide a virtualization method, apparatus, electronic device, and computer program product, mainly intended to shorten the response path for API call instructions in a virtualization system.

According to a first aspect of the embodiments of the present application, a virtualization method is provided, applied to a multi-core processor, including: binding a front-end thread and a back-end thread to the same physical central processing unit (CPU), wherein the front-end thread is configured to determine, according to an application interface call instruction received at a first operating system, a processing instruction corresponding to the application interface call instruction and send the processing instruction to the corresponding back-end thread at a second operating system; and the back-end thread is configured to receive and execute the processing instruction at the second operating system and return the processing result as the response to the application interface call instruction, or return it to the front-end thread.

According to a second aspect of the embodiments of the present application, a virtualization apparatus is provided, applied to a multi-core processor, including a binding module configured to bind a front-end thread and a back-end thread to the same physical central processing unit (CPU), wherein the front-end thread is configured to determine, according to an application interface call instruction received at the first operating system, a processing instruction corresponding to the application interface call instruction and send the processing instruction to the corresponding back-end thread at the second operating system; and the back-end thread is configured to receive and execute the processing instruction at the second operating system and return the processing result as the response to the application interface call instruction, or return it to the front-end thread.

According to a third aspect of the embodiments of the present application, an electronic device is provided, including: a display, a memory, one or more processors, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the virtualization method of the first aspect of the embodiments of the present application.

According to a fourth aspect of the embodiments of the present application, a computer program product is provided, which encodes instructions for executing a process, the process including the virtualization method of the first aspect of the embodiments of the present application.

The virtualization method, apparatus, electronic device, and computer program product of the embodiments of the present application bind a front-end thread and its corresponding back-end thread to the same physical CPU, so that during virtualization both the switching between the Guest operating system and the Host operating system and the switching between the front-end thread and the back-end thread take place on the same physical CPU, thereby shortening the switching time, reducing the response time to application interface call instructions, and improving the user experience.
Brief Description of the Drawings

The drawings described here are intended to provide a further understanding of the present application and constitute a part of the present application; the exemplary embodiments of the present application and their description are used to explain the present application and do not constitute an undue limitation of the present application. In the drawings:

FIG. 1 is a schematic diagram of a virtualization architecture based on Qemu/KVM technology;

FIG. 2 shows a system architecture for implementing the virtualization method of the embodiments of the present application;

FIG. 3 is a flowchart of a virtualization method according to Embodiment 1 of the present application;

FIG. 4 is a flowchart of a virtualization method according to Embodiment 2 of the present application;

FIG. 5 is a flowchart of a virtualization method according to Embodiment 3 of the present application;

FIG. 6 is a schematic structural diagram of a virtualization apparatus according to Embodiment 4 of the present application;

FIG. 7 is a schematic structural diagram of an electronic device according to Embodiment 5 of the present application.
Detailed Description

In the process of implementing the present application, the inventor found that in prior-art virtualization solutions, when a thread is created on a multi-core processor, the Linux kernel schedules the virtual CPU on which that thread runs according to a preset policy, for example a load-balancing or round-robin policy; therefore, which virtual CPU a given thread runs on is indeterminate. Likewise, when Qemu creates a virtual CPU, the Linux kernel also schedules the physical CPU on which that virtual CPU runs according to a preset policy, for example a load-balancing or round-robin policy; therefore, the correspondence between virtual CPUs and physical CPUs is also indeterminate. As a result, which physical CPU a thread in the system runs on is indeterminate.

The inventor believes that these scheduling policies for threads and virtual CPUs cause, in prior-art virtualization solutions, the front-end thread running in the Guest operating system and the corresponding back-end thread running in the Host operating system to usually run on different physical CPUs. Consequently, during virtualization, switching between the front-end thread and the back-end thread, which is also a switch between the Guest operating system and the Host operating system, requires switching between different physical CPUs, so the response to an application interface call instruction takes a long time; the response time for some user operations therefore becomes relatively long, affecting the user experience.

To address this problem, the embodiments of the present application provide a virtualization method, apparatus, system, electronic device, and computer program product that bind a front-end thread and its corresponding back-end thread to the same physical CPU, so that during virtualization both the switching between the Guest operating system and the Host operating system and the switching between the front-end thread and the back-end thread take place on the same physical CPU, thereby shortening the switching time, reducing the response time to application interface call instructions, and improving the user experience.

The solution in the embodiments of the present application can be applied to various scenarios, for example, multi-core, multi-operating-system intelligent terminals using a virtualization architecture based on Qemu/KVM technology, Android emulators, and the like.

The solution in the embodiments of the present application can be implemented in various computer languages, for example, an object-oriented programming language such as Java.

In order to make the technical solutions and advantages of the embodiments of the present application clearer, exemplary embodiments of the present application are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. It should be noted that, where there is no conflict, the embodiments in the present application and the features in the embodiments may be combined with each other.
Embodiment 1

FIG. 2 shows a system architecture for implementing the virtualization method of the embodiments of the present application.

As shown in FIG. 2, the virtualization system according to the embodiments of the present application is applied to an electronic device including a plurality of physical CPUs 201a, 201b, 201c, and 201d; in the electronic device, a plurality of virtual CPUs 202a, 202b, 202c, and 202d are created, and a first operating system 203 and a second operating system 204 are created based on Qemu/KVM technology.

Specifically, the first operating system may be a Guest operating system and the second operating system may be a Host operating system. It should be understood that, in a specific implementation, the first operating system may also be a Host operating system and the second operating system a Guest operating system, which is not limited in this application.

For the purpose of example, FIG. 2 shows only the case of a quad-core processor with four virtual CPUs respectively corresponding to the four cores. It should be understood, however, that in a specific implementation there may be any other plural number of physical CPUs, such as 2, 3, 5, or 8, and the number of virtual CPUs may also be any other plural number, such as 5, 6, 8, or 10. When setting the number of virtual CPUs, the ratio to the number of physical CPUs may be 1:1, 2:1, and so on; in general, the number of virtual CPUs can be set greater than or equal to the number of physical CPUs, which is not limited in this application.

For the purpose of example, FIG. 2 shows only one Guest operating system and one Host operating system. It should be understood, however, that in a specific implementation there may be one or more Guest operating systems and one or more Host operating systems; that is, the Guest and Host operating systems may each be of any number, which is not limited in this application.

In the following, the specific implementation of the present application is described in detail taking as an example four physical CPUs, a 1:1 ratio between the virtual CPUs to be created and the physical CPUs, the first operating system being the Guest operating system, and the second operating system being the Host operating system.

Specifically, the Guest operating system 203 may include a user space 2031, a Guest Linux Kernel 2032, and Qemu 2033. In the user space of the Guest operating system, interfaces to multiple virtual hardware devices or modules may be provided; specifically, these interfaces may include a graphics program interface, a multimedia program interface, a codec interface, and the like. More specifically, for example, the graphics program interface may be an OpenGL (Open Graphics Library) API, Direct 3D, Quick Draw 3D, or another graphics program interface, and the multimedia/video program interface may be an OpenMAX (Open Media Acceleration) interface or the like, which is not limited in this application.

Specifically, the Host operating system 204 may include a user space 2041 and a Host Linux Kernel 2042. In the user space of the Host operating system, a back-end server (Backend Server) corresponding to each interface in the Guest operating system may be provided. For example, when the graphics program interface in the Guest operating system is the OpenGL API, the back-end server may be an OpenGL Backend Server, which operates the GPU device through the GPU driver in the Host Linux Kernel; when the multimedia/video program interface in the Guest operating system is the OpenMAX API, the back-end server may be an OpenMAX Backend Server, which operates the corresponding multimedia/video device through the multimedia/video driver in the Host Linux Kernel.

Next, the virtualization method according to the embodiments of the present application is described with reference to the system architecture shown in FIG. 2.

FIG. 3 shows a flowchart of the virtualization method according to Embodiment 1 of the present application. Embodiment 1 describes the steps of the virtualization method with the Guest operating system as the executing entity. As shown in FIG. 3, the GPU virtualization method according to the embodiment of the present application includes the following steps:
S301,在创建虚拟CPU时,在Qemu处建立各虚拟CPU与各物理CPU 的对应关系。S301. When creating a virtual CPU, establish each virtual CPU and each physical CPU at Qemu. Correspondence.
在具体实施时,可以在Qemu创建虚拟CPU时,将维护各虚拟CPU的各个线程分别绑定至相应的物理CPU。例如,将创建的第一个虚拟CPU绑定至物理CPU 201a,将创建的第二个虚拟CPU绑定至物理CPU 201b,将创建的第三个虚拟CPU绑定至物理CPU 201c,第四个虚拟CPU绑定至物理CPU201d等。具体地,将虚拟CPU绑定至某一物理CPU的具体实施可以采用本领域技术人员的常用技术手段,本申请对此不赘述。In a specific implementation, when a virtual CPU is created by Qemu, each thread that maintains each virtual CPU is bound to a corresponding physical CPU. For example, the first virtual CPU created is bound to the physical CPU 201a, the second virtual CPU created is bound to the physical CPU 201b, and the third virtual CPU created is bound to the physical CPU 201c, the fourth The virtual CPU is bound to the physical CPU 201d and the like. Specifically, the specific implementation of binding the virtual CPU to a certain physical CPU may employ the common technical means of those skilled in the art, which is not described in this application.
S302. Receive an application interface call instruction issued at the first operating system, invoke the corresponding front-end thread according to the user operation, and determine a processing instruction.
In a specific implementation, the user may perform an operation on a thread in the Guest operating system. For example, in a thread such as WeChat or QQ, the user may open a new window, open a new page, play multimedia/video, and so on.
In a specific implementation, when a user operation is received, the thread generates an API call instruction according to the user operation to invoke the corresponding front-end thread. For example, when the user opens a new window or a new page, the corresponding graphics processing interface may be invoked; when the user plays multimedia/video, the corresponding multimedia/video interface may be invoked.
In a specific implementation, after the corresponding front-end thread is invoked, the processing instruction may be further determined according to the specific content of the user operation. Specifically, the processing instruction may be an instruction to create a new window or page, or to encode a segment of video; it may include one or more of the following: a processing function, parameters, synchronization information, content data, and the like, which is not limited in this application.
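Because a processing instruction may carry a processing function, parameters, synchronization information and content data, one possible layout for such an instruction as it travels from the front-end thread to the back-end thread is sketched below. The struct, its field names and types are assumptions for illustration only; the patent does not define a wire format.

```c
#include <stdint.h>

/* Hypothetical serialized form of one processing instruction sent from the
 * front-end thread to the back-end thread. All field names are assumptions. */
struct processing_instruction {
    uint32_t function_id;   /* which API function to execute remotely        */
    uint32_t sync_flags;    /* synchronization information, e.g. blocking    */
    uint32_t param_size;    /* size of the serialized parameters             */
    uint32_t data_size;     /* size of the content data, e.g. a video frame  */
    uint8_t  payload[];     /* parameters followed by content data           */
};
```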
S303. Bind the front-end thread to a virtual CPU.
In a specific implementation, the front-end thread may be bound to the virtual CPU on which the thread that invokes the front-end thread runs. For example, if the thread that invokes the front-end thread belongs to WeChat, the front-end thread may be bound to the virtual CPU on which WeChat runs.
Specifically, when the front-end thread is started, that is, when the Guest operating system initializes the API call, the identifier of the virtual CPU on which the thread invoking the API runs may be obtained. The identifier of the virtual CPU may be the number of the virtual CPU. The front-end thread, that is, the API, is then bound to the virtual CPU corresponding to that identifier. Obtaining the virtual CPU number of the thread that invokes the API may employ common technical means of those skilled in the art; for example, the thread that invokes the API may detect the number of the virtual CPU on which it runs, which is not described in detail here.
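A minimal Guest-side sketch, assuming a Linux guest: the calling thread's current virtual CPU can be read with sched_getcpu(), and the front-end thread can then be pinned to it. The function name bind_frontend_thread is illustrative; it is not an API defined by the patent.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Illustrative Guest-side sketch: the thread that calls the API detects the
 * virtual CPU it currently runs on, then the front-end thread is pinned to
 * that vCPU. The returned number is later sent to the Host side. */
static int bind_frontend_thread(pthread_t frontend_thread)
{
    int vcpu_id = sched_getcpu();        /* vCPU number as seen by the guest */
    if (vcpu_id < 0)
        return -1;

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(vcpu_id, &set);
    if (pthread_setaffinity_np(frontend_thread, sizeof(set), &set) != 0)
        return -1;

    return vcpu_id;
}
```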
So far, because each virtual CPU has a fixed correspondence with a physical CPU, a front-end thread can be bound to a specified physical CPU through S301-S303.
S304. Send the identifier of the virtual CPU to the second operating system.
In a specific implementation, the identifier of the virtual CPU may be sent to the second operating system on its own, or it may be carried in another message sent to the second operating system.
Specifically, invoking the front-end thread usually triggers the Host operating system to create a back-end thread corresponding to the front-end thread. To bind the back-end thread and the front-end thread to the same physical CPU, the channel initialization information exchanged between the front-end thread and the back-end thread may carry the identifier of the virtual CPU and be sent to the Host operating system, so that the Host operating system can create the back-end thread corresponding to the front-end thread in the backend server of the corresponding interface and further bind that back-end thread to the virtual CPU.
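The patent does not define the format of the channel initialization information, so the shape below is only a hypothetical example of a message that carries the virtual CPU identifier alongside whatever else the channel setup needs; every field name is an assumption.

```c
#include <stdint.h>

/* Hypothetical channel-initialization message exchanged between the
 * front-end thread (Guest) and the backend server (Host). */
struct channel_init_msg {
    uint32_t channel_id;  /* which front-end/back-end channel is being set up */
    uint32_t vcpu_id;     /* virtual CPU the front-end thread was bound to    */
    uint32_t api_type;    /* e.g. a code for the OpenGL or OpenMAX backend    */
};
```

The Guest fills in vcpu_id before sending, so the Host side can bind the newly created back-end thread to the same virtual CPU, and therefore to the same physical CPU.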
So far, through S301-S304, a front-end thread and its corresponding back-end thread can be bound to the same specified physical CPU.
It should be understood that if, before S302, a front-end thread corresponding to the application interface call instruction has already been established in the first operating system and has already been bound to the corresponding virtual CPU, steps S303 and S304 need not be repeated when the front-end thread is invoked again.
S305. Send the processing instruction to the back-end thread, so that the back-end thread executes the processing instruction and obtains a processing result.
In a specific implementation, the processing instruction may be sent in any of the conventional ways familiar to those skilled in the art, which is not described in detail here.
In a specific implementation, the switch from the front-end thread to the back-end thread, and the switch between the first operating system and the second operating system, both employ common technical means of those skilled in the art and are not described in detail here.
In a specific implementation, the back-end thread drives the corresponding hardware device/module to execute the processing instruction and obtains the processing result.
It should be understood that there is no strict ordering between steps S304 and S305: S304 may be executed before S305, S305 may be executed before S304, or S304 and S305 may be executed concurrently, none of which is limited in this application.
In a specific implementation, the back-end thread may feed the processing result back to the user directly as the response to the application interface call instruction, or it may return the processing result to the front-end thread, which then responds to the user operation.
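A sketch of what the back-end thread's main loop could look like on the Host side, assuming hypothetical transport helpers (channel_recv/channel_reply) and a handler table whose entries ultimately call into the GPU or multimedia drivers through the Host Linux Kernel. None of these names come from the patent; they only illustrate receiving a processing instruction, executing it, and handing the result back.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical dispatch table and transport helpers. */
typedef int (*handler_fn)(const uint8_t *params, uint32_t param_size,
                          uint8_t *result, uint32_t *result_size);
extern handler_fn handler_table[];                 /* indexed by function id   */
extern bool channel_recv(uint32_t *function_id,    /* blocks until the front   */
                         uint8_t *params,          /* end sends an instruction */
                         uint32_t *param_size);
extern void channel_reply(const uint8_t *result, uint32_t result_size);

/* Illustrative Host-side back-end thread loop: execute each processing
 * instruction and reply, either as the response to the API call or back to
 * the front-end thread. */
static void backend_thread_loop(void)
{
    uint32_t function_id, param_size, result_size;
    uint8_t params[4096], result[4096];

    while (channel_recv(&function_id, params, &param_size)) {
        result_size = sizeof(result);
        handler_table[function_id](params, param_size, result, &result_size);
        channel_reply(result, result_size);
    }
}
```

Because the back-end thread is pinned to the same physical CPU as the front-end thread, this loop never forces a cross-core migration when control returns to the Guest.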
At this point, the remote invocation of the hardware device/module by a user program in the Guest operating system is realized; that is, a virtualization scheme in which the front-end thread and the back-end thread run on the same physical CPU is realized.
With the virtualization method in this embodiment of the present application, the front-end thread and the corresponding back-end thread are bound to the same physical CPU, so that during virtualization the switches between the Guest operating system and the Host operating system, and between the front-end thread and the back-end thread, are performed on the same physical CPU. This shortens the switching time, reduces the response time to the application interface call instruction, and improves the user experience.
Embodiment 2
FIG. 4 shows a flow chart of a virtualization method according to Embodiment 2 of the present application. Embodiment 2 describes the steps of the virtualization method with the Host operating system as the executing entity. For the system architecture of this embodiment, reference may be made to the system architecture shown in FIG. 2 in Embodiment 1; repeated details are not described again.
As shown in FIG. 4, the virtualization method according to this embodiment of the present application includes the following steps:
S401. The second operating system determines the virtual CPU to which the front-end thread is bound, and binds the back-end thread to that virtual CPU, where the virtual CPU has a correspondence with a physical CPU.
In a specific implementation, for the establishment of the correspondence between virtual CPUs and physical CPUs, reference may be made to the implementation of S301 in Embodiment 1 of the present application; repeated details are not described again.
In a specific implementation, when the Host operating system creates the corresponding back-end thread in the backend server, it may determine the virtual CPU to which the front-end thread is bound and bind the back-end thread to that virtual CPU. Specifically, if the Guest operating system carries the identifier of the virtual CPU bound to the front-end thread, for example its number, in the channel initialization information of the front-end and back-end threads sent to the Host operating system, the Host operating system may extract the corresponding virtual CPU number from the channel initialization information and, after creating the back-end thread, bind the back-end thread to that virtual CPU.
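On the Host side, binding the newly created back-end thread can mirror the Guest-side step. The sketch below assumes the simple identity mapping from S301/S401 (virtual CPU i pinned to physical CPU i); a real backend server would first translate the received identifier to the corresponding physical CPU if the mapping were different. The function name is an assumption for illustration.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Illustrative Host-side sketch: pin the newly created back-end thread using
 * the vCPU identifier extracted from the channel-initialization information. */
static int bind_backend_thread(pthread_t backend_thread, int vcpu_id)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(vcpu_id, &set);   /* valid only under the vCPU i <-> CPU i mapping */
    return pthread_setaffinity_np(backend_thread, sizeof(set), &set);
}
```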
At this point, the front-end thread and the back-end thread have been bound to the same physical CPU.
S402. The back-end thread at the second operating system obtains the processing instruction from the front-end thread at the first operating system.
In a specific implementation, the switch from the front-end thread to the back-end thread, and the switch between the first operating system and the second operating system, both employ common technical means of those skilled in the art and are not described in detail here.
In a specific implementation, the processing instruction may be an instruction to create a new window or page, or to encode a segment of video; it may include one or more of the following: a processing function, parameters, synchronization information, content data, and the like, which is not limited in this application.
In a specific implementation, the processing instruction may be obtained in any of the conventional ways familiar to those skilled in the art, which is not described in detail here.
S403. Execute the processing instruction at the back-end thread and obtain a processing result; use the processing result as the response to the processing operation, where the processing operation corresponds to the processing instruction.
In a specific implementation, the back-end thread in the backend server drives the corresponding hardware device/module to execute the processing instruction and obtains the processing result in response to the processing operation.
It should be understood that if, before S402, a back-end thread corresponding to the front-end thread has already been established in the second operating system and has already been bound to the corresponding virtual CPU, step S401 need not be repeated when the back-end thread is invoked again; S402 and S403 may be executed directly.
At this point, remote invocation of the hardware device/module is realized in the Host operating system in cooperation with the user program in the Guest operating system; that is, a virtualization scheme in which the front-end thread and the back-end thread run on the same physical CPU is realized.
With the virtualization method in this embodiment of the present application, the front-end thread and the corresponding back-end thread are bound to the same physical CPU, so that during virtualization the switches between the Guest operating system and the Host operating system, and between the front-end thread and the back-end thread, are performed on the same physical CPU. This shortens the switching time, reduces the response time to the application interface call instruction, and improves the user experience.
Embodiment 3
FIG. 5 shows a flow chart of a virtualization method according to Embodiment 3 of the present application. Embodiment 3 describes the steps in which the Guest operating system and the Host operating system cooperate to implement the virtualization method. For the system architecture of this embodiment, reference may be made to the system architecture shown in FIG. 2 in Embodiment 1; repeated details are not described again.
In this embodiment of the present application, the initiator of the remote API function call is the Guest operating system and the executor of the function is the Host operating system. The downlink synchronization process from the Guest operating system to the Host operating system passes through the Guest Linux kernel and Qemu to reach the Backend Server; the uplink synchronization process from the Host operating system to the Guest operating system is initiated by the Backend Server and passes through Qemu and the Guest Linux kernel to reach the emulator API.
Next, the implementation of the virtualization method based on the above application scenario is described in detail.
As shown in FIG. 5, the virtualization method according to Embodiment 3 of the present application includes the following steps:
S501. When creating the virtual CPUs, Qemu binds each thread that maintains a virtual CPU to a fixed physical CPU.
For example, if virtualization is implemented on a 4-core processor chip and four virtual CPUs are created, the thread with which Qemu creates virtual CPU0 may be bound to physical CPU0, and so on, up to the thread that creates virtual CPU3 being bound to physical CPU3.
S502. The Guest operating system receives an application interface call instruction.
In a specific implementation, the Guest operating system may receive the user operation through the operating system itself or through a thread in the operating system. For example, the user may perform an operation on a thread in the Guest operating system, such as opening a new window, opening a new page, or playing multimedia/video in a thread such as WeChat or QQ. After receiving the user operation, such a thread usually generates an API call instruction.
S503. When creating the front-end thread, the Guest operating system detects the number of the virtual CPU on which the thread invoking the front-end thread runs.
In a specific implementation, the thread that invokes the front-end thread, that is, the thread that initiates the remote API call, may be the thread that receives the user operation, for example a user program such as WeChat or QQ.
In a specific implementation, the number of the virtual CPU of the thread that invokes the API may be detected with common technical means of those skilled in the art, which is not described in detail here.
S504. The Guest operating system binds the front-end thread to run on this virtual CPU, and passes the virtual CPU number to the corresponding Backend Server in the Host operating system.
In a specific implementation, the virtual CPU number may be sent to the Backend Server on its own, or it may be carried in another message, for example a channel initialization message, sent to the Backend Server.
It should be understood that the Backend Server is the backend server corresponding to the front-end thread. For example, if the front-end thread is a graphics program interface, the corresponding Backend Server is the graphics program backend server; if the front-end thread is a multimedia/video interface, the corresponding Backend Server is the multimedia/video backend server.
S505. When creating the back-end thread, the Backend Server binds the back-end thread to the virtual CPU specified by the Guest operating system.
In a specific implementation, if the Guest operating system sends the virtual CPU number to the Backend Server on its own, the Backend Server obtains the virtual CPU number when creating the back-end thread; if the Guest operating system carries the virtual CPU number in the channel initialization message sent to the Backend Server, the Backend Server extracts the CPU number from the channel initialization message and performs the binding.
It should be understood that, to improve efficiency, once the front-end thread and the corresponding back-end thread have been bound to the same physical CPU, subsequent remote API calls can be made directly during virtualization without repeating the binding; that is, the steps of binding the front-end thread and the back-end thread to the same physical CPU, namely S503-S505, need not be executed again.
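One simple way to skip the repeated binding is a once-only check on the Guest side, sketched below under the assumption of a thread-local flag; the flag, the helper name and the GCC-style __thread storage class are illustrative only and are not prescribed by the patent.

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative Guest-side sketch: perform S503-S505 only on the first remote
 * API call from this front-end thread; later calls reuse the binding. */
static __thread bool channel_bound = false;

static void ensure_bound_once(void)
{
    if (channel_bound)
        return;
    /* detect the caller's vCPU, bind the front-end thread, and send the
     * vCPU number in the channel-initialization message (S503-S505) */
    channel_bound = true;
}
```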
S506. The front-end thread sends the processing instruction to the back-end thread.
In a specific implementation, for this step, reference may be made to S305 in Embodiment 1 and S402 in Embodiment 2 of the present application; repeated details are not described again.
S507. Execute the processing instruction at the back-end thread and obtain a processing result; use the processing result as the response to the application interface call instruction, or return it to the front-end thread.
In a specific implementation, for this step, reference may be made to the implementation of step S403 in Embodiment 2 of the present application; repeated details are not described again.
At this point, remote invocation of hardware devices/modules by user programs across multiple operating systems on a multi-core processor is realized; that is, a virtualization scheme in which the front-end thread and the back-end thread run on the same physical CPU is realized.
With the virtualization method in this embodiment of the present application, the front-end thread and the corresponding back-end thread are bound to the same physical CPU, so that during virtualization the switches between the Guest operating system and the Host operating system, and between the front-end thread and the back-end thread, are performed on the same physical CPU. This shortens the switching time, reduces the response time to the application interface call instruction, and improves the user experience.
Based on the same inventive concept, an embodiment of the present application further provides a virtualization apparatus applied to a multi-core processor. Since the principle by which the apparatus solves the problem is similar to that of the virtualization method provided in Embodiment 1 of the present application, the implementation of the apparatus may refer to the implementation of the method; repeated details are not described again.
Embodiment 4
FIG. 6 shows a schematic structural diagram of a virtualization apparatus according to Embodiment 4 of the present application.
As shown in FIG. 6, the virtualization apparatus 600 according to Embodiment 4 of the present application includes a binding module 601, configured to bind a front-end thread and a back-end thread to the same physical central processing unit (CPU), where the front-end thread is configured to determine, according to an application interface call instruction received at a first operating system, a processing instruction corresponding to the application interface call instruction, and to send the processing instruction to a corresponding back-end thread at a second operating system; and the back-end thread is configured to receive and execute the processing instruction at the second operating system, and to use the processing result as the response to the application interface call instruction or return it to the front-end thread.
Specifically, the binding module includes: a correspondence establishing submodule, configured to establish a correspondence between each virtual CPU and each physical CPU at the processor emulator Qemu; a first binding submodule, configured to bind, at the first operating system, the front-end thread to a virtual CPU and to send the identifier of the bound virtual CPU to the second operating system; and a second binding submodule, configured to receive, in the second operating system, the identifier of the virtual CPU from the first operating system and to bind the back-end thread to that virtual CPU.
Specifically, the correspondence establishing submodule is configured to: when Qemu creates the virtual CPUs, bind the threads that maintain the respective virtual CPUs to the corresponding physical CPUs.
Specifically, the first binding submodule is configured to: obtain the identifier of the virtual CPU on which the thread invoking the front-end thread runs, the identifier being used to identify that virtual CPU; bind the front-end thread to the virtual CPU corresponding to the identifier; and send the identifier of the bound virtual CPU to the second operating system.
Specifically, the first binding submodule is configured to: when the front-end thread is started, obtain the identifier of the virtual CPU on which the thread invoking the front-end thread runs; and carry the identifier of the virtual CPU in the channel initialization information between the front-end thread and the back-end thread and send the channel initialization information to the second operating system. The second binding submodule is configured to: receive from the first operating system the channel initialization information between the front-end thread and the back-end thread, where the channel initialization information carries the identifier of the virtual CPU; extract the identifier of the virtual CPU from the channel initialization information; and bind the back-end thread to that virtual CPU.
With the virtualization apparatus in this embodiment of the present application, the front-end thread and the corresponding back-end thread are bound to the same physical CPU, so that during virtualization the switches between the Guest operating system and the Host operating system, and between the front-end thread and the back-end thread, are performed on the same physical CPU. This shortens the switching time, reduces the response time to the application interface call instruction, and improves the user experience.
Embodiment 5
Based on the same inventive concept, an embodiment of the present application further provides an electronic device 700 as shown in FIG. 7.
As shown in FIG. 7, the electronic device 700 according to Embodiment 5 of the present application includes: a display 701, a memory 702, one or more processors 703, a bus 704, and one or more modules, where the one or more modules are stored in the memory and configured to be executed by the one or more processors, and include instructions for performing the steps of any one of the methods of Embodiments 1 to 3 of the present application.
Based on the same inventive concept, an embodiment of the present application further provides a computer program product that encodes instructions for executing a process, where the process includes the virtualization method with the steps of Embodiment 1 of the present application.
In a specific implementation, the computer program product may be used in conjunction with the electronic device 700.
Those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the present application.
Obviously, those skilled in the art can make various changes and variations to the present application without departing from the spirit and scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such changes and variations.

Claims (12)

  1. A virtualization method, applied to a multi-core processor, comprising:
    binding a front-end thread and a back-end thread to the same physical central processing unit (CPU); wherein
    the front-end thread is configured to determine, according to an application interface call instruction received at a first operating system, a processing instruction corresponding to the application interface call instruction, and to send the processing instruction to a corresponding back-end thread at a second operating system; and
    the back-end thread is configured to receive and execute the processing instruction at the second operating system, and to use the processing result as a response to the application interface call instruction or return it to the front-end thread.
  2. The method according to claim 1, wherein binding the front-end thread and the back-end thread to the same physical central processing unit (CPU) comprises:
    establishing a correspondence between each virtual CPU and each physical CPU at the processor emulator Qemu;
    at the first operating system, binding the front-end thread to a virtual CPU, and sending an identifier of the bound virtual CPU to the second operating system; and
    in the second operating system, receiving the identifier of the virtual CPU from the first operating system, and binding the back-end thread to the virtual CPU.
  3. The method according to claim 2, wherein establishing the correspondence between each virtual CPU and each physical CPU at Qemu comprises:
    when Qemu creates the virtual CPUs, binding the threads that maintain the respective virtual CPUs to the corresponding physical CPUs.
  4. The method according to claim 2, wherein binding the front-end thread to a virtual CPU comprises:
    obtaining an identifier of the virtual CPU on which the thread invoking the front-end thread runs, the identifier being used to identify the virtual CPU; and
    binding the front-end thread to the virtual CPU corresponding to the virtual CPU identifier.
  5. The method according to claim 4, wherein obtaining the identifier of the virtual CPU on which the thread invoking the front-end thread runs comprises:
    when the front-end thread is started, obtaining the identifier of the virtual CPU on which the thread invoking the front-end thread runs;
    sending the identifier of the bound virtual CPU to the second operating system comprises: carrying the identifier of the virtual CPU in channel initialization information between the front-end thread and the back-end thread, and sending the channel initialization information to the second operating system; and
    receiving the identifier of the virtual CPU from the first operating system comprises:
    receiving, from the first operating system, the channel initialization information between the front-end thread and the back-end thread, wherein the channel initialization information carries the identifier of the virtual CPU; and
    extracting the identifier of the virtual CPU from the channel initialization information.
  6. A virtualization apparatus, applied to a multi-core processor, comprising:
    a binding module, configured to bind a front-end thread and a back-end thread to the same physical central processing unit (CPU); wherein
    the front-end thread is configured to determine, according to an application interface call instruction received at a first operating system, a processing instruction corresponding to the application interface call instruction, and to send the processing instruction to a corresponding back-end thread at a second operating system; and
    the back-end thread is configured to receive and execute the processing instruction at the second operating system, and to use the processing result as a response to the application interface call instruction or return it to the front-end thread.
  7. The apparatus according to claim 6, wherein the binding module comprises:
    a correspondence establishing submodule, configured to establish a correspondence between each virtual CPU and each physical CPU at the processor emulator Qemu;
    a first binding submodule, configured to bind, at the first operating system, the front-end thread to a virtual CPU, and to send an identifier of the bound virtual CPU to the second operating system; and
    a second binding submodule, configured to receive, in the second operating system, the identifier of the virtual CPU from the first operating system, and to bind the back-end thread to the virtual CPU.
  8. The apparatus according to claim 7, wherein the correspondence establishing submodule is configured to:
    when Qemu creates the virtual CPUs, bind the threads that maintain the respective virtual CPUs to the corresponding physical CPUs.
  9. The apparatus according to claim 7, wherein the first binding submodule is configured to:
    obtain an identifier of the virtual CPU on which the thread invoking the front-end thread runs, the identifier being used to identify the virtual CPU;
    bind the front-end thread to the virtual CPU corresponding to the virtual CPU identifier; and
    send the identifier of the bound virtual CPU to the second operating system.
  10. The apparatus according to claim 9, wherein the first binding submodule is configured to:
    when the front-end thread is started, obtain the identifier of the virtual CPU on which the thread invoking the front-end thread runs; and carry the identifier of the virtual CPU in channel initialization information between the front-end thread and the back-end thread, and send the channel initialization information to the second operating system;
    and the second binding submodule is configured to:
    receive, from the first operating system, the channel initialization information between the front-end thread and the back-end thread, wherein the channel initialization information carries the identifier of the virtual CPU; and
    extract the identifier of the virtual CPU from the channel initialization information, and bind the back-end thread to the virtual CPU.
  11. An electronic device, comprising: a display, a memory, one or more processors; and one or more modules, wherein the one or more modules are stored in the memory and configured to be executed by the one or more processors, and the one or more modules comprise instructions for performing the steps of the method according to any one of claims 1 to 5.
  12. A computer program product, which encodes instructions for executing a process, the process comprising the method according to any one of claims 1 to 5.
PCT/CN2016/111590 2016-12-22 2016-12-22 Virtualisation method and device, electronic device, and computer program product WO2018112855A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/111590 WO2018112855A1 (en) 2016-12-22 2016-12-22 Virtualisation method and device, electronic device, and computer program product
CN201680002851.7A CN106796530B (en) 2016-12-22 2016-12-22 A kind of virtual method, device and electronic equipment, computer program product

Publications (1)

Publication Number Publication Date
WO2018112855A1 true WO2018112855A1 (en) 2018-06-28

Family

ID=58952282

Country Status (2)

Country Link
CN (1) CN106796530B (en)
WO (1) WO2018112855A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110489212B (en) * 2019-08-20 2022-08-02 东软集团股份有限公司 Universal input/output port virtualization method and device and vehicle machine
CN110764901B (en) * 2019-09-17 2021-02-19 创新先进技术有限公司 Data processing method based on GPU (graphics processing Unit) resources, electronic equipment and system
CN110673928B (en) * 2019-09-29 2021-12-14 天津卓朗科技发展有限公司 Thread binding method, thread binding device, storage medium and server
CN111930425B (en) * 2020-06-23 2022-06-10 联宝(合肥)电子科技有限公司 Data control method and device and computer readable storage medium
CN113467884A (en) * 2021-05-25 2021-10-01 阿里巴巴新加坡控股有限公司 Resource allocation method and device, electronic equipment and computer readable storage medium
CN113553124B (en) * 2021-05-26 2022-06-21 武汉深之度科技有限公司 Application program running method, computing device and storage medium
CN116795557A (en) * 2022-03-15 2023-09-22 华为技术有限公司 Communication method, electronic device, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090100424A1 (en) * 2007-10-12 2009-04-16 International Business Machines Corporation Interrupt avoidance in virtualized environments
CN102279766A (en) * 2011-08-30 2011-12-14 华为技术有限公司 Method and system for concurrently simulating processors and scheduler
CN103092675A (en) * 2012-12-24 2013-05-08 北京伸得纬科技有限公司 Virtual environment construction method
CN104461735A (en) * 2014-11-28 2015-03-25 杭州华为数字技术有限公司 Method and device for distributing CPU resources in virtual scene
CN104778075A (en) * 2015-04-03 2015-07-15 北京奇虎科技有限公司 Method and device for calling Java layer API (Application Program Interface) by native layer in Android system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114443192A (en) * 2021-12-27 2022-05-06 天翼云科技有限公司 Multi-window virtual application method and device based on cloud desktop
CN114443192B (en) * 2021-12-27 2024-04-26 天翼云科技有限公司 Multi-window virtual application method and device based on cloud desktop
CN114553753A (en) * 2022-01-07 2022-05-27 中信科移动通信技术股份有限公司 Method, device and system for debugging and testing communication module with serial communication interface
CN114553753B (en) * 2022-01-07 2024-03-15 中信科移动通信技术股份有限公司 Method, device and system for adjusting and measuring communication module with serial communication interface

Also Published As

Publication number Publication date
CN106796530B (en) 2019-01-25
CN106796530A (en) 2017-05-31

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16924394

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16924394

Country of ref document: EP

Kind code of ref document: A1