WO2018119952A1 - Device virtualization method, apparatus and system, electronic device, and computer program product - Google Patents

Device virtualization method, apparatus and system, electronic device, and computer program product Download PDF

Info

Publication number
WO2018119952A1
WO2018119952A1 PCT/CN2016/113265
Authority
WO
WIPO (PCT)
Prior art keywords
operating system
shared memory
storage area
instruction
memory
Prior art date
Application number
PCT/CN2016/113265
Other languages
English (en)
French (fr)
Inventor
温燕飞
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201680002834.3A priority Critical patent/CN107077377B/zh
Priority to PCT/CN2016/113265 priority patent/WO2018119952A1/zh
Publication of WO2018119952A1 publication Critical patent/WO2018119952A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533: Hypervisors; Virtual machine monitors
    • G06F9/45558: Hypervisor-specific management and integration aspects
    • G06F2009/45583: Memory management, e.g. access or allocation

Definitions

  • The present application relates to computer technology, and in particular to a device virtualization method, apparatus, system, electronic device, and computer program product.
  • A virtualization architecture based on Qemu/KVM (Kernel-based Virtual Machine) technology is shown in FIG. 1.
  • The virtualization architecture based on Qemu/KVM technology consists of one Host operating system and one or more virtualized Guest operating systems. The Host operating system includes multiple Host user-space programs and the Host Linux Kernel. Each Guest operating system includes its own user space, a Guest Linux Kernel, and Qemu. These operating systems run on the same hardware processor chip and share the processor and peripheral resources.
  • An ARM processor supporting this virtualization architecture provides at least the EL2, EL1, and EL0 modes: the virtual machine manager (Hypervisor) runs in EL2 mode, the Linux kernel runs in EL1 mode, and user-space programs run in EL0 mode.
  • The Hypervisor layer manages hardware resources such as the CPU, memory, timers, and interrupts, and through these virtualized CPU, memory, timer, and interrupt resources it can load different operating systems onto the physical processor in a time-shared manner, thereby implementing system virtualization.
  • KVM/Hypervisor spans the Host Linux kernel and the Hypervisor. On one hand it provides a driver node for the emulated processor Qemu, allowing Qemu to create virtual CPUs through the KVM node and manage virtualized resources; on the other hand it can switch the Host Linux system off the physical CPU, load the Guest Linux system onto the physical processor, and handle the follow-up work when the Guest Linux system exits abnormally.
  • Qemu runs as a Host Linux application and provides virtual physical device resources for the running of Guest Linux. Through the KVM node of the KVM/Hypervisor module it creates virtual CPUs and allocates physical device resources, so that an unmodified Guest Linux can be loaded onto the physical processor and run.
  • Cross-system remote API calls mainly involve the transfer of function parameters, the return of execution results, the execution time of the functions, and synchronization.
  • A prior-art system architecture for cross-system remote API calls is shown in FIG. 2. As shown in FIG. 2, an API call is initiated by the Guest Android system, passes through the HAL layer, the Guest Linux Kernel, and Qemu to the Host Backend Server, and then invokes the Host Linux kernel driver to access the physical device.
  • For physical devices with high performance requirements, such as GPU, multimedia, and camera devices, the above software architecture can hardly meet the desired performance.
  • A device virtualization method, apparatus, system, electronic device, and computer program product are provided, mainly to solve the problem that prior-art device virtualization methods perform poorly.
  • A device virtualization method includes: creating a shared memory at a first operating system and mapping the shared memory into the Peripheral Component Interconnect (PCI) device memory space of a second operating system, where the shared memory corresponds to one physical device; receiving, at the second operating system, an application programming interface (API) operation instruction for the physical device and determining the corresponding processing instruction from it; passing the processing instruction to the first operating system through the shared memory; and executing the processing instruction at the first operating system and returning the processing result to the second operating system as the response to the API operation instruction, or via the shared memory.
  • A device virtualization apparatus includes: a shared memory creation module configured to create a shared memory at the first operating system and map it into the PCI device memory space of the second operating system, where the shared memory corresponds to one physical device; a receiving module configured to receive, at the second operating system, an API operation instruction for the physical device and determine the corresponding processing instruction from it; a sending module configured to pass the processing instruction to the first operating system through the shared memory; and a processing module configured to execute the processing instruction at the first operating system and return the processing result to the second operating system as the response to the API operation instruction, or via the shared memory.
  • An electronic device includes: a display, a memory, one or more processors, and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the virtualization method according to the first aspect of the embodiments of the present application.
  • A computer program product encodes instructions for performing a process that includes the virtualization method according to the first aspect of the embodiments of the present application.
  • With the device virtualization method, apparatus, system, electronic device, and computer program product according to the embodiments of the present application, a shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through that shared memory. Because the two operating systems relay API calls through the shared memory, system latency during virtualization is reduced and system performance is improved.
  • FIG. 1 is a schematic diagram of a virtualization architecture based on Qemu/KVM technology;
  • FIG. 2 shows a prior-art system architecture for cross-system remote API calls;
  • FIG. 3 illustrates a system architecture for implementing a device virtualization method in an embodiment of the present application
  • FIG. 4 is a flowchart of a device virtualization method according to Embodiment 1 of the present application.
  • FIG. 5 is a flowchart of a device virtualization method according to Embodiment 2 of the present application.
  • FIG. 6 is a schematic structural diagram of a device virtualization apparatus according to Embodiment 3 of the present application.
  • FIG. 7 is a schematic structural diagram of a device virtualization system according to Embodiment 4 of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device according to Embodiment 5 of the present application.
  • The inventor found that the prior art uses the virtualization flow shown in FIG. 2: every link, from the Guest user-space program, to the HAL, to the system call into the Guest Linux Kernel layer, to the process switch from Qemu to the Backend Server, consumes processor time, and a single remote API call requires multiple parameter transfers, possibly of parameters carrying large amounts of data. When the virtualized operating system calls these devices, system latency therefore increases greatly and performance drops several-fold compared with the Host system.
  • A device virtualization method, apparatus, system, electronic device, and computer program product are therefore provided: a shared memory is created between a first operating system and a second operating system, and the physical device is then virtualized through that shared memory. Because the first and second operating systems relay API calls through the shared memory, system latency during virtualization is reduced and system performance is improved.
  • the solution in the embodiment of the present application can be applied to various scenarios, for example, an intelligent terminal adopting a virtualization architecture based on Qemu/KVM technology, an Android simulator, a server virtualization platform, and the like.
  • The solutions in the embodiments of the present application can be implemented in various computer languages, for example the object-oriented programming language Java.
  • FIG. 3 illustrates a system architecture for implementing a device virtualization method in an embodiment of the present application.
  • The device virtualization system includes a first operating system 301, a second operating system 302, multiple shared memories 303a, 303b, 303c, and multiple physical devices 304a, 304b, 304c.
  • The first operating system may be a Host operating system; the second operating system may be a Guest operating system.
  • The first operating system may also be a Guest operating system, and the second operating system may also be a Host operating system; this application does not limit this.
  • The Guest operating system 302 may include a user space 3021, a Guest Linux Kernel 3022, and an emulated processor Qemu 3023.
  • Interfaces of multiple virtual physical devices or modules may be provided in the user space of the Guest operating system.
  • The multiple interfaces may include a graphics program interface, a multimedia program interface, a camera program interface, and the like; more specifically, the graphics program interface may be a graphics interface such as the OpenGL (Open Graphics Library) API, Direct 3D, or Quick Draw 3D, and the multimedia/video program interface may be an OpenMAX (Open Media Acceleration) interface or the like; this application does not limit this.
  • the host operating system 301 can include a user space 3011 and a Host Linux Kernel 3012.
  • A backend server (Backend Server) corresponding to each interface in the Guest operating system can be provided in the user space of the Host operating system.
  • For example, when the graphics program interface in the Guest operating system is the OpenGL API, the backend server can be an OpenGL Backend Server, which operates the GPU device through the GPU driver in the Host Linux Kernel; when the multimedia/video program interface in the Guest operating system is the OpenMAX API, the backend server can be an OpenMAX Backend Server, which operates the corresponding multimedia/video device through the multimedia/video driver in the Host Linux Kernel.
  • The number of shared memories may correspond to the physical devices being virtualized; that is, one physical device corresponds to one shared memory.
  • For example, the GPU device corresponds to the shared memory 303a, the multimedia device corresponds to the shared memory 303b, and the camera device corresponds to the shared memory 303c.
  • the following describes the division of the shared memory in the embodiment of the present application in detail by taking the shared memory 303a corresponding to the GPU device as an example.
  • The shared memory 303a may include only the first storage area 3031, or may be divided into a first storage area 3031 and a second storage area 3032.
  • The first storage area may also be referred to as private memory; the second storage area may also be called common memory.
  • There is no fixed rule for dividing the first and second storage areas: the division may follow the sizes of the data each area typically stores, based on the designer's experience, or follow other preset policies; this application does not limit this.
  • The first storage area may be used to transfer functions and parameters, and/or synchronization information, between the individual threads of the Guest operating system and the Backend Server threads. Specifically, the private memory may be further divided into multiple blocks, one block being defined as one channel and one channel corresponding to one thread of the Guest operating system. The number of channels can be preset by the developer. The blocks may be divided evenly into equal sizes, or sized intelligently according to the functions, parameters, and/or synchronization information with which common threads in the system call the GPU; this application does not limit this.
  • The user programs of the Guest operating system can manage the channels in the private memory dynamically; that is, a user program can allocate, reallocate, and release channels in the private memory at any time.
  • The physical devices 304a, 304b, 304c may be physical devices that are not integrated into the central processing unit (CPU); more preferably, they may be physical devices with high throughput, such as GPU devices, multimedia devices, and camera devices.
  • For the purpose of illustration, the shared memory shown in FIG. 3 includes two storage areas, private memory and common memory; the private memory is divided into three equal-sized channels and the common memory into four channels of unequal size.
  • In a concrete implementation, the shared memory may include only the private memory; the private memory may be undivided or divided into multiple channels of unequal size; the common memory may be absent or divided into multiple channels of equal size, and so on; this application does not limit any of this.
  • S401 Create a shared memory corresponding to the GPU device when the Qemu corresponding to the guest system is started.
  • Qemu can create a corresponding shared memory through a system call.
  • A specific address space can be carved out of memory as the shared memory for the GPU device.
  • the size of the shared memory can be set by the developer and adapted to the respective physical device.
  • the shared memory corresponding to the GPU device can be set to 128M; the shared memory corresponding to the multimedia device can be set to 64M; the shared memory corresponding to the camera device can be set to 64M, etc., which is not limited in this application.
  • When there are multiple Guest systems, a shared memory may be created for each physical device by the Qemu of each Guest system, or the multiple Guest systems may share the shared memory corresponding to one physical device.
  • Different schemes may also be used for different physical devices: for example, each Guest system may use an independent shared memory for the GPU device, while all Guest systems share one shared memory for the multimedia device; this application does not limit this.
  • the Guest Linux Kernel divides the shared memory into private memory and common memory.
  • the Guest Linux Kernel can partition the shared memory when initializing the GPU device; so that the shared memory supports access by multiple processes or threads.
  • The private memory, that is, the first storage area, may be divided into a first preset number of channels; the common memory, that is, the second storage area, may be divided into a second preset number of blocks.
  • the first preset number and the second preset number may be set by a developer.
  • When an API call instruction is received, a front-end thread corresponding to the API call instruction, that is, a first thread, may be created.
  • The thread creation instruction corresponding to the API call instruction is then sent to the Host operating system to trigger the Host operating system to create the corresponding back-end thread, that is, the second thread.
  • The user may perform a user operation on a thread in the Guest operating system; for example, in a thread such as WeChat or QQ, the user may open a new window or a new page, play multimedia/video, and so on.
  • When a user operation is received, the thread generates an API call instruction according to the user operation to invoke the corresponding front-end thread. For example, when the user opens a new window or a new page, the corresponding graphics processing interface can be called; when the user plays multimedia/video, the corresponding multimedia/video interface can be called.
  • Invoking the front-end thread usually also triggers the Host operating system to create a back-end thread corresponding to the front-end thread.
  • If the Guest system calls the graphics program processing interface, a corresponding back-end thread is created in the graphics-processing backend server of the Host operating system; if the multimedia program processing interface is called, a corresponding back-end thread is created in the multimedia-processing backend server of the Host operating system.
  • When the front-end thread starts, the address space of the private-memory channel corresponding to the front-end thread, and the common-memory address space allocated to it, are obtained from the Guest Linux Kernel and mapped into the address space of the front-end thread, thereby establishing a synchronization control channel with Qemu.
  • Usually, one channel of the private memory is allocated to the front-end thread, while the common memory is allocated to it in its entirety.
  • The address space of the private-memory channel corresponding to the front-end thread, and the address space of the common memory, can then be passed to Qemu through the PCI configuration space; Qemu sends them to the backend server through an inter-process communication mechanism, and they are mapped into the address space of the back-end thread.
  • When an API operation instruction for the GPU device is received at the front-end thread in the Guest user space, the corresponding processing instruction may be determined from the API operation instruction; the processing instruction is passed through the shared memory to the back-end thread in the Backend Server of the Host system; the processing instruction is then executed at the back-end thread, and the processing result is returned to the front-end thread as the response to the API call instruction or via the shared memory.
  • Passing the processing instruction through the shared memory to the back-end thread in the Backend Server of the Host system can be implemented in several ways:
  • When the processing instruction includes the API call function, parameters, and synchronization information, the front-end thread can write the function, parameters, and synchronization information into the corresponding private-memory channel, and send the offset address of the function and parameters to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address.
  • The offset address can be sent through Qemu to the backend server of the Host operating system and then synchronized by the backend server to the back-end thread.
  • When the processing instruction also includes graphics content data, the front-end thread can write the function and parameters into the corresponding private-memory channel, write the graphics content data into the common memory, and send the offset address, within the shared memory, of the processing instruction to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address.
  • Again, the offset address can be sent through Qemu to the backend server of the Host operating system and then synchronized by the backend server to the back-end thread.
  • When the processing instruction includes the API call function, parameters, synchronization information, and graphics content data, the front-end thread can write the function, parameters, and synchronization information into the corresponding private-memory channel, write the graphics content data into the common memory, and send the offset address, within the shared memory, of the processing instruction to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address.
  • The offset address can likewise be sent through Qemu to the backend server of the Host operating system and then synchronized by the backend server to the back-end thread.
  • The switch from the front-end thread to the back-end thread, and the switches between the first operating system and the second operating system, all use techniques common to those skilled in the art and are not elaborated in this application.
  • The back-end thread drives the corresponding physical device/module to execute the processing instruction and obtains the processing result.
  • The back-end thread may feed the processing result directly back to the user as the response to the application interface call instruction, or return the processing result to the front-end thread, which then responds.
  • A shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through the shared memory; because the first and second operating systems relay API calls through this shared memory, system latency during virtualization is reduced and system performance is improved.
  • FIG. 5 is a flowchart of a device virtualization method according to Embodiment 2 of the present application.
  • Taking one Guest operating system, one Host operating system, and three physical devices (a GPU device, a multimedia device, and a camera device) as an example, a device virtualization method for multiple physical devices is described in detail.
  • the device virtualization method according to an embodiment of the present application includes the following steps:
  • the process of creating a shared memory corresponding to the multimedia device and the camera device may refer to the process of creating a shared memory corresponding to the GPU device in S401 in the first embodiment of the present application, and details are not described herein.
  • Qemu further maps each shared memory to the PCI device memory space of the guest system, and provides a corresponding number of virtual PCI registers as the PCI configuration space for the guest system.
  • The number of virtual PCI registers corresponds to the number of shared memories, in one-to-one correspondence.
  • the Guest Linux Kernel divides the multiple shared memories into private memory and common memory.
  • the physical device corresponding to the API call instruction may be determined according to an API call instruction that invokes the front-end thread, and the corresponding shared memory is determined according to the physical device.
  • If the API call instruction of the front-end thread is an OpenGL interface call instruction, the corresponding shared memory is the one for the GPU device;
  • if the API call instruction of the front-end thread is an OpenMAX interface call instruction, the corresponding shared memory is the one for the multimedia device;
  • if the API call instruction of the front-end thread is a Camera interface call instruction, the corresponding shared memory is the one for the camera device.
  • For the allocation of the shared-memory address spaces in this step, reference may be made to the implementation of S404 in Embodiment 1 of this application; details are not repeated here.
  • A shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through the shared memory; because the first and second operating systems relay API calls through this shared memory, system latency during virtualization is reduced and system performance is improved.
  • FIG. 6 is a schematic structural diagram of a device virtualization apparatus according to Embodiment 3 of the present application.
  • The shared memory creation module specifically includes: a shared memory creation submodule, configured to create the shared memory for the physical device when the Qemu corresponding to the second operating system starts; and a mapping submodule, configured to map the shared memory into the PCI device memory space of the second operating system and to provide the second operating system with virtual PCI registers as the PCI configuration space.
  • When there are multiple physical devices, the shared memory creation module is specifically configured to: create a shared memory for each physical device when the emulated processor Qemu corresponding to the second operating system starts; map the multiple shared memories respectively into the PCI device memory space of the second operating system; and provide the second operating system with multiple virtual PCI registers as PCI configuration spaces, the multiple PCI registers respectively corresponding to the multiple shared memories.
  • The device virtualization apparatus further includes: a dividing module, configured to divide the shared memory into a first storage area and a second storage area, where the first storage area includes a first preset number of channels and the second storage area includes a second preset number of blocks.
  • the sizes of the multiple channels of the first storage area are equal; the sizes of the multiple blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
  • When there are multiple physical devices, the apparatus further includes: a shared memory determining module, configured to determine, according to the API operation instruction, the physical device corresponding to the API operation instruction, and to determine the corresponding shared memory according to the physical device.
  • The device virtualization apparatus further includes: a first mapping module, configured to, in the second operating system, on receiving an API call instruction, create a first thread corresponding to the API call instruction, send a thread creation instruction corresponding to the API call instruction to the first operating system, allocate to the first thread the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area, and pass the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space; and a second mapping module, configured to, in the first operating system, after receiving the thread creation instruction corresponding to the API call instruction, create a corresponding second thread and map the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area into the address space of the second thread. The sending module is specifically configured to write, through the first thread, the processing instruction into the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area, send the offset address of the processing instruction within that address space to the first operating system through Qemu, and, in the first operating system, synchronize the received offset address to the corresponding second thread.
  • A shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through the shared memory; because the first and second operating systems relay API calls through this shared memory, system latency during virtualization is reduced and system performance is improved.
  • FIG. 7 is a schematic structural diagram of a device virtualization system according to Embodiment 4 of the present application.
  • The device virtualization system 700 includes: a second operating system 701, configured to receive an application interface (API) call instruction for a physical device, determine the processing instruction corresponding to the application interface call instruction, and send the processing instruction to the first operating system 702 via the shared memory corresponding to the physical device; one or more shared memories 703, configured to transfer processing instructions between the first operating system and the second operating system, where the one or more shared memories respectively correspond to the physical devices; and a first operating system 702, configured to receive and execute the processing instruction and return the processing result to the second operating system as the response to the application interface call instruction or via the shared memory corresponding to the physical device.
  • For the implementation of the first operating system 702, refer to the implementation of the first operating system 301 in Embodiment 1 of the present application; details are not repeated here.
  • The first operating system may be a Guest operating system, and the second operating system may be a Host operating system.
  • A shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through the shared memory; because the first and second operating systems relay API calls through this shared memory, system latency during virtualization is reduced and system performance is improved.
  • an electronic device 800 as shown in FIG. 8 is also provided in the embodiment of the present application.
  • The electronic device 800 includes: a display 801, a memory 802, a processor 803, a bus 804, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of any of the methods of Embodiment 1 or Embodiment 2 of the present application.
  • A computer program product that can be used in conjunction with an electronic device 800 including a display is also provided; the computer program product includes a computer-readable storage medium and a computer program mechanism embedded therein.
  • The computer program mechanism includes instructions for performing the steps of any of the methods of Embodiment 1 or Embodiment 2 of the present application.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the present application. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in them, can be implemented by computer program instructions.
  • These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Multi Processors (AREA)

Abstract

A device virtualization method, apparatus, system (700), electronic device (800), and computer program product. The method includes: creating a shared memory (303a, 303b, 303c, 703) at a first operating system (301, 702) and mapping the shared memory (303a, 303b, 303c, 703) into the Peripheral Component Interconnect (PCI) device memory space of a second operating system (302, 701), where the shared memory (303a, 303b, 303c, 703) corresponds to one physical device (304a, 304b, 304c); receiving, at the second operating system (302, 701), an application programming interface (API) operation instruction for the physical device (304a, 304b, 304c) and determining the corresponding processing instruction according to the API operation instruction; passing the processing instruction to the first operating system (301, 702) through the shared memory (303a, 303b, 303c, 703); and executing the processing instruction at the first operating system (301, 702) and returning the processing result to the second operating system (302, 701) as the response to the API operation instruction or via the shared memory (303a, 303b, 303c, 703). This scheme reduces system latency during virtualization and improves system performance.

Description

Device virtualization method, apparatus and system, electronic device, and computer program product
Technical Field
The present application relates to computer technology, and in particular to a device virtualization method, apparatus and system, an electronic device, and a computer program product.
Background
FIG. 1 shows a virtualization architecture based on Qemu/KVM (Kernel-based Virtual Machine) technology.
As shown in FIG. 1, the virtualization architecture based on Qemu/KVM technology consists of one Host operating system and one or more virtualized Guest operating systems. The Host operating system includes multiple Host user-space programs and the Host Linux Kernel. Each Guest operating system includes its own user space, a Guest Linux Kernel, and Qemu. These operating systems run on the same hardware processor chip and share the processor and peripheral resources. An ARM processor supporting this virtualization architecture provides at least three modes, EL2, EL1, and EL0: the virtual machine manager (Hypervisor) runs in EL2 mode, the Linux kernel runs in EL1 mode, and user-space programs run in EL0 mode. The Hypervisor layer manages hardware resources such as the CPU, memory, timers, and interrupts, and, through the virtualized CPU, memory, timer, and interrupt resources, can load different operating systems onto the physical processor in a time-shared manner, thereby implementing system virtualization.
KVM/Hypervisor spans the Host Linux kernel and the Hypervisor. On one hand it provides a driver node for the emulated processor Qemu, allowing Qemu to create virtual CPUs through the KVM node and manage virtualized resources; on the other hand it can switch the Host Linux system off the physical CPU, load the Guest Linux system onto the physical processor, and handle the follow-up work when the Guest Linux system exits abnormally.
Qemu runs as an application of Host Linux and provides virtual physical device resources for the running of Guest Linux. Through the KVM device node of the KVM/Hypervisor module it creates virtual CPUs and allocates physical device resources, so that an unmodified Guest Linux can be loaded onto the physical processor and run.
When Guest Linux needs to access a physical device, such as a GPU (Graphics Processing Unit), a multimedia device, or a camera device, these physical devices have to be virtualized locally. At present this is usually done by relaying through Qemu to the driver nodes of the Host Linux kernel. Specifically, these physical devices expose a fairly large number of API (Application Programming Interface) functions, and the devices can be virtualized by calling those APIs remotely; a suitable layer can be chosen from the Host and Guest system software stacks for the API relay. For example, for an Android system, Guest Android can relay APIs at the HAL (Hardware Abstraction Layer), with a Backend Server implemented in Host Linux user space, so that the Guest system ultimately performs remote calls of the API functions through the Host system.
Cross-system remote API calls mainly involve the transfer of function parameters, the return of execution results, the execution time of the functions, and synchronization. FIG. 2 shows a prior-art system architecture for cross-system remote API calls. As shown in FIG. 2, an API call is initiated by the Guest Android system, passes through the HAL layer, the Guest Linux Kernel, and Qemu to the Host Backend Server, and then invokes the Host Linux kernel driver to access the physical device. For physical devices with high performance requirements, such as GPU, multimedia, and camera devices, the above software architecture can hardly meet the desired performance.
Summary
Embodiments of the present application provide a device virtualization method, apparatus and system, an electronic device, and a computer program product, mainly to solve the problem that prior-art device virtualization methods perform poorly.
According to a first aspect of the embodiments of the present application, a device virtualization method is provided, including: creating a shared memory at a first operating system and mapping the shared memory into the Peripheral Component Interconnect (PCI) device memory space of a second operating system, where the shared memory corresponds to one physical device; receiving, at the second operating system, an application programming interface (API) operation instruction for the physical device and determining the corresponding processing instruction according to the API operation instruction; passing the processing instruction to the first operating system through the shared memory; and executing the processing instruction at the first operating system and returning the processing result to the second operating system as the response to the API operation instruction or via the shared memory.
According to a second aspect of the embodiments of the present application, a device virtualization apparatus is provided, including: a shared memory creation module, configured to create a shared memory at a first operating system and map the shared memory into the PCI device memory space of a second operating system, where the shared memory corresponds to one physical device; a receiving module, configured to receive, at the second operating system, an API operation instruction for the physical device and determine the corresponding processing instruction according to the API operation instruction; a sending module, configured to pass the processing instruction to the first operating system through the shared memory; and a processing module, configured to execute the processing instruction at the first operating system and return the processing result to the second operating system as the response to the API operation instruction or via the shared memory.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including: a display, a memory, and one or more processors; and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the virtualization method according to the first aspect of the embodiments of the present application.
According to a fourth aspect of the embodiments of the present application, a computer program product is provided, the computer program product encoding instructions for performing a process, the process including the virtualization method according to the first aspect of the embodiments of the present application.
With the device virtualization method, apparatus, system, electronic device, and computer program product according to the embodiments of the present application, a shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through that shared memory. Because the first and second operating systems relay API calls through the shared memory, system latency during virtualization is reduced and system performance is improved.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the present application and form part of it; the exemplary embodiments of the present application and their description are used to explain the application and do not unduly limit it. In the drawings:
FIG. 1 is a schematic diagram of a virtualization architecture based on Qemu/KVM technology;
FIG. 2 shows a prior-art system architecture for cross-system remote API calls;
FIG. 3 shows a system architecture for implementing the device virtualization method in an embodiment of the present application;
FIG. 4 is a flowchart of a device virtualization method according to Embodiment 1 of the present application;
FIG. 5 is a flowchart of a device virtualization method according to Embodiment 2 of the present application;
FIG. 6 is a schematic structural diagram of a device virtualization apparatus according to Embodiment 3 of the present application;
FIG. 7 is a schematic structural diagram of a device virtualization system according to Embodiment 4 of the present application;
FIG. 8 is a schematic structural diagram of an electronic device according to Embodiment 5 of the present application.
Detailed Description
In the course of implementing the present application, the inventor found that the prior art uses the virtualization flow shown in FIG. 2: every link, from the Guest user-space program, to the HAL, to the system call into the Guest Linux Kernel layer, to the process switch from Qemu to the Backend Server, consumes processor time, and a single remote API call requires multiple parameter transfers, possibly of parameters carrying a large amount of data. As a result, when the virtualized operating system calls these devices, system latency increases greatly and performance drops several-fold compared with the Host system.
In view of the above, embodiments of the present application provide a device virtualization method, apparatus and system, an electronic device, and a computer program product: a shared memory is created between a first operating system and a second operating system, and the physical device is then virtualized through that shared memory. Because the first and second operating systems relay API calls through the shared memory, system latency during virtualization is reduced and system performance is improved.
The solutions in the embodiments of the present application can be applied in various scenarios, for example an intelligent terminal using a Qemu/KVM-based virtualization architecture, an Android emulator, or a server virtualization platform.
The solutions in the embodiments of the present application can be implemented in various computer languages, for example the object-oriented programming language Java.
To make the technical solutions and advantages of the embodiments clearer, exemplary embodiments of the present application are described in further detail below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not an exhaustive list of all of them. It should be noted that, where they do not conflict, the embodiments of the present application and the features in them may be combined with one another.
Embodiment 1
FIG. 3 shows a system architecture for implementing the device virtualization method in an embodiment of the present application. As shown in FIG. 3, the device virtualization system according to this embodiment includes a first operating system 301, a second operating system 302, multiple shared memories 303a, 303b, 303c, and multiple physical devices 304a, 304b, 304c. Specifically, the first operating system may be the Host operating system, and the second operating system may be a Guest operating system. It should be understood that in a concrete implementation the first operating system could also be a Guest operating system and the second operating system a Host operating system; the present application places no restriction on this.
Next, the specific implementation of the present application is described in detail, taking the case where the first operating system is the Host operating system and the second operating system is a Guest operating system as an example.
Specifically, the Guest operating system 302 may include a user space 3021, a Guest Linux Kernel 3022, and an emulated processor Qemu 3023. The user space of the Guest operating system may provide interfaces to multiple virtual physical devices or modules; specifically, these interfaces may include a graphics program interface, a multimedia program interface, a camera program interface, and so on. More specifically, the graphics program interface may be, for example, a graphics interface such as the OpenGL (Open Graphics Library) API, Direct 3D, or Quick Draw 3D, and the multimedia/video program interface may be, for example, the OpenMAX (Open Media Acceleration) interface; the present application places no restriction on this.
Specifically, the Host operating system 301 may include a user space 3011 and a Host Linux Kernel 3012. The user space of the Host operating system may provide a Backend Server corresponding to each interface in the Guest operating system. For example, when the graphics program interface in the Guest operating system is the OpenGL API, the backend server may be an OpenGL Backend Server, which operates the GPU device through the GPU driver in the Host Linux Kernel; when the multimedia/video program interface in the Guest operating system is the OpenMAX API, the backend server may be an OpenMAX Backend Server, which operates the corresponding multimedia/video device through the multimedia/video driver in the Host Linux Kernel.
In a concrete implementation, the shared memories 303a, 303b, 303c are blocks of memory visible to both the Guest operating system and the Host operating system, and they are readable and writable by both; that is, both the Guest and Host operating systems can perform read and write operations on the shared memory.
In a concrete implementation, the number of shared memories may correspond to the physical devices being virtualized; that is, one physical device corresponds to one shared memory. For example, the GPU device corresponds to the shared memory 303a, the multimedia device to the shared memory 303b, the camera device to the shared memory 303c, and so on.
In a concrete implementation, the size of each shared memory can be set by the developer and adapted to the corresponding physical device. For example, the shared memory for the GPU device may be set to 128M, that for the multimedia device to 64M, and that for the camera device to 64M; the present application places no restriction on this.
Next, taking the shared memory 303a corresponding to the GPU device as an example, the division of the shared memory in the embodiment of the present application is described in detail.
In a concrete implementation, the shared memory 303a may include only the first storage area 3031, or may be divided into a first storage area 3031 and a second storage area 3032. Specifically, the first storage area may also be called private memory and the second storage area common memory. There is no fixed rule for dividing the two areas: the division may follow the sizes of the data each area typically stores, based on the designer's experience, or follow some other preset policy; the present application places no restriction on this.
Specifically, the first storage area may be used to transfer functions and parameters, and/or synchronization information, between the individual threads of the Guest operating system and the Backend Server threads. The private memory may be further divided into multiple blocks, one block being defined as one channel and one channel corresponding to one thread of the Guest operating system. The number of channels may be preset by the developer. The blocks may be divided evenly into equal sizes, or sized intelligently according to the functions and parameters, and/or synchronization information, with which the commonly used threads in the system call the GPU; the present application places no restriction on this. In a concrete implementation, the user programs of the Guest operating system can manage the channels in the private memory dynamically; that is, a user program can allocate, reallocate, and release channels in the private memory at any time.
Specifically, the second storage area may be used for large data blocks between all the threads of the Guest operating system and the Backend Server threads, for example the transfer of graphics content data. In a concrete implementation, the common memory may be divided into several large blocks of unequal size, and the number of blocks may be preset by the developer. The user programs in the Guest operating system can manage the blocks in the common memory; that is, a user program can allocate and release them at any time, and every allocation and release is handled in units of a whole block.
In a concrete implementation, the block sizes in the common memory can be adapted to the GPU graphics data commonly processed. For example, the developers found that in GPU virtualization, transferring roughly 2M to 16M of graphics content data from the first operating system to the second operating system usually satisfies the needs of virtualized GPU graphics processing; accordingly, when sizing the blocks, the common memory may be partitioned into blocks of 2M, 4M, 8M, 16M, and so on.
For example, if the total common memory is 32M and is partitioned into five blocks of 2M, 2M, 4M, 8M, and 16M, then when a user program requests 3M of space, a 4M block can be allocated directly to the corresponding thread, and when that thread releases it, a free flag is set on the 4M block.
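The 32M example above implies a smallest-fit, whole-block allocation policy. The patent does not give code for it; the following is a minimal sketch of that policy only, using the block sizes from the example, with all names and the bookkeeping invented for illustration:

```c
#include <stddef.h>

#define MB (1UL << 20)

/* Common memory split into whole blocks: 2M, 2M, 4M, 8M, 16M (32M total). */
static const size_t blk_size[] = { 2*MB, 2*MB, 4*MB, 8*MB, 16*MB };
static size_t blk_off[5];                 /* offset of each block          */
static int    blk_is_free[5] = {1,1,1,1,1}; /* free flag, set on release   */

static void blocks_init(void)
{
    size_t off = 0;
    for (int i = 0; i < 5; i++) { blk_off[i] = off; off += blk_size[i]; }
}

/* Return the offset of the smallest free block that fits, or -1.
 * A 3M request therefore receives one of the 4M blocks, whole. */
static long block_alloc(size_t want)
{
    int best = -1;
    for (int i = 0; i < 5; i++)
        if (blk_is_free[i] && blk_size[i] >= want &&
            (best < 0 || blk_size[i] < blk_size[best]))
            best = i;
    if (best < 0) return -1;
    blk_is_free[best] = 0;
    return (long)blk_off[best];
}

static void block_release(long off)
{
    for (int i = 0; i < 5; i++)
        if ((long)blk_off[i] == off)
            blk_is_free[i] = 1;  /* the whole block comes back at once */
}
```

Because allocation and release always operate on a whole block, there is no fragmentation bookkeeping inside a block, which matches the "handled in units of a whole block" rule above.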
In a concrete implementation, the physical devices 304a, 304b, 304c may be physical devices that are not integrated into the central processing unit (CPU); more preferably, they may be physical devices with high throughput, such as GPU devices, multimedia devices, and camera devices.
It should be understood that, for the purpose of illustration, FIG. 3 shows only one Guest operating system, one Host operating system, three shared memories, and three physical devices. In a concrete implementation there may be one or more Guest operating systems, one or more Host operating systems, and other numbers of shared memories and physical devices; that is, there may be any number of Guest operating systems, Host operating systems, shared memories, and physical devices, and the present application places no restriction on this.
It should also be understood that, for the purpose of illustration, the shared memory shown in FIG. 3 includes two storage areas, private memory and common memory, with the private memory divided into three equal-sized channels and the common memory into four channels of unequal size. In a concrete implementation, the shared memory may contain only the private memory; the private memory may be undivided or divided into multiple channels of unequal size; and the common memory may be absent or divided into multiple channels of equal size; the present application places no restriction on any of this.
Next, the device virtualization method according to the embodiment of the present application is described with reference to the system architecture shown in FIG. 3.
FIG. 4 is a flowchart of a device virtualization method according to Embodiment 1 of the present application. In this embodiment, the device virtualization method for a GPU device is described in detail using one Guest operating system, one Host operating system, one GPU device, and one shared memory corresponding to the GPU device as an example. As shown in FIG. 4, the method includes the following steps:
S401: when the Qemu corresponding to the Guest system starts, create the shared memory corresponding to the GPU device.
Specifically, Qemu can create the corresponding shared memory through a system call.
Specifically, a particular address space can be carved out of memory as the shared memory for the GPU device. The size of the shared memory can be set by the developer and adapted to the corresponding physical device; for example, the shared memory for the GPU device may be set to 128M, that for a multimedia device to 64M, and that for a camera device to 64M, and the present application places no restriction on this.
It should be understood that when there are multiple Guest systems, the Qemu of each Guest system may create a separate shared memory for each physical device, or the multiple Guest systems may share the shared memory corresponding to one physical device. Different schemes may also be used for different physical devices; for example, each Guest system may use an independent shared memory for the GPU device, while all Guest systems share one shared memory for the multimedia device. The present application places no restriction on this.
S402: Qemu further maps the shared memory into the PCI (Peripheral Component Interconnect) device memory space of the Guest system, and provides the Guest system with virtual PCI registers as the PCI configuration space.
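S402 exposes the shared memory to the Guest as PCI device memory. As an assumed sketch of the Guest side only, a minimal Linux PCI driver could locate and map such a device's BARs as follows. The vendor/device IDs are borrowed from QEMU's ivshmem shared-memory device (1af4:1110) for illustration, and the BAR layout (registers in BAR 0, shared memory in BAR 2) is an assumption, not something the patent specifies:

```c
#include <linux/module.h>
#include <linux/pci.h>

#define VSHM_VENDOR 0x1af4   /* assumed: QEMU/ivshmem vendor id */
#define VSHM_DEVICE 0x1110   /* assumed: ivshmem device id      */

static void __iomem *vshm_regs;  /* BAR 0: virtual PCI registers    */
static void __iomem *vshm_mem;   /* BAR 2: the shared memory region */

static const struct pci_device_id vshm_ids[] = {
    { PCI_DEVICE(VSHM_VENDOR, VSHM_DEVICE) },
    { 0 }
};
MODULE_DEVICE_TABLE(pci, vshm_ids);

static int vshm_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    int err = pci_enable_device(pdev);
    if (err)
        return err;

    vshm_regs = pci_iomap(pdev, 0, 0);  /* map whole BAR 0 */
    vshm_mem  = pci_iomap(pdev, 2, 0);  /* map whole BAR 2 */
    if (!vshm_regs || !vshm_mem) {
        pci_disable_device(pdev);
        return -ENOMEM;
    }
    dev_info(&pdev->dev, "shared memory: %llu bytes\n",
             (unsigned long long)pci_resource_len(pdev, 2));
    return 0;
}

static void vshm_remove(struct pci_dev *pdev)
{
    pci_iounmap(pdev, vshm_mem);
    pci_iounmap(pdev, vshm_regs);
    pci_disable_device(pdev);
}

static struct pci_driver vshm_driver = {
    .name     = "vshm",
    .id_table = vshm_ids,
    .probe    = vshm_probe,
    .remove   = vshm_remove,
};
module_pci_driver(vshm_driver);
MODULE_LICENSE("GPL");
```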
S403: the Guest Linux Kernel divides the shared memory into private memory and common memory.
Specifically, the Guest Linux Kernel can divide the shared memory when initializing the GPU device, so that the shared memory supports access by multiple processes or threads.
Specifically, the private memory, i.e., the first storage area, may be divided into a first preset number of channels, and the common memory, i.e., the second storage area, into a second preset number of blocks. Specifically, the first and second preset numbers can be set by the developer.
Specifically, the channels of the private memory may be of equal size, and the sizes of the blocks of the common memory may be adapted to the processing data of the physical device corresponding to the shared memory.
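The patent fixes only the idea of a channelized private area plus a block-structured common area. One possible in-memory layout, with all names and counts invented for the sketch, is:

```c
/* Illustrative layout for one device's shared memory, matching the
 * private/common split described above. The channel count and size
 * are assumptions ("first preset number", equal-sized channels). */
#define PRIV_CHANNELS   3              /* first preset number        */
#define PRIV_CHAN_SIZE  (1UL << 20)    /* 1M per channel, all equal  */

struct chan_hdr {
    volatile unsigned int in_use;  /* owned by a Guest thread?        */
    volatile unsigned int head;    /* producer (front-end) write pos  */
    volatile unsigned int tail;    /* consumer (back-end) read pos    */
};

struct shm_layout {
    struct chan_hdr chan[PRIV_CHANNELS];                /* control     */
    unsigned char priv[PRIV_CHANNELS][PRIV_CHAN_SIZE];  /* private mem */
    unsigned char common[];     /* common memory: large data blocks   */
};
```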
S404: when a front-end thread starts, allocate the corresponding shared-memory address spaces for the front-end thread and for the corresponding back-end thread.
In a concrete implementation, when an API call instruction is received, a front-end thread corresponding to the API call instruction, i.e., a first thread, may be created, and the thread creation instruction corresponding to the API call instruction is sent to the Host operating system to trigger the Host operating system to create the corresponding back-end thread, i.e., a second thread.
In a concrete implementation, the user may perform a user operation on some thread in the Guest operating system; for example, in a thread such as WeChat or QQ, the user may open a new window or a new page, play multimedia/video, and so on.
In a concrete implementation, on receiving a user operation the thread generates an API call instruction according to that operation to invoke the corresponding front-end thread. For example, when the user opens a new window or a new page, the corresponding graphics processing interface may be called; when the user plays multimedia/video, the corresponding multimedia/video interface may be called.
Specifically, invoking the front-end thread usually also triggers the Host operating system to create the back-end thread corresponding to that front-end thread. Specifically, if the Guest system calls the graphics program processing interface, a corresponding back-end thread is created in the graphics-processing backend server of the Host operating system; if the multimedia program processing interface is called, a corresponding back-end thread is created in the multimedia-processing backend server of the Host operating system.
In a concrete implementation, when the front-end thread starts, the address space of the private-memory channel corresponding to the front-end thread, and the common-memory address space allocated to it, can be obtained from the Guest Linux Kernel and mapped into the address space of the front-end thread, thereby establishing a synchronization control channel with Qemu. Specifically, one channel of the private memory is usually allocated to the front-end thread, while the common memory is allocated to it in its entirety.
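Allocating "one channel of the private memory" per front-end thread amounts to a small ownership table. A hedged sketch follows; the locking scheme and names are assumptions, since the patent only requires that channels can be allocated, reallocated, and released at any time (PRIV_CHANNELS is as in the layout sketch above):

```c
#include <pthread.h>

#define PRIV_CHANNELS 3  /* same count as in the layout sketch */

static pthread_mutex_t chan_lock = PTHREAD_MUTEX_INITIALIZER;
static int chan_owner[PRIV_CHANNELS];  /* 0 = free, else owner's nonzero tid */

/* Grab a free private-memory channel for a starting front-end thread. */
static int channel_alloc(int tid)
{
    pthread_mutex_lock(&chan_lock);
    for (int i = 0; i < PRIV_CHANNELS; i++) {
        if (chan_owner[i] == 0) {
            chan_owner[i] = tid;
            pthread_mutex_unlock(&chan_lock);
            return i;  /* the index doubles as the channel id */
        }
    }
    pthread_mutex_unlock(&chan_lock);
    return -1;  /* no free channel; the caller may retry or fall back */
}

/* Release the channel when the front-end thread exits. */
static void channel_release(int id)
{
    pthread_mutex_lock(&chan_lock);
    chan_owner[id] = 0;
    pthread_mutex_unlock(&chan_lock);
}
```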
Next, the address space of the private-memory channel corresponding to the front-end thread, and the address space of the common memory, can be passed to Qemu through the PCI configuration space; Qemu then sends them to the backend server through an inter-process communication mechanism, and they are mapped into the address space of the back-end thread.
At this point, the initialization of the shared memory between the front-end thread and the back-end thread is complete.
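The patent leaves the "inter-process communication mechanism" between Qemu and the Backend Server open. One plausible realization, shown purely as an assumption, is a fixed-size control message over a UNIX domain socket; the struct layout and the socket path are invented for the sketch:

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Control message Qemu could send to the Backend Server when a
 * front-end thread starts; all fields and names are assumptions. */
struct chan_announce {
    uint32_t channel_id;     /* private-memory channel of the thread   */
    uint64_t priv_offset;    /* channel offset within the shared memory */
    uint64_t common_offset;  /* start of the common memory region      */
};

static int announce_channel(const struct chan_announce *msg)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/tmp/backend.sock", sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    ssize_t n = write(fd, msg, sizeof(*msg));  /* one fixed-size record */
    close(fd);
    return n == (ssize_t)sizeof(*msg) ? 0 : -1;
}
```

Note that only offsets travel over the socket; the bulk data never leaves the shared memory, which is the point of the design.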
S405: between the front-end thread and the corresponding back-end thread, virtualize the physical device through the shared memory.
In a concrete implementation, when an API operation instruction for the GPU device is received at the front-end thread in the Guest user space, the corresponding processing instruction can be determined from the API operation instruction and passed through the shared memory to the back-end thread in the Backend Server of the Host system; the processing instruction is then executed at the back-end thread, and the processing result is returned to the front-end thread as the response to the API call instruction, or via the shared memory.
Specifically, passing the processing instruction through the shared memory to the back-end thread in the Backend Server of the Host system can be implemented in several ways:
In a first implementation, when the processing instruction includes the API call function and parameters, the front-end thread can write the function and parameters into the corresponding private-memory channel and send the offset address of the function and parameters to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to that offset address. Specifically, the offset address can be sent through Qemu to the backend server of the Host operating system and then synchronized by the backend server to the back-end thread.
In a second implementation, when the processing instruction includes the API call function, parameters, and synchronization information, the front-end thread can write the function, parameters, and synchronization information into the corresponding private-memory channel and send the offset address of the function and parameters to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to that offset address. Specifically, the offset address can be sent through Qemu to the backend server of the Host operating system and then synchronized by the backend server to the back-end thread.
In a third implementation, when the processing instruction includes the API call function, parameters, and graphics content data, the front-end thread can write the function and parameters into the corresponding private-memory channel, write the graphics content data into the common memory, and send the offset address, within the shared memory, of the processing instruction to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to that offset address. Specifically, the offset address can be sent through Qemu to the backend server of the Host operating system and then synchronized by the backend server to the back-end thread.
In a fourth implementation, when the processing instruction includes the API call function, parameters, synchronization information, and graphics content data, the front-end thread can write the function, parameters, and synchronization information into the corresponding private-memory channel, write the graphics content data into the common memory, and send the offset address, within the shared memory, of the processing instruction to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to that offset address. Specifically, the offset address can be sent through Qemu to the backend server of the Host operating system and then synchronized by the backend server to the back-end thread.
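All four variants reduce to the same mechanics: write a call record into the thread's private channel (with bulk data, if any, placed in common memory) and hand the back-end an offset into the shared memory. A schematic front-end write, with the record format invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative call record; the field set mirrors the four variants
 * above (function, parameters, optional sync info, optional data). */
struct api_call {
    uint32_t func_id;      /* which API function is being called       */
    uint32_t nparams;      /* number of used 64-bit parameter slots    */
    uint64_t params[8];    /* immediate parameters                     */
    uint64_t data_offset;  /* offset of bulk data in common memory, 0 if none */
    uint32_t data_len;     /* length of that data                      */
    uint32_t sync;         /* nonzero if the caller blocks for a result */
};

/* Write one record at the channel head and return its offset relative
 * to the start of the shared memory; that offset is what gets sent to
 * the back-end thread through Qemu. */
static uint64_t post_call(unsigned char *shm_base, unsigned char *chan,
                          size_t *head, const struct api_call *call)
{
    memcpy(chan + *head, call, sizeof(*call));
    uint64_t off = (uint64_t)(chan + *head - shm_base);
    *head += sizeof(*call);   /* wrap-around handling elided here */
    return off;
}
```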
In a concrete implementation, the switch from the front-end thread to the back-end thread, and the switches between the first operating system and the second operating system, all use techniques common to those skilled in the art and are not elaborated in the present application.
In a concrete implementation, the back-end thread drives the corresponding physical device/module to execute the processing instruction and obtains the processing result.
In a concrete implementation, the back-end thread may feed the processing result directly back to the user as the response to the application interface call instruction, or return the processing result to the front-end thread, which then responds.
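On the Host side, the received offset is resolved against the Backend Server's own mapping of the same shared memory and dispatched to the real driver. Continuing the sketch above (struct api_call is the invented record from the front-end sketch, and dispatch_to_gpu is a stand-in for the Host-kernel driver call, not an API from the patent):

```c
#include <stdint.h>

/* Same invented record format as in the front-end sketch above. */
struct api_call {
    uint32_t func_id;
    uint32_t nparams;
    uint64_t params[8];
    uint64_t data_offset;
    uint32_t data_len;
    uint32_t sync;
};

/* Stand-in for the call into the real GPU driver. */
extern long dispatch_to_gpu(uint32_t func_id, const uint64_t *params,
                            uint32_t nparams, const void *data,
                            uint32_t len);

static long handle_call(unsigned char *shm_base, uint64_t off)
{
    struct api_call *c = (struct api_call *)(shm_base + off);
    const void *data = c->data_offset ? shm_base + c->data_offset : NULL;

    long result = dispatch_to_gpu(c->func_id, c->params, c->nparams,
                                  data, c->data_len);

    /* For a synchronous call, the result is written back through the
     * same channel so the front-end thread can return it to the API
     * caller; an asynchronous call just acknowledges completion. */
    if (c->sync)
        c->params[0] = (uint64_t)result;  /* slot 0 reused as result */
    return result;
}
```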
At this point, the remote invocation of the physical device by a user program in the Guest operating system, i.e., the virtualization of the physical device, has been achieved.
With the device virtualization method in the embodiment of the present application, a shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through that shared memory. Because the first and second operating systems relay API calls through the shared memory, system latency during virtualization is reduced and system performance is improved.
Embodiment 2
Next, the device virtualization method according to Embodiment 2 of the present application is described with reference to the system architecture shown in FIG. 3.
FIG. 5 is a flowchart of a device virtualization method according to Embodiment 2 of the present application. In this embodiment, the device virtualization method for multiple physical devices is described in detail using one Guest operating system, one Host operating system, and three physical devices (a GPU device, a multimedia device, and a camera device) as an example. As shown in FIG. 5, the method includes the following steps:
S501: when the Qemu corresponding to the Guest system starts, create the shared memories corresponding to the GPU device, the multimedia device, and the camera device respectively.
In a concrete implementation, for the creation of the shared memories corresponding to the multimedia device and the camera device, reference may be made to the creation of the shared memory corresponding to the GPU device in S401 of Embodiment 1; details are not repeated here.
S502: Qemu further maps each shared memory into the PCI device memory space of the Guest system, and provides the Guest system with a corresponding number of virtual PCI registers as PCI configuration spaces.
In a concrete implementation, the number of virtual PCI registers corresponds to the number of shared memories, in one-to-one correspondence.
S503: the Guest Linux Kernel divides each of the multiple shared memories into private memory and common memory.
In a concrete implementation, for this step reference may be made to the implementation of S403 in Embodiment 1; details are not repeated here.
S504: when a front-end thread starts, determine the shared memory corresponding to the front-end thread according to the API call instruction that invokes it, and allocate the corresponding shared-memory address spaces for the front-end thread and the corresponding back-end thread.
Specifically, the physical device corresponding to the API call instruction can be determined according to the API call instruction that invokes the front-end thread, and the corresponding shared memory can be determined according to the physical device. Specifically, if the API call instruction that invokes the front-end thread is an OpenGL interface call instruction, the corresponding physical device is the GPU device, so the shared memory corresponding to the front-end thread is the GPU device's shared memory, for example 303a; if it is an OpenMAX interface call instruction, the corresponding physical device is the multimedia device and the shared memory may be 303b; if it is a Camera interface call instruction, the corresponding physical device is the camera device and the shared memory may be 303c.
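S504's lookup from API family, to physical device, to shared memory can be as simple as a static table. A sketch with invented names (the region names echo the shm_open example earlier; 303a/303b/303c refer to FIG. 3):

```c
#include <stddef.h>

enum api_family { API_OPENGL, API_OPENMAX, API_CAMERA };

struct shm_region {
    const char *name;  /* shm name, as in the creation sketch */
    void       *base;  /* filled in once mapped                */
    size_t      size;
};

static struct shm_region shm_table[] = {
    [API_OPENGL]  = { "/gpu_shm",    NULL, 128UL << 20 },  /* 303a */
    [API_OPENMAX] = { "/media_shm",  NULL,  64UL << 20 },  /* 303b */
    [API_CAMERA]  = { "/camera_shm", NULL,  64UL << 20 },  /* 303c */
};

/* One physical device, one shared memory: the API family alone
 * selects the region a front-end thread will use. */
static struct shm_region *shm_for_call(enum api_family f)
{
    return &shm_table[f];
}
```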
In a concrete implementation, for the allocation of the corresponding shared-memory address spaces for the front-end thread and the corresponding back-end thread in this step, reference may be made to the implementation of S404 in Embodiment 1; details are not repeated here.
S505: between the front-end thread and the corresponding back-end thread, virtualize the physical device through the shared memory.
In a concrete implementation, for this step reference may be made to the implementation of S405 in Embodiment 1; details are not repeated here.
At this point, the remote invocation of multiple physical devices by user programs in the Guest operating system, i.e., the virtualization of multiple physical devices, has been achieved.
With the device virtualization method in the embodiment of the present application, a shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through that shared memory. Because the first and second operating systems relay API calls through the shared memory, system latency during virtualization is reduced and system performance is improved.
Based on the same inventive concept, an embodiment of the present application further provides a device virtualization apparatus. Since the principle by which this apparatus solves the problem is similar to the device virtualization methods provided in Embodiments 1 and 2 of the present application, its implementation may refer to the implementation of the methods, and repetition is omitted.
Embodiment 3
FIG. 6 is a schematic structural diagram of a device virtualization apparatus according to Embodiment 3 of the present application.
As shown in FIG. 6, the device virtualization apparatus 600 according to Embodiment 3 includes: a shared memory creation module 601, configured to create a shared memory at a first operating system and map the shared memory into the PCI device memory space of a second operating system, where the shared memory corresponds to one physical device; a receiving module 602, configured to receive, at the second operating system, an API operation instruction for the physical device and determine the corresponding processing instruction according to the API operation instruction; a sending module 603, configured to pass the processing instruction to the first operating system through the shared memory; and a processing module 604, configured to execute the processing instruction at the first operating system and return the processing result to the second operating system as the response to the API operation instruction or via the shared memory.
Specifically, the shared memory creation module includes: a shared memory creation submodule, configured to create the shared memory for the physical device when the Qemu corresponding to the second operating system starts; and a mapping submodule, configured to map the shared memory into the PCI device memory space of the second operating system and provide the second operating system with virtual PCI registers as the PCI configuration space.
Specifically, when there are multiple physical devices, the shared memory creation module is specifically configured to: create a shared memory for each physical device when the emulated processor Qemu corresponding to the second operating system starts; map the multiple shared memories respectively into the PCI device memory space of the second operating system; and provide the second operating system with multiple virtual PCI registers as PCI configuration spaces, the multiple PCI registers respectively corresponding to the multiple shared memories.
Specifically, the device virtualization apparatus according to Embodiment 3 further includes: a dividing module, configured to divide the shared memory into a first storage area and a second storage area, where the first storage area includes a first preset number of channels and the second storage area includes a second preset number of blocks.
Specifically, the channels of the first storage area are of equal size, and the sizes of the blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
Specifically, when there are multiple physical devices, the apparatus further includes: a shared memory determining module, configured to determine, according to the API operation instruction, the physical device corresponding to the API operation instruction, and determine the corresponding shared memory according to the physical device.
Specifically, the device virtualization apparatus according to Embodiment 3 further includes: a first mapping module, configured to, in the second operating system, on receiving an API call instruction, create a first thread corresponding to the API call instruction, send a thread creation instruction corresponding to the API call instruction to the first operating system, allocate to the first thread the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area, and pass the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space; and a second mapping module, configured to, in the first operating system, after receiving the thread creation instruction corresponding to the API call instruction, create a corresponding second thread and map the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area into the address space of the second thread. The sending module is specifically configured to write, through the first thread, the processing instruction into the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area, send the offset address of the processing instruction within that address space to the first operating system through Qemu, and, in the first operating system, synchronize the received offset address to the corresponding second thread.
With the device virtualization apparatus in the embodiment of the present application, a shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through that shared memory. Because the first and second operating systems relay API calls through the shared memory, system latency during virtualization is reduced and system performance is improved.
Based on the same inventive concept, an embodiment of the present application further provides a device virtualization system. Since the principle by which this system solves the problem is similar to the device virtualization methods provided in Embodiments 1 and 2 of the present application, its implementation may refer to the implementation of the methods, and repetition is omitted.
Embodiment 4
FIG. 7 is a schematic structural diagram of a device virtualization system according to Embodiment 4 of the present application.
As shown in FIG. 7, the device virtualization system 700 according to Embodiment 4 includes: a second operating system 701, configured to receive an application interface (API) call instruction for a physical device, determine the processing instruction corresponding to the application interface call instruction, and send the processing instruction to the first operating system 702 via the shared memory corresponding to the physical device; one or more shared memories 703, configured to transfer processing instructions between the first operating system and the second operating system, where the one or more shared memories respectively correspond to the physical devices; and a first operating system 702, configured to receive and execute the processing instruction and return the processing result to the second operating system as the response to the application interface call instruction or via the shared memory corresponding to the physical device.
In a concrete implementation, for the implementation of the second operating system 701, reference may be made to the implementation of the second operating system 302 in Embodiment 1 of the present application; repetition is omitted.
In a concrete implementation, for the implementation of the first operating system 702, reference may be made to the implementation of the first operating system 301 in Embodiment 1 of the present application; repetition is omitted.
In a concrete implementation, for the implementation of the shared memory 703, reference may be made to the implementation of the shared memories 303a, 303b, 303c in Embodiment 1 of the present application; repetition is omitted.
Specifically, the first operating system may be a Guest operating system, and the second operating system may be a Host operating system.
With the device virtualization system in the embodiment of the present application, a shared memory is created between the first operating system and the second operating system, and the physical device is then virtualized through that shared memory. Because the first and second operating systems relay API calls through the shared memory, system latency during virtualization is reduced and system performance is improved.
Embodiment 5
Based on the same inventive concept, an embodiment of the present application further provides an electronic device 800 as shown in FIG. 8.
As shown in FIG. 8, the electronic device 800 according to Embodiment 5 includes: a display 801, a memory 802, a processor 803, a bus 804, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of any of the methods of Embodiment 1 or Embodiment 2 of the present application.
Based on the same inventive concept, an embodiment of the present application further provides a computer program product usable with an electronic device 800 that includes a display. The computer program product includes a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing the steps of any of the methods of Embodiment 1 or Embodiment 2 of the present application.
Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the present application. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in them, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn of the basic inventive concept. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present application.
Obviously, those skilled in the art can make various changes and variations to the present application without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present application and their technical equivalents, the present application is intended to cover them as well.

Claims (16)

  1. A device virtualization method, comprising:
    creating a shared memory at a first operating system, and mapping the shared memory into a Peripheral Component Interconnect (PCI) device memory space of a second operating system, wherein the shared memory corresponds to one physical device;
    receiving, at the second operating system, an application programming interface (API) operation instruction for the physical device, and determining a corresponding processing instruction according to the API operation instruction; passing the processing instruction to the first operating system through the shared memory;
    executing the processing instruction at the first operating system, and returning a processing result to the second operating system as a response to the API operation instruction or via the shared memory.
  2. The method according to claim 1, wherein creating the shared memory at the first operating system and mapping the shared memory into the PCI device memory space of the second operating system specifically comprises:
    creating the shared memory for the physical device when an emulated processor Qemu corresponding to the second operating system starts;
    mapping the shared memory into the PCI device memory space of the second operating system, and providing the second operating system with a virtual PCI register as a PCI configuration space.
  3. The method according to claim 1, wherein there are multiple physical devices, and creating the shared memory at the first operating system and mapping the shared memory into the PCI device memory space of the second operating system specifically comprises:
    creating a shared memory for each physical device when the Qemu corresponding to the second operating system starts;
    mapping the multiple shared memories respectively into the PCI device memory space of the second operating system, and providing the second operating system with multiple virtual PCI registers as PCI configuration spaces, the multiple PCI registers respectively corresponding to the multiple shared memories.
  4. The method according to claim 1, wherein after creating the shared memory at the first operating system and mapping the shared memory into the PCI device memory space of the second operating system, and before receiving the API operation instruction for the physical device at the second operating system, the method further comprises:
    dividing the shared memory into a first storage area and a second storage area, wherein the first storage area comprises a first preset number of channels, and the second storage area comprises a second preset number of blocks.
  5. The method according to claim 4, wherein the channels of the first storage area are of equal size, and the sizes of the blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
  6. The method according to claim 1, wherein there are multiple physical devices, and before passing the processing instruction to the first operating system through the shared memory, the method further comprises:
    determining, according to the API operation instruction, the physical device corresponding to the API operation instruction, and determining the corresponding shared memory according to the physical device.
  7. The method according to claim 2, wherein after mapping the shared memory into the PCI device memory space of the second operating system, and before receiving the API operation instruction for the physical device at the second operating system, the method further comprises:
    in the second operating system, when an API call instruction is received, creating a first thread corresponding to the API call instruction; sending a thread creation instruction corresponding to the API call instruction to the first operating system; allocating, to the first thread, the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area; and passing the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space;
    in the first operating system, after the thread creation instruction corresponding to the API call instruction is received, creating a corresponding second thread; and mapping the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area into the address space of the second thread;
    wherein passing the processing instruction to the first operating system through the shared memory specifically comprises:
    writing, through the first thread, the processing instruction into the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area; sending the offset address of the processing instruction within the address space to the first operating system through Qemu; and, in the first operating system, synchronizing the received offset address to the corresponding second thread.
  8. A device virtualization apparatus, comprising:
    a shared memory creation module, configured to create a shared memory at a first operating system and map the shared memory into a Peripheral Component Interconnect (PCI) device memory space of a second operating system, wherein the shared memory corresponds to one physical device;
    a receiving module, configured to receive, at the second operating system, an application programming interface (API) operation instruction for the physical device, and determine a corresponding processing instruction according to the API operation instruction;
    a sending module, configured to pass the processing instruction to the first operating system through the shared memory;
    a processing module, configured to execute the processing instruction at the first operating system, and return a processing result to the second operating system as a response to the API operation instruction or via the shared memory.
  9. The apparatus according to claim 8, wherein the shared memory creation module specifically comprises:
    a shared memory creation submodule, configured to create the shared memory for the physical device when the Qemu corresponding to the second operating system starts;
    a mapping submodule, configured to map the shared memory into the PCI device memory space of the second operating system, and provide the second operating system with a virtual PCI register as a PCI configuration space.
  10. The apparatus according to claim 8, wherein there are multiple physical devices, and the shared memory creation module is specifically configured to:
    create a shared memory for each physical device when the emulated processor Qemu corresponding to the second operating system starts;
    map the multiple shared memories respectively into the PCI device memory space of the second operating system; and provide the second operating system with multiple virtual PCI registers as PCI configuration spaces, the multiple PCI registers respectively corresponding to the multiple shared memories.
  11. The apparatus according to claim 8, further comprising:
    a dividing module, configured to divide the shared memory into a first storage area and a second storage area, wherein the first storage area comprises a first preset number of channels, and the second storage area comprises a second preset number of blocks.
  12. The apparatus according to claim 11, wherein the channels of the first storage area are of equal size, and the sizes of the blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
  13. The apparatus according to claim 8, wherein there are multiple physical devices, and the apparatus further comprises:
    a shared memory determining module, configured to determine, according to the API operation instruction, the physical device corresponding to the API operation instruction, and determine the corresponding shared memory according to the physical device.
  14. The apparatus according to claim 9, further comprising:
    a first mapping module, configured to, in the second operating system, when an API call instruction is received, create a first thread corresponding to the API call instruction; send a thread creation instruction corresponding to the API call instruction to the first operating system; allocate, to the first thread, the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area; and pass the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space;
    a second mapping module, configured to, in the first operating system, after the thread creation instruction corresponding to the API call instruction is received, create a corresponding second thread, and map the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area into the address space of the second thread;
    wherein the sending module is specifically configured to write, through the first thread, the processing instruction into the address space of the corresponding channel in the first storage area and the corresponding address space of the second storage area; send the offset address of the processing instruction within the address space to the first operating system through Qemu; and, in the first operating system, synchronize the received offset address to the corresponding second thread.
  15. An electronic device, comprising: a display, a memory, and one or more processors; and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules comprising instructions for performing the steps of the method according to any one of claims 1 to 7.
  16. A computer program product, the computer program product encoding instructions for performing a process, the process comprising the method according to any one of claims 1 to 7.
PCT/CN2016/113265 2016-12-29 2016-12-29 Device virtualization method, apparatus and system, electronic device, and computer program product WO2018119952A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680002834.3A 2016-12-29 2016-12-29 Device virtualization method, apparatus and system, electronic device, and computer program product
PCT/CN2016/113265 2016-12-29 2016-12-29 Device virtualization method, apparatus and system, electronic device, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113265 WO2018119952A1 (zh) 2016-12-29 2016-12-29 一种设备虚拟化方法、装置、系统及电子设备、计算机程序产品

Publications (1)

Publication Number Publication Date
WO2018119952A1 true WO2018119952A1 (zh) 2018-07-05

Family

ID=59623873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113265 WO2018119952A1 (zh) 2016-12-29 2016-12-29 一种设备虚拟化方法、装置、系统及电子设备、计算机程序产品

Country Status (2)

Country Link
CN (1) CN107077377B (zh)
WO (1) WO2018119952A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112685197A (zh) * 2020-12-28 2021-04-20 浪潮软件科技有限公司 Interface data interaction system
CN112860506A (zh) * 2019-11-28 2021-05-28 阿里巴巴集团控股有限公司 Monitoring data processing method, apparatus, system, and storage medium
CN114661497A (zh) * 2022-03-31 2022-06-24 慧之安信息技术股份有限公司 Method and system for sharing memory between operating system partitions

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107741863A (zh) * 2017-10-08 2018-02-27 深圳市星策网络科技有限公司 Graphics card driving method and apparatus
CN108932213A (zh) * 2017-10-10 2018-12-04 北京猎户星空科技有限公司 Communication method and apparatus between multiple operating systems, electronic device, and storage medium
CN109669782A (zh) * 2017-10-13 2019-04-23 阿里巴巴集团控股有限公司 Hardware abstraction layer multiplexing method and apparatus, operating system, and device
CN108369604B (zh) * 2017-12-28 2021-12-03 深圳前海达闼云端智能科技有限公司 Method and apparatus for multiple operating systems to share a file system, and electronic device
WO2019127476A1 (zh) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Bluetooth communication method and apparatus for a virtual system, virtual system, storage medium, and electronic device
CN109343922B (zh) * 2018-09-17 2022-01-11 广东微云科技股份有限公司 Method and apparatus for GPU-virtualized picture display
CN109725867A (zh) * 2019-01-04 2019-05-07 中科创达软件股份有限公司 Virtualized screen sharing method and apparatus, and electronic device
CN112131146B (zh) * 2019-06-24 2022-07-12 维塔科技(北京)有限公司 Method and apparatus for acquiring device information, storage medium, and electronic device
CN110442389B (zh) * 2019-08-07 2024-01-09 北京技德系统技术有限公司 Method for sharing a GPU among multiple desktop environments
CN111510780B (zh) * 2020-04-10 2021-10-26 广州方硅信息技术有限公司 Live video control, bridging, flow control, and playback control methods, and client
CN111522670A (zh) * 2020-05-09 2020-08-11 中瓴智行(成都)科技有限公司 GPU virtualization method, system, and medium for the Android system
CN112015605B (zh) * 2020-07-28 2024-05-14 深圳市金泰克半导体有限公司 Memory testing method and apparatus, computer device, and storage medium
CN115081010A (zh) * 2021-03-16 2022-09-20 华为技术有限公司 Distributed access control method, related apparatus, and system
CN112764872B (zh) * 2021-04-06 2021-07-02 阿里云计算有限公司 Computer device, virtualization acceleration device, remote control method, and storage medium
CN115437717A (zh) * 2021-06-01 2022-12-06 北京小米移动软件有限公司 Cross-operating-system invocation method and apparatus, and electronic device
CN113379589A (zh) * 2021-07-06 2021-09-10 湖北亿咖通科技有限公司 Dual-system graphics processing method and apparatus, and terminal
CN113805952B (zh) * 2021-09-17 2023-10-31 中国联合网络通信集团有限公司 Peripheral virtualization management method, server, and system
CN114047960A (zh) * 2021-11-10 2022-02-15 北京鲸鲮信息系统技术有限公司 Operating system running method and apparatus, electronic device, and storage medium
CN114327944B (zh) * 2021-12-24 2022-11-11 科东(广州)软件科技有限公司 Method, apparatus, device, and storage medium for multiple systems to share memory
CN114816417B (zh) * 2022-04-18 2022-10-11 北京凝思软件股份有限公司 Cross-compilation method and apparatus, computing device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101661381A (zh) * 2009-09-08 2010-03-03 华南理工大学 Xen-based data sharing and access control method
CN101847105A (zh) * 2009-03-26 2010-09-29 联想(北京)有限公司 Computer and method for sharing memory among multiple operating systems
CN102262557A (zh) * 2010-05-25 2011-11-30 运软网络科技(上海)有限公司 Method for constructing a virtual machine monitor through a bus architecture, and performance service framework

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477511B (zh) * 2008-12-31 2010-08-25 杭州华三通信技术有限公司 Method and apparatus for multiple operating systems to share a storage medium
US10061701B2 (en) * 2010-04-26 2018-08-28 International Business Machines Corporation Sharing of class data among virtual machine applications running on guests in virtualized environment using memory management facility
CN102541618B (zh) * 2010-12-29 2015-05-27 中国移动通信集团公司 Implementation method, system, and apparatus for general-purpose graphics processor virtualization
CN103077071B (zh) * 2012-12-31 2016-08-03 北京启明星辰信息技术股份有限公司 Method and system for acquiring KVM virtual machine process information
CN104216862B (zh) * 2013-05-29 2017-08-04 华为技术有限公司 Communication method and apparatus between a user process and a system service

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847105A (zh) * 2009-03-26 2010-09-29 联想(北京)有限公司 Computer and method for sharing memory among multiple operating systems
CN101661381A (zh) * 2009-09-08 2010-03-03 华南理工大学 Xen-based data sharing and access control method
CN102262557A (zh) * 2010-05-25 2011-11-30 运软网络科技(上海)有限公司 Method for constructing a virtual machine monitor through a bus architecture, and performance service framework

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860506A (zh) * 2019-11-28 2021-05-28 阿里巴巴集团控股有限公司 Monitoring data processing method, apparatus, system, and storage medium
CN112860506B (zh) * 2019-11-28 2024-05-17 阿里巴巴集团控股有限公司 Monitoring data processing method, apparatus, system, and storage medium
CN112685197A (zh) * 2020-12-28 2021-04-20 浪潮软件科技有限公司 Interface data interaction system
CN114661497A (zh) * 2022-03-31 2022-06-24 慧之安信息技术股份有限公司 Method and system for sharing memory between operating system partitions

Also Published As

Publication number Publication date
CN107077377B (zh) 2020-08-04
CN107077377A (zh) 2017-08-18

Similar Documents

Publication Publication Date Title
WO2018119952A1 (zh) Device virtualization method, apparatus and system, electronic device, and computer program product
WO2018119951A1 (zh) GPU virtualization method, apparatus and system, electronic device, and computer program product
US10191759B2 (en) Apparatus and method for scheduling graphics processing unit workloads from virtual machines
US8151275B2 (en) Accessing copy information of MMIO register by guest OS in both active and inactive state of a designated logical processor corresponding to the guest OS
JP5583180B2 (ja) Virtual GPU
CN103034524B (zh) Paravirtualized virtual GPU
JP5170782B2 (ja) Centralized device virtualization layer for heterogeneous processing units
TWI417790B (zh) Logical partitioning and virtualization in a heterogeneous architecture
WO2017024783A1 (zh) Virtualization method, apparatus, and system
US20140095769A1 (en) Flash memory dual in-line memory module management
EP3086228A1 (en) Resource processing method, operating system, and device
US20150293776A1 (en) Data processing systems
US20060206891A1 (en) System and method of maintaining strict hardware affinity in a virtualized logical partitioned (LPAR) multiprocessor system while allowing one processor to donate excess processor cycles to other partitions when warranted
JP2009508183A (ja) 仮想マシン・モニタと、acpi準拠ゲスト・オペレーティング・システムとの間の双方向通信のための方法、装置及びシステム
CN113778612A (zh) Implementation method for an embedded virtualization system based on a microkernel mechanism
JP7123235B2 (ja) VMID as a GPU task container for virtualization
US10853259B2 (en) Exitless extended page table switching for nested hypervisors
US20220050795A1 (en) Data processing method, apparatus, and device
CN107077376B (zh) Frame buffer implementation method and apparatus, electronic device, and computer program product
US20210055948A1 (en) Fast thread execution transition
CN114138423A (zh) Virtualization construction system and method based on a domestic GPU graphics card
CN115904617A (zh) GPU virtualization implementation method based on SR-IOV technology
CN116324706A (zh) 分离式存储器池分配
US20200201691A1 (en) Enhanced message control banks
US20150186180A1 (en) Systems and methods for affinity dispatching based on network input/output requests

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16924975

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16924975

Country of ref document: EP

Kind code of ref document: A1