CN107077377B - Device virtualization method, apparatus and system, electronic device and computer program product - Google Patents

Device virtualization method, apparatus and system, electronic device and computer program product

Info

Publication number
CN107077377B
Authority
CN
China
Prior art keywords
operating system
shared memory
storage area
instruction
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201680002834.3A
Other languages
Chinese (zh)
Other versions
CN107077377A (en)
Inventor
温燕飞 (Wen Yanfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhaida Yunyun Intelligent Technology Co ltd
Original Assignee
Shenzhen Qianhaida Yunyun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhaida Yunyun Intelligent Technology Co ltd filed Critical Shenzhen Qianhaida Yunyun Intelligent Technology Co ltd
Publication of CN107077377A publication Critical patent/CN107077377A/en
Application granted granted Critical
Publication of CN107077377B publication Critical patent/CN107077377B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Multi Processors (AREA)

Abstract

Embodiments of the present application provide a device virtualization method, apparatus and system, an electronic device and a computer program product. The method includes the following steps: creating a shared memory at a first operating system, and mapping the shared memory into the Peripheral Component Interconnect (PCI) device memory space of a second operating system, wherein the shared memory corresponds to a physical device; receiving, at the second operating system, an application programming interface (API) operating instruction for the physical device, and determining a corresponding processing instruction according to the API operating instruction; transmitting the processing instruction to the first operating system through the shared memory; and executing the processing instruction at the first operating system and returning the processing result to the second operating system, either as the response to the API operating instruction or via the shared memory. With the scheme of the present application, the system latency in the virtualization process can be reduced and the system performance improved.

Description

Device virtualization method, apparatus and system, electronic device and computer program product
Technical Field
The present application relates to computer technologies, and in particular, to a device virtualization method, apparatus, system, electronic device, and computer program product.
Background
A virtualization architecture based on Qemu/KVM (Kernel-based Virtual Machine) technology is shown in fig. 1.
As shown in FIG. 1, a virtualization architecture based on Qemu/KVM technology comprises a Host operating system and one or more virtualized Guest operating systems. The Host operating system comprises a number of Host user space programs and a Host Linux Kernel; each Guest operating system comprises its own user space, a Guest Linux Kernel and a Qemu. These operating systems run on the same hardware processor chip and share the processor and peripheral resources. An ARM processor supporting this virtualization architecture provides at least three modes, EL2, EL1 and EL0: the Hypervisor runs in EL2 mode, the Linux kernels run in EL1 mode, and the user space programs run in EL0 mode. The Hypervisor provides functions such as virtual interrupt handling, memory virtualization and loading/switching of virtual CPUs, through which different operating systems can be loaded onto the physical CPU in turn, thereby realizing virtualization of the CPU.
The KVM/Hypervisor module spans the two layers of the Host Linux Kernel and the Hypervisor. On the one hand, it provides a driver node for the emulation processor Qemu, i.e., it allows Qemu to create virtual CPUs through the KVM node and to manage the virtualized resources; on the other hand, the KVM/Hypervisor can also switch the Host Linux system off the physical CPU, load the Guest Linux system onto the physical processor for running, and handle the follow-up work when the Guest Linux system exits abnormally.
Qemu runs as an application of Host Linux and provides virtual physical device resources for the running of Guest Linux: it creates virtual CPUs through the KVM device node of the KVM/Hypervisor module, allocates physical device resources, and loads an unmodified Guest Linux onto the physical processor for running.
When Guest Linux needs to access physical devices, such as GPU (Graphics Processing Unit) devices, multimedia devices, camera devices and the like, the physical devices need to be virtualized locally. At present, this is usually done by calling the driver nodes of the Host Linux Kernel through Qemu forwarding. Specifically, the physical devices provide a large number of API (Application Programming Interface) functions, and virtualization of the devices can be realized through remote API calls: an appropriate layer is selected from the software architecture of the Host and Guest systems for API forwarding. For example, for a Guest Android system, the HAL (Hardware Abstraction Layer) can be chosen as the API forwarding layer, and a Backend Server can be implemented in the Host Linux user space, so that the Guest system finally realizes remote invocation of the API functions on the Host side.
Fig. 2 shows the system architecture of such a cross-system remote API call in the prior art: an API call is initiated by the Guest Android system, passes through the HAL layer, the Guest Linux Kernel and Qemu, arrives at the Host Backend Server, and then calls the Host Linux Kernel driver to access the physical device.
Disclosure of Invention
The embodiments of the present application provide a device virtualization method, apparatus and system, an electronic device and a computer program product, which are mainly intended to solve the problem that the device virtualization methods in the prior art suffer from poor performance.
According to a first aspect of the embodiments of the present application, a device virtualization method is provided, including: creating a shared memory at a first operating system, and mapping the shared memory into the Peripheral Component Interconnect (PCI) device memory space of a second operating system, wherein the shared memory corresponds to a physical device; receiving, at the second operating system, an application programming interface (API) operating instruction for the physical device, and determining a corresponding processing instruction according to the API operating instruction; transmitting the processing instruction to the first operating system through the shared memory; and executing the processing instruction at the first operating system, and returning the processing result to the second operating system, either as the response to the API operating instruction or via the shared memory.
According to a second aspect of the embodiments of the present application, a device virtualization apparatus is provided, including: a shared memory creation module, configured to create a shared memory at a first operating system and map the shared memory into the Peripheral Component Interconnect (PCI) device memory space of a second operating system, wherein the shared memory corresponds to a physical device; a receiving module, configured to receive, at the second operating system, an application programming interface (API) operating instruction for the physical device and determine a corresponding processing instruction according to the API operating instruction; a sending module, configured to transmit the processing instruction to the first operating system through the shared memory; and a processing module, configured to execute the processing instruction at the first operating system and return the processing result to the second operating system, either as the response to the API operating instruction or via the shared memory.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including: a display, a memory, and one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the device virtualization method according to the first aspect of the embodiments of the present application.
According to a fourth aspect of the embodiments of the present application, a computer program product is provided, which encodes instructions for performing a process, the process comprising the device virtualization method according to the first aspect of the embodiments of the present application.
By adopting the device virtualization method, apparatus and system, electronic device and computer program product according to the embodiments of the present application, a shared memory is established between the first operating system and the second operating system, and virtualization of the physical device is then realized through the shared memory. Since the first operating system and the second operating system transfer API calls through the shared memory, the system latency in the virtualization process is reduced and the system performance is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a diagram of a Qemu/KVM technology based virtualization architecture;
FIG. 2 is a diagram of a system architecture for cross-system remote API invocation in the prior art;
FIG. 3 illustrates a system architecture for implementing the device virtualization method of the embodiments of the present application;
FIG. 4 is a flowchart illustrating a device virtualization method according to a first embodiment of the present application;
FIG. 5 is a flowchart of a device virtualization method according to a second embodiment of the present application;
FIG. 6 is a schematic structural diagram of a device virtualization apparatus according to a third embodiment of the present application;
FIG. 7 is a schematic structural diagram of a device virtualization system according to a fourth embodiment of the present application;
FIG. 8 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application.
Detailed Description
In the process of implementing the present application, the inventor found that in the prior-art virtualization process shown in fig. 2, processor time is consumed at every link: from the Guest user space program, through the HAL, through the Guest Linux Kernel system call, to the process switching from Qemu to the Backend Server. Moreover, a single remote API call requires multiple parameter transfers, and some parameters may carry a considerable amount of data. As a result, when such devices are called, the system latency increases greatly, and the performance drops to several times lower than that of the Host system.
In view of the foregoing problems, embodiments of the present application provide a device virtualization method, apparatus, system, electronic device, and computer program product, in which a shared memory is created between a first operating system and a second operating system, and virtualization of the physical device is then implemented through this shared memory. Since the first operating system and the second operating system transfer API calls through the shared memory, the system latency in the virtualization process is reduced and the system performance is improved.
The scheme in the embodiments of the present application can be applied to various scenarios, for example, an intelligent terminal, an Android emulator, or a server virtualization platform that adopts a virtualization architecture based on the Qemu/KVM technology.
The scheme in the embodiment of the present application may be implemented by using various computer languages, for example, object-oriented programming language Java, and the like.
In order to make the technical solutions and advantages of the embodiments of the present application clearer, the exemplary embodiments of the present application are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not an exhaustive list of all embodiments. It should be noted that, as long as there is no conflict, the embodiments and the features in the embodiments of the present application may be combined with each other.
Example one
Fig. 3 shows a system architecture for implementing the device virtualization method in the embodiment of the present application. As shown in fig. 3, the device virtualization system according to the embodiment of the present application includes a first operating system 301, a second operating system 302, a plurality of blocks of shared memory 303a, 303b, 303c, and a plurality of physical devices 304a, 304b, 304 c. Specifically, the first operating system may be a Host operating system; the second operating system may be a Guest operating system. It should be understood that, in implementation, the first operating system may also be a Guest operating system, and the second operating system may also be a Host operating system, which is not limited in this application.
Next, a detailed description will be given of a specific embodiment of the present application, taking as an example that the first operating system is a Host operating system and the second operating system is a Guest operating system.
The Guest operating system 302 may include a user space 3021, a Guest Linux Kernel 3022 and an emulation processor Qemu 3023. The Guest operating system may provide, in its user space, virtual interfaces of various physical devices or modules. Specifically, these interfaces may include a graphics program interface, a multimedia program interface, a camera program interface and the like. More specifically, the graphics program interface may be, for example, an OpenGL (Open Graphics Library) API, Direct3D, QuickDraw 3D and the like, and the multimedia/video program interface may be an OpenMAX (Open Media Acceleration) interface and the like, which is not limited in this application.
Specifically, the Host operating system 301 may include a user space 3011 and a Host Linux Kernel 3012. A Backend Server corresponding to each interface in the Guest operating system may be provided in the user space of the Host operating system. For example, when the graphics program interface in the Guest operating system is the OpenGL API, the Backend Server may be an OpenGL Backend Server, which operates the GPU device through the GPU driver in the Host Linux Kernel; when the multimedia/video program interface in the Guest operating system is the OpenMAX API, the Backend Server may be an OpenMAX Backend Server, which operates the corresponding multimedia/video device through the multimedia/video driver in the Host Linux Kernel.
In a specific implementation, the shared memories 303a, 303b and 303c are memory regions visible to both the Guest operating system and the Host operating system, and each of them is readable and writable for both systems, i.e., both the Guest operating system and the Host operating system can perform read and write operations on the shared memory.
In a specific implementation, the number of shared memories may correspond to the number of physical devices to be virtualized; that is, one physical device corresponds to one shared memory. For example, the GPU device corresponds to the shared memory 303a, the multimedia device corresponds to the shared memory 303b, the camera device corresponds to the shared memory 303c, and so on.
In a specific implementation, the size of each shared memory may be set by the developer and adapted to the corresponding physical device. For example, the shared memory corresponding to the GPU device may be set to 128M; the shared memory corresponding to the multimedia device may be set to 64M; and the shared memory corresponding to the camera device may be set to 64M, which is not limited in this application.
Next, taking the shared memory 303a corresponding to the GPU device as an example, the division of the shared memory in the embodiment of the present application will be described in detail.
In a specific implementation, the shared memory 303a may include only the first storage area 3031, or may be divided into a first storage area 3031 and a second storage area 3032. Specifically, the first storage area may also be referred to as the private memory, and the second storage area may also be referred to as the common memory. In a specific implementation, there is no fixed rule for dividing the first storage area and the second storage area: the division may be made, based on the designer's experience, according to the amount of data typically stored in each area, or according to other preset strategies, which is not limited in this application.
Specifically, the first storage area may be used for transferring functions and parameters, and/or synchronization information, between each thread of the Guest operating system and the corresponding Backend Server thread. The private memory may be further divided into a plurality of blocks, where one block is defined as one channel and one channel corresponds to one thread of the Guest operating system; the number of channels may be preset by the developer. When the private memory is divided into channels, the channels may all be of the same size, or their sizes may be chosen according to the typical size of the functions and parameters of the GPU called by ordinary threads in the system and/or of the synchronization information, which is not limited in this application. In a specific implementation, the user programs of the Guest operating system can dynamically manage the channels in the private memory, that is, a user program can allocate, reallocate and release channels in the private memory at any time.
Specifically, the second storage area may be used for transferring large data blocks, e.g., graphics content data, between all threads of the Guest operating system and the Backend Server threads. In a specific implementation, the common memory may be divided into a plurality of large blocks of different sizes, and the number of blocks may be preset by the developer. Specifically, the user programs in the Guest operating system may manage the blocks in the common memory, that is, a user program may allocate and release blocks in the common memory at any time, and each allocation and release is handled in units of whole blocks.
In a specific implementation, the size of the blocks in the common memory may be adapted to the graphics processing data commonly handled by the GPU. For example, developers have found that in the GPU virtualization process, about 2M to 16M of graphics content data is typically transferred between the first operating system and the second operating system to meet the needs of virtualized GPU graphics processing; therefore, when deciding the block sizes, the common memory may be divided into memory blocks of 2M, 4M, 8M, 16M and the like.
For example, if the total size of the common memory is 32M and it is divided into 5 memory blocks of 2M, 4M, 8M and 16M, then when a user program applies for a 3M space, the 4M memory block may be allocated directly to the corresponding thread, and a free flag is set on that 4M block when the thread releases it.
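As an illustration of this whole-block policy, the following sketch pre-splits a 32M common memory into five blocks (here 2M, 2M, 4M, 8M and 16M, one split consistent with the example above; the patent leaves the exact split to the developer) and allocates and releases them as whole blocks. All identifiers are hypothetical.

```c
/* Whole-block allocator sketch for the common memory (illustrative only). */
#include <stddef.h>
#include <stdbool.h>

#define NUM_BLOCKS 5

struct shm_block {
    size_t offset;   /* offset of the block inside the common memory */
    size_t size;     /* block size in bytes */
    bool   free;     /* free flag, set again when the thread releases it */
};

static struct shm_block blocks[NUM_BLOCKS] = {
    { 0u << 20,  2u << 20,  true },
    { 2u << 20,  2u << 20,  true },
    { 4u << 20,  4u << 20,  true },
    { 8u << 20,  8u << 20,  true },
    { 16u << 20, 16u << 20, true },
};

/* Return the smallest free block that can hold `request` bytes, or NULL.
 * A 3M request therefore gets a 4M block, as in the example above. */
static struct shm_block *alloc_block(size_t request)
{
    struct shm_block *best = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (blocks[i].free && blocks[i].size >= request &&
            (best == NULL || blocks[i].size < best->size))
            best = &blocks[i];
    }
    if (best)
        best->free = false;
    return best;
}

/* Release is also per whole block: only the free flag is reset. */
static void free_block(struct shm_block *b)
{
    b->free = true;
}
```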
In a specific implementation, the physical devices 304a, 304b and 304c may be physical devices that are not integrated into the central processing unit (CPU); more preferably, they may be physical devices with high data throughput, such as a GPU device, a multimedia device, a camera device and the like.
It should be understood that, for purposes of example, only one Guest operating system, one Host operating system, three shared memories, and three physical devices are shown in FIG. 3; in a specific implementation, however, there may be one or more Guest operating systems, one or more Host operating systems, another number of shared memories, or another number of physical devices. That is, the Guest operating systems, Host operating systems, shared memories and physical devices may each be of any number, which is not limited in this application.
It should be understood that, for exemplary purposes, the shared memory shown in FIG. 3 includes both a private memory area and a public memory area, with the private memory divided into 3 channels of equal size and the public memory divided into 4 blocks of unequal size. In a specific implementation, the shared memory may be a memory area containing only a private memory; the private memory may be left undivided, or may be divided into a plurality of channels of different sizes; the public memory may be absent, or may be divided into a plurality of blocks of equal size, and so on, which is not limited in this application.
Next, a device virtualization method according to an embodiment of the present application will be described with reference to the system architecture shown in fig. 3.
Fig. 4 shows a flowchart of a device virtualization method according to a first embodiment of the present application. In the embodiment of the present application, a detailed description is given to a device virtualization method for a GPU device by taking, as an example, a Guest operating system, a Host operating system, a GPU device, and a shared memory corresponding to the GPU device. As shown in fig. 4, a device virtualization method according to an embodiment of the present application includes the following steps:
S401, when the Qemu corresponding to the Guest system is started, a shared memory corresponding to the GPU device is created.
Specifically, Qemu may create a corresponding shared memory through a system call.
Specifically, a specific block of address space may be partitioned from memory as the shared memory for the GPU device. The size of the shared memory can be set by the developer and adapted to the corresponding physical device. For example, the shared memory corresponding to the GPU device may be set to 128M; the shared memory corresponding to the multimedia device may be set to 64M; and the shared memory corresponding to the camera device may be set to 64M, which is not limited in this application.
It should be understood that, when there are multiple Guest systems, the Qemu of each Guest system may create its own shared memory for each physical device, or the multiple Guest systems may share one shared memory corresponding to a physical device. Different schemes may also be adopted for different physical devices: for example, for the GPU device each Guest system uses an independent shared memory, while for the multimedia device all Guest systems share one shared memory. This is not limited in the present application.
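As a concrete illustration of S401, the following host-side sketch creates one such region using POSIX shared memory. The patent only states that Qemu creates the region through a system call, so the shm_open/ftruncate/mmap calls and the region name below are assumptions.

```c
/* Host-side sketch: create a 128M shared memory region for the GPU device. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPU_SHM_NAME "/gpu_shared_mem"     /* hypothetical region name */
#define GPU_SHM_SIZE (128u << 20)          /* 128M, as in the example above */

void *create_gpu_shared_memory(void)
{
    int fd = shm_open(GPU_SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) {
        perror("shm_open");
        return NULL;
    }
    if (ftruncate(fd, GPU_SHM_SIZE) < 0) {
        perror("ftruncate");
        close(fd);
        return NULL;
    }
    /* Map the region into Qemu's own address space; S402 then exposes the
     * same region to the Guest as PCI device memory. */
    void *base = mmap(NULL, GPU_SHM_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd);
    return base == MAP_FAILED ? NULL : base;
}
```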
S402, Qemu further maps the shared memory into the PCI (Peripheral Component Interconnect) device memory space of the Guest system, and provides a virtual PCI register for the Guest system as the PCI configuration space.
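For S402, the fragment below sketches how such a region could be exposed to the Guest as a PCI BAR inside a Qemu device model. The device name, state structure and BAR numbers are assumptions; the Qemu-internal calls (memory_region_init_io, memory_region_init_ram_ptr, pci_register_bar) follow mainline Qemu and only compile inside the Qemu source tree, so this is a schematic fragment rather than the patent's implementation.

```c
/* Schematic Qemu PCI device-model fragment (assumed, not from the patent). */
#include "qemu/osdep.h"
#include "hw/pci/pci.h"

typedef struct IVDevState {
    PCIDevice parent_obj;
    MemoryRegion regs;    /* BAR 0: virtual PCI registers / config channel */
    MemoryRegion shm;     /* BAR 2: the shared memory created in S401      */
    void *shm_base;       /* host pointer from create_gpu_shared_memory()  */
} IVDevState;

static const MemoryRegionOps ivdev_reg_ops = {
    .endianness = DEVICE_NATIVE_ENDIAN,   /* read/write callbacks elided */
};

static void ivdev_realize(PCIDevice *pdev, Error **errp)
{
    IVDevState *s = (IVDevState *)pdev;

    /* BAR 0: small register window used as the PCI configuration space. */
    memory_region_init_io(&s->regs, OBJECT(pdev), &ivdev_reg_ops, s,
                          "ivdev-regs", 4096);
    pci_register_bar(pdev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &s->regs);

    /* BAR 2: the shared memory, now visible in the Guest's PCI device
     * memory space. */
    memory_region_init_ram_ptr(&s->shm, OBJECT(pdev), "ivdev-shm",
                               128u << 20, s->shm_base);
    pci_register_bar(pdev, 2, PCI_BASE_ADDRESS_SPACE_MEMORY, &s->shm);
}
```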
S403, Guest Linux Kernel divides the shared memory into private memory and public memory.
In particular, Guest Linux Kernel may partition the shared memory when initializing the GPU device so that the shared memory supports access by multiple processes or threads.
Specifically, the private memory, that is, the first storage area may be divided into a plurality of channels of a first preset number; the common memory, i.e., the second storage area, may be divided into a second preset number of blocks. Specifically, the first preset number and the second preset number may be set by a developer.
Specifically, the size of the plurality of channels of the private memory may be equal; the size of the plurality of blocks of the common memory may be adapted to process data of the physical device corresponding to the shared memory.
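A possible layout of the divided region is sketched below, with illustrative sizes and counts (the patent leaves the first and second preset numbers, and the exact sizes, to the developer). Each channel in the private area serves one Guest thread; the public area holds whole blocks for bulk data such as graphics content.

```c
/* Illustrative layout of one shared memory region after S403 (all sizes
 * and counts are example values, not mandated by the patent). */
#define SHM_SIZE        (128u << 20)      /* e.g. the 128M GPU region      */
#define PRIVATE_SIZE    (32u << 20)       /* first storage area            */
#define PUBLIC_SIZE     (SHM_SIZE - PRIVATE_SIZE)
#define NUM_CHANNELS    64                /* "first preset number"         */
#define CHANNEL_SIZE    (PRIVATE_SIZE / NUM_CHANNELS)

struct shm_channel {                      /* one channel per Guest thread  */
    unsigned int in_use;
    char data[CHANNEL_SIZE - sizeof(unsigned int)];
};

struct shm_layout {
    struct shm_channel channels[NUM_CHANNELS];   /* private memory */
    unsigned char public_blocks[PUBLIC_SIZE];    /* public memory  */
};
```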
S404, when the front-end thread is started, allocating corresponding shared memory address spaces to the front-end thread and the corresponding back-end thread.
In a specific implementation, when an API call instruction is received, a front-end thread, i.e., a first thread, corresponding to the API call instruction may be created, and a thread creation instruction corresponding to the API call instruction is sent to the Host operating system to trigger the Host operating system to create the corresponding back-end thread, i.e., a second thread.
In a specific implementation, a user may perform user operations on an application thread in the Guest operating system; for example, in an application such as WeChat or QQ, the user may open a new window, create a new page, or play multimedia/video.
In a specific implementation, when a user operation is received, the thread may generate an API call instruction according to that operation in order to call the corresponding front-end thread. For example, when the user opens a new window or creates a new page, the thread calls the corresponding graphics processing interface; when the user plays multimedia/video, the thread calls the corresponding multimedia/video interface; and so on.
Specifically, when a front-end thread is called, the Host operating system is typically also triggered to create a back-end thread corresponding to that front-end thread. Specifically, if the Guest system calls a graphics program processing interface, the corresponding back-end thread is created in the graphics processing Backend Server of the Host operating system; if it calls the multimedia program processing interface, the corresponding back-end thread is created in the multimedia processing Backend Server of the Host operating system.
In a specific implementation, when the front-end thread is started, it obtains from the Guest Linux Kernel the address space of the private memory channel corresponding to it and the public memory address space allocated to it, maps both into its own address space, and establishes a synchronization control channel with Qemu.
Next, the address space of the private memory channel corresponding to the front-end thread and the address space of the public memory may be transmitted to Qemu through the PCI configuration space; Qemu then sends these address spaces to the Backend Server through an inter-process communication mechanism, and they are mapped into the address space of the back-end thread.
At this point, the initialization of the shared memory between the front-end thread and the back-end thread is completed.
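The guest-side start-up just described might look like the following sketch. The device node, ioctl number and chan_info structure are all hypothetical; the patent only states that the two address spaces are obtained from the Guest Linux Kernel, mapped into the front-end thread, and then handed to Qemu through the PCI configuration space.

```c
/* Guest-side sketch of front-end thread start-up (names are assumptions). */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

struct chan_info {               /* filled in by the guest driver (assumed) */
    unsigned long chan_offset;   /* offset of this thread's private channel */
    unsigned long chan_size;
    unsigned long pub_offset;    /* offset of the public block granted to it */
    unsigned long pub_size;
};

#define VGPU_ALLOC_CHANNEL _IOR('v', 1, struct chan_info)  /* assumed ioctl */

int frontend_attach(void **chan, void **pub)
{
    struct chan_info info;
    int fd = open("/dev/vgpu0", O_RDWR);                   /* assumed node */
    if (fd < 0 || ioctl(fd, VGPU_ALLOC_CHANNEL, &info) < 0)
        return -1;

    /* Map the private channel and the public block into this thread's
     * address space; the same offsets are later passed to Qemu through the
     * PCI configuration space and synchronized to the back-end thread. */
    *chan = mmap(NULL, info.chan_size, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, info.chan_offset);
    *pub  = mmap(NULL, info.pub_size, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, info.pub_offset);
    close(fd);
    return (*chan == MAP_FAILED || *pub == MAP_FAILED) ? -1 : 0;
}
```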
S405, implementing virtualization of the physical device between the front-end thread and the corresponding back-end thread through the shared memory.
In a specific implementation, when an API operating instruction for the GPU device is received at a front-end thread in the Guest user space, a corresponding processing instruction may be determined according to the API operating instruction; the processing instruction is transmitted through the shared memory to the back-end thread in the Backend Server of the Host system; the processing instruction is then executed at the back-end thread, and the processing result is returned to the front-end thread, either as the response to the API call instruction or via the shared memory.
Specifically, transmitting the processing instruction through the shared memory to the back-end thread in the Backend Server of the Host system can be implemented in any of the following ways:
In a first embodiment, when the processing instruction includes an API call function and its parameters, the front-end thread may write the function and the parameters into its private memory channel, and send the offset address of the function and the parameters to the back-end thread, thereby triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address. Specifically, the offset address may be sent through Qemu to the Backend Server of the Host operating system and then synchronized by the Backend Server to the back-end thread.
In a second embodiment, when the processing instruction includes an API call function, parameters and synchronization information, the front-end thread may write the function, the parameters and the synchronization information into its private memory channel, and send the offset address of the function and the parameters to the back-end thread, thereby triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address. Specifically, the offset address may be sent through Qemu to the Backend Server of the Host operating system and then synchronized by the Backend Server to the back-end thread.
In a third embodiment, when the processing instruction includes an API call function, parameters and graphics content data, the front-end thread may write the function and the parameters into its private memory channel, write the graphics content data into the public memory, and send the offset addresses of the processing instruction in the shared memory to the back-end thread, thereby triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset addresses. Specifically, the offset addresses may be sent through Qemu to the Backend Server of the Host operating system and then synchronized by the Backend Server to the back-end thread.
In a fourth embodiment, when the processing instruction includes an API call function, parameters, synchronization information and graphics content data, the front-end thread may write the function, the parameters and the synchronization information into its private memory channel, write the graphics content data into the public memory, and send the offset addresses of the processing instruction in the shared memory to the back-end thread, thereby triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset addresses. Specifically, the offset addresses may be sent through Qemu to the Backend Server of the Host operating system and then synchronized by the Backend Server to the back-end thread.
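The fourth variant, seen from the front-end side, might be serialized as in the sketch below. The call_header layout and the notify_backend hook are assumptions; the patent only specifies what goes into the private channel, what goes into the public memory, and that only offset addresses are forwarded through Qemu.

```c
/* Front-end sketch: serialize one API call into the shared memory
 * (fourth variant: function, parameters, synchronization information and
 * graphics content data).  Layout and helper names are assumptions. */
#include <stdint.h>
#include <string.h>

struct call_header {              /* lives at the start of the channel  */
    uint32_t func_id;             /* which API function is being called */
    uint32_t param_bytes;         /* size of the serialized parameters  */
    uint32_t sync_flags;          /* synchronization information        */
    uint64_t data_offset;         /* offset of bulk data in public mem  */
    uint64_t data_bytes;
};

/* Hypothetical hook: writes the offsets into the virtual PCI registers so
 * that Qemu forwards them to the Backend Server over IPC. */
void notify_backend(uint64_t chan_offset, uint64_t data_offset);

void frontend_call(void *chan, uint64_t chan_offset,
                   void *pub_block, uint64_t pub_offset,
                   uint32_t func_id, const void *params, uint32_t param_bytes,
                   const void *gfx_data, uint64_t gfx_bytes,
                   uint32_t sync_flags)
{
    struct call_header *hdr = chan;

    hdr->func_id     = func_id;
    hdr->param_bytes = param_bytes;
    hdr->sync_flags  = sync_flags;
    hdr->data_offset = pub_offset;
    hdr->data_bytes  = gfx_bytes;

    memcpy(hdr + 1, params, param_bytes);     /* parameters after the header */
    memcpy(pub_block, gfx_data, gfx_bytes);   /* bulk data in public memory  */

    notify_backend(chan_offset, pub_offset);  /* only offsets cross over     */
}
```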
In a specific implementation, the switching from the front-end thread to the back-end thread, and the switching between the first operating system and the second operating system, are carried out using techniques well known to those skilled in the art, and are not described in detail here.
In specific implementation, the back-end thread drives the corresponding physical device/module to execute the corresponding processing instruction, and obtains a processing result.
In a specific implementation, the back-end thread may feed the processing result back directly to the user as the response to the application interface call instruction, or may return the processing result to the front-end thread, which then delivers the response.
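On the other side, the back-end thread might handle the forwarded offsets as in the sketch below. gpu_execute and signal_frontend stand in for the real Host kernel driver call and for the synchronization channel back to the front-end thread; both, like the header layout, are assumptions.

```c
/* Back-end sketch: resolve the offsets received via Qemu against the
 * Backend Server's own mapping of the same shared memory, drive the
 * physical device, and hand the result back. */
#include <stdint.h>

struct call_header {              /* same layout as in the front-end sketch */
    uint32_t func_id;
    uint32_t param_bytes;
    uint32_t sync_flags;
    uint64_t data_offset;
    uint64_t data_bytes;
};

/* Hypothetical helpers: one issues the real driver call through the Host
 * Linux Kernel, the other wakes the waiting front-end thread. */
int  gpu_execute(uint32_t func_id, const void *params, const void *data);
void signal_frontend(struct call_header *hdr, int result);

void backend_handle(void *shm_base, uint64_t chan_offset)
{
    struct call_header *hdr =
        (struct call_header *)((char *)shm_base + chan_offset);
    const void *params = hdr + 1;
    const void *data   = (char *)shm_base + hdr->data_offset;

    int result = gpu_execute(hdr->func_id, params, data);

    /* The result goes back either through the shared memory or directly as
     * the response to the API operating instruction. */
    signal_frontend(hdr, result);
}
```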
In this way, remote invocation of the physical device by the user program in the Guest operating system is realized; that is, virtualization of the physical device is achieved.
By adopting the device virtualization method in the embodiment of the application, the shared memory is established between the first operating system and the second operating system, and then the virtualization of the physical device is realized through the shared memory.
Example two
Next, a device virtualization method according to a second embodiment of the present application will be described with reference to the system architecture shown in fig. 3.
Fig. 5 shows a flowchart of a device virtualization method according to the second embodiment of the present application. In this embodiment, a Guest operating system, a Host operating system, and three physical devices (a GPU device, a multimedia device and a camera device) are taken as an example to describe in detail a device virtualization method for a plurality of physical devices. As shown in fig. 5, the device virtualization method according to this embodiment of the present application includes the following steps:
and S501, respectively creating shared memories corresponding to the GPU equipment, the multimedia equipment and the camera equipment when the Qemu corresponding to the Guest system is started.
In specific implementation, the creation process of the shared memory corresponding to each of the multimedia device and the camera device may refer to the creation process of the shared memory corresponding to the GPU device in S401 in the first embodiment of the present application, which is not repeated herein.
S502, Qemu further maps each shared memory into the PCI device memory space of the Guest system, and provides a corresponding number of virtual PCI registers for the Guest system as PCI configuration spaces.
In one embodiment, the number of virtual PCI registers corresponds to the number of shared memories, and the virtual PCI registers correspond to the shared memories one to one.
S503, Guest Linux Kernel divides the shared memories into private memories and public memories, respectively.
In specific implementation, reference may be made to the implementation of S403 in the first embodiment of the present application for implementation of this step, which is not repeated herein.
S504, when a front-end thread is started, the shared memory corresponding to the front-end thread is determined according to the API call instruction that calls the front-end thread, and corresponding shared memory address spaces are allocated to the front-end thread and the corresponding back-end thread.
Specifically, if the API call instruction that calls the front-end thread is an OpenGL interface call instruction, it can be determined that the corresponding physical device is the GPU device and that the shared memory corresponding to the front-end thread is the shared memory corresponding to that device, e.g., 303a; if the API call instruction is an OpenMAX interface call instruction, the corresponding physical device is the multimedia device and the corresponding shared memory is that of the multimedia device, e.g., 303b; if the API call instruction is a Camera interface call instruction, the corresponding physical device is the camera device and the corresponding shared memory is that of the camera device, e.g., 303c.
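The per-device selection in S504 can be pictured as a simple dispatch from the interface type to the shared memory region, sketched below with assumed names (the mapping to 303a/303b/303c follows Fig. 3).

```c
/* Dispatch sketch: the API call type selects both the physical device and
 * the shared memory region bound to it (names are illustrative). */
enum api_type  { API_OPENGL, API_OPENMAX, API_CAMERA };
enum device_id { DEV_GPU, DEV_MULTIMEDIA, DEV_CAMERA };

struct shm_region { enum device_id dev; void *base; unsigned long size; };

/* Corresponding to 303a, 303b and 303c in Fig. 3. */
extern struct shm_region shm_gpu, shm_media, shm_camera;

struct shm_region *shm_for_call(enum api_type t)
{
    switch (t) {
    case API_OPENGL:  return &shm_gpu;      /* GPU device        */
    case API_OPENMAX: return &shm_media;    /* multimedia device */
    case API_CAMERA:  return &shm_camera;   /* camera device     */
    }
    return 0;
}
```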
In specific implementation, reference may be made to the implementation of S404 in the first embodiment of the present application for the implementation of allocating the corresponding shared memory address space to the front-end thread and the corresponding back-end thread in this step, which is not repeated herein.
S505, virtualization of the physical devices is realized between the front-end threads and the corresponding back-end threads through the shared memories.
In specific implementation, reference may be made to the implementation of S405 in the first embodiment of the present application for implementation of this step, which is not repeated herein.
In this way, remote invocation of a plurality of physical devices by the user programs in the Guest operating system is realized; that is, virtualization of a plurality of physical devices is achieved.
By adopting the device virtualization method in the embodiment of the application, the shared memory is established between the first operating system and the second operating system, and then the virtualization of the physical device is realized through the shared memory.
Based on the same inventive concept, an embodiment of the present application further provides a device virtualization apparatus. Since the principle by which this apparatus solves the problem is similar to that of the device virtualization methods provided in the first and second embodiments of the present application, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Example three
Fig. 6 shows a schematic structural diagram of a device virtualization apparatus according to a third embodiment of the present application.
As shown in fig. 6, the device virtualization apparatus 600 according to the third embodiment of the present application includes: a shared memory creating module 601, configured to create a shared memory at a first operating system, and map the shared memory into a peripheral component interconnect standard PCI device memory space of a second operating system; wherein the shared memory corresponds to a physical device; a receiving module 602, configured to receive, at the second operating system, an application interface API operating instruction of the physical device, and determine a corresponding processing instruction according to the API operating instruction; a sending module 603, configured to transmit the processing instruction to a first operating system through the shared memory; the processing module 604 is configured to execute the processing instruction at the first operating system, and return a processing result to the second operating system as a response to the API operating instruction or via the shared memory.
Specifically, the shared memory creation module specifically includes: the shared memory creating submodule is used for creating a shared memory for the physical device when the Qemu corresponding to the second operating system is started; the mapping submodule is used for mapping the shared memory into a PCI equipment memory space of a second operating system; and provides virtual PCI registers for the second operating system as PCI configuration space.
Specifically, when there are a plurality of physical devices, the shared memory creation module is specifically configured to: create a shared memory for each physical device when the emulation processor Qemu corresponding to the second operating system is started; map the shared memories into the PCI device memory space of the second operating system respectively; and provide a plurality of virtual PCI registers for the second operating system as PCI configuration spaces, the plurality of PCI registers respectively corresponding to the plurality of shared memories.
Specifically, the device virtualization apparatus according to the third embodiment of the present application further includes: a dividing module, configured to divide the shared memory into a first storage area and a second storage area, wherein the first storage area includes a first preset number of channels and the second storage area includes a second preset number of blocks.
Specifically, the sizes of the channels of the first storage area are equal; the size of the plurality of blocks of the second storage area is adapted to the processing data of the physical device corresponding to the shared memory.
Specifically, when there are a plurality of physical devices, the apparatus further includes: a shared memory determining module, configured to determine the physical device corresponding to the API operating instruction according to the API operating instruction, and to determine the corresponding shared memory according to that physical device.
Specifically, the device virtualization apparatus according to the third embodiment of the present application further includes: the first mapping module is used for creating a first thread corresponding to an API calling instruction when the API calling instruction is received in a second operating system; sending a thread creating instruction corresponding to the API calling instruction to the first operating system; allocating the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area for the first thread; transmitting the address space of the channel in the first storage area and the address space of the second storage area to Qemu of the second operating system through a PCI configuration space; the second mapping module is used for creating a corresponding second thread after receiving a thread creating instruction corresponding to the API calling instruction in the first operating system; mapping the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area into the address space of the second thread; the sending module is specifically configured to write a processing instruction into an address space of a corresponding channel in the first storage area and an address space of the corresponding second storage area through the first thread; and sending the offset address of the processing instruction in the address space to the first operating system through Qemu; and in the first operating system, synchronizing the received offset address to the corresponding second thread.
By adopting the device virtualization apparatus in the embodiment of the application, the shared memory is created between the first operating system and the second operating system, and then the virtualization of the physical device is realized through the shared memory.
Based on the same inventive concept, the embodiment of the present application further provides a device virtualization system, and since the principle of solving the problem of the system is similar to the device virtualization methods provided in the first and second embodiments of the present application, the implementation of the system may refer to the implementation of the method, and repeated details are not repeated.
Example four
Fig. 7 shows a schematic structural diagram of a device virtualization system according to a fourth embodiment of the present application.
As shown in fig. 7, a device virtualization system 700 according to the fourth embodiment of the present application includes: the second operating system 701, configured to receive an application programming interface (API) call instruction for the physical device, determine the processing instruction corresponding to the application interface call instruction, and send the processing instruction to the first operating system 702 through the shared memory corresponding to the physical device; one or more shared memories 703, used for passing the processing instructions between the first operating system and the second operating system, wherein the one or more shared memories respectively correspond to the physical devices; and the first operating system 702, configured to receive and execute the processing instruction, and return the processing result to the second operating system, either as the response to the application interface call instruction or via the shared memory corresponding to the physical device.
In specific implementation, the implementation of the second operating system 701 may refer to the implementation of the second operating system 302 in the first embodiment of the present application, and repeated details are not repeated.
In specific implementation, the implementation of the first operating system 702 may refer to the implementation of the first operating system 301 in the first embodiment of the present application, and repeated descriptions are omitted.
In specific implementation, the implementation of the shared memory 703 may refer to the implementation of the shared memories 303a, 303b, and 303c in the first embodiment of the present application, and repeated details are not described herein.
Specifically, the first operating system may be a Host operating system, and the second operating system may be a Guest operating system.
By adopting the device virtualization system in the embodiment of the application, the shared memory is established between the first operating system and the second operating system, and then the virtualization of the physical device is realized through the shared memory.
Example five
Based on the same inventive concept, an electronic device 800 as shown in fig. 8 is also provided in the embodiment of the present application.
As shown in fig. 8, an electronic device 800 according to the fifth embodiment of the present application includes: a display 801, a memory 802, one or more processors 803 and a bus 804; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the method according to any one of the first or second embodiments of the present application.
Based on the same inventive concept, a computer program product that can be used in conjunction with an electronic device 800 including a display is also provided in the embodiments of the present application, and includes a computer-readable storage medium and a computer program mechanism embedded therein, where the computer program mechanism includes instructions for executing the steps of the method in any one of the first or second embodiments of the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (16)

1. A device virtualization method, comprising:
creating a shared memory at a first operating system, and mapping the shared memory into a Peripheral Component Interconnect (PCI) device memory space of a second operating system; wherein the shared memory corresponds to a physical device; the shared memory comprises a first storage area, and the first storage area comprises a plurality of channels with a first preset number; one channel corresponds to one thread of the second operating system;
receiving, at the second operating system, an application programming interface (API) operating instruction for the physical device, and determining a corresponding processing instruction according to the API operating instruction; transmitting the processing instruction to the first operating system through the shared memory;
and executing the processing instruction at the first operating system, and returning a processing result to the second operating system as a response of the API operating instruction or through the shared memory.
2. The method of claim 1, wherein creating a shared memory at a first operating system and mapping the shared memory to a peripheral component interconnect standard (PCI) device memory space of a second operating system comprises:
when the emulation processor Qemu corresponding to the second operating system is started, establishing a shared memory for the physical device;
mapping the shared memory into a PCI equipment memory space of the second operating system; and providing a virtual PCI register for the second operating system as a PCI configuration space.
3. The method of claim 1, wherein there are a plurality of the physical devices, and creating a shared memory at a first operating system and mapping the shared memory to a peripheral component interconnect standard (PCI) device memory space of a second operating system comprises:
when the corresponding Qemu of the second operating system is started, respectively establishing a shared memory for each physical device;
mapping the shared memories into PCI equipment memory spaces of a second operating system respectively; and providing a plurality of virtual PCI registers as PCI configuration spaces for the second operating system, wherein the plurality of PCI registers respectively correspond to the plurality of shared memories.
4. The method of claim 1, wherein after creating the shared memory at the first operating system and mapping the shared memory to a peripheral component interconnect standard (PCI) device memory space of the second operating system, prior to receiving application interface (API) operating instructions for the physical device at the second operating system, further comprising:
and dividing the shared memory into a first storage area and a second storage area, wherein the second storage area comprises a plurality of blocks with a second preset number.
5. The method of claim 4, wherein the plurality of channels of the first storage area are equal in size; the sizes of the plurality of blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
6. The method of claim 1, wherein there are a plurality of the physical devices, and before transmitting the processing instruction to the first operating system through the shared memory, the method further comprises:
and determining the physical equipment corresponding to the API operation instruction according to the API operation instruction, and determining the corresponding shared memory according to the physical equipment.
7. The method of claim 4, further comprising, after mapping the shared memory into the peripheral component interconnect (PCI) device memory space of the second operating system, and before receiving the application programming interface (API) operation instruction for the physical device at the second operating system:
in the second operating system, when an API call instruction is received, creating a first thread corresponding to the API call instruction; sending a thread creation instruction corresponding to the API call instruction to the first operating system; allocating, for the first thread, the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area; and transmitting the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space;
in the first operating system, after the thread creation instruction corresponding to the API call instruction is received, creating a corresponding second thread; and mapping the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area into the address space of the second thread;
wherein transmitting the processing instruction to the first operating system through the shared memory specifically comprises:
writing the processing instruction, through the first thread, into the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area; sending the offset address of the processing instruction in the address space to the first operating system through Qemu; and, in the first operating system, synchronizing the received offset address to the corresponding second thread.
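A hedged sketch of this data path, reusing the layout assumed above: the first thread writes the processing instruction into its channel and only the instruction's offset within the shared memory is forwarded through Qemu, since the two operating systems map the same pages at different virtual addresses. The function names are hypothetical.

/* Second-operating-system side (first thread): place the instruction in the
 * thread's channel and return the offset that Qemu will forward. */
#include <stdint.h>
#include <string.h>

#define CHANNEL_SIZE 4096

static uint64_t write_instruction(uint8_t *shm_base, uint64_t chan_off,
                                  const void *instr, size_t len)
{
    if (len > CHANNEL_SIZE)
        return UINT64_MAX;                   /* does not fit in one channel */
    memcpy(shm_base + chan_off, instr, len); /* write into our own channel  */
    return chan_off;                         /* only this offset is sent on */
}

/* First-operating-system side (second thread): the same pages are mapped at
 * a different virtual address, so the received offset is enough to locate
 * the instruction again. */
static const void *resolve_instruction(const uint8_t *shm_base,
                                       uint64_t chan_off)
{
    return shm_base + chan_off;
}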
8. A device virtualization apparatus, comprising:
a shared memory creation module, configured to create a shared memory at a first operating system and map the shared memory into a peripheral component interconnect (PCI) device memory space of a second operating system, wherein the shared memory corresponds to a physical device, the shared memory comprises a first storage area, the first storage area comprises a first preset number of channels, and one channel corresponds to one thread of the second operating system;
a receiving module, configured to receive an application programming interface (API) operation instruction for the physical device at the second operating system, and to determine a corresponding processing instruction according to the API operation instruction;
a sending module, configured to transmit the processing instruction to the first operating system through the shared memory; and
a processing module, configured to execute the processing instruction at the first operating system and to return a processing result to the second operating system, either as a response to the API operation instruction or through the shared memory.
9. The apparatus of claim 8, wherein the shared memory creation module specifically comprises:
a shared memory creation sub-module, configured to create a shared memory for the physical device when the Qemu corresponding to the second operating system is started; and
a mapping sub-module, configured to map the shared memory into a PCI device memory space of the second operating system, and to provide a virtual PCI register to the second operating system as a PCI configuration space.
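For the second-operating-system side of this mapping, a hedged user-space illustration: once the shared memory is visible as a PCI BAR, a Linux guest can reach it by mmap()ing the BAR's sysfs resource file (a real driver would instead map the BAR in kernel space). The PCI address and size below are placeholders.

/* Guest-side sketch: map the virtual PCI device's BAR 0, which Qemu backs
 * with the shared memory created in the first operating system. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BAR_PATH "/sys/bus/pci/devices/0000:00:05.0/resource0" /* placeholder */
#define BAR_SIZE (16 * 1024 * 1024)                            /* must match  */

static void *map_shared_bar(void)
{
    int fd = open(BAR_PATH, O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open BAR");
        return NULL;
    }
    void *base = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd);
    return base == MAP_FAILED ? NULL : base;
}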
10. The apparatus of claim 8, wherein there are a plurality of physical devices, and the shared memory creation module is specifically configured to:
when the emulation processor (Qemu) corresponding to the second operating system is started, create a respective shared memory for each physical device; and
map the shared memories respectively into PCI device memory spaces of the second operating system, and provide a plurality of virtual PCI registers to the second operating system as PCI configuration spaces, wherein the plurality of virtual PCI registers respectively correspond to the plurality of shared memories.
11. The apparatus of claim 8, further comprising:
a division module, configured to divide the shared memory into a first storage area and a second storage area, wherein the second storage area comprises a second preset number of blocks.
12. The apparatus of claim 11, wherein the plurality of channels of the first storage area are equal in size, and the sizes of the plurality of blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
13. The apparatus of claim 8, wherein there are a plurality of physical devices, the apparatus further comprising:
a shared memory determination module, configured to determine the physical device corresponding to the API operation instruction according to the API operation instruction, and to determine the corresponding shared memory according to the physical device.
14. The apparatus of claim 11, further comprising:
a first mapping module, configured to: when an API call instruction is received in the second operating system, create a first thread corresponding to the API call instruction; send a thread creation instruction corresponding to the API call instruction to the first operating system; allocate, for the first thread, the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area; and transmit the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space; and
a second mapping module, configured to: after the thread creation instruction corresponding to the API call instruction is received in the first operating system, create a corresponding second thread, and map the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area into the address space of the second thread;
wherein the sending module is specifically configured to: write the processing instruction, through the first thread, into the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area; send the offset address of the processing instruction in the address space to the first operating system through Qemu; and, in the first operating system, synchronize the received offset address to the corresponding second thread.
15. An electronic device, comprising: a display, a memory, and one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules comprising instructions for performing the steps of the method of any of claims 1-7.
16. A computer-readable storage medium having stored thereon a computer program product encoding instructions for performing a process, the process comprising the method according to any of claims 1-7.
CN201680002834.3A 2016-12-29 2016-12-29 Equipment virtualization method, device and system, electronic equipment and computer program product Active CN107077377B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113265 WO2018119952A1 (en) 2016-12-29 2016-12-29 Device virtualization method, apparatus, system, and electronic device, and computer program product

Publications (2)

Publication Number Publication Date
CN107077377A CN107077377A (en) 2017-08-18
CN107077377B true CN107077377B (en) 2020-08-04

Family

ID=59623873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680002834.3A Active CN107077377B (en) 2016-12-29 2016-12-29 Equipment virtualization method, device and system, electronic equipment and computer program product

Country Status (2)

Country Link
CN (1) CN107077377B (en)
WO (1) WO2018119952A1 (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107741863A (en) * 2017-10-08 2018-02-27 深圳市星策网络科技有限公司 The driving method and device of a kind of video card
CN108932213A (en) * 2017-10-10 2018-12-04 北京猎户星空科技有限公司 The means of communication, device, electronic equipment and storage medium between multiple operating system
CN109669782A (en) * 2017-10-13 2019-04-23 阿里巴巴集团控股有限公司 Hardware abstraction layer multiplexing method, device, operating system and equipment
WO2019127191A1 (en) * 2017-12-28 2019-07-04 深圳前海达闼云端智能科技有限公司 File system sharing method and apparatus for multi-operating system, and electronic device
CN108124475B (en) * 2017-12-29 2022-05-20 达闼机器人股份有限公司 Virtual system Bluetooth communication method and device, virtual system, storage medium and electronic equipment
CN109343922B (en) * 2018-09-17 2022-01-11 广东微云科技股份有限公司 GPU (graphics processing Unit) virtual picture display method and device
CN109725867A (en) * 2019-01-04 2019-05-07 中科创达软件股份有限公司 Virtual screen sharing method, device and electronic equipment
CN112131146B (en) * 2019-06-24 2022-07-12 维塔科技(北京)有限公司 Method and device for acquiring equipment information, storage medium and electronic equipment
CN110442389B (en) * 2019-08-07 2024-01-09 北京技德系统技术有限公司 Method for sharing GPU (graphics processing Unit) in multi-desktop environment
CN112860506B (en) * 2019-11-28 2024-05-17 阿里巴巴集团控股有限公司 Method, device, system and storage medium for processing monitoring data
CN111510780B (en) * 2020-04-10 2021-10-26 广州方硅信息技术有限公司 Video live broadcast control, bridging, flow control and broadcast control method and client
CN111522670A (en) * 2020-05-09 2020-08-11 中瓴智行(成都)科技有限公司 GPU virtualization method, system and medium for Android system
CN112015605B (en) * 2020-07-28 2024-05-14 深圳市金泰克半导体有限公司 Memory testing method and device, computer equipment and storage medium
CN112685197B (en) * 2020-12-28 2022-08-23 浪潮软件科技有限公司 Interface data interactive system
CN115081010A (en) * 2021-03-16 2022-09-20 华为技术有限公司 Distributed access control method, related device and system
CN112764872B (en) * 2021-04-06 2021-07-02 阿里云计算有限公司 Computer device, virtualization acceleration device, remote control method, and storage medium
CN115437717A (en) * 2021-06-01 2022-12-06 北京小米移动软件有限公司 Cross-operating-system calling method and device and electronic equipment
CN113379589A (en) * 2021-07-06 2021-09-10 湖北亿咖通科技有限公司 Dual-system graphic processing method and device and terminal
CN113805952B (en) * 2021-09-17 2023-10-31 中国联合网络通信集团有限公司 Peripheral virtualization management method, server and system
CN114047960A (en) * 2021-11-10 2022-02-15 北京鲸鲮信息系统技术有限公司 Operating system running method and device, electronic equipment and storage medium
CN114327944B (en) * 2021-12-24 2022-11-11 科东(广州)软件科技有限公司 Method, device, equipment and storage medium for sharing memory by multiple systems
CN114661497B (en) * 2022-03-31 2023-01-10 慧之安信息技术股份有限公司 Memory sharing method and system for partition of operating system
CN114816417B (en) * 2022-04-18 2022-10-11 北京凝思软件股份有限公司 Cross compiling method, device, computing equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477511A (en) * 2008-12-31 2009-07-08 杭州华三通信技术有限公司 Method and apparatus for sharing memory medium between multiple operating systems
CN101847105A (en) * 2009-03-26 2010-09-29 联想(北京)有限公司 Computer and internal memory sharing method of a plurality of operation systems
CN103077071A (en) * 2012-12-31 2013-05-01 北京启明星辰信息技术股份有限公司 Method and system for acquiring process information of KVM (Kernel-based Virtual Machine)
CN104216862A (en) * 2013-05-29 2014-12-17 华为技术有限公司 Method and device for communication between user process and system service
CN102541618B (en) * 2010-12-29 2015-05-27 中国移动通信集团公司 Implementation method, system and device for virtualization of universal graphic processor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101661381B (en) * 2009-09-08 2012-05-30 华南理工大学 Data sharing and access control method based on Xen
US10061701B2 (en) * 2010-04-26 2018-08-28 International Business Machines Corporation Sharing of class data among virtual machine applications running on guests in virtualized environment using memory management facility
CN102262557B (en) * 2010-05-25 2015-01-21 运软网络科技(上海)有限公司 Method for constructing virtual machine monitor by bus architecture and performance service framework

Also Published As

Publication number Publication date
CN107077377A (en) 2017-08-18
WO2018119952A1 (en) 2018-07-05

Similar Documents

Publication Publication Date Title
CN107077377B (en) Equipment virtualization method, device and system, electronic equipment and computer program product
CN107003892B (en) GPU virtualization method, device and system, electronic equipment and computer program product
CN108243118B (en) Method for forwarding message and physical host
CN103034524B (en) Half virtualized virtual GPU
TWI475488B (en) Virtual machine system, virtualization method and machine-readable medium containing instructions for virtualization
US9965826B2 (en) Resource management
WO2017024783A1 (en) Virtualization method, apparatus and system
CN106796530B (en) A kind of virtual method, device and electronic equipment, computer program product
US11204790B2 (en) Display method for use in multi-operating systems and electronic device
US20240220309A1 (en) Flexible source assignment to physical and virtual functions in a virtualized processing system
CN106598696B (en) Method and device for data interaction between virtual machines
US20170024231A1 (en) Configuration of a computer system for real-time response from a virtual machine
US20170147374A1 (en) Virtual pci device based hypervisor bypass for vm bridging
CN116320469B (en) Virtualized video encoding and decoding system and method, electronic equipment and storage medium
US20110134132A1 (en) Method and system for transparently directing graphics processing to a graphical processing unit (gpu) of a multi-gpu system
US10587861B2 (en) Flicker-free remoting support for server-rendered stereoscopic imaging
CN113419845A (en) Calculation acceleration method and device, calculation system, electronic equipment and computer readable storage medium
CN101154166A (en) Virtual machine system and its graphics card access method
CN115904617A (en) GPU virtualization implementation method based on SR-IOV technology
CN113485791B (en) Configuration method, access method, device, virtualization system and storage medium
US20180052700A1 (en) Facilitation of guest application display from host operating system
CN114253656A (en) Overlay container storage drive for microservice workloads
CN114253704A (en) Method and device for allocating resources
US20160026567A1 (en) Direct memory access method, system and host module for virtual machine
JPWO2018173300A1 (en) I / O control method and I / O control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant