CN111552554A - Graphic library API agent-based GPU virtualization method, system and medium - Google Patents

Graphic library API agent-based GPU virtualization method, system and medium

Info

Publication number
CN111552554A
CN111552554A (Application CN202010386295.6A)
Authority
CN
China
Prior art keywords
api
library
virtual machine
gpu
graphic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010386295.6A
Other languages
Chinese (zh)
Inventor
陈绪戈
邓华利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongling Zhixing Chengdu Technology Co ltd
Original Assignee
Zhongling Zhixing Chengdu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongling Zhixing Chengdu Technology Co ltd filed Critical Zhongling Zhixing Chengdu Technology Co ltd
Priority to CN202010386295.6A priority Critical patent/CN111552554A/en
Publication of CN111552554A publication Critical patent/CN111552554A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • G06F9/452Remote windowing, e.g. X-Window System, desktop virtualisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/547Remote procedure calls [RPC]; Web services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Stored Programmes (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to GPU virtualization and virtual machine technology, and in particular discloses a GPU virtualization method, system and medium based on a graphics library API agent. The method configures a virtual machine without GPU hardware resources so that, when it needs to process a rendering task, it calls an API of its graphics proxy library and sends a corresponding call notification; it also configures a virtual machine or host with GPU hardware resources to receive call notifications from other virtual machines and, according to each received notification, call the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API, thereby processing the other virtual machines' rendering tasks with its GPU hardware resources. The invention thus converts the virtualization scenario in which several virtual machines share one GPU into the simpler scenario in which several processes on one virtual machine use the GPU, with the advantages of small hardware performance loss and good universality.

Description

Graphic library API agent-based GPU virtualization method, system and medium
Technical Field
The invention relates to GPU virtualization and virtual machine technology, and in particular to a GPU virtualization method, system and medium based on a graphics library API agent.
Background
A Hypervisor, also called a Virtual Machine Monitor (VMM), is an intermediate software layer that runs other operating systems. It sits between the physical hardware and the operating systems above it, allowing multiple operating systems and applications to share one set of basic physical hardware; an operating system running on the Hypervisor is called a Guest operating system (Guest OS).
Because multiple Guest OSs share one set of basic physical hardware, each Guest OS can use an independent physical hardware resource when hardware is sufficient, for example each Guest OS using its own GPU. When hardware resources are insufficient, one physical device must be virtualized and provided to multiple Guest OSs; that is, multiple Guest OSs in effect compete for one GPU hardware resource.
At present, some high-end GPUs on the market provide a hardware virtualization function that uses hardware instructions to virtualize one GPU into several independent GPUs for multiple Guest OSs. Most low-end GPUs, however, lack hardware virtualization, and the existing virtualization methods for them are complex in design, incur a large hardware performance loss, and have poor universality.
Therefore, it is necessary to design a GPU virtualization method with less hardware performance loss and good versatility.
Disclosure of Invention
The invention aims to provide a GPU virtualization method with small hardware performance loss and good universality.
In order to achieve the above purpose, the technical solution adopted by the invention to solve the technical problem is as follows. A GPU virtualization method based on a graphics library API agent comprises the following steps:
configuring a virtual machine without GPU hardware resources so that, when it needs to process a rendering task, it calls an API of its graphics proxy library and sends a corresponding call notification; and configuring a virtual machine or host with GPU hardware resources to receive the call notifications of other virtual machines and, according to each received notification, call the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API, so that the rendering tasks of the other virtual machines are processed by its GPU hardware resources.
According to a specific implementation mode, the graphics library API agent-based GPU virtualization method further comprises configuring the inter-process communication between virtual machines provided by the Hypervisor as Socket communication.
According to a specific implementation mode, in the graphics library API agent-based GPU virtualization method of the present invention, the call notification includes memory address data of the virtual machine that issued it.
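The disclosure does not fix a wire format for the call notification. The following is a minimal sketch, in C, of what such a message might carry: an API identification code, the length of the packed parameter data, and the front-end memory address data mentioned above. All field names and opcode values are assumptions made for illustration and are not part of the disclosure.

    /* Hypothetical layout of a call notification sent from the graphics
     * proxy library of a front-end virtual machine to the back end.
     * Field names and opcode values are illustrative only. */
    #include <stdint.h>

    typedef enum {
        OP_EGL_GET_DISPLAY = 1,
        OP_EGL_CREATE_WINDOW_SURFACE,
        OP_GL_SHADER_SOURCE,
        OP_EGL_SWAP_BUFFERS
        /* ...one identification code per proxied API... */
    } api_opcode_t;

    typedef struct {
        uint32_t opcode;       /* identifies which graphics library API was called */
        uint32_t payload_len;  /* number of bytes of packed scalar parameters */
        uint64_t guest_addr;   /* memory address data of the calling virtual machine,
                                  to be converted by the Hypervisor on the back end */
        uint8_t  payload[];    /* packed scalar parameter data */
    } call_notification_t;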
According to a specific implementation mode, in the graphics library API agent-based GPU virtualization method of the present invention, the function names and parameters of the API set of the graphics proxy library are consistent with those of the API set of the hardware-accelerated graphics library.
Further, the virtual machine without GPU hardware resources is configured so that it cannot create a context for hardware-accelerated graphics library execution, and the virtual machine or host with GPU hardware resources is configured to create the hardware-accelerated graphics library execution context according to the received call notification.
Based on the same inventive concept as the graphics library API agent-based GPU virtualization method disclosed by the invention, the invention also provides, in one implementation aspect, a graphics library API agent-based GPU virtualization system comprising virtual machines and a host. The host is used to run the Hypervisor and provide inter-process communication between the virtual machines;
the virtual machine without GPU hardware resources is used to call the API of its graphics proxy library and send a corresponding call notification when it needs to process a rendering task;
and the virtual machine or host with GPU hardware resources is used to receive the call notifications of other virtual machines and, according to each received notification, call the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API, so as to process the rendering tasks of the other virtual machines through its GPU hardware resources.
Based on the same inventive concept as the graphics library API agent-based GPU virtualization method disclosed by the invention, the invention also provides, in one implementation aspect, an image rendering method comprising the following steps:
after a virtual machine receives a rendering task, if it does not have GPU hardware resources, it calls an API of its graphics proxy library and sends a corresponding call notification;
after receiving the call notification, the virtual machine or host with GPU hardware resources calls the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API according to the received notification, and processes the rendering task of the other virtual machine with its GPU hardware resources.
Furthermore, in one implementation aspect, the invention also provides a readable storage medium storing one or more programs which, when executed by one or more processors, implement the graphics library API agent-based GPU virtualization method of the invention or the image rendering method of the invention.
In summary, compared with the prior art, the invention has the following beneficial effects:
The graphics library API agent-based GPU virtualization method configures a virtual machine without GPU hardware resources so that, when it needs to process a rendering task, it calls an API of its graphics proxy library and sends a corresponding call notification; it also configures a virtual machine or host with GPU hardware resources to receive call notifications from other virtual machines and, according to each received notification, call the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API, thereby processing the other virtual machines' rendering tasks through its GPU hardware resources. The invention thus converts the virtualization scenario in which several virtual machines share one GPU into the simpler scenario in which several processes on one virtual machine use the GPU, with the advantages of small hardware performance loss and good universality.
Description of the drawings:
FIG. 1 is an architecture diagram of a graphics library API agent based GPU virtualization system of the present invention;
FIG. 2 is a flow chart of graphics library API agent based GPU virtualization in accordance with the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the embodiments. It should be understood that the scope of the above-described subject matter is not limited to the following examples, and any techniques implemented based on the disclosure of the present invention are within the scope of the present invention.
As shown in fig. 1, the graphics library API agent-based GPU virtualization system of the present invention runs two Guest OSs on bare hardware, both controlled by one Hypervisor.
The graphics library API agent-based GPU virtualization system of the invention mainly comprises: a front-end Guest OS (i.e. a virtual machine without GPU hardware resources; there may be several of them), a back-end Guest OS (i.e. a virtual machine with GPU hardware resources), a Hypervisor running on the host, the graphics proxy library of the front-end Guest OS, the graphics processing service of the back-end Guest OS, and the corresponding hardware resources.
Specifically, the front-end Guest OS has no real GPU hardware, cannot operate the GPU directly, and has no real graphics output device; it can only notify the back-end Guest OS to perform graphics operations and output through a graphics proxy library that stands in for a hardware-accelerated graphics library (such as OpenGL, DirectX, etc.). That is, when the front-end Guest OS needs to process a rendering task, it issues a corresponding call notification to the back-end Guest OS by calling an API of its graphics proxy library.
The back-end Guest OS owns real GPU hardware resources and two sets of graphics output devices, one for outputting its own graphics and the other for outputting the graphics of the front-end Guest OS, and it implements the graphics processing service that backs the front-end Guest OS's graphics proxy library. That is, the back-end Guest OS can receive call notifications from other virtual machines and, according to each received notification, call the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API, thereby processing the rendering tasks of the other virtual machines with the GPU hardware resources it owns.
The Hypervisor running on the host is the virtual machine monitor of the front-end and back-end Guest OSs. It can implement memory address mapping between different Guest OSs, so that the back-end Guest OS can directly access the memory data of the front-end Guest OS, which reduces memory copies and improves communication efficiency. Moreover, the Hypervisor provides inter-process communication (IPC) across virtual machines, such as Socket communication, enabling fast communication between the graphics proxy library of the front-end Guest OS and the back-end Guest OS; other communication methods can also be adopted. Although the invention adds communication overhead on the front-end Guest OS, in practice the GPU performance penalty is minimal.
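The disclosure only states that the Hypervisor offers Socket-style cross-virtual-machine IPC; the concrete socket family is not specified. The sketch below assumes, purely for illustration, a Linux vsock-style socket opened by the front-end graphics proxy library toward the back-end graphics processing service.

    /* Minimal sketch of the front end opening the cross-VM channel.
     * AF_VSOCK, the context id and the port are assumptions; the patent
     * only requires some Socket communication provided by the Hypervisor. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>
    #include <unistd.h>

    int open_backend_channel(unsigned int backend_cid, unsigned int port)
    {
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return -1;
        }

        struct sockaddr_vm addr = {
            .svm_family = AF_VSOCK,
            .svm_cid    = backend_cid, /* back-end Guest OS (or host) context id */
            .svm_port   = port,        /* port the graphics processing service listens on */
        };

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            close(fd);
            return -1;
        }
        return fd;
    }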
The graphics proxy library of the front-end Guest OS implements all API interfaces of a hardware-accelerated graphics library (such as OpenGL, DirectX, etc.) so that it can replace that library while the App on the front-end Guest OS remains unaware of the underlying change. In fact, the APIs of the graphics proxy library mainly implement the proxy function, i.e. data transmission, while the actual rendering is handled by the back-end graphics processing service. To improve efficiency, the parameter data of an API call is passed to the back-end graphics processing service by address mapping, without large memory copies in between, and the data transmission uses the cross-virtual-machine communication mechanism provided by the Hypervisor.
In implementation, the API set declared by the graphics proxy library of the front-end Guest OS is consistent with the API set of the real hardware-accelerated graphics library; that is, each API declared by the graphics proxy library matches the function name and parameters of the corresponding API in the hardware-accelerated graphics library. Therefore, after the App on the front-end Guest OS calls an API of the graphics proxy library, the back-end Guest OS calls the API of the real hardware-accelerated graphics library with the same name and the same parameters, realizing a one-to-one proxy at the API level. The App on the front-end Guest OS therefore needs no modification, which ensures universality.
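As an illustration of this one-to-one proxying, the sketch below shows what a single proxy-library entry point might look like: the exported function keeps the exact name and signature of the real OpenGL ES call, and its body only packs the arguments with an API identification code and forwards them over the channel. The opcode value, the packing layout, backend_fd and send_call() are assumptions made for the example, not names given in the disclosure.

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    typedef unsigned int GLenum;
    typedef unsigned int GLuint;

    #define OP_GL_BIND_BUFFER 42u   /* illustrative API identification code */

    /* Channel to the back-end graphics processing service, e.g. the fd
     * returned by open_backend_channel() in the earlier sketch. */
    static int backend_fd = -1;

    /* Pack an identification code plus raw parameter bytes and send them. */
    static void send_call(uint32_t opcode, const void *payload, uint32_t len)
    {
        uint8_t buf[256];
        if (2 * sizeof(uint32_t) + len > sizeof(buf))
            return;                               /* sketch: no large-payload path */
        memcpy(buf, &opcode, sizeof(opcode));
        memcpy(buf + sizeof(opcode), &len, sizeof(len));
        memcpy(buf + 2 * sizeof(uint32_t), payload, len);
        write(backend_fd, buf, 2 * sizeof(uint32_t) + len);
    }

    /* Same name and parameters as the real OpenGL ES entry point, so the
     * App of the front-end Guest OS links against the proxy library unchanged. */
    void glBindBuffer(GLenum target, GLuint buffer)
    {
        uint32_t params[2] = { target, buffer };  /* pack the scalar arguments */
        send_call(OP_GL_BIND_BUFFER, params, sizeof(params));
        /* glBindBuffer returns void, so no reply needs to be awaited. */
    }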
The back-end graphics processing service runs as a system service of the back-end Guest OS. It listens for and receives the call notifications sent when the front-end Guest OS calls APIs of its graphics proxy library, parses each notification, and calls the hardware-accelerated graphics library API corresponding to the graphics proxy library API called by the front-end Guest OS, thereby realizing the front-end Guest OS's remote operation of the GPU. Finally, the rendering result is shown on the display device designated by the back-end Guest OS. From the outside it appears that the back-end Guest OS uses display device 1 and the front-end Guest OS uses display device 2, but in fact both display devices are driven by the back-end Guest OS; the front-end Guest OS only controls the back-end Guest OS's display device remotely through the proxy.
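A minimal sketch of the receive-and-dispatch loop such a back-end graphics processing service might run: it reads one packed notification, switches on the API identification code and forwards the unpacked parameters to the real hardware-accelerated library. The message layout matches the front-end sketch above; read_full(), the opcode value and the reduced error handling are assumptions of the example.

    #include <GLES2/gl2.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define OP_GL_BIND_BUFFER 42u   /* must match the proxy library's code */

    /* Read exactly n bytes from the cross-VM socket. */
    static int read_full(int fd, void *dst, size_t n)
    {
        uint8_t *p = dst;
        while (n > 0) {
            ssize_t r = read(fd, p, n);
            if (r <= 0) return -1;
            p += r;
            n -= (size_t)r;
        }
        return 0;
    }

    /* Unpack the parameters and call the real, hardware-accelerated API. */
    static void handle_gl_bind_buffer(const uint8_t *payload, uint32_t len)
    {
        uint32_t args[2];
        if (len < sizeof(args)) return;
        memcpy(args, payload, sizeof(args));
        glBindBuffer(args[0], args[1]);
    }

    void service_loop(int client_fd)
    {
        for (;;) {
            uint32_t header[2];               /* identification code, payload length */
            if (read_full(client_fd, header, sizeof(header)) < 0) break;

            uint8_t *payload = malloc(header[1]);
            if (!payload || read_full(client_fd, payload, header[1]) < 0) {
                free(payload);
                break;
            }

            switch (header[0]) {              /* dispatch on the identification code */
            case OP_GL_BIND_BUFFER:
                handle_gl_bind_buffer(payload, header[1]);
                break;
            /* ...one case per proxied graphics library API... */
            default:
                break;
            }
            free(payload);
        }
    }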
In the graphics library API agent-based GPU virtualization system of the invention, the front-end Guest OS is configured so that it cannot create a context for hardware-accelerated graphics library execution; it therefore cannot call the hardware-accelerated graphics library and can only call the APIs of the graphics proxy library. Meanwhile, the back-end Guest OS is configured to create the hardware-accelerated graphics library execution context according to the received call notification, start the back-end graphics processing service, and thus complete the rendering task of the front-end Guest OS.
It should be noted that the architecture shown in fig. 1 shares the GPU hardware resource owned by the back-end Guest OS among one or more front-end Guest OSs. An alternative architecture shares the GPU hardware resource owned by the host instead; the working principle of the two architectures is the same, except that the data interaction performed by the back-end Guest OS is performed by the host, and it is not described again here.
To further explain the graphics library API agent-based GPU virtualization method of the invention, its operation is described in detail below with reference to the flowchart shown in fig. 2, taking as an example a scenario in which an OpenGL ES proxy library implements GPU virtualization under the Linux + Linux dual-virtual-machine architecture shown in fig. 1.
1. After the back-end Guest OS starts, the back-end graphics processing service starts automatically. The service process calls an API provided by the Hypervisor and listens for message notifications from the front-end Guest OS.
2. After the front-end Guest OS starts, the App calls APIs of the EGL proxy library (e.g. eglGetDisplay, eglCreateWindowSurface, etc.) to create the display environment context. The native display and native window parameters passed to eglGetDisplay and eglCreateWindowSurface are merely handles recognizable by the back-end graphics processing service, used to tell the back-end Guest OS to create the corresponding real context.
3. The EGL proxy library packs the parameter data of the API call together with the identification code of the API, then calls the message-passing API provided by the Hypervisor, which transfers the packed data to the back-end Guest OS.
4. After the back-end graphics processing service detects the notification message, it obtains and parses the message data and invokes the processing flow corresponding to the API identified by the identification code in the data. For example, the flow corresponding to eglGetDisplay is:
parse the native_display data, select the corresponding back-end native display handle, and pass it as the parameter to eglGetDisplay of the real back-end EGL library; then pack the returned data, call the message-passing API provided by the Hypervisor, and return the result to the front-end Guest OS.
Other context-creation APIs such as eglInitialize and eglCreateWindowSurface are processed in the same way.
5. Steps 2, 3 and 4 are repeated until the display environment context has been created at the back end.
6. After the App of the front-end Guest OS has called the EGL proxy library and the display environment context has been created at the back end, it calls APIs of the OpenGL ES proxy library (such as glShaderSource, glBindBuffer, glDrawArrays, etc.) to render and draw graphics.
7. The API data of the OpenGL ES proxy library is packed and transferred in the same way as in step 3.
8. After the back-end graphics processing service detects the notification message, it obtains and parses the message data and invokes the processing flow corresponding to the API identified by the identification code. If the call data of the API contains memory pointer data of the front-end Guest OS, the Hypervisor-provided API for converting memory addresses between different virtual machines must be called to convert the pointers into back-end Guest OS memory addresses (a sketch of this pointer translation is given after step 12 below). For example, the flow corresponding to glShaderSource is:
convert the parsed data of the third and fourth pointer parameters into back-end Guest OS memory addresses, then pass the parameters to the glShaderSource call of the real back-end OpenGL ES library; pack the returned data, call the message-passing API provided by the Hypervisor, and return it to the front-end Guest OS.
9. Steps 6, 7 and 8 are repeated until the front end's rendering process is complete.
10. The App of the front-end Guest OS calls the eglSwapBuffers API of the EGL proxy library to push the rendering result to the display device. Both the display and surface parameters use the handles returned by the back-end graphics processing service when the environment context was created.
11. The API data of the EGL proxy library is packed and transferred in the same way as in step 3.
12. After the back-end graphics processing service detects the notification message, it obtains and parses the message data. On recognizing that the call is eglSwapBuffers, it calls the eglSwapBuffers API of the real EGL library to swap the real display buffers. The processing flow of eglSwapBuffers may differ depending on the back end's actual native display and native window environment; for example, on a DRM platform it may be necessary to call the corresponding DRM interface to send the image to the corresponding display device.
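Referring back to step 8, the sketch below illustrates how the back end might translate the pointer parameters of a glShaderSource notification before calling the real OpenGL ES library. guest_to_backend_ptr() stands in for the Hypervisor's cross-virtual-machine address conversion API, whose real name and signature are not given in the disclosure; here it is a placeholder identity mapping so the example is self-contained, and 64-bit front-end pointers are assumed.

    #include <GLES2/gl2.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Stand-in for the Hypervisor API that maps a front-end Guest OS memory
     * address into the back-end address space; placeholder identity mapping. */
    static void *guest_to_backend_ptr(uint64_t guest_addr)
    {
        return (void *)(uintptr_t)guest_addr;
    }

    /* Handle a glShaderSource call notification (step 8): the 3rd and 4th
     * parameters arrive as front-end memory addresses and must be converted. */
    void handle_gl_shader_source(GLuint shader, GLsizei count,
                                 uint64_t strings_guest_addr,
                                 uint64_t lengths_guest_addr)
    {
        /* 3rd parameter: an array of 'count' pointers to source strings;
         * the array and every string it points to live in front-end memory. */
        uint64_t *guest_strings = guest_to_backend_ptr(strings_guest_addr);
        const GLchar **strings = malloc(sizeof(*strings) * (size_t)count);
        for (GLsizei i = 0; i < count; i++)
            strings[i] = guest_to_backend_ptr(guest_strings[i]);

        /* 4th parameter: the length array, which may legitimately be NULL. */
        const GLint *lengths = lengths_guest_addr
                             ? guest_to_backend_ptr(lengths_guest_addr)
                             : NULL;

        /* Call the real back-end OpenGL ES library with translated pointers. */
        glShaderSource(shader, count, strings, lengths);

        free(strings);
    }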
In addition, in one implementation aspect, a step of executing the rendering task is added to the graphics library API agent-based GPU virtualization method of the invention to realize image rendering, and the image rendering method comprises the following steps:
after a virtual machine receives a rendering task, if it does not have GPU hardware resources, it calls an API of its graphics proxy library and sends a corresponding call notification;
after receiving the call notification, the virtual machine or host with GPU hardware resources calls the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API according to the received notification, and processes the rendering task of the other virtual machine with its GPU hardware resources.
Furthermore, in one implementation aspect, the invention also provides a readable storage medium, such as a ROM storage device, a removable hard disk, a USB flash drive or an optical disk, in which one or more programs are stored. When executed by one or more processors, the programs in the storage implement the graphics library API agent-based GPU virtualization method of the invention or the image rendering method of the invention.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A GPU virtualization method based on a graphics library API agent, characterized by comprising the following steps:
configuring a virtual machine without GPU hardware resources so that, when it needs to process a rendering task, it calls an API of its graphics proxy library and sends a corresponding call notification; and configuring a virtual machine or host with GPU hardware resources to receive the call notifications of other virtual machines and, according to each received notification, call the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API, so as to process the rendering tasks of the other virtual machines through the GPU hardware resources.
2. The graphics library API agent-based GPU virtualization method according to claim 1, wherein the inter-process communication between virtual machines provided by the Hypervisor is configured as Socket communication.
3. The graphics library API agent-based GPU virtualization method according to claim 1, wherein the call notification comprises memory address data of its corresponding virtual machine.
4. The graphics library API agent-based GPU virtualization method according to claim 1, wherein the function names and parameters of the API set of the graphics proxy library are consistent with those of the API set of the hardware-accelerated graphics library.
5. The graphics library API agent-based GPU virtualization method according to claim 4, wherein the virtual machine without GPU hardware resources is configured so that it cannot create a context for hardware-accelerated graphics library execution; and the virtual machine or host with GPU hardware resources is configured to create the hardware-accelerated graphics library execution context according to the received call notification.
6. A GPU virtualization system based on a graphics library API agent, characterized by comprising virtual machines and a host; the host is used to run the Hypervisor and provide inter-process communication between the virtual machines;
the virtual machine without GPU hardware resources is used to call the API of its graphics proxy library and send a corresponding call notification when it needs to process a rendering task;
and the virtual machine or host with GPU hardware resources is used to receive the call notifications of other virtual machines and, according to each received notification, call the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API, so as to process the rendering tasks of the other virtual machines through the GPU hardware resources.
7. An image rendering method, characterized by comprising the following steps:
after a virtual machine receives a rendering task, if it does not have GPU hardware resources, it calls an API of its graphics proxy library and sends a corresponding call notification;
after receiving the call notification, the virtual machine or host with GPU hardware resources calls the API of the hardware-accelerated graphics library corresponding to the graphics proxy library API according to the received notification, and processes the rendering tasks of the other virtual machines with the GPU hardware resources.
8. A readable storage medium having one or more programs stored thereon, wherein the one or more programs, when executed by one or more processors, implement the graphics library API agent-based GPU virtualization method according to any one of claims 1 to 5 or the image rendering method according to claim 7.
CN202010386295.6A 2020-05-09 2020-05-09 Graphic library API agent-based GPU virtualization method, system and medium Pending CN111552554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010386295.6A CN111552554A (en) 2020-05-09 2020-05-09 Graphic library API agent-based GPU virtualization method, system and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010386295.6A CN111552554A (en) 2020-05-09 2020-05-09 Graphic library API agent-based GPU virtualization method, system and medium

Publications (1)

Publication Number Publication Date
CN111552554A (zh) 2020-08-18

Family ID: 72000477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010386295.6A Pending CN111552554A (en) 2020-05-09 2020-05-09 Graphic library API agent-based GPU virtualization method, system and medium

Country Status (1)

Country Link
CN (1) CN111552554A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103270492A (en) * 2010-12-15 2013-08-28 国际商业机器公司 Hardware accelerated graphics for network enabled applications
CN104272285A (en) * 2012-05-31 2015-01-07 英特尔公司 Rendering multiple remote graphics applications
CN104737129A (en) * 2012-08-23 2015-06-24 思杰系统有限公司 Specialized virtual machine to virtualize hardware resource for guest virtual machines
CN103631634A (en) * 2012-08-24 2014-03-12 中国电信股份有限公司 Graphics processor virtualization achieving method and device
CN106406977A (en) * 2016-08-26 2017-02-15 山东乾云启创信息科技股份有限公司 Virtualization implementation system and method of GPU (Graphics Processing Unit)
CN109508212A (en) * 2017-09-13 2019-03-22 深信服科技股份有限公司 Method for rendering graph, equipment and computer readable storage medium
CN108762934A (en) * 2018-06-02 2018-11-06 北京泽塔云科技股份有限公司 Remote graphics Transmission system, method and Cloud Server
CN109582425A (en) * 2018-12-04 2019-04-05 中山大学 A kind of GPU service redirection system and method merged based on cloud with terminal GPU

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117519887A (en) * 2023-12-13 2024-02-06 南京云玑信息科技有限公司 Method and system for improving cloud computer remote operation experience
CN117519887B (en) * 2023-12-13 2024-03-12 南京云玑信息科技有限公司 Method and system for improving cloud computer remote operation experience

Similar Documents

Publication Publication Date Title
CN111488196B (en) Rendering method and device, storage medium and processor
JP6140190B2 (en) Paravirtualized high performance computing and GDI acceleration
CN105122210B (en) GPU virtualization implementation method and related device and system
US9665921B2 (en) Adaptive OpenGL 3D graphics in virtual desktop infrastructure
US7937452B2 (en) Framework for rendering plug-ins in remote access services
CN102109997A (en) Accelerating opencl applications by utilizing a virtual opencl device as interface to compute clouds
CN111966504B (en) Task processing method in graphics processor and related equipment
WO2021008183A1 (en) Data transmission method and apparatus, and server
US9507624B2 (en) Notification conversion program and notification conversion method
WO2022041507A1 (en) 3d rendering method and system
CN113672387B (en) Remote calling graphic rendering method and system based on drawing programming interface
KR20140027741A (en) Application service providing system and method, server apparatus and client apparatus for application service
CN110458748A (en) Data transmission method, server and client
KR20230074802A (en) Cloud desktop display method and system
KR20090123012A (en) Distributed processing system and method
WO2021008185A1 (en) Data transmission method and apparatus, and server
CN106991057B (en) Memory calling method in virtualization of shared display card and virtualization platform
KR20120116771A (en) Apparatus for supporting multiple operating system in terminal and operating system conversion method thereof
CN116860391A (en) GPU computing power resource scheduling method, device, equipment and medium
CN114968152A (en) Method for reducing additional performance loss of VIRTIO-GPU
CN111552554A (en) Graphic library API agent-based GPU virtualization method, system and medium
CN114116393A (en) Method, device and equipment for collecting GPU performance data of virtual machine
CN113379588A (en) Rendering system for container applications
US12028491B2 (en) Scanning preview method for a remote application when using scanner redirection for remote desktop services
CN113793246B (en) Method and device for using graphics processor resources and electronic equipment

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200818)