WO2017143718A1 - Cloud rendering system, server and method - Google Patents

Cloud rendering system, server and method

Info

Publication number
WO2017143718A1
Authority
WO
WIPO (PCT)
Prior art keywords
gpu
virtual machine
memory
rendering
address space
Prior art date
Application number
PCT/CN2016/088766
Other languages
English (en)
French (fr)
Inventor
张微
杨磊
罗涛
曾锦平
邱泳天
周益
陈乐吉
苏永生
杨学亮
雷智聪
唐迎力
付兵
谢琼
陈平
Original Assignee
成都赫尔墨斯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都赫尔墨斯科技有限公司
Publication of WO2017143718A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining

Definitions

  • The present invention relates to the field of GPU virtualization technology, and in particular to a cloud rendering system, server, and method.
  • Cloud computing has become increasingly widespread, and more and more vendors are considering moving their business onto cloud service providers' cloud hosts (such as virtual machines or container-based virtualization).
  • However, current cloud hosts cannot provide strong 3D rendering capability or GPGPU computing power, and thus cannot support applications with demanding real-time 3D rendering or high-performance computing requirements.
  • GPGPU: General-Purpose GPU
  • Cloud service providers currently rely mainly on the cloud service management platform, virtualization software, and hardware support for virtualization to cut, isolate, and encapsulate physical resources into cloud hosts, and provide services on that basis. Because of the complexity of the GPU (Graphics Processing Unit) and the lag in GPU hardware support for virtualization, cloud hosts have long lacked direct 3D rendering capability.
  • GPU: Graphics Processing Unit
  • Companies currently tend to provide 3D cloud rendering services through the Nvidia vGPU architecture. Since vGPU (Virtual GPU) technology is proprietary to Nvidia, only the GRID GPUs provided by Nvidia offer 3D rendering capability, and as a monopoly product these GPUs cost far more than ordinary GPUs. Moreover, the rendering performance of the Nvidia vGPU architecture is low.
  • In the Unigine Heaven Benchmark 4.0 test, the GRID K1 achieves an average rendering frame rate of only 8.5 FPS, while by comparison the Nvidia GTX 970 averages 95.4 FPS, an order-of-magnitude difference.
  • The architecture can therefore only satisfy businesses with low 3D rendering demands, such as CAD. Only when a virtual device can directly access an independent GPU can the GPU deliver its full rendering capability.
  • Patent CN201010612078.0 discloses a method, system, and device for implementing general-purpose graphics processor virtualization.
  • The method disclosed in that patent document enables multiple virtual devices to access GPU hardware without relying on the Nvidia vGPU architecture.
  • The GPU address accessed by virtual machine V1 is configured with the real physical GPU address, and that virtual machine shares the same memory with multiple other virtual machines V2.
  • After receiving a request, another virtual machine V2 stores the information in the shared memory; V1 reads the shared-memory information and processes the data on the physical GPU. When finished, the result is written back to the shared memory, and the virtual machine that sent the request reads the computed result.
  • The method thus enables multiple virtual machines to communicate with the GPU hardware indirectly. However, most of the virtual machines do not communicate with the GPU directly; instead they send their rendering requests to another designated virtual machine, which completes the rendering task on their behalf, so computing efficiency and rendering capability remain limited.
  • In summary, none of the existing cloud rendering technologies lets a virtual machine access the hardware GPU directly for rendering, and existing cloud rendering equipment is expensive with poor 3D rendering performance.
  • The present invention aims to overcome the above deficiencies of the prior art and to provide a cloud rendering system, server, and method that allow virtual machines to access the hardware GPU directly, at low cost and with high rendering performance.
  • A cloud rendering system includes a host machine and a plurality of GPUs; the host machine is provided with a plurality of virtual machines, and each virtual machine is configured with a corresponding GPU driver.
  • The cloud rendering system also includes an MMU, coupled to each GPU driver and each GPU and coupling virtual machine memory with host memory, configured to allocate a GPU address to the GPU driver of a virtual machine when that virtual machine requests access to a GPU, the GPU address being used to access the GPU, and to allocate the corresponding host memory address to a virtual machine when it requests access to virtual machine memory.
  • An IOMMU, coupled to each GPU and to virtual machine memory, is configured to allocate the corresponding host memory address to a GPU when that GPU requests access to virtual machine memory.
  • A memory address space is set up, and the host memory is mapped into it; the memory address space stores the addresses corresponding to host memory, and a virtual machine accesses the corresponding host memory by accessing addresses in the memory address space.
  • The IOMMU is further configured to map the discontiguous memory segments recorded in the memory address space into contiguous memory segments, so that the GPU can read and write data via DMA.
  • A GPU address space is set up, and the GPUs are mapped into it; the GPU address space stores the addresses corresponding to the GPU control registers, and a GPU driver accesses the corresponding GPU control registers through the GPU address space.
  • When any virtual machine is started, it is bound to one GPU through the MMU and/or the IOMMU, and while the binding lasts, the bound GPU can no longer be bound to any other virtual machine.
  • The present invention also provides a cloud rendering server, comprising the cloud rendering system of the present invention and a cloud service management platform for monitoring and managing the running state of the system and managing the users of the system.
  • The invention also provides a cloud rendering method, comprising the following steps:
  • S1: the virtual machine receives a rendering request, sends the rendering request to the virtual machine's GPU driver, receives the rendering data, and writes the rendering data into memory;
  • S2: the virtual machine's GPU driver accesses the GPU control registers and writes the rendering request information into them;
  • S3: the GPU accesses the corresponding virtual machine memory according to the rendering request information and processes the rendering data in that memory.
  • There are multiple virtual machines and multiple GPUs, in one-to-one correspondence; when there are multiple rendering requests, each virtual machine correspondingly handles one of them.
  • Writing the rendering data into memory includes: the MMU couples the virtual machine memory and the host memory, and when a virtual machine requests access to virtual machine memory, the MMU allocates the corresponding host memory address to that virtual machine; the rendering data is stored at that host memory address.
  • Step S2 includes: the virtual machine's GPU driver accesses the GPU address space segment and, according to the address information in that segment, accesses the corresponding GPU control registers.
  • Step S3 includes: the IOMMU couples the virtual machine memory and the GPU and maps the discontiguous memory address space used by the virtual machine into a contiguous address space segment; the GPU performs DMA reads and writes on the contiguous address space segment according to the rendering request information, obtains the data required for rendering, and completes the rendering.
  • The invention also provides a computer program for performing the above steps.
  • By configuring the MMU and the IOMMU, multiple virtual machines can all access the GPUs directly and independently.
  • Compared with the prior art, the system of the present invention is low in cost and high in rendering performance.
  • FIG. 1 is a block diagram of a cloud rendering system module of the present invention.
  • FIG. 2 is a block diagram of an internal module of a cloud rendering system in accordance with an embodiment of the present invention.
  • FIG. 3 is a flow chart of a cloud rendering method of the present invention.
  • FIG. 4 is an internal schematic diagram of an implementation of the method of the invention.
  • FIG. 1 is a block diagram of the cloud rendering system of the present invention, which includes a host machine and a plurality of GPUs; the host machine is provided with a plurality of virtual machines, each configured with a corresponding GPU driver.
  • The cloud rendering system further includes an MMU (Memory Management Unit), coupled to each GPU driver and each GPU and coupling virtual machine memory with host memory, configured to allocate a GPU address to the GPU driver of a virtual machine when that virtual machine requests access to a GPU, the GPU address being used to access the GPU, and to allocate the corresponding host memory address to a virtual machine when it requests access to virtual machine memory.
  • MMU: Memory Management Unit
  • IOMMU: Input/Output Memory Management Unit
  • In the present invention, the host machine is a physical server entity capable of creating virtual machines, and the virtual machines running on it can share the physical server's resources.
  • For the purposes of this invention, a physical server entity provided with multiple virtual machines is called a host machine.
  • Because a virtual machine runs inside the host as an ordinary process, its accesses to physical memory and the physical GPU require address translation. The present invention uses MMU technology so that when a virtual machine accesses memory, its access address is translated into the corresponding physical address.
  • As a result, the virtual machine accesses memory and the GPU in the same way a physical machine would.
  • When the GPU processes a rendering request, it needs to access the rendering data in memory.
  • The IOMMU allows the GPU to access the data in memory directly, so data is processed just as on a physical machine. This effectively turns the virtual machine into an independent host for rendering, and its rendering efficiency and performance are no different from those of a true standalone host.
  • Cloud rendering is performed using the solution of the present invention as follows.
  • One host machine runs multiple virtual machine processes; when there are multiple different rendering tasks, each virtual machine and its corresponding GPU can process these tasks independently and simultaneously. For example, while virtual machine 1 and GPU 1 jointly process one rendering task, the host may receive another rendering request, and virtual machine 2 and GPU 2 then process the second rendering task according to the present scheme, and so on.
  • The multiple virtual machines running on one host and their corresponding GPUs can process multiple rendering requests at the same time, so the rendering capability and efficiency of the present invention exceed those of existing technical solutions.
  • By setting up a memory address space for address mapping, the MMU can manage memory conveniently.
  • When multiple virtual machines read memory at the same time, accesses proceed in an orderly way according to the memory address space configuration, which improves rendering efficiency.
  • The IOMMU is further configured to map the discontiguous memory segments recorded in the memory address space into contiguous memory segments, so that the GPU can read and write data via DMA (Direct Memory Access) technology.
  • DMA: Direct Memory Access
  • The memory regions used by a virtual machine are generally discontiguous. Traditionally, reading data from such memory requires the CPU to help control the GPU's reads of each address segment, which demands extra CPU scheduling, wastes resources, and lowers GPU processing efficiency.
  • With IOMMU technology, these discontiguous memory regions can be mapped into a contiguous address space segment, so the GPU can read the rendering data from memory directly, further improving rendering efficiency.
  • The GPU address space is used to store the addresses corresponding to the GPU control registers, and the GPU driver accesses the corresponding GPU control registers through the GPU address space.
  • FIG. 2 is a block diagram of the internal modules of a cloud rendering system according to an embodiment of the present invention.
  • Virtual machine V1 has the same architecture as virtual machine VN.
  • When virtual machine memory is accessed, the memory address space within the address space is accessed first.
  • The address space supplies the host memory corresponding to the virtual machine memory address, after which data is read and written.
  • When accessing the GPU, the virtual machine's GPU driver first accesses, through the GPU address space, the GPU control register addresses set in that space.
  • The GPU is controlled through the GPU control registers, which in turn direct the GPU to render.
  • When any virtual machine starts, it is bound to one GPU through the MMU and/or the IOMMU, and during the binding, the bound GPU can no longer be bound to other virtual machines.
  • When a virtual machine runs inside the host, it is treated by the host as an ordinary process.
  • When a virtual machine is started, it is assigned a GPU that is not in use by any other virtual machine, and the binding establishes the address mapping between the virtual machine and the GPU.
  • When multiple virtual machines are started and run at the same time, each virtual machine is independently bound to its own GPU; each virtual machine's memory is mapped to a different region of the address space, and each GPU is likewise mapped into the GPU address space. In subsequent information exchanges, therefore, every virtual machine can process rendering requests independently and simultaneously, transferring and storing information directly according to its own mapping relationships.
  • The present invention performs the binding operation when the virtual machine starts.
  • In practice, the binding may be established when the virtual machine is created or when it is started, and it may be done actively or passively upon receipt of a command.
  • The binding time can be chosen according to the actual situation or established at other moments.
  • The present invention also provides a cloud rendering server, comprising the cloud rendering system of the present invention and a cloud service management platform for monitoring and managing the running state of the system and managing the users of the system.
  • When a virtual machine is created, the cloud service management platform binds it to a GPU and also configures the mapping relationships of the MMU and the IOMMU, so that the virtual machine can directly access physical memory and physical devices.
  • The GPU is used exclusively by the virtual machine to which it is bound; only when that virtual machine is destroyed or the GPU resource is released can the GPU be rebound.
  • The cloud service management platform uses an existing platform system such as OpenStack, Amazon Web Services, or Alibaba Cloud, which is not described further here.
  • FIG. 3 is a flow chart of the cloud rendering method of the present invention, comprising the following steps:
  • S1: the virtual machine receives a rendering request, sends the rendering request to the virtual machine's GPU driver, receives the rendering data, and writes the rendering data into memory.
  • When a virtual machine processes information, it follows the same processing flow as a physical machine.
  • Each virtual machine is provided with a GPU driver that drives the GPU; writing the rendering data into memory in fact writes the data into the host's physical memory after the MMU maps the memory address space.
  • All resources used by the virtual machine are host physical resources accessed through the MMU and IOMMU mappings.
  • S2: the virtual machine's GPU driver accesses the GPU control registers and writes the rendering request information into them.
  • S3: the GPU accesses the corresponding virtual machine memory according to the rendering request information and processes the rendering data in that memory.
  • There are multiple virtual machines and multiple GPUs, in one-to-one correspondence; when there are multiple rendering requests, each virtual machine correspondingly handles one of them.
  • Writing the rendering data into memory includes: the MMU couples the virtual machine memory and the host memory, and when a virtual machine requests access to virtual machine memory, the MMU allocates the corresponding host memory address to that virtual machine; the rendering data is stored at that host memory address.
  • This coupling relationship of the MMU is established when the virtual machine is created.
  • The MMU directly translates the host virtual address (HVA, Host Virtual Address) into the host physical address (HPA), which causes the rendering data to be written into the host's memory.
  • Step S2 includes: the virtual machine's GPU driver accesses the GPU address space segment and accesses the corresponding GPU control registers according to the address information in that segment.
  • Step S3 includes: the IOMMU couples the virtual machine memory and the GPU and maps the discontiguous memory address space used by the virtual machine into a contiguous address space segment.
  • The GPU performs DMA reads and writes on the contiguous address space segment according to the rendering request information, obtains the data required for rendering, and completes the rendering.
  • When the GPU performs DMA, it accesses a contiguous address bus.
  • However, the host physical addresses (HPA) corresponding to contiguous guest physical addresses (GPA) in the virtual machine are not actually contiguous; the address bus used by the GPU must therefore be mapped to contiguous HPA through the IOMMU before DMA can proceed.
  • The present invention also includes algorithms, computer programs, computer-readable media and/or software that can be installed and/or executed on a general-purpose computer or workstation equipped with a conventional processor, for performing one or more of the methods and/or operating one or more of the hardware elements disclosed herein.
  • A computer program or computer-readable medium typically contains a set of instructions that, when executed by a suitable processing device (for example, a signal processing device such as a microcontroller, microprocessor, or DSP device), are configured to perform the above methods, operations, and/or algorithms.
  • The computer-readable medium can include any medium from which a signal processing device can read and execute the code stored on it, such as a floppy disk, optical disc, magnetic tape, or hard disk drive.
  • Such code can comprise object code, source code, and/or binary code.
  • The code is generally digital and is typically processed by a central processing unit.
  • One aspect of the invention therefore relates to a non-transitory computer-readable medium comprising an encoded instruction set adapted for use in the embodiments described below.
  • FIG. 4 shows an internal schematic diagram of an implementation of the method of the present invention, in which the virtual machine software running on the host is qemu, the operating system is Linux, the multiple independent GPUs are Nvidia GTX 970s, and the host is provided with multiple virtual machines.
  • When a virtual machine starts, the cloud service management platform allocates an unused GPU to bind to it, and that GPU is used exclusively by the virtual machine until the virtual machine is destroyed; the GPU cannot be shared by other virtual machines.
  • When another virtual machine is also started, the cloud service management platform likewise allocates an unused GPU to bind to it to handle another rendering task, so that multiple virtual machines on one host process different rendering tasks simultaneously.
  • During binding, the vfio-pci driver configures the MMU and the IOMMU according to the mapping relationships, so that the virtual machine's GPU driver can directly access the GPU hardware and the GPU hardware can directly access the virtual machine memory.
  • The host maps memory into the memory address region of the HPA Space (Host Physical Address Space); accessing this region accesses the memory. The host's GPU physical resources are mapped into the GPU address region of the physical address space, and the virtual machine's GPU driver can access that address space segment to control the GPU; the discontiguous virtual machine memory regions are mapped into contiguous PCI address space segments.
  • The cloud rendering system established according to the above configuration operates in the following specific steps:
  • Step 1: the 3D application in the virtual machine writes the rendering data into virtual machine memory and sends the rendering request to the virtual machine's GTX 970 driver;
  • Step 2: the GTX 970 driver accesses the GPU address space segment and writes the rendering request into the GTX 970 control registers;
  • Step 3: the GTX 970 performs DMA on the contiguous address space according to the control register information and obtains the data required for rendering;
  • Step 4: the GTX 970 processes the rendering data, stores the rendering result, and completes the rendering.
  • Embodiments of the present invention also include a general-purpose computer or workstation equipped with a display, keyboard, mouse, trackball, or other cursor manipulation device and configured to perform the above methods and/or procedures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A cloud rendering system includes a host machine and a plurality of GPUs, the host machine being provided with a plurality of virtual machines, each of which is configured with a corresponding GPU driver. The system further includes: an MMU, coupled to each GPU driver and each GPU and coupling virtual machine memory with host memory, configured to allocate a GPU address to the GPU driver of a virtual machine when that virtual machine requests access to a GPU, the GPU address being used to access the GPU, and to allocate the corresponding host memory address to a virtual machine when it requests access to virtual machine memory; and an IOMMU, coupled to each GPU and to virtual machine memory, configured to allocate the corresponding host memory address to a GPU when that GPU requests access to virtual machine memory. By configuring the MMU and the IOMMU, the system lets multiple virtual machines access the GPUs directly and independently; compared with the Nvidia vGPU architecture generally used in the prior art, the system is inexpensive and offers high rendering performance.

Description

Cloud rendering system, server and method
Technical Field
The present invention relates to the field of GPU virtualization technology, and in particular to a cloud rendering system, server, and method.
Background
Cloud computing has become increasingly widespread, and more and more vendors are considering moving their business onto cloud service providers' cloud hosts (such as virtual machines or container-based virtualization). However, current cloud hosts cannot provide strong 3D rendering capability or GPGPU (General-Purpose GPU) computing power to support applications with demanding real-time 3D rendering or high-performance computing requirements. At present, cloud service providers rely mainly on the cloud service management platform, virtualization software, and hardware support for virtualization to cut, isolate, and encapsulate physical resources into cloud hosts, and they provide services on that basis. Because of the complexity of the GPU (Graphics Processing Unit) and the lag in GPU hardware support for virtualization, cloud hosts have long lacked direct 3D rendering capability.
Companies currently tend to provide 3D cloud rendering services through the Nvidia vGPU architecture. Since vGPU (Virtual GPU) technology is proprietary to Nvidia, only the GRID GPUs provided by Nvidia offer 3D rendering capability, and as a monopoly product these GPUs cost far more than ordinary GPUs. Moreover, the rendering performance of the Nvidia vGPU architecture is low: in the Unigine Heaven Benchmark 4.0 test, the GRID K1 achieves an average rendering frame rate of only 8.5 FPS, while by comparison the Nvidia GTX 970 averages 95.4 FPS, an order-of-magnitude gap. The architecture can therefore only satisfy businesses with low 3D rendering demands, such as CAD; only when a virtual device can directly access an independent GPU can the GPU deliver its full rendering capability.
Patent CN201010612078.0 discloses a method, system, and device for implementing general-purpose graphics processor virtualization. The method disclosed in that patent document enables multiple virtual devices to access GPU hardware without relying on the Nvidia vGPU architecture: the GPU address accessed by virtual machine V1 is configured with the real physical GPU address, and that virtual machine shares the same memory with multiple other virtual machines V2. After receiving a request, another virtual machine V2 stores the information in the shared memory; V1 reads the shared-memory information, processes the data on the physical GPU, writes the result back to the shared memory when done, and the virtual machine that sent the request reads the computed result. The method thus enables multiple virtual machines to communicate with the GPU hardware indirectly. However, most of the virtual machines do not communicate with the GPU directly; they send their rendering requests to another designated virtual machine, which completes the rendering task on their behalf, so computing efficiency and rendering capability remain limited.
In summary, none of the existing cloud rendering technologies lets a virtual machine access the hardware GPU directly for rendering, and existing cloud rendering equipment is expensive with poor 3D rendering performance.
Summary of the Invention
To solve these problems, the object of the present invention is to overcome the above deficiencies of the prior art and to provide a cloud rendering system, server, and method that allow virtual machines to access the hardware GPU directly, at low cost and with high rendering performance.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A cloud rendering system, comprising a host machine and a plurality of GPUs, the host machine being provided with a plurality of virtual machines, each of which is configured with a corresponding GPU driver;
the cloud rendering system further comprising: an MMU, coupled to each GPU driver and each GPU and coupling virtual machine memory with host memory, configured to allocate a GPU address to the GPU driver of a virtual machine when that virtual machine requests access to a GPU, the GPU address being used to access the GPU, and to allocate the corresponding host memory address to a virtual machine when it requests access to virtual machine memory;
an IOMMU, coupled to each GPU and to virtual machine memory, configured to allocate the corresponding host memory address to a GPU when that GPU requests access to virtual machine memory.
Further, a memory address space is set up and the host memory is mapped into it; the memory address space stores the addresses corresponding to host memory, and a virtual machine accesses the corresponding host memory by accessing addresses in the memory address space.
Further, the IOMMU is also used to map the discontiguous memory segments recorded in the memory address space into contiguous memory segments, so that the GPU can read and write data via DMA.
Further, a GPU address space is set up and the GPUs are mapped into it; the GPU address space stores the addresses corresponding to the GPU control registers, and a GPU driver accesses the corresponding GPU control registers through the GPU address space.
Further, when any virtual machine starts, it is bound to one GPU through the MMU and/or the IOMMU, and while the binding lasts, the bound GPU can no longer be bound to any other virtual machine.
The present invention also provides a cloud rendering server, comprising the cloud rendering system of the present invention and a cloud service management platform for monitoring and managing the running state of the system and managing the users of the system.
The present invention further provides a cloud rendering method, comprising the following steps:
S1: the virtual machine receives a rendering request, sends the rendering request to the virtual machine's GPU driver, receives the rendering data, and writes the rendering data into memory;
S2: the virtual machine's GPU driver accesses the GPU control registers and writes the rendering request information into them;
S3: the GPU accesses the corresponding virtual machine memory according to the rendering request information and processes the rendering data in that memory.
Further, there are multiple virtual machines and multiple GPUs, in one-to-one correspondence; when there are multiple rendering requests, each virtual machine correspondingly handles one of them.
Further, writing the rendering data into memory comprises: the MMU couples the virtual machine memory and the host memory, and when a virtual machine requests access to virtual machine memory, the MMU allocates the corresponding host memory address to that virtual machine; the rendering data is stored according to that host memory address.
Further, step S2 comprises:
S201: mapping the host's GPU physical address space segment to the host's virtual address space segment;
S202: using the GPA-HVA translation table to map the virtual address space segment into the GPU address space in the virtual machine;
S203: the virtual machine's GPU driver accesses the GPU address space segment and accesses the corresponding GPU control registers according to the address information in that segment.
Further, step S3 comprises:
S301: the IOMMU couples the virtual machine memory and the GPU and maps the discontiguous memory address space used by the virtual machine into a contiguous address space segment;
S302: the GPU performs DMA reads and writes on the contiguous address space segment according to the rendering request information, obtains the data required for rendering, and completes the rendering.
The present invention also provides a computer program for performing the above steps.
Advantageous effects of the present invention compared with the prior art:
In the cloud rendering system of the present invention, configuring the MMU and the IOMMU lets multiple virtual machines access the GPUs directly and independently; compared with the Nvidia vGPU architecture generally used in the prior art, the system of the present invention is inexpensive and offers high rendering performance.
Brief Description of the Drawings
FIG. 1 is a block diagram of the cloud rendering system of the present invention.
FIG. 2 is a block diagram of the internal modules of a cloud rendering system according to an embodiment of the present invention.
FIG. 3 is a flow chart of the cloud rendering method of the present invention.
FIG. 4 is an internal schematic diagram of an implementation of the method of the present invention.
Detailed Description
The present invention is described in further detail below with reference to specific embodiments. This should not be understood as limiting the scope of the above subject matter of the present invention to the following examples; all techniques realized on the basis of this disclosure fall within the scope of the present invention.
FIG. 1 is a block diagram of the cloud rendering system of the present invention, comprising a host machine and a plurality of GPUs, the host machine being provided with a plurality of virtual machines, each of which is configured with a corresponding GPU driver;
the cloud rendering system further comprises: an MMU (Memory Management Unit), coupled to each GPU driver and each GPU and coupling virtual machine memory with host memory, configured to allocate a GPU address to the GPU driver of a virtual machine when that virtual machine requests access to a GPU, the GPU address being used to access the GPU, and to allocate the corresponding host memory address to a virtual machine when it requests access to virtual machine memory;
an IOMMU (Input/Output Memory Management Unit), coupled to each GPU and to virtual machine memory, configured to allocate the corresponding host memory address to a GPU when that GPU requests access to virtual machine memory.
In the present invention, the host machine is a physical server entity capable of creating virtual machines, and the virtual machines running on it can share the physical server's resources; for the purposes of this invention, a physical server entity provided with multiple virtual machines is called a host machine.
Because a virtual machine running inside the host is treated by the host as an ordinary process, its accesses to physical memory and the physical GPU require address translation. The present invention uses MMU technology so that when a virtual machine accesses memory, its access address is translated into the corresponding physical address; the virtual machine therefore accesses memory and the GPU in the same way a physical machine would. When the GPU processes a rendering request, it needs to access the rendering data in memory; the IOMMU lets the GPU access that data directly, so data processing also proceeds as it would on a standalone physical machine. This effectively turns the virtual machine into an independent host for rendering, and its rendering efficiency and performance are no different from those of a true standalone host.
With the solution of the present invention, one host machine runs multiple virtual machine processes, and when there are multiple different rendering tasks, each virtual machine and its corresponding GPU can process these tasks independently and simultaneously. For example, while virtual machine 1 and GPU 1 jointly process one rendering task, the host may receive another rendering request, and virtual machine 2 and GPU 2 then process the second rendering task according to the present scheme, and so on; the multiple virtual machines running on one host and their corresponding GPUs can process multiple rendering requests at the same time, so the rendering capability and efficiency of the present invention exceed those of existing technical solutions.
A memory address space is set up and the host memory is mapped into it; the memory address space stores the addresses corresponding to host memory, and a virtual machine accesses the corresponding host memory by accessing addresses in the memory address space.
In the present invention, setting up a memory address space for address mapping makes it convenient for the MMU to manage memory; when multiple virtual machines read memory at the same time, accesses proceed in an orderly way according to the memory address space configuration, which improves rendering efficiency.
The IOMMU is also used to map the discontiguous memory segments recorded in the memory address space into contiguous memory segments, so that the GPU can read and write data via DMA (Direct Memory Access) technology.
The memory regions used by a virtual machine are generally discontiguous. For such non-contiguous space, the traditional way of reading memory data is for the CPU to help control the GPU's reads of each address segment, which requires extra CPU scheduling, wastes resources, and lowers GPU processing efficiency. With IOMMU technology, these discontiguous memory regions can be mapped into a contiguous address space segment, so the GPU can read the rendering data from memory directly, further improving rendering efficiency.
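The patent does not tie this mapping to any particular software interface, but on a Linux host the type-1 VFIO IOMMU API exposes exactly this operation: userspace (for example the qemu process hosting a virtual machine) can map several discontiguous, page-aligned host buffers to consecutive I/O virtual addresses, giving the device a single contiguous DMA window. A minimal sketch under that assumption; the VFIO container is taken to be already open with VFIO_TYPE1_IOMMU set, and the IOVA base is illustrative:

```c
#include <linux/vfio.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* Map n discontiguous host buffers to one contiguous IOVA range so the
 * device can DMA across them as if they were a single memory segment.
 * `container` is an open /dev/vfio/vfio fd with VFIO_TYPE1_IOMMU set. */
static int map_contiguous_iova(int container, void **bufs,
                               const size_t *sizes, int n, uint64_t iova_base)
{
    uint64_t iova = iova_base;
    for (int i = 0; i < n; i++) {
        struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)bufs[i], /* host virtual address of segment */
            .iova  = iova,               /* address the device will see */
            .size  = sizes[i],
        };
        if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map) < 0) {
            perror("VFIO_IOMMU_MAP_DMA");
            return -1;
        }
        iova += sizes[i]; /* the next segment continues the same IOVA range */
    }
    return 0;
}
```

The kernel pins the backing pages for the lifetime of each mapping, so the host physical pages behind the contiguous device-visible window cannot move while DMA is in flight.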
A GPU address space is set up and the GPUs are mapped into it; the GPU address space stores the addresses corresponding to the GPU control registers, and a GPU driver accesses the corresponding GPU control registers through the GPU address space.
Specifically, FIG. 2 is a block diagram of the internal modules of a cloud rendering system according to an embodiment of the present invention. Virtual machine V1 has the same architecture as virtual machine VN. When virtual machine memory is accessed, the memory address space within the address space is accessed first; it supplies the host memory corresponding to the virtual machine memory address, after which data is read and written. When accessing the GPU, the virtual machine's GPU driver first obtains, through the GPU address space, the GPU control register addresses mapped into that space, and through the control registers it then controls the GPU to perform rendering.
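The patent leaves the plumbing of this register aperture abstract. As one concrete illustration (not the patent's own code), a VFIO-based userspace component can query where a PCI device's register region lives and map it into its own address space, after which the control registers are ordinary loads and stores; choosing BAR 0 here is an assumption that holds for many PCI GPUs but not all:

```c
#include <linux/vfio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* Map a GPU's BAR 0 (commonly the control-register aperture) and return a
 * pointer through which the registers can be read and written directly.
 * `device` is a VFIO device fd obtained via VFIO_GROUP_GET_DEVICE_FD. */
static volatile uint32_t *map_gpu_registers(int device, size_t *out_len)
{
    struct vfio_region_info reg = {
        .argsz = sizeof(reg),
        .index = VFIO_PCI_BAR0_REGION_INDEX,
    };
    if (ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0)
        return NULL;

    /* The region's offset doubles as the mmap offset on the device fd. */
    void *mmio = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, device, reg.offset);
    if (mmio == MAP_FAILED)
        return NULL;

    *out_len = reg.size;
    return (volatile uint32_t *)mmio; /* e.g. regs[offset / 4] = value; */
}
```

In a qemu/vfio-pci setup like the embodiment below, it is qemu that performs this mapping on the host and re-exposes the aperture to the guest as the GPU address space, so the guest's unmodified GPU driver ends up reading and writing real control registers.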
When any virtual machine starts, it is bound to one GPU through the MMU and/or the IOMMU, and during the binding, the bound GPU can no longer be bound to other virtual machines.
As noted above, a virtual machine running inside the host is treated by the host as an ordinary process. When a virtual machine starts, it is assigned a GPU that is not in use by any other virtual machine, and the binding simultaneously establishes the address mapping between the virtual machine and the GPU. When multiple virtual machines start and run at the same time, each virtual machine is independently bound to its own GPU; each virtual machine's memory is mapped to a different region of the address space, and each GPU is likewise mapped into the GPU address space. In subsequent information exchanges, therefore, every virtual machine can process rendering requests independently and simultaneously, transferring and storing information directly according to its own mapping relationships.
The present invention performs the binding operation when the virtual machine starts. In practice, the binding may be established when the virtual machine is created or when it is started, and it may be done actively or passively upon receipt of a command; the binding time can be chosen according to the actual situation or established at other moments.
The present invention also provides a cloud rendering server, comprising the cloud rendering system of the present invention and a cloud service management platform for monitoring and managing the running state of the system and managing the users of the system.
As a specific embodiment of the invention, when a virtual machine is created, the cloud service management platform binds it to a GPU and also configures the mapping relationships of the MMU and the IOMMU, so that the virtual machine can directly access physical memory and physical devices. Once a virtual machine is bound to a GPU, that GPU is used exclusively by the bound virtual machine; only when the virtual machine is destroyed or the GPU resource is released can the GPU be rebound.
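The bookkeeping this exclusive binding implies on the platform side is a simple first-free allocation. A minimal sketch; the structure and function names below are our own illustration, not part of the patent or of any platform's API:

```c
#include <stddef.h>

#define MAX_GPUS 8

/* Tracks which VM, if any, each physical GPU is bound to. */
struct gpu_pool {
    int owner[MAX_GPUS]; /* -1 = free, otherwise the owning VM's id */
    int count;           /* number of GPUs actually present */
};

/* Bind an unused GPU to vm_id; returns the GPU index, or -1 if none free. */
static int acquire_gpu(struct gpu_pool *p, int vm_id)
{
    for (int i = 0; i < p->count; i++) {
        if (p->owner[i] < 0) {
            p->owner[i] = vm_id; /* exclusive until explicitly released */
            return i;
        }
    }
    return -1; /* every GPU is bound; no further passthrough VM can start */
}

/* Called when the VM is destroyed or its GPU resource is released. */
static void release_gpu(struct gpu_pool *p, int gpu)
{
    if (gpu >= 0 && gpu < p->count)
        p->owner[gpu] = -1;
}
```

A production platform such as OpenStack would typically persist this state in its scheduler's database rather than in process memory, but the invariant is the one described above: one GPU, at most one virtual machine, until release.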
The cloud service management platform uses an existing platform system, such as OpenStack, Amazon Web Services, or Alibaba Cloud, which is not described further here.
FIG. 3 is a flow chart of the cloud rendering method of the present invention, comprising the following steps:
S1: the virtual machine receives a rendering request, sends the rendering request to the virtual machine's GPU driver, receives the rendering data, and writes the rendering data into memory.
When a virtual machine processes information, it follows the same processing flow as a physical machine. Each virtual machine is provided with a GPU driver that drives the GPU; writing the rendering data into memory in fact writes the data into the host's physical memory after the MMU maps the memory address space, and all resources used by the virtual machine are host physical resources accessed through the MMU and IOMMU mappings.
S2: the virtual machine's GPU driver accesses the GPU control registers and writes the rendering request information into them.
S3: the GPU accesses the corresponding virtual machine memory according to the rendering request information and processes the rendering data in that memory.
In a specific embodiment, there are multiple virtual machines and multiple GPUs, in one-to-one correspondence; when there are multiple rendering requests, each virtual machine correspondingly handles one of them.
Writing the rendering data into memory comprises: the MMU couples the virtual machine memory and the host memory, and when a virtual machine requests access to virtual machine memory, the MMU allocates the corresponding host memory address to that virtual machine; the rendering data is stored according to that host memory address.
In a specific embodiment, this coupling relationship of the MMU is established when the virtual machine is created; when rendering data is written to memory, the MMU directly translates the host virtual address (HVA, Host Virtual Address) into the host physical address (HPA), so that the rendering data is written into host memory.
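In a qemu/KVM deployment like the embodiment below, the guest-physical-to-host-virtual half of this coupling is registered with the kernel once, and the hardware MMU, together with the host page tables that supply HVA-to-HPA, then translates every guest access with no software on the data path. A sketch of that registration, assuming vm_fd came from the KVM_CREATE_VM ioctl; the slot number and addresses are illustrative:

```c
#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* Back `size` bytes of guest physical memory starting at `gpa` with
 * anonymous host memory, so guest accesses translate GPA -> HVA -> HPA. */
static void *register_guest_ram(int vm_fd, uint64_t gpa, size_t size)
{
    void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (hva == MAP_FAILED)
        return NULL;

    struct kvm_userspace_memory_region region = {
        .slot            = 0,              /* first (and only) RAM slot */
        .guest_phys_addr = gpa,            /* GPA as seen by the VM */
        .memory_size     = size,
        .userspace_addr  = (uintptr_t)hva, /* HVA inside the qemu process */
    };
    if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0)
        return NULL;

    return hva; /* rendering data written here lands in guest RAM */
}
```

The same HVA range is what a VFIO DMA mapping (as sketched earlier) hands to the IOMMU, which is how the guest's memory and the GPU's DMA view end up describing the same host pages.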
Step S2 comprises:
S201: mapping the host's GPU physical address space segment to the host's virtual address space segment;
S202: using the GPA-HVA translation table to map the virtual address space segment into the GPU address space in the virtual machine;
S203: the virtual machine's GPU driver accesses the GPU address space segment and accesses the corresponding GPU control registers according to the address information in that segment.
Step S3 comprises:
S301: the IOMMU couples the virtual machine memory and the GPU and maps the discontiguous memory address space used by the virtual machine into a contiguous address space segment;
S302: the GPU performs DMA reads and writes on the contiguous address space segment according to the rendering request information, obtains the data required for rendering, and completes the rendering.
When the GPU performs DMA, it accesses a contiguous Address Bus; however, the host physical addresses (HPA) corresponding to contiguous guest physical addresses (GPA, Guest Physical Address) in the virtual machine are not actually contiguous. The Address Bus used by the GPU must therefore be mapped to contiguous HPA through the IOMMU, after which DMA proceeds.
The above methods can be operated and/or controlled by dedicated and integrated software packages. The present invention therefore also includes algorithms, computer programs, computer-readable media and/or software that can be installed and/or executed on a general-purpose computer or workstation equipped with a conventional processor, for performing one or more of the methods and/or operating one or more of the hardware elements disclosed herein. For example, a computer program or computer-readable medium typically contains a set of instructions that, when executed by a suitable processing device (for example, a signal processing device such as a microcontroller, microprocessor, or DSP device), are configured to perform the above methods, operations, and/or algorithms.
The computer-readable medium can include any medium from which a signal processing device can read and execute the code stored on it, such as a floppy disk, optical disc, magnetic tape, or hard disk drive. Such code can comprise object code, source code, and/or binary code; the code is generally digital and is typically processed by a central processing unit.
Accordingly, one aspect of the invention relates to a non-transitory computer-readable medium comprising an encoded instruction set adapted for use in the embodiment described below.
Embodiment 1:
FIG. 4 gives an internal schematic diagram of an implementation of the method of the present invention. The virtual machine software running on the host is qemu, the operating system is Linux, and the multiple independent GPUs are Nvidia GTX 970s; the host is provided with multiple virtual machines. When a virtual machine starts, the cloud service management platform allocates an unused GPU to bind to it, and that GPU is used exclusively by the virtual machine until the virtual machine is destroyed; the GPU can no longer be shared by other virtual machines. When another virtual machine is also started, the cloud service management platform likewise allocates an unused GPU to bind to it to handle another rendering task, so that multiple virtual machines on one host process different rendering tasks simultaneously. This guarantees the efficiency and reliability of the virtual machines' data in collaborative rendering, and computing efficiency is greatly improved over existing techniques. During the binding process, the vfio-pci driver configures the MMU and the IOMMU according to the mapping relationships, so that the virtual machine's GPU driver can directly access the GPU hardware and the GPU hardware can directly access the virtual machine memory (a sketch of this host-side handover follows the steps below). Specifically, the host maps memory into the memory address region of the HPA Space (Host Physical Address Space); accessing that region accesses the memory. The host's GPU physical resources are mapped into the GPU address region of the physical address space; the virtual machine's GPU driver can access that address space segment to control the GPU, and the discontiguous virtual machine memory regions are mapped into contiguous PCI address space segments. The cloud rendering system established according to the above configuration is implemented in the following specific steps:
Step 1: the 3D application in the virtual machine writes the rendering data into virtual machine memory and sends the rendering request to the virtual machine's GTX 970 driver;
Step 2: the GTX 970 driver accesses the GPU address space segment and writes the rendering request into the GTX 970 control registers;
Step 3: the GTX 970 performs DMA on the contiguous address space according to the control register information and obtains the data required for rendering;
Step 4: the GTX 970 processes the rendering data, stores the rendering result, and completes the rendering.
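For reference, the host-side handover that this embodiment attributes to vfio-pci can be reproduced through the standard Linux sysfs interface: the GPU is detached from whatever host driver holds it and handed to vfio-pci, after which qemu opens it through VFIO. A sketch under stated assumptions: the PCI address 0000:01:00.0 is hypothetical, and 10de:13c2 is the vendor:device ID commonly listed for the GTX 970:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write msg to a sysfs attribute file; returns 0 on success. */
static int sysfs_write(const char *path, const char *msg)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, msg, strlen(msg));
    close(fd);
    return n == (ssize_t)strlen(msg) ? 0 : -1;
}

int main(void)
{
    /* 1. Detach the GPU from its current host driver (e.g. nouveau). */
    sysfs_write("/sys/bus/pci/devices/0000:01:00.0/driver/unbind",
                "0000:01:00.0");

    /* 2. Tell vfio-pci to claim devices with this vendor:device ID. */
    sysfs_write("/sys/bus/pci/drivers/vfio-pci/new_id", "10de 13c2");

    return 0;
}
```

Once vfio-pci holds the device, its IOMMU group appears under /dev/vfio/, and the management platform can hand that group to exactly one qemu process, which matches the exclusive virtual-machine-to-GPU binding described in this embodiment.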
Accordingly, in a further embodiment, embodiments of the present invention also include a general-purpose computer or workstation equipped with a display, keyboard, mouse, trackball, or other cursor manipulation device and configured to perform the above methods and/or procedures.
The specific embodiments of the present invention have been described in detail above with reference to the drawings, but the present invention is not limited to those embodiments; those skilled in the art can make various modifications and variations without departing from the spirit and scope of the claims of this application.

Claims (12)

  1. A cloud rendering system, characterized by comprising a host machine and a plurality of GPUs, wherein the host machine is provided with a plurality of virtual machines, and each virtual machine is configured with a corresponding GPU driver;
    the cloud rendering system further comprises: an MMU, coupled to each GPU driver and each GPU and coupling virtual machine memory with host memory, configured to allocate a GPU address to the GPU driver of a virtual machine when that virtual machine requests access to a GPU, the GPU address being used to access the GPU, and to allocate the corresponding host memory address to a virtual machine when it requests access to virtual machine memory;
    an IOMMU, coupled to each GPU and to virtual machine memory, configured to allocate the corresponding host memory address to a GPU when that GPU requests access to virtual machine memory.
  2. The cloud rendering system according to claim 1, characterized in that a memory address space is set up and the host memory is mapped into it; the memory address space is used to store the addresses corresponding to host memory, and a virtual machine accesses the corresponding host memory by accessing addresses in the memory address space.
  3. The cloud rendering system according to claim 2, characterized in that the IOMMU is further used to map the discontiguous memory segments recorded in the memory address space into contiguous memory segments, so that the GPU can read and write data via DMA.
  4. The cloud rendering system according to claim 1, characterized in that a GPU address space is set up and the GPU is mapped into it; the GPU address space is used to store the addresses corresponding to the GPU control registers, and the GPU driver accesses the corresponding GPU control registers through the GPU address space.
  5. The cloud rendering system according to any one of claims 1-4, characterized in that when any virtual machine starts, it is bound to one GPU through the MMU and/or the IOMMU, and while the binding lasts, the bound GPU can no longer be bound to any other virtual machine.
  6. A cloud rendering server, characterized by comprising the system according to any one of claims 1-5, and further comprising a cloud service management platform for monitoring and managing the running state of the system and managing the users of the system.
  7. A cloud rendering method, characterized by comprising the following steps:
    S1: the virtual machine receives a rendering request, sends the rendering request to the virtual machine's GPU driver, receives the rendering data, and writes the rendering data into memory;
    S2: the virtual machine's GPU driver accesses the GPU control registers and writes the rendering request information into them;
    S3: the GPU accesses the corresponding virtual machine memory according to the rendering request information and processes the rendering data in that memory.
  8. The cloud rendering method according to claim 7, characterized in that there are multiple virtual machines and multiple GPUs in one-to-one correspondence, and when there are multiple rendering requests, each virtual machine correspondingly handles one of them.
  9. The cloud rendering method according to claim 7 or 8, characterized in that writing the rendering data into memory comprises: the MMU couples the virtual machine memory and the host memory, and when a virtual machine requests access to virtual machine memory, the MMU allocates the corresponding host memory address to that virtual machine; the rendering data is stored according to that host memory address.
  10. The cloud rendering method according to claim 9, characterized in that step S2 comprises:
    S201: mapping the host's GPU physical address space segment to the host's virtual address space segment;
    S202: using the GPA-HVA translation table to map the virtual address space segment into the GPU address space in the virtual machine;
    S203: the virtual machine's GPU driver accesses the GPU address space segment and accesses the corresponding GPU control registers according to the address information in that segment.
  11. The cloud rendering method according to claim 10, characterized in that step S3 comprises:
    S301: the IOMMU couples the virtual machine memory and the GPU and maps the discontiguous memory address space used by the virtual machine into a contiguous address space segment;
    S302: the GPU performs DMA reads and writes on the contiguous address space segment according to the rendering request information, obtains the data required for rendering, and completes the rendering.
  12. A program which, when run on a computer, performs the following steps:
    S1: the virtual machine receives a rendering request, sends the rendering request to the virtual machine's GPU driver, receives the rendering data, and writes the rendering data into memory;
    S2: the virtual machine's GPU driver accesses the GPU control registers and writes the rendering request information into them;
    S3: the GPU accesses the corresponding virtual machine memory according to the rendering request information and processes the rendering data in that memory.
PCT/CN2016/088766 2016-02-26 2016-07-06 Cloud rendering system, server and method WO2017143718A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610107582.2A CN105786589A (zh) 2016-02-26 2016-02-26 Cloud rendering system, server and method
CN201610107582.2 2016-02-26

Publications (1)

Publication Number Publication Date
WO2017143718A1 (zh)

Family

ID=56402812

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/088766 WO2017143718A1 (zh) 2016-02-26 2016-07-06 Cloud rendering system, server and method

Country Status (2)

Country Link
CN (1) CN105786589A (zh)
WO (1) WO2017143718A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580674A (zh) * 2019-07-24 2019-12-17 西安万像电子科技有限公司 Information processing method, apparatus and system
CN110681155A (zh) * 2019-09-29 2020-01-14 Oppo广东移动通信有限公司 Game optimization method, game optimization apparatus and mobile terminal
CN110807111A (zh) * 2019-09-23 2020-02-18 北京铂石空间科技有限公司 Three-dimensional graphics processing method and apparatus, storage medium, and electronic device
CN111399964A (zh) * 2020-03-27 2020-07-10 重庆海云捷迅科技有限公司 Cloud desktop platform based on video streaming technology
CN111488196A (zh) * 2020-04-13 2020-08-04 西安万像电子科技有限公司 Rendering method and apparatus, storage medium, and processor
CN112925606A (zh) * 2019-12-06 2021-06-08 阿里巴巴集团控股有限公司 Memory management method, apparatus and device

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108804199B (zh) * 2017-05-05 2021-03-05 龙芯中科技术股份有限公司 Graphics processor virtualization method and apparatus
US10699364B2 (en) * 2017-07-12 2020-06-30 Citrix Systems, Inc. Graphical rendering using multiple graphics processors
CN108762934B (zh) * 2018-06-02 2021-09-07 武汉泽塔云科技股份有限公司 Remote graphics transmission system, method and cloud server
CN109871250A (zh) * 2019-01-16 2019-06-11 山东超越数控电子股份有限公司 Desktop delivery method, apparatus, terminal and storage medium based on a physical graphics card
CN110928695B (zh) * 2020-02-12 2020-05-22 南京芯瞳半导体技术有限公司 Video memory management method and apparatus, and computer storage medium
CN113821308B (zh) * 2021-09-29 2023-11-24 上海阵量智能科技有限公司 System on chip, virtual machine task processing method and device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120154389A1 (en) * 2010-12-15 2012-06-21 International Business Machines Corporation Hardware Accelerated Graphics for Network Enabled Applications
CN104754464A (zh) * 2013-12-31 2015-07-01 华为技术有限公司 Audio playing method, terminal and system
CN104915151A (zh) * 2015-06-02 2015-09-16 杭州电子科技大学 Actively shared memory overcommitment method in a multi-virtual-machine system
CN105242957A (zh) * 2015-09-28 2016-01-13 广州云晫信息科技有限公司 Method and system for a cloud computing system to allocate GPU resources to virtual machines
CN105302765A (zh) * 2014-07-22 2016-02-03 电信科学技术研究院 System-on-chip and memory access management method therefor

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10310879B2 (en) * 2011-10-10 2019-06-04 Nvidia Corporation Paravirtualized virtual GPU
US20150009222A1 (en) * 2012-11-28 2015-01-08 Nvidia Corporation Method and system for cloud based virtualized graphics processing for remote displays
CN103491188B (zh) * 2013-09-30 2016-06-01 上海沃帆信息科技有限公司 Method for multi-user sharing of a graphics workstation using virtual desktops and GPU passthrough
US10191759B2 (en) * 2013-11-27 2019-01-29 Intel Corporation Apparatus and method for scheduling graphics processing unit workloads from virtual machines

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120154389A1 (en) * 2010-12-15 2012-06-21 International Business Machines Corporation Hardware Accelerated Graphics for Network Enabled Applications
CN104754464A (zh) * 2013-12-31 2015-07-01 华为技术有限公司 Audio playing method, terminal and system
CN105302765A (zh) * 2014-07-22 2016-02-03 电信科学技术研究院 System-on-chip and memory access management method therefor
CN104915151A (zh) * 2015-06-02 2015-09-16 杭州电子科技大学 Actively shared memory overcommitment method in a multi-virtual-machine system
CN105242957A (zh) * 2015-09-28 2016-01-13 广州云晫信息科技有限公司 Method and system for a cloud computing system to allocate GPU resources to virtual machines

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580674A (zh) * 2019-07-24 2019-12-17 西安万像电子科技有限公司 Information processing method, apparatus and system
CN110580674B (zh) * 2019-07-24 2024-01-16 西安万像电子科技有限公司 Information processing method, apparatus and system
CN110807111A (zh) * 2019-09-23 2020-02-18 北京铂石空间科技有限公司 Three-dimensional graphics processing method and apparatus, storage medium, and electronic device
CN110681155A (zh) * 2019-09-29 2020-01-14 Oppo广东移动通信有限公司 Game optimization method, game optimization apparatus and mobile terminal
CN112925606A (zh) * 2019-12-06 2021-06-08 阿里巴巴集团控股有限公司 Memory management method, apparatus and device
CN112925606B (zh) * 2019-12-06 2024-05-28 阿里巴巴集团控股有限公司 Memory management method, apparatus and device
CN111399964A (zh) * 2020-03-27 2020-07-10 重庆海云捷迅科技有限公司 Cloud desktop platform based on video streaming technology
CN111399964B (zh) * 2020-03-27 2023-03-24 重庆海云捷迅科技有限公司 Cloud desktop platform based on video streaming technology
CN111488196A (zh) * 2020-04-13 2020-08-04 西安万像电子科技有限公司 Rendering method and apparatus, storage medium, and processor
CN111488196B (zh) * 2020-04-13 2024-03-22 西安万像电子科技有限公司 Rendering method and apparatus, storage medium, and processor

Also Published As

Publication number Publication date
CN105786589A (zh) 2016-07-20

Similar Documents

Publication Publication Date Title
WO2017143718A1 (zh) Cloud rendering system, server and method
US11093177B2 (en) Virtualized OCSSDs spanning physical OCSSD channels
US9563458B2 (en) Offloading and parallelizing translation table operations
US10310879B2 (en) Paravirtualized virtual GPU
KR20200017363A (ko) Managed switching between one or more hosts and solid state drives (SSDs) based on the NVMe protocol to provide host storage services
US10055254B2 (en) Accelerated data operations in virtual environments
JP2010186465A (ja) Centralized device virtualization layer for heterogeneous processing units
JP2003256150A (ja) Storage control device and control method of storage control device
CN109144406B (zh) Metadata storage method, system and storage medium in a distributed storage system
US10140214B2 (en) Hypervisor translation bypass by host IOMMU with virtual machine migration support
US8650342B2 (en) System and method for distributed address translation in virtualized information handling systems
WO2016101282A1 (zh) I/O task processing method, device and system
WO2018119709A1 (zh) Memory access method and apparatus for multiple operating systems, and electronic device
CN104636185A (zh) Service context management method, physical host, PCIe device and migration management device
US10013199B2 (en) Translation bypass by host IOMMU for systems with virtual IOMMU
JP2020504890A (ja) Display method and apparatus for multiple operating systems, and electronic device
US10169062B2 (en) Parallel mapping of client partition memory to multiple physical adapters
WO2018119712A1 (zh) Video display method and apparatus, electronic device, and computer program product
WO2016172862A1 (zh) Memory management method, device and system
US11150928B2 (en) Hypervisor translation bypass
WO2015154226A1 (zh) Data communication method, apparatus and processor in a virtualized environment
US20230185593A1 (en) Virtual device translation for nested virtual machines
US20200201758A1 (en) Virtualized input/output device local memory management
CN103902354A (zh) Method for rapidly initializing a disk in a virtualized application
US20230315328A1 (en) High bandwidth extended memory in a parallel processing system

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16891170

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16891170

Country of ref document: EP

Kind code of ref document: A1