WO2018119951A1 - GPU virtualization system, device and method, electronic apparatus, and computer program product - Google Patents


Info

Publication number
WO2018119951A1
WO2018119951A1 · PCT/CN2016/113260 · CN2016113260W
Authority
WO
WIPO (PCT)
Prior art keywords
graphics processing
operating system
shared memory
processing instruction
memory
Prior art date
Application number
PCT/CN2016/113260
Other languages
English (en)
Chinese (zh)
Inventor
温燕飞
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2016/113260
Priority to CN201680002845.1A (published as CN107003892B)
Publication of WO2018119951A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533: Hypervisors; Virtual machine monitors
    • G06F9/45554: Instruction set architectures of guest OS and hypervisor or native processor differ, e.g. Bochs or VirtualPC on PowerPC MacOS
    • G06F9/45558: Hypervisor-specific management and integration aspects
    • G06F2009/45583: Memory management, e.g. access or allocation

Definitions

  • The present application relates to computer technology, and in particular to a virtualization method, device, system, electronic device, and computer program product for a graphics processor (GPU).
  • FIG. 1 shows a virtualization architecture based on Qemu/KVM (Kernel-based Virtual Machine) technology.
  • The virtualization architecture based on Qemu/KVM technology consists of a primary Host operating system and several virtual Guest operating systems.
  • the Host operating system includes multiple Host user space programs and the Host Linux kernel.
  • Each Guest operating system includes user space, a Guest Linux kernel, and Qemu. These operating systems run on the same set of hardware processor chips, sharing processor and peripheral resources.
  • An ARM processor supporting this virtualization architecture provides at least the EL2, EL1, and EL0 modes: the virtual machine manager (Hypervisor) program runs in EL2 mode; the Linux kernel program runs in EL1 mode; and user-space programs run in EL0 mode.
  • The Hypervisor layer manages hardware resources such as the CPU, memory, timers, and interrupts, and uses the virtualized CPU, memory, timer, and interrupt resources to load different operating systems onto the physical processor for execution.
  • KVM/Hypervisor spans the Host Linux kernel and the Hypervisor. On one hand, it provides a driver node for the emulator Qemu, allowing Qemu to create virtual CPUs through the KVM node and manage virtualized resources. On the other hand, KVM/Hypervisor can switch the Host Linux system out of the physical CPU, load the Guest Linux system onto the physical processor, and handle the subsequent transactions after the Guest Linux system exits abnormally.
  • Qemu provides virtual hardware device resources for the operation of Guest Linux.
  • Through the KVM node of the KVM/Hypervisor module, a virtual CPU is created and physical hardware resources are allocated, so that an unmodified Guest Linux can be loaded onto the physical hardware for execution.
  • a GPU virtualization method, device, system, and electronic device and computer program product are provided for implementing virtualization of a GPU.
  • A virtualization method of a graphics processor GPU, including: receiving a graphics processing operation at a first operating system, and determining a corresponding graphics processing instruction according to the graphics processing operation; and passing the graphics processing instruction to the second operating system through a shared memory; wherein the shared memory is in a readable and writable state for both the first operating system and the second operating system.
  • A virtualization method of a GPU, including: acquiring a graphics processing instruction from a first operating system through a shared memory; executing the graphics processing instruction at a second operating system to obtain a processing result; and displaying the processing result as a response to the graphics processing operation, where the graphics processing operation is received at the first operating system; wherein the shared memory is readable and writable for both the first operating system and the second operating system.
  • A GPU virtualization apparatus includes: a first receiving module, configured to receive a graphics processing operation at a first operating system and determine a corresponding graphics processing instruction according to the graphics processing operation; and a first delivery module, configured to pass the graphics processing instruction to the second operating system through the shared memory; wherein the shared memory is in a readable and writable state for both the first operating system and the second operating system.
  • A virtualization device of a GPU, including: an obtaining module, configured to acquire a graphics processing instruction from a first operating system through a shared memory; and an execution module, configured to execute the graphics processing instruction at the second operating system, obtain a processing result, and display the processing result as a response to the graphics processing operation, where the graphics processing operation is received at the first operating system; wherein the shared memory is readable and writable for both the first operating system and the second operating system.
  • A virtualization system of a GPU, including: a first operating system, including a GPU virtualization device according to the third aspect of the embodiments of the present application; a shared memory, configured to store graphics processing instructions from the first operating system and processing results from the second operating system, wherein the shared memory is in a readable and writable state for both the first operating system and the second operating system; and a second operating system, including a GPU virtualization device according to the fourth aspect of the embodiments of the present application.
  • An electronic device comprising: a display, a memory, one or more processors; and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, where the one or more modules include instructions for performing the steps of the GPU virtualization method of the first aspect of the embodiments of the present application.
  • An electronic device comprising: a display, a memory, one or more processors; and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, where the one or more modules include instructions for performing the steps of the GPU virtualization method of the second aspect of the embodiments of the present application.
  • A computer program product for use in conjunction with an electronic device including a display, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein; the computer program mechanism includes instructions for performing the steps of the GPU virtualization method of the first aspect of the embodiments of the present application.
  • A computer program product for use in conjunction with an electronic device including a display, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein; the computer program mechanism includes instructions for performing the steps of the GPU virtualization method of the second aspect of the embodiments of the present application.
  • By implementing the GPU virtualization method, device, system, electronic device, and computer program product according to the embodiments of the present application, graphics processing instructions and execution results are transferred through the shared memory between the first operating system and the second operating system, thereby realizing virtualization of the GPU.
  • FIG. 1 A schematic diagram of a virtualization architecture based on Qemu/KVM technology is shown in FIG. 1;
  • FIG. 2 is a schematic diagram of a system architecture for implementing a virtualization method of a GPU in an embodiment of the present application
  • FIG. 3 is a flowchart of a virtualization method of a GPU according to Embodiment 1 of the present application;
  • FIG. 4 is a flowchart of a virtualization method of a GPU according to Embodiment 2 of the present application.
  • FIG. 5 is a flowchart of a virtualization method of a GPU according to Embodiment 3 of the present application.
  • FIG. 6 is a schematic structural diagram of a virtualization device of a GPU according to Embodiment 4 of the present application.
  • FIG. 7 is a schematic structural diagram of a virtualization device of a GPU according to Embodiment 5 of the present application.
  • FIG. 8 is a schematic structural diagram of a virtualization system of a GPU according to Embodiment 6 of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device according to Embodiment 7 of the present application.
  • FIG. 10 is a schematic structural diagram of an electronic device according to Embodiment 8 of the present application.
  • A terminal device such as a mobile phone or a tablet requires virtualization of all hardware devices, so that the virtual operating system can also use the real hardware devices. Therefore, there is a need to provide a virtualization method for the GPU.
  • In the embodiments of the present application, a GPU virtualization method, apparatus, system, electronic device, and computer program product are provided; graphics processing instructions and execution results are delivered through a shared memory between the first operating system and the second operating system, which enables virtualization of the GPU.
  • The solution in the embodiments of the present application can be applied to various scenarios, for example, a smart terminal using a virtualization architecture based on Qemu/KVM technology, an Android emulator, and the like.
  • the solution in the embodiment of the present application can be implemented in various computer languages, for example, an object-oriented programming language Java or the like.
  • FIG. 2 shows a system architecture for implementing a virtualization method of a GPU in an embodiment of the present application.
  • the GPU virtualization system according to an embodiment of the present application includes a first operating system 201, a second operating system 202, and a shared memory 203.
  • The first operating system may be a Guest operating system, and the second operating system may be a Host operating system. It should be understood that, in a specific implementation, the first operating system may also be a Host operating system and the second operating system may also be a Guest operating system, which is not limited in this application.
  • In the following, the first operating system is the Guest operating system and the second operating system is the Host operating system.
  • The Guest operating system may include user space 2011, Guest Linux Kernel 2012, and Qemu 2013. A virtual graphics program interface exists in the user space of the Guest operating system; specifically, the graphics program interface may be the OpenGL (Open Graphics Library) API (Application Program Interface), and may also be, for example, Direct 3D, Quick Draw 3D, or another graphics program interface, which is not limited in this application.
  • The Host operating system may include a user space 2021 and a Host Linux Kernel 2022. In the user space of the Host operating system, a graphics program backend server corresponding to the graphics program interface in the Guest operating system may be installed, specifically the OpenGL Backend Server; the backend server can operate the GPU device 204 through the GPU driver in the Host Linux Kernel.
  • The shared memory 203 is memory visible to both the Guest operating system and the Host operating system, and is in a readable and writable state for both; that is, both the Guest operating system and the Host operating system can perform read and write operations on the shared memory.
  • The shared memory 203 may include only the first storage area 2031, or may be divided into a first storage area 2031 and a second storage area 2032.
  • the first storage area may also be referred to as private memory; the second storage area may also be referred to as public memory.
  • The division between the first storage area and the second storage area follows no fixed rule; it may be made according to the sizes of the data typically stored in each area, according to the designer's experience, or according to a preset policy, which is not limited in this application.
  • The first storage area may be used for the transfer of functions and parameters, and/or synchronization information, between the threads of the Guest operating system and the Backend Server threads. Specifically, the private memory may be further divided into multiple blocks, where one block is defined as one channel and one channel corresponds to one thread of the Guest operating system. In the specific division, the multiple blocks may be of equal size, or may be sized according to the functions and parameters, and/or synchronization information, commonly transferred by threads in the system; this application does not limit this.
  • the user program of the Guest operating system can dynamically manage the channels in the private memory, that is, the user program can allocate, reallocate, and release the channels in the private memory at any time.
  • The second storage area can be used for transferring large data blocks between the threads of the Guest operating system and the Backend Server thread, for example, graphics content data.
  • The user program in the Guest operating system can manage the blocks in the common memory; that is, the user program can allocate and release blocks in the common memory at any time, and each allocation and release is processed in units of whole blocks.
  • the division of the size of the block in the common memory can be adapted to the commonly used GPU graphics processing data.
  • In practice, it was found that, in the process of GPU virtualization, the first operating system usually transmits about 2M to 16M of graphics content data to the second operating system to meet the requirements of GPU graphics virtualization processing. Therefore, when sizing the blocks in the common memory, the common memory can be divided into memory blocks of, for example, 2M, 4M, 8M, and 16M.
  • For example, when a thread requests about 4M of common memory, the 4M memory block can be directly allocated to the corresponding thread, and when the thread releases it, an idle flag is set on the 4M block.
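  • The whole-block allocation scheme described above can be modeled as follows (an illustrative Python sketch; the class and field names are hypothetical, the best-fit policy is an assumption, and a real implementation would live in the Guest system's memory management code rather than user-space Python):

```python
MB = 1024 * 1024

class CommonMemory:
    """Illustrative model of the common-memory blocks (2M/4M/8M/16M)."""

    def __init__(self, block_sizes=(2 * MB, 4 * MB, 8 * MB, 16 * MB)):
        # Each block carries only a size and an idle flag; allocation and
        # release are always processed in units of whole blocks.
        self.blocks = [{"size": s, "idle": True} for s in block_sizes]

    def allocate(self, nbytes):
        """Return the index of the smallest idle block that fits, or None."""
        candidates = [(b["size"], i) for i, b in enumerate(self.blocks)
                      if b["idle"] and b["size"] >= nbytes]
        if not candidates:
            return None
        _, idx = min(candidates)
        self.blocks[idx]["idle"] = False
        return idx

    def release(self, idx):
        # Releasing simply sets the idle flag on the whole block.
        self.blocks[idx]["idle"] = True

mem = CommonMemory()
i = mem.allocate(3 * MB)   # a ~3M request lands in the 4M block
```

A request is served by the smallest free block that can hold it, and release is a single flag write, which keeps allocation cheap enough for the per-frame graphics data path.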
  • Only one Guest operating system, one Host operating system, and one shared memory are shown in FIG. 2; in a specific implementation, however, there may be one or more Guest operating systems, one or more Host operating systems, and one or more shared memories; that is, the Guest operating systems, Host operating systems, and shared memories may each be of any number, which is not limited in this application.
  • The shared memory shown in FIG. 2 includes two storage areas, private memory and common memory; the private memory is divided into three equal-sized channels, and the common memory is divided into four blocks of different sizes.
  • In a specific implementation, the shared memory may include only the private memory storage area; the private memory may be undivided, or divided into multiple channels of different sizes; the common memory may be absent, or divided into multiple equal-sized channels; and so on, which is not limited in this application.
  • FIG. 3 is a flowchart of a virtualization method of a GPU according to Embodiment 1 of the present application.
  • The steps of the GPU virtualization method are described with the Guest operating system as the executing entity.
  • a virtualization method of a GPU according to an embodiment of the present application includes the following steps:
  • the shared memory corresponding to the GPU device may be created when the Qemu corresponding to the guest system is started.
  • Qemu can create a corresponding shared memory through a system call.
  • a specific address space can be divided from the memory as the shared memory of the GPU device.
  • the size of the shared memory can be set by the developer and adapted to the GPU.
  • the shared memory corresponding to the GPU device can be set to 128M or the like, which is not limited in this application.
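  • As a rough illustration of such a creation step, the following sketch stands in for Qemu's system-call sequence using a file-backed mapping (hypothetical stand-in only; Qemu's actual shared-memory mechanism differs):

```python
import mmap
import os
import tempfile

SHM_SIZE = 128 * 1024 * 1024  # 128M, matching the example size above

# Create a backing object of the chosen size and map it into the
# address space; both sides of a shared mapping would see its contents.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, SHM_SIZE)
shm = mmap.mmap(fd, SHM_SIZE, access=mmap.ACCESS_WRITE)

shm[0:4] = b"GPU0"  # a write like this is visible through any shared mapping
```

The size is fixed when the region is created, which is why it is chosen up front by the developer and adapted to the GPU.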
  • When there are multiple Guest systems, a shared memory may be created for the GPU by the Qemu of each Guest system, or one shared memory corresponding to the GPU may be shared by the multiple Guest systems.
  • Qemu further maps the shared memory into the PCI (Peripheral Component Interconnect) device memory space of the Guest system, and provides the Guest system with a virtual PCI register as the PCI configuration space.
  • the Guest Linux Kernel then divides the shared memory into private and public memory.
  • the Guest Linux Kernel can partition the shared memory when initializing the GPU device; so that the shared memory supports access by multiple processes or threads.
  • The private memory, that is, the first storage area, may be divided into a first preset number of channels; the common memory, that is, the second storage area, may be divided into a second preset number of blocks.
  • the first preset number and the second preset number may be set by a developer.
  • the size of the multiple channels of the private memory may be equal; the size of the multiple blocks of the common memory may be adapted to the processing data of the physical device corresponding to the shared memory.
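  • The resulting layout can be illustrated with simple offset arithmetic (an illustrative sketch; the channel count and block sizes shown are assumptions taken from the examples in this description, and the function names are hypothetical):

```python
MB = 1024 * 1024
SHM_SIZE = 128 * MB                        # total shared memory (example size)
NUM_CHANNELS = 3                           # first preset number (assumed)
BLOCK_SIZES = [2 * MB, 4 * MB, 8 * MB, 16 * MB]  # second preset: common blocks

COMMON_SIZE = sum(BLOCK_SIZES)
PRIVATE_SIZE = SHM_SIZE - COMMON_SIZE
CHANNEL_SIZE = PRIVATE_SIZE // NUM_CHANNELS      # equal-sized channels

def channel_offset(ch):
    """Byte offset of private-memory channel `ch` inside the shared region."""
    return ch * CHANNEL_SIZE

def block_offset(blk):
    """Byte offset of common-memory block `blk`; common memory follows private."""
    return PRIVATE_SIZE + sum(BLOCK_SIZES[:blk])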
  • When a front-end thread is started, a step of allocating a corresponding shared memory address space for the front-end thread and the corresponding back-end thread may be included.
  • a front-end thread corresponding to the API call instruction may be created.
  • the thread creation instruction corresponding to the API call instruction is sent to the Host operating system to trigger the Host operating system to create a corresponding backend thread.
  • The address space of the private memory channel corresponding to the front-end thread, and the common memory address space allocated to the front-end thread, may also be obtained from the Guest Linux Kernel and mapped into the address space of the front-end thread, thereby establishing a synchronous control channel with Qemu.
  • a certain channel in the private memory is usually allocated to the front-end thread, and the common memory is entirely allocated to the front-end thread.
  • The address space of the private memory channel corresponding to the front-end thread and the address space of the common memory can be transferred to Qemu through the PCI configuration space; Qemu then uses the inter-process communication mechanism to send the address space of the private memory channel corresponding to the front-end thread, and the address space of the common memory, to the Backend Server, where they are mapped into the address space of the back-end thread.
  • the user typically performs a graphics processing operation on a thread in the guest operating system, which may be, for example, opening a new window, opening a new page, or the like. It can be understood that, before this step, the step of creating a new thread in the user space of the Guest operating system may also be included.
  • the new thread may be an application, such as QQ, WeChat, and the like.
  • the behavior of the user creating a new thread may be, for example, the user opening WeChat or the like.
  • The first storage area may be further divided into one or more channels. If the first storage area includes multiple channels, before the graphics processing instruction is written to the shared memory, the method further includes: determining, according to the thread corresponding to the graphics processing instruction, the channel corresponding to the graphics processing instruction.
  • the thread can be assigned a corresponding channel of the first storage area according to a preset rule.
  • The rule may be the order in which the threads are created. For example, when a new thread is created, the Guest Linux kernel assigns a unique channel number to the thread, and maps the private memory corresponding to the channel number, together with the entire public memory, to the user program; the Guest user program then notifies the OpenGL Backend Server through Qemu to create a thread, and maps the corresponding private memory channel and the entire common memory space to that thread.
  • In a specific implementation, the step of allocating the channel number may be omitted; alternatively, the step of determining, according to the thread corresponding to the graphics processing instruction, the channel corresponding to the graphics processing instruction may be omitted.
  • Transferring the graphics processing instruction to the second operating system through the shared memory may be implemented by: writing the graphics processing instruction to the shared memory, and sending the offset of the graphics processing instruction within the shared memory to the second operating system.
  • The Guest user program may keep an offset record for each allocated memory block, that is, record the offset address, within the memory block corresponding to the current thread, at which the graphics processing instruction is currently written; it then sends the offset address of the current memory block to the corresponding thread in the Host operating system. The Host operating system can then read the graphics processing instruction from the corresponding position in the shared memory through the corresponding channel number and offset address, execute the function immediately, and obtain the processing result.
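  • The offset-record protocol just described can be sketched as follows (illustrative only; the function names and the channel size are hypothetical, and a `bytearray` stands in for one private-memory channel):

```python
CHANNEL_SIZE = 64 * 1024
channel = bytearray(CHANNEL_SIZE)  # stands in for one private-memory channel
write_off = 0                      # current write offset within the channel

def write_command(payload: bytes):
    """Guest side: append one command to the channel and return
    (offset, length), the pair sent to the Host instead of the data itself."""
    global write_off
    off = write_off
    channel[off:off + len(payload)] = payload
    write_off += len(payload)
    return off, len(payload)

def read_command(off, length):
    """Host side: fetch the command at the given offset."""
    return bytes(channel[off:off + length])

off, n = write_command(b"\x01\x02params")
assert read_command(off, n) == b"\x01\x02params"
```

Only the small (offset, length) pair crosses the notification path; the instruction bytes themselves never leave the shared region.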
  • The graphics processing instruction may include only graphics processing functions and parameters; the graphics processing functions and parameters may be stored in the first storage area of the shared memory, i.e., the private memory.
  • After the Host operating system obtains the corresponding graphics processing function and parameters, it can execute the function immediately and obtain the processing result.
  • the number corresponding to the graphics processing function may be determined first; then the graphics processing function number and parameters are written to the first storage area.
  • The Host operating system determines the corresponding graphics processing function according to the number, executes the function according to the graphics processing function and parameters, and obtains the processing result.
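  • The number-based dispatch can be sketched as a lookup table (illustrative only; the function numbers, the parameter encoding, and the table contents shown are assumptions, not the patent's actual encoding):

```python
import struct

# Hypothetical table mapping function numbers to host-side handlers.
FUNC_TABLE = {
    1: lambda r, g, b, a: ("glClearColor", (r, g, b, a)),
    2: lambda mask: ("glClear", (mask,)),
}

def encode(func_no, *params):
    """Guest side: pack one unsigned int for the function number,
    then each parameter as a 32-bit float."""
    return struct.pack("<I", func_no) + struct.pack(f"<{len(params)}f", *params)

def dispatch(buf):
    """Host side: decode the number, look the function up, and execute it."""
    func_no, = struct.unpack_from("<I", buf, 0)
    nparams = (len(buf) - 4) // 4
    params = struct.unpack_from(f"<{nparams}f", buf, 4)
    return FUNC_TABLE[func_no](*params)

name, args = dispatch(encode(1, 0.0, 0.0, 0.0, 1.0))
```

Sending a small fixed-size number instead of a function name keeps the private-memory records compact and makes host-side lookup a constant-time operation.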
  • the graphics processing function may be an OpenGL function.
  • In addition to the graphics processing function and parameters, the graphics processing instruction may include synchronization information, where the synchronization information indicates the time at which the second operating system should execute the graphics processing instruction. The graphics processing function and parameters, as well as the synchronization information, are stored in the first storage area of the shared memory, that is, the private memory. After obtaining the corresponding graphics processing function and parameters, the Host operating system can execute the function at the time indicated by the synchronization information and obtain the processing result.
  • In addition to graphics processing functions and parameters, the graphics processing instruction may include graphics content data; the graphics processing function and parameters may be stored in the private memory of the shared memory, and the graphics content data is written to the second storage area, that is, the common memory.
  • The Guest user program sends the offset address of the private memory block and the offset address of the common memory block to the corresponding thread in the Host operating system. The Host operating system can then read the graphics processing function and parameters from the corresponding position in the private memory through the corresponding channel number and private memory offset address, read the graphics content data from the corresponding position in the common memory through the common memory offset address, execute the function immediately after reading, and obtain the processing result.
  • the graphic content data may refer to an image frame that requires image processing.
  • The common memory may be further divided into a plurality of blocks whose sizes are adapted to the GPU graphics content data. If the second storage area includes a plurality of blocks, before the graphics content data is written to the second storage area, the method further includes: determining, according to the size of the graphics content data, the block corresponding to the graphics content data.
  • In addition to graphics processing functions, parameters, and synchronization information, the graphics processing instruction may include graphics content data; the graphics processing function, parameters, and synchronization information may be stored in the private memory of the shared memory, and the graphics content data is written to the second storage area, that is, the common memory.
  • the Guest user program sends the offset address of the private memory block and the offset address of the common memory block to the corresponding thread in the Host operating system.
  • The Host operating system can read the graphics processing function, parameters, and synchronization information from the corresponding position in the private memory through the corresponding channel number and private memory offset address, read the graphics content data from the corresponding position in the common memory through the common memory offset address, execute the function at the time indicated by the synchronization information, and obtain the processing result.
  • The shared memory may be used one or more times between the first operating system and the second operating system to deliver any one or more of the following data: a graphics processing function or graphics processing function number, parameters, synchronization information, and graphics content data.
  • The first operating system may transfer the graphics processing instruction to be delivered to the second operating system through the shared memory in one pass, or may split the graphics processing instruction into appropriately sized pieces and transfer them to the second operating system through the shared memory in multiple passes, using common technical means of those skilled in the art; this application does not limit the splitting strategy for graphics processing instructions.
  • the second operating system displays the processing result as a response of the graphics processing operation.
  • the second operating system may display the processing result to the screen through the GPU device.
  • the first operating system receives an execution result from the second operating system.
  • The second operating system may generate an execution result according to the execution status of the function.
  • The execution result may include a message indicating whether the graphics processing function succeeded or failed, and/or software version information, etc., and is returned to the first operating system, so that the corresponding thread in the first operating system can learn the execution status of the function.
  • The Host operating system can write the execution result to the shared memory and record the offset address, within the memory block corresponding to the current thread, of the position where the execution result is written; it then sends the offset address to the corresponding thread in the Guest operating system. The Guest operating system can then read the data from the corresponding position in the shared memory through the offset address.
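  • The result path can be sketched the same way as the instruction path (illustrative only; the record layout, offsets, and function names are hypothetical examples, not the patent's actual format):

```python
import struct

channel = bytearray(4096)  # stands in for the shared channel both sides see

def host_write_result(off, success, version):
    """Host side: write a small result record (hypothetical layout:
    1 status byte + 2-byte version) and return the offset for the guest."""
    struct.pack_into("<BH", channel, off, 1 if success else 0, version)
    return off

def guest_read_result(off):
    """Guest side: decode the record at the offset received from the host."""
    status, version = struct.unpack_from("<BH", channel, off)
    return bool(status), version

off = host_write_result(128, True, 0x0102)
ok, ver = guest_read_result(off)
```

The round trip is symmetric with the instruction path: data stays in shared memory and only offsets travel between the two operating systems.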
  • the remote call of the OpenGL API is implemented on the basis of the shared memory, thereby realizing the virtualization of the GPU.
  • FIG. 4 is a flowchart of a virtualization method of a GPU according to Embodiment 2 of the present application.
  • The steps of the GPU virtualization method are described with the Host operating system as the executing entity.
  • For the system architecture in this embodiment of the present application, refer to the system architecture shown in FIG. 2 in the first embodiment; details are not described herein again.
  • a virtualization method of a GPU includes the following steps:
  • the host operating system acquires graphics processing instructions from the guest operating system through the shared memory.
  • the shared memory may be divided into private memory and common memory, and the private memory may be further divided into multiple channels corresponding to different threads; if the private memory includes multiple channels, before S401 the method further includes: determining, according to the thread corresponding to the graphics processing instruction, the channel corresponding to the graphics processing instruction.
  • when the user creates a new thread in the user space of the guest operating system, the thread may be allocated a corresponding channel of the first storage area according to a preset rule.
  • the rule may be the order in which the threads are created. For example, when a new thread is created, the Guest Linux kernel assigns a unique channel number to the thread and maps the private memory corresponding to the channel number, together with the entire public memory, to the user program; the Guest user program then notifies the OpenGL Backend Server through Qemu to create a thread and map the corresponding private memory channel and the entire public memory space to that thread.
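A minimal sketch of this channel-assignment rule, assuming channels are handed out in thread-creation order and returned when a thread exits; the class and method names are hypothetical.

```python
class ChannelTable:
    """Assigns each newly created guest thread a unique private-memory
    channel number, in creation order (illustrative sketch)."""
    def __init__(self, num_channels: int):
        self.free_channels = list(range(num_channels))
        self.thread_to_channel = {}

    def on_thread_create(self, thread_id: int) -> int:
        if not self.free_channels:
            raise RuntimeError("no free private-memory channel")
        channel = self.free_channels.pop(0)
        self.thread_to_channel[thread_id] = channel
        return channel  # both sides now map this channel for the thread

    def on_thread_exit(self, thread_id: int) -> None:
        # Return the channel so a later thread can reuse it.
        self.free_channels.append(self.thread_to_channel.pop(thread_id))
```

In the embodiment this bookkeeping would live in the Guest Linux kernel, with the Backend Server informed of each assignment through Qemu.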
  • in other implementations, the step of allocating the channel number may be omitted; or the step of determining, according to the thread corresponding to the graphics processing instruction, the channel corresponding to the graphics processing instruction may be omitted.
  • the guest operating system may send the host operating system the offset address of the graphics processing instruction in the shared memory; the host operating system then reads the graphics processing instruction from the shared memory according to that offset address.
  • if the graphics processing instruction includes only the graphics processing function and its parameters, the host operating system can obtain the corresponding graphics processing function and parameters from the private memory. If what is obtained from the private memory is the number of the graphics processing function, the corresponding graphics processing function can be determined according to that number and then executed with the parameters.
  • if the graphics processing instruction includes synchronization information in addition to the graphics processing function and parameters, where the synchronization information indicates the time at which the second operating system is to execute the graphics processing instruction, the host operating system obtains the corresponding graphics processing function, parameters, and synchronization information from the private memory.
  • if the graphics processing instruction includes graphic content data in addition to the graphics processing function and parameters, the host operating system can read the graphics processing function and parameters from the corresponding position of the private memory through the corresponding channel number and private-memory offset address, and read the graphic content data from the corresponding position of the common memory through the common-memory offset address.
  • if the graphics processing instruction includes graphic content data in addition to the graphics processing function, parameters, and synchronization information, the host operating system can read the graphics processing function, parameters, and synchronization information from the corresponding position of the private memory through the corresponding channel number and private-memory offset address, and read the graphic content data from the corresponding position of the common memory through the common-memory offset address.
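The variants above can be illustrated with one assumed wire layout: the function number and parameters live in the thread's private channel, while bulk graphic content is fetched from the common memory at its own offset. The struct framing below is an assumption for illustration, not the embodiment's actual format.

```python
import struct

# Assumed channel layout: 32-bit function number, 32-bit parameter
# count, then the parameters as 32-bit words. Bulk graphic content
# travels separately in the common memory.
def write_instruction(private: bytearray, off: int, func_no: int, params):
    struct.pack_into(f"<II{len(params)}I", private, off,
                     func_no, len(params), *params)

def read_instruction(private, priv_off, common=None, common_off=0, data_len=0):
    """Host-side read: function number and parameters come from the
    private channel; optional bulk data comes from the common memory."""
    func_no, n = struct.unpack_from("<II", private, priv_off)
    params = struct.unpack_from(f"<{n}I", private, priv_off + 8)
    data = bytes(common[common_off:common_off + data_len]) if common else b""
    return func_no, params, data
```

Keeping small control words in the private channel and large payloads in the common area is what lets many guest threads share one bulk region without contending on their control paths.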
  • the host operating system executes the graphics processing instruction to obtain a processing result.
  • if synchronization information is included, the host operating system may execute the graphics processing function based on the parameters at the moment indicated by the synchronization information, and obtain the processing result.
  • if synchronization information is not included in the graphics processing instruction, the host operating system can execute the graphics processing function based on the parameters immediately after acquiring the instruction, and obtain the processing result.
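A hypothetical host-side sketch of both cases: the function number selects a handler from a dispatch table, and execution is deferred to the indicated moment only when synchronization information is present. The table entries stand in for real OpenGL calls and are not part of the embodiment.

```python
import time

# Hypothetical numbering; in a real backend these would invoke GL.
FUNC_TABLE = {
    1: lambda mask: ("glClear", mask),
    2: lambda x, y, w, h: ("glViewport", (w, h)),
}

def execute(func_no, params, sync_at=None,
            clock=time.monotonic, sleep=time.sleep):
    """Run the numbered function immediately, or wait until the moment
    given by the synchronization information (a monotonic timestamp here)."""
    if sync_at is not None:
        delay = sync_at - clock()
        if delay > 0:
            sleep(delay)
    return FUNC_TABLE[func_no](*params)
```

Passing `clock` and `sleep` as parameters keeps the timing policy testable without real delays.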
  • the process of the host operating system displaying the function processing result may adopt the conventional technical means of those skilled in the art, which is not described in this application.
  • the execution result of the function, for example a message identifying whether the function executed successfully or failed, may be written into the shared memory; the offset address of the message in the shared memory is then sent to the first operating system, so that the first operating system obtains the function execution result according to the offset address.
  • the remote calling of the GPU device is implemented in the Host operating system in conjunction with the user program in the Guest operating system; that is, the virtualization of the GPU is implemented.
  • the remote call of the OpenGL API is implemented on the basis of the shared memory, thereby realizing the virtualization of the GPU.
  • FIG. 5 is a flowchart of a virtualization method of a GPU according to Embodiment 3 of the present application.
  • taking the OpenGL graphics processing interface as an example, this embodiment describes the steps performed by the Guest operating system and the Host operating system to implement the GPU virtualization method.
  • for the system architecture in the embodiment of the present application, refer to the system architecture shown in FIG. 2 in the first embodiment; details are not described herein again.
  • the initiator of the OpenGL API function remote call is the Guest operating system
  • the function execution party is the Host operating system.
  • the downlink synchronization process from the Guest operating system to the Host operating system goes through the Guest Linux kernel and Qemu reaches the OpenGL Backend Server.
  • the uplink synchronization process from the Host operating system to the Guest operating system is initiated from the OpenGL Backend Server and reaches the OpenGL emulator API via the Qemu and Guest Linux kernels.
  • in the Guest operating system, a thread is created to initialize and call the OpenGL functions.
  • the OpenGL Backend Server also creates a corresponding thread for each such Guest thread.
  • the virtualization method of the GPU according to Embodiment 3 of the present application includes the following steps:
  • the shared memory can be divided into two large blocks in the Guest Linux kernel, which are defined as private memory and common memory.
  • the private memory may be divided into a plurality of blocks of equal size, each block being a channel; each channel is used to transmit the data and synchronization information of one Guest operating system thread to an OpenGL Backend Server thread.
  • the data may include graphics processing function numbers and parameters.
  • the common memory can be divided into a plurality of large chunks of unequal size, used for large data-block transmission from all threads of the Guest operating system to the OpenGL Backend Server threads.
  • the allocation of private channel numbers can be controlled by the Guest Linux kernel.
  • the kernel is responsible for allocating a unique channel number, and for mapping the private memory corresponding to the channel, together with the entire public memory, to the user program.
  • the Guest user program tells the OpenGL Backend Server through Qemu to create a thread and to use the corresponding private channel memory and the entire public memory space.
  • the Guest user program dynamically manages the private channel memory, and can perform allocate, reallocate, and release operations in the private memory at any time.
  • the Guest user program manages the common memory in fixed-size blocks; each allocation and release is handled as a whole block. For example, if the total public memory size is 32M and it is partitioned into five memory blocks of 2M, 2M, 4M, 8M, and 16M, then when the user applies for 3M of space, the 4M memory block is allocated directly, and an idle flag is set on the 4M block when it is released.
  • the Guest user program records an offset for each allocated memory region, that is, the offset address of the currently allocated memory relative to the entire shared memory block.
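The whole-block management in the example above can be sketched as follows. The best-fit choice of the smallest free block that fits is an assumption; the text only states that a 3M request is served by the 4M block and that release sets an idle flag.

```python
class CommonMemory:
    """Whole-block allocator matching the 32M example: fixed partitions
    of 2M, 2M, 4M, 8M and 16M, allocation of a free block that fits,
    and an idle flag set on release. Sizes/offsets are in megabytes."""
    def __init__(self, sizes_mb=(2, 2, 4, 8, 16)):
        self.blocks, off = [], 0
        for size in sizes_mb:
            self.blocks.append({"off": off, "size": size, "free": True})
            off += size

    def alloc(self, need_mb):
        candidates = [b for b in self.blocks
                      if b["free"] and b["size"] >= need_mb]
        if not candidates:
            raise MemoryError("no free block large enough")
        block = min(candidates, key=lambda b: b["size"])  # best fit
        block["free"] = False
        return block["off"]  # offset recorded relative to the whole region

    def release(self, off_mb):
        # Releasing a block only flips its idle flag; blocks never split.
        next(b for b in self.blocks if b["off"] == off_mb)["free"] = True
```

The returned offset is exactly the record the Guest user program keeps for each allocation, and is what gets sent across to the host side.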
  • the Guest user program determines a corresponding graphics processing instruction in response to the user's graphics processing operation.
  • for the implementation of step S503, reference may be made to the implementation of S301 in the first embodiment; the repeated description is omitted.
  • for the implementation of step S504, reference may be made to the process of transmitting the function number and parameters in S302 of the first embodiment; repetitions will not be repeated.
  • S505 The host operating system acquires the passed function number and its parameters from the shared memory, and starts executing the function.
  • for the implementation of step S505, reference may be made to the process of obtaining the function number and parameters in S401 of the second embodiment, and to the function execution process in S402 of the second embodiment; the repeated description is omitted.
  • for the implementation of step S506, reference may be made to the implementation of S403 and S404 in the second embodiment; the repeated description is omitted.
  • the virtualization method of the GPU in the embodiment of the present application uses memory shared across operating systems, that is, one memory region that both operating systems can see, read, and write; on the basis of this shared memory, remote calls to the OpenGL API are implemented, thereby virtualizing the GPU.
  • a GPU virtualization device is also provided in the embodiment of the present application.
  • the principle by which the device solves the problem is similar to the GPU virtualization method provided in the first embodiment of the present application; for its implementation, see the implementation of the method, and repetition is omitted.
  • FIG. 6 is a schematic structural diagram of a virtualization device of a GPU according to Embodiment 4 of the present application.
  • the virtualization device 600 of the GPU includes: a first receiving module 601, configured to receive a graphics processing operation at a first operating system and determine a corresponding graphics processing instruction according to the graphics processing operation; and a first delivery module 602, configured to pass the graphics processing instruction to the second operating system through the shared memory, so that the second operating system executes the graphics processing instruction, obtains a processing result, and displays the processing result as a response to the graphics processing operation;
  • the shared memory is in a readable and writable state for both the first operating system and the second operating system.
  • the first operating system may be a Guest operating system
  • the second operating system may be a Host operating system
  • the first delivery module may specifically include: a first writing submodule, configured to write the graphics processing instruction to the shared memory; and a first sending submodule, configured to send the offset address of the graphics processing instruction in the shared memory to the second operating system.
  • the graphics processing instruction may include a graphics processing function and a parameter; the first writing sub-module may be specifically configured to: store the graphics processing instruction into the first storage area of the shared memory.
  • the graphics processing instruction may further include synchronization information, where the synchronization information may be used to indicate a timing at which the second operating system executes the graphics processing instruction.
  • the graphics processing instruction may further include graphic content data; the shared memory may further include a second storage area; and the first writing sub-module may be further configured to write the graphic content data to the second storage area.
  • the second storage area includes a plurality of blocks, wherein each block has a preset size, and the preset size is adapted to GPU graphic content data; the device may further include: a first determining module, configured to determine, according to the size of the graphic content data, the block corresponding to the graphic content data.
  • the first storage area includes a plurality of channels, wherein each channel corresponds to a different thread; the device may further include: a second determining module, configured to determine, according to the thread corresponding to the graphics processing instruction, the channel corresponding to the graphics processing instruction.
  • the graphics processing instruction may include a number and parameters corresponding to the graphics processing function; the first writing sub-module may be specifically configured to: determine the number corresponding to the graphics processing function, and write the graphics processing function number and parameters to the first storage area.
  • the GPU virtualization apparatus further includes: a second receiving module 603, configured to receive an execution result from the second operating system.
  • the second receiving module may further include: a first address receiving submodule, configured to receive, from the second operating system, the offset address of the execution result in the shared memory; and a first reading submodule, configured to read the execution result from the shared memory according to that offset address.
  • the remote call of the OpenGL API is implemented on the basis of the shared memory, thereby realizing the virtualization of the GPU.
  • a virtualization device of a GPU is also provided in the embodiment of the present application.
  • the principle by which the device solves the problem is similar to the GPU virtualization method provided in the second embodiment of the present application; for its implementation, see the implementation of the method, and repetition is omitted.
  • FIG. 7 is a schematic structural diagram of a virtualization device of a GPU according to Embodiment 5 of the present application.
  • the virtualization device 700 of the GPU includes: an obtaining module 701, configured to acquire a graphics processing instruction from a first operating system through the shared memory; an executing module 702, configured to execute the graphics processing instruction to obtain a processing result; and a display module 703, configured to display the processing result as a response to the graphics processing operation, wherein the graphics processing operation is received by the first operating system;
  • the shared memory is readable and writable for both the first operating system and the second operating system.
  • the first operating system may be a Guest operating system
  • the second operating system may be a Host operating system
  • the acquiring module may specifically include: a second address receiving submodule, configured to receive, from the first operating system, the offset address of the graphics processing instruction in the shared memory; and a second reading submodule, configured to read the graphics processing instruction from the shared memory according to that offset address.
  • the graphics processing instruction may include a graphics processing function and a parameter; and the second reading sub-module may be specifically configured to: read the graphics processing instruction from the first storage area of the shared memory.
  • the graphics processing instruction may further include synchronization information, where the synchronization information may be used to indicate the time at which the second operating system executes the graphics processing instruction; the execution module may be configured to execute the graphics processing instruction at the time indicated by the synchronization information.
  • the graphics processing instruction may further include graphic content data; the shared memory may further include a second storage area; and the second reading submodule may be further configured to read the graphic content data from the second storage area of the shared memory.
  • the first storage area includes a plurality of channels, wherein each channel corresponds to a different thread; the device may further include: a second determining module, configured to determine, according to the thread corresponding to the graphics processing instruction, the channel corresponding to the graphics processing instruction.
  • the graphics processing instruction may include a number and parameters corresponding to the graphics processing function; the second reading submodule may be specifically configured to: read the graphics processing function number and parameters from the first storage area, and determine the corresponding graphics processing function according to the function number.
  • the GPU virtualization apparatus may further include: a second delivery module, configured to deliver the execution result to the first operating system through the shared memory.
  • the second delivery module may specifically include: a second writing submodule, configured to write the execution result to the shared memory; and a second sending submodule, configured to send the offset address of the execution result in the shared memory to the first operating system, so that the first operating system obtains the execution result according to that offset address.
  • the remote call of the OpenGL API is implemented on the basis of the shared memory, thereby realizing the virtualization of the GPU.
  • a virtualization system of a GPU is also provided in the embodiment of the present application.
  • the principle by which the system solves the problem is similar to the GPU virtualization methods provided in Embodiments 1 and 2 of the present application; its implementation can refer to the implementation of those methods, and repetition is omitted.
  • FIG. 8 is a schematic structural diagram of a virtualization system of a GPU according to Embodiment 6 of the present application.
  • the virtualization system 800 of the GPU includes: a first operating system 801 including a virtualization device 600 of a GPU; a shared memory 802 for storing graphics processing instructions from the first operating system and processing results from the second operating system, wherein the shared memory is in a readable and writable state for both the first operating system and the second operating system; and a second operating system 803 including a virtualization device 700 of the GPU.
  • for the implementation of the first operating system 801, refer to the implementation of the first operating system 201 in the first embodiment of the present application; details are not described herein again.
  • for the implementation of the shared memory 802, refer to the implementation of the shared memory 203 in the first embodiment of the present application; details are not described herein again.
  • the first operating system may be a Guest operating system
  • the second operating system may be a Host operating system
  • the remote call of the OpenGL API is implemented on the basis of the shared memory, thereby realizing the virtualization of the GPU.
  • an electronic device 900 as shown in FIG. 9 is also provided in the embodiment of the present application.
  • an electronic device 900 includes: a display 901, a memory 902, one or more processors 903, a bus 904, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps in any of the methods of the first embodiment of the present application.
  • also provided is a computer program product for use in conjunction with an electronic device 900 that includes a display, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein.
  • the computer program mechanism includes instructions for performing the various steps of the method of any of the first embodiment of the present application.
  • an electronic device 1000 as shown in FIG. 10 is also provided in the embodiment of the present application.
  • an electronic device 1000 includes: a display 1001, a memory 1002, one or more processors 1003, a bus 1004, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps in any of the methods of the second embodiment of the present application.
  • also provided is a computer program product for use in conjunction with an electronic device 1000 that includes a display, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein.
  • the computer program mechanism includes instructions for performing the various steps of the method of any of the second embodiment of the present application.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Digital Computer Display Output (AREA)

Abstract

Disclosed are a GPU virtualization system, device and method, an electronic apparatus, and a computer program product. The method comprises: receiving a graphics processing operation at a first operating system (201), and determining, according to the graphics processing operation, a corresponding graphics processing instruction; and transferring the graphics processing instruction to a second operating system (202) through a shared memory (203), the shared memory (203) being readable and writable for both the first operating system (201) and the second operating system (202). The above method of the present invention can be used to virtualize a GPU.
PCT/CN2016/113260 2016-12-29 2016-12-29 Système, dispositif, procédé de virtualisation gpu et appareil électronique et produit de programme d'ordinateur WO2018119951A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/113260 WO2018119951A1 (fr) 2016-12-29 2016-12-29 Système, dispositif, procédé de virtualisation gpu et appareil électronique et produit de programme d'ordinateur
CN201680002845.1A CN107003892B (zh) 2016-12-29 2016-12-29 Gpu虚拟化方法、装置、系统及电子设备、计算机程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113260 WO2018119951A1 (fr) 2016-12-29 2016-12-29 Système, dispositif, procédé de virtualisation gpu et appareil électronique et produit de programme d'ordinateur

Publications (1)

Publication Number Publication Date
WO2018119951A1 true WO2018119951A1 (fr) 2018-07-05

Family

ID=59431118

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113260 WO2018119951A1 (fr) 2016-12-29 2016-12-29 Système, dispositif, procédé de virtualisation gpu et appareil électronique et produit de programme d'ordinateur

Country Status (2)

Country Link
CN (1) CN107003892B (fr)
WO (1) WO2018119951A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111114320A (zh) * 2019-12-27 2020-05-08 深圳市众鸿科技股份有限公司 一种车载智能座舱共享显示方法及系统
CN112925737A (zh) * 2021-03-30 2021-06-08 上海西井信息科技有限公司 Pci异构系统数据融合方法、系统、设备及存储介质
CN113793246A (zh) * 2021-11-16 2021-12-14 北京壁仞科技开发有限公司 图形处理器资源的使用方法及装置、电子设备
CN115344226A (zh) * 2022-10-20 2022-11-15 亿咖通(北京)科技有限公司 一种虚拟化管理下的投屏方法、装置、设备及介质
CN116485628A (zh) * 2023-06-15 2023-07-25 摩尔线程智能科技(北京)有限责任公司 图像显示方法、装置及系统

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107436797A (zh) * 2017-08-14 2017-12-05 深信服科技股份有限公司 一种基于虚拟化环境的指令数据处理方法及装置
CN108124475B (zh) * 2017-12-29 2022-05-20 达闼机器人股份有限公司 虚拟系统蓝牙通信方法及装置、虚拟系统、存储介质及电子设备
CN109542829B (zh) * 2018-11-29 2023-04-25 北京元心科技有限公司 多系统中gpu设备的控制方法、装置及电子设备
CN110442389B (zh) * 2019-08-07 2024-01-09 北京技德系统技术有限公司 一种多桌面环境共享使用gpu的方法
CN111522670A (zh) * 2020-05-09 2020-08-11 中瓴智行(成都)科技有限公司 一种用于Android系统的GPU虚拟化方法、系统及介质
CN112581650A (zh) * 2020-11-12 2021-03-30 江苏北斗星通汽车电子有限公司 基于智能座舱的视频数据处理方法、装置以及电子终端
CN114579072A (zh) * 2022-03-02 2022-06-03 南京芯驰半导体科技有限公司 一种跨多操作系统的显示投屏方法及装置
CN115686748B (zh) * 2022-10-26 2023-11-17 亿咖通(湖北)技术有限公司 虚拟化管理下的服务请求响应方法、装置、设备及介质
CN115775199B (zh) * 2022-11-23 2024-04-16 海光信息技术股份有限公司 数据处理方法和装置、电子设备和计算机可读存储介质
CN116597025B (zh) * 2023-04-24 2023-09-26 北京麟卓信息科技有限公司 一种基于异构指令穿透的压缩纹理解码优化方法
CN116243872B (zh) * 2023-05-12 2023-07-21 南京砺算科技有限公司 一种私有内存分配寻址方法、装置、图形处理器及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102541618A (zh) * 2010-12-29 2012-07-04 中国移动通信集团公司 一种通用图形处理器虚拟化的实现方法、系统及装置
US20140229935A1 (en) * 2013-02-11 2014-08-14 Nvidia Corporation Virtual interrupt delivery from a graphics processing unit (gpu) of a computing system without hardware support therefor
CN104503731A (zh) * 2014-12-15 2015-04-08 柳州职业技术学院 二值图像连通域标记快速识别方法
CN104754464A (zh) * 2013-12-31 2015-07-01 华为技术有限公司 一种音频播放方法、终端及系统
CN105487915A (zh) * 2015-11-24 2016-04-13 上海君是信息科技有限公司 一种基于延迟发送机制的gpu虚拟化性能提升的方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100417077C (zh) * 2002-10-11 2008-09-03 中兴通讯股份有限公司 一种静态动态结合的存储区管理的方法
KR100592105B1 (ko) * 2005-03-25 2006-06-21 엠텍비젼 주식회사 공유 메모리의 분할 영역의 다중 억세스 제어 방법 및 공유메모리를 가지는 휴대형 단말기
US8463980B2 (en) * 2010-09-30 2013-06-11 Microsoft Corporation Shared memory between child and parent partitions
US9047686B2 (en) * 2011-02-10 2015-06-02 Qualcomm Incorporated Data storage address assignment for graphics processing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102541618A (zh) * 2010-12-29 2012-07-04 中国移动通信集团公司 一种通用图形处理器虚拟化的实现方法、系统及装置
US20140229935A1 (en) * 2013-02-11 2014-08-14 Nvidia Corporation Virtual interrupt delivery from a graphics processing unit (gpu) of a computing system without hardware support therefor
CN104754464A (zh) * 2013-12-31 2015-07-01 华为技术有限公司 一种音频播放方法、终端及系统
CN104503731A (zh) * 2014-12-15 2015-04-08 柳州职业技术学院 二值图像连通域标记快速识别方法
CN105487915A (zh) * 2015-11-24 2016-04-13 上海君是信息科技有限公司 一种基于延迟发送机制的gpu虚拟化性能提升的方法

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111114320A (zh) * 2019-12-27 2020-05-08 深圳市众鸿科技股份有限公司 一种车载智能座舱共享显示方法及系统
CN112925737A (zh) * 2021-03-30 2021-06-08 上海西井信息科技有限公司 Pci异构系统数据融合方法、系统、设备及存储介质
CN112925737B (zh) * 2021-03-30 2022-08-05 上海西井信息科技有限公司 Pci异构系统数据融合方法、系统、设备及存储介质
CN113793246A (zh) * 2021-11-16 2021-12-14 北京壁仞科技开发有限公司 图形处理器资源的使用方法及装置、电子设备
CN113793246B (zh) * 2021-11-16 2022-02-18 北京壁仞科技开发有限公司 图形处理器资源的使用方法及装置、电子设备
CN115344226A (zh) * 2022-10-20 2022-11-15 亿咖通(北京)科技有限公司 一种虚拟化管理下的投屏方法、装置、设备及介质
CN115344226B (zh) * 2022-10-20 2023-03-24 亿咖通(北京)科技有限公司 一种虚拟化管理下的投屏方法、装置、设备及介质
CN116485628A (zh) * 2023-06-15 2023-07-25 摩尔线程智能科技(北京)有限责任公司 图像显示方法、装置及系统
CN116485628B (zh) * 2023-06-15 2023-12-29 摩尔线程智能科技(北京)有限责任公司 图像显示方法、装置及系统

Also Published As

Publication number Publication date
CN107003892A (zh) 2017-08-01
CN107003892B (zh) 2021-10-08

Similar Documents

Publication Publication Date Title
WO2018119951A1 (fr) Système, dispositif, procédé de virtualisation gpu et appareil électronique et produit de programme d'ordinateur
CN107077377B (zh) 一种设备虚拟化方法、装置、系统及电子设备、计算机程序产品
US10310879B2 (en) Paravirtualized virtual GPU
TWI475488B (zh) 虛擬機器系統、虛擬化方法及含有用於虛擬化之指令的機器可讀媒體
US9798565B2 (en) Data processing system and method having an operating system that communicates with an accelerator independently of a hypervisor
JP5620506B2 (ja) アプリケーション画像の表示方法及び装置
EP1691287A1 (fr) Dispositif de traitement d'information, methode de commande de processus et programme informatique
US20140359613A1 (en) Physical/virtual device failover with a shared backend
US11204790B2 (en) Display method for use in multi-operating systems and electronic device
CN107077376B (zh) 帧缓存实现方法、装置、电子设备和计算机程序产品
US10002016B2 (en) Configuration of virtual machines in view of response time constraints
US20220050795A1 (en) Data processing method, apparatus, and device
CN113419845A (zh) 计算加速方法和装置、计算系统、电子设备及计算机可读存储介质
US10467078B2 (en) Crash dump extraction of guest failure
WO2017045272A1 (fr) Procédé et dispositif de migration de machine virtuelle
EP3850479B1 (fr) Mise à jour d'une machine virtuelle tandis que des dispositifs sont rattachés à la machine virtuelle
Park et al. Virtualizing graphics architecture of android mobile platforms in KVM/ARM environment
US11954534B2 (en) Scheduling in a container orchestration system utilizing hardware topology hints
US20210133914A1 (en) Multiple o/s virtual video platform
CN114138423B (zh) 基于国产gpu显卡的虚拟化构建系统及方法
CN117331704B (zh) 图形处理器gpu调度方法、装置和存储介质
Lee VAR: Vulkan API Remoting for GPU-accelerated Rendering and Computation in Virtual Machines
CN115904617A (zh) 一种基于sr-iov技术的gpu虚拟化实现方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16925946

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16925946

Country of ref document: EP

Kind code of ref document: A1