CN112445568A - Data processing method, device and system based on hardware acceleration - Google Patents


Info

Publication number
CN112445568A
CN112445568A
Authority
CN
China
Prior art keywords
hardware
acceleration
api
call request
api call
Prior art date
Legal status
Pending
Application number
CN201910822676.1A
Other languages
Chinese (zh)
Inventor
郑晓
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910822676.1A
Publication of CN112445568A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163 Interprocessor communication
    • G06F 15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F 15/17306 Intercommunication techniques
    • G06F 15/17331 Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/4856 Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention discloses a data processing method, apparatus, and system based on hardware acceleration, relates to the field of data processing, and can solve the problem that the prior art cannot support high-density hardware-accelerated virtual machine services and live migration. The method mainly comprises the following steps: an application program running on a virtual machine sends a hardware-acceleration-based API call request to a paravirtualized driver through an API forwarding program; the paravirtualized driver then sends the API call request to the corresponding acceleration hardware through the host machine. The method is mainly suited to scenarios in which hardware acceleration is implemented on the basis of a virtual machine.

Description

Data processing method, device and system based on hardware acceleration
Technical Field
The present invention relates to the field of data processing, and in particular, to a data processing method, apparatus, and system based on hardware acceleration.
Background
Hardware acceleration is the use of hardware modules in place of software algorithms in order to take full advantage of the inherent speed of hardware. Hardware acceleration is generally more efficient than a software algorithm, and various hardware acceleration functions, such as Graphics Processing Unit (GPU) hardware acceleration and Field Programmable Gate Array (FPGA) hardware acceleration, have already been implemented.
For security isolation and other reasons, all services on the public cloud currently run in virtual machines, and the hardware acceleration function can also be implemented on the basis of a virtual machine. Specifically, the acceleration hardware is passed through directly to the virtual machine; to realize remote hardware acceleration capability, a remote call function can be added inside the virtual machine to convert a local API call into a remote API call, and a low-latency channel such as Remote Direct Memory Access (RDMA) is deployed between the virtual machine and the remote acceleration hardware so that remote API calls can be made efficiently.
However, because the virtual machine needs RDMA network capability internally, the RDMA network card must be passed through to the virtual machine, yet a typical RDMA network card can only support 10 to 32 virtual machines in pass-through mode, so hundreds or more hardware-accelerated virtual machines cannot be served simultaneously at high density; moreover, pass-through mode does not support live migration, so the virtual machine cannot be live-migrated.
Disclosure of Invention
In view of the above, the present invention provides a data processing method, apparatus, and system based on hardware acceleration, aiming to solve the problem that the prior art cannot support high-density hardware-accelerated virtual machine services and live migration.
In a first aspect, the present invention provides a data processing method based on hardware acceleration, where the method includes:
an application program running on the virtual machine sends an API call request based on hardware acceleration to a paravirtualized driver program through an API forwarding program;
and the paravirtualization driver sends the API call request to corresponding acceleration hardware through a host machine.
Optionally, sending, by the application program running on the virtual machine, the API call request based on the hardware acceleration to the paravirtualization driver through the API forwarding program includes:
an application program running on a virtual machine sends a hardware-acceleration-based API (Application Programming Interface) call request, through an API forwarding program, to a paravirtualized front-end driver located in the virtual machine;
the paravirtualized front-end driver sends the API call request to a paravirtualized back-end driver located in a hypervisor;
the sending of the API call request to the corresponding acceleration hardware by the para-virtualization driver through the host comprises:
and the paravirtualization back-end driver sends the API call request to corresponding acceleration hardware through the host machine.
Optionally, before the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host, the method further includes:
when determining that the API call request is a request for acquiring static information of the acceleration hardware, the paravirtualized back-end driver queries the static information of the acceleration hardware corresponding to the API call request from a cache;
the sending, by the paravirtualized back-end driver, the API call request to the corresponding acceleration hardware via the host includes:
if the static information of the acceleration hardware corresponding to the API call request is not found in the cache, the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host machine, so that the paravirtualized back-end driver receives, through the host machine, the static information fed back by the acceleration hardware corresponding to the API call request.
Optionally, if the static information of the acceleration hardware corresponding to the API call request is queried in the cache, the method further includes:
and the paravirtualized back-end driver acquires the static acceleration-hardware information corresponding to the API call request from the cache, and forwards the acquired static information to the application program sequentially through the paravirtualized front-end driver and the API forwarding program.
Optionally, the method further includes:
and after receiving the static information fed back by the acceleration hardware corresponding to the API call request, the paravirtualized back-end driver caches the received static acceleration-hardware information in association with the called API.
Optionally, the sending, by the paravirtualization back-end driver, the API call request to the corresponding acceleration hardware through the host includes:
when the API call request is used for calling local acceleration hardware, the paravirtualized back-end driver sends the API call request to corresponding acceleration hardware in the host machine;
when the API call request is used for calling remote acceleration hardware, the paravirtualized back-end driver sends the API call request to a network card in the host machine, and the network card sends the API call request to the corresponding remote acceleration hardware through a preset remote transport protocol.
Optionally, before the paravirtualized backend driver sends the API call request to the network card in the host, the method further includes:
and if the API call requests are all requests for acquiring dynamic information of the acceleration hardware, packaging the API call requests.
Optionally, sending, by the application program running on the virtual machine, the API call request based on the hardware acceleration to the paravirtualization driver through the API forwarding program includes:
and the application program running in the container on the virtual machine sends the API call request based on hardware acceleration to the paravirtualization driver program through the API forwarding program.
In a second aspect, the present invention provides a data processing apparatus based on hardware acceleration, the apparatus comprising:
a first sending unit, configured to enable an application program running on a virtual machine to send a hardware-acceleration-based API call request to a paravirtualized driver through an API forwarding program;
and the second sending unit is used for sending the API call request to corresponding acceleration hardware by the para-virtualization driver through a host machine.
Optionally, the first sending unit includes:
a first sending module, configured to enable an application program running on a virtual machine to send a hardware-acceleration-based API call request, through an API forwarding program, to a paravirtualized front-end driver located in the virtual machine;
the second sending module is used for sending the API calling request to a paravirtualized back-end driver program positioned in the hypervisor by the paravirtualized front-end driver program;
and the second sending unit is used for sending the API call request to corresponding acceleration hardware by the para-virtualization back-end driver through the host machine.
Optionally, the apparatus further comprises:
the query unit is configured to query, before the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host, the cache for the static acceleration-hardware information corresponding to the API call request when the paravirtualized back-end driver determines that the API call request is a request for acquiring static acceleration-hardware information;
the second sending unit is configured to send the API call request to corresponding acceleration hardware through the host when static information of the acceleration hardware corresponding to the API call request is not queried in the cache;
and the receiving unit is used for receiving the static information fed back by the acceleration hardware corresponding to the API calling request by the para-virtualization back-end driver through the host machine.
Optionally, the apparatus further comprises:
the obtaining unit is configured to enable the paravirtualized back-end driver to obtain, from the cache, the static acceleration-hardware information corresponding to the API call request when that information is found in the cache;
and the forwarding unit is used for forwarding the acquired static information of the acceleration hardware to the application program sequentially through the paravirtualization front-end driver and the API forwarding program.
Optionally, the apparatus further comprises:
and the cache unit is configured to cache the received static acceleration-hardware information in association with the called API after the paravirtualized back-end driver receives the static information fed back by the acceleration hardware corresponding to the API call request.
Optionally, the second sending unit is configured to: when the API call request is used to call local acceleration hardware, have the paravirtualized back-end driver send the API call request to the corresponding acceleration hardware in the host machine; and when the API call request is used to call remote acceleration hardware, have the paravirtualized back-end driver send the API call request to a network card in the host machine, the network card sending the API call request to the corresponding remote acceleration hardware through a preset remote transport protocol.
Optionally, the apparatus further comprises:
and the packaging unit is configured to package the API call requests together, before the paravirtualized back-end driver sends them to the network card in the host machine, if the API call requests are all requests for acquiring dynamic acceleration-hardware information.
Optionally, the first sending unit is configured to send, by an application program running in a container on the virtual machine, an API call request based on hardware acceleration to the paravirtualization driver through the API forwarding program.
In a third aspect, the present invention provides a data processing system based on hardware acceleration, the system comprising: the system comprises a virtual machine, an application program running on the virtual machine, an API forwarding program running on the virtual machine, a paravirtualization driver and acceleration hardware;
the application program is used for sending an API call request based on hardware acceleration to the API forwarding program;
the API forwarding program is used for sending the API calling request to the paravirtualization driver program;
and the paravirtualization driver is used for sending the API call request to corresponding acceleration hardware through a host machine.
In a fourth aspect, the present invention provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor and to perform the data processing method based on hardware acceleration according to the first aspect.
In a fifth aspect, the present invention provides an electronic device comprising a storage medium and a processor;
the processor is adapted to implement the instructions;
the storage medium is adapted to store a plurality of instructions;
the instructions are adapted to be loaded by the processor and to perform the method of data processing based on hardware acceleration as described in the first aspect.
By means of the above technical solution, in the data processing method, apparatus, and system based on hardware acceleration provided by the present invention, an application program running on a virtual machine can generate a hardware-acceleration-based API call request and transmit it to the paravirtualized driver through the API forwarding program, and the paravirtualized driver transmits the API call request to the corresponding acceleration hardware through the host machine. Therefore, when the virtual machine implements the hardware acceleration function, it does not access the acceleration hardware through direct pass-through between the virtual machine and the acceleration hardware; instead, paravirtualization is used to send the API call request for the acceleration hardware to the paravirtualized driver, which then accesses the corresponding acceleration hardware through the host. Because paravirtualization is implemented in software, it can support any number of virtual machines and also supports live migration, so the invention can provide high-density hardware-accelerated virtual machine services together with live migration.
The foregoing description is only an overview of the technical solutions of the present invention; embodiments of the present invention are described below so that the technical means of the present invention can be understood more clearly, and so that the above and other objects, features, and advantages of the present invention become more readily apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart illustrating a data processing method based on hardware acceleration according to an embodiment of the present invention;
FIG. 2 is a flow chart of another data processing method based on hardware acceleration according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram illustrating a hardware acceleration implemented based on a virtual machine according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram illustrating another virtual machine-based hardware acceleration implementation provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram illustrating a container-based GPU hardware acceleration implementation according to an embodiment of the present invention;
FIG. 6 is a block diagram illustrating a data processing apparatus based on hardware acceleration according to an embodiment of the present invention;
fig. 7 is a block diagram illustrating another data processing apparatus based on hardware acceleration according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An embodiment of the present invention provides a data processing method based on hardware acceleration, and as shown in fig. 1, the method mainly includes:
101. and the application program running on the virtual machine sends the API call request based on hardware acceleration to the paravirtualization driver program through the API forwarding program.
Various application programs may be installed on the virtual machine, such as applications with artificial-intelligence computing requirements, graphics computing requirements, or rendering requirements. While running, an application program may need to query static acceleration-hardware information, such as the memory size of a certain GPU, the number of CUDA (Compute Unified Device Architecture, a computing platform introduced by the graphics-card vendor NVIDIA) cores, or FPGA (Field-Programmable Gate Array) information, or may request dynamic acceleration-hardware services, such as having a GPU perform graphics computation and rendering; in these cases, the application program generates an API (Application Programming Interface) call request based on hardware acceleration. After obtaining the API call request, the API forwarding program sends it to the paravirtualized driver so that the paravirtualized driver can carry out subsequent transmission of the request. Paravirtualization is a technology similar to full virtualization: it uses the hypervisor to share access to the underlying hardware, but virtualization-aware code is integrated into the guest operating system, so the operating system itself cooperates well with the virtualization process, and the approach requires neither recompilation nor trapping. To realize the paravirtualization function, a paravirtualized driver needs to be installed. The paravirtualized driver may be a Virtio driver, or another driver based on paravirtualization technology.
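The forwarding step just described can be illustrated with a minimal sketch (this is not code from the patent; the class and API names, including `cudaGetDeviceCount`, are hypothetical placeholders): the API forwarding program intercepts the application's hardware-acceleration call and relays it as a serialized request to the paravirtualized driver rather than to a local device library.

```python
import json

class ApiForwarder:
    """Hypothetical API forwarding program: intercepts hardware-acceleration
    API calls from the application and relays them to the paravirtualized
    driver instead of invoking a local device library."""

    def __init__(self, driver):
        self.driver = driver  # stand-in for the paravirtualized front-end driver

    def call(self, api_name, *args):
        # Serialize the call so it can cross the virtual-machine boundary.
        request = {"api": api_name, "args": list(args)}
        return self.driver.send(json.dumps(request))

class FakeParavirtDriver:
    """Stub driver that records what it receives and acknowledges it."""
    def __init__(self):
        self.received = []
    def send(self, payload):
        self.received.append(payload)
        return {"status": "ok"}

driver = FakeParavirtDriver()
forwarder = ApiForwarder(driver)
result = forwarder.call("cudaGetDeviceCount")
```

In a real system the stub driver would be replaced by a Virtio-style transport into the hypervisor; the point is only that the application-facing API surface stays unchanged while the call is rerouted.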
102. And the paravirtualization driver sends the API call request to corresponding acceleration hardware through a host machine.
After receiving the API call request, the paravirtualized driver may send it, through the host machine of the virtual machine, to the corresponding acceleration hardware, such as a GPU, an FPGA, or other acceleration hardware, and the paravirtualized driver can obtain the data fed back by the acceleration hardware through the host machine. The API call request may carry the unique identifier of the acceleration hardware, so that the target acceleration hardware is determined by that identifier, or it may carry only the IP address of a server, so that the corresponding server determines the target acceleration hardware according to the load on its acceleration hardware.
Furthermore, mainstream container technology already supports hardware acceleration functions. For security isolation and other reasons, all container services on the public cloud currently run in virtual machines. The existing way for a container running in a virtual machine to implement hardware acceleration is to pass the acceleration hardware directly through to the virtual machine and start the container service in the virtual machine, with a 1:1 ratio of virtual machines to containers. That is, the conventional technology cannot support high-density hardware-accelerated container services or live migration. To solve this technical problem, an application program running in a container on a virtual machine may first send a hardware-acceleration-based API call request to the paravirtualized driver through the API forwarding program, and the paravirtualized driver then sends the API call request to the corresponding acceleration hardware through the host machine.
In the data processing method based on hardware acceleration provided by the embodiment of the present invention, after an application program running on a virtual machine generates a hardware-acceleration-based API call request, the request can be sent to the paravirtualized driver through the API forwarding program, and the paravirtualized driver sends it to the corresponding acceleration hardware through the host machine. Therefore, when the virtual machine implements the hardware acceleration function, it does not access the acceleration hardware through direct pass-through between the virtual machine and the acceleration hardware; instead, paravirtualization is used to send the API call request for the acceleration hardware to the paravirtualized driver, which then accesses the corresponding acceleration hardware through the host. Because paravirtualization is implemented in software, it can support any number of virtual machines and also supports live migration, so the invention can provide high-density hardware-accelerated virtual machine services together with live migration.
Further, according to the method shown in fig. 1, another embodiment of the present invention further provides a data processing method based on hardware acceleration, as shown in fig. 2, the method mainly includes:
201. and the application program running on the virtual machine sends the API call request based on hardware acceleration to a paravirtualized front-end driver program positioned in the virtual machine through the API forwarding program.
202. And the paravirtualized front-end driver program sends the API call request to a paravirtualized back-end driver program positioned in the hypervisor.
Specifically, a paravirtualization front-end driver (such as a Virtio front-end driver) may be installed in the virtual machine or the container, a paravirtualization back-end driver (such as a Virtio back-end driver) may be installed in the hypervisor, and the paravirtualization technology may be implemented through communication between the paravirtualization front-end driver and the paravirtualization back-end driver.
Therefore, after the application program running on the virtual machine generates the hardware-acceleration-based API call request, the API forwarding program may first send the API call request to the paravirtualized front-end driver, and the paravirtualized front-end driver then sends it on to the paravirtualized back-end driver.
203. And the paravirtualization back-end driver sends the API call request to corresponding acceleration hardware through the host machine.
Specifically, when the API call request is used to call local acceleration hardware, the paravirtualized back-end driver may send the API call request directly to the corresponding acceleration hardware in the host; when the API call request is used to call remote acceleration hardware, the paravirtualized back-end driver sends the API call request to a network card in the host machine, and the network card sends the API call request to the corresponding remote acceleration hardware through a preset remote transport protocol.
The preset remote transport protocol may be a remote transport protocol such as Bitfusion or rCUDA (remote CUDA, a framework for executing CUDA calls on a remote machine). In addition, to increase the remote transmission rate, a high-bandwidth, low-latency remote channel may be deployed, for example an RDMA network card.
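The local-versus-remote dispatch described above can be sketched as follows (a minimal illustration, not the patent's implementation; the request layout and all names are hypothetical): the back-end driver serves the request from hardware on the host when it can, and otherwise hands the request to the host's network card for remote transmission.

```python
def dispatch(request, local_devices, network_card):
    """Hypothetical back-end dispatch: send the API call request to local
    acceleration hardware if the target device is on the host, otherwise
    hand it to the host's network card for remote transmission (e.g. over
    RDMA using a preset remote transport protocol)."""
    target = request["device"]
    if target in local_devices:
        # Local acceleration hardware in the host machine.
        return local_devices[target](request)
    # Remote acceleration hardware: the network card forwards the request.
    return network_card.send_remote(request)

class FakeNic:
    """Stub network card that records remotely forwarded requests."""
    def __init__(self):
        self.sent = []
    def send_remote(self, request):
        self.sent.append(request)
        return "remote-ok"

nic = FakeNic()
local = {"gpu0": lambda req: "local-ok"}
r1 = dispatch({"device": "gpu0", "api": "render"}, local, nic)
r2 = dispatch({"device": "remote-fpga", "api": "infer"}, local, nic)
```

The single dispatch point is what lets the same virtual machine transparently use either host-attached or pooled remote accelerators.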
The remote acceleration hardware may be pooled acceleration hardware or ordinary acceleration hardware; in either case, the embodiment of the invention can regulate the load on the acceleration hardware. For example, if GPU1 is currently being used by a tenant while GPU0 is idle, an API call request may be sent to GPU0. When an RDMA network card is deployed and the remote acceleration hardware is pooled, the architecture of an embodiment of the invention may be as shown in fig. 3.
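The load-based selection just mentioned (preferring an idle GPU0 over a busy GPU1) amounts to picking the least-loaded device in the pool; a minimal sketch, with a hypothetical load map rather than any real scheduler API:

```python
def pick_accelerator(devices):
    """Choose the least-loaded acceleration device from a pool.
    `devices` maps a device id to its current load (0.0 = idle)."""
    # Route the API call request to the device with the lowest load,
    # e.g. an idle GPU0 is preferred over a busy GPU1.
    return min(devices, key=devices.get)

chosen = pick_accelerator({"GPU0": 0.0, "GPU1": 0.9})
```

A production scheduler would also weigh memory pressure and locality, but lowest-load selection captures the regulation described above.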
According to the data processing method based on hardware acceleration provided by this embodiment, when the virtual machine implements the hardware acceleration function, the acceleration hardware is not accessed by passing it through directly to the virtual machine. Instead, paravirtualization is used: the API call request for the acceleration hardware is sent to the paravirtualized front-end driver, the front-end driver sends it to the paravirtualized back-end driver, and the paravirtualized driver finally accesses the corresponding acceleration hardware through the host machine.
Optionally, since the embodiment of the present invention can implement a high-density hardware-accelerated virtual machine service, in order to efficiently process a large number of API call requests generated in a virtual machine, the following two schemes may be adopted:
The first scheme: static information of the acceleration hardware, which does not change, can be cached, so that it can be obtained directly from the cache the next time it is requested. Specifically, before the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host, the paravirtualized back-end driver may, upon determining that the API call request is a request for obtaining acceleration hardware static information, query the cache for the acceleration hardware static information corresponding to the API call request. If the static information is not found in the cache, the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host, and then receives, through the host, the static information fed back by the acceleration hardware corresponding to the API call request. If the static information is found in the cache, the paravirtualized back-end driver obtains it from the cache and forwards it to the application program sequentially through the paravirtualized front-end driver and the API forwarding program.
When the acceleration hardware static information corresponding to the API call request is not found in the cache, after the paravirtualized back-end driver receives the static information fed back by the corresponding acceleration hardware, it caches the received static information in association with the called API, so that when the same API call request is received next time, the data can be obtained directly from the local cache without a remote fetch.
The second scheme: before the paravirtualized back-end driver sends the API call requests to the host, if the API call requests are all requests for obtaining acceleration hardware dynamic information, the API call requests are packaged, and the packaged API call requests are then sent to the remote acceleration hardware server, which unpacks them and distributes them to the corresponding acceleration hardware.
The paravirtualized back-end driver may package the multiple API call requests received within a preset time interval; for example, the API call requests received within 0.1 ms may be packaged together.
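The time-window aggregation just described can be sketched as follows. The function, its parameters, and the `(timestamp, request)` input shape are hypothetical and chosen purely to illustrate grouping requests that arrive within the same preset interval into one package.

```python
def package_requests(requests, window_ms=0.1):
    """Group (timestamp_ms, request) pairs into packages, one per time window.

    Requests whose timestamps fall within `window_ms` of the first request in
    the current window are packaged together; a later request opens a new
    window (and thus a new package)."""
    packages = []
    current, window_start = [], None
    for ts, req in requests:
        if window_start is None or ts - window_start >= window_ms:
            if current:
                packages.append(current)  # close the previous package
            current, window_start = [], ts
        current.append(req)
    if current:
        packages.append(current)  # flush the last open package
    return packages
```

Each resulting package would then be sent as a single transmission to the remote acceleration hardware server, which unpacks it and distributes the individual requests, amortizing the per-transfer network overhead.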
When both the caching and the packaging methods are adopted, the architecture diagram of the embodiment of the present invention may be as shown in fig. 4, where the aggregation module packages multiple API call requests; the cache module and the aggregation module may be located inside the paravirtualized back-end driver, or outside it, within the Hypervisor.
For example, when the embodiment of the present invention is applied to a container-based GPU hardware acceleration scenario, the architecture diagram may be as shown in fig. 5. AI APPS denotes application programs with artificial intelligence computing requirements, and the aggregation module packages multiple API call requests. Since the GPU is not directly connected to the virtual machine, any number of virtual machines and corresponding containers may be deployed, and live migration can be implemented. In addition, the cache module and the aggregation module may be located inside the virtio back-end driver, or outside it, within the Hypervisor.
It should be added that use of the present invention is detectable; the detection methods include, but are not limited to, the following two:
The first method: since implementing the paravirtualization function requires installing a front-end driver, use of the invention can be determined by checking whether the front-end driver exists.
Specifically, start the virtual machine, check whether a virtio front-end driver exists through "lspci | grep virtio", unload all the virtio front-end drivers found, and determine whether application program calls are still normal; if not, it is determined that the present invention is adopted. Alternatively, tools such as perf and strace can be used to check whether virtio-related drivers are invoked when the application program is called; if a virtio-related driver is invoked, it is determined that the present invention is adopted.
The second method: when the acceleration hardware pool is used remotely, no acceleration hardware driver exists inside the local virtual machine, so use of the invention can be determined by detecting whether an acceleration hardware driver exists inside the virtual machine.
Specifically, "lsmod | grep nvidia" can be executed inside Linux to determine whether an acceleration hardware driver is present. If no such driver is present yet the application program runs normally, the present invention is adopted.
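The second detection method amounts to checking kernel-module names in `lsmod` output. A minimal sketch follows; the function names are hypothetical, and the check operates on the text of an `lsmod` listing rather than invoking the command itself, so it needs no GPU to run.

```python
def has_module(lsmod_output: str, module: str) -> bool:
    """Return True when `module` appears as a loaded kernel module name
    (the first whitespace-separated field of an lsmod output line)."""
    return any(line.split()[:1] == [module] for line in lsmod_output.splitlines())

def likely_remote_acceleration(lsmod_output: str, app_runs_normally: bool) -> bool:
    # No local nvidia driver loaded, yet accelerated applications still run
    # normally: the acceleration hardware is likely accessed remotely via
    # the paravirtualized path described in this document.
    return app_runs_normally and not has_module(lsmod_output, "nvidia")
```

In practice the `lsmod_output` string would be obtained by running `lsmod` inside the virtual machine, exactly as the "lsmod | grep nvidia" check above does by hand.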
Further, according to the foregoing method embodiment, another embodiment of the present invention further provides a data processing apparatus based on hardware acceleration, as shown in fig. 6, where the apparatus includes:
a first sending unit 31, configured to send, by an application program running on the virtual machine, an API call request based on hardware acceleration to the paravirtualization driver through an API forwarding program;
a second sending unit 32, configured to send, by the paravirtualization driver, the API call request to the corresponding acceleration hardware through the host.
Optionally, as shown in fig. 7, the first sending unit 31 includes:
a first sending module 311, configured to send, by an application program running on a virtual machine, an API call request based on hardware acceleration to a paravirtualization front-end driver located in the virtual machine through the API forwarding program;
a second sending module 312, configured to send, by the paravirtualized front-end driver, the API call request to a paravirtualized back-end driver located in a hypervisor;
the second sending unit 32 is configured to send, by the paravirtualized back-end driver, the API call request to the corresponding acceleration hardware through the host.
Optionally, as shown in fig. 7, the apparatus further includes:
the query unit 33 is configured to, before the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host, query, from a cache, acceleration hardware static information corresponding to the API call request when the paravirtualized back-end driver determines that the API call request is a request for obtaining acceleration hardware static information;
the second sending unit 32 is configured to, when the static information of the acceleration hardware corresponding to the API call request is not queried in the cache, send the API call request to the corresponding acceleration hardware by the paravirtualized back-end driver through the host;
a receiving unit 34, configured to receive, by the paravirtualization back-end driver, static information fed back by the acceleration hardware corresponding to the API call request through the host.
Optionally, as shown in fig. 7, the apparatus further includes:
an obtaining unit 35, configured to, when the acceleration hardware static information corresponding to the API call request is queried in the cache, obtain, by the paravirtualized backend driver, the acceleration hardware static information corresponding to the API call request from the cache;
and a forwarding unit 36, configured to forward the acquired static information of the acceleration hardware to the application program sequentially through the paravirtualization front-end driver and the API forwarding program.
Optionally, as shown in fig. 7, the apparatus further includes:
the cache unit 37, configured to, after the paravirtualized back-end driver receives the acceleration hardware static information fed back by the acceleration hardware corresponding to the API call request, perform associated cache on the received acceleration hardware static information and the called API.
Optionally, the second sending unit 32 is configured to, when the API call request is used to call local acceleration hardware, send the API call request to the corresponding acceleration hardware in the host through the paravirtualized back-end driver; and when the API call request is used to call remote acceleration hardware, send the API call request through the paravirtualized back-end driver to a network card in the host, the network card sending the API call request to the corresponding remote acceleration hardware through a preset remote transmission protocol.
Optionally, as shown in fig. 7, the apparatus further includes:
a packing unit 38, configured to pack, before the paravirtualized back-end driver sends the API call request to the network card in the host, the API call requests if all of the API call requests are requests for acquiring dynamic information of the acceleration hardware.
Optionally, the first sending unit 31 is configured to send, by an application program running in a container on the virtual machine, an API call request based on hardware acceleration to the paravirtualization driver through the API forwarding program.
The hardware acceleration-based data processing apparatus provided by the embodiment of the present invention enables an application program running on a virtual machine to generate a hardware acceleration-based API call request, send the API call request to the paravirtualization driver through the API forwarding program, and have the paravirtualization driver send the API call request to the corresponding acceleration hardware through the host machine. Therefore, when the virtual machine implements the hardware acceleration function, access to the acceleration hardware is not achieved by directly connecting the virtual machine to the acceleration hardware; instead, the API call request for the acceleration hardware is sent to the paravirtualization driver using paravirtualization technology, and the paravirtualization driver then accesses the corresponding acceleration hardware through the host. Since paravirtualization is implemented in software, it can support any number of virtual machines and also supports live migration, so the present invention can realize high-density hardware-accelerated virtual machine services as well as live migration.
Further, another embodiment of the present invention provides a data processing system based on hardware acceleration, including: the system comprises a virtual machine, an application program running on the virtual machine, an API forwarding program running on the virtual machine, a paravirtualization driver and acceleration hardware;
the application program is used for sending an API call request based on hardware acceleration to the API forwarding program;
the API forwarding program is used for sending the API calling request to the paravirtualization driver program;
and the paravirtualization driver is used for sending the API call request to corresponding acceleration hardware through a host machine.
Optionally, the paravirtualization driver includes a paravirtualization front-end driver and a paravirtualization back-end driver; a paravirtualized front-end driver is located within the virtual machine, and the paravirtualized back-end driver is located within the hypervisor;
the API forwarding program is used for sending the API calling request to a paravirtualized front-end driver;
the paravirtualized front-end driver is used for sending the API call request to the paravirtualized back-end driver;
and the paravirtualization back-end driver is used for sending the API call request to corresponding acceleration hardware through the host machine.
Optionally, the paravirtualized back-end driver is configured to, before sending the API call request to the corresponding acceleration hardware through the host, query, from a cache, the acceleration hardware static information corresponding to the API call request when it is determined that the API call request is a request for obtaining the acceleration hardware static information; and if the acceleration hardware static information corresponding to the API call request is not found in the cache, send the API call request to the corresponding acceleration hardware through the host, so that the paravirtualized back-end driver receives, through the host, the static information fed back by the acceleration hardware corresponding to the API call request.
Optionally, the paravirtualized back-end driver is configured to, when the acceleration hardware static information corresponding to the API call request is queried in the cache, obtain the acceleration hardware static information corresponding to the API call request from the cache, and forward the obtained acceleration hardware static information to the application program sequentially through the paravirtualized front-end driver and the API forwarding program.
Optionally, the paravirtualized back-end driver is configured to, after receiving the acceleration hardware static information fed back by the acceleration hardware corresponding to the API call request, perform associated caching on the received acceleration hardware static information and the called API.
Optionally, the paravirtualized back-end driver is configured to send the API call request to corresponding acceleration hardware in the host when the API call request is used to call local acceleration hardware; and when the API calling request is used for calling remote acceleration hardware, sending the API calling request to a network card in the host machine, and sending the API calling request to corresponding remote acceleration hardware by the network card through a preset remote transmission protocol.
Optionally, the paravirtualized back-end driver is configured to, before sending the API call request to the network card in the host, package the API call requests if the API call requests are all requests for acquiring dynamic information of the acceleration hardware.
Optionally, the system further includes a container, where the container runs on the virtual machine, and the application is located in the container.
The hardware acceleration-based data processing system provided by the embodiment of the present invention enables an application program running on a virtual machine to generate a hardware acceleration-based API call request, send the API call request to the paravirtualization driver through the API forwarding program, and have the paravirtualization driver send the API call request to the corresponding acceleration hardware through the host machine. Therefore, when the virtual machine implements the hardware acceleration function, access to the acceleration hardware is not achieved by directly connecting the virtual machine to the acceleration hardware; instead, the API call request for the acceleration hardware is sent to the paravirtualization driver using paravirtualization technology, and the paravirtualization driver then accesses the corresponding acceleration hardware through the host. Since paravirtualization is implemented in software, it can support any number of virtual machines and also supports live migration, so the present invention can realize high-density hardware-accelerated virtual machine services as well as live migration.
Further, another embodiment of the present invention also provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor and to execute the data processing method based on hardware acceleration as described above.
The instructions stored in the storage medium provided by the embodiment of the present invention enable an application program running on a virtual machine to generate a hardware acceleration-based API call request, send the API call request to the paravirtualization driver through the API forwarding program, and have the paravirtualization driver send the API call request to the corresponding acceleration hardware through the host machine. Therefore, when the virtual machine implements the hardware acceleration function, access to the acceleration hardware is not achieved by directly connecting the virtual machine to the acceleration hardware; instead, the API call request for the acceleration hardware is sent to the paravirtualization driver using paravirtualization technology, and the paravirtualization driver then accesses the corresponding acceleration hardware through the host. Since paravirtualization is implemented in software, it can support any number of virtual machines and also supports live migration, so the present invention can realize high-density hardware-accelerated virtual machine services as well as live migration.
Further, another embodiment of the present invention also provides an electronic device including a storage medium and a processor;
the processor is adapted to implement instructions;
the storage medium is adapted to store a plurality of instructions;
the instructions are adapted to be loaded by the processor and to perform the GPU-based data processing method as described above.
The electronic device provided by the embodiment of the present invention enables an application program running on a virtual machine to generate a hardware acceleration-based API call request, send the API call request to the paravirtualization driver through the API forwarding program, and have the paravirtualization driver send the API call request to the corresponding acceleration hardware through the host machine. Therefore, when the virtual machine implements the hardware acceleration function, access to the acceleration hardware is not achieved by directly connecting the virtual machine to the acceleration hardware; instead, the API call request for the acceleration hardware is sent to the paravirtualization driver using paravirtualization technology, and the paravirtualization driver then accesses the corresponding acceleration hardware through the host. Since paravirtualization is implemented in software, it can support any number of virtual machines and also supports live migration, so the present invention can realize high-density hardware-accelerated virtual machine services as well as live migration.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the method and apparatus described above are referred to one another. In addition, "first", "second", and the like in the above embodiments are for distinguishing the embodiments, and do not represent merits of the embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the hardware acceleration-based data processing method, apparatus and system according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (19)

1. A data processing method based on hardware acceleration, the method comprising:
an application program running on the virtual machine sends an API call request based on hardware acceleration to a paravirtualized driver program through an API forwarding program;
and the paravirtualization driver sends the API call request to corresponding acceleration hardware through a host machine.
2. The method of claim 1, wherein sending, by an application running on the virtual machine, the hardware acceleration-based API call request to the paravirtualized driver via the API forwarder comprises:
an application program running on a virtual machine sends an API (application program interface) calling request based on hardware acceleration to a paravirtualized front-end driver program positioned in the virtual machine through an API forwarding program;
the paravirtualized front-end driver sends the API call request to a paravirtualized back-end driver located in a hypervisor;
the sending of the API call request to the corresponding acceleration hardware by the para-virtualization driver through the host comprises:
and the paravirtualization back-end driver sends the API call request to corresponding acceleration hardware through the host machine.
3. The method of claim 2, wherein before the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host, the method further comprises:
when determining that the API call request is a request for acquiring static information of the acceleration hardware, the paravirtualized back-end driver queries the static information of the acceleration hardware corresponding to the API call request from a cache;
the sending, by the paravirtualized back-end driver, the API call request to the corresponding acceleration hardware via the host includes:
if the static information of the acceleration hardware corresponding to the API call request is not found in the cache, the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host machine, so that the paravirtualized back-end driver receives, through the host machine, the static information fed back by the acceleration hardware corresponding to the API call request.
4. The method of claim 3, wherein if static acceleration hardware information corresponding to the API call request is found in the cache, the method further comprises:
and the paravirtualized back-end driver acquires the acceleration hardware static information corresponding to the API call request from a cache, and forwards the acquired acceleration hardware static information to the application program sequentially through the paravirtualized front-end driver and the API forwarding program.
5. The method of claim 3, further comprising:
and after receiving the acceleration hardware static information fed back by the acceleration hardware corresponding to the API call request, the paravirtualized back-end driver caches the received acceleration hardware static information in association with the called API.
6. The method of claim 2, wherein sending, by the paravirtualized back-end driver, the API call request to the corresponding acceleration hardware via the host comprises:
when the API call request is used for calling local acceleration hardware, the paravirtualized back-end driver sends the API call request to corresponding acceleration hardware in the host machine;
when the API call request is used for calling remote acceleration hardware, the paravirtualized back-end driver sends the API call request to a network card in the host machine, and the network card sends the API call request to the corresponding remote acceleration hardware through a preset remote transmission protocol.
7. The method of claim 6, wherein before the para-virtualized back-end driver sends the API call request to a network card in the host, the method further comprises:
and if the API call requests are all requests for acquiring dynamic information of the acceleration hardware, packaging the API call requests.
8. The method of any of claims 1-7, wherein sending, by an application running on the virtual machine, a hardware acceleration-based API call request to the paravirtualized driver via an API forwarder comprises:
and the application program running in the container on the virtual machine sends the API call request based on hardware acceleration to the paravirtualization driver program through the API forwarding program.
9. A data processing apparatus based on hardware acceleration, the apparatus comprising:
the hardware acceleration-based application program comprises a first sending unit, a second sending unit and a para-virtualization driver, wherein the first sending unit is used for sending an API (application program) calling request based on hardware acceleration to the para-virtualization driver through an API forwarding program by an application program running on a virtual machine;
and the second sending unit is used for sending the API call request to corresponding acceleration hardware by the para-virtualization driver through a host machine.
10. The apparatus of claim 9, wherein the first sending unit comprises:
the system comprises a first sending module, a second sending module and a third sending module, wherein the first sending module is used for sending an API (application program) calling request based on hardware acceleration to a paravirtualization front-end driver positioned in a virtual machine through an API forwarding program by an application program running on the virtual machine;
the second sending module is used for sending the API calling request to a paravirtualized back-end driver program positioned in the hypervisor by the paravirtualized front-end driver program;
and the second sending unit is used for sending the API call request to corresponding acceleration hardware by the para-virtualization back-end driver through the host machine.
11. The apparatus of claim 10, further comprising:
the query unit is used for querying, from a cache, the acceleration hardware static information corresponding to the API call request when the paravirtualized back-end driver determines that the API call request is a request for acquiring the acceleration hardware static information, before the paravirtualized back-end driver sends the API call request to the corresponding acceleration hardware through the host;
the second sending unit is configured to send the API call request to corresponding acceleration hardware through the host when static information of the acceleration hardware corresponding to the API call request is not queried in the cache;
and the receiving unit is used for receiving, by the paravirtualized back-end driver through the host machine, the static information fed back by the acceleration hardware corresponding to the API call request.
12. The apparatus of claim 11, further comprising:
an obtaining unit, configured to obtain, by the paravirtualized back-end driver, the acceleration hardware static information corresponding to the API call request from the cache when the information is found in the cache;
and a forwarding unit, configured to forward the obtained acceleration hardware static information to the application program sequentially through the paravirtualized front-end driver and the API forwarding program.
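Claims 11 through 13 describe a cache in front of the hardware: static-information requests are served from the cache on a hit, sent to the hardware on a miss, and the hardware's reply is then cached in association with the called API. A short sketch under assumed names (the cache key, the info fields, and the query method are all hypothetical):

```python
# Illustrative static-information cache for the paravirtualized back-end driver.

class FakeHardware:
    """Counts how often it is actually queried, to show the cache working."""
    def __init__(self):
        self.queries = 0
    def query_static(self, api_name):
        self.queries += 1
        return {"api": api_name, "max_batch": 64}  # made-up static attributes

class CachingBackendDriver:
    def __init__(self, hardware):
        self.hardware = hardware
        self.cache = {}  # called API -> static info (the claim-13 association)
    def get_static_info(self, api_name):
        if api_name in self.cache:           # claim 12: serve a hit from the cache
            return self.cache[api_name]
        info = self.hardware.query_static(api_name)  # claim 11: miss -> hardware
        self.cache[api_name] = info          # claim 13: cache in association with the API
        return info

hw = FakeHardware()
driver = CachingBackendDriver(hw)
driver.get_static_info("conv2d")
driver.get_static_info("conv2d")
print(hw.queries)  # prints 1: the second call never reaches the hardware
```

Static information (device capabilities that do not change at run time) is a natural caching candidate, which is presumably why the claims single it out, while dynamic information is fetched fresh each time.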
13. The apparatus of claim 11, further comprising:
and a cache unit, configured to cache the received acceleration hardware static information in association with the called API after the paravirtualized back-end driver receives the static information fed back by the acceleration hardware corresponding to the API call request.
14. The apparatus according to claim 10, wherein the second sending unit is configured to: when the API call request is used to call local acceleration hardware, send, by the paravirtualized back-end driver, the API call request to the corresponding acceleration hardware in the host; and when the API call request is used to call remote acceleration hardware, send, by the paravirtualized back-end driver, the API call request to a network card in the host, the network card sending the API call request to the corresponding remote acceleration hardware through a preset remote transmission protocol.
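Claim 14's dispatch rule can be sketched as a simple branch: calls that target local acceleration hardware go straight to the device in the host, while calls that target remote hardware are handed to the host's network card, which carries them over a preset remote transport protocol (the claim does not name one). All names below are illustrative:

```python
# Hypothetical claim-14 dispatch: local device vs. NIC-mediated remote device.

class LocalHardware:
    def execute(self, request):
        return ("local", request["op"])

class Nic:
    def send_remote(self, request):
        # Stands in for transmission over the preset remote protocol,
        # which the claim leaves unspecified.
        return ("remote", request["op"])

def dispatch(request, local_hw, nic):
    """Route an API call request per claim 14's local/remote distinction."""
    if request["target"] == "local":
        return local_hw.execute(request)   # local: straight to the host's device
    return nic.send_remote(request)        # remote: via the host's network card

print(dispatch({"target": "local", "op": "fft"}, LocalHardware(), Nic()))
```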
15. The apparatus of claim 14, further comprising:
and a packing unit, configured to pack the API call requests before the paravirtualized back-end driver sends them to the network card in the host, if the API call requests are all requests for acquiring acceleration hardware dynamic information.
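The packing step in claim 15 batches pending calls into one message only when every one of them is a dynamic-information query, which reduces per-request network overhead on the remote path. A hedged sketch, with the request shape and field names invented for illustration:

```python
# Hypothetical claim-15 packing: batch only an all-dynamic-info request list.

def pack_if_all_dynamic(requests):
    """Return one packed message when every request asks for dynamic info;
    otherwise leave the list untouched so requests are sent individually."""
    if requests and all(r["kind"] == "dynamic_info" for r in requests):
        return [{"kind": "batch", "items": requests}]
    return requests

dyn = [{"kind": "dynamic_info"}, {"kind": "dynamic_info"}]
mixed = [{"kind": "dynamic_info"}, {"kind": "compute"}]
print(len(pack_if_all_dynamic(dyn)))    # prints 1: both travel as one message
print(len(pack_if_all_dynamic(mixed)))  # prints 2: a mixed batch is not packed
```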
16. The apparatus according to any one of claims 9-15, wherein the first sending unit is configured to send, by an application program running in a container on the virtual machine, the hardware acceleration-based API call request to the paravirtualization driver through the API forwarding program.
17. A hardware acceleration-based data processing system, the system comprising: a virtual machine, an application program running on the virtual machine, an API forwarding program running on the virtual machine, a paravirtualization driver, and acceleration hardware;
the application program is configured to send a hardware acceleration-based API call request to the API forwarding program;
the API forwarding program is configured to send the API call request to the paravirtualization driver;
and the paravirtualization driver is configured to send the API call request to corresponding acceleration hardware through a host machine.
18. A storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method of data processing based on hardware acceleration according to any one of claims 1 to 8.
19. An electronic device, comprising a storage medium and a processor;
the processor is adapted to execute instructions;
the storage medium adapted to store a plurality of instructions;
the instructions are adapted to be loaded by the processor and to perform the method of hardware acceleration-based data processing according to any of claims 1 to 8.
CN201910822676.1A 2019-09-02 2019-09-02 Data processing method, device and system based on hardware acceleration Pending CN112445568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910822676.1A CN112445568A (en) 2019-09-02 2019-09-02 Data processing method, device and system based on hardware acceleration


Publications (1)

Publication Number Publication Date
CN112445568A true CN112445568A (en) 2021-03-05

Family

ID=74734880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910822676.1A Pending CN112445568A (en) 2019-09-02 2019-09-02 Data processing method, device and system based on hardware acceleration

Country Status (1)

Country Link
CN (1) CN112445568A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114499945A (en) * 2021-12-22 2022-05-13 天翼云科技有限公司 Intrusion detection method and device for virtual machine
CN114499945B (en) * 2021-12-22 2023-08-04 天翼云科技有限公司 Intrusion detection method and device for virtual machine
CN115858102A (en) * 2023-02-24 2023-03-28 珠海星云智联科技有限公司 Method for deploying virtual machine supporting virtualization hardware acceleration

Similar Documents

Publication Publication Date Title
US9665921B2 (en) Adaptive OpenGL 3D graphics in virtual desktop infrastructure
CN104199718B (en) A kind of dispatching method of the virtual processor based on NUMA high performance network cache resources affinity
CN113110910A (en) Method, system and equipment for implementing android container
CN103677878B (en) A kind of method and apparatus of patch installing
US20240184607A1 (en) Accelerating para-virtualization of a network interface using direct memory access (dma) remapping
CN112445568A (en) Data processing method, device and system based on hardware acceleration
CN107222545B (en) Data transmission method and device
CN104516885A (en) Implementation method and device of browse program double-kernel assembly
CN114205342B (en) Service debugging routing method, electronic equipment and medium
CN107071007B (en) Method, device and client for obtaining configuration resources
WO2022242358A1 (en) Image processing method and apparatus, and computer device and storage medium
CN106778275A (en) Based on safety protecting method and system and physical host under virtualized environment
CN103440111B (en) The extended method in magnetic disk of virtual machine space, host and platform
CN105763670A (en) Method and device for allocating IP address to container
KR20140101370A (en) Autonomous network streaming
CN106886429A (en) The method and server of a kind of load driver program
CN113467970B (en) Cross-security-area resource access method in cloud computing system and electronic equipment
CN113141511A (en) Graph rendering method and equipment
US8860740B2 (en) Method and apparatus for processing a display driver in virture desktop infrastructure
CN108228309A (en) Data packet method of sending and receiving and device based on virtual machine
CN106850382B (en) Flow traction method and device
CN110659104A (en) Service monitoring method and related equipment
CN108667750B (en) Virtual resource management method and device
CN106991057A (en) The call method and virtual platform of internal memory in a kind of shared video card virtualization
CN114584618A (en) Information interaction method, device, equipment, storage medium and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40046333

Country of ref document: HK