CN107077377A - Device virtualization method, apparatus, system, electronic device, and computer program product - Google Patents

Device virtualization method, apparatus, system, electronic device, and computer program product

Info

Publication number
CN107077377A
Authority
CN
China
Prior art keywords
operating system
shared memory
memory area
physical device
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201680002834.3A
Other languages
Chinese (zh)
Other versions
CN107077377B (en)
Inventor
温燕飞 (Wen Yanfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Inc
Original Assignee
Cloudminds Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Inc
Publication of CN107077377A
Application granted
Publication of CN107077377B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45583 Memory management, e.g. access or allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Multi Processors (AREA)

Abstract

Embodiments of the present application provide a device virtualization method, apparatus, system, electronic device, and computer program product. The method includes: creating a shared memory at a first operating system, and mapping the shared memory as a Peripheral Component Interconnect (PCI) device memory space of a second operating system, where the shared memory corresponds to one physical device; receiving, at the second operating system, an application programming interface (API) operation instruction for the physical device, and determining a corresponding processing instruction according to the API operation instruction; transferring the processing instruction to the first operating system through the shared memory; and executing the processing instruction at the first operating system, and returning a processing result to the second operating system through the shared memory as a response to the API operation instruction. With the scheme of the present application, the system latency in the virtualization process can be reduced and system performance improved.

Description

Device virtualization method, apparatus, system, electronic device, and computer program product
Technical field
The present application relates to computer technology, and in particular to a device virtualization method, apparatus, system, electronic device, and computer program product.
Background art
Fig. 1 shows a virtualization architecture based on the Qemu/KVM (Kernel-based Virtual Machine) technology.
As shown in Fig. 1, the virtualization architecture based on the Qemu/KVM technology consists of a host (Host) operating system and one or more virtualized guest (Guest) operating systems. The Host operating system includes a number of Host user-space programs and the Host Linux kernel. Each Guest operating system includes its own user space, a Guest Linux kernel, and Qemu. These operating systems run on the same set of hardware processor chips and share the processor and peripheral resources. An ARM processor that supports this virtualization architecture provides at least the three modes EL2, EL1, and EL0: the virtual machine manager (Hypervisor) program runs in EL2 mode, the Linux kernel programs run in EL1 mode, and the user-space programs run in EL0 mode. The Hypervisor layer manages hardware resources such as the CPU, memory, timers, and interrupts, and, by virtualizing the CPU, memory, timers, and interrupts, allows different operating systems to be loaded onto the physical processor in a time-shared manner, thereby realizing system virtualization.
KVM/Hypervisor spans the two layers of the Host Linux kernel and the Hypervisor. On the one hand, it provides a driver node for the emulator Qemu; that is, it allows Qemu to create virtual CPUs through the KVM node and to manage virtual resources. On the other hand, KVM/Hypervisor can also switch the Host Linux system off the physical CPU, load the Guest Linux system onto the physical processor to run, and handle the subsequent transactions after the Guest Linux system exits abnormally.
Qemu runs as a Host Linux application and provides virtual physical device resources for the operation of Guest Linux. Through the KVM device node of the KVM/Hypervisor module, it creates virtual CPUs and allocates physical device resources, so that an unmodified Guest Linux can be loaded onto the physical processor to run.
When Guest Linux needs to access a physical device, such as a GPU (Graphics Processing Unit) device, a multimedia device, or a camera device, these physical devices need to be virtualized natively; at present this is generally done by switching through Qemu to call the driver nodes of the Host Linux kernel. Specifically, these physical devices expose a large number of API (Application Programming Interface) functions, and their virtualization can be realized through remote API calls; a suitable layer for the API switch can be chosen in the Host and Guest system software stacks. For example, for an Android system, the API switch for Guest Android can be placed at the HAL (Hardware Abstraction Layer), with a back-end server (Backend Server) implemented in the Host Linux user space, so that the Guest system finally realizes the remote call of API functions through the Host system.
Cross-system remote API calls mainly involve the transfer of function parameters, the return of operation results, and the execution time and synchronization of the functions. Fig. 2 shows the system architecture of cross-system remote API calls in the prior art. As shown in Fig. 2, an API call is initiated by the Guest Android system, passes through the HAL layer, the Guest Linux kernel, and Qemu, reaches the Host Backend Server, and then calls the Host Linux kernel driver to access the physical device. For physical devices with high performance requirements, such as GPU devices, multimedia devices, and camera devices, it is difficult for the above software architecture to achieve the desired performance.
Summary of the invention
Embodiments of the present application provide a device virtualization method, apparatus, system, electronic device, and computer program product, which are mainly used to solve the problem of the poor performance of device virtualization methods in the prior art.
According to a first aspect of the embodiments of the present application, a device virtualization method is provided, including: creating a shared memory at a first operating system, and mapping the shared memory as a Peripheral Component Interconnect (PCI) device memory space of a second operating system, where the shared memory corresponds to one physical device; receiving, at the second operating system, an application programming interface (API) operation instruction for the physical device, and determining a corresponding processing instruction according to the API operation instruction; transferring the processing instruction to the first operating system through the shared memory; and executing the processing instruction at the first operating system, and returning a processing result to the second operating system through the shared memory as a response to the API operation instruction.
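The claimed flow (guest side writes a processing instruction into the shared region, host side executes it and writes the result back in place) can be sketched as a toy model. Everything below, from the record layout to the uppercase "device work" standing in for real device processing, is an illustrative assumption, not the patented implementation:

```python
import struct

# A plain bytearray stands in for the PCI-mapped shared memory.
SHM = bytearray(4096)
HDR = struct.Struct("<II")  # (opcode, payload length), little-endian

def guest_submit(opcode: int, payload: bytes) -> None:
    """Second OS: turn an API operation instruction into a processing
    instruction and write it into the shared memory."""
    SHM[:HDR.size] = HDR.pack(opcode, len(payload))
    SHM[HDR.size:HDR.size + len(payload)] = payload

def host_execute() -> None:
    """First OS: read the processing instruction, run it on the (simulated)
    physical device, and write the result back into the same region."""
    opcode, n = HDR.unpack_from(SHM, 0)
    data = bytes(SHM[HDR.size:HDR.size + n])
    result = data.upper() if opcode == 1 else data  # stand-in device work
    SHM[:HDR.size] = HDR.pack(opcode, len(result))
    SHM[HDR.size:HDR.size + len(result)] = result

def guest_response() -> bytes:
    """Second OS: read the result back as the response to the API call."""
    _, n = HDR.unpack_from(SHM, 0)
    return bytes(SHM[HDR.size:HDR.size + n])

guest_submit(1, b"draw triangle")
host_execute()
print(guest_response())  # b'DRAW TRIANGLE'
```

Because both sides operate on the same region, no per-call copy through Qemu is needed; that is the source of the latency reduction claimed in this aspect.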
According to a second aspect of the embodiments of the present application, a device virtualization apparatus is provided, including: a shared memory creation module, configured to create a shared memory at a first operating system and map the shared memory as a Peripheral Component Interconnect (PCI) device memory space of a second operating system, where the shared memory corresponds to one physical device; a receiving module, configured to receive, at the second operating system, an application programming interface (API) operation instruction for the physical device and determine a corresponding processing instruction according to the API operation instruction; a sending module, configured to transfer the processing instruction to the first operating system through the shared memory; and a processing module, configured to execute the processing instruction at the first operating system and return a processing result to the second operating system through the shared memory as a response to the API operation instruction.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including: a display, a memory, and one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing each step of the virtualization method according to the first aspect of the embodiments of the present application.
According to a fourth aspect of the embodiments of the present application, a computer program product is provided, the computer program product encoding instructions for executing a process, the process including the virtualization method according to the first aspect of the embodiments of the present application.
With the device virtualization method, apparatus, system, electronic device, and computer program product according to the embodiments of the present application, a shared memory is created between the first operating system and the second operating system, and the virtualization of the physical device is then realized through the shared memory. Because the first operating system and the second operating system pass API calls through the shared memory, the system latency in the virtualization process is reduced and system performance is improved.
Brief description of the drawings
The accompanying drawings described here are provided for a further understanding of the present application and constitute a part of the present application. The exemplary embodiments of the present application and their description are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
Fig. 1 shows a schematic diagram of a virtualization architecture based on the Qemu/KVM technology;
Fig. 2 shows the system architecture of cross-system remote API calls in the prior art;
Fig. 3 shows a system architecture for implementing a device virtualization method in an embodiment of the present application;
Fig. 4 shows a flowchart of a device virtualization method according to Embodiment 1 of the present application;
Fig. 5 shows a flowchart of a device virtualization method according to Embodiment 2 of the present application;
Fig. 6 shows a schematic structural diagram of a device virtualization apparatus according to Embodiment 3 of the present application;
Fig. 7 shows a schematic structural diagram of a device virtualization system according to Embodiment 4 of the present application;
Fig. 8 shows a schematic structural diagram of an electronic device according to Embodiment 5 of the present application.
Detailed description of the embodiments
In the course of realizing the present application, the inventor found that when the virtualization flow shown in Fig. 2 is used, as in the prior art, a system call passes from the Guest user-space program to the HAL and then to the Guest Linux kernel layer, and processing switches from Qemu to the back-end server (Backend Server); each link consumes processor time. A single remote API call may require the transfer of many parameters, possibly with a considerable amount of data, so when the virtualized operating system calls these devices the system latency increases greatly, and performance drops several times compared with the Host system.
In view of the above problems, embodiments of the present application provide a device virtualization method, apparatus, system, electronic device, and computer program product. A shared memory is created between the first operating system and the second operating system, and the virtualization of the physical device is then realized through this shared memory. Because the first operating system and the second operating system pass API calls through the shared memory, the system latency in the virtualization process is reduced and system performance is improved.
The scheme in the embodiments of the present application can be applied in various scenarios, for example, intelligent terminals using a virtualization architecture based on the Qemu/KVM technology, Android emulators, server virtualization platforms, and so on.
The scheme in the embodiments of the present application can be implemented in various computer languages, for example, the object-oriented programming language Java.
In order to make the technical solutions and advantages of the embodiments of the present application clearer, exemplary embodiments of the present application are described in more detail below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not an exhaustive list of all embodiments. It should be noted that, where there is no conflict, the embodiments in the present application and the features in the embodiments may be combined with each other.
Embodiment 1
Fig. 3 shows a system architecture for implementing a device virtualization method in an embodiment of the present application. As shown in Fig. 3, the device virtualization system according to this embodiment of the present application includes a first operating system 301, a second operating system 302, multiple shared memories 303a, 303b, 303c, and multiple physical devices 304a, 304b, 304c. Specifically, the first operating system may be a Host operating system, and the second operating system may be a Guest operating system. It should be understood that, in a specific implementation, the first operating system may also be a Guest operating system and the second operating system may also be a Host operating system; the present application does not limit this.
Next, the embodiments of the present application are described in detail, taking as an example the case in which the first operating system is the Host operating system and the second operating system is the Guest operating system.
Specifically, the Guest operating system 302 may include a user space 3021, a Guest Linux kernel 3022, and the emulator Qemu 3023. Interfaces of various virtual physical devices or modules may be provided in the user space of the Guest operating system; specifically, these interfaces may include a graphics program interface, a multimedia program interface, a camera program interface, and so on. More specifically, for example, the graphics program interface may be a graphics program interface such as the OpenGL (Open Graphics Library) API, Direct 3D, or QuickDraw 3D, and the multimedia/video program interface may be an OpenMAX (Open Media Acceleration) interface, etc.; the present application is not limited to these.
Specifically, the Host operating system 301 may include a user space 3011 and a Host Linux kernel 3012. A back-end server (Backend Server) corresponding to each interface in the Guest operating system may be provided in the user space of the Host operating system. For example, when the graphics program interface in the Guest operating system is the OpenGL API, the back-end server may be an OpenGL Backend Server, and the back-end server can operate the GPU device through the GPU driver in the Host Linux kernel; when the multimedia/video program interface in the Guest operating system is the OpenMAX API, the back-end server may be an OpenMAX Backend Server, and the back-end server can operate the corresponding multimedia/video device through the multimedia/video driver in the Host Linux kernel.
In a specific implementation, the shared memories 303a, 303b, 303c are blocks of memory visible to both the Guest operating system and the Host operating system, and this memory is in a readable and writable state for both the Guest operating system and the Host operating system; that is, both the Guest operating system and the Host operating system can perform read and write operations on the shared memory.
In a specific implementation, the number of shared memories may correspond to the physical devices to be virtualized; that is, one physical device corresponds to one block of shared memory. For example, the GPU device corresponds to the shared memory 303a, the multimedia device corresponds to the shared memory 303b, the camera device corresponds to the shared memory 303c, and so on.
In a specific implementation, the size of each shared memory can be set by the developer to suit the corresponding physical device. For example, the shared memory corresponding to the GPU device could be set to 128 MB; the shared memory corresponding to the multimedia device could be set to 64 MB; the shared memory corresponding to the camera device could be set to 64 MB; etc. The present application is not limited to this.
Next, taking the shared memory 303a corresponding to the GPU device as an example, the division of the shared memory in this embodiment of the present application is described in detail.
In a specific implementation, the shared memory 303a may include only a first memory area 3031, or it may be divided into a first memory area 3031 and a second memory area 3032. Specifically, the first memory area may be called private memory, and the second memory area may be called public memory. In a specific implementation, there is no particular rule for the division between the first memory area and the second memory area: it may be made according to the sizes of the data each usually stores, according to the designer's experience, or according to another preset strategy; the present application is not limited to this.
Specifically, the first memory area can be used for the transfer of functions and parameters, and/or synchronization information, between each thread of the Guest operating system and the Backend Server threads. Specifically, this private memory may be further subdivided into multiple blocks, where one block is defined as a channel and one channel corresponds to one thread of the Guest operating system. In a specific division, the number of channels may be preset by the developer, and the multiple blocks may be divided evenly into blocks of equal size, or divided intelligently according to the sizes of the functions, parameters, and/or synchronization information commonly used by threads calling the GPU in the system; the present application is not limited to this. In a specific implementation, the user programs of the Guest operating system can dynamically manage the channels in the private memory; that is, a user program can at any time allocate, re-allocate, and release the channels in the private memory.
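The per-thread channel management described above might look like the following minimal model. Equal-sized channels and a simple free list are assumptions for illustration; the text allows other divisions:

```python
class PrivateMemory:
    """Toy model: the private memory area split into equal channels,
    one channel handed to one Guest thread at a time."""

    def __init__(self, total_size: int, num_channels: int):
        self.channel_size = total_size // num_channels
        self.free = list(range(num_channels))   # indices of free channels
        self.owner = {}                          # channel index -> thread id

    def allocate(self, thread_id: str) -> int:
        """Give a Guest thread its own channel; return its byte offset."""
        idx = self.free.pop(0)
        self.owner[idx] = thread_id
        return idx * self.channel_size

    def release(self, thread_id: str) -> None:
        """Return every channel held by the thread to the free list."""
        for idx, tid in list(self.owner.items()):
            if tid == thread_id:
                del self.owner[idx]
                self.free.append(idx)

pm = PrivateMemory(total_size=12 << 20, num_channels=3)  # 12 MB, 3 channels
off = pm.allocate("guest-render-thread")
print(off)   # 0 (first free channel starts at offset 0)
pm.release("guest-render-thread")
```

Handing each thread a fixed channel means function/parameter records from different Guest threads never interleave, which is what lets the channel double as a per-thread synchronization path.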
Specifically, the second memory area can be used for the transfer of large data blocks, for example graphics content data, between all threads of the Guest operating system and the Backend Server threads. In a specific implementation, the public memory can be divided into a number of blocks of unequal size; specifically, the number of blocks may be preset by the developer. Specifically, the user programs in the Guest operating system can manage the blocks in the public memory; that is, a user program can at any time allocate and release the blocks in the public memory, and each allocation and release is handled in units of whole blocks.
In a specific implementation, the sizes of the blocks in the public memory can be adapted to common GPU graphics processing data. For example, the research staff found that, during GPU virtualization, transferring about 2 MB to 16 MB of graphics content data from the first operating system to the second operating system usually satisfies the needs of GPU graphics virtualization processing; therefore, when assigning the sizes of the blocks in the public memory, the public memory can be divided into multiple memory blocks of 2 MB, 4 MB, 8 MB, 16 MB, and so on.
For example, if the total public memory size is 32 MB, it can be divided into five memory blocks of 2 MB, 2 MB, 4 MB, 8 MB, and 16 MB. When a user program requests a 3 MB space, the 4 MB memory block can be directly assigned to the corresponding thread, and when that thread releases it, an idle flag is set on the 4 MB block.
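The 32 MB example above can be modeled directly: fixed blocks of 2/2/4/8/16 MB, each allocated and released as a whole. Choosing the smallest free block that fits is an assumption; the text only gives the 3 MB to 4 MB pairing:

```python
MB = 1 << 20

class PublicMemory:
    """Toy model of whole-block allocation in the public memory area."""

    def __init__(self, block_sizes):
        # Each entry: [size in bytes, idle flag].
        self.blocks = [[s, True] for s in block_sizes]

    def allocate(self, nbytes: int):
        """Return the index of the smallest free block that fits, else None."""
        fits = [(s, i) for i, (s, free) in enumerate(self.blocks)
                if free and s >= nbytes]
        if not fits:
            return None
        _, i = min(fits)
        self.blocks[i][1] = False
        return i

    def release(self, i: int) -> None:
        self.blocks[i][1] = True   # set the idle flag on the whole block

pub = PublicMemory([2 * MB, 2 * MB, 4 * MB, 8 * MB, 16 * MB])  # 32 MB total
i = pub.allocate(3 * MB)
print(pub.blocks[i][0] // MB)   # 4  (the 4 MB block serves the 3 MB request)
pub.release(i)
```

Whole-block granularity trades some internal fragmentation (1 MB wasted here) for an allocator simple enough to run lock-light on a path shared by two operating systems.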
In a specific implementation, the physical devices 304a, 304b, 304c may be physical devices that are not integrated on the central processing unit (CPU); more preferably, they may be physical devices with high throughput, for example GPU devices, multimedia devices, camera devices, and so on.
It should be understood that, for illustrative purposes, Fig. 3 shows only the case of one Guest operating system, one Host operating system, three shared memories, and three physical devices. In a specific implementation there may be one or more Guest operating systems and one or more Host operating systems, as well as other numbers of shared memories and other numbers of physical devices; that is, the Guest operating systems, Host operating systems, shared memories, and physical devices may be of any number, and the present application is not limited to this.
It should be understood that, for illustrative purposes, the shared memory shown in Fig. 3 includes the two memory areas of private memory and public memory, with the private memory divided into 3 channels of equal size and the public memory divided into 4 channels of unequal size. In a specific implementation, the shared memory may include only the private memory area; the private memory may be undivided or divided into multiple channels of unequal size; the public memory may be absent, or may be divided into multiple channels of equal size, etc. The present application is not limited to this.
Next, the device virtualization method according to this embodiment of the present application is described with reference to the system architecture shown in Fig. 3.
Fig. 4 shows a flowchart of the device virtualization method according to Embodiment 1 of the present application. In this embodiment of the present application, the device virtualization method for a GPU device is described in detail, taking as an example one Guest operating system, one Host operating system, one GPU device, and one block of shared memory corresponding to the GPU device. As shown in Fig. 4, the device virtualization method according to this embodiment of the present application includes the following steps:
S401: when the Qemu corresponding to the Guest system starts, create the shared memory corresponding to the GPU device.
Specifically, Qemu can create the corresponding shared memory through a system call.
Specifically, a block of a specific address space can be carved out of memory as the shared memory of the GPU device. The size of this shared memory can be set by the developer to suit the corresponding physical device. For example, the shared memory corresponding to the GPU device could be set to 128 MB; the shared memory corresponding to the multimedia device could be set to 64 MB; the shared memory corresponding to the camera device could be set to 64 MB; etc. The present application is not limited to this.
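Step S401 can be sketched on the host side. An anonymous memory mapping stands in for the region Qemu would allocate and later expose to the Guest as PCI device memory; the size table follows the examples in the text, and the function name is invented:

```python
import mmap

DEVICE_SHM_SIZES = {               # bytes of shared memory per device
    "gpu": 128 * 1024 * 1024,
    "multimedia": 64 * 1024 * 1024,
    "camera": 64 * 1024 * 1024,
}

def create_device_shm(device: str) -> mmap.mmap:
    """Create the shared memory corresponding to one physical device.
    An anonymous mmap is fork-shareable, loosely modeling a region that
    both a host process and a guest mapping could see."""
    return mmap.mmap(-1, DEVICE_SHM_SIZES[device])

gpu_shm = create_device_shm("gpu")
print(len(gpu_shm))   # 134217728 (128 MB)
```

In the real architecture this region is then handed to step S402, which presents it to the Guest as a PCI memory BAR rather than as a file mapping.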
It should be understood that, when there are multiple Guest systems, either the Qemu of each Guest system re-creates one block of shared memory for each physical device, or the multiple Guest systems share the one block of shared memory corresponding to a physical device. Different schemes can also be used for different physical devices; for example, for the GPU device each Guest system uses an independent shared memory, while for the multimedia device all Guest systems share one block of shared memory. The present application is not limited to this.
S402: Qemu further maps the shared memory as the PCI (Peripheral Component Interconnect) device memory space of the Guest system, and provides a virtual PCI register for the Guest system as the PCI configuration space.
S403: the Guest Linux kernel divides the shared memory into private memory and public memory.
Specifically, the Guest Linux kernel can divide the shared memory when initializing the GPU device, so that the shared memory supports access by multiple processes or threads.
Specifically, the private memory, i.e., the first memory area, can be divided into a first predetermined number of channels, and the public memory, i.e., the second memory area, can be divided into a second predetermined number of blocks. Specifically, the first predetermined number and the second predetermined number can be set by the developer.
Specifically, the multiple channels of the private memory may be of equal size, and the sizes of the multiple blocks of the public memory may be adapted to the processing data of the physical device corresponding to the shared memory.
S404: when a front-end thread starts, allocate the corresponding shared memory address space for the front-end thread and the corresponding back-end thread.
In a specific implementation, when an API call instruction is received, a front-end thread corresponding to the API call instruction, i.e., a first thread, can be created, and a thread creation instruction corresponding to the API call instruction is sent to the Host operating system to trigger the Host operating system to create a corresponding back-end thread, i.e., a second thread.
In a specific implementation, a user can perform a user operation directed at a certain thread in the Guest operating system; for example, the user can perform operations such as opening a new window or opening a new page in threads such as WeChat or QQ, or playing multimedia/video, and so on.
In a specific implementation, when a user operation is received, the thread can generate an API call instruction according to the user operation and call the corresponding front-end thread. For example, when the user performs an operation such as opening a new window or opening a new page, the corresponding graphics processing interface can be called; when the user performs an operation such as playing multimedia/video, the corresponding multimedia/video interface can be called; and so on.
Specifically, when the front-end thread is called, the Host operating system is generally also triggered to create a back-end thread corresponding to that front-end thread. Specifically, if the Guest system calls the graphics program processing interface, a corresponding back-end thread can be created in the graphics processing background server in the Host operating system; if the user calls the multimedia program processing interface, a corresponding back-end thread can be created in the multimedia processing background server in the Host operating system.
In a specific implementation, when the front-end thread starts, the address space of the private memory channel corresponding to the front-end thread and the public memory address space allocated to the front-end thread can be obtained at the Guest Linux kernel, and both are mapped into the address space of the front-end thread, so as to establish a synchronization control channel with Qemu. Specifically, a certain channel of the private memory is usually allocated to the front-end thread, and the public memory is allocated to the front-end thread in its entirety.
Next, the address space of the private memory channel corresponding to the front-end thread and the address space of the public memory can be passed to Qemu through the PCI configuration space; Qemu then sends the address space of the private memory channel corresponding to the front-end thread and the address space of the public memory to the back-end server through an inter-process communication mechanism, where they are mapped into the address space of the back-end thread.
At this point, the initialization of the shared memory between the front-end thread and the back-end thread is complete.
S405: the virtualization of the physical device is realized between the front-end thread and the corresponding back-end thread through the shared memory.
In a specific implementation, when the front-end thread in the Guest user space receives an API operation instruction for the GPU device, it can determine the corresponding processing instruction according to the API operation instruction and transfer the processing instruction through the shared memory to the back-end thread in the back-end server of the Host system; the back-end thread then executes the processing instruction and returns the processing result, as the response to the API call instruction, to the front-end thread through the shared memory.
Specifically, transferring the processing instruction through the shared memory to the back-end thread in the back-end server of the Host system can be realized in any of the following ways:
In the first implementation, the processing instruction includes an API call function and its parameters. The front-end thread can write the function and parameters into the corresponding private memory channel and send the offset address of the function and parameters to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address. Specifically, the offset address can be sent by Qemu to the back-end server of the Host operating system and then synchronized by the back-end server to the back-end thread.
In the second implementation, the processing instruction includes an API call function, its parameters, and synchronization information. The front-end thread can write the function, parameters, and synchronization information into the corresponding private memory channel and send the offset address of the function and parameters to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address. Specifically, the offset address can be sent by Qemu to the back-end server of the Host operating system and then synchronized by the back-end server to the back-end thread.
In the third implementation, the processing instruction includes an API call function, its parameters, and graphics content data. The front-end thread can write the function and parameters into the corresponding private memory channel, write the graphics content data into the public memory, and send the offset address of the processing instruction in the shared memory to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address. Specifically, the offset address can be sent by Qemu to the back-end server of the Host operating system and then synchronized by the back-end server to the back-end thread.
In the fourth implementation, the processing instruction includes an API call function, its parameters, synchronization information, and graphics content data. The front-end thread can write the function, parameters, and synchronization information into the corresponding private memory channel, write the graphics content data into the public memory, and send the offset address of the processing instruction in the shared memory to the back-end thread, triggering the back-end thread to fetch the processing instruction from the shared memory according to the offset address. Specifically, the offset address can be sent by Qemu to the back-end server of the Host operating system and then synchronized by the back-end server to the back-end thread.
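The four variants above share one pattern: small call metadata goes into the private channel, bulk graphics data goes into the public memory, and only an offset crosses to the back end. Below is a minimal sketch of that pattern; the wire format and the names `frontend_write` and `backend_read` are invented for illustration and are not taken from the patent.

```python
import struct

# Hypothetical wire format: the front end packs a function id, its
# parameters, and the public-memory offset of any bulk graphics data
# into its private channel, then hands only the channel offset onward.
shm = bytearray(1 << 16)   # stands in for the mapped shared memory
PUBLIC_BASE = 1 << 15      # assume the second half is the public memory

def frontend_write(func_id, params, graphics=b""):
    data_off = PUBLIC_BASE
    shm[data_off:data_off + len(graphics)] = graphics  # bulk data -> public
    header = struct.pack("<III", func_id, len(params), data_off)
    record = header + struct.pack(f"<{len(params)}I", *params)
    chan_off = 0                                       # channel base
    shm[chan_off:chan_off + len(record)] = record
    return chan_off                                    # offset sent via Qemu

def backend_read(offset):
    func_id, nparams, data_off = struct.unpack_from("<III", shm, offset)
    params = struct.unpack_from(f"<{nparams}I", shm, offset + 12)
    return func_id, list(params), data_off

off = frontend_write(0x42, [640, 480], b"\x00" * 16)
```

In the real system the returned offset would travel through Qemu to the Host back-end server and on to the back-end thread; here the function's return value stands in for that hop.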
In a specific implementation, the switch from the front-end thread to the back-end thread, and the switch between the first operating system and the second operating system, use technical means common to those skilled in the art, which the present application does not repeat.
In a specific implementation, the back-end thread drives the corresponding physical device/module to execute the corresponding processing instruction and obtains the processing result.
In a specific implementation, the back-end thread can feed the processing result back to the user directly as the response to the application interface call instruction, or return the processing result to the front-end thread, which then issues the response.
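The back-end side can be pictured as a small dispatch step with the two return paths just described. The device table, handler signature, and reply shape below are assumptions made for illustration, since the patent leaves the driver details to existing techniques.

```python
# Dispatch sketch: look up the device handler, drive it with the call's
# arguments, then choose one of the two return paths described above.
def run_backend(instruction, device_table, via_frontend=True):
    handler = device_table[instruction["device"]]
    result = handler(*instruction["args"])  # drive the physical device/module
    if via_frontend:
        # result goes back through the shared memory to the front-end thread
        return {"to": "frontend", "result": result}
    # or the back-end thread answers the API call instruction directly
    return {"to": "user", "result": result}

gpu_stub = lambda w, h: f"rendered {w}x{h}"
reply = run_backend({"device": "gpu", "args": (640, 480)}, {"gpu": gpu_stub})
```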
At this point, the remote call to the physical device by a user program in the Guest operating system is realized; that is, the virtualization of the physical device is realized.
With the device virtualization method in the embodiments of the present application, a shared memory is created between the first operating system and the second operating system, and the virtualization of the physical device is then realized through the shared memory. Since the first operating system and the second operating system exchange API calls through the shared memory, the system delay in the virtualization process is reduced and system performance is improved.
Embodiment Two
Next, the device virtualization method according to Embodiment Two of the present application will be described with reference to the system architecture shown in Fig. 3.
Fig. 5 shows a flowchart of the device virtualization method according to Embodiment Two of the present application. In this embodiment, the device virtualization method for multiple physical devices is described in detail, taking as an example one Guest operating system, one Host operating system, and three physical devices: a GPU device, a multimedia device, and a camera device. As shown in Fig. 5, the device virtualization method according to this embodiment comprises the following steps:
S501: when the Qemu corresponding to the Guest system starts, shared memories corresponding respectively to the GPU device, the multimedia device, and the camera device are created.
In a specific implementation, the creation process of the shared memories corresponding to the multimedia device and the camera device may refer to the creation process of the shared memory corresponding to the GPU device in S401 of Embodiment One of the present application, and is not repeated here.
S502: Qemu maps each shared memory to a device PCI memory space of the Guest system, and provides the Guest system with a corresponding number of virtual PCI registers as PCI configuration spaces.
In a specific implementation, the number of virtual PCI registers corresponds to the number of shared memories, with a one-to-one correspondence between them.
S503: the Guest Linux Kernel divides each of the shared memories into a private memory and a public memory.
In a specific implementation, this step may be implemented as S403 in Embodiment One of the present application, and is not repeated here.
S504: when a front-end thread starts, the shared memory corresponding to the front-end thread is determined according to the API call instruction that invokes the front-end thread, and corresponding shared memory address spaces are allocated for the front-end thread and the corresponding back-end thread.
Specifically, the physical device corresponding to the API call instruction can be determined according to the API call instruction that invokes the front-end thread, and the corresponding shared memory can then be determined according to the physical device. For example, if the API call instruction that invokes the front-end thread is an OpenGL interface call instruction, it can be determined that the corresponding physical device is the GPU device, and the shared memory corresponding to the front-end thread is therefore the shared memory corresponding to the GPU device, for example 303a; if the API call instruction that invokes the front-end thread is an OpenMAX interface call instruction, it can be determined that the corresponding physical device is the multimedia device, and the shared memory corresponding to the front-end thread is therefore the shared memory corresponding to the multimedia device, for example 303b; if the API call instruction that invokes the front-end thread is a Camera interface call instruction, it can be determined that the corresponding physical device is the camera device, and the shared memory corresponding to the front-end thread is therefore the shared memory corresponding to the camera device, for example 303c.
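The S504 lookup just described reduces to two table lookups. In this sketch the Fig. 3 reference numerals 303a, 303b, and 303c stand in for the actual shared-memory handles, and the dictionary and function names are invented for the example.

```python
# API call type -> physical device -> shared memory (per S504).
API_TO_DEVICE = {
    "OpenGL": "gpu",
    "OpenMAX": "multimedia",
    "Camera": "camera",
}
DEVICE_TO_SHM = {"gpu": "303a", "multimedia": "303b", "camera": "303c"}

def shared_memory_for(api_call):
    device = API_TO_DEVICE[api_call]   # API call instruction -> device
    return DEVICE_TO_SHM[device]       # device -> its shared memory

assert shared_memory_for("OpenMAX") == "303b"
```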
In a specific implementation, the allocation in this step of the corresponding shared memory address spaces for the front-end thread and the corresponding back-end thread may refer to the implementation of S404 in Embodiment One of the present application, and is not repeated here.
S505: the virtualization of the physical devices is realized between the front-end threads and the corresponding back-end threads through the shared memories.
In a specific implementation, this step may be implemented as S405 in Embodiment One of the present application, and is not repeated here.
At this point, remote calls to the multiple physical devices by user programs in the Guest operating system are realized; that is, the virtualization of the multiple physical devices is realized.
With the device virtualization method in the embodiments of the present application, a shared memory is created between the first operating system and the second operating system, and the virtualization of the physical device is then realized through the shared memory. Since the first operating system and the second operating system exchange API calls through the shared memory, the system delay in the virtualization process is reduced and system performance is improved.
Based on the same inventive concept, an embodiment of the present application further provides a device virtualization apparatus. Since the principle by which the apparatus solves the problem is similar to that of the device virtualization methods provided in Embodiments One and Two of the present application, the implementation of the apparatus may refer to the implementation of the methods, and repeated parts are not described again.
Embodiment Three
Fig. 6 shows a schematic structural diagram of the device virtualization apparatus according to Embodiment Three of the present application.
As shown in Fig. 6, the device virtualization apparatus 600 according to Embodiment Three of the present application includes: a shared memory creation module 601, configured to create a shared memory at the first operating system and map the shared memory as a Peripheral Component Interconnect standard device PCI memory space of the second operating system, wherein the shared memory corresponds to one physical device; a receiving module 602, configured to receive an application interface API operation instruction for the physical device at the second operating system and determine a corresponding processing instruction according to the API operation instruction; a sending module 603, configured to transfer the processing instruction to the first operating system through the shared memory; and a processing module 604, configured to execute the processing instruction at the first operating system and either return the processing result as the response to the API operation instruction or return it to the second operating system through the shared memory.
Specifically, the shared memory creation module includes: a shared memory creation submodule, configured to create the shared memory for the physical device when the Qemu corresponding to the second operating system starts; and a mapping submodule, configured to map the shared memory as the device PCI memory space of the second operating system and provide the second operating system with a virtual PCI register as a PCI configuration space.
Specifically, when there are multiple physical devices, the shared memory creation module is configured to: create a shared memory for each physical device when the emulation processor Qemu corresponding to the second operating system starts; map the multiple shared memories respectively as device PCI memory spaces of the second operating system; and provide the second operating system with multiple virtual PCI registers as PCI configuration spaces, the multiple PCI registers corresponding respectively to the multiple shared memories.
Specifically, the device virtualization apparatus according to Embodiment Three of the present application further includes a division module, configured to divide the shared memory into a first storage area and a second storage area, wherein the first storage area includes a first predetermined number of channels and the second storage area includes a second predetermined number of blocks.
Specifically, the channels of the first storage area are equal in size, and the sizes of the blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
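One way to realize the division module's layout is sketched below. This is a hedged illustration: the 50/50 split, channel count, and block size are assumptions chosen for the example, since the patent only requires equal-sized channels in the first storage area and device-adapted block sizes in the second.

```python
# Divide a shared memory into a first storage area of equal channels
# and a second storage area of fixed-size blocks; all sizes illustrative.
def divide(shm_size, num_channels, block_size, first_area_ratio=0.5):
    first_size = int(shm_size * first_area_ratio)
    channel_size = first_size // num_channels
    channels = [(i * channel_size, channel_size) for i in range(num_channels)]
    blocks = [(off, block_size)
              for off in range(first_size, shm_size, block_size)
              if off + block_size <= shm_size]
    return channels, blocks

# e.g. a camera shared memory whose blocks each hold one 640x480 RGBA frame
channels, blocks = divide(shm_size=64 << 20, num_channels=32,
                          block_size=640 * 480 * 4)
```

Adapting `block_size` per device (a video frame here, a compressed stream chunk for a multimedia device) is what "adapted to the processing data of the physical device" amounts to in this sketch.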
Specifically, when there are multiple physical devices, the apparatus further includes a shared memory determining module, configured to determine, according to the API operation instruction, the physical device corresponding to the API operation instruction, and to determine the corresponding shared memory according to the physical device.
Specifically, the device virtualization apparatus according to Embodiment Three of the present application further includes: a first mapping module, configured to, in the second operating system, create a first thread corresponding to an API call instruction when the API call instruction is received, send a thread creation instruction corresponding to the API call instruction to the first operating system, allocate for the first thread the address space of a corresponding channel in the first storage area and the address space of a corresponding second storage area, and pass the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space; and a second mapping module, configured to, in the first operating system, create a corresponding second thread after the thread creation instruction corresponding to the API call instruction is received, and map the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area as the address space of the second thread. The sending module is specifically configured to write, through the first thread, the processing instruction into the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area, and to send the offset address of the processing instruction in that address space to the first operating system through Qemu; in the first operating system, the received offset address is synchronized to the corresponding second thread.
With the device virtualization apparatus in the embodiments of the present application, a shared memory is created between the first operating system and the second operating system, and the virtualization of the physical device is then realized through the shared memory. Since the first operating system and the second operating system exchange API calls through the shared memory, the system delay in the virtualization process is reduced and system performance is improved.
Based on the same inventive concept, an embodiment of the present application further provides a device virtualization system. Since the principle by which the system solves the problem is similar to that of the device virtualization methods provided in Embodiments One and Two of the present application, the implementation of the system may refer to the implementation of the methods, and repeated parts are not described again.
Embodiment Four
Fig. 7 shows a schematic structural diagram of the device virtualization system according to Embodiment Four of the present application.
As shown in Fig. 7, the device virtualization system 700 according to Embodiment Four of the present application includes: a second operating system 701, configured to receive an application interface API call instruction for a physical device, determine the processing instruction corresponding to the application interface call instruction, and send the processing instruction to a first operating system 702 through the shared memory corresponding to the physical device; one or more shared memories 703, configured to transfer processing instructions between the first operating system and the second operating system, wherein the one or more shared memories correspond respectively to individual physical devices; and the first operating system 702, configured to receive and execute the processing instruction, and either return the processing result as the response to the application interface call instruction or return it to the second operating system through the shared memory corresponding to the physical device.
In a specific implementation, the implementation of the second operating system 701 may refer to the implementation of the second operating system 302 in Embodiment One of the present application, and repeated parts are not described again.
In a specific implementation, the implementation of the first operating system 702 may refer to the implementation of the first operating system 301 in Embodiment One of the present application, and repeated parts are not described again.
In a specific implementation, the implementation of the shared memories 703 may refer to the implementation of the shared memories 303a, 303b, and 303c in Embodiment One of the present application, and repeated parts are not described again.
Specifically, the first operating system may be the Host operating system, and the second operating system may be the Guest operating system.
With the device virtualization system in the embodiments of the present application, a shared memory is created between the first operating system and the second operating system, and the virtualization of the physical device is then realized through the shared memory. Since the first operating system and the second operating system exchange API calls through the shared memory, the system delay in the virtualization process is reduced and system performance is improved.
Embodiment Five
Based on the same inventive concept, an embodiment of the present application further provides an electronic device 800 as shown in Fig. 8.
As shown in Fig. 8, the electronic device 800 according to Embodiment Five of the present application includes: a display 801, a memory 802, and a processor 803; a bus 804; and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, wherein the one or more modules include instructions for performing each step of the method in either of Embodiment One or Embodiment Two of the present application.
Based on the same inventive concept, an embodiment of the present application further provides a computer program product usable in combination with the electronic device 800 that includes a display. The computer program product includes a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing each step of the method in either of Embodiment One or Embodiment Two of the present application.
It should be understood by those skilled in the art that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, can make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present application.
Obviously, those skilled in the art can make various changes and modifications to the present application without departing from the spirit and scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to include them.

Claims (16)

1. A device virtualization method, characterized by comprising:
creating a shared memory at a first operating system, and mapping the shared memory as a Peripheral Component Interconnect standard device PCI memory space of a second operating system, wherein the shared memory corresponds to one physical device;
receiving an application interface API operation instruction for the physical device at the second operating system, determining a corresponding processing instruction according to the API operation instruction, and transferring the processing instruction to the first operating system through the shared memory; and
executing the processing instruction at the first operating system, and either returning a processing result as a response to the API operation instruction or returning the processing result to the second operating system through the shared memory.
2. The method according to claim 1, characterized in that creating the shared memory at the first operating system and mapping the shared memory as the Peripheral Component Interconnect standard device PCI memory space of the second operating system specifically comprises:
creating the shared memory for the physical device when an emulation processor Qemu corresponding to the second operating system starts; and
mapping the shared memory as the device PCI memory space of the second operating system, and providing the second operating system with a virtual PCI register as a PCI configuration space.
3. The method according to claim 1, characterized in that there are multiple physical devices, and creating the shared memory at the first operating system and mapping the shared memory as the Peripheral Component Interconnect standard device PCI memory space of the second operating system specifically comprises:
creating a shared memory for each physical device when the Qemu corresponding to the second operating system starts; and
mapping the multiple shared memories respectively as device PCI memory spaces of the second operating system, and providing the second operating system with multiple virtual PCI registers as PCI configuration spaces, the multiple PCI registers corresponding respectively to the multiple shared memories.
4. The method according to claim 1, characterized in that after creating the shared memory at the first operating system and mapping the shared memory as the Peripheral Component Interconnect standard device PCI memory space of the second operating system, and before receiving the application interface API operation instruction for the physical device at the second operating system, the method further comprises:
dividing the shared memory into a first storage area and a second storage area, wherein the first storage area comprises a first predetermined number of channels, and the second storage area comprises a second predetermined number of blocks.
5. The method according to claim 4, characterized in that the channels of the first storage area are equal in size, and the sizes of the blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
6. The method according to claim 1, characterized in that there are multiple physical devices, and before transferring the processing instruction to the first operating system through the shared memory, the method further comprises:
determining, according to the API operation instruction, the physical device corresponding to the API operation instruction, and determining the corresponding shared memory according to the physical device.
7. The method according to claim 2, characterized in that after mapping the shared memory as the Peripheral Component Interconnect standard device PCI memory space of the second operating system, and before receiving the application interface API operation instruction for the physical device at the second operating system, the method further comprises:
in the second operating system, when an API call instruction is received, creating a first thread corresponding to the API call instruction; sending a thread creation instruction corresponding to the API call instruction to the first operating system; allocating, for the first thread, the address space of a corresponding channel in the first storage area and the address space of a corresponding second storage area; and passing the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space; and
in the first operating system, after the thread creation instruction corresponding to the API call instruction is received, creating a corresponding second thread, and mapping the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area as the address space of the second thread;
wherein transferring the processing instruction to the first operating system through the shared memory specifically comprises:
writing, by the first thread, the processing instruction into the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area; sending the offset address of the processing instruction in the address space to the first operating system through Qemu; and, in the first operating system, synchronizing the received offset address to the corresponding second thread.
8. A device virtualization apparatus, characterized by comprising:
a shared memory creation module, configured to create a shared memory at a first operating system, and map the shared memory as a Peripheral Component Interconnect standard device PCI memory space of a second operating system, wherein the shared memory corresponds to one physical device;
a receiving module, configured to receive an application interface API operation instruction for the physical device at the second operating system, and determine a corresponding processing instruction according to the API operation instruction;
a sending module, configured to transfer the processing instruction to the first operating system through the shared memory; and
a processing module, configured to execute the processing instruction at the first operating system, and return a processing result, as a response to the API operation instruction, to the second operating system through the shared memory.
9. The apparatus according to claim 8, characterized in that the shared memory creation module specifically comprises:
a shared memory creation submodule, configured to create the shared memory for the physical device when the Qemu corresponding to the second operating system starts; and
a mapping submodule, configured to map the shared memory as the device PCI memory space of the second operating system, and provide the second operating system with a virtual PCI register as a PCI configuration space.
10. The apparatus according to claim 8, characterized in that there are multiple physical devices, and the shared memory creation module is specifically configured to:
create a shared memory for each physical device when the emulation processor Qemu corresponding to the second operating system starts;
map the multiple shared memories respectively as device PCI memory spaces of the second operating system, and provide the second operating system with multiple virtual PCI registers as PCI configuration spaces, the multiple PCI registers corresponding respectively to the multiple shared memories.
11. The apparatus according to claim 8, characterized by further comprising:
a division module, configured to divide the shared memory into a first storage area and a second storage area, wherein the first storage area comprises a first predetermined number of channels, and the second storage area comprises a second predetermined number of blocks.
12. The apparatus according to claim 11, characterized in that the channels of the first storage area are equal in size, and the sizes of the blocks of the second storage area are adapted to the processing data of the physical device corresponding to the shared memory.
13. The apparatus according to claim 8, characterized in that there are multiple physical devices, and the apparatus further comprises:
a shared memory determining module, configured to determine, according to the API operation instruction, the physical device corresponding to the API operation instruction, and determine the corresponding shared memory according to the physical device.
14. The apparatus according to claim 9, characterized by further comprising:
a first mapping module, configured to, in the second operating system, create a first thread corresponding to an API call instruction when the API call instruction is received; send a thread creation instruction corresponding to the API call instruction to the first operating system; allocate, for the first thread, the address space of a corresponding channel in the first storage area and the address space of a corresponding second storage area; and pass the address space of the channel in the first storage area and the address space of the second storage area to the Qemu of the second operating system through the PCI configuration space; and
a second mapping module, configured to, in the first operating system, create a corresponding second thread after the thread creation instruction corresponding to the API call instruction is received, and map the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area as the address space of the second thread;
wherein the sending module is specifically configured to write, through the first thread, the processing instruction into the address space of the corresponding channel in the first storage area and the address space of the corresponding second storage area; send the offset address of the processing instruction in the address space to the first operating system through Qemu; and, in the first operating system, synchronize the received offset address to the corresponding second thread.
15. An electronic device, wherein the electronic device comprises: a display, a memory, one or more processors, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules comprising instructions for performing each step of the method according to any one of claims 1-7.
16. A computer program product, wherein the computer program product is encoded with instructions for performing a process, the process comprising the method according to any one of claims 1-7.
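The shared-memory channel mechanism recited in claims 12-14 can be sketched as follows. This is a hypothetical minimal illustration, not the patented implementation: a `bytearray` stands in for the shared first memory block divided into equal-sized channels, a `Queue` stands in for the Qemu/PCI-configuration-space notification path, and all names are assumptions introduced for the sketch. The writer thread places a processing instruction in its channel and passes only the offset address; the reader thread recovers the instruction from the shared block using that offset alone.

```python
from threading import Thread
from queue import Queue

# Claim 12: all channels of the first memory block are equal in size.
CHANNEL_SIZE = 64
NUM_CHANNELS = 4
first_block = bytearray(CHANNEL_SIZE * NUM_CHANNELS)  # stands in for the shared first memory block
offset_queue = Queue()  # stands in for the Qemu/PCI-config-space notification path

def first_thread_write(channel_id, instruction):
    # Writer side (claim 14): write the processing instruction into the
    # channel mapped to this thread, then send only the offset address on.
    offset = channel_id * CHANNEL_SIZE
    first_block[offset:offset + len(instruction)] = instruction
    offset_queue.put((offset, len(instruction)))

def second_thread_read(results):
    # Reader side (claim 14): the second thread locates the instruction in
    # the same mapped memory using the synchronized offset address alone.
    offset, length = offset_queue.get()
    results.append(bytes(first_block[offset:offset + length]))

results = []
writer = Thread(target=first_thread_write, args=(2, b"GL_DRAW"))
reader = Thread(target=second_thread_read, args=(results,))
writer.start(); reader.start()
writer.join(); reader.join()
print(results[0])  # → b'GL_DRAW'
```

Because only an offset crosses the notification path, the instruction payload itself is never copied between the two sides; both threads address the same memory, which is the point of mapping the channel into both operating systems.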
CN201680002834.3A 2016-12-29 2016-12-29 Equipment virtualization method, device and system, electronic equipment and computer program product Active CN107077377B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113265 WO2018119952A1 (en) 2016-12-29 2016-12-29 Device virtualization method, apparatus, system, and electronic device, and computer program product

Publications (2)

Publication Number Publication Date
CN107077377A true CN107077377A (en) 2017-08-18
CN107077377B CN107077377B (en) 2020-08-04

Family

ID=59623873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680002834.3A Active CN107077377B (en) 2016-12-29 2016-12-29 Equipment virtualization method, device and system, electronic equipment and computer program product

Country Status (2)

Country Link
CN (1) CN107077377B (en)
WO (1) WO2018119952A1 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107741863A (en) * 2017-10-08 2018-02-27 深圳市星策网络科技有限公司 Graphics card driving method and device
CN108124475A (en) * 2017-12-29 2018-06-05 深圳前海达闼云端智能科技有限公司 Virtual system Bluetooth communication method and device, virtual system, storage medium and electronic equipment
CN108932213A (en) * 2017-10-10 2018-12-04 北京猎户星空科技有限公司 Communication method and device between multiple operating systems, electronic device and storage medium
CN109343922A (en) * 2018-09-17 2019-02-15 广东微云科技股份有限公司 GPU virtualization picture display method and device
WO2019072182A1 (en) * 2017-10-13 2019-04-18 阿里巴巴集团控股有限公司 Hardware abstraction layer multiplexing method and apparatus, operating system and device
CN109725867A (en) * 2019-01-04 2019-05-07 中科创达软件股份有限公司 Virtual screen sharing method, device and electronic equipment
WO2019127191A1 (en) * 2017-12-28 2019-07-04 深圳前海达闼云端智能科技有限公司 File system sharing method and apparatus for multi-operating system, and electronic device
CN110442389A (en) * 2019-08-07 2019-11-12 北京技德系统技术有限公司 Method for sharing a GPU among multiple desktop environments
CN111510780A (en) * 2020-04-10 2020-08-07 广州华多网络科技有限公司 Video live broadcast control, bridging, flow control and broadcast control method and client
CN111522670A (en) * 2020-05-09 2020-08-11 中瓴智行(成都)科技有限公司 GPU virtualization method, system and medium for Android system
CN112015605A (en) * 2020-07-28 2020-12-01 深圳市金泰克半导体有限公司 Memory test method and device, computer equipment and storage medium
CN112131146A (en) * 2019-06-24 2020-12-25 维塔科技(北京)有限公司 Method and device for acquiring equipment information, storage medium and electronic equipment
CN112764872A (en) * 2021-04-06 2021-05-07 阿里云计算有限公司 Computer device, virtualization acceleration device, remote control method, and storage medium
CN113379589A (en) * 2021-07-06 2021-09-10 湖北亿咖通科技有限公司 Dual-system graphic processing method and device and terminal
CN113805952A (en) * 2021-09-17 2021-12-17 中国联合网络通信集团有限公司 Peripheral virtualization management method, server and system
CN114047960A (en) * 2021-11-10 2022-02-15 北京鲸鲮信息系统技术有限公司 Operating system running method and device, electronic equipment and storage medium
CN114327944A (en) * 2021-12-24 2022-04-12 科东(广州)软件科技有限公司 Method, device, equipment and storage medium for sharing memory by multiple systems
CN114816417A (en) * 2022-04-18 2022-07-29 北京凝思软件股份有限公司 Cross compiling method and device, computing equipment and storage medium
WO2022194156A1 (en) * 2021-03-16 2022-09-22 华为技术有限公司 Distributed access control method and related apparatus and system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860506B (en) * 2019-11-28 2024-05-17 阿里巴巴集团控股有限公司 Method, device, system and storage medium for processing monitoring data
CN112685197B (en) * 2020-12-28 2022-08-23 浪潮软件科技有限公司 Interface data interactive system
CN114661497B (en) * 2022-03-31 2023-01-10 慧之安信息技术股份有限公司 Memory sharing method and system for partition of operating system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477511A (en) * 2008-12-31 2009-07-08 杭州华三通信技术有限公司 Method and apparatus for sharing memory medium between multiple operating systems
CN101847105A (en) * 2009-03-26 2010-09-29 联想(北京)有限公司 Computer and memory sharing method for multiple operating systems
US20110264841A1 (en) * 2010-04-26 2011-10-27 International Business Machines Corporation Sharing of class data among virtual machine applications running on guests in virtualized environment using memory management facility
CN103077071A (en) * 2012-12-31 2013-05-01 北京启明星辰信息技术股份有限公司 Method and system for acquiring process information of KVM (Kernel-based Virtual Machine)
CN104216862A (en) * 2013-05-29 2014-12-17 华为技术有限公司 Method and device for communication between user process and system service
CN102541618B (en) * 2010-12-29 2015-05-27 中国移动通信集团公司 Implementation method, system and device for virtualization of universal graphic processor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101661381B (en) * 2009-09-08 2012-05-30 华南理工大学 Data sharing and access control method based on Xen
CN102262557B (en) * 2010-05-25 2015-01-21 运软网络科技(上海)有限公司 Method for constructing virtual machine monitor by bus architecture and performance service framework

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477511A (en) * 2008-12-31 2009-07-08 杭州华三通信技术有限公司 Method and apparatus for sharing memory medium between multiple operating systems
CN101847105A (en) * 2009-03-26 2010-09-29 联想(北京)有限公司 Computer and memory sharing method for multiple operating systems
US20110264841A1 (en) * 2010-04-26 2011-10-27 International Business Machines Corporation Sharing of class data among virtual machine applications running on guests in virtualized environment using memory management facility
CN102541618B (en) * 2010-12-29 2015-05-27 中国移动通信集团公司 Implementation method, system and device for virtualization of universal graphic processor
CN103077071A (en) * 2012-12-31 2013-05-01 北京启明星辰信息技术股份有限公司 Method and system for acquiring process information of KVM (Kernel-based Virtual Machine)
CN104216862A (en) * 2013-05-29 2014-12-17 华为技术有限公司 Method and device for communication between user process and system service

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107741863A (en) * 2017-10-08 2018-02-27 深圳市星策网络科技有限公司 Graphics card driving method and device
CN108932213A (en) * 2017-10-10 2018-12-04 北京猎户星空科技有限公司 Communication method and device between multiple operating systems, electronic device and storage medium
CN109669782A (en) * 2017-10-13 2019-04-23 阿里巴巴集团控股有限公司 Hardware abstraction layer multiplexing method, device, operating system and equipment
WO2019072182A1 (en) * 2017-10-13 2019-04-18 阿里巴巴集团控股有限公司 Hardware abstraction layer multiplexing method and apparatus, operating system and device
WO2019127191A1 (en) * 2017-12-28 2019-07-04 深圳前海达闼云端智能科技有限公司 File system sharing method and apparatus for multi-operating system, and electronic device
CN108124475B (en) * 2017-12-29 2022-05-20 达闼机器人股份有限公司 Virtual system Bluetooth communication method and device, virtual system, storage medium and electronic equipment
CN108124475A (en) * 2017-12-29 2018-06-05 深圳前海达闼云端智能科技有限公司 Virtual system Bluetooth communication method and device, virtual system, storage medium and electronic equipment
CN109343922A (en) * 2018-09-17 2019-02-15 广东微云科技股份有限公司 GPU virtualization picture display method and device
CN109343922B (en) * 2018-09-17 2022-01-11 广东微云科技股份有限公司 GPU (graphics processing Unit) virtual picture display method and device
CN109725867A (en) * 2019-01-04 2019-05-07 中科创达软件股份有限公司 Virtual screen sharing method, device and electronic equipment
CN112131146B (en) * 2019-06-24 2022-07-12 维塔科技(北京)有限公司 Method and device for acquiring equipment information, storage medium and electronic equipment
CN112131146A (en) * 2019-06-24 2020-12-25 维塔科技(北京)有限公司 Method and device for acquiring equipment information, storage medium and electronic equipment
CN110442389A (en) * 2019-08-07 2019-11-12 北京技德系统技术有限公司 Method for sharing a GPU among multiple desktop environments
CN110442389B (en) * 2019-08-07 2024-01-09 北京技德系统技术有限公司 Method for sharing GPU (graphics processing Unit) in multi-desktop environment
CN111510780B (en) * 2020-04-10 2021-10-26 广州方硅信息技术有限公司 Video live broadcast control, bridging, flow control and broadcast control method and client
CN111510780A (en) * 2020-04-10 2020-08-07 广州华多网络科技有限公司 Video live broadcast control, bridging, flow control and broadcast control method and client
CN111522670A (en) * 2020-05-09 2020-08-11 中瓴智行(成都)科技有限公司 GPU virtualization method, system and medium for Android system
CN112015605A (en) * 2020-07-28 2020-12-01 深圳市金泰克半导体有限公司 Memory test method and device, computer equipment and storage medium
CN112015605B (en) * 2020-07-28 2024-05-14 深圳市金泰克半导体有限公司 Memory testing method and device, computer equipment and storage medium
WO2022194156A1 (en) * 2021-03-16 2022-09-22 华为技术有限公司 Distributed access control method and related apparatus and system
CN112764872A (en) * 2021-04-06 2021-05-07 阿里云计算有限公司 Computer device, virtualization acceleration device, remote control method, and storage medium
CN112764872B (en) * 2021-04-06 2021-07-02 阿里云计算有限公司 Computer device, virtualization acceleration device, remote control method, and storage medium
CN113379589A (en) * 2021-07-06 2021-09-10 湖北亿咖通科技有限公司 Dual-system graphic processing method and device and terminal
CN113805952B (en) * 2021-09-17 2023-10-31 中国联合网络通信集团有限公司 Peripheral virtualization management method, server and system
CN113805952A (en) * 2021-09-17 2021-12-17 中国联合网络通信集团有限公司 Peripheral virtualization management method, server and system
CN114047960A (en) * 2021-11-10 2022-02-15 北京鲸鲮信息系统技术有限公司 Operating system running method and device, electronic equipment and storage medium
CN114327944A (en) * 2021-12-24 2022-04-12 科东(广州)软件科技有限公司 Method, device, equipment and storage medium for sharing memory by multiple systems
CN114816417A (en) * 2022-04-18 2022-07-29 北京凝思软件股份有限公司 Cross compiling method and device, computing equipment and storage medium

Also Published As

Publication number Publication date
CN107077377B (en) 2020-08-04
WO2018119952A1 (en) 2018-07-05

Similar Documents

Publication Publication Date Title
CN107077377A Device virtualization method, device, system, electronic device, and computer program product
CN107003892A GPU virtualization method, device, system, electronic device, and computer program product
CN103034524B Paravirtualized virtual GPU
CN102707986B Shared storage between child partition and parent partition
CN104714846B Resource processing method, operating system and device
CN104965757B Virtual machine live migration method, virtual machine (VM) migration management device and system
CN101128807B Systems and methods for an augmented interrupt controller and synthetic interrupt sources
CN103856547B Mapping method for multiple virtual machines, system and client device
CN100385403C Method and system for transitioning network traffic between logical partitions
CN106385329B Resource pool processing method, apparatus and device
EP1783609A1 Processing management device, computer system, distributed processing method, and computer program
CN109086241A Application-based dynamic heterogeneous multi-core system and method
CN103999051A Policies for shader resource allocation in a shader core
CN103106058A Dual-screen display method and intelligent display terminal based on the Android platform
CN103346981A Virtual switching method, related device and computer system
CN106371894A Configuration method, configuration device and data processing server
CN109324903A Display resource scheduling method and device for embedded system
JP2013508869A Application image display method and apparatus
CN106462522B Input/output virtualization (IOV) host controller (HC) (IOV-HC) for flash-memory-based storage devices
CN106796530A Virtualization method, device, electronic device, and computer program product
CN114972607B Data transmission method, device and medium for accelerating image display
US12008389B2 Flexible resource assignment to physical and virtual functions in a virtualized processing system
CN108351838B Providing memory management functionality using aggregated memory management units (MMUs)
CN115904617A GPU virtualization implementation method based on SR-IOV technology
CN114691037A System and method for managing offload card namespaces and processing input/output requests

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant