WO2014015725A1 - Resource scheduling system and method under graphics card virtualization based on instant feedback of application effects - Google Patents
Resource scheduling system and method under graphics card virtualization based on instant feedback of application effects
- Publication number
- WO2014015725A1 (PCT/CN2013/077457, CN2013077457W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- graphics card
- scheduling
- application
- physical
- executor
- Prior art date
Links
- method — title, claims, abstract, description (108)
- effects — title, claims, abstract, description (16)
- modification — claims, abstract, description (6)
- processing — claims, description (33)
- rendering — claims, description (25)
- communication — claims, description (4)
- detection — claims, description (4)
- measurement — claims, description (4)
- periodic — claims, description (3)
- calculation — claims (1)
- transmission — abstract, description (6)
- delays — abstract, description (2)
- agent — description (4)
- engineering/technology — description (3)
- acceleration — description (2)
- delayed — description (2)
- approach — description (1)
- computer application — description (1)
- deficiency — description (1)
- design — description (1)
- development — description (1)
- diagram — description (1)
- installation — description (1)
- networking — description (1)
- review — description (1)
- waste — description (1)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3836—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45575—Starting, stopping, suspending or resuming virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/30072—Arrangements for executing specific machine instructions to perform conditional operations, e.g. using predicates or guards
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/363—Graphics controllers
Definitions
- The present invention relates to the field of computer application technology, and in particular to a system and method for scheduling a physical graphics card (GPU) among virtual machines based on instant feedback of application effects; specifically, a resource scheduling system for graphics card virtualization driven by instant feedback of application effects.
- GPU — graphics card
- GPU virtualization is widely used in data centers that perform GPU computing, including but not limited to cloud gaming, video rendering, and general-purpose GPU computing (GPGPU).
- VGA (Video Graphics Array) Passthrough assigns each available physical graphics card exclusively to one running virtual machine.
- The disadvantages of this method are as follows. First, common commercial motherboards support only two to three graphics cards, so a special motherboard is required to simultaneously run many virtual machines that need graphics card support. Second, a virtual machine usually cannot exhaust the physical graphics resources it owns during its operation, and with this technique the remaining graphics resources cannot be given to other virtual machines, wasting physical graphics resources.
- GPU Virtualization on VMware's Hosted I/O Architecture was published in Volume 43, Issue 3 of SIGOPS Operating Systems Review in 2009. Earlier, at Multimedia Computing and Networking 2008, Bautin et al. proposed a scheduling strategy for dividing physical graphics resources among multiple applications in Graphic Engine Resource Management. Then, at USENIX ATC 2011, Kato et al., in the paper TimeGraph: GPU scheduling for real-time multi-tasking environments, proposed improving user programs' ability to use physical graphics acceleration by accounting for graphics resource usage and modifying the operating system's graphics driver.
- The above methods can maximize the utilization of available physical graphics resources while providing graphics acceleration to multiple virtual machines.
- The disadvantage of these methods is that the operating system or the graphics card driver must be modified; when applied to virtual machines, even the hypervisor or the guest applications inside the virtual machine may need to be modified.
- Because existing methods cannot obtain feedback data on the running effect of the accelerated Guest Application, existing physical GPU resource scheduling systems and methods are blind, and the resource scheduling effect obtained is mediocre.
- The present invention is directed at the above deficiencies of the prior art and provides a system and method for scheduling physical graphics card resources among virtual machines based on instant feedback of application effects.
- Traditional GPU virtualization technology sends the graphics card commands and data in the virtual machine to the host physical graphics application interface (Host GPU API) through the host physical graphics card instruction transmitter (GPU HostOps Dispatch).
- The method provided by the present invention inserts a scheduling executor (Agent) between the GPU HostOps Dispatch and the Host GPU API by means of a function hook, delays the transmission of instructions and data in the GPU HostOps Dispatch, and monitors the display performance of the Guest Application and the usage of physical graphics card resources.
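As an illustrative sketch only (not the patent's actual implementation, which hooks native Host GPU API entry points in a binary), the Agent's role can be modeled in Python by wrapping a stand-in API function so every call can be delayed and measured. All names (`host_gpu_api_present`, `install_agent`, `stats`) are made up for the example.

```python
import time

# A shared record of what the Agent observes: call count and GPU-side time.
stats = {"calls": 0, "gpu_time": 0.0}

def host_gpu_api_present(frame):
    """Stand-in for a real Host GPU API entry point (e.g. a Present call)."""
    return f"rendered {frame}"

def install_agent(api_func, delay_s=0.0):
    """Return a hooked version of api_func that delays and monitors it."""
    def hooked(frame):
        time.sleep(delay_s)               # scheduler-imposed delay
        start = time.perf_counter()
        result = api_func(frame)          # forward to the original API
        stats["gpu_time"] += time.perf_counter() - start
        stats["calls"] += 1
        return result
    return hooked

# The dispatch path now goes through the hooked entry point.
dispatch = install_agent(host_gpu_api_present, delay_s=0.001)
out = dispatch("frame-0")
print(out, stats["calls"])
```

The key property mirrored here is that neither the "API" nor its caller is modified; only the indirection between them changes.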
- Agent — scheduling executor
- A time- or timing-based graphics card resource scheduling algorithm is one whose decisions about when use of graphics card resources begins, ends, or proceeds are based, partially or completely, on absolute or relative time.
- Through the Scheduling Controller, the system described in the present invention immediately accepts the user's decisions to enable or disable each Agent, selects the scheduling method to be used, and changes the corresponding parameter settings of each Agent accordingly.
- The Scheduling Controller displays or records one or more of: the current physical graphics card resource scheduling and usage, and the usage of graphics card resources by the application in each virtual machine.
- The present invention employs Advanced Prediction to delay the transmission of instructions and data in the GPU HostOps Dispatch, achieving precise control of Frame Latency.
- This advanced prediction technique includes Frame Rendering Performance Prediction and Flush Single Queued Frame.
- The render/display command queue frame-by-frame advancement technique includes a Mark Flush Frame operation and a Commit Flush Frame operation.
- The mark operation is optional and marks a frame of the virtual machine in the render/display command queue (including but not limited to the previous frame or the first few frames) to indicate a frame that can be moved out of the graphics card buffer (including but not limited to being forcibly displayed).
- The commit operation forces a frame (the frame marked by a preceding mark operation, if any) out of the physical graphics card buffer, leaving the buffer with free space.
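The mark/commit pair can be sketched as a bounded queue standing in for the physical graphics card buffer; `mark()` tags a frame and `commit()` forces the tagged (or, absent a mark, the oldest) frame out to free space. The capacity, class, and method names are illustrative assumptions, not the patent's data structures.

```python
from collections import deque

class FrameQueue:
    """Bounded render/display queue with Mark Flush Frame / Commit Flush Frame."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.frames = deque()
        self.marked = None

    def push(self, frame):
        if len(self.frames) >= self.capacity:
            raise BufferError("queue full; commit a frame first")
        self.frames.append(frame)

    def mark(self, frame):
        # Optional: tag a queued frame as eligible for eviction.
        if frame in self.frames:
            self.marked = frame

    def commit(self):
        # Force the marked frame out; fall back to the oldest frame.
        if self.marked is not None and self.marked in self.frames:
            self.frames.remove(self.marked)
            out, self.marked = self.marked, None
            return out
        return self.frames.popleft() if self.frames else None

q = FrameQueue(capacity=2)
q.push("f0")
q.push("f1")
q.mark("f0")
evicted = q.commit()
print(evicted, len(q.frames))
```

After the commit, the buffer has free space again, which is exactly what lets the dispatcher push the next frame without blocking.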
- The system and method described herein require no modification of the host operating system, host graphics driver, hypervisor, virtual machine operating system, virtual machine graphics driver, or applications within the virtual machine. Moreover, the system and method achieve a runtime performance overhead of less than 5% and can be enabled or stopped without significant virtual machine pause times (only millisecond-scale pauses are required).
- A resource scheduling system for graphics card virtualization based on instant feedback of application effects includes a host physical graphics card instruction transmitter and a host physical graphics application interface, and further includes the following modules: a scheduling executor connected between the host physical graphics card instruction transmitter and the host physical graphics application interface, and a scheduling controller connected to the scheduling executor.
- The scheduling controller receives user commands and passes them to the scheduling executor. The scheduling executor receives user commands from the scheduling controller, monitors the running state of the application, and transmits the application graphics card status results to the scheduling controller; at the same time, according to the scheduling algorithm specified by the scheduling controller, it periodically (or on events) computes the delay needed to satisfy the minimum required application graphics card state, delaying the transmission of instructions and data from the host physical graphics card instruction transmitter to the host physical graphics application interface.
- The scheduling controller receives and processes the scheduling results and scheduling status from the scheduling executor for display.
- The scheduling controller receives user commands; parses the operations for each scheduling executor, the scheduling algorithm configuration, and the corresponding parameters; passes the user commands to the scheduling executor module; and receives status results from the scheduling executor module for display to the user.
- the scheduling controller comprises the following modules:
- a console for receiving user commands, which input the configuration and corresponding parameters of the scheduling algorithm; the console obtains scheduling results from the scheduling communicator and displays them to the user;
- a scheduling communicator for communication between the scheduling controller and one or more scheduling executors, responsible for installing/uninstalling the scheduling executors, passing user commands to the scheduling executors, and receiving application graphics card status results from the scheduling executors.
- The scheduling executor comprises a scheduler module, configured to receive the designation of a scheduling algorithm and its parameter configuration in a user command; it is responsible for locating the corresponding scheduling algorithm, configuring it, and running it, delaying on demand the instructions and data sent by the host physical graphics card instruction transmitter to the host physical graphics application interface;
- and an application graphics card status monitor, which collects the status of the graphics card from the host physical graphics application interface, generates the application graphics card status result from it, and feeds the result back to the scheduler while passing it to the scheduling communicator in the scheduling controller.
- The application graphics card state comprises a physical state of the graphics card and/or a logical state metric associated with the application category.
- Physical state metrics include, for example, the graphics card's load factor (GPU Load), temperature, and voltage.
- As logical state metrics: for computer 3D games, the corresponding graphics state metric is frames per second (FPS); for general-purpose GPU computation, the corresponding metrics are operations per second (Ops), the application's GPU usage, and so on.
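As a small worked example of the FPS logical state metric, the status monitor could derive frames per second from frame-completion timestamps observed at the Host GPU API. The function below is an illustrative sketch; the patent does not prescribe how FPS is computed.

```python
def frames_per_second(timestamps):
    """FPS over the span covered by the given frame-completion times."""
    if len(timestamps) < 2:
        return 0.0
    span = timestamps[-1] - timestamps[0]
    # N timestamps bound N-1 frame intervals.
    return (len(timestamps) - 1) / span if span > 0 else 0.0

# 61 frames evenly spaced 1/60 s apart span exactly one second.
ts = [i / 60.0 for i in range(61)]
print(round(frames_per_second(ts), 2))
```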
- A graphics card resource scheduling method for graphics card virtualization in the resource scheduling system is provided. A scheduling executor (Agent) is inserted by means of a function hook between the host physical graphics card instruction transmitter and the host physical graphics application interface, delaying the instructions and data sent by the host physical graphics card instruction transmitter to the host physical graphics application interface. The method monitors application-related display performance and physical graphics resource usage and feeds them back to any time- or timing-based graphics card resource scheduling algorithm; it requires no modification to the virtual machine applications, host operating system, virtual machine operating system, graphics drivers, or virtual machine manager, and has low performance loss.
- The specific process of the method is as follows. After one or more virtual machines are started, when the client needs to install the resource scheduling system, the scheduling controller searches for the processes in which the applications execute, or follows user-specified processes, and binds a scheduling executor to each corresponding virtual machine. The scheduling communicator in the scheduling controller then establishes communication with each bound scheduling executor. When scheduling graphics card resources, the client issues an instruction selecting a scheduling algorithm (possibly a third-party scheduling algorithm) and provides the corresponding parameters; after receiving the client instruction, the console distributes the user command to each scheduling executor through the scheduling communicator. Each scheduling executor runs the selected graphics card resource scheduling algorithm according to the user command, delaying the transmission of instructions and data from the host physical graphics card instruction transmitter to the host physical graphics application interface. At the same time, the application graphics card status monitor collects the status of the graphics card from the host physical graphics application interface and generates the application graphics card status from it, subsequently feeding the status results back periodically (or on events) to the scheduler and passing them to the scheduling communicator in the scheduling controller. When the client needs to uninstall the resource scheduling system, the client issues an uninstall command through the scheduling controller; after the console receives the client command, the user command is distributed by the scheduling communicator to each scheduling executor, and each scheduling executor stops its own operation after receiving the uninstall command.
- The graphics card resource scheduling method uses an advanced prediction method, cooperating with the delayed transmission of instructions and data in the host physical graphics card instruction transmitter to achieve precise control of the inter-frame delay. The advanced prediction method comprises rendering/display overhead prediction and frame-by-frame advancement of the rendering/display command queue.
- The rendering/display overhead prediction predicts the current consumption time of the physical graphics card resource from the history of physical graphics card resource consumption times corresponding to the host physical graphics application interface.
- The frame-by-frame advancement of the rendering/display command queue comprises a mark operation and a commit operation: the mark operation is optional and marks a frame of the virtual machine in the render/display command queue (including but not limited to the previous frame or the first few frames) to indicate a frame that can be removed from the graphics card buffer (including but not limited to being forcibly displayed); the commit operation forces a frame (the frame marked by a preceding mark operation, if any) out of the physical graphics card buffer, so that the buffer has free space.
- The step of binding the scheduling executor to the corresponding virtual machine is specifically:
- Step 1.1: search for the specified virtual machine image rendering processes according to user-specified information (depending on the virtual machine manager's design, these may also be the virtual machine processes), or select all relevant virtual machine image rendering processes; perform steps 1.2 to 1.6 for each of these processes.
- Step 1.2: create a new thread in the process and load a scheduling executor in it.
- Step 1.3: enter the scheduling executor's entry point and initialize the scheduling executor.
- Step 1.4: find the set of host physical graphics application interface addresses loaded by the process, and modify the code at each host physical graphics application interface address to transfer control to the corresponding processing function.
- Step 1.5: after each processing function obtains the address of the original host physical graphics application interface, the saved instructions are executed and the contents of each register are restored, so that after the processing function finishes, the original host physical graphics application interface executes correctly.
- Step 1.6: the thread does not terminate.
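The install/restore cycle of steps 1.4-1.5 (and the later uninstall steps) can be sketched with a dictionary standing in for the process's table of host API entry-point addresses: installing replaces an entry with a hook that forwards to the saved original, and uninstalling restores the saved entry verbatim. All names here are invented for illustration; the real mechanism patches machine code at the API addresses.

```python
# A dict standing in for the loaded set of host API entry points.
api_table = {"Present": lambda: "real Present"}
saved = {}

def install(name):
    saved[name] = api_table[name]          # remember the original "address"
    def hook():
        # Scheduling logic would run here, then the original is invoked.
        return "hooked:" + saved[name]()
    api_table[name] = hook

def uninstall(name):
    api_table[name] = saved.pop(name)      # restore the original entry

install("Present")
hooked_result = api_table["Present"]()     # goes through the hook
uninstall("Present")
restored_result = api_table["Present"]()   # original logic runs again
print(hooked_result, restored_result)
```

This mirrors why the host software needs no modification: only the lookup table in the bound process changes, and it changes back on uninstall.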
- The step of delaying the transmission of instructions and data from the host physical graphics card instruction transmitter to the host physical graphics application interface is specifically:
- Step 2.1a: in the processing function specified by the resource scheduling algorithm, predict the current consumption time of the physical graphics card resource from the history of physical graphics card resource consumption times corresponding to the host physical graphics application interface, and stop timing the current central processing unit (CPU) consumption time;
- CPU — central processing unit
- Step 2.2a: suspend execution on the central processing unit for a period of time, whose length is calculated by the scheduling algorithm from the current CPU consumption time and the predicted physical graphics card resource consumption time;
- Step 2.3a: start timing the physical graphics card resource consumption time;
- Step 2.4a: call the original host physical graphics application interface;
- Step 2.5a: stop timing the physical graphics card resource consumption time, and append the current consumption time to the history of physical graphics card resource consumption times corresponding to the host physical graphics application interface;
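Steps 2.1a-2.5a can be sketched as a predict-then-delay wrapper: the delay is computed from the predicted GPU cost before the call, and the measured cost afterwards updates the history. The frame-budget delay policy and all names are assumptions for the sketch (the patent leaves the delay formula to the scheduling algorithm), and `time.sleep` stands in for suspending the CPU.

```python
import time

history = []  # physical graphics card resource consumption time history

def predicted_cost():
    # Step 2.1a: predict the next GPU cost from the history (mean here).
    return sum(history) / len(history) if history else 0.0

def target_delay(cpu_time, gpu_cost, frame_budget=0.005):
    # One plausible policy: pad the frame out to a fixed per-frame budget.
    return max(0.0, frame_budget - cpu_time - gpu_cost)

def hooked_call(api_func, cpu_time):
    time.sleep(target_delay(cpu_time, predicted_cost()))   # step 2.2a
    start = time.perf_counter()                            # step 2.3a
    result = api_func()                                    # step 2.4a
    history.append(time.perf_counter() - start)            # step 2.5a
    return result

print(hooked_call(lambda: "frame done", cpu_time=0.001))
```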
- Step 2.1b: in the processing function specified by the resource scheduling algorithm, stop timing the current CPU consumption time and start timing the physical graphics card resource consumption time;
- Step 2.2b: call the original host physical graphics application interface;
- Step 2.3b: stop timing the physical graphics card resource consumption time;
- Step 2.4b: suspend the central processing unit for a period of time, whose length is calculated by the scheduling algorithm from the current CPU consumption time and the current physical graphics card resource consumption time;
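Variant b (steps 2.1b-2.4b) differs from variant a in that it needs no prediction: the delay follows the call and uses the cost actually measured. A minimal sketch, with the same assumed frame-budget policy as above and `time.sleep` again standing in for suspending the CPU:

```python
import time

def hooked_call_b(api_func, cpu_time, frame_budget=0.005):
    start = time.perf_counter()                               # step 2.1b
    result = api_func()                                       # step 2.2b
    gpu_cost = time.perf_counter() - start                    # step 2.3b
    time.sleep(max(0.0, frame_budget - cpu_time - gpu_cost))  # step 2.4b
    return result

t0 = time.perf_counter()
out = hooked_call_b(lambda: "ok", cpu_time=0.0, frame_budget=0.002)
elapsed = time.perf_counter() - t0
print(out, elapsed >= 0.001)
```

Measuring after the fact trades the precision of variant a's prediction for simplicity: the delay is always one call behind the true cost.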
- Step 2.1c: in the processing function specified by the resource scheduling algorithm, perform the commit operation of the frame-by-frame advancement of the rendering/display command queue, forcing a frame (such as a frame marked by a preceding mark operation) out of the buffer so that the physical graphics card buffer has free space; stop timing the CPU consumption time;
- Step 2.2c: using rendering/display overhead prediction, predict the current consumption time of the physical graphics card resource from the history of physical graphics card resource consumption times corresponding to the host physical graphics application interface;
- Step 2.3c: suspend execution on the central processing unit for a period of time, whose length is calculated by the scheduling algorithm from the current CPU consumption time and the predicted physical graphics card resource consumption time;
- Step 2.4c: start timing the physical graphics card resource consumption time;
- Step 2.5c: call the original host physical graphics application interface;
- Step 2.6c: stop timing the physical graphics card resource consumption time;
- Step 2.7c: start timing the next CPU consumption time and, optionally, perform the mark operation of the frame-by-frame advancement of the rendering/display command queue, marking a frame of the virtual machine in the rendering/display command queue (including but not limited to the previous frame or the first few frames) to indicate a frame that can be moved out of the graphics card buffer (including but not limited to being forcibly displayed).
- The application graphics card status monitor collects the status of the graphics card from the host physical graphics application interface, specifically: Step 3.1: in the processing function specified by the resource scheduling algorithm, invoke the interfaces provided by the host physical graphics application interface, the operating system kernel, or the graphics card driver to collect the graphics card status required by the resource scheduling algorithm and user commands, such as the graphics card load factor (GPU Load), temperature, voltage, FPS, Ops, the application's graphics card load factor, and so on.
- GPU Load — graphics card load factor
- Step 3.2: in the processing function specified by the resource scheduling algorithm, invoke the original host physical graphics application interface. Preferably, the step of generating the application graphics card status is specifically:
- Step 4.1: the user specifies a status reporting frequency, which the scheduling executor obtains;
- Step 4.2: when a status reporting time point arrives, the application graphics card status monitor in the scheduling executor transmits the accumulated status results to the scheduling communicator in the scheduling controller;
- Step 4.3: the scheduling executor clears its own status result buffer.
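Steps 4.1-4.3 amount to an accumulate-report-clear cycle. The sketch below triggers a report by sample count rather than wall-clock time purely to keep the example deterministic; the class and parameter names are invented for illustration.

```python
class Executor:
    """Accumulates status samples and reports them in batches (steps 4.1-4.3)."""

    def __init__(self, communicator, report_every=3):
        self.buffer = []                # status result buffer
        self.communicator = communicator
        self.report_every = report_every  # stands in for the reporting frequency

    def sample(self, status):
        self.buffer.append(status)
        if len(self.buffer) >= self.report_every:        # reporting point
            self.communicator.append(list(self.buffer))  # step 4.2: send batch
            self.buffer.clear()                          # step 4.3: clear buffer

received = []                      # stands in for the scheduling communicator
ex = Executor(received, report_every=3)
for fps in (58, 60, 59, 61):
    ex.sample({"fps": fps})
print(len(received), len(ex.buffer))
```

Batching keeps the executor-to-controller traffic proportional to the reporting frequency rather than to the per-frame sampling rate.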
- The steps by which the scheduling executor stops operating after receiving the uninstall instruction are specifically:
- Step 5.1: after each scheduling executor receives the uninstall instruction, it starts the uninstall process of steps 5.2 to 5.3.
- Step 5.2: restore the set of host physical graphics application interface addresses loaded by the process, modifying the code at each host physical graphics application interface address back to the original contents of that application interface address, so that each subsequent use of a host physical graphics application interface by the process runs the original application interface logic.
- Step 5.3: end the thread that was inserted into the corresponding virtual machine when binding the scheduling executor, thereby uninstalling the scheduling executor.
- The resource scheduling algorithm specifically includes the following steps. Step 6.1: for the virtual machine group VM1, VM2, ..., VMn, the scheduler in each virtual machine's scheduling executor parses the user's method configuration and obtains the minimum graphics card load factor to be satisfied, the minimum frames per second for each virtual machine (the patent's scope is not limited to computer games; for other graphics applications, different state metrics can be used), and the user-specified detection period T.
- Step 6.2: during operation, the processing function is called multiple times; with the prediction technique it performs steps 2.1a to 2.5a, and without the prediction technique it performs steps 2.1b to 2.4b.
- Step 6.3: in each period T, if some virtual machine VMm does not satisfy its state metric, find the virtual machine with the largest minimum frames per second and reduce its minimum frames per second; the amount of reduction depends on the application's GPU Load over the last few frames, with the frames per second and the application's graphics card load factor over the most recent frames related linearly.
- Step 6.4: in each period T, if the physical graphics card usage does not meet the minimum graphics card load factor, increase the minimum frames per second of all virtual machines; the amount of increase depends on the application's graphics card load factor over the last few frames, with the frames per second and the application's graphics card load factor over the most recent frames related linearly.
- Step 6.5: steps 6.2 to 6.4 remain in effect until the user ends the method, replaces it, or uninstalls the scheduling executor.
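The control loop of steps 6.3-6.4 can be sketched as one adjustment round per period T: lower the cap of the VM with the largest cap when some VM misses its minimum, and raise all caps when the physical GPU is underloaded. The linear adjustment constant `k` and all numeric values are assumptions for illustration; the patent only says the adjustment is linear in the recent graphics card load factor.

```python
def adjust_caps(caps, measured_fps, minimums, gpu_load, min_gpu_load, k=0.1):
    """One period-T round of the feedback scheduling algorithm (sketch)."""
    caps = dict(caps)
    starved = [vm for vm in caps if measured_fps[vm] < minimums[vm]]
    if starved:
        # Step 6.3: shrink the cap of the VM with the largest minimum FPS.
        greediest = max(caps, key=caps.get)
        caps[greediest] = max(1.0, caps[greediest] * (1.0 - k))
    elif gpu_load < min_gpu_load:
        # Step 6.4: GPU underloaded, so raise every VM's cap.
        for vm in caps:
            caps[vm] = caps[vm] * (1.0 + k)
    return caps

caps = {"VM1": 60.0, "VM2": 30.0}
new = adjust_caps(caps,
                  measured_fps={"VM1": 60.0, "VM2": 25.0},
                  minimums={"VM1": 30.0, "VM2": 28.0},
                  gpu_load=0.9, min_gpu_load=0.5)
print(new["VM1"], new["VM2"])
```

Because VM2 misses its minimum, the round reduces the cap of VM1 (the VM with the largest cap), releasing graphics card time that VM2 can then use.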
- In the present invention, the GPU HostOps Dispatch corresponding to each virtual machine is instrumented by its own scheduling executor.
- A globally unique scheduling controller is connected to one or more scheduling executors.
- Compared with the prior art, the present invention has the following advantages. First, no modification is required to the virtual machine applications, the host operating system, the virtual machine operating system, the graphics card drivers, or the virtual machine manager. Existing systems typically require extensive modification of one of the above to achieve similar scheduling capabilities, forcing them to keep evolving to stay compatible with the latest applications, operating systems, or graphics drivers.
- Second, the present invention does not require pausing the machine during installation or uninstallation. This makes the system easy to deploy in commercial settings, especially on commercial servers requiring 7x24 availability.
- Third, the present invention greatly improves graphics card resource scheduling capability between virtual machines while keeping overall performance loss below 5%.
- Figure 1 is a schematic diagram of the modules of the present invention.
- FIG. 2 is a schematic diagram of the architecture of the present invention.
- A resource scheduling system for graphics card virtualization based on instant feedback of application effects includes a scheduling executor module and a scheduling controller module, wherein the scheduling controller module is connected to the scheduling executor module, passes user commands to the scheduling executor, and receives the graphics card status results it returns.
- The scheduling executor is inserted between the host physical graphics card instruction transmitter (GPU HostOps Dispatch) and the host physical graphics application interface (Host GPU API), delaying the corresponding calls and data.
- The scheduling executor module is also responsible for collecting the physical state of the graphics card and/or logical state measurements through the host physical graphics application interface (Host GPU API). This embodiment is directed to a computer game running in a virtual machine, so the collected physical and logical states include the Application GPU Load and the FPS.
- The scheduling controller module includes a console sub-module and a scheduling communicator sub-module, wherein the console sub-module receives user commands that specify the scheduling algorithm and its parameter configuration.
- The console sub-module, periodically or on events, obtains scheduling results from the scheduling communicator sub-module and displays them to the user.
- The scheduling communicator sub-module enables the scheduling controller module to communicate with one or more scheduling executor modules, and is responsible for installing and uninstalling the scheduling executors, passing user commands to them, and the like.
- "On events" means that a target event occurs one or more times at non-constant intervals; the distribution of the events in time can be expressed mathematically as an aperiodic time series.
- The scheduling executor module includes a scheduler sub-module and an application graphics card status monitor sub-module, wherein the scheduler sub-module receives the designation of the scheduling algorithm and its parameter configuration from the user command, runs the corresponding scheduling algorithm according to that configuration, and delays the instructions and data flowing from the GPU HostOps Dispatch to the Host GPU API as needed.
- The application graphics card status monitor sub-module is responsible for collecting graphics card status from the Host GPU API and generating the application graphics status from it; periodically or on events, it feeds the application graphics status results back to the scheduler sub-module and transmits them, via the scheduling communicator sub-module, to the scheduling controller module.
- The application graphics status refers to the physical state of the graphics card and/or logical state measurements associated with the type of application.
- The collected physical and logical states include the Application GPU Load and the FPS.
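As a hedged illustration of how the status monitor could derive these two signals from intercepted frame-present calls, the following Python sketch computes the FPS and the Application GPU Load over a sliding window. The function name, window length, and data layout are assumptions for illustration, not part of the original disclosure:

```python
def frame_metrics(frame_timestamps, gpu_times, window_seconds=1.0):
    """Derive (FPS, Application GPU Load) from intercepted frame data.

    frame_timestamps: completion time of each frame-present call (seconds)
    gpu_times: GPU time consumed by this application per frame (seconds)
    """
    if not frame_timestamps:
        return 0.0, 0.0
    now = frame_timestamps[-1]
    # FPS: frames observed within the most recent window
    recent = [t for t in frame_timestamps if now - t <= window_seconds]
    fps = len(recent) / window_seconds
    # Application GPU Load: fraction of the window the GPU spent on
    # this application's intercepted calls
    gpu_busy = sum(g for t, g in zip(frame_timestamps, gpu_times)
                   if now - t <= window_seconds)
    app_gpu_load = min(1.0, gpu_busy / window_seconds)
    return fps, app_gpu_load
```

For example, ten frames presented 0.1 s apart with 0.05 s of GPU time each would yield an FPS of 10 and an Application GPU Load of about 0.5.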
- This embodiment is directed to the VMware Player 4.0 virtual machine manager, in which the virtual machine image rendering process is the virtual machine process itself, so the user need only select all relevant virtual machine image rendering processes.
- Step 1: The user selects all related virtual machine processes and performs Steps 2 through 6 for each of them:
- Step 2: A new thread (Thread) is created in the process, and the scheduling executor module is loaded into it.
- Step 3: Enter the scheduling executor module at its entry point and initialize it.
- Step 4: Find the set of host physical graphics application interface addresses loaded by the process, modify the code at each address so that it points to the entry of the corresponding processing function (Handler) in the scheduling executor module, and save the contents of each register. This causes the process to run the handler (Handlers) each time it uses a host physical graphics application interface.
- Step 5: After the handler finishes, run the instructions at the saved address of the old host physical graphics application interface and restore the contents of each register, so that the original interface executes correctly once the handler returns.
- Step 6: The thread must not terminate.
- The scheduling executor module is thereby bound to its corresponding virtual machine. After the scheduling communicator sub-module in the scheduling controller module establishes communication with each bound scheduling executor module, the scheduling executor module can transmit status results to the scheduling controller module and respond to user commands issued by it.
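The install/uninstall flow above can be sketched conceptually in Python. The real scheduling executor patches native entry-point code inside the virtual machine process and saves register state; here the Host GPU API is modeled simply as a table of callables, and all names (`SchedulingExecutor`, `api_table`) are illustrative assumptions:

```python
import time

class SchedulingExecutor:
    """Conceptual sketch: interpose handlers between the dispatch layer
    and a Host GPU API exposed as a table of callables."""

    def __init__(self, api_table):
        self.api_table = api_table      # name -> callable (Host GPU API)
        self.originals = {}             # saved originals, for uninstall
        self.gpu_time_history = {}      # name -> measured call durations

    def install(self):
        # Step 4 analogue: redirect each interface to a handler,
        # remembering the original so it can be restored later.
        for name, original in list(self.api_table.items()):
            self.originals[name] = original
            self.api_table[name] = self._make_handler(name, original)

    def _make_handler(self, name, original):
        def handler(*args, **kwargs):
            start = time.perf_counter()          # start timing GPU call
            result = original(*args, **kwargs)   # Step 5 analogue: run original
            elapsed = time.perf_counter() - start
            self.gpu_time_history.setdefault(name, []).append(elapsed)
            return result
        return handler

    def uninstall(self):
        # Step b analogue: restore every original interface entry.
        for name, original in self.originals.items():
            self.api_table[name] = original
        self.originals.clear()
```

After `install()`, every call through `api_table` passes through a handler that times the original interface; `uninstall()` restores the original entries without stopping the process.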
- Step 2: During the run, the Handlers are invoked many times to collect graphics card status and to delay the GPU HostOps Dispatch instructions and data sent to the Host GPU API. For each Handler invocation, perform Steps 2.1 through 2.6.
- Step 2.1: In the Handler specified by the resource scheduling algorithm, predict the current GPU consumption time from the GPU consumption-time history of the corresponding host physical graphics application interface.
- Step 2.2: Using the Host GPU API and the graphics card driver interface, measure the Application GPU Load and the FPS within the current period T, and stop timing the CPU consumption time.
- Step 2.3: Suspend CPU execution for a period of time whose length is computed by the scheduling algorithm from the current CPU consumption time and the predicted current GPU consumption time.
- Step 2.4: Start timing this GPU consumption.
- Step 2.5: Call the original host physical graphics application interface.
- Step 2.6: Stop timing this GPU consumption and update the GPU consumption-time history of the corresponding host physical graphics application interface.
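Steps 2.1 through 2.6 amount to a timed, delay-injecting wrapper around each original API call. The sketch below assumes a moving-average predictor and a pad-to-target-frame-time delay rule; the text specifies only that the delay is computed from the current CPU consumption time and the predicted GPU consumption time, so both formulas are assumptions:

```python
import time

def predict_gpu_time(history, window=8):
    # Step 2.1: predict the current GPU consumption time as a moving
    # average of recent history (the averaging window is an assumption).
    recent = list(history)[-window:]
    return sum(recent) / len(recent) if recent else 0.0

def compute_delay(cpu_time, predicted_gpu_time, target_frame_time):
    # Step 2.3: one plausible rule -- pad the frame out to a target
    # frame time (e.g. 1/60 s for a 60 FPS cap).
    return max(0.0, target_frame_time - cpu_time - predicted_gpu_time)

def handler_cycle(history, cpu_time, call_original, target_frame_time=1/60):
    predicted = predict_gpu_time(history)              # Step 2.1
    time.sleep(compute_delay(cpu_time, predicted,
                             target_frame_time))       # Step 2.3
    start = time.perf_counter()                        # Step 2.4
    result = call_original()                           # Step 2.5
    history.append(time.perf_counter() - start)        # Step 2.6
    return result
```

Because the delay is inserted before the original call, the virtual machine's frame rate is throttled without modifying the guest, the driver, or the hypervisor.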
- Step 3: In each period T, if some virtual machine VMm does not reach its minimum FPS, find the virtual machine with the largest FPS and reduce its minimum-FPS setting. The amount of the reduction depends on the Application GPU Load of the several most recent frames, the FPS being linear in that recent Application GPU Load.
- Step 4: In each period T, if the physical graphics card usage does not reach the minimum GPU Load, increase the minimum-FPS settings of all virtual machines. The amount of the increase depends on the Application GPU Load of the several most recent frames, the FPS being linear in that recent Application GPU Load.
- Step 5: Steps 2 through 4 remain in effect until the user ends the algorithm, replaces it, or uninstalls the scheduling executor module.
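Steps 3 and 4 form a per-period feedback loop over the virtual machines' minimum-FPS settings. A minimal sketch, assuming a gain `k` for the linear dependence on the recent Application GPU Load (the text states linearity but gives no coefficients, so `k` and the field names are illustrative):

```python
def adjust_min_fps(vms, min_gpu_load, gpu_load, k=0.5):
    """Run once per period T.

    Each vm is a dict with 'fps', 'min_fps', and 'recent_app_gpu_load'
    (mean Application GPU Load over the last few frames).
    """
    # Step 3: some VM misses its minimum FPS -> take headroom from the
    # VM currently achieving the highest FPS.
    starved = [vm for vm in vms if vm["fps"] < vm["min_fps"]]
    if starved:
        richest = max(vms, key=lambda vm: vm["fps"])
        richest["min_fps"] -= k * richest["recent_app_gpu_load"]
    # Step 4: the physical GPU is underused -> raise every VM's floor.
    elif gpu_load < min_gpu_load:
        for vm in vms:
            vm["min_fps"] += k * vm["recent_app_gpu_load"]
```

The two branches pull in opposite directions, so the settings converge toward a point where every virtual machine meets its minimum FPS while the physical graphics card stays near its target utilization.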
- The scheduling executor module is uninstalled as follows:
- Step a: After each scheduling executor module receives the uninstall command, it starts the uninstall process of Steps b and c.
- Step b: Restore the set of host physical graphics application interface addresses loaded by the process, modifying the code at each address back to the original content of the application interface, so that the process runs the original application interface logic each time it uses a host physical graphics application interface.
- Step c: End the thread that was inserted into the corresponding virtual machine process and bound to the scheduling executor module, thereby uninstalling it.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Debugging And Monitoring (AREA)
- Processing Or Creating Images (AREA)
- Stored Programmes (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/563,951 US10922140B2 (en) | 2012-07-26 | 2013-06-19 | Resource scheduling system and method under graphics processing unit virtualization based on instant feedback of application effect |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210261862.0 | 2012-07-26 | ||
CN201210261862.0A CN102890643B (zh) | 2012-07-26 | 2012-07-26 | 基于应用效果即时反馈的显卡虚拟化下的资源调度系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014015725A1 true WO2014015725A1 (zh) | 2014-01-30 |
Family
ID=47534151
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/077457 WO2014015725A1 (zh) | 2012-07-26 | 2013-06-19 | 基于应用效果即时反馈的显卡虚拟化下资源调度系统、方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10922140B2 (zh) |
CN (1) | CN102890643B (zh) |
WO (1) | WO2014015725A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111524058A (zh) * | 2019-02-01 | 2020-08-11 | 纬创资通股份有限公司 | 硬件加速方法及硬件加速系统 |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102890643B (zh) * | 2012-07-26 | 2015-01-21 | 上海交通大学 | 基于应用效果即时反馈的显卡虚拟化下的资源调度系统 |
CN103645872A (zh) * | 2013-12-08 | 2014-03-19 | 侯丰花 | 一种显卡的优化计算方法 |
CN108345492A (zh) * | 2014-04-08 | 2018-07-31 | 华为技术有限公司 | 一种虚拟化环境中的数据通信的方法、装置及处理器 |
US9690928B2 (en) * | 2014-10-25 | 2017-06-27 | Mcafee, Inc. | Computing platform security methods and apparatus |
CN105426310B (zh) * | 2015-11-27 | 2018-06-26 | 北京奇虎科技有限公司 | 一种检测目标进程的性能的方法和装置 |
JP6823251B2 (ja) * | 2016-10-13 | 2021-02-03 | 富士通株式会社 | 情報処理装置、情報処理方法及びプログラム |
CN108073440B (zh) * | 2016-11-18 | 2023-07-07 | 南京中兴新软件有限责任公司 | 一种虚拟化环境下的显卡管理方法、装置及系统 |
US11144357B2 (en) * | 2018-05-25 | 2021-10-12 | International Business Machines Corporation | Selecting hardware accelerators based on score |
US10977098B2 (en) | 2018-08-14 | 2021-04-13 | International Business Machines Corporation | Automatically deploying hardware accelerators based on requests from users |
US10892944B2 (en) | 2018-11-29 | 2021-01-12 | International Business Machines Corporation | Selecting and using a cloud-based hardware accelerator |
CN109656714B (zh) * | 2018-12-04 | 2022-10-28 | 成都雨云科技有限公司 | 一种虚拟化显卡的gpu资源调度方法 |
CN109712060B (zh) * | 2018-12-04 | 2022-12-23 | 成都雨云科技有限公司 | 一种基于gpu容器技术的云桌面显卡共享方法及系统 |
CN109934327B (zh) * | 2019-02-18 | 2020-03-31 | 星汉智能科技股份有限公司 | 一种智能卡的计时方法及系统 |
CN110532071B (zh) * | 2019-07-12 | 2023-06-09 | 上海大学 | 一种基于gpu的多应用调度系统和方法 |
CN111522692B (zh) * | 2020-04-20 | 2023-05-30 | 浙江大学 | 一种基于虚拟机的多操作系统输入及输出设备冗余保障系统 |
CN112230931B (zh) * | 2020-10-22 | 2021-11-02 | 上海壁仞智能科技有限公司 | 适用于图形处理器的二次卸载的编译方法、装置和介质 |
US20220180588A1 (en) * | 2020-12-07 | 2022-06-09 | Intel Corporation | Efficient memory space sharing of resources for cloud rendering |
CN113674132B (zh) * | 2021-07-23 | 2024-05-14 | 中标软件有限公司 | 一种通过检测显卡能力切换窗口管理渲染后端的方法 |
CN116157186A (zh) * | 2021-09-23 | 2023-05-23 | 谷歌有限责任公司 | 基于游戏交互状态的自动化帧调步 |
CN113975816B (zh) * | 2021-12-24 | 2022-11-25 | 北京蔚领时代科技有限公司 | 一种基于hook的通过DirectX接口使用显卡的显卡分配方法 |
CN115686758B (zh) * | 2023-01-03 | 2023-03-21 | 麒麟软件有限公司 | 一种基于帧统计的VirtIO-GPU性能可控方法 |
CN116777730B (zh) * | 2023-08-25 | 2023-10-31 | 湖南马栏山视频先进技术研究院有限公司 | 一种基于资源调度的gpu效能提高方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1700171A (zh) * | 2004-04-30 | 2005-11-23 | 微软公司 | 提供从虚拟环境对硬件的直接访问 |
CN101419558A (zh) * | 2008-11-13 | 2009-04-29 | 湖南大学 | Cuda图形子系统虚拟化方法 |
CN102890643A (zh) * | 2012-07-26 | 2013-01-23 | 上海交通大学 | 基于应用效果即时反馈的显卡虚拟化下的资源调度系统 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7673304B2 (en) * | 2003-02-18 | 2010-03-02 | Microsoft Corporation | Multithreaded kernel for graphics processing unit |
US8274518B2 (en) * | 2004-12-30 | 2012-09-25 | Microsoft Corporation | Systems and methods for virtualizing graphics subsystems |
US8341624B1 (en) * | 2006-09-28 | 2012-12-25 | Teradici Corporation | Scheduling a virtual machine resource based on quality prediction of encoded transmission of images generated by the virtual machine |
US7650603B2 (en) * | 2005-07-08 | 2010-01-19 | Microsoft Corporation | Resource management for virtualization of graphics adapters |
US8310491B2 (en) * | 2007-06-07 | 2012-11-13 | Apple Inc. | Asynchronous notifications for concurrent graphics operations |
US8122229B2 (en) * | 2007-09-12 | 2012-02-21 | Convey Computer | Dispatch mechanism for dispatching instructions from a host processor to a co-processor |
US8284205B2 (en) * | 2007-10-24 | 2012-10-09 | Apple Inc. | Methods and apparatuses for load balancing between multiple processing units |
KR100962531B1 (ko) * | 2007-12-11 | 2010-06-15 | 한국전자통신연구원 | 동적 로드 밸런싱을 지원하는 멀티 쓰레딩 프레임워크를 수행하는 장치 및 이를 이용한 프로세싱 방법 |
GB2462860B (en) * | 2008-08-22 | 2012-05-16 | Advanced Risc Mach Ltd | Apparatus and method for communicating between a central processing unit and a graphics processing unit |
US8368701B2 (en) * | 2008-11-06 | 2013-02-05 | Via Technologies, Inc. | Metaprocessor for GPU control and synchronization in a multiprocessor environment |
US8910153B2 (en) * | 2009-07-13 | 2014-12-09 | Hewlett-Packard Development Company, L. P. | Managing virtualized accelerators using admission control, load balancing and scheduling |
US20110102443A1 (en) * | 2009-11-04 | 2011-05-05 | Microsoft Corporation | Virtualized GPU in a Virtual Machine Environment |
CN101706742B (zh) * | 2009-11-20 | 2012-11-21 | 北京航空航天大学 | 一种基于多核动态划分的非对称虚拟机i/o调度方法 |
US8669990B2 (en) * | 2009-12-31 | 2014-03-11 | Intel Corporation | Sharing resources between a CPU and GPU |
EP2383648B1 (en) * | 2010-04-28 | 2020-02-19 | Telefonaktiebolaget LM Ericsson (publ) | Technique for GPU command scheduling |
CN102262557B (zh) * | 2010-05-25 | 2015-01-21 | 运软网络科技(上海)有限公司 | 通过总线架构构建虚拟机监控器的方法及性能服务框架 |
CN101968749B (zh) * | 2010-09-26 | 2013-01-02 | 华中科技大学 | 虚拟机过度分配环境下的mpi消息接收方法 |
US8463980B2 (en) * | 2010-09-30 | 2013-06-11 | Microsoft Corporation | Shared memory between child and parent partitions |
US8970603B2 (en) * | 2010-09-30 | 2015-03-03 | Microsoft Technology Licensing, Llc | Dynamic virtual device failure recovery |
US9170843B2 (en) * | 2011-09-24 | 2015-10-27 | Elwha Llc | Data handling apparatus adapted for scheduling operations according to resource allocation based on entitlement |
US9135189B2 (en) * | 2011-09-07 | 2015-09-15 | Microsoft Technology Licensing, Llc | Delivering GPU resources across machine boundaries |
US8941670B2 (en) * | 2012-01-17 | 2015-01-27 | Microsoft Corporation | Para-virtualized high-performance computing and GDI acceleration |
EP2742425A1 (en) * | 2012-05-29 | 2014-06-18 | Qatar Foundation | Graphics processing unit controller, host system, and methods |
- 2012-07-26 CN CN201210261862.0A patent/CN102890643B/zh active Active
- 2013-06-19 WO PCT/CN2013/077457 patent/WO2014015725A1/zh active Application Filing
- 2013-06-19 US US15/563,951 patent/US10922140B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1700171A (zh) * | 2004-04-30 | 2005-11-23 | 微软公司 | 提供从虚拟环境对硬件的直接访问 |
CN101419558A (zh) * | 2008-11-13 | 2009-04-29 | 湖南大学 | Cuda图形子系统虚拟化方法 |
CN102890643A (zh) * | 2012-07-26 | 2013-01-23 | 上海交通大学 | 基于应用效果即时反馈的显卡虚拟化下的资源调度系统 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111524058A (zh) * | 2019-02-01 | 2020-08-11 | 纬创资通股份有限公司 | 硬件加速方法及硬件加速系统 |
CN111524058B (zh) * | 2019-02-01 | 2023-08-22 | 纬创资通股份有限公司 | 硬件加速方法及硬件加速系统 |
Also Published As
Publication number | Publication date |
---|---|
CN102890643A (zh) | 2013-01-23 |
CN102890643B (zh) | 2015-01-21 |
US20180246770A1 (en) | 2018-08-30 |
US10922140B2 (en) | 2021-02-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014015725A1 (zh) | 基于应用效果即时反馈的显卡虚拟化下资源调度系统、方法 | |
US11797327B2 (en) | Dynamic virtual machine sizing | |
Qi et al. | VGRIS: Virtualized GPU resource isolation and scheduling in cloud gaming | |
EP3039540B1 (en) | Virtual machine monitor configured to support latency sensitive virtual machines | |
US9405585B2 (en) | Management of heterogeneous workloads | |
EP3161628B1 (en) | Intelligent gpu scheduling in a virtualization environment | |
Zhang et al. | vGASA: Adaptive scheduling algorithm of virtualized GPU resource in cloud gaming | |
US10970129B2 (en) | Intelligent GPU scheduling in a virtualization environment | |
US20090077564A1 (en) | Fast context switching using virtual cpus | |
CN105550040B (zh) | 基于kvm平台的虚拟机cpu资源预留算法 | |
US9189293B2 (en) | Computer, virtualization mechanism, and scheduling method | |
Bai et al. | Task-aware based co-scheduling for virtual machine system | |
US10846088B2 (en) | Control of instruction execution in a data processor | |
US20090241112A1 (en) | Recording medium recording virtual machine control program and virtual machine system | |
US20200334075A1 (en) | Process scheduling in a processing system having at least one processor and shared hardware resources | |
US20130125131A1 (en) | Multi-core processor system, thread control method, and computer product | |
Zhao et al. | Efficient sharing and fine-grained scheduling of virtualized GPU resources | |
Yu et al. | Colab: a collaborative multi-factor scheduler for asymmetric multicore processors | |
US20220382587A1 (en) | Data processing systems | |
Lin et al. | Improving GPOS real-time responsiveness using vCPU migration in an embedded multicore virtualization platform | |
Elmougy et al. | Diagnosing the interference on cpu-gpu synchronization caused by cpu sharing in multi-tenant gpu clouds | |
Lee et al. | Interrupt handler migration and direct interrupt scheduling for rapid scheduling of interrupt-driven tasks | |
CN110333899B (zh) | 数据处理方法、装置和存储介质 | |
US20230229473A1 (en) | Adaptive idling of virtual central processing unit | |
Xia et al. | PaS: A preemption-aware scheduling interface for improving interactive performance in consolidated virtual machine environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13822194; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 07/04/2015) |
122 | Ep: pct application non-entry in european phase | Ref document number: 13822194; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 15563951; Country of ref document: US |