CN103309748B - Adaptive scheduling host system and scheduling method of GPU virtual resources in cloud game - Google Patents
Adaptive scheduling host system and scheduling method of GPU virtual resources in cloud game
- Publication number
- CN103309748B (application CN201310244765.5A)
- Authority
- CN
- China
- Prior art keywords
- virtual machine
- gpu
- module
- control module
- dispatching control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
An adaptive scheduling host system for GPU (Graphics Processing Unit) virtual resources in a cloud game comprises a scheduling control module and, connected to it respectively, an agent module, a graphics application programming interface analysis module and a virtual machine list, wherein the agent module consists of a scheduler module and a monitoring module and all the modules are deployed in a host. The scheduling control module is responsible for the information communication among the other modules; it obtains the information of all running virtual machines from the virtual machine list and transmits it to the modules that require it. The system achieves sound management of the GPU resources in the virtual machines and increases the utilization of the GPU resources: based on a fairness-oriented QoS (Quality of Service) adaptive scheduling method, sufficient GPU resources are distributed to all running virtual machines in a fair manner, the QoS requirements are met, and the utilization of the GPU as a whole is maximized.
Description
Technical field
The present invention relates to an adaptive resource-scheduling host system in the field of GPU virtualization, applied to cloud gaming platforms, and specifically to a scheduling system that intervenes at the operating system's graphics API level.
Background art
The gradual maturing of GPU virtualization technology has facilitated cloud gaming, so cloud gaming applications have become popular among cloud services. However, because the default GPU resource sharing mechanisms perform poorly, the user experience of cloud games is inevitably degraded by real-time, unpredictable factors such as the rendering of complex game scenes.
In video stream quality analysis, FPS (Frames Per Second) is defined as the number of frames transmitted per second; it measures the amount of information used to store and display dynamic video, i.e. the number of frames of an animation or video.
In the cloud gaming user experience, the QoS requirement refers to the minimum FPS and the maximum delay that guarantee normal use by the user.
The main factors affecting game execution in a virtualized environment are the CPU running time, the time the GPU spends on synchronized game rendering, unpredictable changes of the game scene, and interference between virtual machines. These factors cause the game FPS to fall short of the QoS requirement and degrade the user experience.
The resource sharing mechanisms of existing virtualization solutions perform poorly, which is mainly reflected in low resource utilization and failure to meet service-level agreement (QoS) requirements. The resource scheduling mechanism in most current virtual machines is first-in first-out, which causes some virtual machines to miss their QoS requirements; moreover, running multiple virtual machines on a single server introduces performance inconsistency among the game applications in the individual virtual machines, which is especially pronounced on cloud gaming platforms.
A search of the prior art shows that Carnegie Mellon University has addressed this problem for GPU-accelerated windowing systems. IBM Research Tokyo developed an automatic resource scheduling system to accelerate stencil applications on general-purpose GPU clusters. The University of North Carolina at Chapel Hill proposed two methods for integrating the GPU into soft real-time multiprocessor systems to improve overall system performance. Stony Brook University (State University of New York) proposed GERM, which aims to provide a fair GPU resource allocation algorithm and uses fixed frame rate methods such as vertical synchronization to keep games from over-relying on hardware resources. These approaches have the following shortcomings: GERM does not consider QoS requirements and fixed frame rate methods cannot use hardware resources effectively, while TimeGraph cannot guarantee that QoS requirements are met, especially when the important load of all virtual machines is small.
Summary of the invention
To address the shortcomings of existing GPU resource scheduling methods, the present invention proposes a host system and method for adaptive scheduling of GPU virtual resources in a cloud game. By using GPU para-virtualization and library injection, a lightweight scheduling control is implanted in the host, adopting a non-intrusive ("green") software approach that requires no changes to the graphics drivers of the host, the guest operating system or the applications. The scheduling algorithm mitigates the impact of real-time uncertain factors and ensures high utilization of system resources. Specifically, the fairness-based QoS adaptive scheduling algorithm not only ensures that each virtual machine meets its basic QoS requirement, but also reallocates GPU resources, handing the GPU resources of virtual machines with higher FPS to those that do not meet the QoS requirement. The fairness-based QoS adaptive scheduling algorithm therefore satisfies the QoS requirements, achieves fair scheduling and significantly improves GPU utilization.
The technical solution of the present invention is as follows:
A host system for adaptive scheduling of GPU virtual resources in a cloud game, characterized in that it comprises a scheduling control module and, connected to the scheduling control module respectively, an agent module, a graphics application programming interface (API) analysis module and a virtual machine list, wherein the agent module consists of a scheduler module and a monitoring module, and all modules are deployed in the host;
The scheduling control module is responsible for the communication of information between the other modules: the scheduling control module obtains the information of all running virtual machines from the virtual machine list and sends it to the modules that need it;
The scheduling control module also has the function of automatically adjusting the FPS parameter that configures the scheduling method, so that the scheduling method runs properly;
The monitoring module obtains the runtime information of the virtual machines from the scheduling control module and sends it to the scheduler module;
The scheduler module receives commands from the upper-layer GPU command dispatcher, processes the received commands according to the virtual machine runtime information sent by the monitoring module, and finally sends them to the lower-layer host GPU driver;
The agent module, together with the scheduling control module, determines how much GPU resource needs to be allocated to each virtual machine;
The graphics API analysis module is responsible for computing the execution overhead of the GPU commands produced through the graphics API and for sending the result to the scheduling control module, so that the other modules that need this information can obtain it.
The virtual machine list stores all the virtual machines running in the host system; each virtual machine can be indexed by the system, and when a new virtual machine is added and starts running, the system automatically adds it to the virtual machine list and reschedules the GPU resources.
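As a concrete illustration of the virtual machine list described above, a minimal C++ sketch might look as follows; the type and field names are illustrative assumptions and do not appear in the patent:

```cpp
#include <string>
#include <unordered_map>

// Per-VM record kept by the virtual machine list (illustrative fields).
struct VmRecord {
    std::string id;        // index used by the system to look up the VM
    double      fps;       // most recently measured frames per second
    double      gpuShare;  // fraction of GPU time currently granted
};

class VmList {
public:
    // Called when a new virtual machine starts running: register it and
    // trigger a rescheduling of GPU resources across all registered VMs.
    void add(const VmRecord& vm) {
        vms_[vm.id] = vm;
        rescheduleGpu();
    }
    VmRecord* find(const std::string& id) {
        auto it = vms_.find(id);
        return it == vms_.end() ? nullptr : &it->second;
    }
private:
    void rescheduleGpu() { /* redistribute gpuShare across vms_ (omitted) */ }
    std::unordered_map<std::string, VmRecord> vms_;
};
```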
A scheduling method using the above host system for adaptive scheduling of GPU virtual resources in a cloud game, characterized in that the method comprises the following steps:
1. The user sets the target frames per second in the scheduling control module; the virtual machine list records the runtime information of the virtual machines, and the scheduling control module obtains the information of all running virtual machines from the virtual machine list;
2. The graphics API analysis module computes the execution overhead of the GPU commands and sends it to the scheduling control module;
3. The monitoring module obtains, via the scheduling control module, the runtime information of the virtual machines recorded in the virtual machine list and the GPU command execution overhead computed by the graphics API analysis module, and sends them together to the scheduler module;
4. The scheduler module receives commands from the GPU command dispatcher, computes the running time of the sleep function from the virtual machine runtime information obtained from the monitoring module and the GPU command execution overhead, processes the received commands, and finally sends them to the lower-layer host GPU driver.
The concrete sub-steps of step 4 are as follows (see the sketch after this list):
When the virtual machine runtime information obtained from the monitoring module shows an FPS greater than 30 fps, the scheduler module inserts a sleep function before calling the frame rendering function and then calls the frame rendering function to render the frame;
When the virtual machine runtime information obtained from the monitoring module shows an FPS of less than 30 fps, the scheduler module allocates more GPU resources to that virtual machine and then calls the frame rendering function to render the frame;
When the virtual machine runtime information obtained from the monitoring module shows an FPS equal to 30 fps, the scheduler module directly calls the frame rendering function to render the frame.
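The three cases of step 4 can be summarized in a short C++ sketch. The function and variable names below are assumptions for illustration only; the patent itself only states the decision rule (insert a sleep when the FPS is above the 30 fps target, grant more GPU resource when it is below, render directly when it is exactly met):

```cpp
#include <chrono>
#include <thread>

constexpr double kTargetFps = 30.0;  // QoS target set by the user

// Illustrative hook called by the scheduler module before each frame is rendered.
// measuredFps comes from the monitoring module; sleepMs is derived from the
// GPU command overhead reported by the graphics API analysis module.
void scheduleFrame(double measuredFps, double sleepMs,
                   void (*grantMoreGpu)(), void (*renderFrame)()) {
    if (measuredFps > kTargetFps) {
        // VM is ahead of the target: delay the frame before rendering it.
        std::this_thread::sleep_for(
            std::chrono::duration<double, std::milli>(sleepMs));
    } else if (measuredFps < kTargetFps) {
        // VM is behind the target: hand it more GPU resource first.
        grantMoreGpu();
    }
    renderFrame();  // at exactly 30 fps the frame is rendered directly
}
```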
The principle of the invention is as follows:
(1) A para-virtualization-based system architecture; (2) an adaptive scheduling algorithm. The para-virtualization-based system architecture is a lightweight scheduler module implanted through host library injection, which requires no modification of the host's graphics API, the guest operating system or the applications. The fairness-based QoS adaptive scheduling algorithm meets the minimum FPS and the maximum delay required by the cloud gaming user experience while emphasizing the fairness of resource scheduling.
The computation time of Present calls while a virtual machine is running is very stable; even when it fluctuates, the change is gradual and smooth. We therefore assume that the agent of each virtual machine can predict the computation amount of the next Present call from that virtual machine's history, and the algorithm uses the arithmetic mean of the last 20 consecutive Present call times as the prediction for the next frame. To stabilize the frame length of each virtual machine smoothly at the configured FPS value, each frame is extended by delaying its final Present call. The delay is achieved by inserting a Sleep call before each Present call, the amount of sleep being given by a formula. We found that the Present time can change significantly when there is heavy competition for GPU resources, so we insert a flush command before the Sleep call to obtain a more accurate GPU computation time.
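The paragraph above says the sleep amount "is given by a formula" without reproducing it; a plausible sketch, under the assumption that the sleep time is the target frame period minus the work time predicted from the mean of the last 20 Present call times, is shown below (the class and method names are illustrative):

```cpp
#include <algorithm>
#include <deque>
#include <numeric>

class PresentPredictor {
public:
    // Record the measured duration of the latest Present call (milliseconds).
    void record(double presentMs) {
        history_.push_back(presentMs);
        if (history_.size() > 20) history_.pop_front();  // keep the last 20 samples
    }
    // Arithmetic mean of the last 20 Present times, used as the prediction
    // for the next frame.
    double predictedWorkMs() const {
        if (history_.empty()) return 0.0;
        return std::accumulate(history_.begin(), history_.end(), 0.0)
               / history_.size();
    }
    // Sleep inserted before the next Present so the frame length settles at
    // the configured FPS (assumed form: target period minus predicted work).
    double sleepMs(double targetFps) const {
        return std::max(0.0, 1000.0 / targetFps - predictedWorkMs());
    }
private:
    std::deque<double> history_;
};
```

Issuing a flush before the Sleep, as described above, would make the recorded Present durations reflect the actual GPU computation time more accurately.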
Realization of fairness:
1. The fairness-based QoS adaptive scheduling algorithm maintains a virtual machine list, which records in detail the GPU resource occupancy of the currently running virtual machines and the FPS of the game running in each virtual machine.
2. The graphics API analyzer analyzes in real time the graphics information of the games running in the virtual machines to compute their FPS and to predict the execution overhead.
3. The scheduling control module, on the one hand, writes the graphics information of the running games, such as the FPS, into the virtual machine list via the graphics API analyzer; on the other hand, based on the virtual machine runtime information in the virtual machine list, it decides which GPU resources should be released from the virtual machines whose FPS is too high and reallocates them to the virtual machines below the QoS standard.
4. The scheduler module carries out the GPU resource release and reallocation decided in step 3: on the one hand it releases GPU resources by inserting Sleep() to extend the frame length, and on the other hand it allocates the idle GPU resources freed by the inserted Sleep to the virtual machines below the QoS standard (see the sketch after this list).
5. In the above release-and-reallocation process, the virtual machines with an excessively high FPS bear the redistributed portion of GPU resources in equal proportion. The virtual machine holding the most GPU resources ranks first in GPU utilization, so this GPU resource redistribution algorithm is relatively fair.
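A minimal C++ sketch of the release-and-reallocation step is given below, under the assumption that the deficit of the below-QoS virtual machines is split equally among the virtual machines running above the target FPS; the type and function names are illustrative and not taken from the patent:

```cpp
#include <vector>

struct Vm {
    double fps;       // measured frames per second
    double gpuShare;  // current share of GPU time
};

// Move GPU share from VMs running above targetFps to those running below it.
// `perFpsCost` plays the role of the t_i ratios: GPU share needed per 1 fps.
void reallocate(std::vector<Vm>& vms, double targetFps, double perFpsCost) {
    double deficit = 0.0;
    int donors = 0;
    for (const Vm& vm : vms) {
        if (vm.fps < targetFps)      deficit += (targetFps - vm.fps) * perFpsCost;
        else if (vm.fps > targetFps) ++donors;
    }
    if (donors == 0 || deficit == 0.0) return;
    double perDonor = deficit / donors;  // deficit split equally among donating VMs
    for (Vm& vm : vms) {
        if (vm.fps > targetFps)
            vm.gpuShare -= perDonor;                               // release
        else if (vm.fps < targetFps)
            vm.gpuShare += (targetFps - vm.fps) * perFpsCost;      // reallocate
    }
}
```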
Compared with the prior art, the beneficial effects of the invention are that it achieves sound management of the GPU resources in the virtual machines and improves the GPU resource utilization; the fairness-based QoS adaptive scheduling method allocates sufficient GPU resources to all running virtual machines in a fair manner, ensuring that the QoS requirements are met while maximizing the overall GPU utilization.
Brief description of the drawings
Fig. 1 is an architecture diagram of a virtual resource scheduling system in the prior art.
Fig. 2 is a schematic diagram of the host system for adaptive scheduling of GPU virtual resources in a cloud game according to the present invention.
Fig. 3 shows the composition of a frame and the frame delay.
Fig. 4 is the flow chart of the scheduling method of the host system for adaptive scheduling of GPU virtual resources.
Fig. 5 shows the impact of different control system parameters on performance; (a) and (b) are two different test programs.
Fig. 6 shows GPU resource reallocation between two virtual machines.
Fig. 7 shows the performance evaluation results of the fairness-based QoS adaptive scheduling method.
Detailed description of the invention
The embodiments of the invention are elaborated below. The present embodiment is implemented on the premise of the technical solution of the invention and gives a detailed implementation and a concrete operating procedure, but the platforms to which the invention applies are not limited to the following embodiment.
Fig. 1 is an architecture diagram of a virtual resource scheduling system in the prior art; Fig. 2 is a schematic diagram of the host system for adaptive scheduling of GPU virtual resources in a cloud game according to the present invention. All modules of the present system are deployed in the host. The system is a scheduling system located between the GPU and the host GPU application programming interface; it comprises a scheduling control module 1 and, connected to the scheduling control module 1 respectively, an agent module, a graphics API analysis module 4 and a virtual machine list 5, the agent module consisting of a scheduler module 2 and a monitoring module 3.
The scheduling control module 1 obtains performance feedback from all running virtual machines and automatically adjusts the configured parameters of the scheduling method so that it runs properly. Each running virtual machine is served by an agent module consisting of a scheduler module 2 and a monitoring module 3: the monitoring module sends the real-time performance information of its virtual machine to the scheduling control module 1, and the scheduler module receives commands from the driver and schedules the GPU computation tasks. The agent module usually determines, together with the scheduling control module 1, how much GPU resource needs to be allocated to the virtual machine. The graphics API analysis module 4 computes the execution overhead of the GPU commands, which are produced by the graphics API, such as the Present command. The virtual machine list contains all the virtual machines running in the host system; each virtual machine can be indexed by the system, and when a new virtual machine starts running, the host system automatically adds it to the virtual machine list and reschedules the GPU resources. The para-virtualization-based system architecture described here adopts Type II virtualization. When a guest application calls a standard GPU rendering API, the guest GPU computation library prepares the corresponding GPU buffers in main memory and issues GPU command packets; these packets are pushed into the virtual GPU I/O queue and processed by the host one by one, and finally the dispatch layer sends the commands asynchronously to the host driver. Direct memory access is used to transfer a buffer from guest memory to GPU memory.
As shown in Fig. 3, the frame delay in this method is defined as the time difference between the returns of two adjacent Present calls. Each frame consists of four parts: GPU rendering, Present(), Sleep(), and object computation together with image drawing; because the GPU rendering, the object computation and the image drawing vary, the lengths of different frames differ. As shown in the pseudocode of the algorithm in Fig. 4, the algorithm adjusts the time of one component of the frame, Sleep(), to stabilize the size of each frame, so that in the end all frames stay at roughly the same length. Specifically, the fairness-based QoS adaptive scheduling algorithm has two aspects: the realization of adaptive scheduling and the realization of scheduling fairness. The logical flow of the method is shown in Fig. 4.
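A small sketch of how the frame delay defined above (the interval between the returns of two adjacent Present calls) could be measured is shown below; it is an illustration under that definition, not the patent's implementation:

```cpp
#include <chrono>

// Measures the frame delay as the time between the returns of two adjacent
// Present calls; the reciprocal gives the instantaneous FPS.
class FrameDelayMeter {
public:
    // Call immediately after each Present returns; result in milliseconds.
    double onPresentReturned() {
        auto now = std::chrono::steady_clock::now();
        double delayMs = 0.0;
        if (hasPrev_) {
            delayMs = std::chrono::duration<double, std::milli>(now - prev_).count();
        }
        prev_ = now;
        hasPrev_ = true;
        return delayMs;  // frame delay = GPU render + Present + Sleep + other work
    }
private:
    std::chrono::steady_clock::time_point prev_{};
    bool hasPrev_ = false;
};
```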
Realization of the adaptive scheduling part: 1. The user first sets in the scheduling control module the FPS (frames per second) that the system should reach; the virtual machine list records in detail the runtime information of the virtual machines, and the scheduling control module is responsible for receiving the virtual machine runtime information from the virtual machine list. 2. The graphics API analysis module computes the execution overhead of the GPU commands (for example the frame rendering command) and sends it to the scheduling control module. 3. The monitoring module obtains the virtual machine runtime information and the GPU command overhead from the scheduling control module and sends them together to the scheduler module. 4. The scheduler module receives commands from the upper-layer GPU command dispatcher, computes the running time of the sleep function from the virtual machine runtime information obtained from the monitoring module and the GPU command execution overhead, processes the received commands, and finally sends them to the underlying host GPU driver: if the information shows an FPS greater than 30, a sleep function is inserted before the frame rendering function is called, thereby extending the frame delay, and then the frame rendering function is called to render the frame; if the FPS is less than 30, the scheduler module allocates more GPU resources to that virtual machine (whose FPS is below 30) and then calls the frame rendering function; if the FPS equals 30, the frame rendering function is called directly.
The computation time of Present calls while a virtual machine is running is very stable; even when it fluctuates, the change is gradual and smooth. We therefore assume that the agent of each virtual machine can predict the computation amount of the next Present call from that virtual machine's history, and the algorithm uses the arithmetic mean of the last 20 consecutive Present call times as the prediction for the next frame. To stabilize the frame length of each virtual machine smoothly at the configured FPS value, each frame is extended by delaying its final Present call. The delay is achieved by inserting a Sleep call before each Present call, the amount of sleep being given by a formula. We found that the Present time can change significantly when there is heavy competition for GPU resources, so we insert a flush command before the Sleep call to obtain a more accurate GPU computation time.
Here a simple example of the invention is given. Assume the standard FPS is 30 and the system runs three virtual machines V1, V2 and V3 in total, whose historical ratios of GPU usage to FPS are t1, t2 and t3 respectively. Suppose that initially V1 runs at 20 FPS, V2 at 40 FPS and V3 at 60 FPS. First, the scheduling control module receives the performance parameters sent from the monitor; once it detects that V1 does not meet the QoS requirement, the fairness-based QoS adaptive scheduling algorithm starts working and first computes the GPU resource to be reallocated to V1, namely 10·t1. The other two virtual machines V2 and V3 then each prepare to release 5·t1 of resources.
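Reading t1 as the GPU time V1 needs per frame (its historical ratio of GPU usage to FPS), the quantities in this example work out as follows; the explicit formula is an assumption, since the text above only states the resulting amounts:

$$
\Delta_{V_1} = (\mathrm{FPS}_{\mathrm{target}} - \mathrm{FPS}_{V_1})\, t_1 = (30 - 20)\, t_1 = 10\, t_1,
\qquad
\Delta_{V_2} = \Delta_{V_3} = \tfrac{1}{2}\cdot 10\, t_1 = 5\, t_1 .
$$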
The concrete operating platform of this embodiment is an i7-2600K 3.4 GHz CPU, 16 GB RAM and an AMD HD6750 graphics card with 2 GB of video memory. Both the host operating system and the guest operating systems use Windows 7 x64, the VMware version is 4.0, each guest operating system has two cores and 2 GB of RAM, and the screen resolution is 1280*720 (high-definition quality). To simplify the performance comparison, this embodiment disables the swap space and the GPU-accelerated windowing system on the host.
This embodiment uses two types of load: the first is an "ideal model game" and the other is a "real model game"; the three game workloads are labelled A, B and C. We first tested and evaluated the effect of the control system using PI control. Fig. 5 shows the performance of the control system at kp=0.5, kp=0.25, kp=0.1 and ki=0.5, ki=0.1, ki=0.05 respectively, from which it can be concluded that the QoS adaptive scheduling algorithm improves the performance of the system.
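The text does not spell out which quantity the PI controller regulates; a minimal sketch, under the assumption that it adjusts the per-frame sleep time so that the measured FPS tracks the target FPS, with kp and ki as examined in Fig. 5, might be:

```cpp
// Simple PI controller that nudges the per-frame sleep time so the measured
// FPS converges to the target (an assumed use of the kp/ki parameters).
class PiController {
public:
    PiController(double kp, double ki) : kp_(kp), ki_(ki) {}

    // error > 0 means the VM runs faster than the target and should sleep more.
    double updateSleepMs(double targetFps, double measuredFps, double currentSleepMs) {
        double error = measuredFps - targetFps;
        integral_ += error;
        double adjustment = kp_ * error + ki_ * integral_;
        double next = currentSleepMs + adjustment;
        return next > 0.0 ? next : 0.0;  // sleep time cannot be negative
    }
private:
    double kp_;
    double ki_;
    double integral_ = 0.0;
};
```

For instance, kp=0.25 and ki=0.1 would correspond to one of the parameter pairs examined in Fig. 5.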
Analysis of the effect of Sleep and of GPU reallocation: the library injection technique developed for the system is used to insert the sleep function when the GPU API is called. We evaluated the effect of the sleep function on controlling the FPS and on the GPU resource utilization. In this test only one virtual machine runs under the control system, the display is set to 1920*1200 resolution, the initial sleep length is set to 300 ms per frame, and afterwards it is decreased by 1 ms per second. As shown in Fig. 6, the experimental data show that the GPU and CPU resources of one virtual machine can be obtained by the other virtual machines, i.e. the system architecture and algorithm can effectively schedule GPU resources among multiple virtual machines.
Finally, the scheduling algorithm performance of the system was tested and evaluated under the fairness-based QoS adaptive scheduling algorithm. At first, C has the highest FPS, and the FPS of B is below 30 because of GPU resource competition. The FSA scheduling strategy then takes effect: in the 2nd second, the system observes that the FPS of B is about 28 while the other two loads are above 30. The system releases GPU resources from A and C and reallocates them to B, so in the 3rd second the FPS of B rises to 33 while the other two loads decrease only slightly; throughout the scheduling process, C keeps the highest FPS and the FPS of A never falls below the configured standard. As shown in Fig. 7, the GPU resource utilization reaches a maximum of 99.1%, a minimum of 85.2% and an average of 92.7%; although a small fraction of GPU resources is still wasted, the fairness-based QoS adaptive scheduling algorithm maintains a high level of GPU utilization most of the time.
Tests show that the host system for adaptive scheduling of GPU virtual resources in a cloud game and the adaptive scheduling method of the present invention achieve sound management of the GPU resources in the virtual machines and improve the GPU resource utilization. The fairness-based QoS adaptive scheduling algorithm allocates sufficient GPU resources to all running virtual machines in a fair manner, ensuring that the QoS requirements are met while maximizing the overall GPU utilization. The tests show that the algorithm achieves its goals under a variety of loads, with an overhead limited to 5-10%.
Claims (3)
1. A host system for adaptive scheduling of GPU virtual resources in a cloud game, characterized in that it comprises a scheduling control module (1) and, connected to the scheduling control module (1) respectively, an agent module, a graphics application programming interface analysis module (4) and a virtual machine list (5), the agent module consisting of a scheduler module (2) and a monitoring module (3), all modules being deployed in the host;
The scheduling control module (1) is responsible for the communication of information between the other modules: the scheduling control module (1) obtains the information of all running virtual machines from the virtual machine list (5) and sends it to the modules that need it;
The monitoring module (3) obtains the runtime information of the virtual machines from the scheduling control module (1) and sends it to the scheduler module (2);
The scheduler module (2) receives commands from the upper-layer GPU command dispatcher, processes the received commands according to the virtual machine runtime information sent by the monitoring module (3), and finally sends them to the lower-layer host GPU driver;
The agent module, together with the scheduling control module (1), determines how much GPU resource needs to be allocated to each virtual machine;
The graphics application programming interface analysis module (4) is responsible for computing the execution overhead of the GPU commands produced through the graphics application programming interface and for sending the result to the scheduling control module (1), so that the other modules that need this information can obtain it;
The virtual machine list (5) stores all the virtual machines running in said host system; each virtual machine can be indexed by the system, and when a new virtual machine is added and starts running, the system automatically adds it to the virtual machine list (5) and reschedules the GPU resources.
2. A scheduling method using the host system for adaptive scheduling of GPU virtual resources in a cloud game according to claim 1, characterized in that the method comprises the following steps:
1. The user sets the target frames per second (FPS) in the scheduling control module; the virtual machine list records the runtime information of the virtual machines, and the scheduling control module obtains the information of all running virtual machines from the virtual machine list (5);
2. The graphics application programming interface analysis module (4) computes the execution overhead of the GPU commands and sends it to the scheduling control module (1);
3. The monitoring module (3) obtains, via the scheduling control module (1), the runtime information of the virtual machines recorded in the virtual machine list and the GPU command execution overhead computed by the graphics application programming interface analysis module, and sends them together to the scheduler module (2);
4. The scheduler module (2) receives commands from the GPU command dispatcher, computes the running time of the sleep function from the virtual machine runtime information obtained from the monitoring module and the GPU command execution overhead, processes the received commands, and finally sends them to the lower-layer host GPU driver.
3. The scheduling method according to claim 2, characterized in that the concrete sub-steps of step 4 are as follows:
When the runtime information of the virtual machine obtained from the monitoring module shows that the transmitted frames per second are greater than 30 fps, the scheduler module (2) inserts a sleep function before calling the frame rendering function and then calls the frame rendering function to render the frame;
When the runtime information of the virtual machine obtained from the monitoring module shows that the transmitted frames per second are less than 30 fps, the scheduler module (2) allocates more GPU resources to that virtual machine and then calls the frame rendering function to render the frame;
When the runtime information of the virtual machine obtained from the monitoring module shows that the transmitted frames per second are equal to 30 fps, the scheduler module (2) directly calls the frame rendering function to render the frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310244765.5A CN103309748B (en) | 2013-06-19 | 2013-06-19 | Adaptive scheduling host system and scheduling method of GPU virtual resources in cloud game |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103309748A CN103309748A (en) | 2013-09-18 |
CN103309748B true CN103309748B (en) | 2015-04-29 |
Family
ID=49135005
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310244765.5A Active CN103309748B (en) | 2013-06-19 | 2013-06-19 | Adaptive scheduling host system and scheduling method of GPU virtual resources in cloud game |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103309748B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108334409A (en) * | 2018-01-15 | 2018-07-27 | 北京大学 | A kind of fine-grained high-performance cloud resource management dispatching method |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105025061B (en) * | 2014-04-29 | 2018-09-21 | 中国电信股份有限公司 | Build method and server that scene of game is shared in high in the clouds |
CN104216783B (en) * | 2014-08-20 | 2017-07-11 | 上海交通大学 | Virtual GPU resource autonomous management and control method in cloud game |
CN104598292B (en) * | 2014-12-15 | 2017-10-03 | 中山大学 | A kind of self adaptation stream adaptation and method for optimizing resources applied to cloud game system |
CN104539716A (en) * | 2015-01-04 | 2015-04-22 | 国网四川省电力公司信息通信公司 | Cloud desktop management system desktop virtual machine dispatching control system and method |
CN105338372B (en) * | 2015-10-30 | 2019-04-26 | 中山大学 | A kind of adaptive video crossfire code-transferring method applied to game live streaming platform |
CN105521603B (en) * | 2015-12-11 | 2019-05-31 | 北京奇虎科技有限公司 | The method, apparatus and system of virtual input control are carried out for the game of cool run class |
WO2017107055A1 (en) * | 2015-12-22 | 2017-06-29 | Intel Corporation | Apparatus and method for cloud-based graphics validation |
CN105933727B (en) * | 2016-05-20 | 2019-05-31 | 中山大学 | A kind of video stream transcoding and distribution method applied to game live streaming platform |
CN106776022B (en) * | 2016-12-09 | 2020-06-12 | 武汉斗鱼网络科技有限公司 | System and method for optimizing CPU utilization rate of game process |
CN114968478A (en) | 2018-03-06 | 2022-08-30 | 华为技术有限公司 | Data processing method, device, server and system |
CN108733490A (en) * | 2018-05-14 | 2018-11-02 | 上海交通大学 | A kind of GPU vitualization QoS control system and method based on resource-sharing adaptive configuration |
CN110162397B (en) * | 2018-05-28 | 2022-08-23 | 腾讯科技(深圳)有限公司 | Resource allocation method, device and system |
CN108829516B (en) * | 2018-05-31 | 2021-08-10 | 安徽四创电子股份有限公司 | Resource virtualization scheduling method for graphic processor |
CN109324903B (en) | 2018-09-21 | 2021-03-02 | 深圳前海达闼云端智能科技有限公司 | Display resource scheduling method and device for embedded system |
CN111111163B (en) * | 2019-12-24 | 2022-08-30 | 腾讯科技(深圳)有限公司 | Method and device for managing computing resources and electronic device |
CN111913799B (en) * | 2020-07-14 | 2024-04-19 | 北京华夏启信科技有限公司 | Video stream online analysis task scheduling method and computer equipment |
CN113674131B (en) * | 2021-07-21 | 2024-09-13 | 山东海量信息技术研究院 | Hardware accelerator device management method and device, electronic device and storage medium |
CN117971513B (en) * | 2024-04-01 | 2024-05-31 | 北京麟卓信息科技有限公司 | GPU virtual synchronization optimization method based on kernel structure dynamic reconstruction |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102541618A (en) * | 2010-12-29 | 2012-07-04 | 中国移动通信集团公司 | Implementation method, system and device for virtualization of universal graphic processor |
CN102650950A (en) * | 2012-04-10 | 2012-08-29 | 南京航空航天大学 | Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8274518B2 (en) * | 2004-12-30 | 2012-09-25 | Microsoft Corporation | Systems and methods for virtualizing graphics subsystems |
Also Published As
Publication number | Publication date |
---|---|
CN103309748A (en) | 2013-09-18 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| C14 | Grant of patent or utility model |
| GR01 | Patent grant |