WO2013148595A2 - System and method for improving the graphics performance of hosted applications - Google Patents

System and method for improving the graphics performance of hosted applications

Info

Publication number
WO2013148595A2
Authority
WO
WIPO (PCT)
Prior art keywords
video stream
video
stages
game
pipeline
Prior art date
Application number
PCT/US2013/033744
Other languages
English (en)
Other versions
WO2013148595A3 (fr)
Inventor
Douglas Sim Dietrich, Jr.
Nico Benitez
Timothy Cotter
Original Assignee
Onlive, Inc.
Priority date
Filing date
Publication date
Priority claimed from US13/430,269 (US9446305B2)
Application filed by Onlive, Inc.
Publication of WO2013148595A2
Publication of WO2013148595A3

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/33 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
    • A63F13/335 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2405 Monitoring of the internal components or processes of the server, e.g. server load
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8166 Monomedia components thereof involving executable data, e.g. software
    • H04N21/8173 End-user applications, e.g. Web browser, game

Definitions

  • The present disclosure relates generally to the field of data processing systems, and particularly to a system and method for improving the graphics performance of hosted applications.
  • FIG. 1 illustrates a system architecture for executing online video games according to one embodiment of the invention.
  • FIG. 2 illustrates different communication channels over which an online video game may be played in accordance with one embodiment of the invention.
  • FIG. 3 illustrates one embodiment of a system architecture for compressing audio/video generated by a video game.
  • FIG. 4 illustrates a system architecture according to one embodiment of the invention.
  • FIGS. 5-12 illustrate data flow between various system components and feedback employed in one embodiment of the invention.
  • FIG. 13 illustrates distinctions between a predicted camera location and an actual camera location.
  • FIG. 1 illustrates one embodiment of a video game/application hosting service 210 described in the co-pending applications.
  • The Hosting Service 210 hosts applications running on Servers 102, which accept input from an Input device 121 received by Home or Office Client 115 and sent through the Internet 110 to the Hosting Service 210.
  • The Servers 102 are responsive to the input and update their video and audio output accordingly, which is compressed through low-latency video compression.
  • The compressed video is then streamed through the Internet 110 to be decompressed by the Home or Office Client 115, and then displayed on a monitor or SD/HDTV 122.
  • This system is a low-latency streaming interactive video system, as more thoroughly described in the aforementioned "co-pending applications."
  • The network connection between the Hosting Service 210 and the Home and Office Client 215 may be implemented through a wide range of network technologies of varying reliability, such as wired or optical-fiber technologies, which are typically more reliable, and wireless technologies (e.g., Wi-Fi), which may be subject to unpredictable interference or range limitations and are typically less reliable.
  • any of these client devices may have their own user input devices (e.g., keyboards, buttons, touch screens, track pads or inertial-sensing wands, video capture cameras and/or motion-tracking cameras, etc.), or they may use external input devices 221 (e.g., keyboards, mice, game controllers, inertial sensing wand, video capture cameras and/or motion tracking cameras, etc.), connected with wires or wirelessly.
  • the hosting service 210 includes servers of various levels of performance, including those with high-powered CPU/GPU processing capabilities.
  • A home or office client device 215 receives keyboard and/or controller input from the user and transmits the controller input through the Internet 206 to the hosting service 210, which executes the gaming program code in response and generates successive frames of video output (a sequence of video images) for the game or application software (e.g., if the user presses a button which would direct a character on the screen to move to the right, the game program would then create a sequence of video images showing the character moving to the right).
  • This sequence of video images is then compressed using a low-latency video compressor, and the hosting service 210 then transmits the low-latency video stream through the Internet 206.
  • the home or office client device then decodes the compressed video stream and renders the decompressed video images on a monitor or TV. Consequently, the computing and graphical hardware requirements of the client device 215 are significantly reduced.
  • The client 215 only needs to have the processing power to forward the keyboard/controller input to the Internet 206 and to decode and decompress a compressed video stream received from the Internet 206, which virtually any personal computer is capable of doing today in software on its CPU (e.g., an Intel Corporation Core Duo CPU running at approximately 2 GHz is capable of decompressing 720p HDTV encoded using compressors such as H.264 and Windows Media VC9).
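  • To make this division of labor concrete, the client's role reduces to forwarding input upstream and decoding video downstream. The sketch below illustrates that loop; `poll_input_devices`, `send_input`, `receive_packet`, `decode_frame`, and `present` are hypothetical stand-ins, not the actual client interfaces described in the co-pending applications.

```python
def client_loop(sock, decoder, display, poll_input_devices):
    """Thin-client main loop: forward input, then decode and show video.

    sock/decoder/display/poll_input_devices are assumed wrappers around the
    real transport, codec (e.g., H.264), display, and input hardware.
    """
    while True:
        # Upstream: forward any pending keyboard/controller events to the host.
        for event in poll_input_devices():
            sock.send_input(event)
        # Downstream: receive, decompress, and present one video frame.
        packet = sock.receive_packet()        # compressed frame from the hosting service
        frame = decoder.decode_frame(packet)  # software decode suffices on a modern CPU
        display.present(frame)                # draw on the monitor or TV
```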
  • Home client devices 205 do not require any specialized graphics processing units (GPUs), optical drives, or hard drives.
  • FIG. 3 illustrates an embodiment of components of a server center for hosting service 210 utilized in the following feature descriptions. As with the hosting service 210 illustrated in Figures 1-2, the components of this server center are controlled and coordinated by a hosting service control system 101 unless otherwise qualified.
  • Inbound internet traffic 301 from user clients 215 is directed to inbound routing 302.
  • inbound internet traffic 301 will enter the server center via a high-speed fiber optic connection to the Internet, but any network connection means of adequate bandwidth, reliability and low latency will suffice.
  • Inbound routing 302 is a system of network switches and routing servers supporting the switches (the network can be implemented as an Ethernet network, a Fibre Channel network, or through any other transport means), which takes the arriving packets and routes each packet to the appropriate application/game ("app/game") server 321-325.
  • a packet which is delivered to a particular app/game server represents a subset of the data received from the client and/or may be translated/changed by other components (e.g., networking components such as gateways and routers) within the data center.
  • Packets may be routed to more than one server 321-325 at a time, for example if a game or application is running on multiple servers in parallel.
  • RAID arrays 311-312 are connected to the inbound routing network 302, such that the app/game servers 321-325 can read from and write to the RAID arrays 311-312.
  • A RAID array 315 (which may be implemented as multiple RAID arrays) is also connected to the inbound routing 302, and data from RAID array 315 can be read by the app/game servers 321-325.
  • The inbound routing 302 may be implemented in a wide range of prior art network architectures, including a tree structure of switches with the inbound internet traffic 301 at its root; a mesh structure interconnecting all of the various devices; or an interconnected series of subnets, with concentrated traffic amongst intercommunicating devices segregated from concentrated traffic amongst other devices.
  • One type of network configuration is a SAN which, although typically used for storage devices, can also be used for general high-speed data transfer among devices.
  • The app/game servers 321-325 may each have multiple network connections to the inbound routing 302.
  • A server 321-325 may have a network connection to a subnet attached to RAID arrays 311-312 and another network connection to a subnet attached to other devices.
  • The app/game servers 321-325 may all be configured the same, some differently, or all differently, as previously described.
  • Each user, when using the hosting service, is typically using at least one app/game server 321-325.
  • For the sake of simplicity of explanation, we shall assume a given user is using app/game server 321, but multiple servers could be used by one user, and multiple users could share a single app/game server 321-325.
  • The user's control input, sent from client 215 as previously described, is received as inbound Internet traffic 301 and is routed through inbound routing 302 to app/game server 321.
  • App/game server 321 uses the user's control input as control input to the game or application running on the server, and computes the next frame of video and the audio associated with it.
  • App/game server 321 then outputs the uncompressed video/audio 329 to shared video compression 330.
  • The app/game server may output the uncompressed video via any means, including one or more Gigabit Ethernet connections, but in one embodiment the video is output via a DVI connection, and the audio and other compression and communication channel state information is output via a Universal Serial Bus (USB) connection.
  • The shared video compression 330 compresses the uncompressed video and audio from the app/game servers 321-325.
  • The compression may be implemented entirely in hardware, or in hardware running software. There may be a dedicated compressor for each app/game server 321-325, or, if the compressors are fast enough, a given compressor can be used to compress the video/audio from more than one app/game server 321-325. For example, at 60 fps a video frame time is 16.67 ms.
  • If a compressor is able to compress a frame in 1 ms, then that compressor could be used to compress the video/audio from as many as 16 app/game servers 321-325 by taking input from one server after another, with the compressor saving the state of each video/audio compression process and switching context as it cycles amongst the video/audio streams from the servers. This results in substantial cost savings in compression hardware.
  • The compressor resources are in a shared pool 330 with shared storage means (e.g., RAM, flash) for storing the state of each compression process; when a server 321-325 frame is complete and ready to be compressed, a control means determines which compression resource is available at that time and provides that compression resource with the state of the server's compression process and the frame of uncompressed video/audio to compress.
  • Part of the state for each server's compression process includes information about the compression itself, such as the previous frame's decompressed frame buffer data (which may be used as a reference for P tiles), the resolution of the video output, the quality of the compression, the tiling structure, the allocation of bits per tile, and the audio format (e.g., stereo, surround sound, Dolby® AC-3).
  • The compression process state also includes communication channel state information regarding the peak data rate and whether a previous frame is currently being output (and, as a result, whether the current frame should be ignored), and potentially whether there are channel characteristics which should be considered in the compression, such as excessive packet loss, which affect decisions for the compression.
  • As the peak data rate or other channel characteristics change over time, as determined by an app/game server 321-325 supporting each user and monitoring data sent from the client 215, the app/game server 321-325 sends the relevant information to the shared hardware compression 330.
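  • To make the time-slicing arithmetic above concrete: at 60 fps the frame period is 16.67 ms, so a compressor that finishes a frame in about 1 ms can serve roughly 16 servers if it saves and restores per-stream state between frames. The sketch below illustrates that context-switching pattern; the `StreamState` fields mirror the state described above, and `compress` is a hypothetical stand-in for the real hardware compressor, not the patented implementation.

```python
import queue

class StreamState:
    """Saved per-server compression context (illustrative fields per the text above)."""
    def __init__(self):
        self.reference_frame = None      # previous decompressed frame, for P tiles
        self.resolution = (1280, 720)    # output resolution
        self.quality = 0.8               # compression quality setting
        self.bits_per_tile = 1500        # bit allocation per tile
        self.audio_format = "stereo"     # e.g., stereo, surround, Dolby AC-3
        self.peak_data_rate = 5_000_000  # channel budget in bits/sec

class SharedCompressorPool:
    """A pool of compressors time-sliced across many app/game servers."""
    def __init__(self, num_compressors, num_servers):
        self.available = queue.Queue()
        for cid in range(num_compressors):
            self.available.put(cid)                    # idle compressor IDs
        self.state = {sid: StreamState() for sid in range(num_servers)}

    def submit_frame(self, server_id, raw_frame):
        cid = self.available.get()                     # wait for a free compressor
        try:
            st = self.state[server_id]                 # context switch: load saved state
            compressed = compress(cid, raw_frame, st)  # hypothetical hardware call
            st.reference_frame = raw_frame             # save state for the next frame
            return compressed
        finally:
            self.available.put(cid)                    # return compressor to the pool
```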
  • The shared hardware compression 330 also packetizes the compressed video/audio using means such as those previously described, and, if appropriate, applies FEC codes, duplicates certain data, or takes other steps so as to adequately ensure the ability of the video/audio data stream to be received by the client 215 and decompressed with as high a quality and reliability as feasible.
  • Some applications require the video/audio output of a given app/game server 321-325 to be available at multiple resolutions (or in other multiple formats) simultaneously. If the app/game server 321-325 so notifies the shared hardware compression 330 resource, then the uncompressed video/audio 329 of that app/game server 321-325 will be simultaneously compressed in different formats, different resolutions, and/or in different packet/error-correction structures. In some cases, some compression resources can be shared amongst multiple compression processes compressing the same video/audio (e.g., in many compression algorithms there is a step whereby the image is scaled to multiple sizes before applying compression; if different-size images are required to be output, then this step can be used to serve several compression processes at once). In other cases, separate compression resources will be required for each format.
  • The compressed video/audio 339 of all of the various resolutions and formats required for a given app/game server 321-325 (be it one or many) will be output at once to outbound routing 340.
  • The output of the compressed video/audio 339 is in UDP format, so it is a unidirectional stream of packets.
  • The outbound routing network 340 comprises a series of routing servers and switches which direct each compressed video/audio stream to the intended user(s) or other destinations through the outbound Internet traffic 399 interface (which typically would connect to a fiber interface to the Internet) and/or back to the delay buffer 315.
  • The outbound routing 340 may output a given video/audio stream to multiple destinations at once.
  • The multiple destinations of the broadcast may be to multiple users' clients via the Internet, to multiple app/game servers 321-325 via inbound routing 302, and/or to one or more delay buffers 315.
  • The output of a given server 321-322 is compressed into one or multiple formats, and each compressed stream is directed to one or multiple destinations.
  • The video output of multiple servers 321-325 can be combined by the shared hardware compression 330 into a combined frame, and from that point forward it is handled as described above as if it came from a single app/game server 321-325.
  • Each compressed video/audio output 339 stream being routed to a user client 215 is also multicast to a delay buffer 315.
  • A directory on the delay buffer 315 provides a cross-reference between the network address of the app/game server 321-325 that is the source of the delayed video/audio and the location on the delay buffer 315 where the delayed video/audio can be found.
  • Each application/game server 321 is equipped with a central processing unit (CPU) 401 for executing video game program code 408 stored in memory 403 and a graphics processing unit (GPU) 402 for executing graphics commands to render the video game output 408.
  • The architectures of the CPU and GPU are well known and, as such, a detailed description of these units and the instructions/commands executed by these units will not be provided herein.
  • The GPU is capable of processing a library of graphics commands as specified by one or more graphics application programming interfaces (APIs) such as OpenGL or Direct3D.
  • The program code for executing these graphics APIs is represented in Figure 4 as graphics engine 430.
  • As the CPU processes the video game program code 408, it hands off graphics commands specified by the API to the GPU, which executes the commands and generates the video output 408. It should be noted, however, that the underlying principles of the invention are not limited to any particular graphics standard.
  • both the CPU and GPU are pipelined processors, meaning that a set of data processing stages are connected in series within the CPU and GPU, so that the output of one stage is the input of the next one.
  • the CPU pipeline typically includes an instruction fetch stage, an instruction decode stage, an execution stage and a retirement stage, each of which may have multiple sub-stages.
  • A GPU pipeline may have many more stages including, by way of example and not limitation, transformation, vertex lighting, viewing transformation, primitive generation, projection transformation, clipping, viewport transformation, rasterization, texturing, fragment shading, and display. These pipeline stages are well understood by one of ordinary skill in the art and will not be described in detail herein.
  • the elements of a pipeline are often executed in parallel or in time-sliced fashion and some amount of queuing storage is often required between stages of the pipeline.
  • Each of the above stages and the queuing required between the stages adds a certain amount of latency to the execution of graphics commands.
  • the embodiments of the invention below provide techniques for minimizing this latency. Reducing latency is important because it expands the markets in which a device can be used. Moreover, the manufacturer of a device may not have control over significant sources of latency. For example, a user may attach a high latency television to a video game console or a multimedia device may be used remotely (e.g., online video games, a medical device controlled over the internet or military devices engaging targets on the front line while the operator remains safely behind the lines).
  • one embodiment of the invention includes a back buffer 405 and a front buffer 406 for storing video game image frames generated by the graphics engine 430 as the user plays a video game.
  • Each "frame" comprises a set of pixel data representing one screen image of the video game.
  • Each frame is created in the back buffer 405 as graphics commands are executed using graphics data; once a frame is complete, it is moved to the front buffer 406, from which it is scanned out as the uncompressed video output 408.
  • The scan-out process may occur at a predetermined standard frequency (e.g., 60 Hz or 120 Hz, as implemented on standard CRT or LCD monitors).
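  • A compact illustration of this double-buffered arrangement: the graphics engine draws into the back buffer while the front buffer is scanned out (or handed to the compressor), and the two swap when a frame completes. The numpy buffers and trivial `render_frame` below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

HEIGHT, WIDTH = 720, 1280
back_buffer = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)   # frame being drawn
front_buffer = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)  # frame being scanned out

def render_frame(buf, frame_index):
    # Stand-in for executing the frame's graphics commands into the buffer.
    buf[:] = frame_index % 256

for frame_index in range(3):
    render_frame(back_buffer, frame_index)
    # Flip on completion: the finished frame becomes the front buffer, and
    # scan-out (e.g., at 60 Hz) or a direct transfer to the compression
    # hardware reads from it while the next frame is drawn behind it.
    back_buffer, front_buffer = front_buffer, back_buffer
```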
  • the uncompressed video output 408 may then be compressed using the various advanced low latency video compression techniques described in the co-pending applications.
  • the frame buffer doesn't need to be scanned out of the video card (e.g., via a digital video interface (DVI)) as implied above. It may be transferred directly to the compression hardware, for example over the application server's internal bus (e.g., a PCI Express bus).
  • the frame buffer may be copied in memory either by one of the CPUs or GPUs.
  • the compression hardware may be (by way of example and not limitation) the CPU, the GPU, hardware installed in the server, and/or hardware on the GPU card.
  • Figure 5 shows an asynchronous pipeline with queues (Q12, Q23, Q34) between each processing stage (P1, P2, P3, P4) to hold the data produced by the previous stage before it's consumed by the next stage.
  • the various stages described herein are stages within the GPU 402.
  • The latency of such a pipeline is the sum of the time the data spends being transformed in each stage (Tp1, Tp2, Tp3) plus the time the data spends sitting in each queue (Tq1, Tq2, Tq3).
  • the obvious first step to minimizing latency is to minimize the queues or even get rid of them entirely.
  • One common way to do this is to synchronize the pipeline stages as per Figure 6. Every stage operates simultaneously on different sets of data. When all stages are ready, they all pass their data to the next stage in the pipeline. Queuing becomes trivial and will no longer be shown in the figures. Latency of a synchronized pipeline is the number of stages times the time for the slowest stage to complete.
  • The first pipeline stage may throttle down its clock to slow down data processing based on when the new data will be needed by the bottleneck stage.
  • This technique may be referred to as a phase-locked pipeline.
  • The total latency is then the sum of the times for each pipeline stage.
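  • Collecting the latency claims above in one place (for n stages, with Tpi the time data spends in stage i and Tqi the time spent in the queue after stage i):

```latex
\begin{align*}
L_{\text{queued}}       &= \sum_{i=1}^{n} T_{p_i} + \sum_{i=1}^{n-1} T_{q_i} && \text{(asynchronous pipeline, Fig.\ 5)}\\
L_{\text{synchronized}} &= n \cdot \max_{i} T_{p_i}                          && \text{(lock-step pipeline, Fig.\ 6)}\\
L_{\text{phase-locked}} &= \sum_{i=1}^{n} T_{p_i}                            && \text{(queuing driven to near zero)}
\end{align*}
```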
  • Another embodiment is illustrated in Figure 9, in which the bottleneck stage is artificially moved to the first pipeline stage by slowing the first pipeline stage down to be slightly slower than the actual bottleneck stage.
  • the box labeled 5 in P1 starts after box 3 in P4.
  • Box 4 in P1 should also be slightly lower than the top of box 2 in P4.
  • This is common practice in video games where the bottleneck stage is the physical connection between the computer and the monitor.
  • One drawback in Figure 9 is that there must be some latency-inducing queuing (not shown) between stages P3 and P4.
  • Another drawback is that the latency experienced by the user may drift over time, decreasing steadily and then suddenly increasing only to begin decreasing again. It may also result in dropped frames.
  • The first stage is limited to the same rate as the bottleneck stage.
  • The tops of the numbered boxes in P1 should be the same distance apart as the tops of the boxes in P4.
  • The rate at which P1 produces frames exactly matches the rate at which P4 consumes them.
  • Feedback is necessarily provided from the bottleneck stage to the first stage to ensure the rates match exactly. Every stage provides feedback, including but not limited to the time required to operate on the data and the time spent queued.
  • The phase-locking component maintains statistical information on each stage and can accurately predict, with a predetermined confidence level, that the data will be ready when the bottleneck stage requires it, with a minimum amount of queuing. Note that a universal clock is not necessary in this embodiment; the phase-locking component only requires relative times.
  • the pipeline stages may use different clocks.
  • the clocks may be in separate physical devices that could potentially be thousands of miles apart.
  • A bottleneck stage is identified based on timing constraints. Feedback is then provided from the bottleneck stage to the upstream stages, allowing them to match the bottleneck stage's rate precisely; the phase of the upstream stages is adjusted to minimize time wasted in queues (sketched below).
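  • The following minimal simulation sketches that feedback scheme: each stage reports how long it took, the phase-locking component keeps per-stage statistics, and the first stage delays its release so data arrives just as the bottleneck needs it. Using a high percentile as the "predetermined confidence level" is an assumption; the text does not specify the statistic.

```python
class PhaseLock:
    """Tracks per-stage timings; only relative times are needed (no universal clock)."""

    def __init__(self, num_stages):
        self.samples = [[] for _ in range(num_stages)]

    def report(self, stage, elapsed):
        """Feedback from a stage: seconds spent operating on (and queuing) its data."""
        self.samples[stage].append(elapsed)

    def predicted_path_time(self, upto_stage, confidence=0.95):
        """Conservative estimate of traversal time for stages 0..upto_stage."""
        total = 0.0
        for stage_samples in self.samples[: upto_stage + 1]:
            if not stage_samples:
                continue                      # no data yet; assume instantaneous
            ordered = sorted(stage_samples)
            idx = min(int(confidence * len(ordered)), len(ordered) - 1)
            total += ordered[idx]             # high quantile = confidence margin
        return total

    def release_delay(self, time_until_bottleneck_needs_data, upto_stage):
        """How long the first stage should wait before starting the next frame
        so it arrives at the bottleneck just in time, with minimal queuing."""
        return max(0.0, time_until_bottleneck_needs_data
                   - self.predicted_path_time(upto_stage))
```

  • For example, if the bottleneck consumes a frame every 16.67 ms and the upstream stages are predicted to take 11 ms end to end, the first stage would idle roughly 5.67 ms before starting, rather than letting the finished frame age in a queue.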
  • The video stream is subdivided into two logical parts which may be processed independently: (a) a resource-light, latency-critical part, and (b) a resource-heavy, latency-tolerant part. These two parts can be combined in a hybrid system as illustrated in Figure 12.
  • One specific example would be a computer game known as a "first person shooter" in which a user navigates around from the perspective of a game character in a 3-dimensional world.
  • Rendering the background and non-player characters is resource-heavy and latency-tolerant, denoted in Figure 12 with a "b" for "background," while rendering the image of the player's character is made resource-light and latency-intolerant (because anything less than very-low-latency performance will result in an undesirable user experience), denoted in Figure 12 with an "a" for "avatar."
  • The game is implemented on a personal computer with a central processing unit (CPU) as stage P1 and a graphics processing unit (GPU) as stage P2.
  • The monitor, represented as P3, is the bottleneck stage.
  • "Monitor," in this case, means any device that consumes the uncompressed video stream, which could be the DVI output (vsync), the encoder input, or some other downstream bottleneck.
  • The CPU completes its work on the background image, represented by 3b, before completing its work on the avatar image, represented by 2a. Nonetheless, to reduce latency associated with the avatar, the GPU processes 2a ahead of 3b, rendering the avatar 2a on a previously rendered background 2b (to render the motion of the avatar as efficiently as possible), outputs that frame, and then immediately begins rendering the background of the next frame, represented by 3b.
  • The GPU may sit idle for a short time waiting for data from the CPU to complete the next frame. In this embodiment, the CPU sits idle waiting for the phase lock to signal that it's time to make a list of drawing commands for the user's avatar and pass it on to the GPU. The CPU then immediately begins to draw the background of a new frame, but it can't be the very next frame, because the GPU will already have started drawing that one; the interleaving is sketched below.
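  • A sketch of that interleaving, with illustrative function names (not the game's actual API): each iteration composites the low-latency avatar, drawn from the freshest input, onto the background prepared in the previous iteration, emits the frame, and then immediately starts the background that a later frame will reuse.

```python
def hybrid_render_loop(num_frames, render_background, render_avatar,
                       latest_input, emit, wait_for_phase_lock):
    """Hybrid low/high-latency pipeline: 'a' = avatar (latency-critical),
    'b' = background (latency-tolerant), per Figure 12."""
    prev_background = render_background(frame_index=0)   # pre-rendered first 'b'
    for i in range(1, num_frames + 1):
        wait_for_phase_lock()                 # start just in time (phase-lock signal)
        # (a) Latency-critical: draw the avatar from fresh input on top of
        #     the background rendered during the previous iteration.
        frame = render_avatar(latest_input(), on_top_of=prev_background)
        emit(frame)                           # picked up by the bottleneck stage
        # (b) Latency-tolerant: immediately begin the background that a later
        #     frame will use, predicting the camera position it needs.
        prev_background = render_background(frame_index=i)
```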
  • The high-latency path predicts the inputs used to generate its data.
  • The location of the camera, for instance, is predicted ahead of time.
  • The output of the high-latency path (e.g., the background) is then corrected: the background would be translated, scaled, and/or rotated in order to match the actual camera position.
  • Figure 13 shows an actual camera location 1301, a predicted camera location 1302, an actual background 1303, and a rendered background 1304.
  • The program draws the firing gun on top of a previously rendered background, and the game times it so that the frame is done just in time to be picked up by the next stage in the pipeline (which is the DVI output (vsync), the encoder input, or some other bottleneck). Then the game draws its best guess at what the background should be for the next frame. If the guess is poor, then one embodiment, as illustrated in Figure 13, modifies the background to more closely match what it would have been if it had been rendered from the correct camera position.
  • The technique shown in Figure 13 is a simple affine warp; more sophisticated techniques employed in other embodiments use the z-buffer to do a better job. A minimal illustration of the affine case follows.
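  • The sketch below shifts a background horizontally when the yaw used to render it differs from the actual camera yaw at display time, assuming a simple linear mapping from yaw error to pixels; this is an illustrative assumption, and real embodiments could add scaling, rotation, or the z-buffer techniques mentioned above.

```python
import numpy as np

def correct_background(background, predicted_yaw_deg, actual_yaw_deg, fov_deg=90.0):
    """Translate a background rendered from a predicted camera so it better
    matches the actual camera (horizontal shift only; a simplified affine warp).

    background: H x W x 3 uint8 image rendered at predicted_yaw_deg.
    """
    height, width = background.shape[:2]
    yaw_error = actual_yaw_deg - predicted_yaw_deg
    # Linear approximation: the full field of view spans the full image width.
    dx = int(round(yaw_error * width / fov_deg))
    corrected = np.zeros_like(background)    # uncovered edge columns stay black
    if dx >= 0:
        corrected[:, : width - dx] = background[:, dx:]
    else:
        corrected[:, -dx:] = background[:, : width + dx]
    return corrected
```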
  • the various functional modules illustrated herein and the associated steps may be performed by specific hardware components that contain hardwired logic for performing the steps, such as an application-specific integrated circuit ("ASIC") or by any combination of programmed computer components and custom hardware components.
  • The modules may be implemented on a programmable digital signal processor ("DSP") such as a Texas Instruments TMS320x architecture (e.g., a TMS320C6000, TMS320C5000, etc.).
  • Embodiments may include various steps as set forth above.
  • the steps may be embodied in machine-executable instructions which cause a general-purpose or special-purpose processor to perform certain steps.
  • Various elements which are not relevant to these underlying principles, such as computer memory, hard drives, and input devices, have been left out of some or all of the figures to avoid obscuring the pertinent aspects.
  • Elements of the disclosed subject matter may also be provided as a machine-readable medium for storing the machine-executable instructions.
  • The machine-readable medium may include, but is not limited to, flash memory, optical disks, CD-ROMs, DVD-ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of machine-readable media suitable for storing electronic instructions.
  • the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
  • Elements of the disclosed subject matter may also be provided as a computer program product which may include a machine-readable medium having stored thereon instructions which may be used to program a computer (e.g., a processor or other electronic device) to perform a sequence of operations. Alternatively, the operations may be performed by a combination of hardware and software.
  • The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of media/machine-readable media suitable for storing electronic instructions.
  • Elements of the disclosed subject matter may be downloaded as a computer program product, wherein the program may be transferred from a remote computer or electronic device to a requesting process by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).

Abstract

A system and method are described for efficiently processing a video stream using limited hardware and/or software resources. For example, one embodiment of a computer-implemented method for efficiently processing a video stream with a processor pipeline comprising a plurality of pipeline stages comprises: identifying a bottleneck stage within the processor pipeline, the bottleneck stage processing images of the video stream; receiving a feedback signal from the bottleneck stage at one or more upstream stages, the feedback signal providing an indication of the rate at which the bottleneck stage processes the images of the video stream; and responsively adjusting the rate at which the one or more upstream stages process images of the video stream so that it approaches the rate at which the bottleneck stage processes the images of the video stream.
PCT/US2013/033744 2012-03-26 2013-03-25 System and method for improving the graphics performance of hosted applications WO2013148595A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/430,269 2012-03-26
US13/430,269 US9446305B2 (en) 2002-12-10 2012-03-26 System and method for improving the graphics performance of hosted applications

Publications (2)

Publication Number Publication Date
WO2013148595A2 (fr) 2013-10-03
WO2013148595A3 WO2013148595A3 (fr) 2013-11-28

Family

ID=49261388

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/033744 WO2013148595A2 (fr) System and method for improving the graphics performance of hosted applications

Country Status (2)

Country Link
TW (1) TWI615803B (fr)
WO (1) WO2013148595A2 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6947044B1 (en) * 1999-05-21 2005-09-20 Kulas Charles J Creation and playback of computer-generated productions using script-controlled rendering engines
US20070165035A1 (en) * 1998-08-20 2007-07-19 Apple Computer, Inc. Deferred shading graphics pipeline processor having advanced features

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060133513A1 (en) * 2004-12-22 2006-06-22 Kounnas Michael K Method for processing multimedia streams
US9215467B2 (en) * 2008-11-17 2015-12-15 Checkvideo Llc Analytics-modulated coding of surveillance video

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070165035A1 (en) * 1998-08-20 2007-07-19 Apple Computer, Inc. Deferred shading graphics pipeline processor having advanced features
US6947044B1 (en) * 1999-05-21 2005-09-20 Kulas Charles J Creation and playback of computer-generated productions using script-controlled rendering engines

Also Published As

Publication number Publication date
WO2013148595A3 (fr) 2013-11-28
TWI615803B (zh) 2018-02-21
TW201351342A (zh) 2013-12-16

Similar Documents

Publication Publication Date Title
US10099129B2 (en) System and method for improving the graphics performance of hosted applications
US11471763B2 (en) System and method for improving the graphics performance of hosted applications
US9272220B2 (en) System and method for improving the graphics performance of hosted applications
US11344799B2 (en) Scene change hint and client bandwidth used at encoder for handling video frames after a scene change in cloud gaming applications
US9682318B2 (en) System and method for improving the graphics performance of hosted applications
US8961316B2 (en) System and method for improving the graphics performance of hosted applications
US11539960B2 (en) Game application providing scene change hint for encoding at a cloud gaming server
US8845434B2 (en) System and method for improving the graphics performance of hosted applications
US20230016903A1 (en) Beginning scan-out process at flip-time for cloud gaming applications
US8851999B2 (en) System and method for improving the graphics performance of hosted applications
WO2013148595A2 (fr) System and method for improving the graphics performance of hosted applications
WO2013040261A1 (fr) System and method for improving the graphics performance of hosted applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13769575

Country of ref document: EP

Kind code of ref document: A2

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 02/12/2014)

122 Ep: pct application non-entry in european phase

Ref document number: 13769575

Country of ref document: EP

Kind code of ref document: A2