EP3068504A1 - Data collection for multiple view generation - Google Patents
Data collection for multiple view generation
- Publication number
- EP3068504A1 (application EP14860984A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- view
- scene
- client
- representation
- section
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/33—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
- A63F13/335—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
- A63F13/352—Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
- A63F13/355—Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/525—Changing parameters of virtual cameras
- A63F13/5252—Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/77—Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
Definitions
- each client may have associated state information corresponding to actions, events or other information associated with the client's participation in the game.
- state information may include information associated with actions performed by a particular character or other entity controlled by the respective client.
- One conventional approach to enable multiple client interaction involves periodically transmitting game state information from each participating client to a server, which in turn may forward back, to each client, updated state information received from each of the other clients.
- Each of the clients may use this updated state information to maintain its own respective individual game state, which in turn may be used to render, at each client, a respective presentation of the video game.
- each particular client may present scenes within the video game from a perspective of a particular character or other entity controlled by the respective client.
- FIG. 1 is a diagram illustrating an example computing system that may be used in some embodiments.
- FIG. 2 is a diagram illustrating an example computing system that may be used in some embodiments.
- FIG. 3A is a diagram illustrating an example system for multiple view generation in accordance with the present disclosure.
- FIG. 3B is a diagram illustrating an example system for identical view generation in accordance with the present disclosure.
- FIG. 4 is a diagram illustrating a first example content transmission system in accordance with the present disclosure.
- FIG. 5 is a diagram illustrating a second example content transmission system in accordance with the present disclosure.
- FIG. 6 is a diagram illustrating a third example content transmission system in accordance with the present disclosure.
- FIG. 7 is a diagram illustrating a first example graphics processing unit scaling scenario in accordance with the present disclosure.
- FIG. 8 is a diagram illustrating a second example graphics processing unit scaling scenario in accordance with the present disclosure.
- FIG. 9 is a diagram illustrating a third example graphics processing unit scaling scenario in accordance with the present disclosure.
- FIG. 10 is a diagram illustrating a fourth example graphics processing unit scaling scenario in accordance with the present disclosure.
- FIG. 11 is a diagram illustrating an example stitching technique in accordance with the present disclosure.
- FIG. 12 is a diagram illustrating example layers in accordance with the present disclosure.
- FIG. 13 is a diagram illustrating an example layering technique in accordance with the present disclosure.
- FIG. 14 is a diagram illustrating an example content provider system in accordance with the present disclosure.
- FIG. 15 is a flowchart depicting an example procedure for generating one or more views based on shared state information in accordance with the present disclosure.
- FIG. 16 is a flowchart depicting an example procedure for rendering using one or more graphics processing units in accordance with the present disclosure.
- FIG. 17 is a diagram illustrating an example system employing a data collection for multiple view generation in accordance with the present disclosure.
- FIG. 18 is a diagram illustrating a first example data collection including data associated with multiple views in accordance with the present disclosure.
- FIG. 19 is a diagram illustrating an example representation formation sequence in accordance with the present disclosure.
- FIG. 20 is a diagram illustrating a second example data collection including data associated with multiple views in accordance with the present disclosure.
- FIG. 21 is a flowchart depicting an example procedure for employing a data collection for multiple view generation in accordance with the present disclosure.
- one or more rendered views of a scene of a particular content item may be generated by a content provider and transmitted from the content provider to multiple different clients.
- a content provider may generate multiple views of a scene of a particular content item.
- Each of the multiple views may, for example, be associated with one or more respective clients and may be transmitted from the content provider to the respective clients.
- each view may present a scene from a viewpoint of a particular character or other entity controlled by a respective client to which the view is transmitted.
- the content provider may transmit an identical view of a scene of a particular content item to multiple clients.
- Identical views may, for example, be transmitted to clients that control closely related characters or that collaborate to control a single character.
- each of the different participating clients may collect respective client state information.
- the client state information may include, for example, information regarding operations performed at the respective client, such as movements or other actions performed by a respective character or other entity controlled by the respective client.
- Each of the respective clients may periodically transmit an update of its respective client state information to the content provider.
- the content provider may then use the client state information updates received from each client to update shared content item state information maintained by the content provider.
- the content provider may then use the shared content item state information to generate the one or more views transmitted to the different participating clients.
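The update cycle described above (clients periodically push state updates, the provider merges them into shared content item state and derives per-client views) can be sketched as follows. This is a minimal illustration only; the class, method and field names are invented here and do not appear in the patent.

```python
# Hypothetical sketch of the shared-state update cycle: the provider
# merges per-client updates into shared state, then derives a view for
# each client from that shared state.

class ContentProvider:
    def __init__(self):
        self.shared_state = {}  # client_id -> latest known client state

    def receive_client_update(self, client_id, state_update):
        # Merge one client's periodic update into the shared state.
        self.shared_state.setdefault(client_id, {}).update(state_update)

    def generate_view(self, client_id):
        # Derive a per-client view from the shared state: the requesting
        # client's viewpoint plus the positions of all known entities.
        viewpoint = self.shared_state.get(client_id, {}).get("position")
        return {
            "viewpoint": viewpoint,
            "entities": {cid: s.get("position")
                         for cid, s in self.shared_state.items()},
        }

provider = ContentProvider()
provider.receive_client_update("client_a", {"position": (0, 0)})
provider.receive_client_update("client_b", {"position": (5, 3)})
view_a = provider.generate_view("client_a")
```

Each client's view reflects updates contributed by every other client, which is the point of routing state through the provider rather than peer to peer.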
- one or more of the participating clients may operate in a hybrid mode in which, in addition to receiving one or more views from the content provider, the hybrid mode clients execute their own local version of the content item and generate their own local client streams. Each hybrid mode client may then combine, locally at the client, a received content provider stream of views with the local client stream to generate and display a hybrid content item stream.
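One plausible way to combine a provider stream with a local stream, sketched here under the assumption that local content overlays the provider's frames pixel by pixel (the patent does not specify the compositing rule):

```python
# Hypothetical hybrid-mode compositing: local pixels override provider
# pixels wherever the local stream rendered something (None = transparent).

def combine(provider_frame, local_frame):
    return [local if local is not None else remote
            for remote, local in zip(provider_frame, local_frame)]

# Frames are simplified to flat pixel lists for illustration.
hybrid = combine([1, 2, 3, 4], [None, 9, None, 9])
```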
- a content provider may employ multiple graphics processing units to generate the one or more views of a scene of a particular content item.
- the multiple graphics processing units may generate renderings associated with a particular scene at least partially simultaneously with one another.
- the use of multiple graphics processing units may assist in enabling real time or near-real time generation and presentation of rendered views.
- multiple graphics processing units may each render a respective portion of a scene that is used to generate one or more resulting views for display.
- the renderings may be combined to form the view by, for example, stitching the renderings together or employing a representation in which the renderings are logically combined at different associated layers.
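The stitching option above can be illustrated with a toy example in which two graphics processing units each render half of a frame and the halves are joined row by row. The tiling layout and pixel representation are assumptions made for illustration.

```python
# Hypothetical stitching of per-GPU renderings: two equal-height
# renderings are joined side by side, row by row, into one view.

def stitch(left, right):
    return [l_row + r_row for l_row, r_row in zip(left, right)]

left_half = [[1, 1], [1, 1]]   # e.g., rendered by GPU 0
right_half = [[2, 2], [2, 2]]  # e.g., rendered by GPU 1
frame = stitch(left_half, right_half)
```

The layering alternative mentioned in the same passage would instead keep each rendering on its own logical layer and composite the layers in depth order rather than concatenating tiles.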
- the number of graphics processing units that are used to render a particular content item may be elastic such that the number changes depending on various factors.
- factors may include, for example, a performance rate associated with one or more graphics processing units, a complexity of rendered scenes, a number of views associated with the rendered scenes, availability of additional graphics processing units and any other relevant factors.
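An elastic scaling decision of the kind described above might be driven by a measured performance rate, for example frame time against a target. The thresholds, parameter names and scale-down rule below are invented for illustration; the patent does not prescribe a specific policy.

```python
# Hypothetical elastic GPU-count policy: add a GPU when rendering is
# slower than the target (and capacity allows), release one when there
# is ample headroom.

def choose_gpu_count(frame_ms, target_ms, current_gpus, max_gpus):
    if frame_ms > target_ms and current_gpus < max_gpus:
        return current_gpus + 1          # scale up: too slow
    if frame_ms < target_ms / 2 and current_gpus > 1:
        return current_gpus - 1          # scale down: ample headroom
    return current_gpus                  # steady state

# Example: 40 ms frames against a 16 ms target trigger a scale-up.
next_count = choose_gpu_count(frame_ms=40, target_ms=16,
                              current_gpus=2, max_gpus=8)
```

A real policy would likely also weigh scene complexity and the number of views, as the list of factors above suggests.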
- multiple different views of a scene may be combined into a single data collection, such as a render target.
- a single data collection may include multiple sections, each associated with a respective one of the multiple views.
- Each section of the data collection may then be separately retrieved, encoded and transmitted over a network. In some cases, each object within the scene may have an associated representation that is formed in each section of the data collection prior to moving on to a next object.
- For example, representations of a first object may be formed across each section of the data collection prior to forming representations of a second object. This formation sequence may, in some cases, reduce state changes associated with loading of data associated with each object including, for example, various geometry, textures, shaders and the like.
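The formation sequence above amounts to an object-major loop order: the outer loop walks objects and the inner loop walks view sections, so each object's geometry, textures and shaders need to be bound only once per frame. A minimal sketch (object and section names are placeholders):

```python
# Hypothetical object-major formation sequence: each object's
# representation is formed in every view section of the shared data
# collection before moving on to the next object.

def render_order(objects, sections):
    ops = []
    for obj in objects:           # outer loop: one object at a time
        for section in sections:  # inner loop: every view section
            ops.append((obj, section))
    return ops

order = render_order(["tree", "car"], ["view_a", "view_b"])
```

The alternative, section-major order would reload each object's data once per section, which is exactly the repeated state change this sequence avoids.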
- a content provider may, in some cases, render and transmit content item views to clients over an electronic network, such as the Internet. Content may, in some cases, be provided upon request to clients using, for example, streaming content delivery techniques.
- FIG. 1 illustrates an example computing environment in which the embodiments described herein may be implemented.
- FIG. 1 is a diagram schematically illustrating an example of a data center 210 that can provide computing resources to users 200a and 200b (which may be referred herein singularly as user 200 or in the plural as users 200) via user computers 202a and 202b (which may be referred herein singularly as computer 202 or in the plural as computers 202) via a communications network 230.
- Data center 210 may be configured to provide computing resources for executing applications on a permanent or an as-needed basis.
- the computing resources provided by data center 210 may include various types of resources, such as gateway resources, load balancing resources, routing resources, networking resources, computing resources, volatile and non-volatile memory resources, content delivery resources, data processing resources, data storage resources, data communication resources and the like.
- Each type of computing resource may be general-purpose or may be available in a number of specific configurations.
- data processing resources may be available as virtual machine instances that may be configured to provide various web services. In addition, combinations of resources may be made available via a network and may be configured as one or more web services.
- the instances may be configured to execute applications, including web services, such as application services, media services, database services, processing services, gateway services, storage services, routing services, security services, encryption services, load balancing services and the like. These services may be configurable with set or custom applications and may be configurable in size, execution, cost, latency, type, duration, accessibility and in any other dimension.
- These web services may be configured as available infrastructure for one or more clients and can include one or more applications configured as a platform or as software for one or more clients. These web services may be made available via one or more communications protocols. These communications protocols may include, for example, hypertext transfer protocol (HTTP) or non-HTTP protocols. These communications protocols may also include, for example, more reliable transport layer protocols, such as transmission control protocol (TCP), and less reliable transport layer protocols, such as user datagram protocol (UDP). Data storage resources may include file storage devices, block storage devices and the like.
- Each type or configuration of computing resource may be available in different sizes, such as large resources (consisting of many processors, large amounts of memory and/or large storage capacity) and small resources (consisting of fewer processors, smaller amounts of memory and/or smaller storage capacity).
- Customers may choose to allocate a number of small processing resources as web servers and/or one large processing resource as a database server, for example.
- Data center 210 may include servers 216a and 216b (which may be referred herein singularly as server 216 or in the plural as servers 216) that provide computing resources. These resources may be available as bare metal resources or as virtual machine instances 218a-d (which may be referred herein singularly as virtual machine instance 218 or in the plural as virtual machine instances 218).
- Virtual machine instances 218c and 218d are shared state virtual machine ("SSVM") instances.
- the SSVM virtual machine instances 218c and 218d may be configured to perform all or any portion of the shared content item state techniques and/or any other of the disclosed techniques in accordance with the present disclosure and described in detail below.
- FIG. 1 includes one SSVM virtual machine in each server; this is merely an example.
- a server may include more than one SSVM virtual machine or may not include any SSVM virtual machines.
- virtualization technologies may allow a physical computing device to be shared among multiple users by providing each user with one or more virtual machine instances hosted by the physical computing device.
- a virtual machine instance may be a software emulation of a particular physical computing system that acts as a distinct logical computing system.
- Such a virtual machine instance provides isolation among multiple operating systems sharing a given physical computing resource.
- some virtualization technologies may provide virtual resources that span one or more physical resources, such as a single virtual machine instance with multiple virtual processors that span multiple distinct physical computing systems.
- communications network 230 may, for example, be a publicly accessible network of linked networks and possibly operated by various distinct parties, such as the Internet.
- communications network 230 may be a private network, such as a corporate or university network that is wholly or partially inaccessible to non- privileged users.
- communications network 230 may include one or more private networks with access to and/or from the Internet.
- Communication network 230 may provide access to computers 202.
- User computers 202 may be computers utilized by users 200 or other customers of data center 210.
- user computer 202a or 202b may be a server, a desktop or laptop personal computer, a tablet computer, a wireless telephone, a personal digital assistant (PDA), an e-book reader, a game console, a set-top box or any other computing device capable of accessing data center 210.
- User computer 202a or 202b may connect directly to the Internet (e.g., via a cable modem or a Digital Subscriber Line (DSL)).
- User computers 202 may also be utilized to configure aspects of the computing resources provided by data center 210.
- data center 210 might provide a gateway or web interface through which aspects of its operation may be configured through the use of a web browser application program executing on user computer 202.
- a stand-alone application program executing on user computer 202 might access an application programming interface (API) exposed by data center 210 for performing the configuration operations.
- Other mechanisms for configuring the operation of various web services available at data center 210 might also be utilized.
- Servers 216 shown in FIG. 1 may be standard servers configured appropriately for providing the computing resources described above and may provide computing resources for executing one or more web services and/or applications.
- the computing resources may be virtual machine instances 218.
- each of the servers 216 may be configured to execute an instance manager 220a or 220b (which may be referred herein singularly as instance manager 220 or in the plural as instance managers 220) capable of executing the virtual machine instances 218.
- the instance managers 220 may be a virtual machine monitor (VMM) or another type of program configured to enable the execution of virtual machine instances 218 on server 216, for example.
- each of the virtual machine instances 218 may be configured to execute all or a portion of an application.
- a router 214 may be utilized to interconnect the servers 216a and 216b. Router 214 may also be connected to gateway 240, which is connected to communications network 230.
- Router 214 may be connected to one or more load balancers, and alone or in combination may manage communications within networks in data center 210, for example, by forwarding packets or other data communications as appropriate based on characteristics of such communications (e.g., header information including source and/or destination addresses, protocol identifiers, size, processing requirements, etc.) and/or the characteristics of the private network (e.g., routes based on network topology, etc.).
- a server manager 215 is also employed to at least in part direct various communications to, from and/or between servers 216a and 216b. While FIG. 1 depicts router 214 positioned between gateway 240 and server manager 215, this is merely an exemplary configuration. In some cases, for example, server manager 215 may be positioned between gateway 240 and router 214. Server manager 215 may, in some cases, examine portions of incoming communications from user computers 202 to determine one or more appropriate servers 216 to receive and/or process the incoming communications. Server manager 215 may determine appropriate servers to receive and/or process the incoming communications based on factors such as an identity, location or other attributes associated with user computers 202.
- Server manager 215 may, for example, collect or otherwise have access to state information and other information associated with various tasks in order to, for example, assist in managing communications and other operations associated with such tasks.
- FIG. 1 has been greatly simplified and that many more networks and networking devices may be utilized to interconnect the various computing systems disclosed herein. These network topologies and devices should be apparent to those skilled in the art.
- data center 210 described in FIG. 1 is merely illustrative and that other implementations might be utilized. Additionally, it should be appreciated that the functionality disclosed herein might be implemented in software, hardware or a combination of software and hardware. Other implementations should be apparent to those skilled in the art.
- a server, gateway or other computing device may comprise any combination of hardware or software that can interact and perform the described types of functionality, including without limitation desktop or other computers, database servers, network storage devices and other network devices, PDAs, tablets, cellphones, wireless phones, pagers, electronic organizers, Internet appliances, television-based systems (e.g., using set top boxes and/or personal/digital video recorders) and various other consumer products that include appropriate communication capabilities.
- the functionality provided by the illustrated modules may in some embodiments be combined in fewer modules or distributed in additional modules. Similarly, in some embodiments the functionality of some of the illustrated modules may not be provided and/or other additional functionality may be available.
- a server that implements a portion or all of one or more of the technologies described herein may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media.
- FIG. 2 depicts a general-purpose computer system that includes or is configured to access one or more computer- accessible media.
- computing device 100 includes one or more processors 10a, 10b and/or 10n (which may be referred herein singularly as “a processor 10" or in the plural as “the processors 10") coupled to a system memory 20 via an input/output (I/O) interface 30.
- Computing device 100 further includes a network interface 40 coupled to I/O interface 30.
- computing device 100 may be a uniprocessor system including one processor 10 or a multiprocessor system including several processors 10 (e.g., two, four, eight or another suitable number).
- Processors 10 may be any suitable processors capable of executing instructions.
- processors 10 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC or MIPS ISAs or any other suitable ISA. In multiprocessor systems, each of processors 10 may commonly, but not necessarily, implement the same ISA.
- System memory 20 may be configured to store instructions and data accessible by processor(s) 10.
- system memory 20 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques and data described above, are shown stored within system memory 20 as code 25 and data 26.
- I/O interface 30 may be configured to coordinate I/O traffic between processor 10, system memory 20 and any peripherals in the device, including network interface 40 or other peripheral interfaces.
- I/O interface 30 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 20) into a format suitable for use by another component (e.g., processor 10).
- I/O interface 30 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
- I/O interface 30 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 30, such as an interface to system memory 20, may be incorporated directly into processor 10.
- Network interface 40 may be configured to allow data to be exchanged between computing device 100 and other device or devices 60 attached to a network or networks 50, such as other computer systems or devices, for example.
- network interface 40 may support communication via any suitable wired or wireless general data networks, such as types of Ethernet networks, for example.
- network interface 40 may support communication via telecommunications/telephony networks, such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs (storage area networks) or via any other suitable type of network and/or protocol.
- system memory 20 may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above for implementing embodiments of the corresponding methods and apparatus.
- program instructions and/or data may be received, sent or stored upon different types of computer-accessible media.
- a computer-accessible medium may include non-transitory storage media or memory media, such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device 500 via I/O interface 30.
- a non-transitory computer-accessible storage medium may also include any volatile or non-volatile media, such as RAM (e.g., SDRAM).
- a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic or digital signals conveyed via a communication medium, such as a network and/or a wireless link, such as those that may be implemented via network interface 40. Portions or all of multiple computing devices, such as those illustrated in FIG.
- computing device refers to at least all these types of devices and is not limited to these types of devices.
- a compute node, which may be referred to also as a computing node, may be implemented on a wide variety of computing environments, such as commodity-hardware computers, virtual machines, web services, computing clusters and computing appliances. Any of these computing devices or environments may, for convenience, be described as compute nodes.
- a network set up by an entity, such as a company or a public sector organization, to provide one or more web services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to a distributed set of clients may be termed a provider network.
- a provider network may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like, needed to implement and distribute the infrastructure and web services offered by the provider network.
- the resources may in some embodiments be offered to clients in various units related to the web service, such as an amount of storage capacity for storage, processing capability for processing, as instances, as sets of related services and the like.
- a virtual computing instance may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor).
- a number of different types of computing devices may be used singly or in combination to implement the resources of the provider network in different embodiments, including general-purpose or special-purpose computer servers, storage devices, network devices and the like. In some embodiments a client or user may be provided direct access to a resource instance, e.g., by giving a user an administrator login and password. In other embodiments the provider network operator may allow clients to specify execution requirements for specified client applications and schedule execution of the applications on behalf of the client on execution platforms (such as application server instances, Java™ virtual machines (JVMs), general-purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages such as Ruby).
- a given execution platform may utilize one or more resource instances in some implementations; in other implementations, multiple execution platforms may be mapped to a single resource instance.
- the computing resource provider may provide facilities for customers to select and launch the desired computing resources, deploy application components to the computing resources and maintain an application executing in the environment.
- the computing resource provider may provide further facilities for the customer to quickly and easily scale up or scale down the numbers and types of resources allocated to the application, either manually or through automatic scaling, as demand for or capacity requirements of the application change.
- the computing resources provided by the computing resource provider may be made available in discrete units, which may be referred to as instances. An instance may represent a physical server hardware platform, a virtual machine instance executing on a server or some combination of the two. Various types and configurations of instances may be made available, including different sizes of resources executing different operating systems (OS) and/or hypervisors, and with various installed software applications, runtimes and the like.
- Instances may further be available in specific availability zones representing a logical region, a fault tolerant region, a data center or other geographic location of the underlying computing hardware. For example, instances may be copied within an availability zone or across availability zones to improve the redundancy of the instance, and instances may be migrated within a particular availability zone or across availability zones. As one example, the latency for client communications with a particular server in an availability zone may be less than the latency for client communications with a different server. As such, an instance may be migrated from the higher latency server to the lower latency server to improve the overall client experience.
- the provider network may be organized into a plurality of geographical regions, and each region may include one or more availability zones.
- An availability zone (which may also be referred to as an availability container) in turn may comprise one or more distinct locations or data centers, configured in such a way that the resources in a given availability zone may be isolated or insulated from failures in other availability zones. That is, a failure in one availability zone may not be expected to result in a failure in any other availability zone.
- the availability profile of a resource instance is intended to be independent of the availability profile of a resource instance in a different availability zone. Clients may be able to protect their applications from failures at a single location by launching multiple application instances in respective availability zones.
- inexpensive and low latency network connectivity may be provided between resource instances that reside within the same geographical region (and network transmissions between resources of the same availability zone may be even faster).
- FIG. 3A includes a content provider 300 in communication with clients 310A and 310B. Content provider 300 executes a content item 307.
- Content provider 300 may, for example, provide one or more content providing services for providing content to clients, such as clients 310A and 310B. The content providing services may reside on one or more servers.
- the content providing services may be scalable to meet the demands of one or more customers and may increase or decrease in capability based on the number and type of incoming client requests. Portions of content providing services may also be migrated to be placed in positions of reduced latency with requesting clients.
- Content provider 300 and some of its example architectures are described in greater detail below.
- content item 307 may include graphics content such as a video game.
- content item 307 may include two-dimensional content, which, as used herein, refers to content that may be represented in accordance with two-dimensional scenes.
- content item 307 may include three-dimensional content, which, as used herein, refers to content that may be represented in accordance with three-dimensional scenes.
- the two-dimensional or three-dimensional scenes may be considered logical representations in the sense that they may, for example, not physically occupy the areas that they are intended to logically model or represent.
- a scene may, for example, include or otherwise be associated with information or data that describes the scene.
- scenes associated with the content item 307 may be used to generate resulting images for display.
- the images may be generated by way of a process commonly referred to as rendering, which may incorporate concepts such as, for example, projection, reflection, lighting, shading and others.
- An image may include, for example, information associated with a displayable output, such as information associated with various pixel values and/or attributes. As will be described below, each generated image may, in some cases, correspond to a particular view of a scene.
- Content item 307 may be displayed and otherwise presented to users at clients 310A and 310B. Clients 310A and 310B may communicate with content provider 300 via an electronic network, such as, for example, the Internet or another type of wide area network (WAN) or local area network (LAN). Clients 310A and 310B may, in some cases, be physically positioned at remote locations with respect to one another.
- FIG. 3A depicts two clients 310A and 310B
- the disclosed techniques may be employed in association with any number of different participating clients that receive transmissions corresponding to content item 307.
- one or more of the participating connected clients may operate in a hybrid mode in which, in addition to receiving one or more views from the content provider 300, the hybrid mode clients execute their own local version of content item 307 and generate their own local client streams.
- Each hybrid mode client may then combine, locally at the client, a received content provider stream of views with the local client stream to generate and display a hybrid content item stream.
- the hybrid mode may, for example, allow clients to receive a stream from the content provider 300 in addition to their local stream in good network conditions, while also allowing the clients to continue to use their own local streams in poor network conditions when the content provider stream may be unavailable.
- the content provider 300 may periodically send state information to each of the hybrid mode clients.
- Certain clients may switch back and forth between the hybrid mode and a full stream mode, in which the clients receive only a content provider stream and do not generate a local client stream.
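The hybrid/full-stream behavior described above may be sketched, for example, as a per-frame decision about which stream the client displays. The following Python sketch is illustrative only; the function name, mode strings and return values are assumptions rather than elements of the disclosure:

```python
def choose_frame_source(mode: str, provider_frame_received: bool) -> str:
    """Pick which stream a client displays for the current frame.

    Full-stream clients always show the provider stream. Hybrid clients
    prefer the provider view, but fall back to their locally generated
    view when network conditions delay or drop the provider frame.
    (Mode names are hypothetical placeholders.)
    """
    if mode == "full_stream":
        return "provider"
    # Hybrid mode: use the provider frame if it arrived in time.
    return "provider" if provider_frame_received else "local"
```

A hybrid client would call this once per displayed frame, so a transient network outage simply shifts the displayed source to the local stream until provider frames resume.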
- a single shared state may be maintained for a large group of clients.
- some clients may operate in hybrid mode, some clients may operate in full stream mode and some clients may switch between hybrid mode and full stream mode.
- the amount of data sent to each hybrid mode client may vary depending on factors such as a quality of a connection between the content provider and the client, which may be based on conditions such as bandwidth, throughput, latency, packet loss rates and the like.
- the content provider 300 may transmit to the first hybrid mode client a higher complexity view of a scene that includes a larger amount of data.
- the content provider 300 may transmit to the second hybrid mode client a lower complexity view of the same scene that includes a smaller amount of data.
- the higher complexity view sent to the first hybrid mode client may include more detailed textures, patterns, shapes and other features that may not be included in the lower complexity view sent to the second hybrid mode client.
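The complexity selection described above may be sketched, for example, as a simple tiering function over connection metrics. The thresholds and tier names below are illustrative assumptions, not values from the disclosure:

```python
def select_view_complexity(bandwidth_kbps: float, latency_ms: float) -> str:
    """Choose a view complexity tier from connection quality metrics.

    Higher-quality connections receive views with more detailed textures,
    patterns and shapes; poorer connections receive a view carrying a
    smaller amount of data. Threshold values are placeholders.
    """
    if bandwidth_kbps >= 5000 and latency_ms <= 50:
        return "high"    # larger amount of data per scene
    if bandwidth_kbps >= 1000:
        return "medium"
    return "low"         # smaller amount of data per scene
```

In practice such a function might be re-evaluated periodically so that a client's tier tracks changing conditions such as throughput and packet loss.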
- clients 310A and 310B may be associated with one or more different respective entities corresponding to content item 307.
- Such entities may include, for example, various characters, vehicles, weapons, athletic equipment or any other entities corresponding to a video game or other content item.
- the respective entities may be controlled by the clients with whom they are associated.
- the respective entities may be associated in any way with the client, such as, for example, by being selected by or for particular users of the clients.
- client 310A controls a respective controlled character 315A
- client 310B controls a respective controlled character 315B.
- Clients 310A and 310B may collect respective client state information associated with the respective presentation of content item 307 at clients 310A and 310B. Client state information is any information associated with a state of a content item as it relates in any way to one or more clients. Client state information may include, for example, information
- client state information may indicate various actions or operations performed by controlled characters 315A and 315B. It is noted, however, that client state information collected by each client 310 is not limited to information corresponding to characters or other entities controlled by each of the respective clients 310 and may include information corresponding to any aspect associated with content item 307.
- client 310A may transmit its respective client state information updates 320A to content provider 300, while client 310B may transmit its respective state information updates 320B to content provider 300.
- Client state information updates 320A and 320B may be transmitted from clients 310A and 310B periodically at any appropriate scheduled or non-scheduled times.
- Client state information updates 320A may, for example, be transmitted at specified intervals or other times or in response to certain events. For example, a transmission of client state information updates 320A and 320B may be triggered by a character performing an action, such as moving to a specified location. There is no requirement that client state information updates 320A and 320B necessarily be sent simultaneously with one another.
- one or more of clients 310A and 310B may transmit all client state information and/or some portion of previously transmitted client state information.
- the first transmission of state information from a particular client may include all client state information, while subsequent transmissions may include only updates or updates along with some portion of previously transmitted information.
- Content provider 300 may receive client state information updates 320A and 320B and use the updates to adjust shared content item state information 305. The adjusting may include, for example, adding, deleting and/or modifying various portions of the shared content item state information 305. Shared content item state information 305 may then, for example, be used in combination with content item 307 to produce one or more content item scenes.
- controlled character 315A may fire a loaded weapon and launch a bullet towards a particular doorway, while controlled character 315B may simultaneously enter into the same doorway and face controlled character 315A.
- Client 310A may send client state information updates 320A, which may indicate the firing of the weapon by controlled character 315A and the direction of the bullet.
- Client 310B may send client state information updates 320B, which may indicate the movement of controlled character 315B to enter the doorway.
- Content provider 300 may update shared content item state information 305 to indicate the received client state information updates 320A and 320B. Content item 307 may then access shared content item state information 305 to produce a subsequent content item scene in which controlled character 315B stands in the doorway with a bullet wound in his chest, while controlled character 315A stands facing the doorway from the position at which controlled character 315A fired his weapon.
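The adding/deleting/modifying adjustments to shared state described above may be sketched, for example, as a merge of per-client updates into one shared dictionary. The merge semantics (later updates win; `None` deletes) are illustrative assumptions:

```python
def apply_updates(shared_state, updates):
    """Merge per-client state updates into shared content item state.

    Each update is a dict of changed entries. Later updates win on key
    conflicts; a value of None deletes an entry, covering the add,
    modify and delete adjustments described above.
    """
    merged = dict(shared_state)
    for update in updates:
        for key, value in update.items():
            if value is None:
                merged.pop(key, None)   # deletion
            else:
                merged[key] = value     # addition or modification
    return merged
```

In the doorway example, the weapon-fired update from one client and the movement update from the other would both land in the same shared state, which the content item then reads to produce the next scene.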
- content provider 300 may render the scene for display at clients 310A and 310B.
- content provider 300 generates multiple rendered views 330A and 330B.
- content provider 300 may generate and transmit rendered views 330A to client 310A, while content provider 300 may also generate and transmit rendered views 330B to client 310B.
- Rendered views 330A and 330B are different from one another.
- a view refers to a particular image associated with a scene. When multiple different views of a particular scene are generated, each of the multiple different views may include a different respective image that is generated based on the scene.
- rendered views 330A and 330B may, in some cases, be associated with one or more respective entities associated with clients 310A and 310B.
- rendered views 330A may be associated with controlled character 315A
- rendered views 330B may be associated with controlled character 315B.
- the rendered views 330A and 330B may present a view of a scene from a perspective that corresponds to an associated respective entity.
- rendered view 330A may depict a scene as it would be viewed through the eyes of controlled character 315A
- rendered view 330B may depict a scene as it would be viewed through the eyes of controlled character 315B.
- the rendered views 330A and 330B may present a view of a scene, such that an associated respective entity is in the center of the view or is otherwise positioned at a location of high interest and/or high visibility within the view.
- rendered view 330A may depict a scene, such that controlled character 315A is positioned in the center of the view
- rendered view 330B may depict a scene, such that controlled character 315B is positioned in the center of the view.
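The two view styles described above (through-the-eyes and entity-centered) may be sketched, for example, as camera parameter computations per client entity. The class, function names and the fixed camera offset are illustrative assumptions, not elements of the disclosure:

```python
from dataclasses import dataclass


@dataclass
class ViewParams:
    eye: tuple     # camera position in scene coordinates
    target: tuple  # point the camera looks at


def first_person_view(entity_pos, entity_facing):
    """View of the scene through the eyes of the controlled entity."""
    target = (entity_pos[0] + entity_facing[0],
              entity_pos[1] + entity_facing[1],
              entity_pos[2] + entity_facing[2])
    return ViewParams(eye=tuple(entity_pos), target=target)


def centered_view(entity_pos, camera_offset=(0.0, 5.0, -10.0)):
    """View that keeps the controlled entity at the center of the frame."""
    eye = (entity_pos[0] + camera_offset[0],
           entity_pos[1] + camera_offset[1],
           entity_pos[2] + camera_offset[2])
    return ViewParams(eye=eye, target=tuple(entity_pos))
```

A provider rendering views 330A and 330B would evaluate one of these per associated entity, producing a different camera (and hence a different image) for each client from the same shared scene.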
- if certain objects within a scene are blocking a view of an associated respective entity, then those objects may be removed from or otherwise adjusted within the rendered view.
- certain modifications may be added or otherwise associated with a particular rendered view.
- an associated respective character or other entity may be enlarged or highlighted for the purposes of drawing attention or increasing visibility
- certain other entities within a view may be modified if they are somehow associated with a particular associated respective character or entity. For example, if an associated respective character is looking for a particular weapon, then that weapon could be enlarged or highlighted in a rendered view for the purposes of drawing attention to or increasing visibility of the weapon.
- a rendered view 330A for client 310A may, for example, provide a view from a perspective associated with controlled character 315A.
- the rendered view 330A may, for example, depict the example scene as it would be viewed through the eyes of controlled character 315A.
- the rendered view 330A may, for example, depict controlled character 315B standing in the doorway with a bullet wound in his chest, as this is what would be seen by controlled character 315A.
- a rendered view 330B for client 310B may, for example, provide a view from a perspective associated with controlled character 315B.
- the controlled character 315B is standing in the doorway facing controlled character 315A that has just fired his weapon.
- the rendered view 330B may, for example, depict the example scene as it would be viewed through the eyes of controlled character 315B.
- the rendered view 330B may, for example, depict controlled character 315A with a recently fired weapon in his hand, as this is what would be seen by controlled character 315B.
- client state information updates 320A and 320B may include any information that may be used to assist in formation of rendered views 330A and 330B. Such information may include information indicating one or more respective entities associated with clients 310A and 310B. For example, FIG. 3A depicts controlled characters 315A and 315B as respective entities associated with clients 310A and 310B. Client state information updates 320A and 320B may also indicate, for example, any other entities that may be related in any way to clients 310A and 310B. Client state information updates 320A and 320B may also indicate, for example, if clients 310A and 310B switch control of various characters or entities or connect or disconnect from participation in a content transmission session.
- Client state information updates 320A and 320B may also indicate, for example, whether each of clients 310A and 310B is operating in a hybrid mode or a full stream mode and/or indicate a switch between operating in such modes. Any other appropriate information that may be used to assist in formation of rendered views 330A and 330B may also be included in client state information updates 320A and 320B, in any other collection of information transmitted to content provider 300 or, in some cases, in information that is stored by or otherwise available to content provider 300.
- content provider 300 may store information about clients 310A and 310B including, for example, an indication of characters or other entities controlled by clients 310A and 310B or any other appropriate information that may be used to assist in formation of rendered views 330A and 330B.
- information that may be used to assist in formation of rendered views 330A and 330B may be included in shared content item state information 305.
- a content provider may render and transmit multiple views of a content item to multiple different client devices.
- an identical view may sometimes be transmitted to different clients that collaborate to jointly control the same character.
- an identical view may sometimes be transmitted to different clients that control different but closely related characters, such as teammates or members of the same unit or organization.
- identical views may be transmitted when one or more clients operate in a spectator mode in which the spectator clients do not control any entities within the content item, while one or more other clients operate in an active mode in which they do control one or more entities within the content item.
- one or more of the spectator mode clients may receive an identical view.
- one or more of the spectator mode clients and one or more of the active mode clients may receive an identical view. For example, a particular spectator mode client may have interest in a particular entity controlled by a particular active mode client and, therefore, may wish to receive the identical view that is sent to the particular active mode client.
- Identical views may also be transmitted based on any other appropriate reason or rationale.
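The identical-view cases described above (collaborating clients, teammates, spectators following an active client) may be sketched, for example, by grouping clients under a shared view key so each distinct view is rendered once and sent to every client in the group. The field names and key scheme are illustrative assumptions:

```python
from collections import defaultdict


def group_clients_by_view(clients):
    """Group clients that should receive an identical rendered view.

    A spectator-mode client follows another client's entity; an
    active-mode client views its own controlled entity. Clients whose
    view keys match share one render.
    """
    groups = defaultdict(list)
    for client in clients:
        if client["mode"] == "spectator":
            key = ("entity", client["followed_entity"])
        else:
            key = ("entity", client["entity"])
        groups[key].append(client["id"])
    return dict(groups)
```

With this grouping, a spectator interested in a particular active client's entity naturally lands in the same group as that active client and receives the identical view.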
- An example system for identical view generation in accordance with the present disclosure is illustrated in FIG. 3B.
- client 310A controls its respective controlled teammate 316A
- client 310B controls its respective controlled teammate 316B
- identical rendered views 340 are generated by content provider 300 and transmitted to both clients 310A and 310B.
- any combination of identical and different views may also be generated and transmitted to any number of different clients. For example, for a content item that is being transmitted to three participating clients, two of the three clients may receive identical views, while the third client may receive a different view.
- the configuration of clients as receiving identical or different views may change throughout a particular content item transmission session. For example, two clients may initially control two teammates and may receive identical views of a particular content item. However, at some point during transmission of the content item, one of the clients may relinquish control of its character and initiate control of a different character on an opposing team. In this case, after switching control of the character, the switching client may begin to receive a different view than is transmitted to the other client. As set forth above, the switching of characters or any other view-related information may, in some cases, be
- views for each client may be generated based on information associated with the client, such as respective entities or any other appropriate information.
- two clients may receive different views of some scenes and identical or near-identical views of other scenes without necessarily designating such views as similar or identical.
- two unrelated characters controlled by two different clients may happen to be positioned in close proximity to one another within a particular scene.
- identical or near-identical views of that particular scene may sometimes be transmitted to the two different clients.
- the same two clients may receive different views.
- rendering of the one or more views at a content provider may, in some cases, reduce or eliminate any need to send state information from the content provider to the clients. Additionally, rendering of the one or more views at the content provider may, in some cases, reduce the cost, complexity and usage requirements of content presentation software installed on the client devices. This may, for example, sometimes allow content to be presented on the client devices using thin client content presentation software as opposed to thick client content presentation software. Furthermore, rendering of the one or more views at the content provider may, in some cases, reduce piracy and other security concerns for creators and distributors of the content.
- an amount or quantity of virtual machine instances and/or other resources used to execute a content item need not necessarily be dependent on a number of views generated in association with a content item. For example, in some cases, a single virtual machine instance may be employed to execute a content item with multiple different rendered views being transmitted to multiple different clients. In some cases, however, multiple virtual machine instances may be employed if desired, for example, to reduce latency.
- the disclosed techniques may also enable multiple graphics processing units to be employed in association with a particular content item.
- the multiple graphics processing units may generate renderings associated with a particular scene at least partially simultaneously with one another.
- a rendering refers to data that is generated at least in part by one or more graphics processing units and that is associated with at least a portion of one or more images.
- the use of multiple graphics processing units may assist in enabling real time or near-real time generation and presentation of rendered views.
- the multiple graphics processing units may, in some cases, be distributed across any number of different machines or devices at any number of different physical locations. In some cases, multiple graphics processing units may be used to render only a single view of a scene, while, in other cases, multiple graphics processing units may be used to render multiple views of a scene. It is noted, however, that multiple graphics processing units are not necessarily required to render multiple views of a scene. In some cases, a single graphics processing unit may be sufficient to render multiple views of a scene.
- Some example content transmission systems that depict various interactions between the above described concepts of multiple views and multiple graphics processing units are illustrated in FIGS. 4-6.
- FIG. 4 depicts the example scenario in which a single graphics processing unit is employed to generate multiple views.
- content provider 400 includes graphics processing unit 403A that generates rendered views 420A, 420B and 420C, which are transmitted, respectively, to clients 410A, 410B and 410C.
- FIG. 5 depicts the example scenario in which multiple graphics processing units are employed to generate a single view. As shown in FIG.
- content provider 500 includes graphics processing units 503A, 503B and 503C that combine to generate rendered view 520A, which is transmitted to client 510A.
- FIG. 6 depicts an example scenario in which multiple graphics processing units are employed to generate multiple views
- content provider 600 includes graphics processing units 603A, 603B and 603C that are employed to generate rendered views 620A, 620B and 620C, which are transmitted, respectively, to clients 610A, 610B and 610C.
- each graphics processing unit 603A-C may generate a respective rendered view 620A-C.
- graphics processing unit 603A may generate rendered view 620A
- graphics processing unit 603B may generate rendered view 620B
- graphics processing unit 603C may generate rendered view 620C
- two or more of graphics processing units 603 A-C may combine to generate one or more of rendered views 620A-C.
- graphics processing units 603A and 603B may combine to form rendered views 620A and 620B
- graphics processing unit 603C may separately generate rendered view 620C
- each of the multiple graphics processing units may be assigned a respective portion of the scene for rendering.
- Each portion of the scene may include, for example, an area of the scene indicated by various coordinates, dimensions or other indicators.
- a scene distributed across two graphics processing units may be divided into two equal sized halves, with each half assigned to a respective one of the two graphics processing units.
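The area-based assignment described above may be sketched, for example, as splitting a scene's width into near-equal strips, one per graphics processing unit. The function name and strip-based partitioning scheme are illustrative assumptions:

```python
def partition_scene(width, num_gpus):
    """Split a scene of the given pixel width into near-equal vertical
    strips, one per graphics processing unit.

    Returns a list of (start_x, end_x) ranges with end_x exclusive;
    any remainder pixels are spread over the first strips.
    """
    base, extra = divmod(width, num_gpus)
    ranges, start = [], 0
    for i in range(num_gpus):
        end = start + base + (1 if i < extra else 0)
        ranges.append((start, end))
        start = end
    return ranges
```

For two GPUs and an evenly divisible width this reduces to the two equal halves described above; the same scheme generalizes to any number of units.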
- a scene may include multiple objects, such as characters, buildings, vehicles, weapons, trees, water, fire, animals and others. In some cases, each of the multiple graphics processing units may be assigned a respective object, portion of an object or collection of objects within the scene for rendering.
- object refers to any portion of a scene, image or other collection of information.
- An object may be, for example, a particular pixel or collection of pixels.
- An object may be, for example, all or any portion of a particular asset.
- An object may also be, for example, all or any portion of a collection of assets.
- An object may also be, for example, all or any portion of an entity such as a tree, fire, water, a cloud, a cloth, clothing, a human, an animal and others.
- an object may be a portion of a tree.
- An object may also, for example, include all or any portion of a collection of objects, entities and/or assets.
- an object may be a group of multiple trees or clouds that may be located, for example, at any location with respect to one another.
- each of the multiple graphics processing units may be assigned one or more respective views of the scene for rendering. Any combination of the example techniques described above and/or any other appropriate techniques may be employed to distribute rendering of a scene across multiple graphics processing units.
- the number of graphics processing units that are used to render a particular content item may be elastic, such that the number changes depending on various factors. Such factors may include, for example, a rate at which a graphics processing unit generates renderings or other performance rates of one or more graphics processing units, a complexity of rendered scenes, a number of views associated with the rendered scenes, availability of additional graphics processing units and any combination of these or other relevant factors.
- In some cases, the performance rate of one or more graphics processing units associated with rendering of a particular content item may be monitored to determine an efficiency at which the graphics processing units are performing.
- a graphics processing unit is rendering scenes or portions of scenes below a certain threshold performance rate
- a decision may be made to add one or more additional graphics processing units to assist in rendering of the scenes or portions of scenes.
- a decision may be made to relinquish one or more of those graphics processing units such that they can be made available to assist with other content items or content provider tasks
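The threshold-based elastic scaling decision described above might be sketched as follows; the threshold values and action names are hypothetical.

```python
def scaling_decision(render_rate, lower, upper):
    """Return a scaling action based on a unit's measured rendering
    rate (e.g. frames per second) against threshold rates."""
    if render_rate < lower:
        return "add_unit"        # overburdened: split its workload
    if render_rate > upper:
        return "consider_merge"  # spare capacity: may consolidate
    return "no_change"
```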
- There are a number of factors that may affect the rendering rate of one or more graphics processing units.
- One such example factor may be scene complexity.
- a scene complexity associated with a particular content item may vary from one scene to the next. Any number of different factors may be responsible for such a change in scene complexity.
- certain objects or portions of objects may be added or removed or otherwise adjusted, obscured or made visible.
- scene complexity may be increased from one scene to the next when certain characters, buildings, vehicles or other objects are added into the subsequent scene. In some cases, when scene complexity is increased, one or more graphics processing units may become overburdened such that they can no longer efficiently render their respective scenes or scene portions.
- By contrast, when scene complexity is reduced, one or more graphics processing units may gain additional available capacity such that the number of graphics processing units used to render the content item may be consolidated and reduced.
- Another example factor that may affect the performance rate of one or more graphics processing units is a number of views associated with various scenes or portions of scenes. For example, when one or more client-controlled characters enter a particular portion of a scene, then the number of views associated with that portion of the scene may increase. This may occur, for example, when one or more client-controlled characters enter a particular building or room within a building. By contrast, when one or more client-controlled characters leave a particular portion of a scene, then the number of views associated with that portion of the scene may decrease. In some cases, when a number of views is increased, one or more graphics processing units may become overburdened such that they can no longer efficiently render their respective scenes or scene portions. By contrast, in some cases, when a number of views is decreased, one or more graphics processing units may gain additional available capacity such that the number of graphics processing units used to render the content item may be consolidated and reduced.
- FIG. 7 illustrates a first example graphics processing unit scaling scenario in accordance with the disclosed techniques.
- FIG. 7 depicts a scene 700 rendered by a single graphics processing unit 720. It is noted that scene 700 is depicted in FIG. 7 as a three-dimensional scene (as indicated by its cubic form). However, the disclosed techniques are not limited to use with three-dimensional scenes and may also be used with, for example, two-dimensional scenes.
- FIG. 7 indicates that graphics processing unit 720 is operating below the lower threshold performance rate.
- this lower performance rate may be due to factors, such as a scene complexity that is too high and/or has too many associated views to be efficiently rendered by the single graphics processing unit 720. Accordingly, in some cases, a content provider may, based on graphics processing unit 720 operating below the lower threshold performance rate, determine that additional graphics processing units should be employed to render subsequent scenes.
- FIG. 8 depicts the scenario in which a content provider adds an additional graphics processing unit based on graphics processing unit 720 of FIG. 7 operating below the lower threshold performance rate.
- FIG. 8 depicts a scene 800, which is one or more scenes subsequent to scene 700 of FIG. 7. Scene 800 is divided into two scene portions 810A and 810B. Additionally, the rendering of scene 800 is distributed across two graphics processing units 820A and 820B.
- scene portion 810A is rendered by graphics processing unit 820A
- scene portion 810B is rendered by graphics processing unit 820B
- the rectangular shapes of scene portions 810A and 810B are selected merely for descriptive purposes and are not limiting.
- a scene may, in accordance with the disclosed techniques, be divided into any number of different portions having any number of different shapes or sizes. It is further noted that, when an additional graphics processing unit is being added, it is not necessarily required to divide portions of previous scenes into equal-sized halves.
- FIG. 8 indicates that graphics processing unit 820A is operating at a rate between the upper and lower threshold performance rates. Based on graphics processing unit 820A operating between the upper and lower thresholds, a content provider may, in some cases, determine no changes are necessary with respect to the graphics processing unit scaling of scene portion 810A. By contrast, FIG. 8 also indicates that graphics processing unit 820B is operating below the lower threshold performance rate. Accordingly, in some cases, a content provider may, based on graphics processing unit 820B operating below the lower threshold performance rate, determine that additional graphics processing units should be employed to render the area corresponding to scene portion 810B in subsequent scenes.
- FIG. 9 depicts the scenario in which a content provider adds an additional graphics processing unit based on graphics processing unit 820B of FIG. 8 operating below the lower threshold performance rate.
- FIG. 9 depicts a scene 900, which is one or more scenes subsequent to scene 800 of FIG. 8.
- Scene 900 is divided into three scene portions 910A, 910B and 910C. Additionally, the rendering of scene 900 is distributed across three graphics processing units 920A, 920B and 920C. In particular, scene portion 910A is rendered by graphics processing unit 920A
- scene portion 910B is rendered by graphics processing unit 920B
- scene portion 910C is rendered by graphics processing unit 920C. It is noted that scene portions 910B and 910C were formed by dividing scene portion 810B of FIG. 8 into two equal half-portions. Scene portion 810B was divided because its respective graphics processing unit 820B was operating below the lower threshold performance rate.
- FIG. 9 indicates that graphics processing unit 920A is operating at a rate between the upper and lower threshold performance rates. Based on graphics processing unit 920A operating between the upper and lower thresholds, a content provider may, in some cases, determine no changes are necessary with respect to the graphics processing unit scaling of scene portion 910A. By contrast, FIG. 9 also indicates that both graphics processing units 920B and 920C are operating above the upper threshold performance rate. As set forth above, these higher performance rates may be due to factors, such as a lower scene complexity and/or a lower number of rendered views associated with respective scene portions 910B and 910C.
- a content provider may, based on graphics processing units 920B and 920C operating above the upper threshold performance rate, determine that fewer graphics processing units should be employed to render the combined area corresponding to scene portions 910B and 910C in subsequent scenes.
- FIG. 10 depicts the scenario in which a content provider removes a graphics processing unit based on graphics processing units 920B and 920C of FIG. 9 operating above the upper threshold performance rate.
- FIG. 10 depicts a scene 1000, which is one or more scenes subsequent to scene 900 of FIG. 9.
- Scene 1000 is divided into two scene portions 1010A and 1010B. Additionally, the rendering of scene 1000 is distributed across two graphics processing units 1020A and 1020B.
- scene portion 1010A is rendered by graphics processing unit 1020A
- scene portion 1010B is rendered by graphics processing unit 1020B
- scene portion 1010B was formed by combining scene portions 910B and 910C of FIG. 9 into a single portion.
- Scene portions 910B and 910C were combined because their respective graphics processing units 920B and 920C were operating above the upper threshold performance rate. The combination of scene portions 910B and 910C may, in some cases, allow one of the two graphics processing units 920B or 920C to be re-assigned to another task that may have a greater need for an additional graphics processing unit.
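The split-and-merge behavior illustrated in FIGS. 7-10 could be approximated by a rebalancing pass like the sketch below. The portion naming scheme, the single-pass policy and the rate placeholders are illustrative assumptions, not part of the disclosed system.

```python
def rebalance(portions, lower, upper):
    """One rebalancing pass over (portion_id, rate) pairs: a portion
    below the lower threshold is split in two; two adjacent portions
    both above the upper threshold are merged into one. Newly created
    portions have no measured rate yet (None)."""
    result = []
    i = 0
    while i < len(portions):
        pid, rate = portions[i]
        if rate < lower:
            # overburdened unit: divide its portion between two units
            result.extend([(pid + "a", None), (pid + "b", None)])
            i += 1
        elif (rate > upper and i + 1 < len(portions)
              and portions[i + 1][1] > upper):
            # both neighbors have spare capacity: consolidate
            result.append((pid + "+" + portions[i + 1][0], None))
            i += 2
        else:
            result.append((pid, rate))
            i += 1
    return result
```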
- It is noted that the scene portions and graphics processing unit distributions shown in FIGS. 7-10 are merely examples.
- the disclosed techniques may allow scenes to be divided into portions in any desired manner.
- the disclosed techniques may also allow graphics processing units to be distributed across scenes or scene portions in any desired manner.
- a number of graphics processing units may be determined based, at least in part on scene complexity information that may, for example, be associated with a particular content item and that may indicate a level of complexity associated with various portions of one or more scenes associated with the content item.
- a number of graphics processing units may be determined, at least in part, by monitoring a number of clients that are participating in the transmission of a particular content item and/or by monitoring or otherwise determining a number of different views that are being rendered in association with the transmission of a particular content item. Furthermore, in some cases, a number of graphics processing units may be determined, at least in part, based on any particular rules or preferences set by a particular content provider or any customer or other entity associated with a content provider. Any combination of these or other appropriate techniques may also be employed.
- graphics processing unit distribution techniques may involve assigning one or more portions of a scene to a single graphics processing unit, it is not required that each portion of a scene be assigned to one and only one graphics processing unit for rendering. In some cases, multiple graphics processing units may collaborate to collectively render a complete scene or any portion of a scene,
- a number of example techniques for distributing rendering of a scene across multiple graphics processing units are described in detail above, in some cases, after different portions of a scene are rendered by multiple graphics processing units, all or portions of the various different renderings may be combined to form one or more resulting views for transmission and display.
- the content provider may employ various techniques for combining renderings received from multiple graphics processing units into each view.
- One example combination technique, which is referred to herein as a stitching technique, may involve inserting various renderings from different graphics processing units into different identified areas within a view. For example, a first rendering by a first graphics processing unit may be inserted at a first identified view area, while a second rendering by a second graphics processing unit may be inserted at a second identified view area.
- Each view area may be identified using, for example, coordinate values identified based on the scene from which the view is generated.
- An example depiction of the stitching technique is illustrated in FIG. 11.
- FIG. 11 depicts four renderings 1130A-D generated by four different graphics processing units 1120A-D.
- rendering 1130A is generated by graphics processing unit 1120A
- rendering 1130B is generated by graphics processing unit 1120B
- rendering 1130C is generated by graphics processing unit 1120C
- rendering 1130D is generated by graphics processing unit 1120D.
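A minimal sketch of the stitching technique described above, assuming renderings are row-major pixel grids inserted at coordinate offsets within the view; the grid representation is an illustrative assumption.

```python
def stitch(view_size, pieces):
    """Stitch renderings into a single view. `view_size` is (width,
    height); each piece is (x, y, rendering), where `rendering` is a
    row-major grid of pixel values inserted at that view area."""
    w, h = view_size
    view = [[None] * w for _ in range(h)]
    for x, y, rendering in pieces:
        for dy, row in enumerate(rendering):
            for dx, pixel in enumerate(row):
                view[y + dy][x + dx] = pixel
    return view
```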
- FIG. 12 depicts four layers 1260A, 1260B, 1260C and 1260D.
- Layer 1260A includes rendering 1230A received from graphics processing unit 1220A.
- Layer 1260B includes rendering 1230B received from graphics processing unit 1220B.
- Layer 1260C includes rendering 1230C received from graphics processing unit 1220C.
- Layer 1260D includes rendering 1230D received from graphics processing unit 1220D.
- An example depiction of the layering technique is illustrated in FIG. 13. In particular, a logical representation 1300 is shown, in which layers 1260A-D are logically represented as being stacked vertically with layer 1260D at the bottom, layer 1260C second from the bottom, layer 1260B third from the bottom and layer 1260A on the top. It is noted that logical representation 1300 is not intended to be a physical structure in which layers 1260A-D are physically stacked on top of and beneath each other. Rather, logical representation 1300 is merely a logical representation that is intended to indicate an example manner in which data corresponding to various portions of a view may be logically associated. Additionally, it should be appreciated that the example order of placement of layers shown in FIG. 13 is merely provided for illustrative purposes and is non-limiting. Referring back to FIG. 13, it is shown that logical representation 1300 is used to generate a resulting view 1350 that includes renderings 1230A-D.
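A minimal sketch of the layering technique, assuming each layer is a pixel grid in which `None` marks transparency and layers are flattened bottom to top; the grid representation and transparency convention are illustrative assumptions.

```python
def composite(layers):
    """Flatten a bottom-to-top stack of layers into one view. Each
    layer is a row-major grid; `None` marks a transparent pixel
    through which lower layers remain visible."""
    result = [row[:] for row in layers[0]]   # start from bottom layer
    for layer in layers[1:]:
        for y, row in enumerate(layer):
            for x, pixel in enumerate(row):
                if pixel is not None:
                    result[y][x] = pixel     # upper layer covers lower
    return result
```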
- FIG. 14 includes clients 1410A-C in communication with content provider 1400.
- Clients 1410A-C may, for example, each participate in a transmission session of a particular content item, such as a video game.
- Clients 1410A-C may, for example, include active clients and spectator clients.
- Active clients are clients that control one or more characters or other entities within the content item.
- Spectator clients are clients that do not control any characters or other entities within the content item.
- Clients 1410A-C each receive a transmission of rendered content item views from a respective one of three streaming servers 1450A-C. It is noted, however, that, while streaming servers 1450A-C are included in the particular example of FIG. 14, the disclosed techniques are not limited to the use of streaming content transmission and may employ any other appropriate form of content delivery.
- the use of a separate respective streaming server 1450A-C for transmission of content to each client 1410A-C may be advantageous, for example, because it may, in some cases, enable improved ability to adjust various transmission characteristics to individual clients based on factors, such as the quality of service associated with a network connection to each client.
- the adjusted transmission characteristics may include, for example, encoding rates, transmission speed, image quality and other relevant factors.
- Each of clients 1410A-C may periodically send client state information updates to content provider 1400.
- content provider 1400 may receive state information only from active clients and not from spectator clients.
- the client state information updates may include, for example, information corresponding to a state of various features, events, actions or operations associated with the presentation of a content item at each of clients 1410A-C.
- the client state information updates may indicate various actions or operations performed by characters or other entities controlled by clients 1410A-C.
- the client state information updates may include any information that may assist in generating one or more views of a scene, such as an indication of characters or other entities controlled by a client, information regarding a switching of control from one character or entity to another and information regarding a connection or disconnection of a client from participation in a content transmission session.
- the client state information updates may also indicate, for example, whether each of clients 1410A-C is operating in a hybrid mode or a full stream mode and/or indicate a switch between operating in such modes.
- Client state information updates transmitted from clients 1410A-C are received at content provider 1400 by input control plane 1480.
- the received state information from each client 1410A-C may be collectively used to adjust shared state information 1470 for the content item being transmitted.
- the adjusting may include, for example, adding, deleting and/or modifying various portions of shared state information 1470.
- the shared state information 1470 may be used in combination with the content item to produce various content item scenes.
- the shared state information 1470 may also be used in combination with the content item to produce one or more views of each content item scene.
- Each content item scene may then be rendered into one or more views by graphics processing unit collection 1490, which may include one or more graphics processing units 1403A-C.
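The adjustment of shared state information from per-client updates might look like the following sketch. The flat per-client dictionary format and the `"remove"` sentinel are illustrative assumptions about the update format, not details of the disclosed system.

```python
def apply_update(shared_state, client_id, update):
    """Merge one client's state update into the shared state by
    adding, modifying or deleting entries for that client."""
    client_state = shared_state.setdefault(client_id, {})
    for key, value in update.items():
        if value == "remove":
            client_state.pop(key, None)   # delete a stale entry
        else:
            client_state[key] = value     # add or modify an entry
    return shared_state
```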
- graphics processing unit collection 1490 may include one or more graphics processing units
- FIG. 14 depicts one graphics processing unit 1403B with a solid border, and the remaining graphics processing units 1403A and 1403C with dashed borders.
- the multiple graphics processing units 1403A-C may, in some cases, be distributed across any number of different machines or devices at any number of different physical locations.
- the number of graphics processing units 1403A-C used in association with a particular content item may be determined, at least in part, by graphics processing unit scaling component 1460.
- Graphics processing unit scaling component 1460 may, in some cases, monitor, command and otherwise communicate with graphics processing unit collection 1490.
- graphics processing unit scaling component 1460 may, as set forth above, monitor various workloads, available capacities, rates at which graphics processing units generate renderings and other performance rates and any other appropriate characteristics associated with graphics processing units 1403A-C.
- Graphics processing unit scaling component 1460 may also, in some cases, communicate with input control plane 1480, shared state information 1470, various content items and various other components in order to determine information, such as scene complexity, a number of connected clients and associated views, associated content provider or customer rules or preferences and any other relevant information.
- graphics processing unit scaling component 1460 is included within input control plane 1480, but graphics processing unit scaling component 1460 may, in some cases, also be a separate component or be included as part of one or more other components.
- graphics processing unit scaling component 1460 may also, in some cases, determine how a total scene rendering load is distributed across the total number of participating graphics processing units 1403. For example, graphics processing unit scaling component 1460 may assign one or more particular graphics processing units to render particular portions of a scene. Some example distributions of various scene portions among various graphics processing units are illustrated in FIGS. 7-10 and described in detail above. As set forth above, in some cases, one or more graphics processing units may be assigned to render particular dimensions or coordinates of a scene, particular views of a scene and/or one or more scene objects such as characters, buildings, vehicles, weapons, trees, water, fire, animals and others.
- determinations described as being made by graphics processing unit scaling component 1460 may, in some cases, instead be made by other components, such as one or more of the graphics processing units 1403A-C or any other of the components depicted in FIG. 14 or other components.
- the various renderings may be combined to form one or more resulting views.
- the combination of these different renderings may, in some cases, be performed by one or more of the graphics processing units 1403A-C and/or by any other appropriate components.
- Various example techniques for combining renderings from multiple graphics processing units, such as stitching and layering techniques, are illustrated in FIGS. 11-13 and described in detail above.
- the one or more rendered views may then be provided to streaming servers 1450A-C for transmission to respective clients 1410A-C.
- various operations may be performed to prepare the rendered views for transmission, such as encoding and compression. These various operations may be performed by components within streaming servers 1450A-C or by various other components.
- At least some of clients 1410A-C may receive different views of a particular scene. Also, in some cases, at least some of clients 1410A-C may receive identical views of a particular scene. For example, clients 1410A and 1410B may receive identical views of a scene, while client 1410C may receive a different view of the same scene.
- FIG. 15 is a flowchart depicting an example procedure for generating one or more views based on shared state information in accordance with the present disclosure.
- a content item transmission session is initiated.
- the content item transmission session may, in some cases, be initiated based on one or more requests from one or more participating client devices.
- the participating client devices may, for example, include any client devices that receive one or more transmissions associated with the content item.
- the participating client devices may, for example, include active clients and spectator clients. Active clients are clients that control one or more characters or other entities within the content item.
- Spectator clients are clients that do not control any characters or other entities within the content item.
- the content item may be transmitted using multimedia streaming or any other appropriate content delivery technology.
- client state information is received by a content provider from one or more of the participating client devices.
- the content provider may receive state information only from active clients and not from spectator clients.
- the client state information received at operation 1512 may include all client state information from a particular client or only a portion of client state information from a particular client.
- the client state information received at operation 1512 may include a client state information update.
- Such a client state information update may, for example, include client state information not previously transmitted to the content provider.
- a client state information update may also, for example, exclude client state information previously transmitted to the content provider.
- client state information may include, for example, information corresponding to a state of various features, events, actions or operations associated with the presentation of a content item at each participating client device.
- client state information may indicate various actions or operations performed by characters or other entities controlled by a client.
- client state information may include any information that may assist in generating one or more views of a scene, such as an indication of characters or other entities controlled by a client, information regarding a switching of control from one character or entity to another and information regarding a connection or disconnection of a client from participation in a content transmission session.
- the client state information updates may also indicate, for example, whether a client is operating in a hybrid mode or a full stream mode and/or indicate a switch between operating in such modes.
- the content provider uses the client state information received at operation 1512 to adjust shared content item state information maintained by the content provider.
- the adjusting performed at operation 1514 may include, for example, adding, deleting and/or modifying various portions of shared content item state information.
- the shared content item state information may, in some cases, reflect the collective content item state based on the most recently received updated information from each connected client.
- the next content item scene is generated.
- the next content item scene may be generated based on, for example, information within the content item itself and also the shared content item state information maintained by the content provider.
- FIGS. 15 and 16 merely depict some example orders in which operations may be performed and are non-limiting. Thus, for example, while FIG. 15 depicts operations 1512 and 1514 as occurring prior to operation 1516, it is not required that these operations be performed in this order in any, each or every case.
- client state information updates may be received simultaneously or non-simultaneously from one or more participating clients periodically at any appropriate scheduled or non-scheduled times. Thus, for example, it is not required that client state information updates be received from any or every client and/or that shared state information be updated prior to every instance of a generation of a new scene.
- the content provider renders one or more views of the scene generated at operation 1516.
- each view of the scene may be a different image associated with the same scene.
- the one or more views of the scene may be rendered based on, for example, information within the content item itself and also the shared content item state information maintained by the content provider.
- at least some participating clients may receive different views of the same scene. Also, in some cases, at least some participating clients may receive an identical view of the same scene.
- multiple different views of a scene may, for example, each depict the scene from a different respective perspective associated with each view. Each view may, for example, be generated from the perspective of one or more respective content item entities.
- the respective entities may, for example, be controlled by or otherwise associated with the one or more clients to whom the rendered view is transmitted.
- the respective entities may include, for example, characters, vehicles or any other entity associated with a content item scene,
- a perspective associated with a view may depict a scene as would be viewed through the eyes of a respective character or from another position associated with a respective entity.
- a perspective associated with a view may depict a scene such that a respective character or other entity is in the center of the view or is otherwise positioned at a location of high interest and/or high visibility within the view.
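One simple way to realize an entity-centered view, as described above, is to center the view rectangle on the entity's position. This sketch assumes a two-dimensional scene and hypothetical coordinate conventions; a three-dimensional implementation would instead place a camera at or near the entity.

```python
def view_rect(entity_pos, view_w, view_h):
    """Return (x0, y0, x1, y1) bounds of a view rectangle that keeps
    the given entity position at the center of the view."""
    x, y = entity_pos
    return (x - view_w / 2, y - view_h / 2,
            x + view_w / 2, y + view_h / 2)
```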
- each of the rendered views is transmitted by the content provider to the participating clients.
- a different respective streaming server may be employed for transmissions to a respective client.
- the amount of data sent to each hybrid mode client may sometimes vary depending on factors such as a quality of a connection between the content provider and the client, which may be based on conditions such as bandwidth, throughput, latency, packet loss rates and the like. For example, for a first hybrid mode client that has a higher quality connection to the content provider, the content provider may transmit to the first hybrid mode client a higher complexity view of a scene that includes a larger amount of data.
- the content provider may transmit to the second hybrid mode client a lower complexity view of the same scene that includes a smaller amount of data.
- the higher complexity view sent to the first hybrid mode client may include more detailed textures, patterns, shapes and other features that may not be included in the lower complexity view sent to the second hybrid mode client.
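A connection-quality-based choice of view complexity for hybrid mode clients might be sketched as follows; the tier names and numeric cutoffs are purely illustrative assumptions, not values from the disclosure.

```python
def choose_view_complexity(bandwidth_mbps, latency_ms):
    """Pick a view complexity tier for a hybrid mode client from
    rough connection-quality measurements."""
    if bandwidth_mbps >= 10 and latency_ms <= 50:
        return "high"    # detailed textures, patterns and shapes
    if bandwidth_mbps >= 3:
        return "medium"
    return "low"         # smaller amount of data per view
```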
- FIG. 16 is a flowchart depicting an example procedure for rendering using one or more graphics processing units in accordance with the present disclosure.
- a content item transmission session is initiated.
- the content item transmission session may, in some cases, be initiated based upon one or more requests from one or more participating client devices and may employ, for example, multimedia streaming or any other appropriate content delivery technology.
- the participating client devices may, for example, include active clients and spectator clients.
- a next content item scene is identified.
- the next content item scene may be generated based on, for example, information within the content item itself and also shared content item state information maintained by the content provider.
- the generated content item scene may, for example, be identified by any combination of one or more graphics processing units, by a graphics processing unit scaling component or by any other appropriate component.
- the scene may, for example, be identified so that it can be accessed and rendered at least in part by one or more graphics processing units.
- graphics processing unit scaling information is obtained.
- the graphics processing unit scaling information obtained at operation 1614 may include any information associated with graphics processing unit scaling operations. As set forth above, such information may include, for example, a rate at which a graphics processing unit generates renderings or other performance rate associated with one or more graphics processing units, information regarding a number of clients participating in the content item transmission session, information regarding a number of different views being rendered in association with the content item transmission session, information regarding availability of additional graphics processing units or other resources, rules or preferences associated with a content provider and/or customer and any other appropriate information.
- graphics processing unit scaling determinations are made.
- the graphics processing unit scaling determinations may, for example, be made based, at least in part, on the graphics processing unit scaling information obtained at operation 1614.
- the graphics processing unit scaling determinations may include, for example, determinations to employ one or more additional graphics processing units for rendering of the transmitted content item, to relinquish one or more graphics processing units from rendering of the transmitted content item and to otherwise re-distribute or re-assign one or more graphics processing units involved with rendering of the transmitted content item.
- the graphics processing unit scaling determinations may include, for example, determinations regarding a number of employed graphics processing units and also determinations regarding how to distribute various portions of the scene generated at operation 1612 among the employed graphics processing units.
- one or more graphics processing units are employed to generate renderings in association with the scene. The renderings may be generated in accordance with the graphics processing unit scaling determinations made at operation 1616. As set forth above, if multiple graphics processing units are employed at operation 1618, the multiple graphics processing units may, in some cases, generate renderings associated with the scene at least partially simultaneously with one another. Also, in some cases, the use of multiple graphics processing units may reduce the overall time required for rendering of a scene as compared to when only a single graphics processing unit is employed to render the scene.
- the renderings generated at operation 1618 are associated with one or more views of the scene.
- a single graphics processing unit may be employed to generate a single view of the scene.
- multiple graphics processing units may each generate a respective view of the scene.
- multiple graphics processing units may combine to form a single view of the scene.
- multiple graphics processing units may combine to form multiple views of the scene.
- any combination of the above or other example scenarios may also be employed. Accordingly, operation 1620 may include, for example, determining and/or identifying which portions of the generated renderings will be incorporated into each rendered view that is generated based on the scene.
- a rendering or portion of a rendering may, for example, be associated with each view that includes the rendering or portion of the rendering.
- Operation 1620 may also include, for example, combining portions of renderings from multiple graphics processing units into one or more views.
- Some example techniques for combining renderings from multiple graphics processing units into a view, such as stitching and layering techniques, are illustrated in FIGS. 11-13 and described in detail above.
- combining techniques, such as stitching and layering, may be wholly or partially repeated for each of the multiple views.
- the one or more views of the scene are transmitted by the content provider to one or more participating clients. As set forth above, in some cases, a different respective streaming server may be employed for transmissions to a respective client. As also set forth above, in some cases, multiple different views may be formed in association with a scene. In some of these cases, each of the multiple different views may include a different respective image associated with the scene. Thus, in some cases, multiple different images may be formed at operation 1620 and transmitted at operation 1622.
- renderings from different graphics processing units may be combined together to form one or more views of a scene.
- Some of the examples described above may indicate that the renderings from different graphics processing units may be combined together by the content provider.
- the renderings from different graphics processing units may be combined together by a client in accordance with the disclosed techniques.
- a content provider may transmit renderings from multiple graphics processing units to a client without first combining the multiple renderings into one or more views. The client may then receive the renderings and combine the renderings into one or more views at the client. The client may employ any combination of the stitching and layering techniques described above or any other appropriate techniques to combine the received renderings.
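To illustrate, a client-side combination step might look roughly like the following sketch, where renderings are modeled as 2-D lists of pixel values. The half-and-half stitching layout and the transparency convention are illustrative assumptions, not the patent's own implementation.

```python
# Minimal sketch of combining renderings received from multiple graphics
# processing units into a single view, as a client might do. Renderings
# are modeled as 2-D lists of pixel values; layouts are assumptions.

def stitch_horizontal(left, right):
    """Stitch two renderings of equal height side by side into one view."""
    if len(left) != len(right):
        raise ValueError("renderings must have equal height to stitch")
    return [l_row + r_row for l_row, r_row in zip(left, right)]

def layer(base, overlay, transparent=0):
    """Layer one rendering over another: overlay pixels replace base
    pixels except where the overlay is transparent."""
    return [
        [b if o == transparent else o for b, o in zip(b_row, o_row)]
        for b_row, o_row in zip(base, overlay)
    ]
```

Either technique, or a combination of both, could be applied by the client to each set of received renderings.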
- data associated with multiple different views of a scene may be combined into a single data collection, such as a render target.
- An example system for employing a data collection for multiple view generation in accordance with the present disclosure is illustrated in FIG. 17.
- content provider 1700 includes graphics processing unit 1702, which, as described above, may be used to generate data associated with multiple views 1730A-C of a content item scene.
- three views 1730A-C are transmitted to three clients 1750A-C. In particular, view 1730A is transmitted to client 1750A, view 1730B is transmitted to client 1750B and view 1730C is transmitted to client 1750C.
- graphics processing unit 1702 and/or other components generate a data collection 1710 that includes and/or stores data associated with multiple different views.
- Data collection 1710 may be, for example, a render target or another collection of data.
- render target refers to a collection of data associated with one or more renderings or other representations of information associated with a scene.
- Data collection 1710 may be generated by, for example, including within the data collection 1710 data associated with one or more renderings or other representations of information associated with a scene.
- the data included within the data collection 1710 may include, for example, data corresponding to manipulated geometry, vertices, pixels, colors, textures, shading and any other data associated with views of a scene.
- the data collection 1710 includes multiple sections 1720A-C each associated with a respective view 1730A-C.
- section 1720A is associated with view 1730A
- section 1720B is associated with view 1730B
- section 1720C is associated with view 1730C
- FIG. 17 depicts three sections 1720A-C associated with three views 1730A-C
- a data collection in accordance with the disclosed techniques may include any number of different sections associated with any number of different views.
- encoding components 1740A-C may each extract data from a respective section 1720A-C of data collection 1710 associated with a respective view 1730A-C. In particular, encoding components 1740A may extract data from section 1720A, encoding components 1740B may extract data from section 1720B and encoding components 1740C may extract data from section 1720C. Transmission components 1741A-C may then each respectively transmit views 1730A-C to clients 1750A-C.
- each of clients 1750A-C may have a respective dedicated streaming server that enables transmission of a respective view 1730A-C to each client 1750A-C.
- Each dedicated respective streaming server may, in some cases, include respective encoding components and transmission components.
- a dedicated respective streaming server for client 1750A may, in some cases, include encoding components 1740A and transmission components 1741A.
- Input control plane 1780 and/or another component may, for example, be employed to determine a number of views being generated in connection with a given scene. As set forth in detail above, shared state information from clients 1750A-C may, in some cases, be employed to in part determine information associated with multiple views. Input control plane 1780 and/or another component may also, for example, assist with provisioning data collection 1710 to include sections 1720A-C, which are each associated with a respective one of the multiple views 1730A-C. Each of sections 1720A-C may, for example, be defined by parameters such as various dimensions, data addresses, data ranges, data quantities, sizes and other parameters that would allow one portion of data to be distinguishable from another.
- input control plane 1780 may determine and inform graphics processing unit 1702 and/or encoding components 1740A-C of the parameters associated with the data collection 1710 and sections 1720A-C.
- the parameters may also be determined, in some cases, by the graphics processing unit 1702 or by another component.
- each section 1720A-C may be equally sized and may have a length L and a width W. This may result in data collection 1710 having a size of W * 3L to account for the length of each of the three sections 1720A-C. In some cases, the data collection may include additional information that may result in the data collection exceeding a size of W * 3L. In some cases, each of sections 1720A-C may have different sizes with respect to one another. The use of sections with different sizes may be advantageous, for example, when views 1730A-C are associated with different resolutions.
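The sizing arithmetic described above can be expressed directly. The function names below are illustrative assumptions; the W * 3L relationship itself comes from the disclosure.

```python
# Illustrative arithmetic for equally sized render-target sections: three
# sections of length L and width W stacked end to end give a data
# collection of size W * 3L. Function names are assumptions.

def data_collection_size(section_length, section_width, num_sections):
    """Total size of a data collection made of equally sized sections."""
    return section_width * (num_sections * section_length)

def section_offset(section_index, section_length):
    """Starting row of a given section within the stacked data collection."""
    return section_index * section_length
```

For three 1280 x 720 sections, for example, this gives a 1280 x 2160 data collection, with section 2 beginning at row 1440.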
- different clients and/or different applications on a client may present video at different resolutions with respect to one another.
- higher resolution views may have associated data collection sections with larger sizes
- lower resolution views may have associated data collection sections with smaller sizes.
- the use of a larger data collection section size for a higher resolution view may, for example, enable an increased quantity of data to be included in the larger section, which may assist in producing a higher resolution for the view.
- input control plane 1780 or another component may determine a resolution associated with each of the views based on information provided by each client 1750A-C.
- input control plane 1780 or another component may then provision data collection 1710 and sections 1720A-C based on the resolution
- the term data collection generation component is used herein to refer to any component that is employed at least in part to assist in the generation of data collection 1710.
- Example data collection generation components may include, for example, input control plane 1780, graphics processing unit 1702 and any other components that assist in the generation of data collection 1710.
- One or more of the data collection generation components may, for example, determine a number of views of a scene to be generated.
- a data collection generation component may also be referred to as a render target generation component.
- multiple different views of a scene may, for example, each depict the scene from a different respective perspective associated with each view. Each view may, for example, be generated from the perspective of one or more respective content item entities.
- the respective entities may, for example, be controlled by or otherwise associated with the one or more clients to whom the rendered view is transmitted.
- the respective entities may include, for example, characters, vehicles or any other entity associated with a content item scene.
- a perspective associated with a view may depict a scene as would be viewed through the eyes of a respective character or from another position associated with a respective entity.
- a perspective associated with a view may depict a scene such that a respective character or other entity is in the center of the view or is otherwise positioned at a location of high interest and/or high visibility within the view.
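The two per-entity perspectives just described (through the entity's eyes, or with the entity centered in view) might be derived roughly as follows. The camera parameters (head height, follow distance and height) and function names are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of deriving a per-client view perspective from the
# position of the entity (for example, a character) the client controls.
# The camera model and all parameters are illustrative assumptions.

def first_person_camera(entity_pos, facing, head_height=1.7):
    """Return (eye, target) for a view through the entity's eyes."""
    eye = (entity_pos[0], entity_pos[1] + head_height, entity_pos[2])
    target = (eye[0] + facing[0], eye[1] + facing[1], eye[2] + facing[2])
    return eye, target

def centered_camera(entity_pos, distance=10.0, height=5.0):
    """Return (eye, target) for a view with the entity at the center."""
    eye = (entity_pos[0], entity_pos[1] + height, entity_pos[2] - distance)
    return eye, entity_pos
```

Each client's view would then be rendered with the camera derived from that client's own entity, yielding a different perspective per view.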
- A first example data collection including data associated with multiple views in accordance with the present disclosure is illustrated in FIG. 18. As shown, data collection 1810 includes sections 1820A-C. Sections 1820A-C depict representations 1850A-C, 1860A-C and 1870A-C from three different perspectives associated with three different views 1830A-C of a scene 1805.
- section 1820A includes representations 1850A, 1860A and 1870A
- section 1820B includes representations 1850B, 1860B and 1870B
- section 1820C includes representations 1850C, 1860C and 1870C.
- Representations 1850A-C, 1860A-C and 1870A-C are representations of objects 1850, 1860 and 1870 included within scene 1805.
- representations 1850A-C are representations of object 1850
- representations 1860A-C are representations of object 1860
- representations 1870A-C are representations of object 1870.
- that objects 1850, 1860 and 1870 and representations 1850A-C, 1860A-C and 1870A-C may include any number of different textures and colors and other visual effects. However, for purposes of simplicity, these visual effects are not shown in FIGS. 18-20.
- a graphics processing unit may form representations of an object in each section of a data collection before moving on to form representations of another object.
- FIG. 19 shows data collection 1810 of FIG. 18 at three different stages of formation.
- Stage 1910A is a first stage of formation, which occurs prior to second stage 1910B and third stage 1910C. As shown, at first stage 1910A, only representations 1850A-C associated with object 1850 have been formed in sections 1820A-C.
- the formation of representations 1850A-C may include the performance of operations, such as various geometry manipulations, coloring, texturing and shading.
- representation 1850A may first be formed in section 1820A.
- the formation of representation 1850A may include, for example, loading geometry associated with object 1850 in scene 1805 and manipulating the geometry of object 1850 such that it is presented from a perspective associated with view 1830A.
- the formation of representation 1850A may also include, for example, applying various colors, textures and/or shaders to representation 1850A.
- the application of textures to representation 1850A may include, for example, loading one or more stored texture files associated with object 1850.
- the application of shaders to representation 1850A may include, for example, loading one or more shader programs associated with object 1850.
- representation 1850B may be formed in section 1820B.
- representation 1850B is formed after representation 1850A
- the geometry, textures, shaders and various other programs and information associated with object 1850 may, in some cases, already be loaded by the graphics processing unit.
- the formation of representation 1850B may, in some cases, require significantly less loading and other retrieval operations than were required to form representation 1850A
- the formation of representation 1850B may include, for example, manipulation of the already loaded geometry of object 1850 such that it is presented from a perspective associated with view 1830B.
- the formation of representation 1850B may also include, for example, applying various colors, textures and/or shaders to representation 1850B.
- the textures and shaders applied to representation 1850B may include, for example, previously loaded textures and shaders that were previously used for the formation of representation 1850A.
- representation 1850C may be formed in section 1820C.
- representation 1850C is formed after representations 1850A and 1850B; the geometry, textures, shaders and various other programs and information associated with object 1850 may, in some cases, already be loaded by the graphics processing unit
- the formation of representation 1850C may include, for example, manipulation of the already loaded geometry of object 1850 such that it is presented from a perspective associated with view 1830C
- the formation of representation 1850C may also include, for example, applying various colors, textures and/or shaders to representation 1850C. As set forth above, the textures and shaders applied to representation 1850C may include, for example, previously loaded textures and shaders that were previously used for the formation of representations 1850A and 1850B.
- Stage 1910B of FIG. 19 is a second stage of formation, which occurs subsequent to first stage 1910A and prior to third stage 1910C. As shown, at second stage 1910B, representations 1850A-C associated with object 1850 and representations 1860A-C associated with object 1860 have been formed in sections 1820A-C. In some cases, representations 1860A-C may be formed by first forming representation 1860A followed by 1860B followed by 1860C. The formation of representation 1860A may include, for example, loading of the geometry associated with object 1860, loading of one or more textures associated with object 1860 and loading of one or more shaders associated with object 1860.
- representations 1860B and 1860C are formed after representation 1860A
- the geometry, textures, shaders and various other programs and information associated with object 1860 may, in some cases, already be loaded by the graphics processing unit.
- the formation of representations 1860B and 1860C may, in some cases, require significantly less loading and other retrieval operations than were required to form representation 1860A.
- Stage 1910C is a third stage of formation, which occurs subsequent to first stage 1910A and second stage 1910B. As shown, at third stage 1910C, representations 1850A-C, 1860A-C and 1870A-C have been formed in sections 1820A-C. In some cases, representations 1870A-C may be formed by first forming representation 1870A followed by 1870B followed by 1870C. The formation of representation 1870A may include, for example, loading of the geometry associated with object 1870, loading of one or more textures associated with object 1870 and loading of one or more shaders associated with object 1870.
- representations 1870B and 1870C are formed after representation 1870A, and the geometry, textures, shaders and various other programs and information associated with object 1870 may, in some cases, already be loaded by the graphics processing unit.
- the formation of representations 1870B and 1870C may, in some cases, require significantly less loading and other retrieval operations than were required to form representation 1870A.
- other graphics operations may be performed in accordance with the formation of any or all of representations in sections 1820A-C. Such other graphics operations may include, for example, various other transformation operations, lighting, clipping, scan conversion, rasterization, blurring and the like.
- FIG. 19 depicts an example in which a graphics processing unit forms representations of an object in each section of a data collection before moving on to form representations of another object.
- this formation sequence may, in some cases, be advantageous by, for example, reducing or eliminating a need to repeatedly retrieve or load geometry, textures, shaders and other programs or information associated with each object. In some cases, at least some of the geometry, textures, shaders and other programs or information may be loaded only once for the first representation formed in association with each object.
- Subsequent representations of the same object may then be formed without re-loading the already loaded geometry, textures, shaders and other programs or information. In some cases, each instance of loading of geometry, textures or shaders may cause the graphics processing unit to undergo a processing state change. Such state changes may increase the latency associated with generation of multiple views of a scene.
- use of a formation sequence such as illustrated in FIG. 19 may significantly reduce the amount of state changes required to generate multiple views of a scene. For example, in some cases, the number of state changes may be reduced by a factor
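The state-change savings of the object-first formation order can be illustrated with a simple count. The cost model below (one state change per change of loaded object, with already-loaded assets reused) is an illustrative assumption that mirrors the loading behavior described above.

```python
# Sketch of why the object-first formation order reduces state changes.
# Assumption: loading an object's geometry, textures and shaders costs one
# state change, and loaded assets are reused until a new object is loaded.

def count_state_changes(order):
    """Count loads when representations are formed in the given order,
    where `order` is a sequence of object identifiers."""
    changes, loaded = 0, None
    for obj in order:
        if obj != loaded:
            changes += 1
            loaded = obj
    return changes

objects, views = ["A", "B", "C"], 3
# Object-first (as in FIG. 19): A A A B B B C C C
object_major = [obj for obj in objects for _ in range(views)]
# View-first alternative: A B C A B C A B C
view_major = [obj for _ in range(views) for obj in objects]
```

Under this model the object-first order incurs one state change per object (3), while the view-first order incurs one per representation (9): the savings grow with the number of views.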
- data collection 1810 of FIG. 18 includes equal size sections 1820A-C.
- a data collection may include sections having different respective sizes.
- Each of the different respective sizes may include or may be capable of including different quantities of data with respect to one another.
- larger sized sections may, in some cases, include or may be capable of including a larger quantity of data in comparison to smaller sized sections.
- the size of each section and/or the quantity of data included in each section may, in some cases, be determined based on a resolution corresponding to one or more clients that receive a view with which the section is associated.
- FIG. 20 depicts a data collection 2010 that includes sections 2020A-C each having different sizes with respect to one another.
- Section 2020A is associated with high resolution view 2030A
- section 2020B is associated with moderate resolution view 2030B
- section 2020C is associated with low resolution view 2030C
- Section 2020A may include a larger quantity of data than section 2020B, which, in turn, may include a larger quantity of data than section 2020C.
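The differently sized sections of FIG. 20 could be provisioned along the following lines. The back-to-back packing, the example resolutions and the bytes-per-pixel figure are illustrative assumptions, not values given by the disclosure.

```python
# Illustrative provisioning of differently sized data-collection sections
# based on each client's view resolution. All parameters are assumptions.

def provision_sections(resolutions, bytes_per_pixel=4):
    """Return (offset, size) for each section, packed back to back and
    sized to hold one uncompressed frame at the client's resolution."""
    sections, offset = [], 0
    for width, height in resolutions:
        size = width * height * bytes_per_pixel
        sections.append((offset, size))
        offset += size
    return sections
```

A high, moderate and low resolution client would thus receive sections of decreasing size, matching the relationship among sections 2020A-C.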
- FIG. 21 is a flowchart depicting an example procedure for employing a data collection for multiple view generation in accordance with the present disclosure.
- the flowchart of FIG. 21 is directed to a particular example in which a data collection includes three sections that are respectively associated with three views of a current scene. It is once again noted, however, that the disclosed techniques may be employed in association with a data collection that includes any number of different sections that are respectively associated with any number of different views.
- a current scene is produced.
- a scene may be produced at least in part by a content item, such as a video game and/or other components.
- the current scene may be produced based upon, for example, information in the content item and state information provided by one or more clients.
- the data collection arrangement information may include, for example, a number of views being generated for each scene and/or the current scene of the content item, a resolution associated with each view and any other information that may be used to provision the data collection.
- the content provider may employ a number of different techniques to determine the number of views being generated. For example, in some cases, each different client to which a content item is transmitted may receive its own respective view. Also, in some cases, each client that controls or is otherwise associated with a different character or other entity may receive its own respective view.
- certain clients that control different entities may, in some cases, receive an identical view. Also, in some cases, clients that control the same character or another entity may receive an identical view. In some cases, each client that employs or is otherwise associated with a different display resolution may receive its own respective view. In some cases, a number of views may be determined based on state information or other information provided by one or more clients.
- a data collection is arranged based on the arrangement information identified at operation 2106.
- the arrangement of the data collection may include, for example, determining a number of sections to be included in the data collection.
- the arrangement of the data collection may also include, for example, defining parameters, such as various dimensions, data addresses, data ranges, data quantities, sizes and other parameters associated with each section.
- the size of each section may be determined based on a resolution associated with one or more clients that receive a view corresponding to each section.
- an input control plane and/or another component may determine and inform a graphics processing unit and/or various encoding and transmission components of the dimensions or other parameters associated with the data collection and its sections.
- the dimensions or other parameters may also be determined, in some cases, by a graphics processing unit or by another component.
- operations 2106 and 2108 need not necessarily be repeated for each different scene that is produced in association with a playing of a content item.
- operations 2106 and 2108 may be performed at the initiation of a playing of a content item, and the arrangement of each data collection for each scene may remain constant for as long as the arrangement information remains substantially consistent from one scene to the next.
- certain changes may occur that may cause the data collection to be rearranged for the next scene that is produced after the changes are detected. For example, when it is detected that one or more clients have joined or terminated their participation in a playing of a video game, then a data collection for a subsequent scene may be re-arranged based on the detection of this information. In particular, for example, the data collection for the subsequent scene may be re-arranged to include additional or fewer sections as necessary based on the information.
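A minimal sketch of that re-arrangement step follows, modeling the arrangement simply as one section per active client. The membership model and all names are illustrative assumptions.

```python
# Sketch of re-arranging the data collection between scenes when clients
# join or terminate participation. One section per active client is an
# illustrative assumption; all names are hypothetical.

def arrange_sections(active_clients):
    """Map each active client to a section index in the data collection."""
    return {client: index for index, client in enumerate(sorted(active_clients))}

def rearrange_on_change(current, joined=(), departed=()):
    """Produce the arrangement for the next scene after membership changes."""
    clients = (set(current) | set(joined)) - set(departed)
    return arrange_sections(clients)
```

Between scenes with no membership changes, `rearrange_on_change(current)` simply reproduces the existing arrangement, matching the observation above that operations 2106 and 2108 need not be repeated for every scene.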
- a current object is iterated such that a current object is set to be a next object.
- the current object is the object whose representations are formed in the data collection at operations 2112-2116.
- a first iteration of operation 2110 may include setting object 1850 to be a current object
- a second iteration of operation 2110 may include setting object 1860 to be a current object
- operation 2110 is included for purposes of simplicity to clarify to the reader that operations in the process of FIG. 21 may be repeated for one or more objects in the current scene.
- Operation 2110 need not necessarily require any processing or computation by the content provider.
- the order may be determined by a content item, by a graphics processing unit or by another component.
- the order may be determined based on factors such as a relative depth of the objects with respect to perspectives associated with one or more views or any other appropriate factors.
- a representation of the current object is formed in the first section of the data collection.
- the first iteration of operation 2112 may include forming of representation 1850A in section 1820A.
- Sub-operation 2112A indicates that operation 2112 may include, for example, loading and using geometry, textures and shaders associated with the current object.
- the geometry of the current object may be loaded and manipulated to form a representation presented from a perspective corresponding to a view associated with the first section. Additionally, various textures and shaders associated with the current object may be loaded and applied to the representation being formed in the first section. Any number of other additional or alternative operations may also be performed in order to form the representation in the first section.
- a representation of the current object is formed in the second section of the data collection.
- the first iteration of operation 2114 may include forming of representation 1850B in section 1820B.
- Sub-operation 2114A indicates that operation 2114 may include, for example, using already loaded geometry, textures and shaders associated with the current object.
- the geometry of the current object that was loaded at sub-operation 2112A may be manipulated to form a representation presented from a perspective corresponding to a view associated with the second section.
- various textures and shaders associated with the current object that were loaded at sub-operation 2112A may be applied to the representation being formed in the second section. Any number of other additional or alternative operations may also be performed in order to form the representation in the second section.
- a representation of the current object is formed in the third section of the data collection.
- the first iteration of operation 2116 may include forming of representation 1850C in section 1820C.
- Sub-operation 2116A indicates that operation 2116 may include, for example, using already loaded geometry, textures and shaders associated with the current object.
- the geometry of the current object that was loaded at sub-operation 2112A may be manipulated to form a representation presented from a perspective corresponding to a view associated with the third section. Additionally, in some cases, various textures and shaders associated with the current object that were loaded at sub-operation 2112A may be applied to the representation being formed in the third section. Any number of other additional or alternative appropriate geometric operations may also be performed in order to form the representation in the third section.
- sub-operations 2112A, 2114A and 2116A are merely intended to identify some example sub-operations that may be performed respectively at operations 2112, 2114 and 2116 and that all such sub-operations are not required and do not necessarily include a complete list of all sub-operations that may be performed in all cases.
- operations 2114 and 2116 may include the use of some geometry, textures, shaders and/or other components that were not previously loaded at operation 2112 or another operation.
- the data collection may be generated by, for example, performing operations 2112-2116 for each object in the scene to the extent appropriate for each view. In some cases, however, the data collection may be generated without necessarily forming object representations in the order depicted in FIG. 21. In some cases, certain objects within a scene need not necessarily be included within a particular view or have a corresponding representation formed within a section of the data collection associated with the particular view. This may occur, for example, when an object is positioned outside of a viewing area associated with the particular view.
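The loop just described (operations 2110-2116, repeated per object) can be condensed into the following sketch. `load_assets` and `render` are hypothetical stand-ins for the actual graphics work; the structure, not the rendering, is the point, and all names are illustrative assumptions.

```python
# Condensed sketch of the FIG. 21 loop: for each object in the current
# scene, assets are loaded once (as in sub-operation 2112A) and then a
# representation is formed in every section (operations 2112-2116) using
# the already loaded assets. All names are illustrative assumptions.

def generate_data_collection(scene_objects, view_perspectives, load_assets, render):
    """Return a mapping from each view to its section's representations."""
    collection = {view: [] for view in view_perspectives}
    for obj in scene_objects:
        assets = load_assets(obj)  # loaded once per object...
        for view in view_perspectives:
            # ...then reused for every section, avoiding repeated loading
            collection[view].append(render(obj, assets, view))
    return collection
```

Because `load_assets` is called once per object rather than once per representation, the sketch exhibits the reduced loading that the object-first order is described as providing.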
- each of the first, second and third views may be transmitted to different respective first, second and third clients. In other cases, one or more of the views may be transmitted to a single client.
- each of the first, second and third views may be encoded and transmitted by dedicated respective encoding and transmission components that may, for example, include or be included within dedicated respective streaming servers.
- Each of the processes, methods and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computers or computer processors.
- the code modules may be stored on any type of non-transitory computer-readable medium or computer storage device, such as hard drives, solid state memory, optical discs and/or the like.
- the processes and algorithms may be implemented partially or wholly in application-specific circuitry.
- the results of the disclosed processes and process steps may be stored, persistently or otherwise, in any type of non-transitory computer storage such as, e.g., volatile or non-volatile storage.
- One or more compute nodes storing instructions that, upon execution by the one or more compute nodes, cause the one or more compute nodes to perform operations comprising:
- a computer-implemented method of generating, by one or more compute nodes, at least part of a first image comprising:
- the combining comprises inserting the at least part of the first rendering into a first identified area of the first view and inserting the at least part of the second rendering into a second identified area of the first view.
- the combining is performed in accordance with a representation that includes multiple layers, wherein a first layer corresponds to the at least part of the first rendering, and wherein a second layer corresponds to the at least part of the second rendering.
- One or more non-transitory computer-readable storage media having stored thereon instructions that, upon execution on at least one computing node, cause the at least one computing node to perform operations comprising: determining to employ a first number of graphics processing units to generate renderings based on a first scene of a content item, wherein the first number is greater than one;
- One or more compute nodes storing instructions that, upon execution by the one or more compute nodes, cause the one or more compute nodes to perform operations comprising:
- a computer-implemented method of generating, by one or more compute nodes, a first view of a first scene comprising:
- One or more compute nodes comprising:
- one or more render target generation components configured to:
- the first view presents the scene from a first perspective associated with a first client and wherein the second view presents the scene from a second perspective associated with a second client, wherein the scene comprises at least a first object and a second object;
- a render target that comprises a first section associated with the first view and a second section associated with the second view by at least:
- a first encoding component configured to encode data from at least part of the first section of the render target to form an encoded first view
- a second encoding component configured to encode data from at least part of the second section of the render target to form an encoded second view
- a first transmission component configured to transmit the encoded first view to the first client
- a second transmission component configured to transmit the encoded second view to the second client.
- a computer-implemented method of generating, by one or more compute nodes, at least a first view and a second view of a scene of a content item comprising:
- the first view presents the scene from a first perspective associated with a first client and wherein the second view presents the scene from a second perspective associated with a second client, wherein the scene comprises one or more objects;
- generating a data collection that comprises a first section associated with the first view and a second section associated with the second view, wherein the first section comprises a representation of at least one of the one or more objects formed from the first perspective, and wherein the second section comprises a representation of at least one of the one or more objects formed from the second perspective; extracting data from at least part of the first section of the data collection to form the first view;
- forming the first representation of the first object comprises performing a manipulation on a geometry associated with the first object, applying one or more textures to the first representation of the first object and applying one or more shaders to the first representation of the first object.
- forming the first representation of the first object comprises loading a shader and applying the shader to the first representation of the first object;
- forming the second representation of the first object comprises applying the shader to the second representation of the second object without re-loading the shader;
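The shader-reuse clauses above can be sketched as follows (illustrative names only; a toy stand-in for a real shader pipeline): the shader is loaded exactly once and then applied to the representations formed for both views without being re-loaded.

```python
# Illustrative sketch of shader reuse across views; all names are
# hypothetical and the "shader" is a toy stand-in.
load_count = 0

def load_shader(name):
    """Simulate an expensive shader load; count how often it happens."""
    global load_count
    load_count += 1
    return lambda geometry: [v * 2 for v in geometry]  # toy "shading"

shader = load_shader("toy_shader")       # loaded exactly once

first_view_geometry = [1, 2, 3]          # object from the first perspective
second_view_geometry = [3, 2, 1]         # same object, second perspective

first_representation = shader(first_view_geometry)
second_representation = shader(second_view_geometry)  # no re-load needed
```

Applying the already-loaded shader to the second representation avoids repeating the load, which is the efficiency the clause points at.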
- the one or more compute nodes comprise a first streaming server that transmits the first view to the first client and a second streaming server that transmits the second view to the second client.
- One or more non-transitory computer-readable storage media having stored thereon instructions that, upon execution on at least one compute node, cause the at least one compute node to perform operations comprising:
- the first view presents the scene from a first perspective associated with a first client and wherein the second view presents the scene from a second perspective associated with a second client, wherein the scene comprises one or more objects;
- generating a data collection that comprises a first section associated with the first view and a second section associated with the second view, wherein the first section comprises a representation of at least one of the one or more objects formed from the first perspective, and wherein the second section comprises a representation of at least one of the one or more objects formed from the second perspective;
- the non-transitory computer-readable storage media of clause 18, wherein forming the first representation of the first object comprises performing a manipulation on a geometry associated with the first object, applying one or more textures to the first representation of the first object and applying one or more shaders to the first representation of the first object.
- forming the first representation of the first object comprises loading a shader and applying the shader to the first representation of the first object;
- forming the second representation of the first object comprises applying the shader to the second representation of the second object without reloading the shader.
- arranging the data collection to include a number of sections determined based on the number of views of the scene being generated
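The clause above, arranging the data collection with a number of sections determined by the number of views, can be sketched as a hypothetical layout that allocates one fixed-size slice of a flat buffer per view (section size and names are illustrative assumptions, not from the patent):

```python
# Hypothetical sketch: one section per view in a flat buffer, with the
# section count derived from the number of views being generated.
SECTION_BYTES = 16  # toy section size

def arrange_data_collection(num_views):
    """Allocate a flat buffer with one section per view and return
    (buffer, offsets) so each view's section can be located later."""
    buffer = bytearray(SECTION_BYTES * num_views)
    offsets = [i * SECTION_BYTES for i in range(num_views)]
    return buffer, offsets

buffer, offsets = arrange_data_collection(num_views=3)
# Three views -> three sections at offsets 0, 16 and 32.
```

Extracting a given view then amounts to slicing the buffer at that view's offset, which matches the "extracting data from at least part of the first section" language in the earlier clauses.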
- some or all of the systems and/or modules may be implemented or provided in other ways, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), etc.
- ASICs application-specific integrated circuits
- controllers e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers
- FPGAs field-programmable gate arrays
- CPLDs complex programmable logic devices
- Some or all of the modules, systems and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, a memory, a network or a portable media article to be read by an appropriate drive or via an appropriate connection.
- the systems, modules and data structures may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames).
- generated data signals e.g., as part of a carrier wave or other analog or digital propagated signal
- Such computer program products may also take other forms in other embodiments. Accordingly, the present invention may be practiced with other computer system configurations.
- Conditional language used herein such as, among others, “can,” “could,” “might,” “may,” “e.g.” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- Computer Security & Cryptography (AREA)
- General Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Information Transfer Between Computers (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
Claims
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/077,180 US20150133216A1 (en) | 2013-11-11 | 2013-11-11 | View generation based on shared state |
US14/077,149 US20150130814A1 (en) | 2013-11-11 | 2013-11-11 | Data collection for multiple view generation |
US14/077,165 US20150130815A1 (en) | 2013-11-11 | 2013-11-11 | Multiple parallel graphics processing units |
PCT/US2014/065055 WO2015070235A1 (en) | 2013-11-11 | 2014-11-11 | Data collection for multiple view generation |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3068504A1 true EP3068504A1 (en) | 2016-09-21 |
EP3068504A4 EP3068504A4 (en) | 2017-06-28 |
Family
ID=53042246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14860984.5A Withdrawn EP3068504A4 (en) | 2013-11-11 | 2014-11-11 | Data collection for multiple view generation |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP3068504A4 (en) |
JP (1) | JP2017504986A (en) |
CN (1) | CN106132494A (en) |
CA (1) | CA2929588A1 (en) |
WO (1) | WO2015070235A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018206605A1 (en) * | 2017-05-08 | 2018-11-15 | Trimoo Ip Europe B.V. | Transport simulation in a location-based mixed-reality game system |
EP3400992A1 (en) * | 2017-05-08 | 2018-11-14 | Trimoo IP Europe B.V. | Providing a location-based mixed-reality experience |
EP3400991A1 (en) * | 2017-05-09 | 2018-11-14 | Trimoo IP Europe B.V. | Transport simulation in a location-based mixed-reality game system |
US11971865B2 (en) | 2017-09-11 | 2024-04-30 | Bentley Systems, Incorporated | Intelligent model hierarchy for infrastructure modeling |
CN110059329B (en) * | 2018-12-05 | 2023-06-20 | 中国航空工业集团公司西安飞机设计研究所 | Comprehensive simulation method and comprehensive simulation system for energy of refined electromechanical system |
CN113449228A (en) * | 2020-03-24 | 2021-09-28 | 北京沃东天骏信息技术有限公司 | Page rendering method and device |
CN117651159B (en) * | 2024-01-29 | 2024-04-23 | 杭州锐颖科技有限公司 | Automatic editing and pushing method and system for motion real-time video |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1154169A (en) * | 1994-05-27 | 1997-07-09 | 碧特斯特雷姆有限公司 | Apparatus and methods for creating and using portable fonts |
JP2000132705A (en) * | 1998-10-26 | 2000-05-12 | Square Co Ltd | Image processor, image processing method, game device and recording medium |
US20060036756A1 (en) * | 2000-04-28 | 2006-02-16 | Thomas Driemeyer | Scalable, multi-user server and method for rendering images from interactively customizable scene information |
US20030212742A1 (en) * | 2002-05-07 | 2003-11-13 | Hochmuth Roland M. | Method, node and network for compressing and transmitting composite images to a remote client |
JP3883560B2 (en) * | 2005-10-31 | 2007-02-21 | 株式会社バンダイナムコゲームス | GAME DEVICE AND INFORMATION STORAGE MEDIUM |
JP4807499B2 (en) * | 2006-02-23 | 2011-11-02 | セイコーエプソン株式会社 | Image processing system, display device, program, and information storage medium |
EP1837060A1 (en) * | 2006-03-21 | 2007-09-26 | In Fusio (S.A.) | Method for displaying interactive video content from a video stream in a display of a user device |
US8187104B2 (en) * | 2007-01-29 | 2012-05-29 | Sony Online Entertainment Llc | System and method for creating, editing, and sharing video content relating to video game events |
EP2020802A3 (en) * | 2007-08-03 | 2012-05-09 | Nintendo Co., Ltd. | Handheld wireless game device server, handheld wireless device client and system using same |
US8527646B2 (en) * | 2009-04-14 | 2013-09-03 | Avid Technology Canada Corp. | Rendering in a multi-user video editing system |
US8803892B2 (en) * | 2010-06-10 | 2014-08-12 | Otoy, Inc. | Allocation of GPU resources across multiple clients |
US8171137B1 (en) * | 2011-05-09 | 2012-05-01 | Google Inc. | Transferring application state across devices |
JP5076132B1 (en) * | 2011-05-25 | 2012-11-21 | 株式会社スクウェア・エニックス・ホールディングス | Drawing control apparatus, control method therefor, program, recording medium, drawing server, and drawing system |
US8775850B2 (en) * | 2011-06-28 | 2014-07-08 | Amazon Technologies, Inc. | Transferring state information between electronic devices |
CN102867073B (en) * | 2011-07-08 | 2014-12-24 | 中国民航科学技术研究院 | Flight program design system for performance-based navigation, verification platform and verification method |
US9250966B2 (en) * | 2011-08-11 | 2016-02-02 | Otoy, Inc. | Crowd-sourced video rendering system |
- 2014
- 2014-11-11 CA CA2929588A patent/CA2929588A1/en not_active Abandoned
- 2014-11-11 WO PCT/US2014/065055 patent/WO2015070235A1/en active Application Filing
- 2014-11-11 EP EP14860984.5A patent/EP3068504A4/en not_active Withdrawn
- 2014-11-11 JP JP2016529941A patent/JP2017504986A/en active Pending
- 2014-11-11 CN CN201480061316.XA patent/CN106132494A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP3068504A4 (en) | 2017-06-28 |
CN106132494A (en) | 2016-11-16 |
WO2015070235A1 (en) | 2015-05-14 |
CA2929588A1 (en) | 2015-05-14 |
JP2017504986A (en) | 2017-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150133216A1 (en) | View generation based on shared state | |
US20150130814A1 (en) | Data collection for multiple view generation | |
WO2015070235A1 (en) | Data collection for multiple view generation | |
US10097596B2 (en) | Multiple stream content presentation | |
US10601885B2 (en) | Adaptive scene complexity based on service quality | |
US10300382B1 (en) | Three dimensional terrain modeling | |
US10347013B2 (en) | Session idle optimization for streaming server | |
US10431011B2 (en) | Virtual area generation and manipulation | |
US20150130815A1 (en) | Multiple parallel graphics processing units | |
US9582904B2 (en) | Image composition based on remote object data | |
US9604139B2 (en) | Service for generating graphics object data | |
US10146877B1 (en) | Area of interest subscription | |
US10130885B1 (en) | Viewport selection system | |
US10792564B1 (en) | Coordination of content presentation operations | |
US10729976B1 (en) | Coordination of content presentation operations | |
US10708639B1 (en) | State-based image data stream provisioning | |
US11161045B1 (en) | Content item forking and merging | |
US10722798B1 (en) | Task-based content management | |
Farooq et al. | Faster dynamic spatial partitioning in opensimulator | |
US10715846B1 (en) | State-based image data stream provisioning | |
US10610780B1 (en) | Pre-loaded content attribute information | |
US11212562B1 (en) | Targeted video streaming post-production effects | |
EP3468684A1 (en) | Sectional terrain editing | |
EP3069264B1 (en) | Multiple stream content presentation | |
US9358464B1 (en) | Task-based content management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20160511 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20170526 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06T 15/00 20110101ALI20170519BHEP Ipc: G06F 9/50 20060101ALI20170519BHEP Ipc: A63F 13/5252 20140101ALI20170519BHEP Ipc: A63F 13/335 20140101ALI20170519BHEP Ipc: A63F 13/77 20140101ALI20170519BHEP Ipc: A63F 13/355 20140101AFI20170519BHEP Ipc: A63F 13/352 20140101ALI20170519BHEP |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20180103 |