CN112513969A - Centralized rendering

Info

Publication number: CN112513969A
Application number: CN201980051078.7A
Authority: CN (China)
Prior art keywords: computer system, data, client, scene graph, application
Legal status: Pending
Other languages: Chinese (zh)
Inventor: P·巴布吉德
Current assignee: Magic Leap Inc
Original assignee: Magic Leap Inc
Priority claimed from U.S. patent application 16/011,413 (US10977858B2)
Application filed by Magic Leap Inc

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/10 Geometric effects
    • G06T 15/40 Hidden part removal
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/37 Details of the operation on graphic patterns
    • G09G 5/377 Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/61 Scene description
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/024 Multi-user, collaborative environment
    • G09G 2370/00 Aspects of data communication
    • G09G 2370/02 Networking aspects
    • G09G 2370/025 LAN communication management
    • G09G 2370/20 Details of the management of multiple sources of image data

Abstract

A method is disclosed, comprising: receiving, from a first client application, first graph data comprising a first node; receiving, from a second client application independent of the first client application, second graph data comprising a second node; and generating a scene graph, wherein the scene graph describes a hierarchical relationship between the first node and the second node with respect to visual occlusion from the perspective of a display.

Description

Centralized rendering
Cross Reference to Related Applications
This application claims the benefit of U.S. patent application No. 16/011,413, filed June 18, 2018, the entire contents of which are incorporated herein by reference for all purposes.
Technical Field
The present disclosure relates generally to systems and methods for visually rendering graphical data on a display, and in particular, to systems and methods for visually rendering data from multiple computer applications on a single display.
Background
Various techniques exist for rendering graphical data for computer applications to a display. It is desirable for these techniques to render graphical data realistically (i.e., consistent with viewers' expectations based on the physical world) and efficiently. It is also desirable for rendering techniques to accommodate computer systems having various topologies, including, for example, computer systems in which multiple applications contribute graphical data to be displayed on a single display.
Conventional systems are generally unable to render content realistically and efficiently in such multi-application configurations. For example, in some such systems, rendering graphical data from multiple applications onto a single display can cause the data to be sorted incorrectly on the display, producing unexpected visual results that compromise the realism of the composited display. Furthermore, graphical data from one application may be unable to interact realistically with graphical data from another application (e.g., through lighting and shadow-casting effects, or through shaders). Additionally, some such systems are limited in their ability to use rendering optimizations, such as culling invisible surfaces, to improve computational efficiency.
Systems involving augmented reality (AR), or "mixed reality," particularly require better solutions to the problem of rendering graphical data from multiple applications to a single display. For example, AR systems have the potential for multiple users to interact in a shared virtual space, where virtual content from all users is rendered to a single display. It is desirable that such interactions be believable and meaningful to the user, which may require that the graphical output of the AR system be convincing and consistent with the user's visual expectations; flexible enough to accommodate different types and numbers of users, user hardware, and user software, as well as the different ways in which users wish to interact with the system; and efficient enough to sustain continuous operation at high frame rates and to maximize the battery life of mobile devices. Furthermore, it is desirable that the applications and application data associated with a single user in an AR system remain independent of other users, both to provide security (which could be compromised by data access between untrusted users) and to remain scalable, particularly as the number of system users grows. Additionally, such systems may benefit from minimizing technical constraints on users and user applications; for example, limiting the hardware requirements for a user to participate in an AR system may encourage broader participation. This may be achieved by limiting the extent to which individual users, or applications running on user hardware, need to perform complex rendering operations, for example by offloading such operations to a shared system, such as a server-side host application running on dedicated hardware.
Disclosure of Invention
Examples of the present disclosure describe a computer system in which multiple applications contribute graphical data to be displayed on a single display. Examples of the present disclosure may be used to render graphical data realistically (i.e., consistent with a viewer's expectations based on the physical world) and efficiently. According to examples of the present disclosure, first graphical data may be received from a first client application, and second graphical data may be received from a second, independent client application. The first graphical data and the second graphical data may be combined into a "centralized" data structure, such as a scene graph, that may be used to describe relationships between the nodes represented by the first and second graphical data. The centralized data structure may thus be used to render a scene, reflecting the first and second graphical data, to a display in a realistic and efficient manner.
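By way of illustration only, and not as part of the disclosed embodiments, the following minimal sketch shows the core idea of this disclosure: merging graph data from two independent clients into one centralized structure that can be sorted globally for occlusion. The function name, node identifiers, and depth-keyed node format are assumptions introduced for this sketch.

```python
# A minimal sketch, assuming nodes are keyed by identifier and carry a
# "depth" attribute; none of these names come from the patent itself.
def generate_scene_graph(first_graph_data: dict, second_graph_data: dict) -> dict:
    # Neither client can see the other's data, but the centralized
    # structure holds both, so occlusion can be resolved globally.
    scene_graph: dict = {}
    scene_graph.update(first_graph_data)
    scene_graph.update(second_graph_data)
    return scene_graph

first = {"hand_232/polygon_234": {"depth": 0.9}}   # from the first client application
second = {"hand_236/polygon_238": {"depth": 0.8}}  # from the second client application
scene = generate_scene_graph(first, second)

# Nearer nodes (smaller depth) are drawn last, so they occlude farther ones.
for node_id in sorted(scene, key=lambda n: scene[n]["depth"], reverse=True):
    print("draw", node_id)
```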
Drawings
Fig. 1A-1E illustrate an example computer system including a graphical display, according to an example of the present disclosure.
Fig. 2A illustrates an example data flow in an example computer system according to an example of the present disclosure.
Fig. 2B illustrates an example renderer output corresponding to an example computer system, according to an example of the present disclosure.
Fig. 2C illustrates an example data flow in an example computer system including multiple independent applications, according to an example of the present disclosure.
Fig. 2D illustrates an example renderer output corresponding to an example computer system including multiple independent applications, according to an example of the present disclosure.
Fig. 3A illustrates components of an example computer system that can render 3D data from multiple independent applications to a display using a centralized scene graph, according to an example of the present disclosure.
Fig. 3B illustrates an aspect of an example client application with respect to an example computer system including multiple independent client applications, according to an example of the present disclosure.
Fig. 3C illustrates aspects of an example client-server interface with respect to an example computer system including multiple independent applications, according to an example of the present disclosure.
Fig. 3D illustrates aspects of an example host application 340 with respect to an example computer system including multiple independent client applications, according to an example of the present disclosure.
Fig. 3E illustrates an aspect of an example renderer 360 with respect to an example computer system including multiple independent applications, according to an example of the present disclosure.
Fig. 4 illustrates an example of a system architecture that may be embodied within any portable or non-portable device in accordance with examples of the present disclosure.
Detailed Description
In the following description of examples, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific examples that may be practiced. It is to be understood that other examples may be used and structural changes may be made without departing from the scope of the disclosed examples.
Fig. 1A-1E illustrate various example computer systems having a display. FIG. 1A shows an example desktop computer connected to an external monitor. FIG. 1B illustrates an example notebook computer including a display. FIG. 1C illustrates an example mobile device including an integrated display. FIG. 1D shows an example television that includes a display. FIG. 1E illustrates an example computer system that includes a head-mounted display. The present disclosure is not limited to any particular type of computer system, any particular type of display, or any particular means of connecting a computer system to a display. The present disclosure is further not limited to two-dimensional displays; in particular, three-dimensional displays, such as stereoscopic displays, are contemplated.
In some example computer systems, data to be graphically presented on a display (as a "rendered scene") includes data representing objects in three-dimensional space (e.g., 2D or 3D geometric primitives, including polygons) ("3D data"), and presenting the 3D data on the display includes presenting an image corresponding to the objects in three-dimensional space as viewed from a view origin oriented along a view axis (a "display scene"). For example, in a software application running on a computer system (e.g., a video game using a 3D engine), the 3D data may include the spatial coordinates, orientations, and/or visual properties of objects in a three-dimensional game world, as well as data describing the view origin and the view axis in the game world. The 3D data may also include data relating to textures associated with the objects to be rendered, shader parameters relating to the objects, and other information affecting how the objects are displayed. During a "render" or "draw" phase, for example, the game may direct a software and/or hardware "pipeline" to create a rendered scene for presentation on the display as a display scene. In such examples, it is generally desirable that the resulting image reflect the user's expectations of the visual world. In particular, it is generally desirable that a first opaque object closer to the view origin occlude a second object located behind it. Objects that are not occluded properly may confuse the user, and may not clearly convey where objects are located in three-dimensional space. In some example computer systems, occlusion is achieved by sorting, in which objects closer to the view origin are sorted, or drawn, over objects farther from the view origin.
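The following sketch illustrates occlusion by depth sorting (a "painter's algorithm"); it is offered only as background illustration, and the Vec3, RenderObject, and draw_sorted names are assumptions introduced here, not elements of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

@dataclass
class RenderObject:
    name: str
    position: Vec3  # coordinates in the three-dimensional world

def squared_distance(a: Vec3, b: Vec3) -> float:
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2

def draw_sorted(objects, view_origin: Vec3) -> None:
    # Draw far-to-near: nearer opaque objects are painted over, and
    # therefore occlude, farther ones.
    for obj in sorted(objects,
                      key=lambda o: squared_distance(o.position, view_origin),
                      reverse=True):
        print("draw", obj.name)  # stand-in for a real draw call

draw_sorted([RenderObject("hand_236", Vec3(0.0, 0.0, 2.0)),
             RenderObject("hand_232", Vec3(0.0, 0.0, 1.0))],
            view_origin=Vec3(0.0, 0.0, 0.0))
```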
Sorting multiple objects for presentation on a display, such that one object realistically occludes another, requires information about the relationships among the objects, for example their spatial relationships in three-dimensional space. Some example computer systems use a scene graph to represent relationships (e.g., hierarchical relationships) between one or more objects, such as objects to be rendered as a scene. As used herein, a scene graph is any data structure representing such relationships. For example, in a scene graph, the objects to be rendered may be represented as nodes in a graph, where the relationships among the nodes represent logical or spatial relationships among the objects. A renderer may then traverse the scene graph to render, or prepare for display, at least one object in a manner that achieves proper occlusion, according to techniques known in the art. In other words, the renderer may create a scene containing the objects of the nodes, but what is rendered on the display may be only a subset of those objects, such that an object that is occluded in the renderer by another object is only partially rendered in the resulting display scene (in such embodiments, the output is the unoccluded portion of the object). Such selective display may be beneficial for hiding content embodied by a first object of a first application when the user only wishes content embodied by a second object of that application to be visible during a given period. In some examples, the scene graph is an intermediate data structure situated between an application containing 3D data and a renderer that renders that 3D data for presentation to a screen: in some examples, the application writes scene information to the scene graph, which the renderer may later use to render or output a display of the scene.
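As a minimal sketch of a scene graph and its traversal (the SceneNode field layout and the depth-first visit order shown here are illustrative assumptions, not the patent's exact data structure):

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str
    children: list[SceneNode] = field(default_factory=list)

def traverse(node: SceneNode, visit) -> None:
    # Depth-first traversal: a renderer visits each node to decide what
    # to draw, in an order that respects the hierarchy.
    visit(node)
    for child in node.children:
        traverse(child, visit)

# Usage: an application writes scene information into the graph, and a
# renderer later traverses it.
root = SceneNode("root")
hand_232 = SceneNode("hand_232")
root.children.append(hand_232)
hand_232.children.append(SceneNode("polygon_234"))
traverse(root, lambda n: print("visiting", n.name))
```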
FIG. 2A illustrates an example data flow in an example computer system 200. In the system 200, a single application 210 may write data to a scene graph 240, and a renderer 250 may use the scene graph 240 to render objects 220 for presentation on a display 290. For example, the objects 220 may include a number of objects (e.g., polygons) that together make up a 3D representation of two human hands (hand 232 and hand 236), and the application may direct the objects 220 to be presented on the display, for example, from the perspective of a view origin oriented along a view axis, during a render or draw phase. In this example, hand 232 and hand 236 are interlocked in a handshake; because of the relative positioning of the hands, the viewer expects, with respect to the view origin and view axis, that some portions of hand 232 will occlude portions of hand 236, and that some polygons making up hand 236 will occlude portions of hand 232. The application 210 may write to the scene graph 240 information describing the relationships among the objects 220 (e.g., the spatial relationships among the polygons making up the objects 220), which may be used to identify which polygons should occlude others, that is, which polygons should be sorted to display over others. For example, the scene graph 240 may reflect that polygon 234 (which belongs to hand 232) lies between the view origin and the polygons making up hand 236, and should therefore occlude those polygons of hand 236; and that polygon 238 (which belongs to hand 236) lies between the view origin and the polygons making up hand 232, and should therefore occlude those polygons of hand 232. The renderer 250 may then output the objects 220, or a subset of them (e.g., only hand 232, only the unoccluded portions of hand 232, or only hand 236), consistent with the desired occlusion, for presentation as output on the display 290.
FIG. 2B illustrates an example output of the renderer 250 of the example computer system 200 shown in FIG. 2A. In the example described above with respect to fig. 2A, based on the relative positions of the objects 220, the viewer expects some of the objects making up hand 232 (e.g., polygon 234) to occlude hand 236, and some of the objects making up hand 236 (e.g., polygon 238) to occlude hand 232. The example output shown in FIG. 2B is consistent with the expected occlusions: the objects 220 of FIG. 2A are displayed properly, presenting on the display 290 a handshake consistent with the viewer's expectations.
In the example computer system 200 shown in fig. 2A and 2B, the scene graph 240 is written directly by only a single application (application 210). The renderer 250 then traverses the scene graph 240 to render hand 232 and hand 236 with the appropriate occlusions. Conventional systems (such as the example computer system 200) that use a scene graph as part of the rendering process may not occlude objects properly when the scene graph (e.g., scene graph 240) receives direct input from multiple independent applications. In those situations, unlike in the example computer system 200, there is no single application that can supply the scene graph with all of the object relationship data needed to sort the objects on the display correctly.
FIG. 2C illustrates an example data flow in an example computer system 201 using two independent applications, and illustrates the occlusion problem described above. Unlike the example computer system 200 described above, the example computer system 201 includes two separate applications, application 212 and application 214. In the example computer system 201, both application 212 and application 214 write data to the scene graph 240 to render their respective 3D data to a single display 290. In FIG. 2C, application 212 attempts to render and present objects 222 (which include the objects making up hand 232), and application 214 attempts to render and present objects 224 (which include the objects making up hand 236). In this example, as in the example described in figs. 2A and 2B, if hand 232 and hand 236 are displayed simultaneously in the same 3D environment, they interlock in a handshake, so the viewer expects portions of hand 232 to occlude hand 236, and portions of hand 236 to occlude hand 232.
The example shown in fig. 2C may have difficulty achieving realistic occlusion of the objects to be rendered. In this example, application 212 may write data corresponding to objects 222 (including hand 232) to the scene graph 240, and application 214 may write data corresponding to objects 224 (including hand 236) to the same scene graph 240. However, in the example computer system 201, if application 212 and application 214 are independent ("sandboxed") applications, application 212 cannot access data relating to the objects 224 of application 214 (including hand 236 and its constituent objects), and likewise application 214 cannot access data relating to the objects 222 of application 212 (including hand 232 and its constituent objects). That is, in some examples, neither application 212 nor application 214 can fully identify the relationships between objects 222 and objects 224. Neither application, therefore, can write to the scene graph 240 the information needed to identify which objects occlude others, or the order in which the objects should be sorted on the display.
FIG. 2D illustrates an example output of the renderer 250 of the example computer system 201 shown in FIG. 2C. In the example described above with respect to fig. 2C, based on the relative positioning of objects 222 and objects 224, the viewer expects portions of hand 232 to occlude hand 236, and portions of hand 236 to occlude hand 232. Unlike in figs. 2A and 2B, however, the scene graph 240 of fig. 2C cannot sort objects 222 and 224 properly to produce the expected occlusions described above. Instead, in the illustrated example, all of the objects 222 are sorted above all of the objects 224. The example output shown in fig. 2D is therefore inconsistent with the expected occlusions. The resulting handshake image does not accurately reflect the objects of applications 212 and 214; moreover, the image may look unnatural and may confuse the viewer.
Other shortcomings of conventional scene graphs (e.g., scene graph 240 in figs. 2A-2D) become apparent when they are used with multiple independent applications, as in the example computer system 201. For example, rendering efficiencies may be achieved using data corresponding to an entire scene to be rendered, as in application 210 in fig. 2A. Because the example computer system 200 in FIG. 2A knows which surfaces will be occluded, it can cull those surfaces and thereby avoid unnecessary consumption of computing resources. In a multi-application system (e.g., the example computer system 201 in fig. 2C), such culling may not be possible, because no individual application has the scene knowledge needed to determine which surfaces should be culled. Further, in some examples involving only a single application (e.g., example computer system 200), beneficial effects can be applied to an object based on the presence of other objects in the scene. For example, applying realistic lighting and shadow effects to an object may require data corresponding to nearby objects, and some shader effects would benefit from such data. Similarly, effects generated by particle systems or collision detection systems may benefit from such data. In systems where multiple independent applications provide the 3D data, such effects may be limited or impossible, because no single application can provide all of the node relationship information needed to apply them.
The present disclosure presents systems and methods that use a centralized scene graph to address the above-described shortcomings of systems that render 3D data from multiple independent applications. In a system (e.g., the example computer system 201 in fig. 2C) in which multiple independent applications provide 3D data to be rendered, a centralized scene graph may be used in place of a traditional scene graph, such as the scene graph 240 in fig. 2C. As described herein, in some examples, a centralized scene graph may include a system that receives 3D data from multiple separate input sources; writes information corresponding to the 3D data to a central location; and maintains that information for access by a renderer that creates a rendered scene, including the objects, based on the 3D data. The rendered scene may be used to generate output (such as graphical output) that reflects realistic object occlusion; computational efficiency; visual effects (e.g., lighting and shadow casting); physical effects (e.g., collision detection); or partial display of occluded objects, results that would be difficult or impossible to achieve in systems that do not use a centralized scene graph.
In some examples, an example computer system includes multiple applications, each of which includes 3D data representing one or more objects in a common 3D environment. Each of the multiple applications may exist in a "sandboxed" environment, such that it remains agnostic of the other applications: for example, the data of each application may be independent of the data of every other application; each application may have no access to the data of the other applications; and each application may maintain its own instance of the 3D environment, even though the 3D data of each application may correspond to the same 3D environment. For example, each application may represent a player in an online multiplayer video game, where each player exists in an instance of the same game world, or 3D environment, but has no direct access to the other players' data. In such an example, it may be desirable to render all players simultaneously in a single instance of the game world, but it may be undesirable (or computationally prohibitive) for each player to maintain the information needed to render the 3D data of every other client participant. Further, for security purposes, it may be desirable to limit the information about a player that is available to other players.
In some examples, each of the multiple sandboxed applications may independently write information corresponding to its 3D data to a local scene graph, which is then written to a common centralized scene graph. The centralized scene graph may then be traversed by a renderer to render a scene, based on the aggregate 3D data provided by all of the applications, for presentation on a display as an image. By communicating the 3D data from each of the multiple sandboxed applications to a single centralized scene graph, the renderer can apply beneficial techniques that require, or benefit from, simultaneous knowledge of the 3D data of all of the applications, such as occlusion, lighting effects, and rendering optimizations (e.g., surface culling). These benefits are realized while limiting the computational overhead required of each sandboxed application: from the perspective of a single application, all it needs to do is update a single scene graph to reflect its 3D data, with the other operations performed by another component of the system. In addition, maintaining the separation between the sandboxed applications can provide security benefits.
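A minimal sketch of this aggregation follows; the CentralizedSceneGraph class, its apply method, and the per-client namespacing scheme are illustrative assumptions rather than the patent's prescribed API.

```python
class CentralizedSceneGraph:
    def __init__(self) -> None:
        self.nodes: dict[str, dict] = {}  # node identifier -> node attributes

    def apply(self, client_id: str, local_nodes: dict) -> None:
        # Namespace each client's nodes so that sandboxed clients cannot
        # collide with (or read) one another's data.
        for node_id, attrs in local_nodes.items():
            self.nodes[f"{client_id}/{node_id}"] = dict(attrs)

central = CentralizedSceneGraph()
# Each client only ever writes its own local scene graph...
central.apply("client_310", {"hand_232": {"depth": 1.0}})
central.apply("client_320", {"hand_236": {"depth": 1.2}})
# ...but the renderer can traverse the aggregate of both.
print(sorted(central.nodes))
```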
FIG. 3A illustrates components of an example computer system 300 that can render 3D data from multiple independent applications to a display using a centralized scene graph. The illustrated example uses a client-server topology; however, the present disclosure is not limited to client-server examples. In the example computer system 300, a first client application 310 and a second client application 320 each communicate 3D data (in some examples, over a network) to a client-server interface 330. In some examples, the client applications 310 and 320 are "sandboxed" applications that operate independently of each other and independently communicate their 3D data to the client-server interface 330. The client-server interface 330 may receive updated 3D data from the client applications 310 and 320 and communicate that 3D data (in some examples, over a network) to a server-side host application 340. In some examples, the client-server interface 330 uses multithreading to receive the 3D data, process the 3D data, and/or communicate the 3D data to the host application 340 using multiple processor threads. In some examples, the client-server interface includes logic to control (e.g., by throttling) the rate at which the 3D data is communicated to the host application 340. The host application 340 may use the 3D data received from the client-server interface to update a centralized scene graph 350, so that the centralized scene graph 350 reflects the 3D data received from the client applications 310 and 320. In some examples, the centralized scene graph 350 includes multiple versions of the scene graph, and known versioning techniques are used to allow updates to the centralized scene graph 350 to occur in parallel. A renderer 360 may then traverse the centralized scene graph 350, apply optimizations and effects as appropriate, and generate output to be shown on a display 370, such as a computer monitor (e.g., a graphical output including data from at least one of the client applications 310 and 320, and, in some embodiments, only the unoccluded portion of one client application's data, without the occluding application's data).
FIG. 3B illustrates aspects of an example client application 310 relative to the example computer system 300 shown in FIG. 3A. In the example shown, 3D data 312 represents graphical objects (e.g., geometric primitives, such as polygons) in a 3D environment that are to be rendered on the display 370. The 3D data 312 may be updated (314) by the client application 310. For example, if the client application 310 is an application with a rendering loop that iterates sixty times per second, the client application 310 may update the 3D data 312 sixty times per second to reflect those changes to the data, arising during the application's operation, that should be reflected in the rendered output. In some examples, the 3D data 312 is represented as a local scene graph 316, which may be local to each client application 310. In some examples, the local scene graph 316 may include data (e.g., nodes) corresponding to data in the centralized scene graph 350. As the 3D data 312 is updated (314), the client application 310 may update the local scene graph 316 to reflect the latest version of the 3D data 312, and as the local scene graph 316 is updated, the client application 310 may use it to generate (317) client data 318. In some examples, the client data 318 may represent the local scene graph 316 in its entirety. In some examples, the client data 318 may represent the changes made to the local scene graph 316 since the previous client data 318 was sent to the client-server interface 330. For example, the client data 318 may include nodes added to or deleted from the local scene graph 316; changes to the relationships between nodes in the local scene graph 316; or changes to the properties of nodes in the local scene graph 316. In some examples, the client data 318 may use identifiers (e.g., identification numbers corresponding to scene graph nodes) to relate data from the local scene graph 316 to the corresponding data in the centralized scene graph 350. The client data 318 may then be communicated to the client-server interface 330, for eventual communication to the host application 340. In some examples, the communication of the client data 318 to the client-server interface 330 may occur over a network. In some examples, a client helper application may be used in conjunction with the client application 310 to generate the client data 318 from the local scene graph 316 or the 3D data 312.
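The following sketch shows one plausible way to generate such client data as a diff against the previously sent local scene graph, keyed by node identifiers so the host can match nodes to the centralized scene graph; the diff format and function name are assumptions introduced here.

```python
# A minimal sketch: client data as added / deleted / changed nodes,
# keyed by identifier. The patent does not prescribe this exact format.
def diff_scene_graphs(previous: dict, current: dict) -> dict:
    return {
        "added":   {i: current[i] for i in current.keys() - previous.keys()},
        "deleted": sorted(previous.keys() - current.keys()),
        "changed": {i: current[i]
                    for i in current.keys() & previous.keys()
                    if current[i] != previous[i]},
    }

before = {"node_1": {"x": 0.0}, "node_2": {"x": 1.0}}
after_ = {"node_1": {"x": 0.5}, "node_3": {"x": 2.0}}
# node_1 changed, node_2 was deleted, node_3 was added.
print(diff_scene_graphs(before, after_))
```

Sending only changes, rather than the whole local scene graph, keeps the per-frame payload small, which matters at a sixty-updates-per-second cadence.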
The aspects described with respect to client application 310 may similarly describe client application 320, or other client applications that, together with client application 310, make up the example computer system 300. Those skilled in the art will appreciate that the systems and methods described herein can be extended to include any number of client applications and sets of client data, and the present disclosure is not limited to any particular number; moreover, some benefits (e.g., gains in computational efficiency) may become more apparent as the number of client applications increases. As described above, client applications 310 and 320 may be sandboxed applications that do not share data or functionality. For example, in the example computer system 300, client application 320 may have its own 3D data and local scene graph, independent of the 3D data 312 and local scene graph 316 belonging to client application 310. In some examples, however, including the example computer system 300, a single client-server interface 330 is shared by multiple client applications (e.g., client applications 310 and 320).
Fig. 3C illustrates aspects of an example client-server interface 330 relative to the example computer system 300 shown in figs. 3A and 3B. In this example, client data 318 and client data 328 are the client data communicated to, and updated on, the client-server interface by the respective client applications 310 and 320, as described above with respect to fig. 3B. In some examples, client data 318 and 328 may be updated on the client-server interface 330 at different rates. This may occur, for example, if one client application runs on less capable computing hardware than another (causing it to update its client data less frequently); if one client application communicates with the client-server interface 330 over a lower-bandwidth network than another; or if the client data associated with one client application is more complex (and requires more processing time to generate) than the client data associated with another. Different rates of updating client data on the client-server interface 330 may also result from temporary spikes in operating conditions, for example if a network failure causes a client application to go offline temporarily. It is desirable for the example computer system 300 to tolerate different rates of client data updates; for example, it may be desirable that a network failure affecting one client application not negatively impact the rate at which the centralized scene graph 350 is updated, or the rate at which scenes are rendered using the client data of the other client applications. It may also be desirable to ensure that, when the centralized scene graph 350 is updated, client data from one client application does not lag or lead too far relative to client data from the other client applications, as this could cause instability or desynchronization of the centralized scene graph, or of the rendered display, relative to the client applications.
In some examples, the role of the client-server interface 330 is to handle differences or fluctuations in the rates at which client data is updated. Referring to fig. 3C, an example client-server interface 330 may receive client data 318 and 328 via separate processing threads 332 and 334, respectively, and may include a thread manager 336 to manage the threads. Using multiple threads to update client data from different sources (e.g., client application 310 and client application 320) prevents one source from blocking, or otherwise negatively affecting, data from the other sources. In the illustrated example, the thread manager 336 may take in client data 318 and 328 from client applications 310 and 320 using threads 332 and 334, respectively, and output host data 319 and 329 (corresponding, respectively, to client data 318 and 328, client applications 310 and 320, and threads 332 and 334) to the host application 340. The thread manager 336 may include logic to manage threads 332 and 334, to identify and resolve throughput problems or other problems relating to threads 332 and 334, and/or to control the output of host data 319 and 329. For example, if client data 318 and client data 328 are updated at approximately the same rate (via threads 332 and 334, respectively), the thread manager 336 may simply update host data 319 and 329 (corresponding, respectively, to client data 318 and 328) at approximately the same rate. However, if client data 318 is updated at a much faster rate than client data 328, the thread manager 336 may throttle client data 318 (e.g., by passing it to the host application 340 less frequently) to keep it from getting too far ahead of client data 328. The thread manager 336 may also control the overall rate at which the host data is updated; for example, it may throttle the rate at which host data 319 and/or 329 is updated, to prevent the host data from being updated faster than the host application 340 can process it (which could result in undesirable desynchronization of client applications 310 and/or 320, the centralized scene graph 350, and/or the output to the display 370).
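A minimal sketch of such a thread manager follows, with one receiving thread per client and a simple rate limit on what is forwarded to the host; the ThreadManager class, its queueing scheme, and the drop-based throttling policy are assumptions of this sketch, not the disclosed design.

```python
import queue
import threading
import time

class ThreadManager:
    def __init__(self, max_updates_per_second: float = 60.0):
        self.min_interval = 1.0 / max_updates_per_second
        self.host_queue: queue.Queue = queue.Queue()  # feeds the host application
        self._last_sent: dict[str, float] = {}

    def client_thread(self, client_id: str, inbox: queue.Queue) -> None:
        # One such thread runs per client, so a slow or offline client
        # cannot block the other clients' data.
        while True:
            client_data = inbox.get()
            if client_data is None:  # sentinel: client disconnected
                return
            now = time.monotonic()
            # Throttle: drop updates that arrive faster than the limit,
            # so one fast client cannot race ahead of the others.
            if now - self._last_sent.get(client_id, 0.0) >= self.min_interval:
                self._last_sent[client_id] = now
                self.host_queue.put((client_id, client_data))

manager = ThreadManager()
inbox: queue.Queue = queue.Queue()
t = threading.Thread(target=manager.client_thread, args=("client_310", inbox))
t.start()
inbox.put({"node_1": {"x": 0.5}})
inbox.put(None)
t.join()
print(manager.host_queue.get())
```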
Fig. 3D illustrates aspects of an example host application 340 relative to the example computer system 300 shown in figs. 3A-3C. The operations described here are performed within a thread 341 within the host application 340, where the thread 341 may execute concurrently with additional threads within the host application 340. In some examples, multithreading within the host application 340 has the advantage of allowing multiple client applications, or multiple sets of host data, to update the same centralized scene graph 350 concurrently (in some examples, by updating different versions of the centralized scene graph). This, in turn, can increase the overall throughput from client data to rendered scenes presented on the display. In some examples, the multiple threads may need to place locks on the centralized scene graph data, for example, to prevent two threads from inadvertently writing the same data. In some examples, however, one or more of the described operations may not be performed within a thread.
In the example shown in fig. 3D, host data 319 (corresponding to client application 310 and client data 318) is updated (342) as described above with respect to fig. 3C. The host application 340 may then identify the changes that the host data 319 makes relative to a previous version of the centralized scene graph 350. For example, the host application 340 may identify that, relative to the centralized scene graph 350, the host data 319 adds a node, deletes a node, changes a relationship between two nodes, or changes a property of a node. (In some examples, such as the example shown in FIG. 3D, the host application 340 may perform these and other operations using a host data handler 344.) The host application 340 may identify a version 352 of the centralized scene graph 350 to be created or updated from the host data 319. In some examples, before writing to the version 352 of the centralized scene graph 350, the host application 340 may lock that version to prevent other processes from modifying it at the same time. The host application 340 may make changes to the version 352 to reflect the host data 319 (e.g., by adding or deleting scene graph nodes in the version 352 to correspond to the host data 319). In some examples, the host application 340 may then unlock the version 352 and update (356) a value corresponding to the version number of the version 352. The host application 340 may then update the host data (342) and repeat the process shown in fig. 3D. Because the centralized scene graph 350 is updated to reflect the various host data derived from the various client applications, the centralized scene graph 350 reflects the aggregate host data from multiple client applications, even though the individual client applications may be "sandboxed" and independent of one another.
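The following sketch illustrates applying host data to a locked, versioned scene graph so that concurrent threads cannot modify the same version at once; the locking granularity and the simple integer version counter are assumptions of this sketch.

```python
import threading

class VersionedSceneGraph:
    def __init__(self) -> None:
        self.nodes: dict[str, dict] = {}
        self.version = 0
        self._lock = threading.Lock()

    def apply_host_data(self, host_data: dict) -> int:
        with self._lock:  # lock the version while it is being written
            for node_id, attrs in host_data.get("added", {}).items():
                self.nodes[node_id] = dict(attrs)
            for node_id in host_data.get("deleted", []):
                self.nodes.pop(node_id, None)
            for node_id, attrs in host_data.get("changed", {}).items():
                self.nodes[node_id].update(attrs)
            self.version += 1  # update the version number after writing
            return self.version

graph = VersionedSceneGraph()
v = graph.apply_host_data({"added": {"client_310/hand_232": {"depth": 1.0}}})
print(v, graph.nodes)
```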
FIG. 3E illustrates aspects of the example renderer 360 relative to the example computer system 300 shown in figs. 3A-3D. In some examples, the renderer 360 is part of the host application 340. In some examples, the renderer 360 may be part of another component of the example computer system 300, or may be a separate component or application. In some examples, the renderer 360 may be implemented in physical hardware different from that of one or more components of the example computer system 300, and may communicate with one or more of those components over a network.
In the example shown in FIG. 3E, the renderer 360 operates on a version 352 of the centralized scene graph 350. In this example, the role of the renderer is to create, based on the version 352 of the centralized scene graph 350, a rendered scene (such as an output or graphical output) comprising data for presentation on the display 370. As part of this process, the renderer 360 may traverse (362) the version 352 using known scene graph traversal techniques. During or after the traversal (362), the renderer 360 may update (364) the centralized scene graph 350 as appropriate to reflect the results of the traversal; for example, as part of the traversal (362), the renderer 360 may identify isolated nodes that should be deleted from the centralized scene graph 350. After the traversal (362) and/or the update (364), the renderer 360 may apply various optimizations (366) to the scene. For example, the renderer 360 may cull occluded or invisible surfaces, to avoid consuming unnecessary computational resources. After the traversal (362) and/or the update (364), the renderer 360 may also apply one or more visual effects (367) to the scene; for example, in some examples, the renderer 360 may apply lighting or shadow effects, apply one or more shaders, apply particle effects, and/or apply physical effects. Finally, the renderer 360 may output the data to a graphics output pipeline, the result of which may be shown as output on the display 370.
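A minimal sketch of this render pass follows: traverse one version of the graph, cull nodes already known to be hidden, apply a toy effect, and emit draw commands. The flat node dictionary, the "occluded" flag, and the light-value formula are assumptions standing in for real traversal, hidden-surface removal, and shading.

```python
def render(version_nodes: dict) -> list:
    commands = []
    # Visit nodes in depth order (nearest first here, for illustration).
    for node_id, attrs in sorted(version_nodes.items(),
                                 key=lambda kv: kv[1].get("depth", 0.0)):
        # Optimization (366): cull surfaces already known to be occluded
        # or invisible, so no resources are spent drawing them.
        if attrs.get("occluded", False):
            continue
        # Visual effect (367): a toy light value that dims with depth,
        # standing in for real lighting, shadow, and shader passes.
        light = 1.0 / (1.0 + attrs.get("depth", 0.0))
        commands.append(f"draw {node_id} light={light:.2f}")
    return commands

print(render({"client_310/hand_232": {"depth": 1.0},
              "client_320/hand_236": {"depth": 1.2, "occluded": True}}))
```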
The example computer system processes described above may be provided by any suitable logic circuitry. Suitable logic circuitry may include one or more computer processors (e.g., CPUs, GPUs, etc.) that perform the processes when executing instructions implemented in a software program. Such processes may also be provided by corresponding logic designs implemented in hardware logic circuitry, such as programmable logic (e.g., PLDs, FPGAs, etc.) or custom logic (e.g., ASICs, etc.) implementing the logic designs that provide the processes. Further, such processes may be provided by an implementation combining one or more processors running software with hardware logic circuitry.
Fig. 4 illustrates an example system 400 that can be used to implement any or all of the above examples. The above examples (in whole or in part) may be embodied in any portable device (including a wearable device) or non-portable device, such as a communication device (e.g., a mobile phone or smartphone), a multimedia device (e.g., an MP3 player, television, or radio), a portable or handheld computer (e.g., a tablet, netbook, or notebook), a desktop computer, an all-in-one desktop, a peripheral device, a head-mounted device (which may include, for example, an integrated display), or any other system or device suitable for including the example system architecture 400, including combinations of two or more of these types of devices. The above examples may also be embodied in two or more physically separate devices, such as two or more computers communicating via a wireless network, or a belt pack that communicates data to and/or from a head-mounted display. Fig. 4 is a block diagram of one example of a system 400, the system 400 generally including one or more computer-readable media 401, a processing system 404, an I/O subsystem 406, Radio Frequency (RF) circuitry 408, audio circuitry 410, and sensor circuitry 411. These components may be coupled by one or more communication buses or signal lines 403.
It should be apparent that the architecture shown in fig. 4 is only one example architecture for system 400, and that system 400 may have more or fewer components or a different configuration of components than shown. The various components shown in fig. 4 may be implemented in hardware, software, firmware, or any combination thereof, including one or more signal processing and/or application specific integrated circuits.
Referring to the example system architecture 400 in FIG. 4, the RF circuitry 408 may be used to transmit and receive information over a wireless link or network to one or more other devices and includes well-known circuitry for performing this function. The RF circuitry 408 and audio circuitry 410 may be coupled to the processing system 404 via a peripherals interface 416. Interface 416 may include various known components for establishing and maintaining communication between peripheral devices and processing system 404. The audio circuitry 410 may be coupled to an audio speaker 450 and a microphone 452 and may include known circuitry for processing voice signals received from the interface 416 to enable the user to communicate with other users in real-time. In some examples, audio circuitry 410 may include a headphone jack (not shown).
The sensor circuit 411 may be coupled to various sensors including, but not limited to, one or more Light Emitting Diodes (LEDs) or other light emitters, one or more photodiodes or other light sensors, one or more photothermal sensors, magnetometers, accelerometers, gyroscopes, barometers, compasses, proximity sensors, cameras, ambient light sensors, thermometers, GPS sensors, electrooculography (EOG) sensors, and various system sensors that may sense remaining battery life, power consumption, processor speed, CPU load, and the like. In examples such as those involving head-mounted devices, one or more sensors may be employed in connection with functions related to the user's eyes, such as tracking the user's eye movements or identifying the user based on images of his or her eyes.
Peripherals interface 416 may couple the input and output peripherals of the system to the processor 418 and the computer-readable medium 401. The one or more processors 418 may communicate with the one or more computer-readable media 401 via a memory controller 420. The computer-readable medium 401 may be any device or medium (excluding signals) that can store code and/or data for use by the one or more processors 418. In some examples, the medium 401 may be a non-transitory computer-readable storage medium. The medium 401 may include a memory hierarchy, including but not limited to cache, main memory, and secondary memory. The memory hierarchy may be implemented using any combination of RAM (e.g., SRAM, DRAM, DDRAM), ROM, FLASH, and magnetic and/or optical storage devices, such as disk drives, magnetic tape, CDs (compact discs), and DVDs (digital video discs). The medium 401 may also include a transmission medium for carrying information-bearing signals indicative of computer instructions or data (but excluding the signals themselves, and excluding a carrier wave upon which the signals are modulated). For example, the transmission medium may include a communication network, including but not limited to the Internet (also known as the World Wide Web), intranets, Local Area Networks (LANs), Wide Area Networks (WANs), Storage Area Networks (SANs), Metropolitan Area Networks (MANs), and the like.
The one or more processors 418 may execute various software components stored in the medium 401 to perform various functions for the system 400. In some examples, the software components may include an operating system 422, a communication module (or set of instructions) 424, an I/O processing module (or set of instructions) 426, a graphics module (or set of instructions) 428, and one or more applications (or sets of instructions) 430. Each of these modules and the above-described applications may correspond to a set of instructions for performing one or more of the functions described above and the methods described herein (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various examples. In some examples, medium 401 may store a subset of the modules and data structures identified above. In addition, medium 401 may store additional modules and data structures not described above.
The operating system 422 may include various processes, sets of instructions, software components, and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitating communication between various hardware and software components.
The communication module 424 may facilitate communication with other devices through one or more external ports 436 or via the RF circuitry 408, and the communication module 424 may include various software components for processing data received from the RF circuitry 408 and/or the external ports 436.
Graphics module 428 may include various known software components for rendering, animating and displaying graphical objects on one or more display surfaces. The display surface may comprise a 2D or 3D display. The display surface may be directly or indirectly coupled to one or more components of the example system 400. In examples involving touch-sensing displays (e.g., touch screens), graphics module 428 may include components for rendering, displaying, and animating objects on the touch-sensing display. In some examples, graphics module 428 may include components for rendering to a remote display. In some examples (e.g., those involving cameras), graphics module 428 may include components for creating and/or displaying images formed by compositing camera data (e.g., captured from a head-mounted camera) or photographic data (e.g., images captured from a satellite) with rendered graphical objects. In some examples, the graphics module may include components for rendering images to a head mounted display. In some examples, the image may include a view of an element of the virtual content (e.g., an object in a three-dimensional virtual environment) and/or a view of the physical world (e.g., a camera input indicative of the user's physical environment). In some examples, the display may present a composite of the virtual content and a view of the physical world. In some examples, the view of the physical world may be a rendered image; in some examples, the view of the physical world may be an image from a camera.
The one or more applications 430 can include any application installed on the system 400, including but not limited to a browser, an address book, a contact list, email, instant messaging, word processing, keyboard emulation, controls, JAVA-enabled applications, encryption, digital rights management, voice recognition, voice replication, location determination functionality (e.g., Global Positioning System (GPS) -provided functionality), a music player, and so forth.
I/O subsystem 406 may be coupled to an eye I/O device 412 and one or more other I/O devices 414 for controlling or performing various functions. For example, the eye I/O device 412 may communicate with the processing system 404 via an eye I/O device controller 432, which may include various components for processing eye input (e.g., sensors for eye tracking) or user gesture input (e.g., optical sensors). One or more other input controllers 434 may receive electrical signals from, and send electrical signals to, the other I/O devices 414. The other I/O devices 414 may include physical buttons, dials, slider switches, joysticks, keyboards, touch pads, other display screens, or any combination thereof.
I/O processing module 426 may include various software components for performing various tasks associated with the eye I/O device 412 and/or the other I/O devices 414, including, but not limited to, receiving and processing input received from the eye I/O device 412 via the eye I/O device controller 432, or from the other I/O devices 414 via the I/O controllers 434. In some examples, the I/O devices 414 and/or the I/O processing module 426 may perform various tasks associated with gesture input, which may be provided in a tactile or non-tactile manner. In some examples, the gesture input may be provided by a camera or another sensor for detecting the motion of, for example, the user's eyes, arms, hands, and/or fingers. In some examples, the I/O devices 414 and/or the I/O processing module 426 may be configured to identify objects on the display with which the user wishes to interact, such as GUI elements pointed to by the user. In some examples, the eye I/O device 412 and/or the I/O processing module 426 may be configured to perform eye tracking tasks (e.g., with the aid of optical or EOG sensors), such as identifying an object or region on the display at which the user is looking. In some examples, a device (e.g., a hardware "beacon") may be worn or held by the user to assist the eye I/O device 412 and/or the I/O processing module 426 with gesture-related tasks, such as identifying the location of the user's hands relative to a 2D or 3D environment. In some examples, the eye I/O device 412 and/or the I/O processing module 426 may be configured to identify the user based on sensor input (such as data from a camera sensor relating to the user's eyes).
In some examples, graphics module 428 may display visual output to the user in a GUI. The visual output may include text, graphics, video, and any combination thereof. Some or all of the visual output may correspond to user interface objects. In some examples, I/O devices 412 and/or 414 and/or controllers 432 and/or 434 (and any associated modules and/or sets of instructions in medium 401) may detect and track gestures and/or eye movements and may translate the detected gestures and/or eye movements into interactions with graphical objects (e.g., one or more user interface objects). In examples where eye I/O device 412 and/or eye I/O device controller 432 are configured to track eye movements of a user, the user may interact directly with the graphical objects by viewing them.
Feedback may be provided, for example, by the eye I/O device 412 or another I/O device 414, based on the content being displayed and/or one or more states of the computing system. The feedback may be transmitted optically (e.g., as a light signal or a displayed image), mechanically (e.g., as haptic, touch, or force feedback), electrically (e.g., as electrical stimulation), olfactorily, or acoustically (e.g., as a beep), or in any combination thereof, and in a variable or constant manner.
The system 400 may also include a power system 444 for powering various hardware components, and may include a power management system, one or more power sources, a charging system, power failure detection circuitry, a power converter or inverter, a power status indicator, and any other components typically associated with the generation, management, and distribution of power in portable devices.
In some examples, the peripherals interface 416, the one or more processors 418, and the memory controller 420 may be implemented on a single chip, such as the processing system 404. In some other examples, they may be implemented on separate chips.
In some examples, a method is disclosed. The method may comprise: receiving, from a first client application of a computer system, first graph data comprising a plurality of first nodes; receiving, from a second client application of the computer system, second graph data comprising a plurality of second nodes; and generating a scene graph, wherein the scene graph describes a relationship (such as an occlusion relationship) between the first nodes and the second nodes, and the scene graph is configured, when traversed by a processor of the computer system, to render a scene comprising the nodes. Additionally or alternatively to one or more of the examples above, the method may further comprise traversing, by a processor of the computer system, the scene graph. Additionally or alternatively to one or more of the examples above, the computer system may be configured to communicate with a display, and the method may further comprise presenting on the display an output that includes at least one node that is not occluded by another node in the rendered scene. In some embodiments, occlusion is the visual obstruction of one node by another node when the rendered scene of the objects is viewed from a given angle. Additionally or alternatively to one or more of the examples above, the computer system may be configured to communicate with a display, and the method may further comprise displaying the output on the display, for example by displaying a rendered scene present in the scene graph, or by displaying only those of the first or second plurality of nodes that are not occluded, and not the other nodes. For example, if the second plurality of nodes occludes a portion of the first plurality of nodes, the displayed output may consist only of the unoccluded nodes of the first plurality, without displaying any of the second plurality of nodes. Additionally or alternatively to one or more of the examples above, the method may further comprise applying, at the computer system, an optimization to the output. Additionally or alternatively to one or more of the examples above, applying the optimization may comprise culling a surface. Additionally or alternatively to one or more of the examples above, the method may further comprise applying, at the computer system, a visual effect to the output. Additionally or alternatively to one or more of the examples above, applying the visual effect may comprise calculating a light value. Additionally or alternatively to one or more of the examples above, applying the visual effect may comprise executing a shader. Additionally or alternatively to one or more of the examples above, the method may further comprise applying, at the computer system, a physical effect to the output. Additionally or alternatively to one or more of the examples above, applying the physical effect may comprise detecting a collision. Additionally or alternatively to one or more of the examples above, the first client application may be a first application executing on the computer system, the second client application may be a second application executing on the computer system, and the first client application may be sandboxed on the computer system relative to the second client application.
Additionally or alternatively to one or more of the examples above, the first graphics data may correspond to a first client scene graph associated with the first client application, the second graphics data may correspond to a second client scene graph associated with the second client application, the first client scene graph may be sandboxed on the computer system relative to the second client scene graph, the first client scene graph may be sandboxed on the computer system relative to the scene graph, and the second client scene graph may be sandboxed on the computer system relative to the scene graph. Additionally or alternatively to one or more of the examples above, the scene graph may correspond to a version of a versioned scene graph. Additionally or alternatively to one or more of the examples above, the first graphics data may be transferred to the scene graph using a first processing thread of the computer system, and the second graphics data may be transferred to the scene graph using a second processing thread of the computer system independent of the first processing thread.
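The independent processing threads and the versioned scene graph described above might be realized as in the following Python sketch; the lock-guarded version list and the VersionedSceneGraph name are assumptions made for illustration, not details prescribed by the disclosure.

import threading

class VersionedSceneGraph:
    """Hypothetical scene graph that records each client update as a new version."""
    def __init__(self):
        self._lock = threading.Lock()
        self._versions = []        # (version_number, client_id, payload) tuples

    def transfer(self, client_id, payload):
        # Called concurrently from per-client threads; each write creates a version,
        # so neither client ever reads or mutates the other's data directly.
        with self._lock:
            version = len(self._versions) + 1
            self._versions.append((version, client_id, payload))

    def versions(self):
        with self._lock:
            return list(self._versions)

graph = VersionedSceneGraph()
threads = [
    threading.Thread(target=graph.transfer, args=("client_1", "first graphics data")),
    threading.Thread(target=graph.transfer, args=("client_2", "second graphics data")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(graph.versions())            # both updates recorded, in arrival order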
In some examples, a method is disclosed. The method may comprise: traversing a scene graph of a computer system having a display, wherein: the scene graph includes first 3D data associated with a first application, wherein the first 3D data includes one or more nodes; the scene graph includes second 3D data associated with a second application, wherein the second 3D data includes one or more nodes; the first application is sandboxed on the computer system relative to the second application; and the scene graph includes a relationship between a node of the first 3D data and a node of the second 3D data; and displaying on the display an image corresponding to the scene graph, wherein: the image corresponds to an output of traversing the scene graph and reflects the relationship, and the image may be a partial or complete display of that data. Additionally or alternatively to one or more of the examples above, the relationship may be a spatial relationship. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, an optimization to the output of traversing the scene graph. Additionally or alternatively to one or more of the examples above, applying the optimization may include culling surfaces. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, a visual effect to the output of traversing the scene graph. Additionally or alternatively to one or more of the examples above, applying the visual effect may include calculating a light value. Additionally or alternatively to one or more of the examples above, applying the visual effect may include executing a shader program. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, a physics effect to the output of traversing the scene graph. Additionally or alternatively to one or more of the examples above, applying the physics effect may include detecting a collision. Additionally or alternatively to one or more of the examples above, the scene graph may correspond to a version of a versioned scene graph. Additionally or alternatively to one or more of the examples above, graphics data corresponding to the first 3D data may be transferred to the scene graph by a host application executing on the computer system. Additionally or alternatively to one or more of the examples above, the graphics data corresponding to the first 3D data may be transmitted to the host application by a client of the host application. Additionally or alternatively to one or more of the examples above, first graphics data corresponding to the first 3D data may be transferred to the scene graph by the host application using a first processing thread, and second graphics data corresponding to the second 3D data may be transferred to the scene graph by the host application using a second processing thread independent of the first processing thread.
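The traversal, optimization, and visual-effect steps above can be pictured as a single pass over the graph. The sketch below culls back-facing surfaces (one possible optimization) and computes a Lambertian light value for the remainder (one possible visual effect); the surface representation and the lighting model are hypothetical simplifications, not the disclosed implementation.

import math

def traverse(scene_graph, light_direction):
    """Hypothetical traversal: cull back-facing surfaces, then shade the remainder."""
    output = []
    for surface in scene_graph:
        # Optimization: cull surfaces whose normal faces away from the viewer.
        if surface["normal"][2] <= 0:
            continue
        # Visual effect: a light value from the angle between normal and light.
        n, l = surface["normal"], light_direction
        dot = sum(a * b for a, b in zip(n, l))
        norm = math.sqrt(sum(a * a for a in n)) * math.sqrt(sum(a * a for a in l))
        output.append((surface["name"], round(max(dot / norm, 0.0), 3)))
    return output

scene = [
    {"name": "front_face", "normal": (0.0, 0.0, 1.0)},
    {"name": "back_face", "normal": (0.0, 0.0, -1.0)},    # culled by the optimization
]
print(traverse(scene, light_direction=(0.0, 0.0, 1.0)))   # [('front_face', 1.0)]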
In some examples, a computer system is disclosed. The system may include one or more processors, and memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform one or more of the methods described above.
In some examples, a non-transitory computer-readable storage medium is disclosed. The non-transitory computer-readable storage medium may store instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising: receiving, from a first client application of a computer system, first graphics data comprising a plurality of first nodes; receiving, from a second client application of the computer system, second graphics data comprising a plurality of second nodes; and generating a scene graph, wherein: the scene graph describes an occlusion relationship between the first nodes and the second nodes, and the scene graph is configured to render a scene based on the occlusion relationship when traversed by a processor of the computer system, wherein one or more nodes of the first plurality of nodes or the second plurality of nodes occludes another node. In some embodiments, occlusion is the visual obstruction of one node by another node when the rendered scene is viewed from a given angle. Additionally or alternatively to one or more of the examples above, the method may further include traversing, by a processor of the computer system, the scene graph. Additionally or alternatively to one or more of the examples above, the computer system may be configured to communicate with a display, and the method may further include displaying the output on the display, e.g., by displaying the rendered scene represented by the scene graph, or by displaying only those of the first or second plurality of nodes that are not occluded, and not the other nodes. For example, if the second plurality of nodes occludes a portion of the first plurality of nodes, the displayed output may be only the unoccluded nodes of the first plurality of nodes, without displaying any of the second plurality of nodes. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, an optimization to the output. Additionally or alternatively to one or more of the examples above, applying the optimization may include culling surfaces. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, a visual effect to the output. Additionally or alternatively to one or more of the examples above, applying the visual effect may include calculating a light value. Additionally or alternatively to one or more of the examples above, applying the visual effect may include executing a shader program. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, a physics effect to the output. Additionally or alternatively to one or more of the examples above, applying the physics effect may include detecting a collision. Additionally or alternatively to one or more of the examples above, the first client application may be a first application executing on the computer system, the second client application may be a second application executing on the computer system, and the first client application may be sandboxed on the computer system relative to the second client application.
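The notion of occlusion used above, visual obstruction of one node by another from a given viewing angle, can be tested per screen cell, as in this hypothetical sketch (the cell and depth representation is an assumption made for illustration):

def occluded(node, others):
    """A node is occluded if every screen cell it covers is also covered,
    at a smaller depth (nearer the viewer), by some other node."""
    for cell in node["cells"]:
        blocked = any(
            cell in other["cells"] and other["depth"] < node["depth"]
            for other in others
        )
        if not blocked:
            return False       # at least one cell of the node remains visible
    return True

near = {"name": "near", "depth": 1.0, "cells": {(0, 0), (1, 0)}}
far = {"name": "far", "depth": 2.0, "cells": {(0, 0), (1, 0)}}
print(occluded(far, [near]))   # True: the near node fully blocks the far node
print(occluded(near, [far]))   # False: the near node is in front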
Additionally or alternatively to one or more of the examples above, the first graphics data may correspond to a first client scene graph associated with the first client application, the second graphics data may correspond to a second client scene graph associated with the second client application, the first client scene graph may be sandboxed on the computer system relative to the second client scene graph, the first client scene graph may be sandboxed on the computer system relative to the scene graph, and the second client scene graph may be sandboxed on the computer system relative to the scene graph. Additionally or alternatively to one or more of the examples above, the scene graph may correspond to a version of a versioned scene graph. Additionally or alternatively to one or more of the examples above, the first graphics data may be transferred to the scene graph using a first processing thread of the computer system, and the second graphics data may be transferred to the scene graph using a second processing thread of the computer system independent of the first processing thread.
In some examples, a non-transitory computer-readable storage medium is disclosed. The non-transitory computer-readable storage medium may store instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising: traversing a scene graph of a computer system having a display, wherein: the scene graph includes first 3D data associated with a first application, wherein the first 3D data includes one or more nodes; the scene graph includes second 3D data associated with a second application, wherein the second 3D data includes one or more nodes; the first application is sandboxed on the computer system relative to the second application; and the scene graph includes a relationship between a node of the first 3D data and a node of the second 3D data; and displaying on the display an image corresponding to the scene graph, wherein: the image corresponds to an output of traversing the scene graph, and the image reflects the relationship. Additionally or alternatively to one or more of the examples above, the relationship may be a spatial relationship. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, an optimization to the output of traversing the scene graph. Additionally or alternatively to one or more of the examples above, applying the optimization may include culling surfaces. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, a visual effect to the output of traversing the scene graph. Additionally or alternatively to one or more of the examples above, applying the visual effect may include calculating a light value. Additionally or alternatively to one or more of the examples above, applying the visual effect may include executing a shader program. Additionally or alternatively to one or more of the examples above, the method may further include applying, at the computer system, a physics effect to the output of traversing the scene graph. Additionally or alternatively to one or more of the examples above, applying the physics effect may include detecting a collision. Additionally or alternatively to one or more of the examples above, the scene graph may correspond to a version of a versioned scene graph. Additionally or alternatively to one or more of the examples above, graphics data corresponding to the first 3D data may be transferred to the scene graph by a host application executing on the computer system. Additionally or alternatively to one or more of the examples above, the graphics data corresponding to the first 3D data may be transmitted to the host application by a client of the host application. Additionally or alternatively to one or more of the examples above, first graphics data corresponding to the first 3D data may be transferred to the scene graph by the host application using a first processing thread, and second graphics data corresponding to the second 3D data may be transferred to the scene graph by the host application using a second processing thread independent of the first processing thread.
In some examples, a computer system is disclosed. The system may include: one or more processors; and a memory configured to receive first scene data from a first client application at the computer system and second scene data from a second client application at the computer system, the memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform a method comprising: generating a graphics data structure based on the first scene data and the second scene data, the graphics data structure configured to cause, when provided as input to a rendering operation executed by the one or more processors, an output corresponding to an image on a display. Additionally or alternatively to one or more of the examples above, the graphics data structure may be at least one of a display list and a display tree. Additionally or alternatively to one or more of the examples above, the method may further include performing the rendering operation using the graphics data structure as input. Additionally or alternatively to one or more of the examples above, the computer system may further include a display, and the method may further include displaying the image on the display. Additionally or alternatively to one or more of the examples above, the first client application may be a first application executed by one or more processors of a first device, and the second client application may be a second application executed by the one or more processors of the first device. Additionally or alternatively to one or more of the examples above, the first client application may be a first application executed by one or more processors of a first device, and the second client application may be a second application executed by one or more processors of a second device. Additionally or alternatively to one or more of the examples above, the memory may be further configured to receive third scene data from a third client application. Additionally or alternatively to one or more of the examples above, the method may further include deleting the first scene data from the memory. Additionally or alternatively to one or more of the examples above, the graphics data structure may include first data and second data, and the method may further include: determining whether the first data corresponds to an occluded view or an unoccluded view; in response to determining that the first data corresponds to an unoccluded view, rendering an image including the unoccluded view based on the first data; and in response to determining that the first data corresponds to an occluded view, rendering an image that does not include the occluded view. Additionally or alternatively to one or more of the examples above, the memory may be further configured to store, in response to receiving the first scene data, the first scene data as a first version in a version control system. Additionally or alternatively to one or more of the examples above, the memory may be further configured to: receive third scene data from the first client application; and store the third scene data as a second version in the version control system. Additionally or alternatively to one or more of the examples above, the method may further include deleting the first version from the memory in response to generating the graphics data structure. Additionally or alternatively to one or more of the examples above, the method may be performed in parallel with the memory receiving the third scene data.
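One hypothetical shape for the graphics data structure above is a display list whose entries are flagged as occluded or unoccluded views, with rendering skipping the occluded ones; the names below are illustrative assumptions, not the disclosed design.

class DisplayList:
    """Hypothetical display list: ordered draw commands, each flagged by view state."""
    def __init__(self):
        self.entries = []

    def add(self, draw_command, occluded_view):
        self.entries.append({"cmd": draw_command, "occluded": occluded_view})

    def render(self):
        # Rendering includes only entries that correspond to unoccluded views.
        return [entry["cmd"] for entry in self.entries if not entry["occluded"]]

display_list = DisplayList()
display_list.add("draw_mesh(first_scene_data)", occluded_view=False)
display_list.add("draw_mesh(second_scene_data)", occluded_view=True)
print(display_list.render())   # only the unoccluded first-scene command is rendered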
Additionally or alternatively to one or more of the examples above, the memory may be configured to receive the first scene data in parallel with receiving the second scene data. Additionally or alternatively to one or more of the examples above, the memory may be configured to receive the third scene data at a first interval corresponding to a first data rate, and the method may further include adjusting a length of the first interval to correspond to a second data rate. Additionally or alternatively to one or more of the examples above, the first scene data may include at least one of new data, deleted data, and a change in a relationship between data.
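The interval adjustment described above amounts to deriving the receive interval from a target data rate; a minimal sketch, assuming the interval is simply the reciprocal of the rate:

def interval_for_rate(updates_per_second):
    """Hypothetical helper: receive interval as the reciprocal of the data rate."""
    return 1.0 / updates_per_second

interval = interval_for_rate(60.0)        # first data rate: 60 updates/s
print(round(interval * 1000, 1), "ms")    # 16.7 ms
interval = interval_for_rate(30.0)        # adjusted to a second data rate: 30 updates/s
print(round(interval * 1000, 1), "ms")    # 33.3 ms: the interval length doubles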
In some examples, a computer system is disclosed. The computer system may include a server, server data, a first client application, and a second client application, and may be configured to: receive, at the server, first unprocessed scene data from the first client application; receive, at the server, second unprocessed scene data from the second client application; merge, at the server, the first unprocessed scene data from the first client application, the second unprocessed scene data from the second client application, and the server data into a centralized scene data structure; execute, at the server, at least a portion of the data contained within the centralized scene data structure; and create a graphics data structure based on the executed data within the centralized scene data structure. Additionally or alternatively to one or more of the examples above, the graphics data structure may be a display list or a display tree. Additionally or alternatively to one or more of the examples above, the computer system may further include a rendering engine configured to render the graphics data structure into a processed image. Additionally or alternatively to one or more of the examples above, the computer system may further include a display configured to display the processed image. Additionally or alternatively to one or more of the examples above, the display may be capable of displaying virtual content while maintaining at least a partial view of the physical world. Additionally or alternatively to one or more of the examples above, the first client application and the second client application may be two different applications running on a single physical device. Additionally or alternatively to one or more of the examples above, the first client application and the second client application may be two different applications, each running on a separate physical device. Additionally or alternatively to one or more of the examples above, the server may be configured to receive third unprocessed scene data from a third client application. Additionally or alternatively to one or more of the examples above, the server may be configured to delete the unprocessed scene data from the first client application after that data has been executed. Additionally or alternatively to one or more of the examples above, the rendering engine may further include an occlusion module configured to separate data within the graphics data structure into a first occlusion category and a second occlusion category and to display the second occlusion category. Additionally or alternatively to one or more of the examples above, the server may be configured to store the first unprocessed scene data from the first client application as a first version. Additionally or alternatively to one or more of the examples above, the server may be configured to store the third unprocessed scene data from the first client application as a second version. Additionally or alternatively to one or more of the examples above, the computer system may be configured to store the first version of the first unprocessed scene data from the first client application from the time the server receives it until the time it is read and executed.
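The server-side merge described above might look like the following sketch, in which client data and server data are combined into one centralized structure and a display list is emitted, after which the processed client data is deleted; all names here are hypothetical.

class CentralizedSceneServer:
    """Hypothetical server merging unprocessed client scene data with server data."""
    def __init__(self, server_data):
        self.server_data = server_data
        self.unprocessed = []          # (client_id, raw scene data), in arrival order

    def receive(self, client_id, raw_scene_data):
        self.unprocessed.append((client_id, raw_scene_data))

    def build_display_list(self):
        # Merge server data and all client data into the centralized structure,
        # emit a display list, then delete the now-executed client data.
        centralized = [("server", data) for data in self.server_data] + self.unprocessed
        display_list = [f"draw({owner}:{data})" for owner, data in centralized]
        self.unprocessed.clear()
        return display_list

server = CentralizedSceneServer(server_data=["shared_background"])
server.receive("client_1", "cube")
server.receive("client_2", "sphere")
print(server.build_display_list())
print(server.unprocessed)              # []: client data deleted after execution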
Additionally or alternatively to one or more of the examples above, the server may be configured to receive the first unprocessed scene data from the first client application concurrently with receiving the second unprocessed scene data from the second client application. Additionally or alternatively to one or more of the examples above, the server may be configured to slow a rate at which the first client application sends unprocessed scene data to the server. Additionally or alternatively to one or more of the examples above, the data received from the first client application and the second client application may be at least one selected from the group consisting of: new data, deleted data, changes in a relationship between previously transmitted data, and modified data.
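Slowing a client's send rate, as described above, can be modeled by a server-assigned minimum interval between sends; the pacing scheme below is an illustrative assumption, not the disclosed mechanism.

import time

class ThrottledClient:
    """Hypothetical client whose send rate is slowed by a server-assigned interval."""
    def __init__(self, min_send_interval):
        self.min_send_interval = min_send_interval   # seconds the server requires between sends
        self._last_send = None

    def try_send(self, scene_data):
        now = time.monotonic()
        if self._last_send is not None and now - self._last_send < self.min_send_interval:
            return False                             # too soon: hold the update back
        self._last_send = now
        print("sent:", scene_data)
        return True

client = ThrottledClient(min_send_interval=0.1)      # server requests at most 10 sends/s
client.try_send("new data")                          # sent immediately
print(client.try_send("modified data"))              # False: within the paced interval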
Although the disclosed examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. For example, elements of one or more implementations may be combined, deleted, modified, or supplemented to form further implementations. Such changes and modifications are to be understood as being included within the scope of the disclosed examples as defined by the appended claims.

Claims (34)

1. A method, comprising:
receiving, from a first client application of a computer system, first graphics data comprising a plurality of first nodes;
receiving, from a second client application of the computer system, second graphics data comprising a plurality of second nodes; and
generating a scene graph, wherein:
the scene graph describes an occlusion relationship between at least one of the plurality of first nodes and at least one of the plurality of second nodes, and
the scene graph is configured to create a rendered scene based on the occlusion relationship, wherein at least one first node is occluded by at least one second node.
2. The method of claim 1, further comprising: traversing, by a processor of the computer system, the scene graph.
3. The method of claim 2, wherein the computer system is configured to communicate with a display, and the method further comprises displaying output on the display.
4. The method of claim 3, wherein displaying the output comprises: displaying at least one of the plurality of first nodes and at least one of the plurality of second nodes.
5. The method of claim 4, wherein displaying the output comprises displaying at least one first node.
6. The method of claim 2, further comprising: applying an optimization to the output at the computer system.
7. The method of claim 6, wherein applying the optimization comprises culling surfaces.
8. The method of claim 2, further comprising: applying, at the computer system, a visual effect to the output.
9. The method of claim 8, wherein applying the visual effect comprises calculating a light value.
10. The method of claim 8, wherein applying the visual effect comprises executing a shader program.
11. The method of claim 2, further comprising: applying, at the computer system, a physics effect to the output.
12. The method of claim 11, wherein applying the physics effect comprises detecting a collision.
13. The method of claim 1, wherein the first client application is a first application executing on the computer system, the second client application is a second application executing on the computer system, and the first client application is sandboxed on the computer system relative to the second client application.
14. The method of claim 1, wherein:
the first graphics data corresponds to a first client scene graph associated with the first client application,
the second graphics data corresponds to a second client scene graph associated with the second client application,
the first client scene graph is sandboxed on the computer system relative to the second client scene graph,
the first client scene graph is sandboxed on the computer system relative to the scene graph, and
the second client scene graph is sandboxed on the computer system relative to the scene graph.
15. The method of claim 1, wherein the scene graph corresponds to a version of a versioned scene graph.
16. The method of claim 1, wherein the first graphics data is transferred to the scene graph using a first processing thread of the computer system, and the second graphics data is transferred to the scene graph using a second processing thread of the computer system independent of the first processing thread.
17. A system comprising one or more processors, wherein the one or more processors are configured to perform a method comprising:
receiving, from a first client application of a computer system, first graphics data comprising a plurality of first nodes;
receiving, from a second client application of the computer system, second graphics data comprising a plurality of second nodes; and
generating a scene graph, wherein:
the scene graph describes an occlusion relationship between at least one of the plurality of first nodes and at least one of the plurality of second nodes, and
the scene graph is configured to create a rendered scene based on the occlusion relationship, wherein at least one first node is occluded by at least one second node.
18. The system of claim 17, wherein the method further comprises: traversing, by a processor of the computer system, the scene graph.
19. The system of claim 18, wherein the computer system is configured to communicate with a display, and the method further comprises: causing the computer system to display output on the display.
20. The system of claim 19, wherein displaying the output comprises: displaying at least one of the plurality of first nodes and at least one of the plurality of second nodes.
21. The system of claim 20, wherein displaying the output comprises displaying at least one first node.
22. The system of claim 18, wherein the method further comprises: causing the computer system to apply an optimization to the output.
23. The system of claim 22, wherein applying the optimization comprises culling surfaces.
24. The system of claim 18, wherein the method further comprises: causing the computer system to apply a visual effect to the output.
25. The system of claim 24, wherein applying the visual effect comprises calculating a light value.
26. The system of claim 24, wherein applying the visual effect comprises executing a shader program.
27. The system of claim 18, wherein the method further comprises: causing the computer system to apply a physics effect to the output.
28. The system of claim 27, wherein applying the physics effect comprises detecting a collision.
29. The system of claim 17, wherein the first client application is a first application executing on the computer system, the second client application is a second application executing on the computer system, and the first client application is sandboxed on the computer system relative to the second client application.
30. The system of claim 17, wherein:
the first graphics data corresponds to a first client scene graph associated with the first client application,
the second graphics data corresponds to a second client scene graph associated with the second client application,
the first client scene graph is sandboxed on the computer system relative to the second client scene graph,
the first client scene graph is sandboxed on the computer system relative to the scene graph, and
the second client scene graph is sandboxed on the computer system relative to the scene graph.
31. The system of claim 17, wherein the scene graph corresponds to a version of a versioned scene graph.
32. The system of claim 17, wherein the first graphics data is transferred to the scene graph using a first processing thread of the computer system, and the second graphics data is transferred to the scene graph using a second processing thread of the computer system independent of the first processing thread.
33. The system of claim 17, wherein the system comprises the computer system.
34. The system of claim 17, wherein the first client application is a first application executed via the one or more processors, the second client application is a second application executed via the one or more processors, and the first client application is sandboxed on the system relative to the second client application.
CN201980051078.7A 2018-06-18 2019-06-18 Centralized rendering Pending CN112513969A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/011,413 2018-06-18
US16/011,413 US10977858B2 (en) 2017-03-30 2018-06-18 Centralized rendering
PCT/US2019/037811 WO2019246157A1 (en) 2018-06-18 2019-06-18 Centralized rendering

Publications (1)

Publication Number Publication Date
CN112513969A (en) 2021-03-16

Family

ID=68984334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980051078.7A 2018-06-18 2019-06-18 Centralized rendering Pending CN112513969A (en)

Country Status (4)

Country Link
EP (1) EP3807868A4 (en)
JP (3) JP7411585B2 (en)
CN (1) CN112513969A (en)
WO (1) WO2019246157A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3338673B2 (en) 1999-09-07 2002-10-28 株式会社マルチターム 3D virtual space sharing multi-user system
US7064766B2 (en) 2001-10-18 2006-06-20 Microsoft Corporation Intelligent caching data structure for immediate mode graphics
JP7203844B2 2017-07-25 2023-01-13 達闥機器人股份有限公司 Training data generation method, generation device, and semantic segmentation method for the image

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020171644A1 (en) * 2001-03-31 2002-11-21 Reshetov Alexander V. Spatial patches for graphics rendering
US20050182844A1 (en) * 2004-02-17 2005-08-18 Sun Microsystems, Inc. Efficient communication in a client-server scene graph system
US7800614B2 (en) * 2004-02-17 2010-09-21 Oracle America, Inc. Efficient communication in a client-server scene graph system
US20100013842A1 (en) * 2008-07-16 2010-01-21 Google Inc. Web-based graphics rendering system
US20130120418A1 (en) * 2008-07-16 2013-05-16 Robin Green Web-Based Graphics Rendering System
US20130012418A1 (en) * 2010-06-15 2013-01-10 Yukio Tatsumi Lubricating oil composition for internal combustion engine
US8730264B1 (en) * 2011-09-26 2014-05-20 Google Inc. Determining when image elements intersect
CN104541201A (en) * 2012-04-05 2015-04-22 奇跃公司 Wide-field of view (FOV) imaging devices with active foveation capability
US20160293133A1 (en) * 2014-10-10 2016-10-06 DimensionalMechanics, Inc. System and methods for generating interactive virtual environments
CN106056663A (en) * 2016-05-19 2016-10-26 京东方科技集团股份有限公司 Rendering method for enhancing reality scene, processing module and reality enhancement glasses
CN110476188A * 2017-03-30 2019-11-19 奇跃公司 Centralized rendering
CN107909652A * 2017-11-10 2018-04-13 上海电机学院 Method for implementing mutual occlusion of virtual and real scenes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GREGORIJ KURILLO: "Teleimmersive 3D Collaborative Environment for Cyberarchaeology", COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 13 June 2010 (2010-06-13), pages 23 - 28, XP031728746 *

Also Published As

Publication number Publication date
EP3807868A4 (en) 2021-09-22
JP7411585B2 (en) 2024-01-11
EP3807868A1 (en) 2021-04-21
JP2023011823A (en) 2023-01-24
JP2021530024A (en) 2021-11-04
JP2024023346A (en) 2024-02-21
WO2019246157A1 (en) 2019-12-26

Similar Documents

Publication Publication Date Title
US11315316B2 (en) Centralized rendering
US11201953B2 (en) Application sharing
US11295518B2 (en) Centralized rendering
US11637999B1 (en) Metering for display modes in artificial reality
US11829529B2 (en) Look to pin on an artificial reality device
JP7411585B2 (en) Centralized rendering
US11663768B2 (en) Multi-process compositor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination