WO2023102275A1 - Multi-pipeline and jittered rendering methods for mobile
- Publication number: WO2023102275A1 (application PCT/US2022/054394)
- Authority: WIPO (PCT)
- Prior art keywords: rendering pipeline, scene data, data, virtual, graphical
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- the disclosed technology generally relates to methods of rendering digital images at a user device.
- the disclosed technology includes methods and systems for applying graphical manipulations to virtual environments, wherein the methods and systems result in an image snapshot functionality, virtual object jitter based antialiasing, and virtual object jitter-based depth of field effect.
- Image rendering quality, including more realistic simulations of real-world objects, higher resolution, or smoother framerates, is a constant goal for several technologies and applications. This goal applies to all technologies relying on image rendering to create a user experience; however, consumers of real-time gaming applications are notorious for their high expectations regarding the render quality of their user experiences. Trends in real-time gaming applications toward employing processor-heavy graphical manipulation techniques such as ray tracing have put pressure on innovators in this space to develop new methods which provide the same or similar visual effects with less demand on hardware components. Innovators also seek to implement new functionalities which allow users of real-time gaming applications increased flexibility in tailoring their user experience to their personal preferences.
- FIG. 1 illustrates one example of a user experience rendering system, in accordance with examples of the present disclosure.
- FIGS. 2A-F illustrate various graphical manipulations within an example rendering pipeline, in accordance with examples of the present disclosure.
- FIG. 3 illustrates an example graphical application generating a real-time user experience, in accordance with examples of the present disclosure.
- FIG. 4 illustrates an example graphical application employing an in-game snapshot functionality, in accordance with examples of the present disclosure.
- FIG. 5 illustrates an example graphical application generating a real-time user experience and employing an embodiment of a jittered rendering pipeline, in accordance with examples of the present disclosure.
- FIGS. 6A-B illustrate how a jittered rendering pipeline may achieve an antialiasing effect, in accordance with examples of the present disclosure.
- FIGS. 7A-B illustrate how an image rendered with a depth of field effect generated by a jittered rendering pipeline may compare with an image rendered without a depth of field effect, in accordance with examples of the present disclosure.
- FIG. 8 depicts an additional block diagram of an example computer system in which various of the examples described herein may be implemented, in accordance with examples of the present disclosure.
- rendering refers to any process converting scene data into one or more images.
- Scene data refers to a series of data structures representing a virtual environment.
- a “virtual environment” as used herein refers to one or more virtual objects (e.g., characters, landscape background, cameras, raster grids, light sources) spatially oriented along a virtual coordinate system.
- a “user experience” refers to one or more images which may be displayed to a user to simulate a virtual environment. In some examples, the user experience rendered is additionally interactive.
- Interactive as used herein denotes that the user may provide inputs which alter the virtual environment that is being simulated, and/or how the virtual environment is rendered into one or more images.
- a "graphical application” as described herein refers to a set of instructions which may be executed by a processor, wherein the set of instructions comprise instructions which cause the processor to perform graphical manipulations on scene data.
- a "graphical manipulation” as described herein refers to a set of logical operations which change data in accordance with a change to a visual representation of the data.
- additional sets of instructions may be run within or in parallel with a graphical application, wherein the additional sets of instructions apply manipulations on scene data which may reflect user inputs or other changes (e.g. changes to non-visual characteristics of the virtual objects such as a virtual actor's backstory or a virtual object's simulated mass) to the scene data.
- a rendering pipeline may be employed.
- a "rendering pipeline” described herein refers to a series of graphical manipulations.
- a rendering pipeline consists of graphical manipulations which may be conceptualized as placing and orienting virtual objects in a common coordinate frame, projecting those objects onto a raster grid, translating virtual optical characteristics (e.g., color, brightness, transparency) of the projected objects into virtual optical characteristics stored at fragments within the raster grid, converting the virtual optical characteristics at each fragment into pixel data written to a framebuffer, and then driving that framebuffer to a display element.
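- To make the stages named above concrete, the following is a minimal, illustrative Python sketch of one pipeline iteration under heavily simplified assumptions (point "objects", an orthographic flattening, and a tiny grid). The Vertex, project_to_raster, and write_framebuffer names are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    x: float
    y: float
    z: float
    color: tuple  # a virtual optical characteristic, e.g. (r, g, b)

def project_to_raster(vertices, grid_w, grid_h):
    """Flatten vertices along the viewing (z) axis onto a raster grid."""
    fragments = {}
    for v in vertices:
        fx, fy = int(v.x), int(v.y)          # drop z: simple orthographic flattening
        if 0 <= fx < grid_w and 0 <= fy < grid_h:
            fragments[(fx, fy)] = v.color    # one set of optical characteristics per fragment
    return fragments

def write_framebuffer(fragments, grid_w, grid_h, clear=(0, 0, 0)):
    """Convert per-fragment optical characteristics into pixel data."""
    framebuffer = [[clear] * grid_w for _ in range(grid_h)]
    for (fx, fy), color in fragments.items():
        framebuffer[fy][fx] = color          # pixel data ready to be driven to a display
    return framebuffer

# One "iteration" of the pipeline: scene data -> fragments -> framebuffer.
scene = [Vertex(1.2, 1.7, 5.0, (255, 0, 0)), Vertex(2.4, 3.1, 2.0, (0, 255, 0))]
print(write_framebuffer(project_to_raster(scene, 4, 4), 4, 4))
```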
- a "raster grid” as used herein refers to a virtual two-dimensional grid.
- a “fragment” as used herein refers to individual cells within the raster grid.
- "Pixel data” as used herein refers to data entries which approximate virtual optical characteristics, wherein the pixel data is compatible with being driven to an associated element (e.g. a single red/green/blue combination "RGB” light-emitting diode (“LED”) pixel on a computer monitor screen) of a display device.
- a “framebuffer” as used herein refers to a collection of positions within computer readable media that corresponds to elements of a display device.
- the framebuffer is a faster portion of machine readable media (e.g., random-access memory (“RAM”), processor cache) that is capable of being driven to a display device by a processor more quickly than slower portions of machine readable media (e.g., read-only memory (“ROM”), flash drives).
- each time a processor completes execution of a rendering pipeline, the rendering pipeline is said to have been "iterated”. Multiple iterations of a rendering pipeline may result in a series of framebuffers being driven to the display device. The rate at which a new framebuffer is driven to the display device, overriding a previous framebuffer, is referred to herein as the "framerate" of the associated user experience.
- graphical applications may provide users with greater flexibility in tailoring their user experience to their personal preferences. This is accomplished by utilizing multiple rendering pipelines simultaneously. "Simultaneously" is used in this context to denote that the executable instructions and/or program data associated with the graphical manipulations within the pipelines may be loaded into a faster portion of memory before execution of some portion of the graphical application. This transfer to a faster portion of memory allows any of the rendering pipelines' associated data and/or instructions to be more quickly passed to processors when switching between the multiple rendering pipelines. Running multiple rendering pipelines in this way gives the user the ability to switch between them, and thus greater flexibility in tailoring the user experience.
- this switch is initiated by the user indicating that they wish to take an in-environment snapshot.
- the rendering pipeline associated with the snapshot yields a higher render quality, but at a lower framerate, effectively giving the user flexibility in determining when to sacrifice user experience framerate for user experience render quality.
- the switch between rendering pipelines when done on a system utilizing multiple rendering pipelines in the manner disclosed herein, is able to be accomplished without delays that would otherwise be needed to read a rendering pipeline's associated data and/or instructions from a slower portion of memory.
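- A hedged sketch of this multi-pipeline arrangement is shown below: both pipelines are prepared before the main loop begins, so switching between them per frame is a lookup rather than a load from slower storage. The function names and the string "frames" are illustrative stand-ins, not the disclosed implementation.

```python
def render_realtime(scene):
    # Stand-in for a fast, lower-quality rendering pipeline iteration.
    return f"fast frame of {len(scene)} objects"

def render_snapshot(scene):
    # Stand-in for a slower, higher-quality rendering pipeline iteration.
    return f"high-quality frame of {len(scene)} objects"

# Both pipelines are made resident up front, before the render loop starts.
pipelines = {"realtime": render_realtime, "snapshot": render_snapshot}

def render_frame(scene, snapshot_requested):
    # Switching pipelines is a cheap lookup; nothing is loaded from slow storage here.
    key = "snapshot" if snapshot_requested else "realtime"
    return pipelines[key](scene)

print(render_frame(["cube"], snapshot_requested=False))
print(render_frame(["cube"], snapshot_requested=True))
```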
- examples of rendering pipelines disclosed herein employ improved methods of achieving visual effects in the user experience. In some examples, this is done by implementing jittering virtual objects in the virtual environment between iterations of the associated rendering pipeline, such that effects like anti-aliasing and depth of field may be achieved.
- Aliasing as used herein may refer to the phenomenon where an edge of a virtual object bisects a fragment, resulting in a jagged edge formed by the fragment being able to map to only one set of virtual optical characteristics. In some embodiments, fragments are limited to one set of virtual optical characteristics so that they may be compatible with eventual translation into one set of pixel data to be driven to a single element of a display device.
- "Jittering” as used herein refers to applying offsets to the spatial positions of virtual objects (or points making up vertices of the virtual objects), wherein the offsets change between each iteration of the associated rendering pipeline.
- the offsets are randomized, wherein the virtual objects are moved to a random point within a determined volume around the given virtual object's original position. In some examples, the offsets are limited to a determined area in a 2D plane normal to a line formed between a virtual camera perspective point and a point on the virtual object. These offsets may cause the virtual optical characteristics for a given fragment to change between iterations of a rendering pipeline. This change between iterations results in a change in the pixel data output to the associated element of the display device at various images within the series of images making up the user experience. In some embodiments, these changing elements in the user experience result in a blurring illusion, which smooths out the jagged edges from the aliasing effect.
- This reduction of aliasing artifacts is referred to herein as "anti-aliasing".
- when pixel data for fragments is written to the framebuffer, the pixel data may be accumulated with previous pixel data that may already be present in the framebuffer as a result of a previous iteration of a rendering pipeline. This accumulation operation results in anti-aliasing via a similar blurring of pixel data across multiple iterations of a rendering pipeline. This accumulating of virtual optical characteristics or pixel data across multiple iterations of a rendering pipeline is herein referred to as "progressive rendering".
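- The following is an illustrative sketch of jitter combined with progressive accumulation, under simplifying assumptions: a one-dimensional "edge", uniform random offsets, and a running average standing in for the framebuffer accumulation. The names render_edge and progressive_render are hypothetical.

```python
import random

def render_edge(edge_pos, width):
    """Return 1.0 for fragments whose center lies left of a vertical edge, 0.0 otherwise."""
    return [1.0 if x + 0.5 < edge_pos else 0.0 for x in range(width)]

def progressive_render(edge_pos, width, iterations, jitter=0.5):
    accumulated = [0.0] * width
    for i in range(1, iterations + 1):
        offset = random.uniform(-jitter, jitter)          # jitter changes each iteration
        frame = render_edge(edge_pos + offset, width)
        # Accumulate with what previous iterations left in the "framebuffer" (running mean).
        accumulated = [a + (f - a) / i for a, f in zip(accumulated, frame)]
    return accumulated

# The fragment the edge bisects converges toward an intermediate value (anti-aliasing),
# while fully covered and fully uncovered fragments stay at 1.0 and 0.0.
print(progressive_render(edge_pos=2.3, width=5, iterations=64))
```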
- Depth of field refers to the effect created where virtual objects spatially separated from a focal point are blurred.
- the rendered images encourage the viewer to focus on the focal point, as objects separated from the focal point are blurred and therefore hard to visually perceive.
- the magnitude of blurring scales as the distance between the virtual object and the focal point increases. Examples are disclosed herein where a depth of field effect is achieved by a rendering pipeline employing graphical manipulations which jitter virtual objects according to their distance from a focal point defined in the scene data.
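- A small sketch of this distance-scaled jitter is given below. The linear scaling, the blur_scale constant, and the square offset region are assumptions made for illustration rather than parameters from the disclosure.

```python
import random

def dof_jitter_offset(object_depth, focal_depth, blur_scale=0.02, max_radius=1.0):
    """Jitter radius grows with distance from the focal point, capped at max_radius."""
    radius = min(abs(object_depth - focal_depth) * blur_scale, max_radius)
    # Offset within a square region in the plane normal to the camera's viewing direction.
    return (random.uniform(-radius, radius), random.uniform(-radius, radius))

print(dof_jitter_offset(object_depth=50.0, focal_depth=10.0))  # large offsets -> blurred
print(dof_jitter_offset(object_depth=10.0, focal_depth=10.0))  # zero offset -> sharp
```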
- the intended visual effects are more effectively isolated to the virtual objects for which they were intended.
- the effects may apply across multiple objects in the image, as fragments within the raster grid or pixel data entries within the framebuffer do not have conceptual separation relating to the virtual objects for which the depth of field effect was intended. Instead, the effect is applied to virtual objects for which the effect was not intended.
- This misapplication of visual effects is referred to herein as "bleed-over”.
- further graphical manipulations are required to correct these bleed-over artifacts. By avoiding these further graphical manipulations, the rendering pipeline may complete in less time, resulting in the rendering pipeline providing a similar depth of field effect at a higher framerate.
- FIG. 1 illustrates one example of a user experience rendering system 100.
- User experience system 100 may implement an example rendering pipeline (e.g., real-time rendering pipeline 131, offline rendering pipeline 132, interactive-time rendering pipeline 133, jittered rendering pipeline 134) to render a user experience.
- the user experience rendering system 100 comprises machine readable media 101 (sometimes referred to herein generally as "memory"), interconnection system 102, processor 103, input interface 104, display interface 105, and communication interface 106.
- Machine readable media 101 may comprise any form of information storage (RAM, ROM, flash drives, processor caches), and covers both static and dynamic storage as well as long term and short term storage. Some of the information stored on machine readable media 101 may be categorized as executable instructions 107 and/or program data 108.
- Executable instructions 107 may refer to any set of instructions (e.g., compiled program logic, non-compiled program logic, machine code) stored on machine readable media 101 that, when executed by processor 103, cause the processor 103 to carry out the functions described herein.
- Executable instructions 107 may include operating system 109, application program 110, and graphical application 130.
- Program data 108 may refer to any collection of data input to and/or output from the processor 103 when executing any member of executable instructions 107.
- Program data 108 may include operating system data 111, application data 112, graphical application data 135, and rendered image store 136.
- Machine readable media 101 may comprise a combination of different storage media with physical and/or logical separation.
- data and/or instructions stored on machine readable media 101 may be stored partially across a plurality of storage media. For instance, while executing application program 110 processor 103 may write some portion of application data 112 from ROM to RAM, such that processor 103 will be able to more quickly access that portion of application data 112 while executing remaining instructions within application program 110. This writing of application data 112 from ROM to RAM does not remove application data 112 from machine readable media 101, because machine readable media 101 may refer collectively to any and all forms of machine readable media accessible by the processor (e.g., RAM, ROM, flash drives, processor caches).
- Interconnection system 102 may refer to one or more communication media facilitating interaction between components of user experience rendering system 100.
- interconnection system 102 is structured as a bus connected to machine readable media 101, processor 103, input interface 104, display interface 105, and communication interface 106, however in some examples, one or more of these components may have dedicated connections to one or more of the other components. In some examples, one or more of these components may be connected to one or more of the other components via a network connection.
- Processor 103 may refer to one or more general purpose processors and/or one or more special purpose processors (e.g., graphics processing units (“GPUs”), network processors, or application-specific integrated circuits ("ASICs”)).
- processors may be operated in parallel so that multiple instruction sets may be simultaneously executed by processor 103.
- Input device 104 may refer to any device with which a user may interact (e.g. a keyboard, mouse, touch screen), wherein the device converts such interactions into signals interpretable by processor 103.
- FIG. 1 depicts input device 104 as a mobile platform's touchscreen 113, however other examples of user experience rendering system 100 may be implemented on a different platform lending itself to other types of input devices.
- an input device 104 may comprise a keyboard and/or mouse.
- input device 104 is depicted herein as relating to only one input device, however input device 104 may also refer to one or more additional input devices that may be operated simultaneously with the mobile platform's touch screen (e.g., side buttons, phone camera, microphone).
- Display device 105 may refer to any device which may output a visual experience to a user (e.g. a smartphone screen, a liquid crystal display (“LCD”), an LED). Rendered images may be output to display device 105 by processor 103 writing pixel data representing the rendered image to rendered image store 136, with display device 105 in turn driving the pixel data stored at the rendered image store 136 to elements of display device 105. In some examples, processor 103 may drive the pixel data stored at the rendered image store 136 to elements of display device 105.
- FIG. 1 depicts display device 105 as a mobile platform display screen 114, however other examples of user experience rendering system 100 may be implemented on a different platform which lends itself to other types of display devices.
- an LCD monitor may permit a physically larger and/or higher resolution (more pixels per image) user experience.
- display device 105 is depicted herein as relating to only one display device, however display device 105 may also refer to one or more additional display devices (e.g., LEDs, attached LCD screens) that may be operated simultaneously with the mobile platform's screen.
- Communication interface 106 may refer to one or more devices that allow processor 103 to communicate with components not located locally with processor 103, and/or one or more devices that allow for instructions and/or data to be sent from machine readable media 101 over a network.
- Communication interface 106 may include a modem or soft modem, a network interface (such as an Ethernet, network interface card, WiMedia™, IEEE™ 802.XX, or other interface), a communications port (such as, for example, a Universal Serial Bus (“USB”) port, infrared (“IR”) port, Recommended Standard 232 (“RS232”) port, Bluetooth® interface, or other port), or other communications interface.
- Instructions and data transferred via communications interface 106 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical), or other signals capable of being exchanged by a given communications interface 106. These signals might be emitted from communications interface 106 via a channel using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, a radio frequency ("RF") link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.
- FIG. 1 depicts communication interface employing a wireless cellular link 115 as a channel, however other examples of user experience rendering system 100 may be implemented on a different platform which lends itself to other types of communications channels.
- FIG. 1 depicts all of user experience rendering system 100's components being located locally within one mobile phone's physical housing and communicating via interconnect system 102.
- one or more of the disclosed components may instead be located remotely (outside of any associated physical housing around one or more components), with the remaining locally located components communicating with the remotely located components via communication interface 106.
- operating system 109 manages the various hardware and software components of the system and provides common interfacing services.
- Operating system 109 may include any known operating system available (e.g., WindowsTM, MacOSTM, LinuxTM), may be custom written for the system, or may be excluded altogether with the hardware and software components providing their own interfacing services.
- processor 103 may read input from and/or output to operating system data 111.
- Operating system data 111 may refer to data related to interfacing the various components of a computer system (e.g., executable instruction locations, program data locations, interface preferences, device driver locations).
- Application program 110 may include one or more software programs meant to perform a function aside from rendering a user experience (e.g., email, phone, internet browser).
- processor 103 may read input from and/or output to application data 112.
- Application data 112 may refer to data related to the functions performed by application program 110 (e.g., email address book, phone number contact list, bookmarked webpage list).
- Graphical application 130 may include any software program meant to output one or more rendered images. Examples of graphical applications may include interactive video games (e.g., World of Warcraft™, Final Fantasy XIV™, Pac-Man™), as well as animation software (e.g., Autodesk™ Maya™, Blender™, Adobe™ Animate™). When executing instructions relating to graphical application 130, processor 103 may read input from and/or output to graphical application data 135. Graphical application data 135 may refer to data related to the content visually displayed to a user (e.g., virtual character settings, login account profile, virtual environment objects).
- Rendered image store 136 is depicted herein within graphical application data 135, and is where images may be output by processor 103 executing a rendering pipeline.
- rendered image store 136 is the framebuffer.
- rendered image store 136 may consist of a fast portion (e.g., RAM, processor cache) of machine readable media 101.
- rendered images output by graphical application 130 may be written to a slower portion (e.g. ROM) of rendered image store 136 and then later written back to the framebuffer to drive rendered images on display device 105 without needing to run iterations of a rendering pipeline.
- rendered image store 136 may refer to two separate and distinct machine readable media devices (e.g., RAM vs. ROM), with the shared characteristic that they may be utilized to hold rendered image data for later display on display device 105.
- Graphical application 130 may instruct processor 103 to process user inputs, graphical application data, etc. into changes to scene data 137, associated with a virtual environment, such that the changes to these virtual objects are reflected in one or more images rendered based on the virtual environment.
- processor 103 may execute instructions of graphical application 130 which output changes to the scene data 137 such that a virtual basketball object changes position relative to a virtual basketball court as if the virtual basketball had been propelled forward while falling according to a virtual gravity.
- not all of the associated changes to the virtual objects within scene data 137 trigger an iteration of a rendering pipeline and a resulting output to the display device 105.
- changes to the virtual basketball's data within scene data 137 may occur at a rate exceeding the rate that images can be rendered and output by processor 103 to the display device 105.
- the graphical application 130 instructs processor 103 to run through iterations of a rendering pipeline at periodic intervals independent of changes to the scene data 137.
- because the rendering pipeline may be iterated fewer times per second than processor 103 updates scene data 137, the instructions within the rendering pipeline are executed fewer times per second than if the rendering pipeline were iterated for every change to scene data 137.
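- The sketch below illustrates this decoupling under assumed, arbitrary tick counts: scene data is updated every tick, while the rendering pipeline is iterated only on a periodic interval.

```python
def run(total_ticks=10, render_every=3):
    scene_version = 0
    rendered_frames = []
    for tick in range(total_ticks):
        scene_version += 1                 # scene data changes every tick
        if tick % render_every == 0:       # pipeline iterated on a periodic interval
            rendered_frames.append(scene_version)
    return scene_version, rendered_frames

updates, frames = run()
print(f"{updates} scene updates, {len(frames)} pipeline iterations: {frames}")
```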
- the virtual objects within scene data 137 have points which define the vertices of polygons forming the surface of the objects.
- the objects, vertices, and/or polygons may additionally have associated virtual optical characteristics (e.g., color, brightness, transparency).
- in order for the scene data 137 to be converted into a visual image for display at display device 105, the virtual optical characteristics must be translated to pixel data which may be saved to the framebuffer within rendered image store 136.
- said virtual optical characteristics of virtual objects within scene data 137 are mapped to pixel data entries within the framebuffer by instructing processor 103 to project the virtual optical characteristics of one or more polygons in the scene data 137 onto positions on a raster grid.
- the virtual optical characteristics projected onto each fragment of the raster grid are then written by processor 103 to a position within the framebuffer which corresponds to an element of display device 105.
- copies of scene data 137, fragment data, and/or framebuffer data may be manipulated to provide for a desired effect in the user experience.
- the order and/or number of times that these graphical manipulations are applied and/or the data to which these graphical manipulations are applied may be varied to provide for a wide range of user experiences with differing render times and/or image quality output from the associated rendering pipeline.
- Examples of rendering pipelines are depicted herein as being subsets of instructions within graphical application 130, however in other examples, such rendering pipelines may instead be standalone sets of instructions not exclusive to any specific application, program, or set of executable instructions.
- Real-time rendering pipeline 131 may refer to any rendering pipeline that renders an image within a time frame meant to provide a user with a smooth visual experience based on scene data 137 that is being simultaneously manipulated by the user's inputs.
- a person of skill in the art will appreciate that real-time rendering pipelines are often required to complete an iteration within a timeframe that allows an entire framebuffer to be updated a minimum number of times per second (e.g., no less than thirty times per second) associated with a target framerate. What framerate is considered sufficiently smooth, however, can vary widely with the graphical application and the target user experience. For instance, a competitive first-person shooter video game (e.g., Counter-Strike®) may require over 100 frames per second to be considered adequately smooth for the needs of competitive gamers, however animation software where virtual environments are designed for later refinement and/or viewing (e.g., Autodesk® Maya®, Blender®, Adobe® Animate®) may be sufficiently smooth as long as the animator can still orient themselves within the virtual environment (e.g., less than 10 frames per second may suffice).
- Offline rendering pipeline 132 may refer to any rendering pipeline that renders an image within any length of time and without the scene data 137 being simultaneously manipulated by a user's inputs.
- An offline rendering pipeline 132 may render one or more images based on scene data 137 that is changing with time, however, these changes are determined before rendering of a first image begins.
- the purpose of offline rendering pipeline 132 is to create a higher quality visual experience for a user that does not interact with the graphical application 130 while the rendering is taking place.
- the tradeoff of higher quality often comes at the expense of the rendering process taking longer, as processor 103 may be required to complete more steps per iteration of the offline rendering pipeline 132.
- the one or more rendered images are stored at rendered image store 136 and later written to the framebuffer after all iterations of offline rendering pipeline 132 have completed.
- Interactive-time rendering pipeline 133 may refer to any rendering pipeline that renders an image within a time frame meant to provide the user with a smooth visual experience based on scene data 137 that is being simultaneously manipulated by the user's inputs to the user experience rendering system 100.
- interactive-time rendering pipeline 133 may update the framebuffer at a lower framerate than real-time rendering pipeline 131 or some other threshold value (e.g. thirty frames per second).
- this threshold value relates to what is considered sufficiently smooth for the target user experience, and can vary widely based on the target user. For instance, a competitive first-person shooter video game (e.g., Counter-Strike®) may consider any rendering pipeline outputting at any framerate under a threshold value of 100 frames per second to be an interactive-time rendering pipeline.
- animation software where virtual environments are designed for later refinement and/or viewing (e.g. Autodesk® Maya®, Blender®, Adobe® Animate®) may require a rendering pipeline to output a framerate under 10 frames per second before it is considered an interactive-time rendering pipeline.
- interactive-time rendering pipeline 133 may be provided alongside a real-time rendering pipeline in order to give the user the ability to selectively switch between rendering pipelines. This switching allows the user to temporarily sacrifice the amount of frames rendered per second (and user experience smoothness) in exchange for an increase to the quality (e.g. resolution, virtual lighting realism, polygon count) of images rendered by the user experience rendering system 100.
- Jittered rendering pipeline 134 may refer to a rendering pipeline that renders an image within any time frame, and that additionally manipulates the spatial locations of virtual objects within scene data 137 (e.g., the x, y, z coordinates of the virtual objects' centerpoints, vertices, edges, etc.).
- a copy of scene data 137 may be manipulated instead, such that changes in spatial positions in the copy of scene data 137 do not affect other sets of instructions that may be reading from scene data 137 directly (e.g., a physics engine simulating physics within the virtual environment).
- a jittered rendering pipeline 134 may be employed to provide anti-aliasing on the rendered user experience.
- a jittered rendering pipeline 134 may provide a depth of field effect.
- while processor 103 executes the instructions associated with the actions disclosed herein, for the sake of concise language, such actions may be described as being taken by the associated example of graphical application 130, or such actions may not be explicitly described as being taken by any specific actor. In those cases, processor 103 is still to be understood as executing the instructions associated with the action. In some examples, the disclosed steps may be executed in a different order, in parallel across multiple processors, or in addition to other steps relating to graphical manipulation. As such, the term "step" is provided for illustrative purposes and should be non-limiting in the ordering of the operations discussed herein.
- FIGS. 2A-F illustrate various graphical manipulations within an example rendering pipeline.
- the disclosed rendering pipeline constitutes a complete rendering pipeline.
- a complete rendering pipeline may constitute the disclosed example with one or more of the disclosed graphical manipulations removed, and/or one or more other graphical manipulations added in.
- FIGS. 2A-C illustrate the application of one or more vertex shaders to scene data, wherein the vertex shaders may convert vertices relating to virtual objects within the scene data into points located relative to a coordinate system defining space within a virtual environment.
- the scene data may comprise a list of virtual objects (e.g., characters, landscape background, cameras, raster grids, light sources) grouped as arrays of data defining characteristics of the virtual objects, including the vertices defining the bounds of the virtual objects.
- FIG. 2A illustrates a virtual cube object 201 represented as an array of data containing x,y,z coordinates of each vertex of a virtual cube object 201. While virtual cube object 201 is stored as a list of vertices, additional attributes may also be ascribed to virtual cube object 201 by inclusion in the associated array of data. For instance, each vertex may be associated with a set of virtual optical characteristics (e.g., color, brightness, transparency), and/or a set of virtual physical characteristics (e.g., mass, heat, velocity, volume).
- FIG. 2B illustrates an example result after one or more vertex shaders are applied to the scene data.
- a virtual spatial coordinate system 202 may be built based on the spatial coordinates and direction designated for a virtual camera perspective defined in the scene data, wherein the virtual spatial coordinate system 202 defines spatial relations within a virtual environment.
- the camera perspective is not displayed directly in FIGS. 2B-E, but may be understood to approximate the same perspective from which a viewer observes the content of FIGS. 2B-E.
- the virtual environment is populated with the vertices 203 of virtual cube object 201, as defined by each vertex's x,y,z coordinates in the scene data.
- objects outside of a viewing area defined in relation to the field of view, direction, and location of the virtual camera perspective may be removed from the scene data.
- FIG. 2C illustrates an example result after further vertex shaders are applied to the scene data, wherein the further vertex shaders comprise graphical manipulations which connect the vertices of virtual objects into a mesh of polygons. As shown, the vertices 203 are connected into a mesh of polygons 204.
- the vertex shaders also interpolate, across a polygon 204, the virtual optical characteristics from the polygon 204's vertices.
- the end result of the application of the one or more vertex shaders is shown in FIG. 2C: the scene data is processed into several polygons 204 with virtual optical characteristics located along a virtual spatial coordinate system 202.
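- As a rough illustration of the vertex-shader stage of FIGS. 2A-C, the sketch below places model-space vertices into a shared coordinate system expressed relative to the virtual camera. The translation-only transform is a deliberate simplification (a fuller implementation would also apply rotation and projection), and the function and parameter names are hypothetical.

```python
def vertex_shader(model_vertices, object_position, camera_position):
    placed = []
    for vx, vy, vz in model_vertices:
        # Place the vertex in the virtual environment, then express it
        # relative to the virtual camera perspective point.
        wx = vx + object_position[0]
        wy = vy + object_position[1]
        wz = vz + object_position[2]
        placed.append((wx - camera_position[0], wy - camera_position[1], wz - camera_position[2]))
    return placed

cube_vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # a subset of cube corners
print(vertex_shader(cube_vertices, object_position=(2, 0, 5), camera_position=(0, 0, -1)))
```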
- FIGS. 2D-F illustrate a rasterization process as applied to the several polygons 204.
- FIG. 2D illustrates the imposition of a raster grid 205 within the virtual environment, and the projection of polygons 204 onto the raster grid 205.
- a raster grid may be a two dimensional grid that is perpendicular to the viewing direction of the virtual camera.
- the raster grid 205 is a virtual object represented in the scene data.
- polygons 204 are flattened along the direction the virtual camera is facing and slid toward the virtual camera perspective point until the flattened polygons 206 intersect with the raster grid 205.
- the result is shown in FIG. 2D, wherein the flattened polygons 206 are visible on the raster grid 205.
- While these actions are illustrated in FIGS. 2A-F in a visual sense for better understanding by the reader, the illustrated actions are all, in a more precise sense, logical operations performed by a processor (e.g., processor 103) on scene data (or copies of the scene data) stored in memory. The logical operations corresponding to a given graphical manipulation illustrated herein are understood by one of skill in the art.
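- The logical operation of flattening a polygon onto the raster grid of FIG. 2D might look like the sketch below, under the assumptions of an axis-aligned grid and a simple orthographic projection; grid_size and world_extent are illustrative parameters, not values from the disclosure.

```python
def flatten_polygon(polygon_3d, grid_size=8, world_extent=4.0):
    """Drop depth and express a polygon's vertices in raster-grid coordinates."""
    scale = grid_size / world_extent
    flattened = []
    for x, y, z in polygon_3d:
        # The z component is discarded (flattened along the viewing direction);
        # x and y are rescaled so the polygon lands on the raster grid.
        flattened.append((x * scale, y * scale))
    return flattened

triangle = [(0.5, 0.5, 3.0), (3.0, 0.8, 5.0), (1.5, 3.2, 4.0)]
print(flatten_polygon(triangle))
```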
- FIG. 2E illustrates the application of one or more fragment shaders to the flattened polygons 206 and raster grid 205.
- Fragment shaders may be understood in this context to populate fragments within a raster grid with virtual optical characteristics associated with objects in a virtual environment. In the disclosed example, this is accomplished by writing the virtual optical characteristics of the flattened polygons 206 to the fragments 207 which they intersect.
- each flattened polygon 206 may have optical characteristics that were mapped to them based on optical attributes included within the virtual cube object 201.
- polygons 206 may have received their optical attributes at an earlier stage in the rendering pipeline, such as where simulated light from virtual lighting source objects may have been introduced into the virtual environment when vertex shaders were being applied.
- one or more points within the fragment 207 are sampled to determine which flattened polygon 206 intersects the fragment 207.
- the optical characteristics of the intersecting polygon(s) 206 may be written to the fragment 207 via one or more logical operations.
- each of fragments 207 are shaded according to the virtual optical characteristics of the flattened polygon 206 that intersects with a point at the center of each fragment 207.
- This can be seen by the jagged edge that is formed by an entire fragment 207 displaying the optical characteristics of one flattened polygon 206, even in cases (such as fragment 207A) where multiple flattened polygons 206 cover a fragment 207.
- This jagged edge visual artifact is an example of aliasing, and, as described above and in relation to FIGS. 6A-B below, various methods of anti-aliasing may be employed to soften such jagged edge visual artifacts.
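- The center-point sampling that produces this jagged edge can be sketched as follows, with a vertical half-plane standing in for a flattened polygon and single-character strings standing in for virtual optical characteristics; the function name and grid size are illustrative.

```python
def shade_fragments(grid_w, grid_h, edge_x, left_color, right_color):
    framebuffer = []
    for fy in range(grid_h):
        row = []
        for fx in range(grid_w):
            center_x = fx + 0.5                  # one sample point at the fragment center
            row.append(left_color if center_x < edge_x else right_color)
        framebuffer.append(row)
    return framebuffer

# A polygon edge at x = 2.3 lands inside a fragment, but each fragment can only
# show one color, so the boundary snaps to the fragment grid (aliasing).
for row in shade_fragments(5, 2, 2.3, "A", "B"):
    print(row)
```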
- FIG. 2F illustrates the final result of both the rasterization process and the rendering pipeline as a whole.
- fragments 207 are converted into pixel data which is written to framebuffer 208.
- this conversion is done by processor 103 simply writing the optical characteristics stored at fragment 207 to a corresponding location in framebuffer 208.
- Framebuffer 208 is in turn driven to a display device 105, and display device 105 in turn presents an assortment of elements to a user, wherein the elements have real optical characteristics and orientation approximating the virtual optical characteristics and orientation of the framebuffer 208. The real optical characteristics and orientation of the elements thus approximate the virtual environment as viewed from the virtual camera perspective.
- FIG. 3 illustrates an example graphical application 300 generating a real-time user experience.
- the graphical application 300 employs an embodiment of real-time rendering pipeline 131.
- the graphical application 300 is started. In some examples, this occurs where a user provides input indicating that graphical application 300 should be run.
- the executable instructions and data associated with the graphical manipulations within real-time rendering pipeline 131 may be loaded into a faster portion (e.g., RAM, cache) of machine readable media 101. This avoids later delays during the execution of graphical application 300 that may otherwise be needed to read the associated data and/or instructions from a slower portion (e.g., ROM, flash) of machine readable media 101.
- the scene data is updated. In some examples, this occurs where processor 103 copies scene data into a faster portion of machine readable media 101 (e.g., processor cache, RAM) in preparation for starting an iteration of real-time rendering pipeline 131. In some examples, this step is only necessary when the scene data has changed or when the graphical application is first initialized.
- real-time rendering pipeline 131 is a subset of instructions within graphical application 300. Graphical manipulations are described herein with reference to definitions and illustrations disclosed with reference to FIGS. 2A-E above, however other examples of real-time rendering pipeline 131 may apply portions of the disclosed graphical manipulation steps in different ways, and/or may employ the disclosed graphical manipulation steps in any order, and/or may employ one or more of the disclosed graphical manipulation steps in parallel across multiple processors, and/or may employ one or more graphical manipulation steps multiple times, and/or may employ the disclosed graphical manipulation steps in a looped fashion, and/or may employ graphical manipulation steps not disclosed herein (e.g. tessellation, depth testing, stencil testing).
- in other examples, other types of rendering pipelines (e.g., offline rendering pipelines, interactive-time rendering pipelines, jittered rendering pipelines) may be employed in place of real-time rendering pipeline 131 to provide a different user experience.
- At step 303, the first step of real-time rendering pipeline 131, one or more vertex shaders are applied to the copy of scene data. In some examples, this occurs where the one or more vertex shaders convert vertices within the copy of scene data into polygons placed within a virtual environment, wherein the polygons have associated virtual optical characteristics. Vertex shaders are described in more detail above with reference to FIGS. 2A-C.
- At step 304, polygons within the scene data are rasterized. In some examples, this occurs where the polygons are flattened and imposed over a raster grid. This step in the rasterization process is described in more detail above with reference to FIG. 2D.
- one or more fragment shaders are applied to the flattened polygons and raster grid. In some examples, this occurs where the one or more fragment shaders populate fragments within the raster grid with the virtual optical characteristics of flattened polygons imposed over the fragments. Fragment shaders are described in more detail above with reference to FIG. 2E.
- the fragments are written to the framebuffer. In some examples, this occurs where the virtual optical characteristics populating the fragments are converted into pixel data and then written to the framebuffer. This step in the rasterization process is described in more detail above with reference to FIG. 2F.
- Step 306 completes the real-time rendering pipeline 131 subset of instructions in the illustrated example of graphical application 300, however graphical application 300 thus far may have only displayed a single image to the user.
- other graphical manipulations may be applied directly to the fragments and/or framebuffer before the framebuffer is output to a display device.
- when pixel data for fragments is written to the framebuffer, the pixel data may be accumulated with previous pixel data that may already be present in the framebuffer as a result of a previous iteration of a rendering pipeline. This accumulation operation results in a blurring of pixel data across multiple iterations of a rendering pipeline.
- the framebuffer may be output to a display device before an iteration of the real-time rendering pipeline 131 has completed (e.g., the processor 103 may output a partially updated framebuffer 208 where a determined maximum amount of time has elapsed since the previous output of the framebuffer 208 to display device 105, wherein the determined maximum amount of time correlates with a determined minimum framerate).
- the framebuffer may be left as it is to await later iterations of a rendering pipeline before being output to a display device.
- graphical application 300 checks for input from the user. Input from a user may be in the form of machine-readable signals generated by a peripheral device (e.g., mouse, keyboard, phone touchscreen, phone button).
- where no user input is found at step 307, graphical application 300 returns to step 302 to begin another iteration of real-time rendering pipeline 131.
- where all or a portion of the copy of scene data created at this iteration of step 302 does not differ from all or a portion of a previous copy of scene data created at a previous iteration of step 302 (e.g., one or more of the objects, lighting sources, camera perspective, etc. within the scene data has not changed), graphical application 300 may skip the scene data update at step 302 and proceed through another iteration of real-time rendering pipeline 131, rendering the existing scene data.
- graphical application 300 may avoid additional iterations of the real-time rendering pipeline 131 altogether where the scene data has not changed since the previous iteration.
- graphical application 300 may be simultaneously running another application, and/or subset of instructions within graphical application 300, which manipulates the scene data. These changes to the scene data may be reflected in the copy of scene data created at step 302, even when the user has not provided input.
- a physics engine may be running simultaneously, wherein the scene data is changed to simulate passive physical forces on the virtual objects in the virtual environment (e.g., a virtual cube object moves in the negative y direction to simulate a gravitational force being applied to the virtual cube object).
- where user input is found at step 307, graphical application 300 continues to step 308.
- At step 308, the user input is checked to see if the user indicated a desire to stop graphical application 300. Where the user did input a command to terminate operation of graphical application 300, graphical application 300 continues to step 309.
- At step 309, graphical application 300 is terminated and processor 103 ceases executing instructions associated with graphical application 300. Where the user did not input a command to terminate operation of graphical application 300, graphical application 300 continues to step 310.
- At step 310, changes to the scene data are calculated.
- step 310 comprises converting user inputs into physical forces which are applied in the virtual environment. For instance, a user may input that they wish to shoot a virtual bullet at a virtual cube object.
- graphical application 300 may create a moving virtual bullet object in the scene data, and then run the scene data through a physics engine which simulates motion of the virtual bullet object and its collision with the virtual cube object based on virtual physical characteristics (e.g., mass, velocity, volume) of the virtual bullet object and of the virtual cube object.
- these changes are applied to scene data directly because the physics engine manipulates scene data, rather than manipulating a copy of scene data.
- Real-time rendering pipeline 131 disclosed above manipulates a copy of scene data in order to avoid affecting other processes, such as a physics engine, that may take inputs from the scene data.
- Graphical application 300 then returns to step 302, wherein it creates a new copy of the scene data in preparation for the next iteration of real-time rendering pipeline 131.
- the scene data is copied into a faster portion of machine readable media 101 (e.g. RAM, cache).
- Graphical application 300 then proceeds back through real-time rendering pipeline 131 to render the new copy of scene data into images that the user can perceive via display device 105.
- the disclosed loop from steps 302-308, to step 310, and back to step 302, or alternatively the loop from steps 302-307 and back to step 302, operates to create an interactive real-time user experience, wherein the user is able to perceive a virtual environment as it changes in response to inputs provided by the user.
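- The loop described for FIG. 3 can be condensed into the following hedged sketch; the dictionary scene data, the queued-input stand-in for user input, and the string "frames" are illustrative assumptions rather than the disclosed implementation, and the step numbers in the comments refer to the steps above.

```python
import copy

def real_time_pipeline(scene_copy):
    # Stand-in for steps 303-306 (vertex shading, rasterization, fragment shading,
    # writing the framebuffer).
    return f"frame(cube_y={scene_copy['cube_y']:.1f})"

def run_graphical_application(queued_inputs):
    scene_data = {"cube_y": 10.0}
    frames = []
    pending = list(queued_inputs)            # stand-in for polling an input device
    while True:
        scene_copy = copy.deepcopy(scene_data)              # step 302: copy/update scene data
        frames.append(real_time_pipeline(scene_copy))
        user_input = pending.pop(0) if pending else None    # step 307: check for input
        if user_input is None:
            continue                                        # no input: iterate the pipeline again
        if user_input == "quit":                            # steps 308-309: terminate
            break
        scene_data["cube_y"] -= 1.0                         # step 310: e.g. gravity on a virtual cube
    return frames

# The example input queue ends with "quit", so the loop terminates.
print(run_graphical_application([None, "move", "move", "quit"]))
```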
- FIG. 4 illustrates an example graphical application 400 employing an in-game snapshot functionality.
- the graphical application 400 is employing an embodiment of a real-time rendering pipeline 131 simultaneously with an interactive-time rendering pipeline 133.
- the graphical application 400 is started. In some examples, this occurs where a user provides input indicating that graphical application 400 should be run.
- the executable instructions and data associated with the graphical manipulations within real-time rendering pipeline 131 and interactive-time rendering pipeline 133 may be loaded into a faster portion (e.g., RAM, cache) of machine readable media 101.
- the transfer of the instruction sets underlying graphical manipulations for both pipelines to a faster portion of machine readable media 101 allows either of the pipelines' associated data and/or instructions to be readily passed to processor 103 when switching between rendering pipelines at a subsequent iteration. This avoids later delays during the execution of graphical application 400 that may otherwise be needed to read the associated data and/or instructions from a slower portion (e.g., ROM, flash) of machine readable media 101.
- the scene data is updated. In some examples, this occurs where processor 103 copies scene data into a faster portion of machine readable media 101 (e.g., processor cache, RAM) in preparation for starting an iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133. In some examples, this step is only necessary when the scene data has changed or when the graphical application is first initialized.
- the snapshot flag is checked.
- the snapshot flag is an indication stored in memory regarding whether the user has input a command to create an in-game snapshot.
- An in-game snapshot may refer to a higher quality rendering of the virtual environment, e.g. a rendering where virtual objects are displayed with higher resolution, more realistically simulated lighting, and/or smoother surfaces than a previous rendering.
- this higher quality rendering can only be accomplished with more elaborate graphical manipulations that take more processing time, resulting in iterations of the associated rendering pipeline taking longer to complete than iterations of a rendering pipeline outputting a lower quality rendering.
- the resulting outputs to the framebuffer may not occur fast enough to generate a smooth framerate.
- a smooth framerate is sometimes described in the industry as thirty frames per second, however, what is considered a smooth framerate can vary widely between industry and intended viewing user.
- where step 403 is being executed for the first time since starting graphical application 400, the snapshot flag is not set. However, later steps in graphical application 400 may enable a user to set the snapshot flag such that subsequent iterations of step 403 may be executed where the snapshot flag is set. Where the snapshot flag is not set, the processor continues through an example of real-time rendering pipeline 131.
- real-time rendering pipeline 131 is a subset of instructions within graphical application 400. Graphical manipulations are described herein with reference to definitions and illustrations disclosed with reference to FIGS. 2A-F above, however other examples of real-time rendering pipeline 131 may apply portions of the disclosed graphical manipulation steps in different ways, and/or may employ the disclosed graphical manipulation steps in any order, and/or may employ one or more of the disclosed graphical manipulation steps in parallel across multiple processors, and/or may employ one or more graphical manipulation steps multiple times, and/or may employ the disclosed graphical manipulation steps in a looped fashion, and/or may employ graphical manipulation steps not disclosed herein (e.g. tessellation, depth testing, stencil testing).
- in other examples, other types of rendering pipelines (e.g., offline rendering pipelines, interactive-time rendering pipelines, jittered rendering pipelines) may be employed in place of real-time rendering pipeline 131 to provide a different user experience.
- At step 404, the first step of real-time rendering pipeline 131, one or more vertex shaders are applied to the copy of scene data.
- At step 405, polygons within the scene data are rasterized. In some examples, this occurs where the polygons are flattened and imposed over a raster grid. This step in the rasterization process is described in more detail above with reference to FIG. 2D.
- one or more fragment shaders are applied to the flattened polygons and raster grid. In some examples, this occurs where the one or more fragment shaders populate fragments within the raster grid with the virtual optical characteristics of flattened polygons imposed over the fragments. Fragment shaders are described in more detail above with reference to FIG. 2E.
- the fragments are written to the framebuffer. In some examples, this occurs where the virtual optical characteristics populating the fragments are converted into pixel data and then written to the framebuffer. This step in the rasterization process is described in more detail above with reference to FIG. 2F.
- Step 407 completes the real-time rendering pipeline 131 subset of instructions in the illustrated example of graphical application 400, however graphical application 400 thus far may have only displayed a single image to the user.
- other graphical manipulations may be applied directly to the fragments and/or framebuffer before the framebuffer is output to a display device.
- when pixel data for fragments is written to the framebuffer, the pixel data may be accumulated with previous pixel data that may already be present in the framebuffer as a result of a previous iteration of a rendering pipeline. This accumulation operation results in a blurring of pixel data across multiple iterations of a rendering pipeline.
- the framebuffer may be output to a display device before an iteration of the real-time rendering pipeline 131 has completed (e.g. the processor 103 may output a partially updated framebuffer 208 where a determined maximum amount of time has elapsed since the previous output of the framebuffer 208 to display device 105, wherein the determined maximum amount of time correlates with a determined minimum framerate).
- the framebuffer may be left as it is to await later iterations of the rendering pipeline before being output to a display device.
- graphical application 400 checks for input from a user.
- Input from a user may be in the form of machine-readable signals generated by a peripheral device (e.g., mouse, keyboard, phone touchscreen, phone button).
- where no user input is found at step 408, graphical application 400 returns to step 402 to begin another iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133.
- where all or a portion of the copy of scene data created at this iteration of step 402 does not differ from all or a portion of a previous copy of scene data created at a previous iteration of step 402 (e.g., one or more of the objects, lighting sources, camera perspective, etc. within the scene data has not changed), graphical application 400 may skip the scene data update at step 402 and proceed through another iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133, rendering the existing scene data.
- graphical application 400 may avoid additional iterations of either the real-time rendering pipeline 131 or the interactive-time rendering pipeline 133 altogether where the scene data has not changed since the previous iteration.
- graphical application 400 may be simultaneously running another application, and/or subset of instructions within graphical application 400, which cause changes to the scene data.
- a physics engine may be running simultaneously, wherein the scene data is changed to simulate passive physical forces on the virtual objects in the virtual environment (e.g., a virtual cube object moves in the negative y direction to simulate a gravitational force).
- Where input from the user is detected at step 408, graphical application 400 continues to step 409.
- At step 409, the user input is checked to see if the user indicated a desire to stop graphical application 400. Where the user did input a command to terminate operation of graphical application 400, graphical application 400 continues to step 410. At step 410, graphical application 400 is terminated and processor 103 ceases executing instructions associated with graphical application 400. Where the user did not input a command to terminate operation of graphical application 400, graphical application 400 continues to step 411.
- At step 411, the user input is checked to see if it included a command to set a snapshot flag. Where the user input included a command to set a snapshot flag, graphical application 400 proceeds to step 412. Where the user input did not include a command to set a snapshot flag, graphical application 400 proceeds directly to step 413.
- At step 412, the snapshot flag is set to indicate that the user commanded a snapshot be taken.
- In some examples, the framebuffer is also replaced at step 412 with uniform pixel data (e.g., all fragments in the framebuffer indicate the same color for output to display device 105).
- This flashing of the framebuffer may create a "snapping of a camera" effect as the user will experience a brief flash of one color while the displayed pixels repopulate with virtual optical characteristics output by subsequent iterations of interactive-time rendering pipeline 133 (described in further detail below at steps 414-418).
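A minimal sketch of the snapshot handling at steps 411-412 follows. The AppState structure, the flash color, and the function names are assumptions made for illustration; the disclosure does not prescribe a particular data layout.

```cpp
#include <vector>

struct Pixel { float r, g, b; };

// Application state assumed for illustration only.
struct AppState {
    bool snapshotFlag = false;
    std::vector<Pixel> framebuffer;
};

// If the user's input asked for a snapshot, set the flag and "flash" the
// framebuffer to a single uniform color so the display briefly shows one
// color while higher-quality results repopulate it.
void handleSnapshotCommand(AppState& state, bool userRequestedSnapshot) {
    if (!userRequestedSnapshot) {
        return;                           // proceed directly to the next step
    }
    state.snapshotFlag = true;            // later iterations use the
                                          // interactive-time pipeline
    for (Pixel& p : state.framebuffer) {
        p = Pixel{1.0f, 1.0f, 1.0f};      // uniform pixel data (white flash)
    }
}
```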
- Graphical application 400 then proceeds to step 413.
- At step 413, changes to the scene data are calculated. In some examples, step 413 comprises converting user inputs into physical forces which are applied in the virtual environment. For instance, a user may input that they wish to shoot a virtual bullet at a virtual cube object.
- graphical application 400 may create a moving virtual bullet object in the scene data, and then run the scene data through a physics engine which simulates motion of the virtual bullet object and its collision with the virtual cube object based on virtual physical characteristics (e.g., mass, velocity, volume) of the virtual bullet object and of the virtual cube object.
- these changes are applied to scene data directly because the physics engine manipulates scene data, rather than manipulating a copy of scene data.
- Real-time rendering pipeline 131 and interactive-time rendering pipeline 133 disclosed above manipulate a copy of scene data in order to avoid affecting other processes, such as a physics engine, that may take inputs from the scene data.
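A highly simplified sketch of the bullet-versus-cube example is shown below. The RigidBody structure and the sphere-based collision test are illustrative assumptions; a production physics engine would be considerably more involved.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

struct RigidBody {
    Vec3 position;
    Vec3 velocity;
    float mass;
    float radius;   // crude bounding volume used for collision testing
};

bool collides(const RigidBody& a, const RigidBody& b) {
    const float dx = a.position.x - b.position.x;
    const float dy = a.position.y - b.position.y;
    const float dz = a.position.z - b.position.z;
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return dist <= a.radius + b.radius;
}

// One physics step operating on the scene data directly (not a copy): the
// bullet advances along its velocity and is stopped if it hits the cube.
void stepPhysics(RigidBody& bullet, const RigidBody& cube, float dt) {
    bullet.position.x += bullet.velocity.x * dt;
    bullet.position.y += bullet.velocity.y * dt;
    bullet.position.z += bullet.velocity.z * dt;
    if (collides(bullet, cube)) {
        // React to the collision; a fuller engine would resolve the impulse
        // using both objects' masses and velocities.
        bullet.velocity = Vec3{0.0f, 0.0f, 0.0f};
    }
}
```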
- Graphical application 400 then returns to step 402, wherein it creates a new copy of the scene data in preparation for the next iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133.
- processor 103 again checks the snapshot flag at step 403. In some examples, such as where the snapshot flag was set at step 412, the graphical application 400 continues through an example of interactive-time rendering pipeline 133, as opposed to the real-time rendering pipeline 131 described above.
- a distinction between real-time rendering pipeline 131 and interactive-time rendering pipeline 133 in the example of graphical application 400 is that an iteration of real-time rendering pipeline 131 completes in less time than an iteration of interactive-time rendering pipeline 133 completes.
- Where the snapshot flag is set, interactive-time rendering pipeline 133 is run instead of real-time rendering pipeline 131, resulting in graphical application 400 outputting higher quality images at a lower framerate until the snapshot flag is unset.
- a person of skill in the art will appreciate that what is considered a real-time rendering pipeline 131 in one example of graphical application 400 may be considered an interactive-time rendering pipeline 133 in another example of graphical application 400, and vice versa.
- the graphical application 400 continues from step 401 into interactive-time rendering pipeline 133.
- interactive-time rendering pipeline 133 is a subset of instructions within graphical application 400. Graphical manipulations are described herein with reference to definitions and illustrations disclosed with reference to FIGS. 2A-F above; however, other examples of interactive-time rendering pipeline 133 may apply portions of the disclosed graphical manipulation steps in different ways, and/or may employ the disclosed graphical manipulation steps in any order, and/or may employ one or more of the disclosed graphical manipulation steps in parallel across multiple processors, and/or may employ one or more graphical manipulation steps multiple times, and/or may employ the disclosed graphical manipulation steps in a looped fashion, and/or may employ graphical manipulation steps not disclosed herein (e.g. tessellation, depth testing, stencil testing).
- In some examples, one or more of the disclosed graphical manipulation steps may additionally be employed within other types of rendering pipelines (e.g., offline rendering pipelines, real-time rendering pipelines, jittered rendering pipelines).
- step 414 the first step of interactive-time rendering pipeline 133, one or more vertex shaders are applied to the copy of scene data. In some examples, this occurs where the one or more vertex shaders convert vertices within the copy of scene data into polygons placed within a virtual environment, wherein the polygons have associated virtual optical characteristics. Vertex shaders are described in more detail above with reference to FIGS. 2A-C.
- step 414 is identical to step 404, except that step 414 is taken within interactive-time rendering pipeline 133 while step 404 is taken within real-time rendering pipeline 131. In other examples, steps 414 and 404 may differ.
- At step 415, polygons within the scene data are rasterized. In some examples, this occurs where the polygons are flattened and imposed over a raster grid. This step in the rasterization process is described in more detail above with reference to FIG. 2D.
- step 415 is identical to step 405, except that step 415 is taken within interactive-time rendering pipeline 133 while step 405 is taken within real-time rendering pipeline 131. In other examples, steps 415 and 405 may differ.
- At step 416, path tracing graphical manipulations are employed to simulate lighting on the flattened polygons.
- Path tracing is described herein (but not illustrated) to serve as an example of a complex graphical manipulation technique that may take longer for processor 103 to complete than other graphical manipulation techniques.
- Because real-time rendering pipeline 131 and interactive-time rendering pipeline 133 are otherwise identical in the disclosed example of graphical application 400, the path tracing graphical manipulations carried out in step 416 result in interactive-time rendering pipeline 133 taking more time to complete than real-time rendering pipeline 131.
- In some examples, path tracing comprises simulating beams of light by following one or more lines from the virtual camera perspective through the flattened polygons which were projected onto the raster grid, and then drawing one or more additional lines from the surfaces of the non-flattened versions of the polygons. Some of the additional lines intersect a virtual light source in the virtual environment. Where an additional line does intersect a virtual light source, virtual optical characteristics of the virtual light source and of the non-flattened polygon are used by processor 103 to calculate new virtual optical characteristics of the flattened polygon, wherein these new virtual optical characteristics may approximate the behavior of real light better than the previous virtual optical characteristics of the flattened polygon.
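The following sketch shows only the direct-lighting portion of such a calculation: an additional line is drawn from the surface point toward a light source, and the light's characteristics are combined with the polygon's. It is an assumption-laden simplification (no visibility test, no randomized secondary bounces) rather than a full path tracer, and all structure names are invented for illustration.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    const float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

struct Surface {
    Vec3 position;   // point on the non-flattened polygon hit by the camera ray
    Vec3 normal;     // surface normal at that point
    Vec3 albedo;     // virtual optical characteristics of the polygon
};

struct PointLight {
    Vec3 position;
    Vec3 intensity;  // virtual optical characteristics of the light source
};

// Estimate new virtual optical characteristics for a flattened polygon by
// drawing an additional line from the surface toward the light and combining
// the light's and the polygon's characteristics.
Vec3 shadeDirect(const Surface& s, const PointLight& light) {
    const Vec3 toLight = normalize(sub(light.position, s.position));
    const float cosine = std::fmax(0.0f, dot(s.normal, toLight));
    return {s.albedo.x * light.intensity.x * cosine,
            s.albedo.y * light.intensity.y * cosine,
            s.albedo.z * light.intensity.z * cosine};
}
```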
- At step 417, one or more fragment shaders are applied to the flattened polygons and raster grid. In some examples, this occurs where the one or more fragment shaders populate fragments within the raster grid with the virtual optical characteristics of flattened polygons imposed over the fragments. Fragment shaders are described in more detail above with reference to FIG. 2E.
- step 417 is identical to step 406, except that step 417 is taken within interactive-time rendering pipeline 133 while step 406 is taken within real-time rendering pipeline 131. In other examples, steps 417 and 406 may differ.
- At step 418, the fragments are written to the framebuffer. In some examples, this occurs where the virtual optical characteristics populating the fragments are converted into pixel data and then written to the framebuffer. This step in the rasterization process is described in more detail above with reference to FIG. 2F.
- step 418 is identical to step 407, except that step 418 is taken within interactive-time rendering pipeline 133 while step 407 is taken within real-time rendering pipeline 131. In other examples, steps 418 and 407 may differ.
- Step 418 completes the interactive-time rendering pipeline 133 subset of instructions in the illustrated example of graphical application 400; however, graphical application 400 thus far may have only displayed a single image to the user.
- other graphical manipulations may be applied directly to the fragments and/or framebuffer before the framebuffer is output to a display device.
- In some examples, when pixel data for fragments are written to the framebuffer, the pixel data may be accumulated with previous pixel data that may have already been present in the framebuffer as a result of a previous iteration of the rendering pipeline. This accumulation operation results in a blurring of pixel data across multiple iterations of a rendering pipeline.
- In some examples, the framebuffer may be output to a display device before an iteration of the interactive-time rendering pipeline 133 has completed (e.g., the processor 103 may output a partially updated framebuffer 208 where a determined maximum amount of time has elapsed since the previous output of the framebuffer to display device 105, wherein the determined maximum amount of time correlates with a determined minimum framerate).
- the framebuffer may be left as it is to await later iterations of the rendering pipeline before being output to a display device.
- graphical application 400 then proceeds back through steps 408-413, back to 402, and then back through another iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133 depending on whether the snapshot flag was previously set.
- the snapshot flag may automatically be turned back off by a separate set of instructions which turn the snapshot flag off after a predetermined period of time.
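One possible shape for the per-iteration pipeline selection and the timed reset of the snapshot flag is sketched below. The function names and the Scene placeholder are stand-ins invented for illustration; they do not come from the disclosure.

```cpp
#include <chrono>

// Stand-ins for the scene copy and the two pipelines described above.
struct Scene {};
Scene copySceneData();
void runRealTimePipeline(const Scene& scene);
void runInteractiveTimePipeline(const Scene& scene);

// Choose a pipeline per iteration based on the snapshot flag, and clear the
// flag automatically after a predetermined period of time has elapsed.
void renderLoopOnce(bool& snapshotFlag,
                    const std::chrono::steady_clock::time_point& snapshotSetAt,
                    std::chrono::seconds snapshotDuration) {
    const Scene scene = copySceneData();            // copy scene data (step 402)
    if (snapshotFlag) {
        runInteractiveTimePipeline(scene);          // higher quality, lower framerate
        if (std::chrono::steady_clock::now() - snapshotSetAt >= snapshotDuration) {
            snapshotFlag = false;                   // revert to real-time rendering
        }
    } else {
        runRealTimePipeline(scene);                 // smoother framerate
    }
}
```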
- The end result of the disclosed example of graphical application 400 is the creation of an interactive real-time user experience, wherein the user may indicate that a snapshot should be taken. This option to take a snapshot adds flexibility to the user experience, as the user may selectively decide when to trade rendering quality for framerate.
- FIG. 5 illustrates an example graphical application 500 generating a real-time user experience.
- the graphical application 500 employs an embodiment of jittered rendering pipeline 134.
- At step 501, graphical application 500 is started. In some examples, this occurs where a user provides input indicating that graphical application 500 should be run.
- the executable instructions and data associated with the graphical manipulations within jittered rendering pipeline 134 may be loaded into a faster portion (e.g., RAM, cache) of machine readable media 101. This avoids later delays during the execution of graphical application 500 that may otherwise be needed to read the associated data and/or instructions from a slower portion (e.g., ROM, flash) of machine readable media 101.
- At step 502, the scene data is read. In some examples, this occurs where processor 103 copies scene data into a faster portion of machine readable media 101 (e.g., processor cache, RAM) in preparation for starting an iteration of jittered rendering pipeline 134. In some examples, this step is only necessary when the scene data has changed or when the graphical application is first initialized.
- jittered rendering pipeline 134 is a subset of instructions within graphical application 500. Graphical manipulations are described herein with reference to definitions and illustrations disclosed with reference to FIGS. 2A-E above, however other examples of jittered rendering pipeline 134 may apply portions of the disclosed graphical manipulation steps in different ways, and/or may employ the disclosed graphical manipulation steps in any order, and/or may employ one or more of the disclosed graphical manipulation steps in parallel across multiple processors, and/or may employ one or more graphical manipulation steps multiple times, and/or may employ the disclosed graphical manipulation steps in a looped fashion, and/or may employ graphical manipulation steps not disclosed herein (e.g. tessellation, depth testing, stencil testing).
- In some examples, one or more of the disclosed graphical manipulation steps may additionally be employed within other types of rendering pipelines (e.g., offline rendering pipelines, interactive-time rendering pipelines, real-time rendering pipelines).
- jittered rendering pipeline 134 may achieve framerates that justify categorization of jittered rendering pipeline 134 additionally as a real-time rendering pipeline 131, an offline rendering pipeline 132, or an interactive-time rendering pipeline 133.
- Jittered rendering pipeline 134 is not distinguished from other rendering pipelines by its framerate, but is instead distinguished by its employment of graphical manipulation steps which apply offsets to virtual objects within scene data (further described below).
- At step 503, the first step of jittered rendering pipeline 134, one or more vertex shaders are applied to the copy of scene data.
- Next, the vertices transformed at step 503 within the scene data are jittered. In some examples, this occurs where processor 103 applies an offset to the spatial positions indicated for one or more of the vertices within the copy of scene data. In some examples, the offset is applied only to those vertices within some volume of the virtual environment wherein vertices may potentially be within the field of view of a virtual camera perspective. In some examples, the offset is randomized, wherein the vertex is moved to a random point within a determined volume around the vertex's original position. In some examples, the offset is only in directions which are perpendicular to a direction that the virtual camera perspective is facing.
- In some examples, the offsets are limited to a determined area in a 2D-plane normal to a line formed between a virtual camera perspective point and the vertex.
- In some examples, the 2D projective position of the vertex is offset by a randomized amount in the range of -1 to 1 in the X and Y directions.
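A minimal sketch of randomized vertex jitter constrained to the plane perpendicular to the viewing direction is given below. The vector helpers, the choice of random number generator, and the jitter radius are assumptions made for illustration.

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Offset a vertex by a random amount within 'radius', restricted to the plane
// perpendicular to the direction the virtual camera is facing, so the jitter
// changes which fragments the vertex covers without moving it toward or away
// from the camera.
Vec3 jitterVertex(Vec3 vertex, Vec3 viewDir, float radius, std::mt19937& rng) {
    const Vec3 forward = normalize(viewDir);
    // Build two axes spanning the plane perpendicular to the view direction.
    const Vec3 up = std::fabs(forward.y) < 0.99f ? Vec3{0.0f, 1.0f, 0.0f}
                                                 : Vec3{1.0f, 0.0f, 0.0f};
    const Vec3 right = normalize(cross(forward, up));
    const Vec3 planeUp = cross(right, forward);

    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
    const float u = dist(rng) * radius;
    const float v = dist(rng) * radius;
    return {vertex.x + u * right.x + v * planeUp.x,
            vertex.y + u * right.y + v * planeUp.y,
            vertex.z + u * right.z + v * planeUp.z};
}
```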
- In some examples, the offset is applied to the scene data itself, rather than to a copy.
- In other examples, the offset is applied to a copy of scene data that was created at step 502 in advance of the current iteration of jittered rendering pipeline 134.
- In examples where the offset is applied to the scene data itself, during a subsequent iteration processor 103 removes the previous offset from the scene data instead of adding a second offset. This causes the offset vertices to return to their original spatial positions in the scene data before continuing to run the remaining steps within the current iteration of jittered rendering pipeline 134.
- In examples where the offset is applied to a copy of scene data, the processor 103 does not apply a second offset to the same vertex within a second copy of scene data during the current iteration of jittered rendering pipeline 134. Not applying the second offset in the second iteration of jittered rendering pipeline 134 causes the vertex to appear to return to its non-offset position as designated in the scene data, which has been copied from but not altered during either the previous or current iteration of jittered rendering pipeline 134.
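The bookkeeping for the variant in which offsets are written into the scene data itself might look like the following sketch; the JitteredScene structure and function names are assumptions made for illustration.

```cpp
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Scene data variant where offsets are written into the scene itself, so the
// previously applied offsets must be remembered and removed on the next
// iteration before (optionally) applying new ones.
struct JitteredScene {
    std::vector<Vec3> vertices;                       // authoritative positions
    std::unordered_map<std::size_t, Vec3> applied;    // offsets applied last iteration
};

// Undo last iteration's offsets so vertices return to their designated
// spatial positions before the remaining pipeline steps run.
void removePreviousOffsets(JitteredScene& scene) {
    for (const auto& [index, offset] : scene.applied) {
        scene.vertices[index].x -= offset.x;
        scene.vertices[index].y -= offset.y;
        scene.vertices[index].z -= offset.z;
    }
    scene.applied.clear();
}

// Record and apply a new offset for a vertex during the current iteration.
void applyOffset(JitteredScene& scene, std::size_t index, Vec3 offset) {
    scene.vertices[index].x += offset.x;
    scene.vertices[index].y += offset.y;
    scene.vertices[index].z += offset.z;
    scene.applied[index] = offset;
}
```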
- At step 505, polygons within the scene data are rasterized. In some examples, this occurs where the polygons are flattened and imposed over a raster grid. This step in the rasterization process is described in more detail above with reference to FIG. 2D.
- In some examples, the jittering offset is instead applied after step 505, to the positions of one or more vertices of a flattened polygon, with the offset constrained to a determined area lying on the raster grid.
- Next, one or more fragment shaders are applied to the flattened polygons and raster grid. In some examples, this occurs where the one or more fragment shaders populate fragments within the raster grid with the virtual optical characteristics of flattened polygons imposed over the fragments. Fragment shaders are described in more detail above with reference to FIG. 2E.
- At step 507, the fragments are written to the framebuffer. In some examples, this occurs where the virtual optical characteristics populating the fragments are converted into pixel data and then written to the framebuffer. This step in the rasterization process is described in more detail above with reference to FIG. 2F.
- Step 507 completes the jittered rendering pipeline 134 subset of instructions in the illustrated example of graphical application 500; however, graphical application 500 thus far may have only displayed a single image to the user.
- other graphical manipulations may be applied directly to the fragments and/or framebuffer before the framebuffer is output to a display device.
- In some examples, when pixel data for fragments are written to the framebuffer, the pixel data may be accumulated with previous pixel data that may have already been present in the framebuffer as a result of a previous iteration of a rendering pipeline. This accumulation operation results in a blurring of pixel data across multiple iterations of a rendering pipeline.
- the framebuffer may be output to a display device before an iteration of the jittered rendering pipeline 134 has completed (e.g. the processor 103 may output a partially updated framebuffer 208 where a determined maximum amount of time has elapsed since the previous output of the framebuffer 208 to display device 105, wherein the determined maximum amount of time correlates with a determined minimum framerate).
- the framebuffer may be left as it is to await later iterations of a rendering pipeline before being output to a display device.
- jittered rendering pipeline 134 may result in the one or more vertices offset during one or more iterations appearing to shake between the non-offset position of the one or more vertices as defined in scene data and one or more offset positions.
- This shaking effect, as discussed below regarding FIGS. 6-7, may be utilized to achieve visual effects across one or more rendered images.
- At step 508, graphical application 500 checks for input from the user.
- Input from a user may be in the form of machine-readable signals generated by a peripheral device (e.g., mouse, keyboard, phone touchscreen, phone button).
- Where no input from the user is detected, graphical application 500 returns to step 502 to begin another iteration of jittered rendering pipeline 134.
- In some examples, where all or a portion of the copy of scene data created at this iteration of step 502 does not differ from all or a portion of a previous copy of scene data created at a previous iteration of step 502 (e.g., one or more of the objects, lighting sources, camera perspective, etc. within the scene data has not changed), graphical application 500 may skip the scene data update at step 502 and proceed through another iteration of jittered rendering pipeline 134, rendering the existing scene data.
- graphical application 500 may be simultaneously running another application, and/or subset of instructions within graphical application 500, which manipulates the scene data. These changes to the scene data may be reflected in the copy of scene data created at step 502, even when the user has not provided input.
- a physics engine may be running simultaneously, wherein the scene data is changed to simulate passive physical forces on the virtual objects in the virtual environment (e.g., a virtual cube object moves in the negative y direction to simulate a gravitational force).
- Where user input is detected at step 508, graphical application 500 continues to step 509.
- At step 509, the user input is checked to see if the user indicated a desire to stop graphical application 500. Where the user did input a command to terminate operation of graphical application 500, graphical application 500 continues to step 510. At step 510, graphical application 500 is terminated and processor 103 ceases executing instructions associated with graphical application 500. Where the user did not input a command to terminate operation of graphical application 500, graphical application 500 continues to step 511.
- At step 511, changes to the scene data are calculated. In some examples, step 511 comprises converting user inputs into physical forces which are applied in the virtual environment. For instance, a user may input that they wish to shoot a virtual bullet at a virtual cube object.
- graphical application 500 may create a moving virtual bullet object in the scene data, and then run the scene data through a physics engine which simulates motion of the virtual bullet object and its collision with the virtual cube object based on virtual physical characteristics (e.g., mass, velocity, volume) of the virtual bullet object and of the virtual cube object.
- these changes are applied to scene data directly because the physics engine manipulates scene data, rather than manipulating a copy of scene data.
- jittered rendering pipeline 134 disclosed above manipulates a copy of scene data in order to avoid affecting other processes, such as a physics engine, that may take inputs from the scene data.
- Graphical application 500 then returns to step 502, wherein it creates a new copy of the scene data in preparation for the next iteration of jittered rendering pipeline 134.
- the scene data is copied into a faster portion of machine readable media 101 (e.g. RAM, cache).
- Graphical application 500 then proceeds back through jittered rendering pipeline 134 to render the new copy of scene data into images that the user can perceive via display device 105.
- the disclosed loop from steps 502-509, to step 511, and back to step 502, or alternatively the loop from steps 502-508 and back to step 502 operate to jitter vertices within the scene data (or a copy of scene data) between subsequent iterations of a rendering pipeline.
- As each iteration of jittered rendering pipeline 134 is executed, the vertices that appear in the virtual environment jitter about their designated spatial positions, causing, as discussed below with reference to FIGS. 6A-B, edges of polygons to shift within one or more fragments and altering the optical characteristics that are output to the framebuffer.
- In other examples, jittered rendering pipeline 134 may apply step 503 before executing different graphical manipulations.
- For example, the jittering caused by employing step 503 before a path tracing graphical manipulation may be useful for achieving anti-aliasing and/or depth of field effects (discussed in detail below in relation to FIGS. 6-7).
- the jittering of virtual objects affects the collision of one or more lines drawn during the path tracing step between iterations of the rendering pipeline. This jittering may blur the lighting effects simulated by path tracing graphical manipulations across multiple rendered images viewed in series, particularly where progressive rendering techniques are utilized.
- FIGS. 6A-B illustrate how a jittered rendering pipeline may achieve an antialiasing effect.
- In FIG. 6A, virtual cube object 601 is illustrated in its ideal form with straight edges.
- Raster grid 602 is between the virtual camera perspective (approximately the same perspective that the illustration is viewed from) and virtual cube object 601.
- The point 603 of one vertex of virtual cube object 601 and the edges of virtual cube object 601 are illustrated as showing through raster grid 602, but this is only for explanatory purposes; as one of skill in the art will appreciate, and as discussed above, each fragment of raster grid 602 can only take on one set of optical characteristics because it must later map to a single pixel data entry in a framebuffer.
- FIG. 6B illustrates how virtual cube object 601 may appear altered in a second iteration of a jittered rendering pipeline.
- The position of virtual cube object 601's vertex has been offset from a first position at point 603 which it held during a previous iteration of the jittered rendering pipeline.
- Virtual cube object 601's vertex is now located at point 604 in the current iteration of the jittered rendering pipeline.
- The jittering of vertices between iterations of the jittered rendering pipeline has resulted in a softening of the jagged edges shown in raster grid 602 in FIG. 6A. This softening is usually found near the edges of polygons, where the jittering offset may be enough to change which polygon's optical characteristics are written to a given fragment of raster grid 602 in a subsequent iteration of a jittered rendering pipeline.
- In some examples, when writing the second set of virtual optical characteristics associated with the newly intersecting polygon to the framebuffer, the second set of virtual optical characteristics may be accumulated and/or averaged with the previous set of virtual optical characteristics already present in the framebuffer from the previous iteration of a jittered rendering pipeline.
- This progressive rendering technique allows the framebuffer values to be averaged between iterations of the jittered rendering pipeline such that framebuffer positions relating to fragments that are bisected by a polygon edge output a set of virtual optical characteristics that represents the average of the virtual optical characteristics on either side of the polygon edge. Without the application of the jittering offsets, the optical characteristics of one polygon or the other would persist in the framebuffer as long as the scene data remains static. When viewed on a large scale, this creates a smoothing visual effect on edges that were previously jagged due to aliasing.
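One way to realize this averaging is a per-position running mean over the iterations, as in the sketch below; the buffer layout and the incremental-mean formulation are assumptions rather than details from the disclosure.

```cpp
#include <cstdint>
#include <vector>

struct Color { float r, g, b; };

// Accumulation buffer holding, for each framebuffer position, the running
// average of the colors produced across iterations of a jittered pipeline.
struct AccumulationBuffer {
    std::vector<Color> average;
    std::uint32_t samples = 0;   // number of iterations accumulated so far
};

// Fold one iteration's output into the running average. Fragments bisected by
// a polygon edge converge toward the mean of the colors on either side of the
// edge as the jittered samples alternate between the two polygons.
void accumulateIteration(AccumulationBuffer& accum,
                         const std::vector<Color>& iterationOutput) {
    if (accum.average.size() != iterationOutput.size()) {
        accum.average = iterationOutput;   // first iteration (or buffer resize)
        accum.samples = 1;
        return;
    }
    const float n = static_cast<float>(++accum.samples);
    for (std::size_t i = 0; i < accum.average.size(); ++i) {
        accum.average[i].r += (iterationOutput[i].r - accum.average[i].r) / n;
        accum.average[i].g += (iterationOutput[i].g - accum.average[i].g) / n;
        accum.average[i].b += (iterationOutput[i].b - accum.average[i].b) / n;
    }
}
```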
- examples of jittered rendering pipeline 134 may be employed to anti-alias a graphical user experience.
- FIGS. 7A-B illustrate how an image rendered with a depth of field effect generated by a jittered rendering pipeline may compare with an image rendered without a depth of field effect.
- a depth of field effect may be achieved in jittered rendering pipelines where instead of (or in addition to) jittering vertices to achieve anti-aliasing effects, offsets are applied to vertices according to their spatial relation to a focal point 701.
- focal point 701 is a virtual object within the scene data.
- In some examples, as the distance between a virtual object and focal point 701 increases, the magnitude of the offset applied to that virtual object's vertices increases.
- FIG. 7A illustrates a rendering produced by a graphical application not employing any means to create a depth of field effect.
- FIG. 7B illustrates a rendering produced by a graphical application employing a jittered rendering pipeline to achieve a depth of field effect.
- shaded man scene element 702 is near focal point 701, so the rendered image output of shaded man scene element 702 is substantially unchanged between FIGS. 7A-B.
- In FIG. 7B, onlooker scene element 703 and billboard scene element 704 become blurry in the rendering output by the graphical application employing a jittered rendering pipeline based depth of field effect.
- Onlooker scene element 703 and billboard scene element 704 are both significantly further away from focal point 701 than shaded man scene element 702. Therefore, in the disclosed example, at each iteration of the jittered rendering pipeline, onlooker scene element 703 and billboard element 704's vertices are offset by larger distances than shaded man scene element 702's vertices are offset.
- In some examples, scene elements within a predetermined distance of focal point 701, such as shaded man scene element 702, may not be offset at all.
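A sketch of a distance-scaled jitter of this kind is shown below. The in-focus range, the blur scale, and the decision to draw the offset along all three axes are illustrative assumptions.

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

static float distanceBetween(Vec3 a, Vec3 b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Jitter a vertex with a magnitude that grows with its distance from the
// focal point. Vertices within 'inFocusRange' of the focal point are left
// untouched, so scene elements near the focal point stay sharp while distant
// elements blur across iterations.
Vec3 depthOfFieldJitter(Vec3 vertex, Vec3 focalPoint,
                        float inFocusRange, float blurScale,
                        std::mt19937& rng) {
    const float d = distanceBetween(vertex, focalPoint);
    if (d <= inFocusRange) {
        return vertex;                       // in focus: no offset applied
    }
    const float radius = blurScale * (d - inFocusRange);
    std::uniform_real_distribution<float> dist(-radius, radius);
    // For brevity the offset is drawn along all three axes; an implementation
    // following the earlier sketch could restrict it to the plane
    // perpendicular to the viewing direction.
    return {vertex.x + dist(rng), vertex.y + dist(rng), vertex.z + dist(rng)};
}
```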
- progressive rendering techniques are used to change the depth of field effect of the jittered rendering pipeline.
- FIG. 8 depicts a block diagram of an example computer system 800 in which various of the examples described herein may be implemented.
- the computer system 800 includes a bus 802 or other communication mechanism for communicating information, and one or more hardware processors 804 coupled with bus 802 for processing information.
- Hardware processor(s) 804 may be, for example, one or more general purpose microprocessors.
- the computer system 800 also includes a main memory 806, such as a RAM, cache and/or other dynamic storage devices, coupled to bus 802 for storing information and instructions to be executed by processor 804.
- Main memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 804.
- Such instructions, when stored in storage media accessible to processor 804, render computer system 800 into a special-purpose machine that is customized to perform the operations specified in the instructions.
- the computer system 800 further includes a ROM 808 or other static storage device coupled to bus 802 for storing static information and instructions for processor 804.
- a storage device 810 such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 802 for storing information and instructions.
- the computer system 800 may be coupled via bus 802 to a display 812, such as an LCD (or touch screen), for displaying information to a computer user.
- An input device 814 is coupled to bus 802 for communicating information and command selections to processor 804.
- Another type of user input device is cursor control 816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812.
- the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
- the computing system 800 may include a user interface module to implement a graphical user interface ("GUI") that may be stored in a mass storage device as executable software codes that are executed by the computing device(s).
- This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
- the word "component," "engine," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java™, C or C++.
- a software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl™, or Python™. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts.
- Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution).
- Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device.
- Software instructions may be embedded in firmware, such as an EPROM.
- hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
- the computer system 800 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or field-programmable gate arrays ("FPGAs"), firmware and/or program logic which in combination with the computer system causes or programs computer system 800 to be a special-purpose machine. According to one example, the techniques herein are performed by computer system 800 in response to processor(s) 804 executing one or more sequences of one or more instructions contained in main memory 806. Such instructions may be read into main memory 806 from another storage medium, such as storage device 810. Execution of the sequences of instructions contained in main memory 806 causes processor(s) 804 to perform the process steps described herein. In alternative examples, hard-wired circuitry may be used in place of or in combination with software instructions.
- non-transitory media refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media.
- Non-volatile media includes, for example, optical or magnetic disks, such as storage device 810.
- Volatile media includes dynamic memory, such as main memory 806.
- non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
- Non-transitory media is distinct from but may be used in conjunction with transmission media.
- Transmission media participates in transferring information between non-transitory media.
- transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 802.
- transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
- the computer system 800 also includes a communication interface 818 coupled to bus 802.
- Communication interface 818 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks.
- communication interface 818 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
- communication interface 818 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN).
- Wireless links may also be implemented.
- communication interface 818 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
- a network link typically provides data communication through one or more networks to other data devices.
- a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP).
- the ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet.”
- Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams.
- the signals through the various networks and the signals on network link and through communication interface 818, which carry the digital data to and from computer system 800, are example forms of transmission media.
- the computer system 800 can send messages and receive data, including program code, through the network(s), network link and communication interface 818.
- a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 818.
- the received code may be executed by processor 804 as it is received, and/or stored in storage device 810, or other non-volatile storage for later execution.
- Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware.
- the one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service” (SaaS).
- the processes and algorithms may be implemented partially or wholly in application-specific circuitry.
- the various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations.
- a circuit might be implemented utilizing any form of hardware, software, or a combination thereof.
- processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit.
- the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality.
- Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 800.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
Abstract
Systems and methods provide improved user experience rendering techniques. The systems and methods disclosed herein may provide for a flexible user experience, wherein a snapshot may be indicated by the user to switch from a user experience rendering at what the user may consider a smooth framerate, to a user experience associated with higher quality image rendering. In addition, the systems and methods disclosed herein provide for achievement of desirable visual effects (e.g., anti-aliasing, depth of field) by jittering objects in the scene data during the image rendering process. The object jittering mitigates or avoids undesirable visual effects (e.g., bleed-over) that may occur when applying other graphical manipulations in order to render user experiences.
Description
MULTI-PIPELINE AND JITTERED RENDERING METHODS FOR MOBILE
Cross-Reference to Related Applications
[0001] This application claims the benefit of U.S. Provisional Application No. 63/422,826 filed November 4, 2022 and titled "INTERACTIVE-RATE PATH TRACING FOR MOBILE," which is hereby incorporated herein by reference in its entirety.
Technical Field
[0002] The disclosed technology generally relates to methods of rendering digital images at a user device. Particularly, the disclosed technology includes methods and systems for applying graphical manipulations to virtual environments, wherein the methods and systems result in an image snapshot functionality, virtual object jitter based antialiasing, and virtual object jitter-based depth of field effect.
Background
[0003] Image rendering quality, including more realistic simulations of real-world objects, higher resolution, or smoother framerates, is a constant goal for several technologies and applications. This goal applies for all technologies relying on image rendering to create a user experience, however consumers of real-time gaming applications are notorious for their high expectations regarding the render quality of their user experiences. Trends in real-time gaming applications toward employing processor heavy graphical manipulation techniques such as ray tracing have put the pressure on innovators in this space to come up with new methods which provide the same or similar visual effects with less demand on hardware components. Innovators also seek to implement new functionalities which allow users of real-time gaming applications increased flexibility in tailoring their user experience to their personal preferences.
[0004] Furthermore, when working with mobile device platforms to create a user experience, endeavors to increase render quality are even more daunting, as mobile platforms often have tighter hardware limitations when compared with similarly priced desktop platforms. Improvements which allow the same or similar rendering qualities to be provided with less demand on processors are welcome steps toward keeping mobile devices viable as platforms rendering user experiences.
Brief Description of the Drawings
[0005] The present disclosure, in accordance with one or more various examples, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or example examples.
[0006] FIG. 1 illustrates one example of a user experience rendering system, in accordance with examples of the present disclosure.
[0007] FIGS. 2A-F illustrate various graphical manipulations within an example rendering pipeline, in accordance with examples of the present disclosure.
[0008] FIG. 3 illustrates an example graphical application generating a real-time user experience, in accordance with examples of the present disclosure.
[0009] FIG. 4 illustrates an example graphical application employing an in-game snapshot functionality, in accordance with examples of the present disclosure.
[0010] FIG. 5 illustrates an example graphical application generating a real-time user experience and employing an embodiment of a jittered rendering pipeline, in accordance with examples of the present disclosure.
[0011] FIGS. 6A-B illustrate how a jittered rendering pipeline may achieve an antialiasing effect, in accordance with examples of the present disclosure.
[0012] FIGS. 7A-B illustrate how an image rendered with a depth of field effect generated by a jittered rendering pipeline may compare with an image rendered without a depth of field effect, in accordance with examples of the present disclosure.
[0013] FIG. 8 depicts an additional block diagram of an example computer system in which various of the examples described herein may be implemented, in accordance with examples of the present disclosure.
[0014] The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.
Detailed Description
[0015] The technology described herein provides improved user experience rendering techniques. In the disclosure herein, "rendering" refers to any process converting scene data into one or more images. "Scene data" as used herein refers to a series of data structures representing a virtual environment. A "virtual environment" as used herein refers to one or more virtual objects (e.g., characters, landscape background, cameras, raster grids, light sources) spatially oriented along a virtual coordinate system. A "user experience" refers to one or more images which may be displayed to a user to simulate a virtual environment. In some examples, the user experience rendered is additionally interactive. "Interactive" as used herein denotes that the user may provide inputs which alter the virtual environment that is being simulated, and/or how the virtual environment is rendered into one or more images.
[0016] In some examples, the technology described herein is organized as a graphical application being run on a computer. A "graphical application" as described herein refers to a set of instructions which may be executed by a processor, wherein the set of instructions comprise instructions which cause the processor to perform graphical manipulations on scene data. A "graphical manipulation" as described herein refers to a set of logical operations which change data in accordance with a change to a visual representation of the data. In some examples, additional sets of instructions may be run within or in parallel with a graphical application, wherein the additional sets of instructions apply manipulations on scene data which may reflect user inputs or other changes (e.g.
changes to non-visual characteristics of the virtual objects such as a virtual actor's backstory or a virtual object's simulated mass) to the scene data.
[0017] In one example of a graphical application, a rendering pipeline may be employed. A "rendering pipeline" described herein refers to a series of graphical manipulations. In some examples, a rendering pipeline consists of graphical manipulations which may be conceptualized as placing and orienting virtual objects in a common coordinate frame, projecting those objects onto a raster grid, translating virtual optical characteristics (e.g., color, brightness, transparency) of the projected objects into virtual optical characteristics stored at fragments within the raster grid, converting the virtual optical characteristics at each fragment into pixel data written to a framebuffer, and then driving that framebuffer to a display element. A "raster grid" as used herein refers to a virtual two-dimensional grid. A "fragment" as used herein refers to individual cells within the raster grid. "Pixel data" as used herein refers to data entries which approximate virtual optical characteristics, wherein the pixel data is compatible with being driven to an associated element (e.g. a single red/green/blue combination "RGB" light-emitting diode ("LED") pixel on a computer monitor screen) of a display device. A "framebuffer" as used herein refers to a collection of positions within computer readable media that corresponds to elements of a display device. In some examples, the framebuffer is a faster portion of machine readable media (e.g., random-access memory ("RAM"), processor cache) that is capable of being driven to a display device by a processor quicker than slower portions of machine readable media (e.g., read-only memory ("ROM"), flash drives). In some examples, where a processor completes execution of a rendering pipeline the rendering pipeline is said to have been "iterated". Multiple iterations of a rendering pipeline may result in a series of framebuffers being driven to the display device. The rate that a new framebuffer is driven to the display device, overriding a previous framebuffer is referred to herein as the "framerate" of the associated user experience.
[0018] In some examples, graphical applications may provide users with greater flexibility in tailoring their user experience to their personal preferences. This is
accomplished by utilizing multiple rendering pipelines simultaneously. "Simultaneously" is used in this context to denote that the executable instructions and/or program data associated with the graphical manipulations within the pipelines may be loaded into a faster portion of memory before execution of some portion of the graphical application. This transfer to a faster portion of memory allows either of the rendering pipeline's associated data and/or instructions to be more quickly passed to processors when switching between the multiple rendering pipelines. Users receive greater flexibility in tailoring their user experience because running multiple rendering pipelines in such a way gives the user the ability to switch between the rendering pipelines. In some examples, this switch is initiated by the user indicating that they wish to take an in-environment snapshot. In some examples, the rendering pipeline associated with the snapshot yields a higher render quality, but at a lower framerate, effectively giving the user flexibility in determining when to sacrifice user experience framerate for user experience render quality. The switch between rendering pipelines, when done on a system utilizing multiple rendering pipelines in the manner disclosed herein, is able to be accomplished without delays that would otherwise be needed to read a rendering pipeline's associated data and/or instructions from a slower portion of memory.
[0019] In addition, examples of rendering pipelines disclosed herein employ improved methods of achieving visual effects in the user experience. In some examples, this is done by implementing jittering virtual objects in the virtual environment between iterations of the associated rendering pipeline, such that effects like anti-aliasing and depth of field may be achieved.
[0020] "Aliasing" as used herein may refer to the phenomenon where an edge in of a virtual object bisects a fragment, resulting in a jagged edge formed by the fragment being able to map to one set of virtual optical characteristics. In some embodiments, fragments are limited to one set of virtual object characteristics so that they may be compatible with eventual translation into one set of pixel data to be driven to single element of a display device.
[0021] "Jittering" as used herein refers to applying offsets to the spatial positions of virtual objects (or points making up vertices of the virtual objects), wherein the offsets change between each iteration of the associated rendering pipeline. In some examples, the offsets are randomized, wherein the virtual objects are moved to a random point within a determined volume around the given virtual object's original position. In some examples, the offsets are limited to a determined area in a 2D-plane normal to a line formed between a virtual camera perspective point and a point on the virtual object. These offsets may result in the virtual optical characteristics for a given fragment to change between iterations of a rendering pipeline. This change between iterations results in a change in pixel data output to the associated element of the display device at various images within the series of images making up the user experience. In some embodiments, these changing elements in the user experience results in a blurring illusion, which smooths out the jagged edges from the aliasing effect. This reduction of aliasing artifacts is referred to herein as "anti-aliasing". In some embodiments, when pixel data for fragments are written to the framebuffer, the pixel data may be accumulated with previous pixel data that may have already been present in framebuffer as a result of a previous iteration of a rendering pipeline. This accumulation operation results in anti-aliasing via a similar blurring of pixel data across multiple iterations of a rendering pipeline. This accumulating of virtual optical characteristics or pixel data across multiple iterations of a rendering pipeline is herein referred to as "progressive rendering".
[0022] "Depth of field" as used herein, refers to the effect created where virtual objects spatially separated from a focal point are blurred. In examples of rendering pipelines achieving a depth of field effect, the rendered images encourage the viewer to focus on the focal point, as objects separated from the focal point are blurred and therefore hard to visually perceive. In many examples, the magnitude of blurring scales as the distance between the virtual object to the focal point increases. Examples are disclosed herein where a depth of field effect is achieved by a rendering pipeline employing graphical manipulations
which jitter virtual objects according to their distance from a focal point defined in the scene data.
[0023] As an example of the improvements provided by the disclosed example rendering pipelines: where rendering pipelines employ jittering to achieve effects in the rendered images, the intended visual effects are more effectively isolated to the virtual objects for which they were intended. For instance, where depth of field effects are achieved through graphical manipulations to later forms of the scene data in the rendering pipeline (e.g., raster grids or framebuffers), the effects may apply across multiple objects in the image, as fragments within the raster grid, or pixel data entries within the framebuffer, do not have conceptual separation relating to the virtual objects for which the depth of field effect may have been intended. Instead, the effect is applied to virtual objects for which the effect was not intended. This misapplication of visual effects is referred to herein as "bleed-over". In some examples, further graphical manipulations are required to correct these bleed-over artifacts. By avoiding these further graphical manipulations, the rendering pipeline may complete in less time, resulting in the rendering pipeline providing a similar depth of field effect at a higher framerate.
[0024] FIG. 1 illustrates one example of a user experience rendering system 100. User experience system 100 may implement an example rendering pipeline (e.g., real-time rendering pipeline 131, offline rendering pipeline 132, interactive-time rendering pipeline 133, jittered rendering pipeline 134) to render a user experience. The user experience rendering system 100 comprises machine readable media 101 (sometimes referred to herein generally as "memory"), interconnection system 102, processor 103, input interface 104, display interface 105, and communication interface 106. In this example, user experience rendering system 100 is configured to generate one or more images, such that a user experience is created by displaying the one or more images to a user.
[0025] Machine readable media 101 may comprise any form of information storage (RAM, ROM, flash drives, processor caches), and covers both static and dynamic storage as well as long term and short term storage. Some of the information stored on machine
readable media 101 may be categorized as executable instructions 107 and/or program data 108. Executable instructions 107 may refer to any set of instructions (e.g., compiled program logic, non-compiled program logic, machine code) stored on machine readable media 101 that, when executed by processor 103, cause the processor 103 to carry out the functions described herein. Executable instructions 107 may include operating system 109, application program 110, and graphical application 130. Program data 108 may refer to any collection of data input to and/or output from the processor 103 when executing any member of executable instructions 107. Program data 108 may include operating system data 111, application data 112, graphical application data 135, and rendered image store 136.
[0026] Machine readable media 101 may comprise a combination of different storage media with physical and/or logical separation. In addition, data and/or instructions stored on machine readable media 101 may be stored partially across a plurality of storage media. For instance, while executing application program 110 processor 103 may write some portion of application data 112 from ROM to RAM, such that processor 103 will be able to more quickly access that portion of application data 112 while executing remaining instructions within application program 110. This writing of application data 112 from ROM to RAM does not remove application data 112 from machine readable media 101, because machine readable media 101 may refer collectively to any and all forms of machine readable media accessible by the processor (e.g., RAM, ROM, flash drives, processor caches).
[0027] Interconnection system 102 may refer to one or more communication media facilitating interaction between components of user experience rendering system 100. In FIG. 1, interconnection system 102 is structured as a bus connected to machine readable media 101, processor 103, input interface 104, display interface 105, and communication interface 106, however in some examples, one or more of these components may have dedicated connections to one or more of the other components. In some examples, one or more of these components may be connected to one or more of the other components via a network connection.
[0028] Processor 103 may refer to one or more general purpose processors (e.g. microprocessors) and/or one or more special purpose processors (e.g., graphics processing units ("GPUs"), network processors, or application-specific integrated circuits ("ASICs")). Further, in examples where multiple processors are represented by processor 103, said processors may be operated in parallel so that multiple instruction sets may be simultaneously executed by processor 103.
[0029] Input device 104 may refer to any device with which a user may interact (e.g. a keyboard, mouse, touch screen), wherein the device converts such interactions into signals interpretable by processor 103. FIG. 1 depicts input device 104 as a mobile platform's touchscreen 113, however other examples of user experience rendering system 100 may be implemented on a different platform lending itself to other types of input devices. For instance, where a personal desktop computer is used as the platform for user experience rendering system 100, an input device 104 may comprise a keyboard and/or mouse. In addition, input device 104 is depicted herein as relating to only one input device, however input device 104 may also refer to one or more additional input devices that may be operated simultaneously with the mobile platform's touch screen (e.g., side buttons, phone camera, microphone).
[0030] Display device 105 may refer to any device which may output a visual experience to a user (e.g. a smartphone screen, a liquid crystal display ("LCD"), an LED). Rendered images may be output to display device 105 by processor 103 writing pixel data representing the rendered image to rendered image store 136, with display device 105 in turn driving the pixel data stored at the rendered image store 136 to elements of display device 105. In some examples, processor 103 may drive the pixel data stored at the rendered image store 136 to elements of display device 105. FIG. 1 depicts display device 105 as a mobile platform display screen 114, however other examples of user experience rendering system 100 may be implemented on a different platform which lends itself to other types of display devices. For instance, where a personal desktop computer is used as the platform for user experience rendering system 100, an LCD monitor may permit a
physically larger and/or higher resolution (more pixels per image) user experience. In addition, display device 105 is depicted herein as relating to only one display device, however display device 105 may also refer to one or more additional display devices (e.g., LEDs, attached LCD screens) that may be operated simultaneously with the mobile platform's screen.
[0031] Communication interface 106 may refer to one or more devices that allow processor 103 to communicate with components not located locally with processor 103, and/or one or more devices that allow for instructions and/or data to be sent from machine readable media 101 over a network. Communication interface 106 may include a modem or soft modem, a network interface (such as an Ethernet, network interface card, WiMedia™, IEEE™ 802.XX or other interface), a communications port (such as for example, a Universal Serial Bus ("USB") port, infrared ("IR") port, Recommended Standard 232 ("RS232") port, Bluetooth® interface, or other port), or other communications interface. Instructions and data transferred via communications interface 106 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 106. These signals might be emitted from communications interface 106 via a channel using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, a radio frequency ("RF") link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels. FIG. 1 depicts communication interface 106 employing a wireless cellular link 115 as a channel, however other examples of user experience rendering system 100 may be implemented on a different platform which lends itself to other types of communications channels. FIG. 1 depicts all of user experience rendering system 100's components being located locally within one mobile phone's physical housing and communicating via interconnection system 102. In some examples, one or more of the disclosed components may instead be located remotely (outside of any associated physical housing around one or more components), with the remaining locally
located components communicating with the remotely located components via communication interface 106.
[0032] Referring now to executable instructions 107 and program data 108 present on machine readable media 101, operating system 109 manages the various hardware and software components of the system and provides common interfacing services. Operating system 109 may include any known operating system available (e.g., Windows™, MacOS™, Linux™), may be custom written for the system, or may be excluded altogether with the hardware and software components providing their own interfacing services. When executing instructions relating to operating system 109, processor 103 may read input from and/or output to operating system data 111. Operating system data 111 may refer to data related to interfacing the various components of a computer system (e.g., executable instruction locations, program data locations, interface preferences, device driver locations).
[0033] Application program 110 may include one or more software programs meant to perform a function aside from rendering a user experience (e.g., email, phone, internet browser). When executing instructions relating to application program 110, processor 103 may read input from and/or output to application data 112. Application data 112 may refer to data related to the functions performed by application program 110 (e.g., email address book, phone number contact list, bookmarked webpage list).
[0034] Graphical application 130 may include any software program meant to output one or more rendered images. Examples of graphical applications may include interactive video games (e.g. World of Warcraft™, Final Fantasy XIV™, Pac-Man™), as well as animation software (e.g. Autodesk™ Maya™, Blender™, Adobe™ Animate™). When executing instructions relating to graphical application 130, processor 103 may read input from and/or output to graphical application data 135. Graphical application data 135 may refer to data related to the content visually displayed to a user (e.g., virtual character settings, login account profile, virtual environment objects).
[0035] Rendered image store 136 is depicted herein within graphical application data 135, and is where images may be output by processor 103 executing a rendering
pipeline. In some examples, rendered image store 136 is the framebuffer. As described herein, rendered image store 136 may consist of a fast portion (e.g., RAM, processor cache) of machine readable media 101. In some examples, such as those employing an offline rendering pipeline 132, rendered images output by graphical application 130 may be written to a slower portion (e.g. ROM) of rendered image store 136 and then later written back to the framebuffer to drive rendered images on display device 105 without needing to run iterations of a rendering pipeline. Note that rendered image store 136 may refer to two separate and distinct machine readable media devices (e.g., RAM vs. ROM), with the shared characteristic that they may be utilized to hold rendered image data for later display on display device 105.
[0036] Graphical application 130 may instruct processor 103 to process user inputs, graphical application data, etc. into changes to scene data 137 associated with a virtual environment, such that the changes to the virtual objects within scene data 137 are reflected in one or more images rendered based on the virtual environment. For example, in the context of a video game, processor 103 may execute instructions of graphical application 130 which output changes to the scene data 137 such that a virtual basketball object changes position relative to a virtual basketball court as if the virtual basketball had been propelled forward while falling according to a virtual gravity.
[0037] In some examples, not all of the associated changes to the virtual objects within scene data 137 trigger an iteration of a rendering pipeline and a resulting output to the display device 105. In such examples, changes to the virtual basketball's data within scene data 137 may occur at a rate exceeding the rate that images can be rendered and output by processor 103 to the display device 105. Instead, the graphical application 130 instructs processor 103 to run through iterations of a rendering pipeline at periodic intervals independent of changes to the scene data 137. In such examples, because the rendering pipeline may be iterated fewer times per second than the processor 103 updates scene data 137, the instructions within the rendering pipeline are executed fewer times per second than if the rendering pipeline were iterated for every change to scene data 137.
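For illustration only, the following is a minimal Python sketch, not taken from the disclosure, of decoupling scene data updates from rendering pipeline iterations: the scene data is updated on every tick of a main loop, while the rendering pipeline is iterated only at a periodic interval associated with a target framerate. All identifiers (update_scene, run_render_pipeline) and the specific rates are hypothetical.

```python
# Minimal sketch (not the claimed implementation) of decoupling scene-data
# updates from rendering-pipeline iterations: the scene is updated every
# tick, but the pipeline only runs when the render interval has elapsed.
import time

SCENE_UPDATE_HZ = 240.0      # assumed scene/physics updates per second
TARGET_FRAMERATE = 30.0      # assumed minimum smooth framerate
RENDER_INTERVAL = 1.0 / TARGET_FRAMERATE

scene_data = {"basketball_y": 10.0, "velocity_y": 0.0}

def update_scene(dt):
    # Apply a simple virtual gravity to the basketball object.
    scene_data["velocity_y"] -= 9.8 * dt
    scene_data["basketball_y"] += scene_data["velocity_y"] * dt

def run_render_pipeline(scene_snapshot):
    # Placeholder for one iteration of a rendering pipeline.
    print(f"rendered frame with basketball_y={scene_snapshot['basketball_y']:.2f}")

last_render = time.monotonic()
for _ in range(240):                      # main loop (bounded for the sketch)
    update_scene(1.0 / SCENE_UPDATE_HZ)   # scene changes every tick...
    now = time.monotonic()
    if now - last_render >= RENDER_INTERVAL:
        run_render_pipeline(dict(scene_data))  # ...but rendering runs less often
        last_render = now
    time.sleep(1.0 / SCENE_UPDATE_HZ)     # pace the loop to the update rate
```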
[0038] In some examples, the virtual objects within scene data 137 have points which define the vertices of polygons forming the surface of the objects. The objects, vertices, and/or polygons may additionally have associated virtual optical characteristics (e.g., color, brightness, transparency). In one example, in order for the scene data 137 to be converted into a visual image for display at display device 105, the virtual optical characteristics must be translated to pixel data which may be saved to the framebuffer within rendered image store 136.
[0039] In some examples, said virtual optical characteristics of virtual objects within scene data 137 are mapped to pixel data entries within the framebuffer by instructing processor 103 to project the virtual optical characteristics of one or more polygons in the scene data 137 onto positions on a raster grid. The virtual optical characteristics projected onto each fragment of the raster grid are then written by processor 103 to a position within the framebuffer which corresponds to an element of display device 105.
[0040] There are many ways that scene data 137, copies of scene data 137, fragment data, and/or framebuffer data may be manipulated to provide for a desired effect in the user experience. The order and/or number of times that these graphical manipulations are applied and/or the data to which these graphical manipulations are applied may be varied to provide for a wide range of user experiences with differing render times and/or image quality output from the associated rendering pipeline. Examples of rendering pipelines are depicted herein as being subsets of instructions within graphical application 130, however in other examples, such rendering pipelines may instead be standalone sets of instructions not exclusive to any specific application, program, or set of executable instructions.
[0041] Real-time rendering pipeline 131 may refer to any rendering pipeline that renders an image within a time frame meant to provide a user with a smooth visual experience based on scene data 137 that is being simultaneously manipulated by the user's inputs. A person of skill in the art will appreciate that real-time rendering pipelines are often required to complete an iteration within a timeframe that allows an entire framebuffer to
be updated a minimum number of times per second (e.g., no less than thirty times per second) associated with a target framerate. What framerate is considered sufficiently smooth, however, can vary widely with the graphical application and the target user experience. For instance, a competitive first-person shooter video game (e.g. Counter-Strike®) may require over 100 frames per second to be considered adequately smooth for the needs of competitive gamers, however animation software where virtual environments are designed for later refinement and/or viewing (e.g. Autodesk® Maya®, Blender®, Adobe® Animate®) may be sufficiently smooth as long as the animator can still orient themselves within the virtual environment (e.g. less than 10 frames per second may suffice).
[0042] Offline rendering pipeline 132 may refer to any rendering pipeline that renders an image within any length of time and without the scene data 137 being simultaneously manipulated by a user's inputs. An offline rendering pipeline 132 may render one or more images based on scene data 137 that is changing with time, however, these changes are determined before rendering of a first image begins. In some examples, the purpose of offline rendering pipeline 132 is to create a higher quality visual experience for a user that does not interact with the graphical application 130 while the rendering is taking place. In some examples, the tradeoff of higher quality often comes at the expense of the rendering process taking longer, as processor 103 may be required to complete more steps per iteration of the offline rendering pipeline 132. In some examples employing offline rendering pipeline 132, the one or more rendered images are stored at rendered image store 136 and later written to the framebuffer after all iterations of offline rendering pipeline 132 have completed.
[0043] Interactive-time rendering pipeline 133, like real-time rendering pipeline 131, may refer to any rendering pipeline that renders an image within a time frame meant to provide the user with a smooth visual experience based on scene data 137 that is being simultaneously manipulated by the user's inputs to the user experience rendering system 100. In contrast to real-time rendering pipeline 131, interactive-time rendering pipeline 133 may update the framebuffer at a lower framerate than real-time rendering pipeline 131 or
some other threshold value (e.g. thirty frames per second). In some examples, this threshold value relates to what is considered sufficiently smooth for the target user experience, and can vary widely based on the target user. For instance, a competitive first-person shooter video game (e.g. Counter-Strike®) may consider any rendering pipeline outputting at any framerate under a threshold value of 100 frames per second to be an interactive-time rendering pipeline. On the other hand, animation software where virtual environments are designed for later refinement and/or viewing (e.g. Autodesk® Maya®, Blender®, Adobe® Animate®) may require a rendering pipeline to output a framerate under 10 frames per second before it is considered an interactive-time rendering pipeline.
[0044] In one example, interactive-time rendering pipeline 133 may be provided alongside a real-time rendering pipeline in order to give the user the ability to selectively switch between rendering pipelines. This switching allows the user to temporarily sacrifice the amount of frames rendered per second (and user experience smoothness) in exchange for an increase to the quality (e.g. resolution, virtual lighting realism, polygon count) of images rendered by the user experience rendering system 100.
[0045] Jittered rendering pipeline 134 may refer to a rendering pipeline that renders an image within any time frame and that additionally manipulates the spatial locations of virtual objects within scene data 137 (e.g., the x,y,z coordinates of the virtual objects' centerpoints, vertices, edges, etc.). In some examples, instead of scene data 137 being manipulated directly by processor 103 while executing jittered rendering pipeline 134, a copy of scene data 137 may be manipulated, such that changes in spatial positions in the copy of scene data 137 do not affect other sets of instructions that may be reading from scene data 137 directly (e.g., a physics engine simulating physics within the virtual environment). This jitters the virtual objects being rendered between subsequent iterations of the jittered rendering pipeline 134, such that various visual effects accumulate across multiple images rendered by the user experience rendering system 100. In one example, a jittered rendering pipeline 134 may be employed to provide anti-aliasing on the rendered
user experience. In another example, a jittered rendering pipeline 134 may provide a depth of field effect.
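As a non-limiting illustration of jittering a copy of the scene data rather than the scene data itself, the following Python sketch perturbs vertex positions in a deep copy so that other consumers of the original scene data (e.g., a physics engine) are unaffected. The data layout and jitter magnitude are assumptions made for the sketch.

```python
# Minimal sketch, assuming scene data holds lists of vertex coordinates;
# the jittered pipeline perturbs a *copy*, leaving the original untouched.
import copy
import random

scene_data = {"cube_vertices": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]}

def jitter_copy(scene, magnitude=0.01):
    jittered = copy.deepcopy(scene)
    jittered["cube_vertices"] = [
        (x + random.uniform(-magnitude, magnitude),
         y + random.uniform(-magnitude, magnitude),
         z)                                  # assumed: no jitter along the view axis
        for (x, y, z) in jittered["cube_vertices"]
    ]
    return jittered

frame_input = jitter_copy(scene_data)       # rendered this iteration
assert scene_data["cube_vertices"][0] == (0.0, 0.0, 0.0)  # original unaffected
```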
[0046] Regarding FIGS. 2-7 below, different examples of graphical application 130 and rendering pipelines are illustrated. While processor 103 executes the instructions associated with the actions disclosed herein, for sake of concise language, such actions may be described as being taken by the associated example of graphical application 130, or such actions may not be explicitly described as taken by any specific actor. In those cases, processor 103 is still to be understood as executing the instructions associated with the action. In some examples, the disclosed steps may be executed in a different order, in parallel across multiple processors, or in addition to other steps relating to graphical manipulation. As such, the term "step" is provided for illustrative purposes and should be non-limiting in the ordering of the operations discussed herein.
[0047] FIGS. 2A-F illustrate various graphical manipulations within an example rendering pipeline. In some examples, the disclosed rendering pipeline constitutes a complete rendering pipeline. In other examples, a complete rendering pipeline may constitute the disclosed example with one or more of the disclosed graphical manipulations removed, and/or one or more other graphical manipulations added in.
[0048] FIGS. 2A-C illustrate the application of one or more vertex shaders to scene data, wherein the vertex shaders may convert vertices relating to virtual objects within the scene data into points located relative to a coordinate system defining space within a virtual environment. Before applying the vertex shaders, the scene data may comprise a list of virtual objects (e.g., characters, landscape background, cameras, raster grids, light sources) grouped as arrays of data defining characteristics of the virtual objects, including the vertices defining the bounds of the virtual objects.
[0049] FIG. 2A illustrates a virtual cube object 201 represented as an array of data containing x,y,z coordinates of each vertex of the virtual cube object 201. While virtual cube object 201 is stored as a list of vertices, additional attributes may also be ascribed to virtual cube object 201 by inclusion in the associated array of data. For instance, each vertex may
be associated with a set of virtual optical characteristics (e.g., color, brightness, transparency), and/or a set of virtual physical characteristics (e.g., mass, heat, velocity, volume).
[0050] FIG. 2B illustrates an example result after one or more vertex shaders are applied to the scene data. A virtual spatial coordinate system 202 may be built based on the spatial coordinates and direction designated for a virtual camera perspective defined in the scene data, wherein the virtual spatial coordinate system 202 defines spatial relations within a virtual environment. The camera perspective is not displayed directly in FIGS. 2B-E, but may be understood to approximate the same perspective from which a viewer observes the content of FIGS. 2B-E. The virtual environment is populated with the vertices 203 of virtual cube object 201, as defined by each vertex's x,y,z coordinates in the scene data. In some examples, objects outside of a viewing area defined in relation to the field of view, direction, and location of the virtual camera perspective may be removed from the scene data.
[0051] FIG. 2C illustrates an example result after further vertex shaders are applied to the scene data, wherein the further vertex shaders comprise graphical manipulations which connect the vertices of virtual objects into a mesh of polygons. As shown, the vertices
203 of virtual cube object 201 have been connected to form a set of six polygons 204. In some examples, the vertex shaders also interpolate, across a polygon 204, the virtual optical characteristics from the polygon 204's vertices. In the disclosed example, the end result of the application of the one or more vertex shaders is shown in FIG. 2C: the scene data is processed into several polygons 204 with virtual optical characteristics located along a virtual spatial coordinate system 202.
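The vertex-shader stage of FIGS. 2A-C may be sketched, in simplified form, as follows. The Python sketch below places the vertices of a cube (FIG. 2A) into a camera-relative coordinate system (FIG. 2B) and groups them into polygons via an index list (FIG. 2C); the axis-aligned camera and the particular face indices are illustrative assumptions rather than part of the disclosure.

```python
# Minimal sketch of a vertex-shader-like stage: object vertices are placed
# in a camera-relative coordinate system and grouped into polygons.
CUBE_VERTICES = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # back face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # front face corners
]
CUBE_FACES = [  # each quad face as four indices into CUBE_VERTICES
    (0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
    (2, 3, 7, 6), (1, 2, 6, 5), (0, 3, 7, 4),
]
CAMERA_POSITION = (0.5, 0.5, -4.0)  # assumed camera looking down +z

def to_camera_space(vertex, camera):
    # Express the vertex relative to the virtual camera perspective.
    return tuple(v - c for v, c in zip(vertex, camera))

camera_space_vertices = [to_camera_space(v, CAMERA_POSITION) for v in CUBE_VERTICES]
polygons = [[camera_space_vertices[i] for i in face] for face in CUBE_FACES]
print(len(polygons), "polygons placed in the virtual coordinate system")
```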
[0052] FIGS. 2D-F illustrate a rasterization process as applied to the several polygons
204 located within the virtual environment generated from the scene data. Rasterization, as described herein, refers to any process of converting a perspective containing a collection of virtual objects within a virtual environment into a collection of pixel data ready to be output to a display device.
[0053] FIG. 2D illustrates the imposition of a raster grid 205 within the virtual environment, and the projection of polygons 204 onto the raster grid 205. A raster grid may be a two dimensional grid that is perpendicular to the viewing direction of the virtual camera. In some examples, the raster grid 205 is a virtual object represented in the scene data. In some examples, after the raster grid is imposed in the virtual environment, polygons 204 are flattened along the direction the virtual camera is facing and slid toward the virtual camera perspective point until the flattened polygons 206 intersect with the raster grid 205. The end result can be seen in FIG. 2D, wherein the flattened polygons 206 are visible on the raster grid 205. Of note, all actions described herein are completed by a processor (e.g. processor 103). While these actions are illustrated in FIGS. 2A-F in a visual sense for better understanding by the reader, the illustrated actions are all, in a more precise sense, logical operations performed by a processor on scene data (or copies of the scene data) stored in memory. The logical operations corresponding to a given graphical manipulation illustrated herein are understood by one of skill in the art.
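A simplified sketch of the flattening described for FIG. 2D is shown below: each camera-space vertex is projected along the viewing direction onto a two dimensional raster grid. The focal length, grid dimensions, and +z viewing direction are assumptions made for illustration.

```python
# Minimal sketch of "flattening" polygons onto a raster grid: each
# camera-space vertex is perspective-projected onto a 2D grid.
GRID_W, GRID_H = 16, 16
FOCAL_LENGTH = 8.0

def project_to_grid(vertex):
    x, y, z = vertex
    # Perspective divide: flatten along the camera's viewing direction (+z).
    u = FOCAL_LENGTH * x / z + GRID_W / 2
    v = FOCAL_LENGTH * y / z + GRID_H / 2
    return (u, v)

triangle = [(-1.0, -1.0, 4.0), (1.0, -1.0, 4.0), (0.0, 1.0, 5.0)]  # camera space
flattened = [project_to_grid(p) for p in triangle]
print("flattened polygon on raster grid:", flattened)
```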
[0054] FIG. 2E illustrates the application of one or more fragment shaders to the flattened polygons 206 and raster grid 205. Fragment shaders may be understood in this context to populate fragments within a raster grid with virtual optical characteristics associated with objects in a virtual environment. In the disclosed example, this is accomplished by writing the virtual optical characteristics of the flattened polygons 206 to the fragments 207 which they intersect. In some examples, each flattened polygon 206 may have optical characteristics that were mapped to it based on optical attributes included within the virtual cube object 201. In examples not illustrated herein, polygons 206 may have received their optical attributes at an earlier stage in the rendering pipeline, such as where simulated light from virtual lighting source objects may have been introduced into the virtual environment when vertex shaders were being applied. In some examples, for each fragment 207 within the raster grid 205, one or more points within the fragment 207 are sampled to determine which flattened polygon 206 intersects the fragment 207. After the intersecting
polygon(s) 206 are identified, the optical characteristics of the intersecting polygon(s) 206 may be written to the fragment 207 via one or more logical operations.
[0055] In the disclosed example, each of the fragments 207 is shaded according to the virtual optical characteristics of the flattened polygon 206 that intersects with a point at the center of each fragment 207. This can be seen by the jagged edge that is formed by an entire fragment 207 displaying the optical characteristics of one flattened polygon 206, even in cases (such as fragment 207A) where multiple flattened polygons 206 cover a fragment 207. This jagged edge visual artifact is an example of aliasing, and, as described above and in relation to FIGS. 6A-B below, various methods of anti-aliasing may be employed to soften such jagged edge visual artifacts.
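The center-point sampling described above may be sketched as follows. In this illustrative Python sketch, each fragment samples a single point at its center and takes the full color of whichever flattened polygon covers that point, which is what produces the jagged (aliased) edge; the triangle coordinates and colors are arbitrary.

```python
# Minimal sketch of a fragment stage that samples one point per fragment.
GRID_W, GRID_H = 16, 16

def edge(a, b, p):
    # Signed area test: which side of edge a->b the point p lies on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covers(tri, p):
    d0, d1, d2 = edge(tri[0], tri[1], p), edge(tri[1], tri[2], p), edge(tri[2], tri[0], p)
    return (d0 >= 0 and d1 >= 0 and d2 >= 0) or (d0 <= 0 and d1 <= 0 and d2 <= 0)

flat_triangle = [(2.5, 2.5), (13.5, 4.0), (6.0, 13.0)]   # on the raster grid
triangle_color = (200, 80, 80)
background = (0, 0, 0)

raster = [[background] * GRID_W for _ in range(GRID_H)]
for row in range(GRID_H):
    for col in range(GRID_W):
        center = (col + 0.5, row + 0.5)          # one sample at the fragment center
        if covers(flat_triangle, center):
            raster[row][col] = triangle_color    # entire fragment takes the color
```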
[0056] FIG. 2F illustrates the final result of both the rasterization process and the rendering pipeline as a whole. In the disclosed example, fragments 207 are converted into pixel data which is written to framebuffer 208. In some examples, this conversion is done by processor 103 simply writing the optical characteristics stored at fragment 207 to a corresponding location in framebuffer 208. Framebuffer 208 is in turn driven to display device 105, and display device 105 in turn presents an assortment of elements to a user, wherein the elements have real optical characteristics and an orientation approximating the virtual optical characteristics and orientation represented by the framebuffer 208. The real optical characteristics and orientation of the elements thus approximate the virtual environment as viewed from the virtual camera perspective.
[0057] FIG. 3 illustrates an example graphical application 300 generating a real-time user experience. In the disclosed example, the graphical application 300 employs an embodiment of real-time rendering pipeline 131.
[0058] At step 301, the graphical application 300 is started. In some examples, this occurs where a user provides input indicating that graphical application 300 should be run. In some examples, the executable instructions and data associated with the graphical manipulations within real-time rendering pipeline 131 may be loaded into a faster portion (e.g., RAM, cache) of machine readable media 101. This avoids later delays during the
execution of graphical application 300 that may otherwise be needed to read the associated data and/or instructions from a slower portion (e.g., ROM, flash) of machine readable media 101.
[0059] At step 302, the scene data is updated. In some examples, this occurs where processor 103 copies scene data into a faster portion of machine readable media 101 (e.g., processor cache, RAM) in preparation for starting an iteration of real-time rendering pipeline 131. In some examples, this step is only necessary when the scene data has changed or when the graphical application is first initialized.
[0060] In the disclosed example, real-time rendering pipeline 131 is a subset of instructions within graphical application 300. Graphical manipulations are described herein with reference to definitions and illustrations disclosed with reference to FIGS. 2A-E above, however other examples of real-time rendering pipeline 131 may apply portions of the disclosed graphical manipulation steps in different ways, and/or may employ the disclosed graphical manipulation steps in any order, and/or may employ one or more of the disclosed graphical manipulation steps in parallel across multiple processors, and/or may employ one or more graphical manipulation steps multiple times, and/or may employ the disclosed graphical manipulation steps in a looped fashion, and/or may employ graphical manipulation steps not disclosed herein (e.g. tessellation, depth testing, stencil testing). In addition, other types of rendering pipelines (e.g. offline rendering pipelines, interactive-time rendering pipelines, jittered rendering pipelines) may be employed in place of real-time rendering pipeline 131 to provide a different user experience.
[0061] At step 303, the first step of real-time rendering pipeline 131, one or more vertex shaders are applied to the copy of scene data. In some examples, this occurs where the one or more vertex shaders convert vertices within the copy of scene data into polygons placed within a virtual environment, wherein the polygons have associated virtual optical characteristics. Vertex shaders are described in more detail above with reference to FIGS. 2A-C.
[0062] At step 304, polygons within the scene data are rasterized. In some examples, this occurs where the polygons are flattened and imposed over a raster grid. This step in the rasterization process is described in more detail above with reference to FIG. 2D.
[0063] At step 305, one or more fragment shaders are applied to the flattened polygons and raster grid. In some examples, this occurs where the one or more fragment shaders populate fragments within the raster grid with the virtual optical characteristics of flattened polygons imposed over the fragments. Fragment shaders are described in more detail above with reference to FIG. 2E.
[0064] At step 306, the fragments are written to the framebuffer. In some examples, this occurs where the virtual optical characteristics populating the fragments are converted into pixel data and then written to the framebuffer. This step in the rasterization process is described in more detail above with reference to FIG. 2F.
[0065] Step 306 completes the real-time rendering pipeline 131 subset of instructions in the illustrated example of graphical application 300, however graphical application 300 thus far may have only displayed a single image to the user. In some examples, other graphical manipulations may be applied directly to the fragments and/or framebuffer before the framebuffer is output to a display device. In some examples, when pixel data for fragments are written to the framebuffer, the pixel data may be accumulated with previous pixel data that may have already been present in the framebuffer as a result of a previous iteration of a rendering pipeline. This accumulation operation results in a blurring of pixel data across multiple iterations of a rendering pipeline. In some examples, the framebuffer may be output to a display device before an iteration of the real-time rendering pipeline 131 has completed (e.g. the processor 103 may output a partially updated framebuffer 208 where a determined maximum amount of time has elapsed since the previous output of the framebuffer 208 to display device 105, wherein the determined maximum amount of time correlates with a determined minimum framerate). In some examples, the framebuffer may be left as it is to await later iterations of a rendering pipeline before being output to a display device.
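The accumulation of new pixel data with pixel data already present in the framebuffer may be sketched as a weighted blend, as in the following illustrative Python snippet. The blend weight is an assumption; the disclosure does not prescribe a particular accumulation formula.

```python
# Minimal sketch of accumulating new pixel data with data already in the
# framebuffer, so results blur across pipeline iterations.
BLEND_WEIGHT = 0.25   # assumed contribution of the newest iteration

def accumulate(framebuffer, new_pixels, weight=BLEND_WEIGHT):
    # framebuffer and new_pixels are flat lists of (r, g, b) tuples.
    return [
        tuple(int((1 - weight) * old_c + weight * new_c)
              for old_c, new_c in zip(old, new))
        for old, new in zip(framebuffer, new_pixels)
    ]

framebuffer = [(0, 0, 0)] * 4
for _ in range(8):                         # several pipeline iterations
    new_pixels = [(255, 255, 255)] * 4     # this iteration's output
    framebuffer = accumulate(framebuffer, new_pixels)
print(framebuffer[0])                      # converges toward the new color
```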
[0066] At step 307, graphical application 300 checks for input from the user. Input from a user may be in the form of machine-readable signals generated by a peripheral device (e.g., mouse, keyboard, phone touchscreen, phone button).
[0067] Where user input is not detected, graphical application 300 returns to step 302 to begin another iteration of real-time rendering pipeline 131. In some examples, where all or a portion of the copy of scene data created at this iteration of step 302 does not differ from all or a portion of a previous copy of scene data created at a previous iteration of step 302 (e.g. one or more of the objects, lighting sources, camera perspective, etc. within the scene data has not changed), graphical application 300 may skip the scene data update at step 302 and proceed through another iteration of real-time rendering pipeline 131, rendering the existing scene data. In some examples, for instance where the real-time rendering pipeline 131 does not employ multiple buffering ("multiple-buffering" referring to configurations where additional framebuffers store copies of previously completed framebuffers), or where real-time rendering pipeline 131 comprises functionality to copy the latest framebuffer data across all other frame buffers, graphical application 300 may avoid additional iterations of the real-time rendering pipeline 131 altogether where the scene data has not changed since the previous iteration.
[0068] In some examples, graphical application 300 may be simultaneously running another application, and/or subset of instructions within graphical application 300, which manipulates the scene data. These changes to the scene data may be reflected in the copy of scene data created at step 302, even when the user has not provided input. For example, a physics engine may be running simultaneously, wherein the scene data is changed to simulate passive physical forces on the virtual objects in the virtual environment (e.g., a virtual cube object moves in the negative y direction to simulate a gravitational force being applied to the virtual cube object).
[0069] Where a user input is detected at step 307, graphical application 300 continues to step 308.
[0070] At step 308, the user input is checked to see if the user indicated a desire to stop graphical application 300. Where the user did input a command to terminate operation of graphical application 300, graphical application 300 continues to step 309. At step 309, graphical application 300 is terminated and processor 103 ceases executing instructions associated with graphical application 300. Where the user did not input a command to terminate operation of graphical application 300, graphical application 300 continues to step 310.
[0071] At step 310, changes to the scene data are calculated. In some examples, step 310 comprises converting user inputs into physical forces which are applied in the virtual environment. For instance, a user may input that they wish to shoot a virtual bullet at a virtual cube object. In that case, graphical application 300 may create a moving virtual bullet object in the scene data, and then run the scene data through a physics engine which simulates motion of the virtual bullet object and its collision with the virtual cube object based on virtual physical characteristics (e.g., mass, velocity, volume) of the virtual bullet object and of the virtual cube object. In the disclosed example, these changes are applied to scene data directly because the physics engine manipulates scene data, rather than manipulating a copy of scene data. Real-time rendering pipeline 131 disclosed above manipulates a copy of scene data in order to avoid affecting other processes, such as a physics engine, that may take inputs from the scene data.
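As a purely illustrative sketch of step 310, the Python snippet below spawns a virtual bullet object in the scene data in response to a user input and integrates its motion with a simple physics step applied directly to the scene data (the rendering pipeline, by contrast, operates on copies). The integration scheme and numeric values are assumptions.

```python
# Minimal sketch: a user input creates a moving virtual bullet object, and
# a simple physics step updates positions directly in the scene data.
scene_data = {"objects": {"cube": {"position": [0.0, 0.0, 10.0]}}}

def handle_shoot_input(scene):
    scene["objects"]["bullet"] = {
        "position": [0.0, 0.0, 0.0],
        "velocity": [0.0, 0.0, 50.0],   # toward the cube along +z
        "mass": 0.01,
    }

def physics_step(scene, dt):
    gravity = -9.8
    for obj in scene["objects"].values():
        if "velocity" in obj:
            obj["velocity"][1] += gravity * dt            # gravity along -y
            for axis in range(3):
                obj["position"][axis] += obj["velocity"][axis] * dt

handle_shoot_input(scene_data)
for _ in range(30):                  # simulate half a second at 60 updates/s
    physics_step(scene_data, 1.0 / 60.0)
print(scene_data["objects"]["bullet"]["position"])
```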
[0072] Graphical application 300 then returns to step 302, wherein it creates a new copy of the scene data in preparation for the next iteration of real-time rendering pipeline 131. As discussed above, in some examples the scene data is copied into a faster portion of machine readable media 101 (e.g. RAM, cache). Graphical application 300 then proceeds back through real-time rendering pipeline 131 to render the new copy of scene data into images that the user can perceive via display device 105. The disclosed loop from steps 302-308, to step 310, and back to step 302, or alternatively the loop from steps 302-307 and back to step 302, operates to create an interactive real-time user experience,
wherein the user is able to perceive a virtual environment as it changes in response to inputs provided by the user.
[0073] FIG. 4 illustrates an example graphical application 400 employing an in-game snapshot functionality. In the disclosed example, the graphical application 400 is employing an embodiment of a real-time rendering pipeline 131 simultaneously with an interactive-time rendering pipeline 133.
[0074] At step 401, the graphical application 400 is started. In some examples, this occurs where a user provides input indicating that graphical application 400 should be run. In some examples, the executable instructions and data associated with the graphical manipulations within real-time rendering pipeline 131 and interactive-time rendering pipeline 133 may be loaded into a faster portion (e.g., RAM, cache) of machine readable media 101. The transfer of the instruction sets underlying graphical manipulations for both pipelines to a faster portion of machine readable media 101 allows either pipeline's associated data and/or instructions to be readily passed to processor 103 when switching between rendering pipelines at a subsequent iteration. This avoids later delays during the execution of graphical application 400 that may otherwise be needed to read the associated data and/or instructions from a slower portion (e.g., ROM, flash) of machine readable media 101.
[0075] At step 402, the scene data is updated. In some examples, this occurs where processor 103 copies scene data into a faster portion of machine readable media 101 (e.g., processor cache, RAM) in preparation for starting an iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133. In some examples, this step is only necessary when the scene data has changed or when the graphical application is first initialized.
[0076] At step 403, the snapshot flag is checked. In some examples, the snapshot flag is an indication stored in memory regarding whether the user has input a command to create an in-game snapshot. An in-game snapshot may refer to a higher quality rendering of the virtual environment, e.g. a rendering where virtual objects are displayed with higher
resolution, more realistically simulated lighting, and/or smoother surfaces than a previous rendering.
[0077] In some examples this higher quality rendering can only be accomplished with more elaborate graphical manipulations that take more processing time, resulting in iterations of the associated rendering pipeline taking longer to complete than iterations of a rendering pipeline outputting a lower quality rendering. When a rendering pipeline iteration takes longer to complete, the resulting outputs to the framebuffer may not occur fast enough to generate a smooth framerate. As discussed above, a smooth framerate is sometimes described in the industry as thirty frames per second, however, what is considered a smooth framerate can vary widely between industry and intended viewing user.
[0078] In some examples, where step 403 is being executed for the first time since starting graphical application 400, the snapshot flag is not set. However, later steps in graphical application 400 may enable a user to set the snapshot flag such that subsequent iterations of step 403 may be executed where the snapshot flag is set. Where the snapshot flag is not set, the processor continues through an example of real-time rendering pipeline 131.
[0079] In the disclosed example, real-time rendering pipeline 131 is a subset of instructions within graphical application 400. Graphical manipulations are described herein with reference to definitions and illustrations disclosed with reference to FIGS. 2A-F above, however other examples of real-time rendering pipeline 131 may apply portions of the disclosed graphical manipulation steps in different ways, and/or may employ the disclosed graphical manipulation steps in any order, and/or may employ one or more of the disclosed graphical manipulation steps in parallel across multiple processors, and/or may employ one or more graphical manipulation steps multiple times, and/or may employ the disclosed graphical manipulation steps in a looped fashion, and/or may employ graphical manipulation steps not disclosed herein (e.g. tessellation, depth testing, stencil testing). In addition, other types of rendering pipelines (e.g. offline rendering pipelines, interactive-time
rendering pipelines, jittered rendering pipelines) may be employed in place of real-time rendering pipeline 131 to provide a different user experience.
[0080] At step 404, the first step of real-time rendering pipeline 131, one or more vertex shaders are applied to the copy of scene data. In some examples, this occurs where the one or more vertex shaders convert vertices within the copy of scene data into polygons placed within a virtual environment, wherein the polygons have associated virtual optical characteristics. Vertex shaders are described in more detail above with reference to FIGS. 2A-C.
[0081] At step 405, polygons within the scene data are rasterized. In some examples, this occurs where the polygons are flattened and imposed over a raster grid. This step in the rasterization process is described in more detail above with reference to FIG. 2D.
[0082] At step 406, one or more fragment shaders are applied to the flattened polygons and raster grid. In some examples, this occurs where the one or more fragment shaders populate fragments within the raster grid with the virtual optical characteristics of flattened polygons imposed over the fragments. Fragment shaders are described in more detail above with reference to FIG. 2E.
[0083] At step 407, the fragments are written to the framebuffer. In some examples, this occurs where the virtual optical characteristics populating the fragments are converted into pixel data and then written to the framebuffer. This step in the rasterization process is described in more detail above with reference to FIG. 2F.
[0084] Step 407 completes the real-time rendering pipeline 131 subset of instructions in the illustrated example of graphical application 400, however graphical application 400 thus far may have only displayed a single image to the user. In some examples, other graphical manipulations may be applied directly to the fragments and/or framebuffer before the framebuffer is output to a display device. In some examples, when pixel data for fragments are written to the framebuffer, the pixel data may be accumulated with previous pixel data that may have already been present in the framebuffer as a result of a previous iteration of a rendering pipeline. This accumulation operation results in a blurring
of pixel data across multiple iterations of a rendering pipeline. In some examples, the framebuffer may be output to a display device before an iteration of the real-time rendering pipeline 131 has completed (e.g. the processor 103 may output a partially updated framebuffer 208 where a determined maximum amount of time has elapsed since the previous output of the framebuffer 208 to display device 105, wherein the determined maximum amount of time correlates with a determined minimum framerate). In some examples, the framebuffer may be left as it is to await later iterations of the rendering pipeline before being output to a display device.
[0085] At step 408, graphical application 400 checks for input from a user. Input from a user may be in the form of machine-readable signals generated by a peripheral device (e.g., mouse, keyboard, phone touchscreen, phone button).
[0086] Where user input is not detected, graphical application 400 returns to step 402 to begin another iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133. In some examples, where all or a portion of the copy of scene data created at this iteration of step 402 does not differ from all or a portion of a previous copy of scene data created at a previous iteration of step 402 (e.g. one or more of the objects, lighting sources, camera perspective, etc. within the scene data has not changed), graphical application 400 may skip the scene data update at step 402 and proceed through another iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133, rendering the existing scene data. In some examples, for instance where neither real-time rendering pipeline 131 nor interactive-time rendering pipeline 133 employs multiple buffering ("multiple-buffering" referring to configurations where additional framebuffers store copies of previously completed framebuffers), or where both real-time rendering pipeline 131 and interactive-time rendering pipeline 133 comprise functionality to copy the latest framebuffer data across all other frame buffers, graphical application 400 may avoid additional iterations of either the real-time rendering pipeline 131 or the interactive-time rendering pipeline 133 altogether where the scene data has not changed since the previous iteration.
[0087] In some examples, graphical application 400 may be simultaneously running another application, and/or subset of instructions within graphical application 400, which cause changes to the scene data. These changes to the scene data may be reflected in the copy of scene data created at step 402, even when the user has not provided input. For example, a physics engine may be running simultaneously, wherein the scene data is changed to simulate passive physical forces on the virtual objects in the virtual environment (e.g., a virtual cube object moves in the negative y direction to simulate a gravitational force).
[0088] Where a user input is detected at step 408, graphical application 400 continues to step 409.
[0089] At step 409, the user input is checked to see if the user indicated a desire to stop graphical application 400. Where the user did input a command to terminate operation of graphical application 400, graphical application 400 continues to step 410. At step 410, graphical application 400 is terminated and processor 103 ceases executing instructions associated with graphical application 400. Where the user did not input a command to terminate operation of graphical application 400, graphical application 400 continues to step 411.
[0090] At step 411, the user input is checked to see if it included a command to set a snapshot flag. Where the user input included a command to set a snapshot flag, graphical application 400 proceeds to step 412. Where the user input did not include a command to set a snapshot, graphical application 400 proceeds directly to step 413.
[0091] At step 412, the snapshot flag is set to indicate that the user commanded a snapshot be taken. In some examples, if the snapshot flag was not set at a previous iteration of step 403, then during step 412 the framebuffer is replaced with uniform pixel data (e.g. all fragments in the framebuffer indicate the same color for output to the display device). This flashing of the framebuffer may create a "snapping of a camera" effect as the user will experience a brief flash of one color while the displayed pixels repopulate with virtual optical characteristics output by subsequent iterations of interactive-time rendering
pipeline 133 (described in further detail below at steps 414-418). Graphical application 400 then proceeds to step 413.
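Steps 411 and 412 may be sketched as follows in Python; the flash color, framebuffer size, and input encoding are assumptions made for illustration.

```python
# Minimal sketch of steps 411-412: a snapshot command sets the snapshot
# flag and replaces the framebuffer with uniform pixel data (a "flash").
FLASH_COLOR = (255, 255, 255)
FRAMEBUFFER_SIZE = 320 * 240

state = {"snapshot_flag": False,
         "framebuffer": [(0, 0, 0)] * FRAMEBUFFER_SIZE}

def handle_input(user_input, state):
    if user_input == "snapshot":
        if not state["snapshot_flag"]:
            # Flash: every pixel data entry gets the same uniform color.
            state["framebuffer"] = [FLASH_COLOR] * FRAMEBUFFER_SIZE
        state["snapshot_flag"] = True

handle_input("snapshot", state)
assert state["snapshot_flag"] and state["framebuffer"][0] == FLASH_COLOR
```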
[0092] At step 413, changes to the scene data are calculated. In some examples, step 413 comprises converting user inputs into physical forces which are applied in the virtual environment. For instance, a user may input that they wish to shoot a virtual bullet at a virtual cube object. In that case, graphical application 400 may create a moving virtual bullet object in the scene data, and then run the scene data through a physics engine which simulates motion of the virtual bullet object and its collision with the virtual cube object based on virtual physical characteristics (e.g., mass, velocity, volume) of the virtual bullet object and of the virtual cube object. In the disclosed example, these changes are applied to scene data directly because the physics engine manipulates scene data, rather than manipulating a copy of scene data. Real-time rendering pipeline 131 and interactive-time rendering pipeline 133 disclosed above manipulate a copy of scene data in order to avoid affecting other processes, such as a physics engine, that may take inputs from the scene data.
[0093] Graphical application 400 then returns to step 402, wherein it creates a new copy of the scene data in preparation for the next iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133. After step 402, processor 103 again checks the snapshot flag at step 403. In some examples, such as where the snapshot flag was set at step 412, the graphical application 400 continues through an example of interactive-time rendering pipeline 133, as opposed to the real-time rendering pipeline 131 described above.
[0094] A distinction between real-time rendering pipeline 131 and interactive-time rendering pipeline 133 in the example of graphical application 400 is that an iteration of real-time rendering pipeline 131 completes in less time than an iteration of interactive-time rendering pipeline 133. As described herein, where a snapshot has been indicated, interactive-time rendering pipeline 133 is run instead of real-time rendering pipeline 131, resulting in graphical application 400 outputting higher quality images at a lower framerate until the snapshot flag is unset.
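The branch taken at step 403 may be sketched as a simple dispatch on the snapshot flag, as in the Python snippet below; the pipeline bodies are placeholders rather than the disclosed pipelines.

```python
# Minimal sketch of the step-403 branch: the snapshot flag selects which
# pipeline renders the next frame.
def real_time_pipeline(scene_copy):
    return "fast, lower-quality frame"

def interactive_time_pipeline(scene_copy):
    return "slower, higher-quality frame (e.g., with path tracing)"

def render_next_frame(scene_copy, snapshot_flag):
    if snapshot_flag:
        return interactive_time_pipeline(scene_copy)
    return real_time_pipeline(scene_copy)

print(render_next_frame({}, snapshot_flag=True))
```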
[0095] A person of skill in the art will appreciate that what is considered a real-time rendering pipeline 131 in one example of graphical application 400 may be considered an interactive-time rendering pipeline 133 in another example of graphical application 400, and vice versa.
[0096] Where the snapshot flag is set, the graphical application 400 continues from step 403 into interactive-time rendering pipeline 133. In the disclosed example, interactive-time rendering pipeline 133 is a subset of instructions within graphical application 400. Graphical manipulations are described herein with reference to definitions and illustrations disclosed with reference to FIGS. 2A-F above, however other examples of interactive-time rendering pipeline 133 may apply portions of the disclosed graphical manipulation steps in different ways, and/or may employ the disclosed graphical manipulation steps in any order, and/or may employ one or more of the disclosed graphical manipulation steps in parallel across multiple processors, and/or may employ one or more graphical manipulation steps multiple times, and/or may employ the disclosed graphical manipulation steps in a looped fashion, and/or may employ graphical manipulation steps not disclosed herein (e.g. tessellation, depth testing, stencil testing). In addition, other types of rendering pipelines (e.g. offline rendering pipelines, real-time rendering pipelines, jittered rendering pipelines) may be employed in place of interactive-time rendering pipeline 133 to provide a different user experience.
[0097] At step 414, the first step of interactive-time rendering pipeline 133, one or more vertex shaders are applied to the copy of scene data. In some examples, this occurs where the one or more vertex shaders convert vertices within the copy of scene data into polygons placed within a virtual environment, wherein the polygons have associated virtual optical characteristics. Vertex shaders are described in more detail above with reference to FIGS. 2A-C. In the disclosed example of graphical application 400, step 414 is identical to step 404, except that step 414 is taken within interactive-time rendering pipeline 133 while
step 404 is taken within real-time rendering pipeline 131. In other examples, steps 414 and 404 may differ.
[0098] At step 415, polygons within the scene data are rasterized. In some examples, this occurs where the polygons are flattened and imposed over a raster grid. This step in the rasterization process is described in more detail above with reference to FIG. 2D. In the disclosed example of graphical application 400, step 415 is identical to step 405, except that step 415 is taken within interactive-time rendering pipeline 133 while step 405 is taken within real-time rendering pipeline 131. In other examples, steps 415 and 405 may differ.
[0099] At step 416, path tracing graphical manipulations are employed to simulate lighting on the flattened polygons. Path tracing is described herein (but not illustrated) to serve as an example of a complex graphical manipulation technique that may take longer for processor 103 to complete than other graphical manipulation techniques. As such, although real-time rendering pipeline 131 and interactive-time rendering pipeline 133 are otherwise identical in the disclosed example of graphical application 400, the path tracing graphical manipulations carried out in step 416 result in interactive-time rendering pipeline 133 taking more time to complete than real-time rendering pipeline 131.
[00100] In some examples, path tracing comprises simulating beams of light by following one or more lines from the virtual camera perspective through the flattened polygons which were projected onto the raster grid, and then drawing one or more additional lines from the surfaces of the non-flattened versions of the polygons. Some of the additional lines intersect a virtual light source in the virtual environment. Where an additional line does intersect a virtual light source, virtual optical characteristics of the virtual light source and of the non-flattened polygon are used by processor 103 to calculate new virtual optical characteristics of the flattened polygon, wherein these new virtual optical characteristics may approximate the behavior of real light better than the previous virtual optical characteristics of the flattened polygon.
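A drastically simplified, single-bounce sketch of the path tracing described for step 416 is shown below: random directions are drawn from a point on a (non-flattened) polygon, and the fraction that reaches a spherical virtual light source scales the polygon's new optical characteristics. The geometry, light model, and sample count are illustrative assumptions and omit many elements of practical path tracers (e.g., multiple bounces, importance sampling).

```python
# Drastically simplified single-bounce path-tracing sketch: directions that
# reach the virtual light source brighten the surface point's color.
import math
import random

LIGHT_CENTER = (0.0, 3.0, 5.0)      # assumed spherical virtual light source
LIGHT_RADIUS = 1.0
LIGHT_INTENSITY = (1.0, 1.0, 0.9)
SAMPLES = 256

def random_direction():
    # Uniform random direction on the unit sphere.
    z = random.uniform(-1.0, 1.0)
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(theta), r * math.sin(theta), z)

def ray_hits_light(origin, direction):
    # Ray-sphere intersection test against the virtual light source.
    oc = tuple(o - c for o, c in zip(origin, LIGHT_CENTER))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - LIGHT_RADIUS ** 2
    disc = b * b - 4.0 * c
    return disc >= 0 and (-b - math.sqrt(disc)) / 2.0 > 0

def shade_surface_point(point, base_color):
    hits = sum(ray_hits_light(point, random_direction()) for _ in range(SAMPLES))
    lighting = hits / SAMPLES
    return tuple(base * light * lighting
                 for base, light in zip(base_color, LIGHT_INTENSITY))

print(shade_surface_point((0.0, 0.0, 5.0), (0.8, 0.2, 0.2)))
```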
[00101] At step 417, one or more fragment shaders are applied to the flattened polygons and raster grid. In some examples, this occurs where the one or more fragment
shaders populate fragments within the raster grid with the virtual optical characteristics of flattened polygons imposed over the fragments. Fragment shaders are described in more detail above with reference to FIG. 2E. In the disclosed example of graphical application 400, step 417 is identical to step 406, except that step 417 is taken within interactive-time rendering pipeline 133 while step 406 is taken within real-time rendering pipeline 131. In other examples, steps 417 and 406 may differ.
[00102] At step 418, the fragments are written to the framebuffer. In some examples, this occurs where the virtual optical characteristics populating the fragments are converted into pixel data and then written to the framebuffer. This step in the rasterization process is described in more detail above with reference to FIG. 2F. In the disclosed example of graphical application 400, step 418 is identical to step 407, except that step 418 is taken within interactive-time rendering pipeline 133 while step 407 is taken within real-time rendering pipeline 131. In other examples, steps 418 and 407 may differ.
[00103] Step 418 completes the interactive-time rendering pipeline 133 subset of instructions in the illustrated example of graphical application 400, however graphical application 400 thus far may have only displayed a single image to the user. In some examples, other graphical manipulations may be applied directly to the fragments and/or framebuffer before the framebuffer is output to a display device. In some examples, when pixel data for fragments are written to the framebuffer, the pixel data may be accumulated with previous pixel data that may have already been present in the framebuffer as a result of a previous iteration of the rendering pipeline. This accumulation operation results in a blurring of pixel data across multiple iterations of a rendering pipeline. In some examples, the framebuffer may be output to a display device before an iteration of the interactive-time rendering pipeline 133 has completed (e.g. the processor 103 may output a partially updated framebuffer 208 where a determined maximum amount of time has elapsed since the previous output of the framebuffer to display device 105, wherein the determined maximum amount of time correlates with a determined minimum framerate). In some
examples, the framebuffer may be left as it is to await later iterations of the rendering pipeline before being output to a display device.
[00104] In some examples, graphical application 400 then proceeds back through steps 408-413, back to 402, and then back through another iteration of either real-time rendering pipeline 131 or interactive-time rendering pipeline 133 depending on whether the snapshot flag was previously set. In some examples, the snapshot flag may automatically be turned back off by a separate set of instructions which turn the snapshot flag off after a predetermined period of time.
[00105] The end result of the disclosed example of graphical application 400 is the creation of an interactive real-time user experience, wherein the user may indicate that a snapshot should be taken. This option to take a snapshot adds flexibility to the user experience, as the user may selectively decide when to trade rendering quality for framerate.
[00106] FIG. 5 illustrates an example graphical application 500 generating a real-time user experience. In the disclosed example, the graphical application 500 employs an embodiment of jittered rendering pipeline 134.
[00107] At step 501, the graphical application 500 is started. In some examples, this occurs where a user provides input indicating that graphical application 500 should be run. In some examples, the executable instructions and data associated with the graphical manipulations within jittered rendering pipeline 134 may be loaded into a faster portion (e.g., RAM, cache) of machine readable media 101. This avoids delays that might otherwise be incurred later in the execution of graphical application 500 when reading the associated data and/or instructions from a slower portion (e.g., ROM, flash) of machine readable media 101.
[00108] At step 502, the scene data is read. In some examples, this occurs where processor 103 copies scene data into a faster portion of machine readable media 101 (e.g., processor cache, RAM) in preparation for starting an iteration of jittered rendering pipeline
134. In some examples, this step is only necessary when the scene data has changed or when the graphical application is first initialized.
[00109] In the disclosed example, jittered rendering pipeline 134 is a subset of instructions within graphical application 500. Graphical manipulations are described herein with reference to definitions and illustrations disclosed with reference to FIGS. 2A-E above, however other examples of jittered rendering pipeline 134 may apply portions of the disclosed graphical manipulation steps in different ways, and/or may employ the disclosed graphical manipulation steps in any order, and/or may employ one or more of the disclosed graphical manipulation steps in parallel across multiple processors, and/or may employ one or more graphical manipulation steps multiple times, and/or may employ the disclosed graphical manipulation steps in a looped fashion, and/or may employ graphical manipulation steps not disclosed herein (e.g. tessellation, depth testing, stencil testing). In addition, other types of rendering pipelines (e.g. offline rendering pipelines, interactive-time rendering pipelines, real-time rendering pipelines) may be employed in place of, in series with, or alongside jittered rendering pipeline 134 to provide a different user experience.
[00110] In addition, jittered rendering pipeline 134 may achieve framerates that justify categorization of jittered rendering pipeline 134 additionally as a real-time rendering pipeline 131, an offline rendering pipeline 132, or an interactive-time rendering pipeline 133. Jittered rendering pipeline 134 is not distinguished from other rendering pipelines by its framerate, but is instead distinguished by its employment of graphical manipulation steps which apply offsets to virtual objects within scene data (further described below).
[00111] At step 503, the first step of jittered rendering pipeline 134, one or more vertex shaders are applied to the copy of scene data. In some examples, this occurs where the one or more vertex shaders convert vertices within the copy of scene data into polygons placed within a virtual environment, wherein the polygons have associated virtual optical characteristics. Vertex shaders are described in more detail above with reference to FIGS.
2A-C.
[00112] At step 504, the vertices transformed at step 503 within the scene data are jittered. In some examples, this occurs where processor 103 applies an offset to the spatial positions indicated for one or more of the vertices within the copy of scene data. In some examples, the offset is applied only to those vertices within some volume of the virtual environment wherein vertices may potentially be within the field of view of a virtual camera perspective. In some examples, the offset is randomized, wherein the vertex is moved to a random point within a determined volume around the vertex's original position. In some examples, the offset is applied only in directions perpendicular to the direction that the virtual camera perspective is facing. In some examples, the offsets are limited to a determined area in a 2D plane normal to a line formed between a virtual camera perspective point and the vertex. In some examples, the 2D projective position of the vertex is offset by a randomized amount in the range of -1 to 1 in the X and Y directions.
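A minimal sketch of one jittering scheme consistent with step 504, confining a randomized offset to the plane perpendicular to the direction the virtual camera perspective is facing; the 3-tuple vector layout and the max_offset parameter are assumptions made for this illustration.

```python
import random

def cross(a, b):
    # Standard 3D cross product.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def jitter_vertex(vertex, camera_forward, max_offset=0.01):
    """Offset a vertex by a random amount confined to the plane normal to the
    camera's facing direction, so the jitter never moves it toward or away
    from the virtual camera perspective."""
    forward = normalize(camera_forward)
    up_hint = (0.0, 1.0, 0.0) if abs(forward[1]) < 0.99 else (1.0, 0.0, 0.0)
    right = normalize(cross(forward, up_hint))
    true_up = cross(right, forward)
    du = random.uniform(-max_offset, max_offset)
    dv = random.uniform(-max_offset, max_offset)
    return tuple(v + du * r + dv * u
                 for v, r, u in zip(vertex, right, true_up))
```

The same idea carries over to the 2D variant mentioned above, where the projected position of the vertex is offset by a randomized amount in the range of -1 to 1 in X and Y.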
[00113] In some examples, the offset is applied to the scene data itself, rather than a copy. In other examples, the offset is applied to a copy of scene data that was created at step 502, in advance of the current iteration of jittered rendering pipeline 134. In examples where an offset had been applied to one or more vertices within scene data directly in a previous iteration of jittered rendering pipeline 134, processor 103 removes the previous offset from scene data instead of adding a second offset. This causes the offset vertices to return to their original spatial positions in scene data before continuing to run the remaining steps within the current iteration of jittered rendering pipeline 134.
[00114] In examples where a first offset had been applied to one or more vertices within a first copy of scene data created during a previous iteration of jittered rendering pipeline 134, the processor 103 does not apply a second offset to the same vertex within a second copy of scene data during the current iteration of jittered rendering pipeline 134. Not applying the second offset in the second iteration of jittered rendering pipeline 134 causes the vertex to appear to return to its non-offset position as designated in the scene data, which has been copied from but not altered during either the previous or current iteration of jittered rendering pipeline 134.
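One way to realize the alternation described in the two preceding paragraphs is a simple parity scheme, sketched below under the assumption that the jitter routine (for example, a function like jitter_vertex above) is supplied by the caller; the scene data itself is never modified.

```python
def vertices_for_iteration(scene_vertices, iteration, jitter_fn):
    """Even iterations render the positions exactly as stored in scene data;
    odd iterations render a freshly offset copy, so the vertices appear to
    return to their non-offset positions every other pass."""
    if iteration % 2 == 0:
        return [tuple(v) for v in scene_vertices]
    return [jitter_fn(v) for v in scene_vertices]
```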
[00115] At step 505, polygons within the scene data are rasterized. In some examples, this occurs where the polygons are flattened and imposed over a raster grid. This step in the rasterization process is described in more detail above with reference to FIG. 2D. In some examples of jittered rendering pipeline 134, the jittering offset is instead applied after step 505, to the positions of one or more vertices of a flattened polygon, with the offset constrained to a determined area lying on the raster grid.
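Where the offset is applied after rasterization, the jitter reduces to a sub-pixel nudge of the flattened vertex positions on the raster grid; the half-pixel bound below is an illustrative assumption.

```python
import random

def jitter_projected_vertex(x, y, max_offset_px=0.5):
    """Nudge a flattened polygon vertex by a sub-pixel amount directly on the
    raster grid, after projection rather than before it."""
    return (x + random.uniform(-max_offset_px, max_offset_px),
            y + random.uniform(-max_offset_px, max_offset_px))
```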
[00116] At step 506, one or more fragment shaders are applied to the flattened polygons and raster grid. In some examples, this occurs where the one or more fragment shaders populate fragments within the raster grid with the virtual optical characteristics of flattened polygons imposed over the fragments. Fragment shaders are described in more detail above with reference to FIG. 2E.
[00117] At step 507, the fragments are written to the framebuffer. In some examples, this occurs where the virtual optical characteristics populating the fragments are converted into pixel data and then written to the framebuffer. This step in the rasterization process is described in more detail above with reference to FIG. 2F.
[00118] Step 507 completes the jittered rendering pipeline 134 subset of instructions in the illustrated example of graphical application 500; however, graphical application 500 thus far may have only displayed a single image to the user. In some examples, other graphical manipulations may be applied directly to the fragments and/or framebuffer before the framebuffer is output to a display device. In some examples, when pixel data for fragments are written to the framebuffer, the pixel data may be accumulated with previous pixel data that may have already been present in the framebuffer as a result of a previous iteration of a rendering pipeline. This accumulation operation results in a blurring of pixel data across multiple iterations of a rendering pipeline. In some examples, the framebuffer may be output to a display device before an iteration of the jittered rendering pipeline 134 has completed (e.g., the processor 103 may output a partially updated framebuffer 208 where a determined maximum amount of time has elapsed since the previous output of the framebuffer 208 to display device 105, wherein the determined maximum amount of time
correlates with a determined minimum framerate). In some examples, the framebuffer may be left as it is to await later iterations of a rendering pipeline before being output to a display device.
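The minimum-framerate behavior described above can be sketched as a timing check around the presentation call; present_fn stands in for whatever routine drives the framebuffer to the display and is an assumption of this example.

```python
import time

def maybe_present(framebuffer, last_present_time, present_fn, min_fps=30.0):
    """Output the framebuffer, possibly only partially updated, whenever the
    determined maximum interval since the last output has elapsed."""
    max_interval = 1.0 / min_fps
    now = time.monotonic()
    if now - last_present_time >= max_interval:
        present_fn(framebuffer)
        return now
    return last_present_time
```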
[00119] One of skill in the art understands that rapid successive iterations of jittered rendering pipeline 134 according to the examples above may cause the one or more vertices offset during one or more iterations to appear to shake between the non-offset position of the one or more vertices as defined in scene data and one or more offset positions. This shaking effect, as discussed below regarding FIGS. 6-7, may be utilized to achieve visual effects across one or more rendered images.
[00120] At step 508, graphical application 500 checks for input from the user. Input from a user may be in the form of machine-readable signals generated by a peripheral device (e.g., mouse, keyboard, phone touchscreen, phone button).
[00121] Where user input is not detected, graphical application 500 returns to step 502 to begin another iteration of jittered rendering pipeline 134. In some examples, where all or a portion of the copy of scene data created at this iteration of step 502 does not differ from all or a portion of a previous copy of scene data created at a previous iteration of step 502 (e.g. one or more of the objects, lighting sources, camera perspective, etc. within the scene data has not changed), graphical application 500 may skip the scene data update at step 502 and proceed through another iteration of jittered rendering pipeline 134, rendering the existing scene data.
[00122] In some examples, graphical application 500 may be simultaneously running another application, and/or subset of instructions within graphical application 500, which manipulates the scene data. These changes to the scene data may be reflected in the copy of scene data created at step 502, even when the user has not provided input. For example, a physics engine may be running simultaneously, wherein the scene data is changed to simulate passive physical forces on the virtual objects in the virtual environment (e.g., a virtual cube object moves in the negative y direction to simulate a gravitational force).
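A minimal sketch of such a passive physics step, assuming scene objects are plain dictionaries with position and velocity entries (a representation chosen only for this example).

```python
def apply_gravity(scene_objects, dt, g=9.81):
    """Translate each dynamic object in the negative y direction to simulate
    a gravitational force, independently of any user input."""
    for obj in scene_objects:
        if obj.get("dynamic"):
            obj["velocity"][1] -= g * dt
            obj["position"][1] += obj["velocity"][1] * dt

# Usage: a virtual cube object falling while the render loop runs.
cube = {"dynamic": True, "position": [0.0, 10.0, 0.0], "velocity": [0.0, 0.0, 0.0]}
apply_gravity([cube], dt=1.0 / 60.0)
```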
[00123] Where a user input is detected at step 508, graphical application 500 continues to step 509.
[00124] At step 509, the user input is checked to see if the user indicated a desire to stop graphical application 500. Where the user did input a command to terminate operation of graphical application 500, graphical application 500 continues to step 510. At step 510, graphical application 500 is terminated and processor 103 ceases executing instructions associated with graphical application 500. Where the user did not input a command to terminate operation of graphical application 500, graphical application 500 continues to step 511.
[00125] At step 511, changes to the scene data are calculated. In some examples, step 511 comprises converting user inputs into physical forces which are applied in the virtual environment. For instance, a user may input that they wish to shoot a virtual bullet at a virtual cube object. In that case, graphical application 500 may create a moving virtual bullet object in the scene data, and then run the scene data through a physics engine which simulates motion of the virtual bullet object and its collision with the virtual cube object based on virtual physical characteristics (e.g., mass, velocity, volume) of the virtual bullet object and of the virtual cube object. In the disclosed example, these changes are applied to scene data directly because the physics engine manipulates scene data, rather than manipulating a copy of scene data. In some examples, jittered rendering pipeline 134 disclosed above manipulates a copy of scene data in order to avoid affecting other processes, such as a physics engine, that may take inputs from the scene data.
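A sketch of how such a user input might be converted into a change to the scene data, using the same illustrative dictionary representation as the gravity example above; the field names are assumptions, not part of the disclosure.

```python
def spawn_bullet(scene_data, camera_position, camera_forward, speed=50.0):
    """Append a moving virtual bullet object to the scene data in response to
    a 'shoot' input; a physics engine then advances it and resolves its
    collision with other virtual objects."""
    scene_data["objects"].append({
        "type": "bullet",
        "dynamic": True,
        "mass": 0.01,
        "position": list(camera_position),
        "velocity": [c * speed for c in camera_forward],
    })
    return scene_data
```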
[00126] Graphical application 500 then returns to step 502, wherein it creates a new copy of the scene data in preparation for the next iteration of jittered rendering pipeline 134. As discussed above, in some examples the scene data is copied into a faster portion of machine readable media 101 (e.g., RAM, cache). Graphical application 500 then proceeds back through jittered rendering pipeline 134 to render the new copy of scene data into images that the user can perceive via display device 105. The disclosed loop from steps 502-509, to step 511, and back to step 502, or alternatively the loop from steps 502-508 and back to step 502, operates to jitter vertices within the scene data (or a copy of scene data) between subsequent iterations of a rendering pipeline. In some examples, as each iteration of jittered rendering pipeline 134 is executed, the vertices that appear in the virtual environment jitter about their designated spatial positions, causing, as discussed below regarding FIGS. 6A-B, edges of polygons to shift within one or more fragments and altering the optical characteristics that get output to the framebuffer.
[00127] In some examples, jittered rendering pipeline 134 may apply step 504 before executing different graphical manipulations. For instance, jittering caused by employing step 504 before a path tracing graphical manipulation (discussed above in relation to step 416 of FIG. 4) may be useful for achieving anti-aliasing and/or depth of field effects (discussed in detail below in relation to FIGS. 6-7). In such an example, the jittering of virtual objects affects the collision of one or more lines drawn during the path tracing step between iterations of the rendering pipeline. This jittering may blur the lighting effects simulated by path tracing graphical manipulations across multiple rendered images viewed in series, particularly where progressive rendering techniques are utilized.
[00128] FIGS. 6A-B illustrate how a jittered rendering pipeline may achieve an antialiasing effect. Referring to FIG. 6A, virtual cube object 601 is illustrated in its ideal form with straight edges. Raster grid 602 is between the virtual camera perspective (approximately the same perspective that the illustration is viewed from) and virtual cube object 601. Point 603, marking one vertex of virtual cube object 601, and the edges of virtual cube object 601 are illustrated as showing through raster grid 602, but this is only for explanatory purposes; as one of skill in the art will appreciate, and as discussed above, each fragment of raster grid 602 can only take on one set of optical characteristics because it must later map to a single pixel data entry in a framebuffer. Vertex 603 and the edges of virtual cube object 601 show where the various polygons line up with each fragment within raster grid 602. The effect of aliasing is most apparent in the illustration where the idealized edges of virtual cube object 601 may be contrasted with the jagged edges formed by the forced uniformity of optical characteristics across an individual fragment of raster grid 602.
[00129] FIG. 6B illustrates how virtual cube object 601 may appear altered in a second iteration of a jittered rendering pipeline. The position of virtual cube object 601's vertex has been offset from a first position at point 603, which it held during a previous iteration of the jittered rendering pipeline. Virtual cube object 601's vertex is now located at point 604 in the current iteration of the jittered rendering pipeline. As can be seen in the new optical characteristics (denoted by hashing) shown on raster grid 602 in FIG. 6B, the jittering of vertices between iterations of the jittered rendering pipeline has resulted in a softening of the jagged edges shown in raster grid 602 in FIG. 6A. This softening is usually found near the edges of polygons, where the jittering offset may be enough to change which polygon's optical characteristics are written to a given fragment of raster grid 602 in a subsequent iteration of a jittered rendering pipeline.
[00130] In some examples, when writing the second set of virtual optical characteristics (those associated with the newly intersecting polygon) to the framebuffer, the second set of virtual optical characteristics may be accumulated and/or averaged with the previous set of virtual optical characteristics already present in the framebuffer from the previous iteration of a jittered rendering pipeline. This progressive rendering technique allows the framebuffer values to be averaged between iterations of the jittered rendering pipeline such that framebuffer positions relating to fragments that are bisected by a polygon edge output a set of virtual optical characteristics that represents the average of the virtual optical characteristics on either side of the polygon edge. Without the application of the jittering offsets, the optical characteristics of one polygon or the other would persist in the framebuffer as long as the scene data remains static. When viewed on a large scale, this creates a smoothing visual effect on edges that were previously jagged due to aliasing. In other words, examples of jittered rendering pipeline 134 may be employed to anti-alias a graphical user experience.
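Expressed as code, the averaging amounts to maintaining a running mean of the samples produced by successive jittered iterations; unlike the fixed-weight blend sketched earlier, after N iterations each framebuffer entry holds the exact mean of its N samples. The function and parameter names are illustrative.

```python
def progressive_average(framebuffer, new_pixels, samples_so_far):
    """Fold a new jittered sample into the running mean held in the
    framebuffer; fragments bisected by a polygon edge converge toward a blend
    of the colors on either side of that edge."""
    n = samples_so_far
    for i, (old_px, new_px) in enumerate(zip(framebuffer, new_pixels)):
        framebuffer[i] = tuple((o * n + v) / (n + 1)
                               for o, v in zip(old_px, new_px))
    return framebuffer
```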
[00131] FIGS. 7A-B illustrate how an image rendered with a depth of field effect generated by a jittered rendering pipeline may compare with an image rendered without a depth of field effect. A depth of field effect may be achieved in jittered rendering pipelines
where instead of (or in addition to) jittering vertices to achieve anti-aliasing effects, offsets are applied to vertices according to their spatial relation to a focal point 701. In some examples, focal point 701 is a virtual object within the scene data. In some examples, as distance from focal point 701 to a vertex increases, the magnitude of the offset applied increases.
[00132] FIG. 7A illustrates a rendering produced by a graphical application not employing any means to create a depth of field effect. FIG. 7B illustrates a rendering produced by a graphical application employing a jittered rendering pipeline to achieve a depth of field effect. In both FIGS. 7A and 7B, hatted man scene element 702 is near focal point 701, so the rendered image output of hatted man scene element 702 is substantially unchanged between FIGS. 7A-B. However, onlooker scene element 703, as well as billboard scene element 704, become blurry in the rendering output by the graphical application employing a jittered rendering pipeline based depth of field effect. This is because successive iterations of a jittered rendering pipeline applying a depth of field effect (as discussed above) cause vertices to jitter in accordance with their distance from focal point 701, resulting in the appearance of a blur on the objects associated with the jittering vertices as those objects jump between multiple positions at each iteration in which the framebuffer is driven to the display.
[00133] Onlooker scene element 703 and billboard scene element 704 are both significantly further away from focal point 701 than hatted man scene element 702. Therefore, in the disclosed example, at each iteration of the jittered rendering pipeline, onlooker scene element 703's and billboard scene element 704's vertices are offset by larger distances than hatted man scene element 702's vertices are offset. In some examples of a jittered rendering pipeline applying a depth of field effect, scene elements within a predetermined distance of focal point 701, such as hatted man scene element 702, may not be offset at all. In some examples, progressive rendering techniques are used to change the depth of field effect of the jittered rendering pipeline. As can be seen, the end result of a depth of field effect is, in some examples, that the viewer's eye is drawn to the unblurred portions of the image near focal point 701.
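A minimal sketch of a distance-scaled offset consistent with FIGS. 7A-B: vertices within a sharp radius of the focal point are left untouched, and beyond it the jitter magnitude grows with distance. The sharp_radius and strength parameters are assumptions made for illustration.

```python
import random

def depth_of_field_offset(vertex, focal_point, sharp_radius=1.0, strength=0.02):
    """Jitter a vertex by an amount that increases with its distance from the
    focal point, leaving vertices near the focal point un-offset so nearby
    objects render crisply while distant ones blur."""
    dist = sum((v - f) ** 2 for v, f in zip(vertex, focal_point)) ** 0.5
    if dist <= sharp_radius:
        return tuple(vertex)
    magnitude = strength * (dist - sharp_radius)
    return tuple(v + random.uniform(-magnitude, magnitude) for v in vertex)
```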
[00134] FIG. 8 depicts a block diagram of an example computer system 800 in which various of the examples described herein may be implemented. The computer system 800 includes a bus 802 or other communication mechanism for communicating information, and one or more hardware processors 804 coupled with bus 802 for processing information. Hardware processor(s) 804 may be, for example, one or more general purpose microprocessors.
[00135] The computer system 800 also includes a main memory 806, such as a RAM, cache and/or other dynamic storage devices, coupled to bus 802 for storing information and instructions to be executed by processor 804. Main memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 804. Such instructions, when stored in storage media accessible to processor 804, render computer system 800 into a special-purpose machine that is customized to perform the operations specified in the instructions.
[00136] The computer system 800 further includes a ROM 808 or other static storage device coupled to bus 802 for storing static information and instructions for processor 804. A storage device 810, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 802 for storing information and instructions.
[00137] The computer system 800 may be coupled via bus 802 to a display 812, such as an LCD (or touch screen), for displaying information to a computer user. An input device 814, including alphanumeric and other keys, is coupled to bus 802 for communicating information and command selections to processor 804. Another type of user input device is cursor control 816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812. In some examples, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
[00138] The computing system 800 may include a user interface module to implement a graphical user interface ("GUI") that may be stored in a mass storage device as
executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
[00139] In general, the words "component," "engine," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java™, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl™, or Python™. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
[00140] The computer system 800 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or field-programmable gate arrays ("FPGAs"), firmware and/or program logic which in combination with the computer system causes or programs computer system 800 to be a special-purpose machine. According to
one example, the techniques herein are performed by computer system 800 in response to processor(s) 804 executing one or more sequences of one or more instructions contained in main memory 806. Such instructions may be read into main memory 806 from another storage medium, such as storage device 810. Execution of the sequences of instructions contained in main memory 806 causes processor(s) 804 to perform the process steps described herein. In alternative examples, hard-wired circuitry may be used in place of or in combination with software instructions.
[00141] The term "non-transitory media," and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 810. Volatile media includes dynamic memory, such as main memory 806. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
[00142] Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 802. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
[00143] The computer system 800 also includes a communication interface 818 coupled to bus 802. Communication interface 818 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 818 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication
connection to a corresponding type of telephone line. As another example, communication interface 818 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 818 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[00144] A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet." Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 818, which carry the digital data to and from computer system 800, are example forms of transmission media.
[00145] The computer system 800 can send messages and receive data, including program code, through the network(s), network link and communication interface 818. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 818.
[00146] The received code may be executed by processor 804 as it is received, and/or stored in storage device 810, or other non-volatile storage for later execution.
[00147] Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described
above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed examples. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.
[00148] As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 800.
[00149] As used herein, the term "or" may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain
examples include, while other examples do not include, certain features, elements and/or steps.
[00150] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as "conventional," "traditional," "normal," "standard," "known," and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
Claims
1. A system comprising: a computer readable media, wherein the computer readable media stores executable instructions; and a processor coupled to the computer readable media, the processor configured to execute the executable instructions to: store first data associated with a first rendering pipeline, second data associated with a second rendering pipeline, and scene data in the computer readable media, wherein: the first rendering pipeline contains first instructions that cause the processor to render an image based on the scene data at a first framerate; the second rendering pipeline contains second instructions that cause the processor to render the image based on the scene data at a second framerate; and the first framerate is greater than the second framerate; detect activation of a snapshot flag, wherein the snapshot flag indicates that the second instructions associated with the second rendering pipeline should be executed instead of the first instructions associated with the first rendering pipeline; and execute the second instructions associated with the second rendering pipeline instead of the first instructions associated with the first rendering pipeline.
2. The system of claim 1, wherein the executable instructions further cause the processor to: in parallel with the execution of the first instructions of the first rendering pipeline or the second rendering pipeline causing the processor to render images based on the scene data, change the scene data to simulate interactions among virtual objects represented within the scene data.
3. The system of claim 2, further wherein the interactions were initiated by input from a user.
4. The system of claim 1, wherein the executable instructions further cause the processor to: store the first data associated with a first rendering pipeline, the second data associated with a second rendering pipeline, and the scene data within the processor cache storage media.
5. The system of claim 1, wherein the executable instructions further cause the processor to: store the first data associated with a first rendering pipeline, the second data associated with a second rendering pipeline, and the scene data within the RAM storage media.
6. The system of claim 1, wherein the executable instructions further cause the processor to: before executing the second instructions associated with the second rendering pipeline instead of the first instructions associated with the first rendering pipeline, write a uniform set of optical characteristics to a framebuffer.
7. The system of claim 1, wherein the executable instructions further cause the processor to: after executing the second instructions associated with the second rendering pipeline, execute additional iterations of the second rendering pipeline instead of the first instructions associated with the first rendering pipeline; and execute the first instructions associated with the first rendering pipeline instead of the second instructions associated with the second rendering pipeline.
8. The system of claim 7, wherein the processor deactivates the snapshot flag after a predetermined length of time.
9. A method comprising: applying a first set of graphical manipulations to a scene data; applying offsets to vertices within the scene data; applying a second set of graphical manipulations to the scene data resulting in a set of pixel data within the framebuffer; and updating an element of a display device using the set of pixel data in the framebuffer, wherein the element of the display device comprises a pixel of the display device.
10. The method of claim 9, further comprising: upon sending the set of pixel data to the display device, applying a progressive rendering graphical manipulation to the scene data, wherein the progressive rendering graphical manipulation changes a first pixel data value within the set of pixel data based on a second pixel data value within a previous set of pixel data associated with a previous set of graphical manipulations.
11. The method of claim 10, wherein the progressive rendering graphical manipulation comprises averaging the first pixel data value with the second pixel data value.
12. The method of claim 9, wherein the scene data is copied from an original scene data before applying the offsets to the vertices within the scene data.
13. The method of claim 9, wherein at least one of the first and second set of graphical manipulations to the scene data comprises: applying a vertex shader to the vertices; projecting polygons formed by the vertices onto a raster grid; applying a fragment shader to fragments of the raster grid; and converting virtual optical characteristics at the fragments into a plurality of pixel data.
14. The method of claim 9, wherein at least one of the first and second set of graphical manipulations to the scene data comprises: following a first line from a virtual camera perspective within the scene data to a polygon formed by the vertices; following a second line from the polygon to a virtual light source within the scene data; and changing the virtual optical characteristics of the polygon based on the virtual optical characteristics of the virtual light source.
15. The method of claim 9, further comprising: limiting the vertices to which the offsets are applied to vertices located within a determined volume, wherein the determined volume is associated with a virtual camera perspective.
16. The method of claim 9, wherein the offsets which are applied to the vertices are randomized, such that the offsets move a given vertex to a random point within a determined volume around the given vertex's original position.
17. The method of claim 9, further comprising: limiting the offsets applied to the vertices to a plane, such that the offsets move a given vertex to a point on the plane, and wherein the plane is oriented normal to a direction associated with a virtual camera perspective.
18. The method of claim 9, further comprising: the offsets which are applied to the vertices are scaled, such that the magnitude of a given offset is variable, and wherein the scaling procedure is based on a distance between the position of a given vertex and a focal point.
19. The method of claim 18, wherein the scaling procedure increases the magnitude of a given offset as the distance between the position of the given vertex and the focal point increases.
20. A system comprising: a computer readable media, wherein the computer readable media stores executable instructions; and a processor coupled to the computer readable media, the processor configured to execute the executable instructions to: apply a first set of graphical manipulations to a scene data; apply offsets to vertices within the scene data; apply a second set of graphical manipulations to the scene data resulting in a set of pixel data within a framebuffer; and update an element of a display device using the set of pixel data in the framebuffer, wherein the element of the display device comprises a pixel of the display device.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263422826P | 2022-11-04 | 2022-11-04 | |
US63/422,826 | 2022-11-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023102275A1 true WO2023102275A1 (en) | 2023-06-08 |
Family
ID=86613078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/054394 WO2023102275A1 (en) | 2022-11-04 | 2022-12-30 | Multi-pipeline and jittered rendering methods for mobile |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023102275A1 (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200074716A1 (en) * | 2018-08-29 | 2020-03-05 | Intel Corporation | Real-time system and method for rendering stereoscopic panoramic images |
US20210279950A1 (en) * | 2020-03-04 | 2021-09-09 | Magic Leap, Inc. | Systems and methods for efficient floorplan generation from 3d scans of indoor scenes |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117710548A (en) * | 2023-07-28 | 2024-03-15 | 荣耀终端有限公司 | Image rendering method and related equipment thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11379105B2 (en) | Displaying a three dimensional user interface | |
CN108619720B (en) | Animation playing method and device, storage medium and electronic device | |
Mittring | Finding next gen: Cryengine 2 | |
Shreiner | OpenGL programming guide: the official guide to learning OpenGL, versions 3.0 and 3.1 | |
US10016679B2 (en) | Multiple frame distributed rendering of interactive content | |
US11724184B2 (en) | 2.5D graphics rendering system | |
CN108236783B (en) | Method and device for simulating illumination in game scene, terminal equipment and storage medium | |
CN115088254A (en) | Motion smoothing in distributed systems | |
JP2020522802A (en) | Method and system for rendering a frame of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene | |
CN112184873B (en) | Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium | |
US20240037839A1 (en) | Image rendering | |
CN111739142A (en) | Scene rendering method and device, electronic equipment and computer readable storage medium | |
WO2023102275A1 (en) | Multi-pipeline and jittered rendering methods for mobile | |
CN112270732A (en) | Particle animation generation method, processing device, electronic device, and storage medium | |
CN110415326A (en) | A kind of implementation method and device of particle effect | |
CN112181633B (en) | Asset aware computing architecture for graphics processing | |
KR102108244B1 (en) | Image processing method and device | |
US11158113B1 (en) | Increasing the speed of computation of a volumetric scattering render technique | |
Köster et al. | Gravity games: a framework for interactive space physics on media facades | |
Yang et al. | Visual effects in computer games | |
Ogino et al. | A distributed framework for creating mobile mixed reality systems | |
Sousa et al. | Cryengine 3: Three years of work in review | |
US11501493B2 (en) | System for procedural generation of braid representations in a computer image generation system | |
Yuniarti et al. | Implementation of reconstruction filter to create motion blur effect in Urho3D game engine | |
US20240153159A1 (en) | Method, apparatus, electronic device and storage medium for controlling based on extended reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22902297 Country of ref document: EP Kind code of ref document: A1 |