US20180061116A1 - System and method of gaze predictive rendering of a focal area of an animation - Google Patents
System and method of gaze predictive rendering of a focal area of an animation
- Publication number
- US20180061116A1 (application US 15/245,523)
- Authority
- US
- United States
- Prior art keywords
- virtual space
- focal area
- view
- user
- field
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T15/20—Perspective computation (3D [Three Dimensional] image rendering; geometric effects)
- G06F3/013—Eye tracking input arrangements
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06T13/20—3D [Three Dimensional] animation
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T2207/10016—Video; Image sequence
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2354/00—Aspects of interface with display user
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
Definitions
- This disclosure relates to a system and method of gaze-predictive rendering of a focal area of an animation.
- foveated rendering implements a high-resolution render of a particular region of individual frame images.
- a user's gaze may be tracked so that the high-resolution render is positioned on the images to correspond with the user's foveal region.
- An area surrounding the high-resolution region is then rendered at relatively lower resolution.
- users may experience visual anomalies when prompted about the lower-resolution periphery.
- Other techniques have implemented a foveated rendering method with spatial and temporal property variation. With such techniques, at a certain level-of-detail (LOD), users may experience the foveated renders to be of equal or higher quality than non-foveated counterparts.
- Latency in a system configured to achieve foveated image rendering may produce a “pop” effect caused by a high-resolution foveal region “catching up” to a user's actual gaze direction.
- one aspect of the disclosure relates to a system configured for gaze-predictive rendering of a focal area of an animation presented on a display.
- the focal area may comprise an area corresponding to a predicted location of a user's foveal region and an area surrounding the foveal region.
- the focal area may be rendered at a higher resolution, higher color bit depth, and/or higher luminous intensity than an area outside the focal area.
- the location of the predicted foveal region may be based on statistical targets of eye fixation.
- the focal area may comprise an area that may be larger than the foveal region. In this way, the true foveal region may be contained (at least in part) within the focal area, even if the true foveal region may not fixate directly on the statistical targets.
- the system may include one or more physical processors configured by machine-readable instructions. Executing the machine-readable instructions may cause the one or more physical processors to facilitate gaze-predictive rendering of a focal area of an animation presented on a display.
- the animation may comprise views of a virtual space.
- the virtual space may include a video game taking place in the virtual space.
- the animation may include a sequence of frames.
- the frames of the animation may comprise images of the virtual space within a field of view of the virtual space.
- the machine-readable instructions may include one or more of a space component, a field-of-view component, a gaze component, a latency component, a focal area component, a render component, and/or other components.
- the space component may be configured to obtain state information describing state of a virtual space.
- the state of the virtual space at an individual point in time may define one or more of one or more virtual objects within the virtual space, positions of the one or more virtual objects, and/or other information.
- the field of view component may be configured to determine a field of view of the virtual space.
- the frames of the animation may comprise images of the virtual space within the field of view.
- a first frame may comprise an image of the virtual space within the field of view at a point in time that corresponds to the first frame.
- the gaze component may be configured to predict a gaze direction of a user within the field of view.
- the user may be viewing the animation via a display.
- the gaze direction may define a line of sight of the user.
- the focal area component may be configured to determine a focal area within the field of view based on the predicted gaze direction, and/or other information.
- the focal area may include one or more of a predicted foveal region corresponding to the predicted gaze direction, an area surrounding the foveal region, and/or other components.
- the foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight.
- the render component may be configured to render, from the state information, individual images for individual frames of the animation.
- Individual images may depict the virtual space within the field of view determined at individual points in time that correspond to individual frames.
- the focal area within the field of view may be rendered at a higher resolution than an area outside the focal area.
- the rendered images may include a first image for the first frame, and/or other images for other frames of the animation.
- the first image may depict the virtual space within the field of view at the point in time corresponding to the first frame.
- FIG. 1 illustrates a system configured for rendering a focal area of an animation presented on a display, in accordance with one or more implementations.
- FIG. 2 illustrates an exemplary graphic of a rendering of a focal area in an image of a frame corresponding to a first point in time, in accordance with one or more implementations.
- FIG. 3 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations.
- FIG. 4 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations.
- FIG. 5 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations.
- FIG. 6 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations.
- FIG. 7 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations.
- FIG. 8 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations.
- FIG. 9 shows an exemplary graphic of a user viewing a display of a computing platform.
- FIG. 10 illustrates a method of latency-aware rendering of a focal area of an animation presented on a display, in accordance with one or more implementations.
- FIG. 11 illustrates a method of gaze-predictive rendering of a focal area of an animation presented on a display, in accordance with one or more implementations.
- FIG. 12 illustrates a method of bandwidth-sensitive rendering of a focal area of an animation presented on a display, in accordance with one or more implementations.
- FIG. 1 illustrates a system 100 configured for rendering a focal area of an animation presented on a display, in accordance with one or more implementations.
- human visual perception is assumed to be perfect. That is, displayed frame images of an animation are continuously expected to be fully visually appreciated (e.g., with respect to resolution, color, and/or other visual attributes), regardless of even some of its most obvious flaws, for example loss of acuity outside the eye's foveal region.
- One or more implementations of system 100 propose solutions on how to efficiently perform perceptually lossless rendering, wherein a focal area of an individual frame image may be rendered at a relatively higher visual fidelity (e.g., with respect to one or more of resolution, color, luminance, and/or other visual attributes) than areas outside the focal area. Areas outside the focal area may be indistinguishable from the high-fidelity counterpart (e.g., the focal area). Thus there may be little or no perceived difference in quality, but performance may be greatly increased.
- an animation may comprise views of a virtual space.
- the animation may include a sequence of frames.
- the frames of the animation may comprise images of the virtual space within a field of view of the virtual space.
- the virtual space may include a video game taking place in the virtual space.
- the virtual space may comprise an immersive virtual reality space.
- the field of view at individual points in time may be predetermined, determined based on gameplay within the virtual space, determined based on a position and/or orientation of a user, and/or determined in other ways.
- system 100 may be configured for latency-aware rendering of a focal area of an animation presented on a display.
- a latency-aware formulation may be implemented for calculating the focal area.
- a predetermined and/or recursively determined system latency may be compensated for to maintain the foveated illusion, at the cost of some computational gain.
- An exemplary formula for determining the focal area may take into account one or more of a user's position, a maximal eye saccadic speed value, display characteristics, system latency, and/or other factors to ensure that the user's foveal region may be contained (at least in part) within the focal area.
- system 100 may be configured for gaze-predictive rendering of a focal area of an animation presented on a display. For example, identification of relatively lower and/or higher visual fidelity regions (e.g., with respect to resolution, color, luminance, and/or other visual attributes) of frame images may be "trained" based on statistical targets of eye fixation that correspond to the user's foveal region. Training data may be pre-computed on a database of previous eye-tracked viewing sessions of a given animation, and applied in real-time. This precomputed gaze anticipation, or gaze-prediction, approach may facilitate scheduled render processing according to learned expected saccades and gaze directions. In anticipating gaze, this approach may identify focal areas within frame images ahead of image rendering, which may be valuable for latency-critical rendering and display systems.
- view dependent material rendering with glossy or mirror effects may be preempted (to a degree) with this approach.
- Some indirect light bounce effects (global illumination, caustics, scattering, etc.) from scene materials may be pre-sampled according to the predicted gaze direction. Live eye tracking, as well as tracking of eye depth and light accommodation (which also take significant computation and may be difficult to sense with current high-cost eye tracking hardware), may also be precomputed.
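- The following sketch (in Python) illustrates one way such training could be realized; the function names and the per-frame data layout are assumptions for illustration, not details taken from this disclosure. Fixation points recorded in prior eye-tracked viewing sessions of the same animation are averaged per frame to form statistical targets, which are looked up at display time as predicted fixation points.

```python
# Illustrative sketch only: names and data layout are assumptions, not taken
# from this disclosure. Fixation points recorded in prior eye-tracked viewing
# sessions of the same animation are averaged per frame to form statistical
# targets of eye fixation.
from collections import defaultdict

def train_fixation_targets(sessions):
    """sessions: list of dicts mapping frame_index -> (x, y) fixation point."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for session in sessions:
        for frame_index, (x, y) in session.items():
            acc = sums[frame_index]
            acc[0] += x
            acc[1] += y
            acc[2] += 1
    # Statistical target per frame: mean fixation point across sessions.
    return {f: (sx / n, sy / n) for f, (sx, sy, n) in sums.items()}

def predict_focal_center(targets, frame_index, fallback=(0.5, 0.5)):
    """Return the precomputed statistical target, or the screen center if unseen."""
    return targets.get(frame_index, fallback)
```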
- system 100 may be configured for bandwidth-sensitive rendering of a focal area of an animation presented on a display.
- the rendering of images may be optimized for bandwidth considerations through exploitation of one or more of foveal color, luminance perception, a quality level parameter for post-processing effects, a texture sampling parameter for coarser-looking textures, coarser physics accuracy, coarser global illumination features accuracy (shadow resolution, ambient occlusion resolution), geometry tessellation, and/or other aspects that may affect bandwidth.
- color-sensing cone receptors are densely packed in the foveal region relative to the peripheral region.
- One or more implementations of system 100 propose reducing a color bit depth in areas outside a focal area.
- Rods, which are not sensitive to color, fall off in density away from the foveal region; therefore, luminance bit depth may also be reduced outside the focal area.
- the manner in which bit depths may be reduced may follow a nonlinear function.
- luminance perception may be strongest at approximately 25 degrees angular deviation from the line-of-sight.
- Blue cones are sparse in the eye (approximately 2%) and absent from the fovea, but red and green in current human vision may be perceptually similar to blue (suggesting a learned response later in the visual system). In general, levels of color distinction correspond directly to how much bandwidth may be minimally necessary. Further, variable temporal sensitivity across the retina may correspond to the minimum necessary temporal bandwidth in perceptually lossless rendering.
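- As an illustrative sketch of the bandwidth-oriented reduction described above, the following Python code quantizes color and luminance to fewer bits as angular distance from the fixation point grows, following a nonlinear falloff; the specific bit depths, exponent, and 60-degree periphery bound are assumptions chosen for illustration, not values from this disclosure.

```python
# Illustrative sketch only: the bit depths, the falloff exponent, and the
# 60-degree periphery bound are assumptions, not values from this disclosure.
def quantize_channel(value, bits):
    """Quantize an 8-bit channel value down to the given bit depth."""
    levels = (1 << bits) - 1
    return round(value / 255.0 * levels) * (255 // levels)

def bit_depth_for_eccentricity(ecc_deg, full_bits=8, min_bits=3, exponent=2.0):
    """Reduce bit depth nonlinearly as angular distance from fixation grows."""
    falloff = min(1.0, (ecc_deg / 60.0) ** exponent)
    return max(min_bits, round(full_bits - falloff * (full_bits - min_bits)))

def shade_pixel(rgb, luminance, ecc_deg):
    """Return color and luminance quantized for a pixel at the given eccentricity."""
    color_bits = bit_depth_for_eccentricity(ecc_deg)
    luma_bits = bit_depth_for_eccentricity(ecc_deg, min_bits=4)  # keep more luminance detail
    return ([quantize_channel(c, color_bits) for c in rgb],
            quantize_channel(luminance, luma_bits))
```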
- the system 100 may include one or more of one or more computing platforms (e.g., computing platform 124 and/or one or more other computing platforms), one or more servers (e.g., server 102 and/or other servers), and/or other components.
- computing platform 124 may communicate with one or more other computing platforms according to a peer-to-peer architecture, via communications routed through one or more servers, and/or other communication scheme.
- the users may access system 100 and/or the virtual space via computing platforms associated with the users.
- Individual computing platforms may include one or more of a cellular telephone, a smartphone, a head-up display, a virtual reality headset (e.g., a head-mounted display such as a FOVE head-mounted display), a laptop, a tablet computer, a desktop computer, a television set-top box, a client device, a smart TV, a gaming console, and/or other devices suitable for the intended purposes as described herein.
- Individual computing platforms may include a display configured to present the animation for viewing by a user, and/or other components.
- a display may comprise one or more of a display screen, a graphics processing unit, and/or other components.
- one or more computing platforms may operate together as part of an immersive virtual reality environment, such as a cave automatic virtual environment (CAVE).
- a virtual reality headset may comprise one or more inertial measurement units, other sensors, and/or other components.
- the one or more inertial measurement units and/or other sensors may be configured to generate output signals conveying one or more of position, orientation, acceleration, and/or other information associated with the virtual reality headset.
- a virtual reality headset may comprise one or more of an OCULUS RIFT by OCULUS VR, a HOLOLENS by MICROSOFT, and/or other devices.
- server 102 may include one or more physical processors 104 configured by machine-readable instructions 106, electronic storage 119, and/or other components. Executing the machine-readable instructions 106 may cause server 102 to facilitate rendering a focal area of an animation.
- the machine-readable instructions 106 may include one or more of a space component 108, a user component 109, a field of view component 110 (abbreviated "FoV Component 110" in FIG. 1), a gaze component 112, a latency component 114, a focal area component 116, a render component 118, and/or other components.
- computing platform 124 may be configured to facilitate rendering a focal area of an animation using information stored by and/or local to computing platform 124 (e.g., a cartridge, a disk, a memory card/stick, USB memory stick, electronic storage, and/or other considerations) and/or other information.
- processors of computing platform 124 may include machine-readable instructions that may comprise one or more of the same or similar components of machine-readable instructions 106 of server 102 .
- the space component 108 may be configured to implement one or more instances of a virtual space and/or video game taking place in the virtual space executed by machine-readable instructions 106 .
- the space component 108 may be configured to determine views of the virtual space.
- the views of the virtual space may correspond to a field of view within the virtual space determined by the field of view component 110 .
- the views may then be communicated (e.g., via streaming, via object/position data, and/or other information) from server 102 to computing platforms for presentation to users.
- the views presented to a user may be expressed as rendered frame images (see, e.g., render component 118).
- the instance of the virtual space may comprise a simulated space that is accessible by users via computing platforms that present the views of the virtual space (e.g., present the rendered images).
- the simulated space may have a topography, express ongoing real-time interaction by one or more users, and/or include one or more objects positioned within the topography that are capable of locomotion within the topography.
- the topography may be a 2-dimensional topography.
- the topography may be a 3-dimensional topography.
- the topography may include dimensions of the space, and/or surface features of a surface or objects that are “native” to the space.
- the topography may describe a surface (e.g., a ground surface) that runs through at least a substantial portion of the space.
- the topography may describe a volume with one or more bodies positioned therein (e.g., a simulation of gravity-deprived space with one or more celestial bodies positioned therein).
- the instance executed by machine-readable instructions 106 may be synchronous, asynchronous, and/or semi-synchronous.
- views of the virtual space may be determined based on state information and/or other information.
- the state information may describe state of the virtual space.
- the state of the virtual space at an individual point in time may define one or more of one or more virtual objects (e.g., player characters, non-player characters, topographical elements of an environment of the virtual space, and/or other virtual objects) within the virtual space, their positions, and/or other information.
- the state of the virtual space may correspond to a state of a game taking place in the virtual space.
- a view determined and/or presented to a given user may correspond to a game entity being controlled by the given user.
- the state information may further correspond to one or more of a location in the virtual space (e.g., the location from which the view is taken, the location the view depicts, and/or other locations), a zoom ratio, a dimensionality of objects, a point-of-view, and/or view parameters.
- One or more of the view parameters may be selectable by the user.
- the space component 108 may be configured to express the virtual space in a more limited, or richer, manner.
- views determined for the virtual space may be selected from a limited set of graphics depicting an event in a given place within the virtual space.
- the views may include additional content (e.g., text, audio, pre-stored video content, and/or other content) that describes particulars of the current state of the place, beyond the relatively generic graphics.
- a view may include a generic battle graphic with a textual description of the opponents to be confronted. Other expressions of individual places within the virtual space are contemplated.
- users may control game entities, objects, simulated physical phenomena (e.g., wind, rain, earthquakes, and/or other phenomena), and/or other elements within the virtual space to interact with the virtual space and/or each other.
- One or more user controlled element(s) may move through and interact with the virtual space (e.g., non-user characters in the virtual space, other objects in the virtual space).
- the user controlled elements controlled by and/or associated with a given user may be created and/or customized by the given user.
- the user may have an “inventory” of virtual items and/or currency that the user can use (e.g., by manipulation of a game entity or other user controlled element, and/or other items) within the virtual space.
- User participation in the virtual space may include controlling one or more of the available user controlled elements in the virtual space. Control may be exercised through control inputs and/or commands input by the users through individual computing platforms. The users may interact with each other through communications exchanged within the virtual space. Such communications may include one or more of textual chat, instant messages, private messages, voice communications, and/or other communications. Communications may be received and entered by the users via their respective computing platforms. Communications may be routed to and/or from the appropriate users through server 102 .
- User participation in the virtual space may include controlling one or more game entities in the virtual space.
- a game entity may refer to a virtual object (or group of objects) present in the virtual space that represents an individual user.
- a game entity may be a virtual character (e.g., an avatar) and/or other virtual object.
- a group of game entities may include a group of virtual characters, virtual objects, and/or other groups.
- Virtual objects may include virtual items and/or goods.
- Virtual items and/or goods may include one or more of a virtual weapon, a tool, a food, a currency, a reward, a bonus, health, a potion, an enhancement, a mount, a power-up, a speed-up, clothing, a vehicle, an anatomical feature of a game entity, a troop or troop type, a pet, a virtual resource, and/or other virtual items and/or goods.
- an instance of the virtual space may be persistent. That is, the virtual space may continue on whether or not individual players are currently logged in and/or participating in the virtual space. A user that logs out of the virtual space and then logs back in some time later may find the virtual space has been changed through the interactions of other players with the virtual space during the time the player was logged out.
- These changes may include changes to the simulated physical space, changes in the user's inventory, changes in other user's inventories, changes experienced by non-player characters, changes to the virtual items available for use in the virtual space, and/or other changes.
- the user component 109 may be configured to access and/or manage one or more user profiles, user information, and/or user accounts associated with the users.
- the one or more user profiles and/or user information may include information stored locally by a given computing platform, by server 102 , one or more other computing platforms, and/or other storage locations.
- the user profiles may include, for example, information identifying users (e.g., a username or handle, a number, an identifier, and/or other identifying information) within the virtual space, security login information (e.g., a login code or password), virtual space account information, subscription information, virtual (or real) currency account information (e.g., related to currency held in credit for a user), control input information (e.g., a history of control inputs provided by the user), virtual inventory information (e.g., virtual inventories associated with the users that include one or more virtual items available for the users in the virtual space), relationship information (e.g., information related to relationships between users in the virtual space), virtual space usage information (e.g., a log-in history indicating the frequency and/or amount of times the user logs-in to the user accounts), interaction history among users in the virtual space, information stated by users, browsing history of users, a computing platform identification associated with a user, a phone number associated with a user, predictive gaze direction information (described in more detail herein), and/or other information.
- the field of view component 110 may be configured to determine a field of view of the virtual space.
- the field of view determined by field of view component 110 may dictate the views of the virtual space determined and presented by the space component 108 .
- a frame of the animation may comprise an image of the virtual space within the field of view at a point in time that corresponds to the frame.
- the field of view may be predetermined for one or more points in time that correspond to one or more frames of the animation.
- gameplay within the virtual space may guide the player along a predetermined path within the virtual space such that the field of view of the virtual space may be predetermined for one or more points in time during gameplay.
- the field of view may be predetermined for one or more points in time that correspond to a non-interactive in-game cutscene (e.g., also referred to as an in-game cinematic and/or in-game movie).
- the field of view may be determined based on control inputs and/or commands input by a user through a computing platform.
- the control inputs and/or commands may dictate control of a game entity associated with the user within the virtual space.
- the game entity may be positioned at a location in the virtual space.
- the field of view may correspond to one or more of the location in the virtual space (e.g., the location from which the view is taken, the location the view depicts, and/or other locations), a point of view from the perspective of the game entity (e.g., a first person perspective and/or a third person perspective), and/or other information.
- the field of view may be determined based on sensor output generated by one or more sensors of a computing platform.
- a computing platform may comprise a virtual reality headset and/or other computing platform.
- An inertial measurement unit and/or other sensors generating sensor output conveying one or more of position, orientation, acceleration, and/or other information associated with a virtual reality headset may dictate the field of view in the virtual space.
- the virtual space may comprise an immersive virtual reality space.
- the virtual reality headset may be worn on the user's face and/or head. The user may turn their head (e.g., look around) to change the field of view of the virtual space from which views of the virtual space that are determined and presented to them via a display screen of the headset.
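- As a minimal sketch (with assumed axis conventions and angular extents that are not specified in this disclosure), headset orientation reported by the inertial measurement unit could be mapped to a field of view roughly as follows.

```python
# Minimal sketch: yaw/pitch in radians from the headset IMU are converted to a
# view direction, which, together with assumed angular extents, defines the
# field of view to render. Conventions and extents are assumptions.
import math

def view_direction(yaw, pitch):
    """Unit forward vector for the given head orientation (y-up convention assumed)."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def field_of_view(yaw, pitch, horizontal_extent_deg=100.0, vertical_extent_deg=90.0):
    return {"direction": view_direction(yaw, pitch),
            "h_extent_deg": horizontal_extent_deg,
            "v_extent_deg": vertical_extent_deg}
```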
- the gaze component 112 may be configured to determine a gaze direction of a user within the field of view, and/or other gaze information.
- the gaze direction may define one or more of a line-of-sight of the user, and/or other information.
- the gaze direction may be expressed as a vector in a three-dimensional coordinate system, and/or expressed in other ways.
- the line-of-sight may comprise a virtual line connecting the fovea of the user's eye with a fixation point.
- the gaze direction and/or line-of-sight may further correspond to a foveal region.
- the foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight (e.g., peripheral vision).
- the foveal region projected onto a plane (e.g., on a display screen the user is watching) may be determined based on one or more of a distance of the fixation point from the user's eye, an angle subtended by the fovea, and/or other information.
- the gaze direction may be determined by one or more gaze tracking devices (such as gaze tracking device 126 ).
- Gaze tracking device 126 may comprise a device configured to determine and/or track eye movement, and/or determine gaze in other ways.
- the gaze tracking device 126 may comprise one or more of a camera, a processing unit, and/or other components.
- the camera may be configured to capture video of one or both eyes and/or record their movement as the user looks at some kind of stimulus (e.g., a display screen).
- tracking may be accomplished by identifying a center of the pupil(s) and using infrared/near-infrared non-collimated light to create corneal reflections (CR).
- a vector between the pupil center and the corneal reflections may be determined to compute a fixation point on surface and/or the gaze direction.
- a calibration procedure of the individual user may be performed.
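- A minimal sketch of the pupil-center/corneal-reflection mapping described above, using a simple affine calibration fitted by least squares; the function names and the 2-D affine model are assumptions, and a production eye tracker would typically calibrate a richer per-user 3-D model.

```python
# Minimal sketch with assumed names and a simplified 2-D affine model.
import numpy as np

def fit_calibration(pupil_cr_vectors, screen_points):
    """Fit an affine map from (pupil center - corneal reflection) vectors to
    on-screen fixation points, using calibration samples."""
    v = np.asarray(pupil_cr_vectors, dtype=float)
    v = np.hstack([v, np.ones((len(v), 1))])
    coeffs, *_ = np.linalg.lstsq(v, np.asarray(screen_points, dtype=float), rcond=None)
    return coeffs  # shape (3, 2)

def estimate_fixation(coeffs, pupil_center, corneal_reflection):
    """Map a live pupil-to-corneal-reflection vector through the calibrated model."""
    vec = np.asarray(pupil_center, dtype=float) - np.asarray(corneal_reflection, dtype=float)
    return np.append(vec, 1.0) @ coeffs  # predicted (x, y) fixation point on the display
```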
- FIG. 9 shows an exemplary graphic of a user 900 viewing a display 902 of a computing platform (not shown in FIG. 9 ).
- the graphic illustrates a user's gaze direction 904, the user's line-of-sight 906 (which projects to a fixation point on the display 902), and a foveal region 908 projected on the display 902 (noting that the graphic may not be to scale).
- the gaze direction 904 and/or line-of-sight 906 may be determined by a gaze tracking device 912 (e.g., the same as or similar to gaze tracking device 126 of FIG. 1) positioned at or near the display 902 of the computing platform.
- the gaze direction 904 may be expressed as a vector positioned in three-dimensional space using the gaze tracking device 912 (or other point in space) as an origin of a three-dimensional coordinate system.
- the foveal region 908 may be determined based on one or more of the gaze direction 904, the user's distance from the display 902, an angle 910 subtended by the fovea of the user's eye (or eyes), and/or other information.
- conventional geometric relationships between sides and angles of a triangle may be employed to determine the length (e.g., diameter) of the projected foveal region 908 .
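- For instance, assuming the simple triangle relationship implied above (an assumption consistent with, but not quoted from, this disclosure), the projected foveal region's diameter D for a viewing distance d_u and a fovea subtense angle θ would be approximately:

```latex
% Assumed reconstruction of the triangle relationship described above.
D_{\text{fovea}} \approx 2\, d_u \tan\!\left(\frac{\theta}{2}\right)
```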
- the gaze component 112 may be configured to predict a gaze direction of individual users.
- the prediction may be based on a machine learning approach and/or other technique for predicting gaze direction.
- the gaze component 112 may be configured to identify statistical targets of eye fixation (e.g., fixation points and/or fixation regions), and determine from the targets a predicted gaze direction that corresponds to the user's foveal region.
- the machine learning approach may be trained based on a database of previous eye tracked viewing sessions of one or more animations.
- statistical targets of eye fixation may be determined from previous eye tracked viewing sessions by one or more users. In some implementations, the statistical targets may be an average of targets of eye fixation determined from previous eye tracked viewing sessions by multiple users. In some implementations, the statistical targets may be an average determined from eye tracked viewing sessions by individual users. By way of non-limiting example, the statistical targets may be specific to individual users and applied for gaze prediction purposes specifically for the individual users when the individual users are viewing the animation.
- statistical targets of eye fixation may be stored as predictive gaze direction information within a user account stored by user component 109 .
- the predictive gaze direction information of an individual user account may comprise statistical targets of eye fixation that are specific to an individual user of the user account.
- predictive gaze direction information within a user account stored by user component 109 may comprise statistical targets that are averaged from previous eye-tracked viewing sessions of multiple users.
- a prediction of gaze direction may be based on one or more virtual objects within a field of view of the virtual space.
- one or more virtual objects within a field of view may draw the users' attention relatively more than other virtual objects within the field of view.
- Such virtual objects may become a target of eye fixation for predicting the user's gaze direction.
- An individual virtual object may be predicted as a target of eye fixation based on one or more of movement of the individual virtual object, a position of the individual virtual object, a role assigned to the individual virtual object, and/or other factors.
- targets of eye fixation may be based on predicted high-level semantic distractions, e.g., recognizing a face may lead to lower cognition of surrounding features in a field of view.
- movement of a virtual object within a field of view of the virtual space may draw the user's attention to the virtual object and become a target of eye fixation.
- a user may be drawn to a particular virtual object if it is moving within a field of view of the virtual space.
- a user may be drawn to a particular virtual object if it is moving within a field of view of the virtual space relatively faster than other virtual objects that may be present within the field of view.
- the positions of the virtual object at individual points in time within the field of view during movement may be predicted as targeted points (or regions) of eye fixation. Individual points (or regions) may facilitate determining a predicted gaze direction.
- the gaze direction may be predicted (e.g., calculated).
- the gaze direction may be calculated such that the projection of the user's foveal region on the display screen corresponding to the predicted gaze direction may include at least some of the individual targeted points (or regions).
- position of a virtual object within a field of view of the virtual space may draw the user's attention to the virtual object and become a target of eye fixation.
- a user may be drawn to a virtual object that may be positioned at a relatively central part (or other part) of a display screen relative to other virtual objects that may be positioned towards a peripheral edge of the display screen.
- the position of the virtual object may be predicted as a targeted point (or region) of eye fixation.
- the point (or region) may facilitate determining a predicted gaze direction.
- the gaze direction may be predicted such that the projection of the user's foveal region on the display screen includes at least some of the targeted point (or region).
- individual roles assigned to individual virtual objects within a field of view of the virtual space may dictate where the user's attention may be drawn to.
- individual virtual object having a given role may become targets of eye fixation.
- Individual roles of individual virtual object may comprise one or more of a player character (e.g., a game entity associated with a user and controlled by the user), a teammate, a main or central character, a minor character, a protagonist, an antagonist, an anti-hero, an enemy, a combatant, a speaker, a listener, and/or other roles.
- a position of a first virtual object assigned a first role may become a target of eye fixation based on the first virtual object being assigned the first role.
- by way of non-limiting example, although a user may view a virtual object that is a player character within a field of view of the virtual space (e.g., via a third-person perspective), the user may be drawn instead to a virtual object that is a game enemy that may be approaching the player character.
- the positions of the approaching game enemy may become targeted points (or regions) of eye fixation.
- a virtual object that may be assigned a speaker role may become a target of eye fixation.
- a virtual object that may be assigned a listener role (e.g., an entity that may be listening to another entity) may become a subsequent target of eye fixation.
- a virtual object that may be assigned a main character role may become a target of eye fixation.
- a virtual object that may be assigned a combatant role that may move toward the main character virtual object may become a subsequent target of eye fixation.
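- The following sketch shows one hedged way to combine the factors discussed above (movement, screen position, and assigned role) into a single fixation-target score; the weights, the role table, and the function names are assumptions for illustration only, not values from this disclosure.

```python
# Illustrative sketch only: scoring weights and the role table are assumptions
# used to show how movement, screen position, and assigned role could be
# combined to pick a likely target of eye fixation among visible objects.
ROLE_WEIGHTS = {"player": 0.6, "enemy": 0.9, "speaker": 0.8, "main": 0.7, "minor": 0.2}

def fixation_score(obj, screen_center=(0.5, 0.5)):
    """obj: dict with 'position' (normalized x, y), 'speed', and 'role'."""
    dx = obj["position"][0] - screen_center[0]
    dy = obj["position"][1] - screen_center[1]
    centrality = 1.0 - min(1.0, (dx * dx + dy * dy) ** 0.5)  # closer to center scores higher
    movement = min(1.0, obj["speed"])                         # faster motion scores higher
    role = ROLE_WEIGHTS.get(obj["role"], 0.3)
    return 0.4 * movement + 0.3 * centrality + 0.3 * role

def predict_fixation_target(visible_objects):
    """Return the visible object most likely to attract eye fixation."""
    return max(visible_objects, key=fixation_score) if visible_objects else None
```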
- the latency component 114 may be configured to obtain gaze adjustment latency.
- the gaze adjustment latency may quantify latency in one or more of determining changes in the gaze direction, making corresponding adjustments to a focal area within a field of view of the virtual space, and/or other operations carried out by system 100 .
- latency in system 100 may be attributed to one or more components included in system 100 .
- latency in system 100 may be attributed to latency in gaze tracking device 126 in determining and/or tracking eye movement in order to determine a user's gaze direction.
- latency may be attributed to other factors.
- latency in system 100 may be attributed to the speed at which information may be communicated through network 120, and/or other factors.
- the gaze adjustment latency may be expressed through units of time.
- a gaze adjustment latency may be expressed as a numerical value in units of milliseconds and/or other units of time.
- the gaze adjustment latency may be a predetermined value.
- the gaze adjustment latency may be pre-set to a value that corresponds to latency attributed to gaze tracking device 126 and/or other components of system 100 .
- the gaze adjustment latency may be determined recursively. In some implementations, determining the gaze adjustment latency recursively may comprise determining a gaze adjustment latency after individual renders of individual images of individual frames of the animation (see, e.g., render component 118). Thus, gaze adjustment latency may be determined on an on-going basis during successive renders of frame images.
- latency component 114 may be configured to determine a gaze adjustment latency that quantifies latency in one or more of determining a change in gaze direction of the user between the first frame and an immediate prior frame, making corresponding adjustments to a focal area within a field of view (e.g., that was adjusted based on changes in gaze direction of the user between the first frame and the immediate prior frame), and/or other latency-attributed factors.
- latency component 114 may be configured to determine a gaze adjustment latency that quantifies latency in one or more of determining a change in gaze direction of the user between the first frame and second frame, making corresponding adjustments to the focal area within a field of view (e.g., that was adjusted based on changes in gaze direction of the user between the first frame and the second frame), and/or other latency-attributed factors.
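- A minimal sketch of such recursive latency determination: each rendered frame contributes one measurement (the time from the gaze sample used for the frame to its presentation), smoothed with an exponential moving average. The class name, initial estimate, and smoothing factor are assumptions rather than values from this disclosure.

```python
# Minimal sketch of recursively updating the gaze adjustment latency; the
# smoothing factor and initial estimate are assumptions.
class GazeLatencyEstimator:
    def __init__(self, initial_ms=50.0, alpha=0.1):
        self.latency_ms = initial_ms  # running estimate of the gaze adjustment latency
        self.alpha = alpha            # exponential moving average weight

    def update(self, gaze_sample_time_ms, frame_presented_time_ms):
        """Fold one per-frame measurement into the running estimate."""
        measured = frame_presented_time_ms - gaze_sample_time_ms
        self.latency_ms += self.alpha * (measured - self.latency_ms)
        return self.latency_ms
```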
- the focal area component 116 may be configured to determine a focal area within a field of view of the virtual space.
- the focal area may be determined based on one or more of a gaze direction, a gaze adjustment latency, and/or other information.
- the focal area within the field of view may comprise one or more of a foveal region corresponding to the gaze direction, an area surrounding the foveal region, and/or other components.
- the focal area may comprise an area that may be larger than a true (e.g., tracked, calculated, and/or predicted) foveal region. As such, the focal area may be determined to account for system latency such that the foveal region in the field of view may be contained (at least in part) within the focal area.
- a focal area may be determined based on other information associated with a user's gaze.
- a focal area may be determined based on one or more of a focus accommodation (e.g., vergence movements), saccadic movements, smooth pursuit movements, vestibulo-ocular movements, and/or other information.
- Focus accommodation may be used to determine a focal area to be in areas of a focal depth and/or focal volume.
- a latency between vergence target intention and physical action may be exploited. Saccadic movements may incur a delay from target intention to physically catching up, which may afford a predictable adjustment time. Smooth pursuit movements may incur an initial saccade, then follow a target.
- during smooth pursuit, a target remains relatively constant in appearance, which may allow reprojection rendering to be exploited effectively (reducing work for newly computed frames by re-projecting prior frames).
- Vestibulo-ocular movements may be inclusive of head stability tracking. Prediction of these movements may be incorporated into the reduction of necessary rendering computation.
- determining the focal area may comprise determining one or more of a size of the focal area, a position of a center point of the focal area, and/or other information.
- the size may be a function of one or more of the gaze adjustment latency, a user's maximum saccadic speed, the user's distance from a display, an angle subtended by the fovea of the user's eye, a pixel density of the display, and/or other information.
- the focal area may comprise a circular, or substantially circular area.
- a diameter of the focal area may be determined by an equation in terms of the following quantities (a hedged reconstruction of the equation is sketched after this list):
- F_⌀ is the diameter of the focal area
- L_tot is the gaze adjustment latency (e.g., in milliseconds)
- S_max is the user's maximum saccadic speed in radians per millisecond
- d_u is the user's distance from the display
- θ is the angle subtended by the fovea (e.g., approximately 5 degrees, and/or other angle)
- b_w is the width of the blending border between a peripheral edge of the focal area and an area outside the focal area
- ρ_pixel is the pixel density of the display in pixels per millimeter (or other units of density)
- c is an error constant.
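- Assembling the terms defined above into one relationship yields a plausible form of the focal-area diameter equation; this reconstruction is an assumption based on the geometry described in this disclosure (the foveal subtense plus the maximum angular drift possible during the latency window, plus the blending border, converted to pixels with an error margin), not necessarily the exact expression of the original equation.

```latex
% Assumed reconstruction, not a verbatim expression from this disclosure.
F_{\varnothing} \approx \rho_{\text{pixel}}
  \left( 2\, d_u \tan\!\left( \frac{\theta + 2\, L_{\text{tot}}\, S_{\max}}{2} \right) + b_w \right) + c
```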
- a blending border may provide a region that smoothly blends the border of the focal area from "sharp focus" inside the focal area to "blurred undetailed" in the region outside the focal area.
- focal area may be determined in other ways.
- a focal area may be determined such that a distance between a peripheral edge of the foveal region and a peripheral edge of the focal area may comprise a distance wherein for a given saccadic speed of a user's eye movement and gaze adjustment latency, the peripheral edge of the foveal region may not surpass the peripheral edge of the focal area (see, e.g., FIG. 2 and FIG. 3 , described in more detail herein).
- the foveal region may be contained (at least in part) within the focal area.
- the position of the focal area within the field of view may be determined based on the user's gaze direction, and/or other information.
- the line-of-sight defined by the gaze direction may project to a fixation point on a display screen.
- the fixation point may comprise a center point of the user's true (e.g., tracked, calculated, and/or predicted) foveal region.
- the center point of the foveal region may be used to determine a center point of the focal area.
- the focal area may be positioned such that an imaginary center point of the focal area may be aligned with an imaginary center point of the user's foveal region (e.g., the fixation point of the line-of-sight).
- latency may exist in system 100 such that the determined position of the focal area (e.g., via a determined center point of the focal area) may lag in being aligned with a true (e.g., tracked, calculated, and/or predicted) center of the user's foveal region that projects to the display (e.g., a fixation point).
- the manner in which the size of the focal area may be calculated may ensure that, while accounting for latency in the system, the true (e.g., tracked, calculated, and/or predicted) foveal region in the field of view may be contained (at least in part) within the focal area while adjustments to the focal area are being made to "catch up" with the foveal region.
- the render component 118 may be configured to render images for frames.
- the render component 118 may be configured to render, from the state information, individual images for individual frames of the animation. Individual images may depict the virtual space within a field of view determined at individual points in time that corresponds to individual frames.
- the rendering component 118 may provide the rendered images to the space component 108 for presentation to users via computing platforms.
- render component 118 may be configured to render individual images based on parameter values of one or more rendering parameters of the individual frames.
- Rendering parameters may comprise one or more of a resolution parameter, a color bit depth parameter, a luminance bit depth parameter, and/or other parameters.
- a parameter value of a resolution parameter of an individual frame may specify one or more of a region within the field of view associated with the individual frame, a resolution at which a specified region may be rendered, and/or other information.
- a first value of a resolution parameter for a frame image may specify a focal area within a field of view corresponding to a point in time for the frame, a first resolution at which the focal area may be rendered, and/or other information.
- a second value of a resolution parameter for the frame image may specify an area outside of the focal area, a second resolution at which the specified area outside of the focal area may be rendered, and/or other information.
- the first resolution may be relatively higher than the second resolution.
- the first resolution may be relatively higher insofar that the first resolution comprises a "standard" resolution value while the second resolution comprises a resolution value that may be diminished (e.g., reduced) with respect to the standard resolution value.
- the first resolution may be relatively higher insofar that the second resolution comprises a “standard” resolution value while the first resolution comprises a resolution value that may be greater than the standard resolution value.
- the term “standard resolution” may refer to one or more of a resolution that may be intended by a provider of the virtual space, a resolution which a computing platform may be capable of presenting, and/or other information.
- a parameter value of a color bit depth parameter of an individual frame may specify one or more of a region within the field of view associated with the individual frame, a color bit depth at which color in the specified region may be rendered, and/or other information.
- a first value of a color bit depth parameter for a frame image may specify a focal area within a field of view corresponding to a point in time for the frame, a first bit depth at which color for the focal area may be rendered, and/or other information.
- a second value of a color bit depth parameter for the frame image may specify an area outside of the focal area, a second bit depth at which color for the specified area outside of the focal area may be rendered, and/or other information.
- the first bit depth may be relatively higher than the second bit depth.
- the first bit depth may be relatively higher insofar that the first bit depth comprises a “standard” bit depth while the second bit depth may comprise a bit depth that may be a diminished (e.g., reduced) bit depth with respect to the standard bit depth.
- the first bit depth may be relatively higher insofar that the second bit depth comprises a “standard” bit depth while the first bit depth comprises a bit depth that may be greater than the standard bit depth.
- the term “standard bit depth” may refer to one or more of a color bit depth that may be intended by a provider of the virtual space, a color bit depth which a computing platform may be capable of presenting, and/or other information.
- a parameter value of a luminance bit depth parameter of an individual frame may specify one or more of a region within the field of view associated with the individual frame, a luminous bit depth at which luminous intensity of the specified region may be rendered, and/or other information.
- a first value of a luminance bit depth parameter for a frame image may specify a focal area within a field of view corresponding to a point in time for the frame, a first bit depth at which a luminous intensity for the focal area may be rendered, and/or other information.
- a second value of a luminance bit depth parameter for the frame image may specify an area outside of the determined focal area, a second bit depth at which the luminous intensity of the specified area outside of the focal area may be rendered, and/or other information.
- the first bit depth may be relatively higher than the second bit depth.
- the first bit depth may be relatively higher insofar that the first bit depth comprises a “standard” bit depth while the second bit depth comprises a bit depth that may be a diminished (e.g., reduced) bit depth with respect to the standard bit depth.
- the first bit depth may be relatively higher insofar that the second bit depth comprises a “standard” bit depth while the first bit depth comprises a bit depth that may be greater than the standard bit depth.
- the term “standard bit depth” may refer to one or more of a bit depth for luminous intensity that may be intended by a provider of the virtual space, a bit depth for luminous intensity which a computing platform may be capable of presenting, and/or other information.
- images rendered by rendering component 118 may include one or more of a first image for a first frame corresponding to a first point in time, a second image for a second frame corresponding to a second point in time, and/or other images for other frames corresponding to other points in time.
- the first image may depict the virtual space within a field of view at a point in time corresponding to the first frame.
- a focal area within the field of view may be rendered according to one or more parameter values of one or more rendering parameters that may be different from parameter values for an area outside the focal area.
- one or more features and/or functions of system 100 presented herein may be carried out for other multimedia types.
- one or more features and/or functions presented herein may be applied in the framework of generating and/or rendering deep media video formats (e.g., 360 VR with parallax ability to move through a video space).
- the process of rendering and encoding the VR video format may be accelerated and made more efficient through machine learning prediction of a perceptual focal depth in the context of the offline processed video content.
- the VR video decoding and display may be accelerated and made more efficient through machine learning as above but dynamically in real-time VR display.
- FIGS. 2-8 illustrate exemplary graphics of focal areas in frame images of an animation, in accordance with one or more implementations presented herein.
- FIGS. 2-5 illustrate various frame images having focal areas rendered in accordance with a latency-aware implementation of system 100 , and/or other implementations.
- the frame images shown in the figures will be considered as sequentially rendered and presented frame images of an animation.
- FIG. 2 illustrates a first image 202 of a first frame that corresponds to a first point in time.
- the first image 202 may include a view of a virtual space 200 corresponding to a field of view of the virtual space determined for the first point in time.
- the view of the virtual space may include one or more virtual objects and their position determined from state information at the first point in time.
- the one or more virtual objects may include a first virtual object 204 , a second virtual object 206 , and/or other virtual objects.
- the first virtual object 204 may be assigned a first role.
- the first role may comprise a main character role and/or other roles.
- the second virtual object 206 may be assigned a second role.
- the second role may comprise a combatant role and/or other roles.
- the first image 202 may include a focal area 208 (having a peripheral edge shown by the dashed line and an imaginary center point shown by the “X”).
- the focal area 208 may comprise one or more of a foveal region 210 (having a peripheral edge shown by the dotted line and an imaginary center point shown by the “O”), an area 212 outside the foveal region 210 , and/or other components.
- the center point “O” of the foveal region 210 may be a point of eye fixation.
- the first image 202 may be associated with a point in time where the user has maintained focus on the first virtual object 204 such that latency may not yet have come into effect.
- the focal area 208 may be aligned with the user's foveal region 210 (shown by the "X" overlaid on the "O"). However, as will be described in more detail below, the effect of latency may cause the focal area 208 to lag in maintaining an alignment with the foveal region 210.
- FIG. 3 illustrates a second image 302 of a second frame that corresponds to a second point in time.
- the second point in time may occur temporally after the first point in time.
- the second image 302 may include a view of the virtual space 200 corresponding to a field of view of the virtual space determined for the second point in time.
- the second image 302 may include the focal area 208 comprising one or more of the foveal region 210 , the area 212 outside the foveal region 210 , and/or other components.
- the second image 302 depicts movement of the second virtual object 206 toward the first virtual object 204 , for example, in accordance with one or more of a combat scenario, dialog, and/or other interaction.
- FIG. 4 illustrates a third image 402 of a third frame that corresponds to a third point in time.
- the third point in time may occur temporally after the second point in time.
- the third image 402 may include a view of the virtual space 200 corresponding to a field of view of the virtual space determined for the third point in time.
- the third image 402 may include the focal area 208 comprising one or more of the foveal region 210 , the area 212 outside the foveal region 210 , and/or other components.
- the third image 402 depicts the second virtual object 206 being positioned adjacent the first virtual object 204 , for example, during one or more of combat, dialog, and/or other interaction.
- the size of the focal area 208 may be determined such that the area 212 outside the foveal region 210 has a width “W.”
- the width “W” may be determined based on one or more of the user's maximum saccadic speed, a gaze adjustment latency (assumed as a constant value), and/or other factors.
- the width “W” may be the product of the user's maximum saccadic speed and the gaze adjustment latency. Latency may cause the focal area 208 to lag in maintaining its alignment with the foveal region 210 .
- for example, as the user's gaze shifts in the first direction 214 (FIG. 2), the focal area 208 may not be able to "keep up" and stay aligned (FIG. 3).
- the shift in the first direction 214 may be attributed to the user's attention being drawn to the movement of the second virtual object 206.
- based on the size of the focal area 208, and in particular the width "W," by the time the peripheral edge of the foveal region 210 reaches the peripheral edge of the focal area 208 (FIG. 3), the gaze adjustment latency has lapsed and the focal area 208 can now "catch up" and re-align with the foveal region 210 (FIG. 4).
- the gaze adjustment latency may be the amount of time between the first point in time and the third point in time.
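- By way of non-limiting illustration, a small numeric sketch of the width "W" relationship is shown below. The saccadic speed, latency, and pixels-per-degree figures are assumed example values, not values prescribed by the description above.

```python
def focal_band_width_px(max_saccade_deg_per_s: float,
                        gaze_latency_s: float,
                        pixels_per_degree: float) -> float:
    """Width "W": the visual angle the eye can cover during one latency interval,
    converted to screen pixels."""
    return max_saccade_deg_per_s * gaze_latency_s * pixels_per_degree

# Assumed example values: ~500 deg/s peak saccadic speed, 50 ms latency, 15 px/deg.
w = focal_band_width_px(500.0, 0.050, 15.0)   # 375 pixels
```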
- FIG. 5 illustrates a fourth image 502 of a fourth frame that corresponds to a fourth point in time.
- the fourth point in time may occur temporally after the third point in time.
- the fourth image 502 may include a view of the virtual space 200 corresponding to a field of view of the virtual space determined for the fourth point in time.
- the fourth image 502 may include a latency-adjusted focal area 504 comprising one or more of the foveal region 210 , an adjusted area 506 outside the foveal region 210 , and/or other components.
- the fourth image 502 depicts the second virtual object 206 moving away from the first virtual object 204 , for example, post combat, dialog, and/or other interaction.
- the fourth image 502 may illustrate an implementation of system 100 ( FIG. 1 ) where gaze adjustment latency may not be assumed as a constant value but may instead be determined on an on-going basis.
- the latency component 114 ( FIG. 1 ) may obtain a new gaze adjustment latency.
- the new gaze adjustment latency may quantify latency in determining changes in the gaze direction between the second and third frames and making corresponding adjustments (e.g., positional and/or size adjustments) to the focal area 208 between the second and third frames.
- the adjusted focal area 504 may have a size that may be increased relative to the focal area 208 in FIGS. 2-4.
- the increase may be based on the new gaze adjustment latency being higher than the gaze adjustment latency attributed to the preceding frames.
- conversely, if the new gaze adjustment latency were lower, the size of the focal area may be reduced (e.g., given that the focal area may now "keep up" faster with the user's tracked gaze direction).
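- The on-going latency determination might be sketched as below. The exponential moving average is an assumed smoothing choice for illustration only; the description above only requires that the gaze adjustment latency be determined on an on-going basis and that the focal area size track it.

```python
class GazeLatencyTracker:
    """Maintain a running estimate of gaze adjustment latency; the focal area margin
    (width "W" above) grows when the estimate rises and shrinks when it falls."""

    def __init__(self, initial_latency_s: float = 0.05, alpha: float = 0.2):
        self.latency_s = initial_latency_s
        self.alpha = alpha  # weight given to the newest measurement (assumed value)

    def update(self, measured_latency_s: float) -> float:
        """Blend a per-frame latency measurement into the running estimate."""
        self.latency_s = (1.0 - self.alpha) * self.latency_s + self.alpha * measured_latency_s
        return self.latency_s
```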
- FIGS. 6-8 illustrate various frame images having focal areas rendered in accordance with a gaze-predictive implementation of system 100 .
- the frame images shown in the figures will be considered as sequentially presented frame images of an animation.
- FIG. 6 illustrates a first image 602 of a first frame that corresponds to a first point in time.
- the first image 602 may include a view of a virtual space 600 corresponding to a field of view of the virtual space determined for the first point in time.
- the view of the virtual space may include one or more virtual objects and their position determined from state information at the first point in time.
- the one or more virtual objects may include a first virtual object 604 , a second virtual object 606 , and/or other virtual objects.
- the first virtual object 604 may be assigned a main character role and/or other role.
- the second virtual object 606 may be assigned a combatant role and/or other role.
- the first image 602 may include a focal area 608 (having a peripheral edge shown by the dashed line and an imaginary center point shown by the “X”).
- the focal area 608 may comprise one or more of a foveal region 610 (having a peripheral edge shown by the dotted line and an imaginary center point shown by the “O”), an area 612 outside the foveal region 610 , and/or other components.
- the center point “O” of the foveal region 610 may be a point of eye fixation.
- the focal area 608 may be aligned with the foveal region 610 (shown by the “X” overlaid on the “O”).
- FIG. 7 illustrates a second image 702 of a second frame that corresponds to a second point in time.
- the second point in time may occur temporally after the first point in time.
- the second image 702 may include a view of the virtual space 600 corresponding to a field of view of the virtual space determined for the second point in time.
- the second image 702 may include the focal area 608 comprising one or more of the foveal region 610 , the area 612 outside the foveal region 610 , and/or other components.
- the second image 702 depicts movement of the second virtual object 606 toward the first virtual object 604 , for example, in accordance with one or more of a combat scenario, dialog, and/or other interaction.
- FIG. 8 illustrates a third image 802 of a third frame that corresponds to a third point in time.
- the third point in time may occur temporally after the second point in time.
- the third image 802 may include a view of the virtual space 600 corresponding to a field of view of the virtual space determined for the third point in time.
- the third image 802 may include the focal area 608 comprising one or more of the foveal region 610 , the area 612 outside the foveal region 610 , and/or other components.
- the third image 802 depicts the second virtual object 606 being positioned adjacent the first virtual object 604 , for example, during combat, dialog, and/or other interaction.
- the positional changes of the focal area 608 between the frames of FIGS. 6-8 illustrate a result of a gaze-predictive implementation of system 100 ( FIG. 1 ).
- the position of focal area 608 in FIG. 6 may correspond to the first virtual object 604 being determined (e.g., via machine learning) as a statistical target of eye fixation. This may be attributed to one or more of previous eye tracking sessions of one or more users that conveyed that users on average viewed the first virtual object 604 at the first point in time of the first frame 602 , the role assigned to the first virtual object 604 , the position of the first virtual object 604 within a substantially central region of the image, and/or other factors.
- the focal area 608 may be adjusted in a first direction 614 ( FIG. 6 ) toward the second virtual object 606 . This adjustment may be based on a prediction that a region including both the first virtual object 604 and the second virtual object 606 in the second image 702 ( FIG. 7 ) comprises a region of eye fixation.
- the position of the focal area 608 in FIG. 7 may correspond to the region embodying the focal area 608 being determined (e.g., via machine learning) as a statistical target of eye fixation.
- This may be attributed to one or more of previous eye tracking sessions of one or more users that conveyed that users on average shifted their attention toward the second virtual object 606 , the movement of the second virtual object 606 toward the first virtual object 604 and away from the edge of the second image 702 , the role assigned to the second virtual object 606 , the position of the second virtual object 606 , and/or other factors. It is noted that since the position of the focal area 608 is a prediction, the focal area 608 may not maintain a true alignment with the user's foveal region 610 (as illustrated by the offset center point "O" of the foveal region 610 with respect to the center point "X" of the focal area 608 ). However, as shown in FIG. 8 , the user's foveal region 610 may catch up with the predicted focal area 608.
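- The description refers to machine learning over previous eye-tracked viewing sessions; the sketch below substitutes a much simpler per-frame averaging of recorded fixation points, purely to illustrate where such a predictor would sit. The class, its interface, and the normalized-coordinate convention are assumptions, not the learned model described above.

```python
from collections import defaultdict
from statistics import mean

class StatisticalGazePredictor:
    """Toy stand-in for the learned model: average recorded fixation points per frame
    index across prior sessions and use that as the statistical target of eye fixation."""

    def __init__(self):
        self._fixations = defaultdict(list)   # frame_index -> list of (x, y) in [0, 1]

    def record(self, frame_index: int, fixation_xy):
        """Store one fixation observed for a frame in a prior eye-tracked session."""
        self._fixations[frame_index].append(fixation_xy)

    def predict(self, frame_index: int, default_xy=(0.5, 0.5)):
        """Return the average recorded fixation for the frame, or the screen center."""
        points = self._fixations.get(frame_index)
        if not points:
            return default_xy
        return (mean(p[0] for p in points), mean(p[1] for p in points))
```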
- The descriptions corresponding to FIGS. 2-8 were provided for illustrative purposes only and are not to be considered limiting. Instead, the illustrations were provided merely to show particular implementations of system 100 ( FIG. 1 ) and the effect individual implementations may have on rendered frame images. In other implementations, the size and/or position of a focal area, the views of the virtual space, the virtual objects shown in the virtual space, and/or other aspects described specifically for FIGS. 2-8 may be different.
- server 102 , computing platform 124 , external resources 122 , gaze tracking device 126 , and/or other entities participating in system 100 may be operatively linked via one or more electronic communication links.
- electronic communication links may be established, at least in part, via a network 120 such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting and that the scope of this disclosure includes implementations in which server 102 , computing platform 124 , external resources 122 , and/or other entities participating in system 100 may be operatively linked via some other communication media.
- the external resources 122 may include sources of information, hosts, and/or providers of virtual spaces outside of system 100 , external entities participating with system 100 , external entities for player-to-player communications, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 122 may be provided by resources included in system 100 .
- Server 102 may include electronic storage 119 , one or more processors 104 , and/or other components. Server 102 may include communication lines or ports to enable the exchange of information with network 120 , computing platform 124 , external resources 122 , and/or other entities. Illustration of server 102 in FIG. 1 is not intended to be limiting. Server 102 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server 102 . For example, server 102 may be implemented by a cloud of computing platforms operating together as server 102 .
- Electronic storage 119 may comprise electronic storage media that electronically stores information.
- the electronic storage media of electronic storage 119 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with server 102 and/or removable storage that is removably connectable to server 102 via, for example, a port or a drive.
- a port may include a USB port, a firewire port, and/or other port.
- a drive may include a disk drive and/or other drive.
- Electronic storage 119 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
- the electronic storage 119 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
- Electronic storage 119 may store software algorithms, information determined by processor(s) 104 , information received from computing platform 124 , and/or other information that enables server 102 to function as described herein.
- Processor(s) 104 is configured to provide information-processing capabilities in server 102 .
- processor(s) 104 may include one or more of a physical processor, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
- Although processor(s) 104 is shown in FIG. 1 as a single entity, this is for illustrative purposes only.
- processor(s) 104 may include one or more processing units. These processing units may be physically located within the same device, or processor(s) 104 may represent processing functionality of a plurality of devices operating in coordination.
- Processor(s) 104 may be configured to execute components 108 , 109 , 110 , 112 , 114 , 116 , and/or 118 .
- Processor(s) 104 may be configured to execute components 108 , 109 , 110 , 112 , 114 , 116 , and/or 118 by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s) 104 .
- Although components 108 , 109 , 110 , 112 , 114 , 116 , and/or 118 are illustrated in FIG. 1 as being co-located within a single processing unit, in implementations in which processor(s) 104 includes multiple processing units, one or more of components 108 , 109 , 110 , 112 , 114 , 116 , and/or 118 may be located remotely from the other components.
- components 108 , 109 , 110 , 112 , 114 , 116 , and/or 118 may provide more or less functionality than is described.
- one or more of components 108 , 109 , 110 , 112 , 114 , 116 , and/or 118 may be eliminated, and some or all of its functionality may be provided by other ones of components 108 , 109 , 110 , 112 , 114 , 116 , 118 , and/or other components.
- processor(s) 104 may be configured to execute one or more additional components that may perform some or all of the functionality attributed below to one of components 108 , 109 , 110 , 112 , 114 , 116 , and/or 118 .
- FIG. 10 illustrates a method 1000 of latency-aware rendering of a focal area of an animation presented on a display.
- the operations of method 1000 presented below are intended to be illustrative. In some embodiments, method 1000 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1000 are illustrated in FIG. 10 and described below is not intended to be limiting.
- method 1000 may be implemented in a computer system comprising one or more of one or more processing devices (e.g., a physical processor, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information), storage media storing machine-readable instructions, and/or other components.
- the one or more processing devices may include one or more devices executing some or all of the operations of method 1000 in response to instructions stored electronically on electronic storage media.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1000 .
- state information describing state of a virtual space may be obtained.
- the state at an individual point in time may define one or more virtual objects within the virtual space, their positions, and/or other information.
- operation 1002 may be performed by one or more physical processors executing a space component the same as or similar to space component 108 (shown in FIG. 1 and described herein).
- a field of view of the virtual space may be determined.
- the frames of the animation may comprise images of the virtual space within the field of view.
- a first frame may comprise an image of the virtual space within the field of view at a point in time that corresponds to the first frame.
- operation 1004 may be performed by one or more physical processors executing a field of view component the same as or similar to field of view component 110 (shown in FIG. 1 and described herein).
- a gaze direction of a user within the field of view may be determined.
- the gaze direction may define a line of sight of the user.
- the user may view the animation via the display.
- operation 1006 may be performed by one or more physical processors executing a gaze component the same as or similar to the gaze component 112 (shown in FIG. 1 and described herein).
- gaze adjustment latency may be obtained.
- the gaze adjustment latency may quantify latency in one or more of determining changes in the gaze direction, making corresponding adjustments to a focal area within the field of view, and/or other operations of method 1000 .
- operation 1008 may be performed by one or more physical processors executing a latency component the same as or similar to the latency component 114 (shown in FIG. 1 and described herein).
- a focal area within a field of view may be determined based on one or more of the gaze direction, gaze adjustment latency, and/or other information.
- the focal area may include one or more of a foveal region corresponding to the gaze direction, an area surrounding the foveal region, and/or other components.
- the foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight.
- operation 1010 may be performed by one or more physical processors executing a focal area component the same as or similar to the focal area component 116 (shown in FIG. 1 and described herein).
- individual images for individual frames of the animation may be rendered from the state information.
- Individual images may depict the virtual space within the field of view determined at individual points in time that correspond to individual frames.
- a first image may be rendered for the first frame.
- the first image may depict the virtual space within the field of view at the point in time corresponding to the first frame.
- a focal area within the field of view may be rendered at a higher resolution than an area outside the focal area.
- operation 1012 may be performed by one or more physical processors executing a render component the same as or similar to the render component 118 (shown in FIG. 1 and described herein).
- FIG. 11 illustrates a method 1100 of gaze-predictive rendering of a focal area of an animation presented on a display.
- the operations of method 1100 presented below are intended to be illustrative. In some embodiments, method 1100 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1100 are illustrated in FIG. 11 and described below is not intended to be limiting.
- method 1100 may be implemented in a computer system comprising one or more of one or more processing devices (e.g., a physical processor, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information), storage media storing machine-readable instructions, and/or other components.
- the one or more processing devices may include one or more devices executing some or all of the operations of method 1100 in response to instructions stored electronically on electronic storage media.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1100 .
- state information describing state of a virtual space may be obtained.
- the state at an individual point in time may define one or more virtual objects within the virtual space, their positions, and/or other information.
- operation 1102 may be performed by one or more physical processors executing a space component the same as or similar to space component 108 (shown in FIG. 1 and described herein).
- a field of view of the virtual space may be determined.
- the frames of the animation may comprise images of the virtual space within the field of view.
- a first frame may comprise an image of the virtual space within the field of view at a point in time that corresponds to the first frame.
- operation 1104 may be performed by one or more physical processors executing a field of view component the same as or similar to field of view component 110 (shown in FIG. 1 and described herein).
- a gaze direction of a user within the field of view may be predicted.
- the gaze direction may define a line of sight of the user.
- the user may view the animation via the display.
- operation 1106 may be performed by one or more physical processors executing a gaze component the same as or similar to the gaze component 112 (shown in FIG. 1 and described herein).
- a focal area within a field of view may be determined based on the predicted gaze direction and/or other information.
- the focal area may include one or more of a foveal region corresponding to the predicted gaze direction, an area surrounding the foveal region, and/or other components.
- the foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight.
- operation 1108 may be performed by one or more physical processors executing a focal area component the same as or similar to the focal area component 116 (shown in FIG. 1 and described herein).
- individual images for individual frames of the animation may be rendered from the state information.
- Individual images may depict the virtual space within the field of view determined at individual points in time that correspond to individual frames.
- a first image may be rendered for the first frame.
- the first image may depict the virtual space within the field of view at the point in time corresponding to the first frame.
- the focal area within the field of view may be rendered at a higher resolution than an area outside the focal area.
- operation 1110 may be performed by one or more physical processors executing a render component the same as or similar to the render component 118 (shown in FIG. 1 and described herein).
- FIG. 12 illustrates a method 1200 of bandwidth-sensitive rendering of a focal area of an animation presented on a display.
- the operations of method 1200 presented below are intended to be illustrative. In some embodiments, method 1200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1200 are illustrated in FIG. 12 and described below is not intended to be limiting.
- method 1200 may be implemented in a computer system comprising one or more of one or more processing devices (e.g., a physical processor, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information), storage media storing machine-readable instructions, and/or other components.
- the one or more processing devices may include one or more devices executing some or all of the operations of method 1200 in response to instructions stored electronically on electronic storage media.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1200 .
- state information describing state of a virtual space may be obtained.
- the state at an individual point in time may define one or more virtual objects within the virtual space, their positions, and/or other information.
- operation 1202 may be performed by one or more physical processors executing a space component the same as or similar to space component 108 (shown in FIG. 1 and described herein).
- a field of view of the virtual space may be determined.
- the frames of the animation may comprise images of the virtual space within the field of view.
- a first frame may comprise an image of the virtual space within the field of view at a point in time that corresponds to the first frame.
- operation 1204 may be performed by one or more physical processors executing a field of view component the same as or similar to field of view component 110 (shown in FIG. 1 and described herein).
- a focal area within a field of view may be determined.
- the focal area may include one or more of a foveal region corresponding to a user's gaze direction, an area surrounding the foveal region, and/or other components.
- the foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight.
- operation 1206 may be performed by one or more physical processors executing a focal area component the same as or similar to the focal area component 116 (shown in FIG. 1 and described herein).
- individual images for individual frames of the animation may be rendered from the state information.
- Individual images may depict the virtual space within the field of view determined at individual points in time that correspond to individual frames.
- a first image may be rendered for the first frame.
- the first image may depict the virtual space within the field of view at the point in time corresponding to the first frame.
- the focal area within the field of view may be rendered with a higher color bit depth and/or higher luminance bit depth relative to an area outside the focal area.
- operation 1208 may be performed by one or more physical processors executing a render component the same as or similar to the render component 118 (shown in FIG. 1 and described herein).
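- Taken together, methods 1000, 1100, and 1200 share a per-frame structure that might be sketched as below. The renderer and compositor callables, the dictionary-based focal area, and the default figures are assumptions made for illustration; the gaze position may come from a gaze tracking device (methods 1000 and 1200) or from a statistical predictor (method 1100).

```python
from typing import Any, Callable, Dict, Tuple

def render_foveated_frame(state: Dict[str, Any],
                          field_of_view: Dict[str, Any],
                          gaze_xy: Tuple[float, float],
                          latency_s: float,
                          render_region: Callable[..., Any],
                          composite: Callable[..., Any],
                          foveal_radius_px: float = 60.0,
                          max_saccade_deg_per_s: float = 500.0,
                          pixels_per_degree: float = 15.0) -> Any:
    """One frame of the shared loop: determine the focal area around the (tracked or
    predicted) gaze, render it at high fidelity, and render the periphery at reduced
    resolution and/or bit depth before compositing."""
    # Focal area: foveal region plus a latency-dependent margin (width "W" above).
    margin_px = max_saccade_deg_per_s * latency_s * pixels_per_degree
    focal_area = {"center": gaze_xy, "radius_px": foveal_radius_px + margin_px}

    high = render_region(state, field_of_view, focal_area, quality="high")
    low = render_region(state, field_of_view, None, quality="low")
    return composite(low, high, focal_area)
```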
Description
- This disclosure relates to a system and method of gaze-predictive rendering of a focal area of an animation.
- When rendering digital images in animations, it is often assumed that the human visual system is perfect, despite limitations arising from a variety of different complexities and phenomena. That is, current methods of real-time rendering of a digital animation may operate on an assumption that a single rendered frame image will be fully visually appreciated at any single point in time. However, peripheral vision may be significantly worse than foveal vision in many ways, and these differences may not be explained solely by a loss of acuity. Nevertheless, acuity sensitivity still forms a significant portion of peripheral detail loss and can be a phenomenon to exploit.
- One method of exploitation, termed "foveated rendering" or "foveated imaging," implements a high-resolution render of a particular region of individual frame images. A user's gaze may be tracked so that the high-resolution render is positioned on the images to correspond with the user's foveal region. An area surrounding the high-resolution region is then rendered at relatively lower resolution. However, users may experience a visual anomaly when prompted about it. Other techniques have implemented a foveated rendering method with spatial and temporal property variation. With such techniques, at a certain level-of-detail (LOD), users may experience the foveated renders to be of equal or higher quality than non-foveated counterparts.
- With the increasing use of 4K-8K UHD displays and the push towards higher pixel densities for head-mounted displays, the industry is pressured to meet market demands for intensive real-time rendering.
- Latency in a system configured to achieve foveated image rendering may produce a “pop” effect caused by a high-resolution foveal region “catching up” to a user's actual gaze direction. Accordingly, one aspect of the disclosure relates to a system configured for gaze-predictive rendering of a focal area of an animation presented on a display. The focal area may comprise an area corresponding to a predicted location of a user's foveal region and an area surrounding the foveal region. The focal area may be rendered at a higher resolution, higher color bit depth, and/or higher luminous intensity than an area outside the focal area. The location of the predicted foveal region may be based on statistical targets of eye fixation. The focal area may comprise an area that may be larger than the foveal region. In this way, the true foveal region may be contained (at least in part) within the focal area, even if the true foveal region may not fixate directly on the statistical targets.
- In some implementations, the system may include one or more physical processors configured by machine-readable instructions. Executing the machine-readable instructions may cause the one or more physical processors to facilitate gaze-predictive rendering of a focal area of an animation presented on a display. In some implementations, the animation may comprise views of a virtual space. The virtual space may include a video game taking place in the virtual space. The animation may include a sequence of frames. The frames of the animation may comprise images of the virtual space within a field of view of the virtual space. The machine-readable instructions may include one or more of a space component, a field-of-view component, a gaze component, a latency component, a focal area component, a render component, and/or other components.
- The space component may be configured to obtain state information describing state of a virtual space. The state of the virtual space at an individual point in time may define one or more of one or more virtual objects within the virtual space, positions of the one or more virtual objects, and/or other information.
- The field of view component may be configured to determine a field of view of the virtual space. The frames of the animation may comprise images of the virtual space within the field of view. By way of non-limiting example, a first frame may comprise an image of the virtual space within the field of view at a point in time that corresponds to the first frame.
- The gaze component may be configured to predict a gaze direction of a user within the field of view. The user may be viewing the animation via a display. The gaze direction may define a line of sight of the user.
- The focal area component may be configured to determine a focal area within the field of view based on the predicted gaze direction, and/or other information. The focal area may include one or more of a predicted foveal region corresponding to the predicted gaze direction, an area surrounding the foveal region, and/or other components. The foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight.
- The render component may be configured to render, from the state information, individual images for individual frames of the animation. Individual images may depict the virtual space within the field of view determined at individual points in time that correspond to individual frames. The focal area within the field of view may be rendered at a higher resolution than an area outside the focal area. By way of non-limiting example, the rendered images may include a first image for the first frame, and/or other images for other frames of the animation. The first image may depict the virtual space within the field of view at the point in time corresponding to the first frame.
- These and other features, and characteristics of the present technology, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
-
FIG. 1 illustrates a system configured for rendering a focal area of an animation presented on a display, in accordance with one or more implementations. -
FIG. 2 illustrates an exemplary graphic of a rendering of a focal area in an image of a frame corresponding to a first point in time, in accordance with one or more implementations. -
FIG. 3 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations. -
FIG. 4 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations. -
FIG. 5 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations. -
FIG. 6 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations. -
FIG. 7 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations. -
FIG. 8 illustrates an exemplary graphic of a rendering of the focal area in an image, in accordance with one or more implementations -
FIG. 9 shows an exemplary graphic of a user viewing a display of a computing platform. -
FIG. 10 illustrates a method of latency-aware rendering of a focal area of an animation presented on a display, in accordance with one or more implementations. -
FIG. 11 illustrates a method of gaze-predictive rendering of a focal area of an animation presented on a display, in accordance with one or more implementations. -
FIG. 12 illustrates a method of bandwidth-sensitive rendering of a focal area of an animation presented on a display, in accordance with one or more implementations. -
FIG. 1 illustrates asystem 100 configured for rendering a focal area of an animation presented on a display, in accordance with one or more implementations. In conventional rendering, despite its significance, human visual perception is assumed to be perfect. That is, displayed frame images of an amination are continuously expected to be fully visually appreciated (e.g. with respect to resolution, color, and/or other visual attributes), regardless of even some of its most obvious flaws, for example loss of acuity outside the eye's foveal region. - One or more implementations of
system 100 propose solutions on how to efficiently perform perceptually lossless rendering, wherein a focal area of individual frame image may be rendered at a relatively higher visual fidelity (e.g., with respect to one or more of resolution, color, luminance, and/or visual attributes) than areas outside the focal area. Areas outside the focal area may be indistinguishable from the high fidelity counterpart (e.g., the focal area). Thus there may be little or no perceived difference in quality, but performance may be greatly increased. - In some implementations, an animation may comprise views of a virtual space. The animation may include a sequence of frame. The frames of the animation may comprise images of the virtual space within a field of view of the virtual space. The virtual space may include a video game taking place in the virtual space. In some implementations, the virtual space may comprise an immersive virtual reality space. In some implementations, the field of view at individual points in time may be predetermined, determined based on gameplay within the virtual space, determined based on a position and/or orientation of a user, and/or determined in other ways.
- In some implementations,
system 100 may be configured for latency-aware rendering of a focal area of an animation presented on a display. In some implementations, a latency aware formulation may be implemented for calculating the focal area. A predetermined and/or recursively determined system latency may be compensated for to maintain a foveated illusion at the cost of computational gain. An exemplary formula for determining the focal area may take into account one or more of a user's position, a maximal eye saccadic speed value, display characteristics, system latency, and/or other factors to ensure that the users foveal region may be contained (at least in part) within the focal area. - In some implementations,
system 100 may be configured for gaze-predictive rendering of a focal area of an animation presented on a display. For example, identification of relatively lower and/or higher visual fidelity regions (e.g., with respect to resolution, color, luminance, and/or other visual attributes) of frame images may be “trained” based on statistical targets of eye fixation that correspond to the users foveal region. Training data may be pre-computed on a database of previous eye tracked viewing sessions of a given animation, and applied in real-time. This precomputed gaze anticipation, or gaze-prediction, approach may facilitate scheduled render processing according to learned expected saccades and gaze directions. In anticipating gaze, this approach may identify focal areas within frame images ahead of image rendering, which may be valuable for latency critical rendering and display systems. In some implementations, view dependent material rendering with glossy or mirror effects may be preempted (to a degree) with this approach. Some indirect light bounce effects (global illumination, caustics, scattering, etc.) from scene materials may be pre-sampled according to predicted gaze direction. Live eye tracking and tracking of eye depth and light accommodation which also takes significant computation and may be difficult to sense with current high cost eye tracking hardware, may also be precomputed. - In some implementations,
system 100 may be configured for bandwidth-sensitive rendering of a focal area of an animation presented on a display. Within such an implementation, the rendering of images may be optimized for bandwidth considerations through exploitation of one or more of foveal color, luminance perception, quality level parameter for post processing effects, texture sampling parameter for coarser looking textures, coarser physics accuracy, coarser global illumination features accuracy (shadow resolution, ambient occlusion resolution), geometry tessellation, and/or other aspects that may affect bandwidth. - The foveal region's color sensing receptor cones are densely packed versus the peripheral region. One or more implementations of
system 100 propose reducing a color bit depth in areas outside a focal area. Rod density, which may not be sensitive to color, falls off away from the foveal region, therefore luminance bit depth may also be reduced outside the focal area. In some implementations, the manner in which bit depths may be reduced may follow a nonlinear function. By way of non-limiting example, in low-light conditions, luminance perception may be strongest at approximately 25 degrees angular deviation from the line-of-sight. Blue cones are sparse in the eye (2%) and absent from the fovea, but red and green in current human vision may be perceptually similar to blue (suggesting learned response later in the visual system). In general, levels of color distinction correspond directly to how much bandwidth may be minimally necessary. Further, variable temporal sensitivity across the retina may correspond to minimum necessary temporal bandwidth in perceptually lossless rendering. - The
system 100 may include one or more of one or more computing platforms (e.g.,computing platform 124 and/or one or more other computing platforms), one or more servers (e.g.,server 102 and/or other servers), and/or other components. In some implementations, one or more computing platforms (e.g., computing platform 124) may be configured to communicate with one or more servers (e.g., server 102) according to a client/server architecture and/or other communication scheme. In some implementations,computing platform 124 may communicate with one or more other computing platforms according to a peer-to-peer architecture, via communications routed through one or more servers, and/or other communication scheme. The users may accesssystem 100 and/or the virtual space via computing platforms associated with the users. - Individual computing platforms may include one or more of a cellular telephone, a smartphone, a head-up display, a virtual reality headset (e.g., a head-mounted display such as a FOVE head-mounted display), a laptop, a tablet computer, a desktop computer, a television set-top box, a client device, a smart TV, a gaming console, and/or other devices suitable for the intended purposes as described herein. Individual computing platforms may include a display configured to present the animation for viewing by a user, and/or other components. A display may comprise one or more of a display screen, a graphics processing unit, and/or other components. In some implementations, one or more computing platforms may operate together as part of an immersive virtual reality environment, such as a cave automatic virtual environment (CAVE).
- In some implementations, a virtual reality headset may comprise one or more inertial measurement units, other sensors, and/or other components. The one or more inertial measurement units and/or other sensors may be configured to generate output signals conveying one or more of position, orientation, acceleration, and/or other information associated with the virtual reality headset. By way of non-limiting illustration, a virtual reality headset may comprise one or more of an OCULUS RIFT by OCULUS VR, a HOLOLENS by MICROSOFT, and/or other devices.
- In some implementations,
server 102 may include one or morephysical processors 104 configured by machine-readable instructions 106,electronic storage 119, and/or other components. Executing the machine-readable instructions 106 may causeserver 102 to facilitate rendering a focal area of an animation. The machine-readable instructions 106 may include one or more of aspace component 108, a user component 109, a field of view component 110 (abbreviated “FoV Component 110” inFIG. 1 ), agaze component 112, alatency component 114, afocal area component 116, a rendercomponent 118, and/or other components. - It is noted that in some implementations, one or more features and/or functions attributed to
server 102 may be attributed to individual computing platforms. By way of non-limiting example,computing platform 124 may be configured to facilitate rendering a focal area of an animation using information stored by and/or local to computing platform 124 (e.g., a cartridge, a disk, a memory card/stick, USB memory stick, electronic storage, and/or other considerations) and/or other information. By way of non-limiting example, one or more processors of computing platform 124 (not shown inFIG. 1 ) may include machine-readable instructions that may comprise one or more of the same or similar components of machine-readable instructions 106 ofserver 102. - The
space component 108 may be configured to implement one or more instances of a virtual space and/or video game taking place in the virtual space executed by machine-readable instructions 106. Thespace component 108 may be configured to determine views of the virtual space. The views of the virtual space may correspond to a field of view within the virtual space determined by the field ofview component 110. The views may then be communicated (e.g., via streaming, via object/position data, and/or other information) fromserver 102 to computing platforms for presentation to users. In some implementations, the views presented to a user may be expressed as rendered frame images (see, e.g., render component 116). - The instance of the virtual space may comprise a simulated space that is accessible by users via computing platforms that present the views of the virtual space (e.g., present the rendered images). The simulated space may have a topography, express ongoing real-time interaction by one or more users, and/or include one or more objects positioned within the topography that are capable of locomotion within the topography. In some instances, the topography may be a 2-dimensional topography. In other instances, the topography may be a 3-dimensional topography. The topography may include dimensions of the space, and/or surface features of a surface or objects that are “native” to the space. In some instances, the topography may describe a surface (e.g., a ground surface) that runs through at least a substantial portion of the space. In some instances, the topography may describe a volume with one or more bodies positioned therein (e.g., a simulation of gravity-deprived space with one or more celestial bodies positioned therein). The instance executed by machine-
readable instructions 106 may be synchronous, asynchronous, and/or semi-synchronous. - In some implementations, views of the virtual space may be determined based on state information and/or other information. The state information may describe state of the virtual space. The state of the virtual space at an individual point in time may define one or more of one or more virtual objects (e.g., player characters, non-player characters, topographical elements of an environment of the virtual space, and/or other virtual objects) within the virtual space, their positions, and/or other information. In some implementations, the state of the virtual space may correspond to a state of a game taking place in the virtual space. By way of non-limiting example, a view determined and/or presented to a given user may correspond to a game entity being controlled by the given user. The state information may further correspond to one or more of a location in the virtual space (e.g., the location from which the view is taken, the location the view depicts, and/or other locations), a zoom ratio, a dimensionality of objects, a point-of-view, and/or view parameters. One or more of the view parameters may be selectable by the user.
- The above description of the manner in which views of the virtual space are determined by
space component 108 is not intended to be limiting. Thespace component 108 may be configured to express the virtual space in a more limited, or richer, manner. For example, views determined for the virtual space may be selected from a limited set of graphics depicting an event in a given place within the virtual space. The views may include additional content (e.g., text, audio, pre-stored video content, and/or other content) that describes particulars of the current state of the place, beyond the relatively generic graphics. For example, a view may include a generic battle graphic with a textual description of the opponents to be confronted. Other expressions of individual places within the virtual space are contemplated. - Within the instance(s) of the virtual space executed by
space component 108, users may control game entities, objects, simulated physical phenomena (e.g., wind, rain, earthquakes, and/or other phenomena), and/or other elements within the virtual space to interact with the virtual space and/or each other. One or more user controlled element(s) may move through and interact with the virtual space (e.g., non-user characters in the virtual space, other objects in the virtual space). The user controlled elements controlled by and/or associated with a given user may be created and/or customized by the given user. The user may have an “inventory” of virtual items and/or currency that the user can use (e.g., by manipulation of a game entity or other user controlled element, and/or other items) within the virtual space. - User participation in the virtual space may include controlling one or more of the available user controlled elements in the virtual space. Control may be exercised through control inputs and/or commands input by the users through individual computing platforms. The users may interact with each other through communications exchanged within the virtual space. Such communications may include one or more of textual chat, instant messages, private messages, voice communications, and/or other communications. Communications may be received and entered by the users via their respective computing platforms. Communications may be routed to and/or from the appropriate users through
server 102. - User participation in the virtual space may include controlling one or more game entities in the virtual space. A game entity may refer to a virtual object (or group of objects) present in the virtual space that represents an individual user. A game entity may be a virtual character (e.g., an avatar) and/or other virtual object. A group of game entities may include a group of virtual characters, virtual objects, and/or other groups.
- Virtual objects may include virtual items and/or good. Virtual items and/or goods may include one or more of a virtual weapon, a tool, a food, a currency, a reward, a bonus, health, a potion, an enhancement, a mount, a power-up, a speed-up, clothing, a vehicle, an anatomical feature of a game entity, a troop or troop type, a pet, a virtual resource, and/or other virtual items and/or goods.
- In some implementations, an instance of the virtual space may be persistent. That is, the virtual space may continue on whether or not individual players are currently logged in and/or participating in the virtual space. A user that logs out of the virtual space and then logs back in some time later may find the virtual space has been changed through the interactions of other players with the virtual space during the time the player was logged out. These changes may include changes to the simulated physical space, changes in the user's inventory, changes in other user's inventories, changes experienced by non-player characters, changes to the virtual items available for use in the virtual space, and/or other changes.
- The user component 109 may be configured to access and/or manage one or more user profiles, user information, and/or user accounts associated with the users. The one or more user profiles and/or user information may include information stored locally by a given computing platform, by
server 102, one or more other computing platforms, and/or other storage locations. The user profiles may include, for example, information identifying users (e.g., a username or handle, a number, an identifier, and/or other identifying information) within the virtual space, security login information (e.g., a login code or password), virtual space account information, subscription information, virtual (or real) currency account information (e.g., related to currency held in credit for a user), control input information (e.g., a history of control inputs provided by the user), virtual inventory information (e.g., virtual inventories associated with the users that include one or more virtual items available for the users in the virtual space), relationship information (e.g., information related to relationships between users in the virtual space), virtual space usage information (e.g., a log-in history indicating the frequency and/or amount of times the user logs-in to the user accounts), interaction history among users in the virtual space, information stated by users, browsing history of users, a computing platform identification associated with a user, a phone number associated with a user, predictive gaze direction information (described in more detail herein), and/or other information related to users. - The field of
view component 110 may be configured to determine a field of view of the virtual space. The field of view determined by field ofview component 110 may dictate the views of the virtual space determined and presented by thespace component 108. By way of non-limiting example, a frame of the animation may comprise an image of the virtual space within the field of view at a point in time that corresponds to the frame. - In some implementations, the field of view may be predetermined for one or more points in time that correspond to one or more frames of the animation. By way of non-limiting illustration, gameplay within the virtual space may guide the player along a predetermined path within the virtual space such that the field of view of the virtual space may be predetermined for one or more points in time during gameplay. In some implementations, the field of view may be predetermined for one or more points in time that correspond to a non-interactive in-game cutscene (e.g., also referred to as an in-game cinematic and/or in-game movie).
- In some implementations, the field of view may be determined based on control inputs and/or commands input by a user through a computing platform. The control inputs and/or commands may dictate control of a game entity associated with the user within the virtual space. The game entity may be positioned at a location in the virtual space. The field of view may correspond to one or more of the location in the virtual space (e.g., the location from which the view is taken, the location the view depicts, and/or other locations), a point of view from the perspective of the game entity (e.g., a first person perspective and/or a third person perspective), and/or other information.
- In some implementations, the field of view may be determined based on sensor output generated by one or more sensors of a computing platform. By way of non-limiting example, a computing platform may comprise a virtual reality headset and/or other computing platform. An inertial measurement unit and/or other sensors generating sensor output conveying one or more of position, orientation, acceleration, and/or other information associated with a virtual reality headset may dictate the field of view in the virtual space. For example, the virtual space may comprise an immersive virtual reality space. The virtual reality headset may be worn on the user's face and/or head. The user may turn their head (e.g., look around) to change the field of view of the virtual space from which views of the virtual space are determined and presented to them via a display screen of the headset.
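- By way of non-limiting illustration, the following sketch (Python; the function name, axis conventions, and parameter values are illustrative assumptions and not part of this disclosure) shows one way headset orientation from an inertial measurement unit might be mapped to a view direction used to determine the field of view. Roll is ignored for brevity.

```python
import math

def view_direction_from_imu(yaw_rad: float, pitch_rad: float):
    """Map headset yaw/pitch (radians) to a unit view-direction vector.

    A minimal sketch: yaw rotates about the vertical (y) axis, pitch about
    the horizontal (x) axis; forward is taken as the -z direction.
    """
    cos_pitch = math.cos(pitch_rad)
    return (
        math.sin(yaw_rad) * cos_pitch,   # x: left/right
        math.sin(pitch_rad),             # y: up/down
        -math.cos(yaw_rad) * cos_pitch,  # z: forward
    )

# Example: the user turns 30 degrees to the right and looks 10 degrees up.
direction = view_direction_from_imu(math.radians(30), math.radians(10))
print(direction)
```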
- The
gaze component 112 may be configured to determine a gaze direction of a user within the field of view, and/or other gaze information. The gaze direction may define one or more of a line-of-sight of the user, and/or other information. The gaze direction may be expressed as a vector in a three-dimensional coordinate system, and/or expressed in other ways. - The line-of-sight may comprise a virtual line connecting the fovea of the user's eye with a fixation point. The gaze direction and/or line-of-sight may further correspond to a foveal region. The foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight (e.g., peripheral vision). The foveal region projected onto a plane (e.g., on a display screen the user is watching) may be determined based on one or more of a distance of the fixation point from the user's eye, an angle subtended by the fovea, and/or other information.
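- By way of non-limiting illustration, the sketch below (Python; the function name and the 5-degree default are assumptions, not values mandated by this disclosure) estimates the diameter of the foveal region projected onto a display from the fixation-point distance and the angle subtended by the fovea, using the conventional triangle relationship referred to herein.

```python
import math

def projected_foveal_diameter(viewing_distance_mm: float,
                              fovea_angle_deg: float = 5.0) -> float:
    """Approximate diameter (same units as the distance) of the foveal region
    projected onto a plane, given the distance of the fixation point from the
    eye and the angle subtended by the fovea."""
    half_angle = math.radians(fovea_angle_deg) / 2.0
    return 2.0 * viewing_distance_mm * math.tan(half_angle)

# At roughly 600 mm from a display, a ~5 degree fovea projects to ~52 mm.
print(round(projected_foveal_diameter(600.0), 1))
```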
- In some implementations, the gaze direction may be determined by one or more gaze tracking devices (such as gaze tracking device 126).
Gaze tracking device 126 may comprise a device configured to determine and/or track eye movement, and/or determine gaze in other ways. The gaze tracking device 126 may comprise one or more of a camera, a processing unit, and/or other components. The camera may be configured to capture video of one or both eyes and/or record their movement as the user looks at some kind of stimulus (e.g., a display screen). By way of non-limiting example, tracking may be accomplished by identifying the center of the pupil(s) and using infrared/near-infrared non-collimated light to create corneal reflections (CR). A vector between the pupil center and the corneal reflections may be determined to compute a fixation point on a surface and/or the gaze direction. A calibration procedure for the individual user may be performed. - By way of non-limiting example,
FIG. 9 shows an exemplary graphic of a user 900 viewing a display 902 of a computing platform (not shown in FIG. 9). The graphic illustrates a user's gaze direction 904, the user's line-of-sight 906 (which projects to a fixation point on the display 902), and a foveal region 908 projected on the display 902 (noting that the graphic may not be to scale). The gaze direction 904 and/or line-of-sight 906 may be determined by a gaze tracking device 912 (e.g., the same or similar gaze tracking device 126 of FIG. 1) positioned at or near the display 902 of the computing platform. The gaze direction 904 may be expressed as a vector positioned in three-dimensional space using the gaze tracking device 912 (or other point in space) as an origin of a three-dimensional coordinate system. The foveal region 908 may be determined based on one or more of the gaze direction 904, the user's distance from the display 902, an angle 910 subtended by the fovea of the user's eye (or eyes), and/or other information. By way of non-limiting example, conventional geometric relationships between sides and angles of a triangle may be employed to determine the length (e.g., diameter) of the projected foveal region 908. - Returning to
FIG. 1, the gaze component 112 may be configured to predict a gaze direction of individual users. In some implementations, the prediction may be based on a machine learning approach and/or other technique for predicting gaze direction. For example, through machine learning, the gaze component 112 may be configured to identify statistical targets of eye fixation (e.g., fixation points and/or fixation regions), and determine from the targets a predicted gaze direction that corresponds to the user's foveal region. The machine learning approach may be trained based on a database of previous eye tracked viewing sessions of one or more animations. - In some implementations, statistical targets of eye fixation may be determined from previous eye tracked viewing sessions by one or more users. In some implementations, the statistical targets may be an average of targets of eye fixation determined from previous eye tracked viewing sessions by multiple users. In some implementations, the statistical targets may be an average determined from eye tracked viewing sessions by individual users. By way of non-limiting example, the statistical targets may be specific to individual users and applied for gaze prediction purposes specifically for the individual users when the individual users are viewing the animation.
- In some implementations, statistical targets of eye fixation may be stored as predictive gaze direction information within a user account stored by user component 109. The predictive gaze direction information of an individual user account may comprise statistical targets of eye fixation that are specific to an individual user of the user account. In some implementations, predictive gaze direction information within a user account stored by user component 109 may comprise statistical targets that are averaged from previous eye tracked viewing sessions of multiple users.
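- By way of non-limiting illustration, a minimal sketch (Python; the data layout and names are illustrative assumptions) of averaging targets of eye fixation across previous eye tracked viewing sessions to obtain per-frame statistical targets that could be stored as predictive gaze direction information:

```python
from collections import defaultdict

def statistical_fixation_targets(sessions):
    """Average targets of eye fixation per frame across previous eye tracked
    viewing sessions. Each session maps a frame index to an (x, y) fixation
    point in normalized display coordinates; the result is the per-frame mean
    fixation point."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for session in sessions:
        for frame, (x, y) in session.items():
            acc = sums[frame]
            acc[0] += x
            acc[1] += y
            acc[2] += 1
    return {frame: (sx / n, sy / n) for frame, (sx, sy, n) in sums.items()}

# Two prior eye tracked sessions of the same animation; the averages become
# the per-frame statistical targets associated with a user account.
targets = statistical_fixation_targets([
    {0: (0.50, 0.50), 1: (0.62, 0.48)},
    {0: (0.54, 0.52), 1: (0.58, 0.46)},
])
print(targets)
```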
- In some implementations, a prediction of gaze direction may be based on one or more virtual objects within a field of view of the virtual space. By way of non-limiting example, one or more virtual objects within a field of view may draw the user's attention relatively more than other virtual objects within the field of view. Such virtual objects may become a target of eye fixation for predicting the user's gaze direction. An individual virtual object may be predicted as a target of eye fixation based on one or more of movement of the individual virtual object, a position of the individual virtual object, a role assigned to the individual virtual object, and/or other factors. In some implementations, targets of eye fixation may be based on predicted high level semantic distractions, e.g., recognizing a face may lead to lower cognition of surrounding features in a field of view.
- By way of non-limiting example, movement of a virtual object within a field of view of the virtual space may draw the user's attention to the virtual object and become a target of eye fixation. For example, a user may be drawn to a particular virtual object if it is moving within a field of view of the virtual space. In some implementations, a user may be drawn to a particular virtual object if it is moving within a field of view of the virtual space relatively faster than other virtual objects that may be present within the field of view. The positions of the virtual object at individual points in time within the field of view during movement may be predicted as targeted points (or regions) of eye fixation. Individual points (or regions) may facilitate determining a predicted gaze direction. For example, based on one or more of the user's distance from a display screen, an angle subtended by the fovea, and/or other factors, the gaze direction may be predicted (e.g., calculated). The gaze direction may be calculated such that the projection of the user's foveal region on the display screen corresponding to the predicted gaze direction may include at least some of the individual targeted points (or regions).
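- By way of non-limiting illustration, the sketch below (Python; the coordinate conventions, names, and default angle are illustrative assumptions) predicts a gaze direction from a targeted point of eye fixation on the display and computes the radius of the corresponding foveal projection, so that the projection can be checked to include the targeted point:

```python
import math

def predicted_gaze_direction(eye_pos_mm, target_on_screen_mm):
    """Unit vector from the eye to a targeted point of eye fixation on the
    display; the display plane is assumed to lie at z = 0 and the eye at a
    positive z distance in front of it."""
    dx = target_on_screen_mm[0] - eye_pos_mm[0]
    dy = target_on_screen_mm[1] - eye_pos_mm[1]
    dz = -eye_pos_mm[2]
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)

def foveal_radius_on_screen(distance_mm, fovea_angle_deg=5.0):
    """Radius of the foveal projection around the predicted fixation point."""
    return distance_mm * math.tan(math.radians(fovea_angle_deg) / 2.0)

# A moving virtual object projected at (120, 40) mm on the screen becomes
# the targeted point of eye fixation for an eye roughly 600 mm away.
gaze = predicted_gaze_direction((0.0, 0.0, 600.0), (120.0, 40.0))
radius = foveal_radius_on_screen(600.0)
print(gaze, radius)
```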
- By way of non-limiting example, position of a virtual object within a field of view of the virtual space may draw the user's attention to the virtual object and become a target of eye fixation. For example, a user may be drawn to a virtual object that may be positioned at a relatively central part (or other part) of a display screen relative to other virtual objects that may be positioned towards a peripheral edge of the display screen. The position of the virtual object may be predicted as a targeted point (or region) of eye fixation. The point (or region) may facilitate determining a predicted gaze direction. For example, based on one or more of the user's distance from a display screen, an angle subtended by the fovea, and/or other factors, the gaze direction may be predicted such that the projection of the user's foveal region on the display screen includes at least some of the targeted point (or region).
- By way of non-limiting example, individual roles assigned to individual virtual objects within a field of view of the virtual space may dictate where the user's attention may be drawn. As a result, individual virtual objects having a given role may become targets of eye fixation. Individual roles of individual virtual objects may comprise one or more of a player character (e.g., a game entity associated with a user and controlled by the user), a teammate, a main or central character, a minor character, a protagonist, an antagonist, an anti-hero, an enemy, a combatant, a speaker, a listener, and/or other roles. It is noted that the above listing of roles assigned to virtual objects is provided for illustrative purposes only and is not to be considered limiting. For example, in some implementations, roles assigned to virtual objects may be considered in other ways.
- By way of non-limiting example, a position of a first virtual object assigned a first role may become a target of eye fixation based on the first virtual object being assigned the first role. By way of non-limiting illustration, a virtual object that is a player character within a field of view of a virtual space (e.g., via a third person perspective) may not be a target of eye fixation since the user may instead be focused on what is happening around the player character. The user may be drawn instead to a virtual object that is a game enemy that may be approaching the player character. The positions of the approaching game enemy may become targeted points (or regions) of eye fixation.
- By way of non-limiting illustration, a virtual object that may be assigned a speaker role (e.g., entity that may be performing a dialog) may become a target of eye fixation. A virtual object that may be assigned a listener role (e.g., entity that may be listening to another entity) may not be a target of eye fixation.
- By way of non-limiting illustration, a virtual object that may be assigned a main character role may become a target of eye fixation. A virtual object that may be assigned a combatant role that may move toward the main character virtual object may become a subsequent target of eye fixation.
- The above descriptions of how statistical target training, movement of virtual objects, positions of virtual objects, and/or roles assigned to virtual objects may be used to predict gaze direction are provided for illustrative purposes only and are not to be considered limiting. For example, one or more other implementations of
system 100 may employ other techniques for predicting gaze direction. - The
latency component 114 may be configured to obtain gaze adjustment latency. The gaze adjustment latency may quantify latency in one or more of determining changes in the gaze direction, making corresponding adjustments to a focal area within a field of view of the virtual space, and/or other operations carried out by system 100. In some implementations, latency in system 100 may be attributed to one or more components included in system 100. By way of non-limiting example, latency in system 100 may be attributed to latency in gaze tracking device 126 in determining and/or tracking eye movement in order to determine a user's gaze direction. However, it is noted that latency may be attributed to other factors. For example, latency in system 100 may be attributed to the speed at which information may be communicated through network 120, and/or other factors. - The gaze adjustment latency may be expressed through units of time. By way of non-limiting example, a gaze adjustment latency may be expressed as a numerical value in units of milliseconds, and/or other units of time.
- In some implementations, the gaze adjustment latency may be a predetermined value. For example, the gaze adjustment latency may be pre-set to a value that corresponds to latency attributed to gaze
tracking device 126 and/or other components of system 100. - In some implementations, the gaze adjustment latency may be determined recursively. In some implementations, determining the gaze adjustment latency recursively may comprise determining a gaze adjustment latency after individual renders of individual images of individual frames of the animation (see, e.g., render component 118). Thus, gaze adjustment latency may be determined on an on-going basis during successive renders of frame images.
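- By way of non-limiting illustration, a minimal sketch (Python; the class and parameter names, the initial value, and the smoothing scheme are illustrative assumptions) of determining the gaze adjustment latency on an on-going basis after successive renders:

```python
import time

class LatencyTracker:
    """Recursively updated estimate of gaze adjustment latency: after each
    rendered frame, record how long the system took to turn a sampled change
    in gaze direction into a corresponding focal-area adjustment."""

    def __init__(self, initial_latency_ms=50.0, smoothing=0.2):
        self.latency_ms = initial_latency_ms  # assumed starting value
        self.smoothing = smoothing

    def update(self, gaze_sample_time_s, focal_area_applied_time_s):
        measured_ms = (focal_area_applied_time_s - gaze_sample_time_s) * 1000.0
        # Exponential smoothing keeps the estimate stable across frames.
        self.latency_ms += self.smoothing * (measured_ms - self.latency_ms)
        return self.latency_ms

tracker = LatencyTracker()
t_gaze = time.monotonic()       # a change in gaze direction is detected
# ... adjust the focal area and render the next frame ...
t_applied = time.monotonic()    # the adjusted focal area reaches the display
current_latency_ms = tracker.update(t_gaze, t_applied)
```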
- By way of non-limiting example, subsequent to rendering a first frame, and at a point in time corresponding to a second frame that occurs temporally after the first frame,
latency component 114 may be configured to determine a gaze adjustment latency that quantifies latency in one or more of determining a change in gaze direction of the user between the first frame and an immediate prior frame, making corresponding adjustments to a focal area within a field of view (e.g., that was adjusted based on changes in gaze direction of the user between the first frame and the immediate prior frame), and/or other latency-attributed factors. Further, subsequent to rendering the second frame, and at a point in time corresponding to a third frame that occurs temporally after the second frame, latency component 114 may be configured to determine a gaze adjustment latency that quantifies latency in one or more of determining a change in gaze direction of the user between the first frame and second frame, making corresponding adjustments to the focal area within a field of view (e.g., that was adjusted based on changes in gaze direction of the user between the first frame and the second frame), and/or other latency-attributed factors. - The
focal area component 116 may be configured to determine a focal area within a field of view of the virtual space. The focal area may be determined based on one or more of a gaze direction, a gaze adjustment latency, and/or other information. The focal area within the field of view may comprise one or more of a foveal region corresponding to the gaze direction, an area surrounding the foveal region, and/or other components. The focal area may comprise an area that may be larger than a true (e.g., tracked, calculated, and/or predicted) foveal region. As such, the focal area may be determined to account for system latency such that the foveal region in the field of view may be contained (at least in part) within the focal area. - In some implementations, a focal area may be determined based on other information associated with a user's gaze. By way of non-limiting illustration, a focal area may be determined based on one or more of a focus accommodation (e.g., vergence movements), saccadic movements, smooth pursuit movements, vestibulo-ocular movements, and/or other information. Focus accommodation may be used to determine a focal area to be in areas of a focal depth and/or focal volume. In some implementations, a latency between vergence target intention and physical action may be exploited. Saccadic movements may incur a delay from target intention to physically catching up, which may afford a predictable adjustment time. Smooth pursuit movements may incur an initial saccade, then follow a target. In this case it may be predictable that a target remains constant in appearance and may exploit reprojection rendering effectively (reducing work for newly computed frames by re-projecting prior frames). Vestibulo-ocular movements may be inclusive of head stability tracking. Prediction of these movements may be incorporated into the reduction of necessary rendering computation.
- In some implementations, determining the focal area may comprise determining one or more of a size of the focal area, a position of a center point of the focal area, and/or other information. In some implementations, the size may be a function of one or more of the gaze adjustment latency, a user's maximum saccadic speed, the user's distance from a display, an angle subtended by the fovea of the user's eye, a pixel density of the display, and/or other information.
- In some implementations, the focal area may comprise a circular, or substantially circular area. In some implementations, a diameter of the focal area may be determined by the following equation:
-
- where Fø is the diameter of the focal area, Ltot is the gaze adjustment latency (e.g., in milliseconds), Smax is the user's maximum saccadic speed in radians per millisecond, du is the user's distance from the display, α is the angle subtended by the fovea (e.g., approximately 5 degrees, and/or other angle), bw is the width of the blending border between a peripheral edge of the focal area and an area outside the focal area, ρpixel is the pixel density of the display in pixels per millimeter (or other units of density), and c is an error constant. A blending border may provide a region that smoothly blends a border of a focal area from "sharp focus" inside the focal area to "blurred undetailed" in the region outside the focal area.
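- Because the equation itself is not reproduced in this text, the sketch below (Python) uses an assumed formulation that is merely consistent with the variables defined above — widening the projected foveal region by the angle the eye can sweep during the gaze adjustment latency, padding it by the blending border, converting to pixels, and adding the error constant — and is not the exact expression of the disclosure.

```python
import math

def focal_area_diameter_px(latency_ms,
                           max_saccade_rad_per_ms,
                           viewing_distance_mm,
                           fovea_angle_deg=5.0,
                           blend_border_mm=5.0,
                           pixels_per_mm=4.0,
                           error_const_px=0.0):
    """Assumed focal-area diameter (pixels): foveal projection widened by the
    eye's possible sweep during the latency, plus the blending border on both
    sides, converted to pixels, plus an error constant."""
    sweep_rad = max_saccade_rad_per_ms * latency_ms
    half_fovea_rad = math.radians(fovea_angle_deg) / 2.0
    diameter_mm = 2.0 * viewing_distance_mm * math.tan(half_fovea_rad + sweep_rad)
    diameter_mm += 2.0 * blend_border_mm
    return diameter_mm * pixels_per_mm + error_const_px

# E.g., 50 ms total latency, ~0.0087 rad/ms (~500 deg/s) peak saccade speed,
# and a 600 mm viewing distance.
print(round(focal_area_diameter_px(50.0, 0.0087, 600.0)))
```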
- It is noted that the above equation for determining a diameter of a focal area is provided for illustrative purposes only and is not to be considered limiting. For example, in some implementations, the focal area may be determined in other ways.
- By way of non-limiting illustration, a focal area may be determined such that a distance between a peripheral edge of the foveal region and a peripheral edge of the focal area may comprise a distance wherein for a given saccadic speed of a user's eye movement and gaze adjustment latency, the peripheral edge of the foveal region may not surpass the peripheral edge of the focal area (see, e.g.,
FIG. 2 and FIG. 3, described in more detail herein). Thus, the foveal region may be contained (at least in part) within the focal area. - In some implementations, the position of the focal area within the field of view may be determined based on the user's gaze direction, and/or other information. For example, the line-of-sight defined by the gaze direction may project to a fixation point on a display screen. The fixation point may comprise a center point of the user's true (e.g., tracked, calculated, and/or predicted) foveal region. The center point of the foveal region may be used to determine a center point of the focal area. By way of non-limiting example, the focal area may be positioned such that an imaginary center point of the focal area may be aligned with an imaginary center point of the user's foveal region (e.g., the fixation point of the line-of-sight).
- As noted herein, latency may exist in
system 100 such that the determined position of the focal area (e.g., via a determined center point of the focal area) may lag in being aligned with a true (e.g., tracked, calculated, and/or predicted) center of the user's foveal region that projects to the display (e.g., a fixation point). In some implementations, the manner in which the size of the focal area may be calculated may ensure that, while accounting for latency in the system, the true (e.g., tracked, calculated, and/or predicted) foveal region in the field of view may be contained (at least in part) within the focal area while adjustments to the focal area are being made to "catch up" with the foveal region. An illustrative example of this is presented in more detail herein with reference to FIGS. 2-4. - Returning to
FIG. 1, the render component 118 may be configured to render images for frames. By way of non-limiting example, the render component 118 may be configured to render, from the state information, individual images for individual frames of the animation. Individual images may depict the virtual space within a field of view determined at individual points in time that correspond to individual frames. The rendering component 118 may provide the rendered image to the space component 108 for presentation to users via computing platforms. - In some implementations, render
component 118 may be configured to render individual images based on parameter values of one or more rendering parameters of the individual frames. Rendering parameters may comprise one or more of a resolution parameter, a color bit depth parameter, a luminance bit depth parameter, and/or other parameters. - A parameter value of a resolution parameter of an individual frame may specify one or more of a region within the field of view associated with the individual frame, a resolution at which a specified region may be rendered, and/or other information. By way of non-limiting example, a first value of a resolution parameter for a frame image may specify a focal area within a field of view corresponding to a point in time for the frame, a first resolution at which the focal area may be rendered, and/or other information. By way of non-limiting example, a second value of a resolution parameter for the frame image may specify an area outside of the focal area, a second resolution at which the specified area outside of the focal area may be rendered, and/or other information. In some implementations, the first resolution may be relatively higher than the second resolution.
- In some implementations, the first resolution may be relatively higher insofar as the first resolution comprises a "standard" resolution value while the second resolution comprises a resolution value that may be diminished (e.g., reduced) with respect to the standard resolution value. In some implementations, the first resolution may be relatively higher insofar as the second resolution comprises a "standard" resolution value while the first resolution comprises a resolution value that may be greater than the standard resolution value. In some implementations, the term "standard resolution" may refer to one or more of a resolution that may be intended by a provider of the virtual space, a resolution which a computing platform may be capable of presenting, and/or other information.
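- By way of non-limiting illustration, a minimal sketch (Python; the dataclass, scale values, and circular-region test are illustrative assumptions) of specifying parameter values of a resolution parameter for the focal area and the area outside of it:

```python
from dataclasses import dataclass

@dataclass
class ResolutionParam:
    region: str    # "focal" or "peripheral"
    scale: float   # fraction of the standard resolution

def resolution_params_for_frame(focal_scale=1.0, peripheral_scale=0.25):
    """Two parameter values of a resolution parameter for one frame: the
    focal area at (or above) the standard resolution, and the area outside
    the focal area at a diminished resolution."""
    return [ResolutionParam("focal", focal_scale),
            ResolutionParam("peripheral", peripheral_scale)]

def in_focal_area(px, py, center, diameter_px):
    """Circular test deciding which parameter value applies to a pixel."""
    cx, cy = center
    return (px - cx) ** 2 + (py - cy) ** 2 <= (diameter_px / 2.0) ** 2

params = resolution_params_for_frame()
print(params, in_focal_area(960, 540, (1000, 500), 400))
```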
- A parameter value of a color bit depth parameter of an individual frame may specify one or more of a region within the field of view associated with the individual frame, a color bit depth at which color in the specified region may be rendered, and/or other information. By way of non-limiting example, a first value of a color bit depth parameter for a frame image may specify a focal area within a field of view corresponding to a point in time for the frame, a first bit depth at which color for the focal area may be rendered, and/or other information. By way of non-limiting example, a second value of a color bit depth parameter for the frame image may specify an area outside of the focal area, a second bit depth at which color for the specified area outside of the focal area may be rendered, and/or other information. In some implementations, the first bit depth may be relatively higher than the second bit depth.
- In some implementations, the first bit depth may be relatively higher insofar as the first bit depth comprises a "standard" bit depth while the second bit depth may comprise a bit depth that may be a diminished (e.g., reduced) bit depth with respect to the standard bit depth. In some implementations, the first bit depth may be relatively higher insofar as the second bit depth comprises a "standard" bit depth while the first bit depth comprises a bit depth that may be greater than the standard bit depth. In some implementations, the term "standard bit depth" may refer to one or more of a color bit depth that may be intended by a provider of the virtual space, a color bit depth which a computing platform may be capable of presenting, and/or other information.
- A parameter value of a luminance bit depth parameter of an individual frame may specify one or more of a region within the field of view associated with the individual frame, a luminous bit depth at which luminous intensity of the specified region may be rendered, and/or other information. By way of non-limiting example, a first value of a luminance bit depth parameter for a frame image may specify a focal area within a field of view corresponding to a point in time for the frame, a first bit depth at which a luminous intensity for the focal area may be rendered, and/or other information. By way of non-limiting example, a second value of a luminance bit depth parameter for the frame image may specify an area outside of the determined focal area, a second bit depth at which the luminous intensity of the specified area outside of the focal area may be rendered, and/or other information. In some implementations, the first bit depth may be relatively higher than the second bit depth.
- In some implementations, the first bit depth may be relatively higher insofar as the first bit depth comprises a "standard" bit depth while the second bit depth comprises a bit depth that may be a diminished (e.g., reduced) bit depth with respect to the standard bit depth. In some implementations, the first bit depth may be relatively higher insofar as the second bit depth comprises a "standard" bit depth while the first bit depth comprises a bit depth that may be greater than the standard bit depth. In some implementations, the term "standard bit depth" may refer to one or more of a bit depth for luminous intensity that may be intended by a provider of the virtual space, a bit depth for luminous intensity which a computing platform may be capable of presenting, and/or other information.
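- By way of non-limiting illustration, a minimal sketch (Python; the bit-depth values and names are illustrative assumptions) of rendering color or luminance at a diminished bit depth outside the focal area by quantizing channel values:

```python
def quantize_channel(value_8bit: int, bits: int) -> int:
    """Reduce an 8-bit channel value to `bits` of precision, simulating a
    diminished color or luminance bit depth for the peripheral region."""
    shift = 8 - bits
    return (value_8bit >> shift) << shift

def shade_pixel(rgb, in_focal_area, focal_bits=8, peripheral_bits=4):
    """Apply the higher bit depth inside the focal area and the diminished
    bit depth outside of it."""
    bits = focal_bits if in_focal_area else peripheral_bits
    return tuple(quantize_channel(c, bits) for c in rgb)

# A peripheral pixel keeps only its 4 most significant bits per channel.
print(shade_pixel((200, 130, 37), in_focal_area=False))
```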
- By way of non-limiting illustration, images rendered by
rendering component 118 may include one or more of a first image for a first frame corresponding to a first point in time, a second image for a second frame corresponding to a second point in time, and/or other images for other frames corresponding to other points in time. The first image may depict the virtual space within a field of view at a point in time corresponding to the first frame. A focal area within the field of view may be rendered according to one or more parameter values of one or more rendering parameters that may be different from parameter values for an area outside the focal area. - It is noted that one or more features and/or functions of
system 100 presented herein may be carried out for other multimedia types. For example, one or more features and/or functions presented herein may be applied in the framework of generating and/or rendering deep media video formats (e.g., 360 VR with parallax ability to move through a video space). In this case, the process of rendering and encoding the VR video format may be accelerated and made more efficient through machine learning prediction of a perceptual focal depth in the context of the offline processed video content. Then, during playback, the VR video decoding and display may be accelerated and made more efficient through machine learning as above but dynamically in real-time VR display. - Reference is now made to
FIGS. 2-8 which illustrate exemplary graphics of focal areas in frame images of an animation, in accordance with one or more implementations presented herein. -
FIGS. 2-5 illustrate various frame images having focal areas rendered in accordance with a latency-aware implementation of system 100, and/or other implementations. For illustrative purposes, the frame images shown in the figures will be considered as sequentially rendered and presented frame images of an animation. For example, FIG. 2 illustrates a first image 202 of a first frame that corresponds to a first point in time. The first image 202 may include a view of a virtual space 200 corresponding to a field of view of the virtual space determined for the first point in time. The view of the virtual space may include one or more virtual objects and their positions determined from state information at the first point in time. The one or more virtual objects may include a first virtual object 204, a second virtual object 206, and/or other virtual objects. The first virtual object 204 may be assigned a first role. The first role may comprise a main character role and/or other role. The second virtual object 206 may be assigned a second role. The second role may comprise a combatant role, and/or other role. - The
first image 202 may include a focal area 208 (having a peripheral edge shown by the dashed line and an imaginary center point shown by the "X"). The focal area 208 may comprise one or more of a foveal region 210 (having a peripheral edge shown by the dotted line and an imaginary center point shown by the "O"), an area 212 outside the foveal region 210, and/or other components. The center point "O" of the foveal region 210 may be a point of eye fixation. For illustrative purposes, the first image 202 may be associated with a point in time where the user has maintained a focus on the first virtual object 204 such that latency may not yet have come into effect. As such, the focal area 208 may be aligned with the user's foveal region 210 (shown by the "X" overlaid on the "O"). However, as will be described in more detail below, the effect of latency may cause the focal area 208 to lag in maintaining an alignment with the foveal region 210.
focal area 608 and area outside thefocal area 608 may indeed be indistinguishable. -
FIG. 3 illustrates a second image 302 of a second frame that corresponds to a second point in time. The second point in time may occur temporally after the first point in time. The second image 302 may include a view of the virtual space 200 corresponding to a field of view of the virtual space determined for the second point in time. The second image 302 may include the focal area 208 comprising one or more of the foveal region 210, the area 212 outside the foveal region 210, and/or other components. The second image 302 depicts movement of the second virtual object 206 toward the first virtual object 204, for example, in accordance with one or more of a combat scenario, dialog, and/or other interaction. -
FIG. 4 illustrates a third image 402 of a third frame that corresponds to a third point in time. The third point in time may occur temporally after the second point in time. The third image 402 may include a view of the virtual space 200 corresponding to a field of view of the virtual space determined for the third point in time. The third image 402 may include the focal area 208 comprising one or more of the foveal region 210, the area 212 outside the foveal region 210, and/or other components. The third image 402 depicts the second virtual object 206 being positioned adjacent the first virtual object 204, for example, during one or more of combat, dialog, and/or other interaction. - Returning to
FIG. 2, in some implementations, the size of the focal area 208 may be determined such that the area 212 outside the foveal region 210 has a width "W." The width "W" may be determined based on one or more of the user's maximum saccadic speed, a gaze adjustment latency (assumed as a constant value), and/or other factors. For example, the width "W" may be the product of the user's maximum saccadic speed and the gaze adjustment latency. Latency may cause the focal area 208 to lag in maintaining its alignment with the foveal region 210. As such, as the user's gaze direction changes, for example the user's gaze direction shifts in a first direction 214 (e.g., causing the foveal region 210 to move in the first direction 214 as well), the focal area 208 may not be able to "keep up" and stay aligned (FIG. 3). For illustrative purposes, the shift in the first direction 214 (FIG. 2) may be attributed to the user's attention being drawn to the movement of the second virtual object 206. Based on the size of the focal area 208 and in particular the width "W," by the time the peripheral edge of the foveal region 210 reaches the peripheral edge of the focal area 208 (FIG. 3), the gaze adjustment latency has lapsed and the focal area 208 can now "catch up" and re-align with the foveal region 210 (FIG. 4). For example, the gaze adjustment latency may be the amount of time between the first point in time and the third point in time.
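- By way of non-limiting illustration, a minimal sketch (Python; projecting the angular sweep onto the display via the viewing distance is an added assumption beyond the stated product of maximum saccadic speed and latency) of sizing the band between the foveal region and the focal area edge and checking whether the focal area can still catch up:

```python
import math

def margin_width_mm(max_saccade_rad_per_ms, latency_ms, viewing_distance_mm):
    """Width "W" of the band between the foveal region and the focal area
    edge: the angle the eye can sweep during the gaze adjustment latency,
    projected onto the display at the user's viewing distance."""
    sweep_rad = max_saccade_rad_per_ms * latency_ms
    return viewing_distance_mm * math.tan(sweep_rad)

def foveal_edge_inside_focal_area(shift_mm, width_mm):
    """True while the shifted foveal region has not surpassed the focal
    area's peripheral edge, i.e. the focal area can still catch up."""
    return shift_mm <= width_mm

# E.g., ~500 deg/s peak saccade speed, 30 ms latency, 600 mm from the display.
w = margin_width_mm(0.0087, 30.0, 600.0)
print(w, foveal_edge_inside_focal_area(100.0, w))
```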
- FIG. 5 illustrates a fourth image 502 of a fourth frame that corresponds to a fourth point in time. The fourth point in time may occur temporally after the third point in time. The fourth image 502 may include a view of the virtual space 200 corresponding to a field of view of the virtual space determined for the fourth point in time. The fourth image 502 may include a latency-adjusted focal area 504 comprising one or more of the foveal region 210, an adjusted area 506 outside the foveal region 210, and/or other components. The fourth image 502 depicts the second virtual object 206 moving away from the first virtual object 204, for example, post combat, dialog, and/or other interaction. - The
fourth image 502 may illustrate an implementation of system 100 (FIG. 1) where gaze adjustment latency may not be assumed as a constant value but may instead be determined on an on-going basis. By way of non-limiting example, at the fourth point in time, the latency component 114 (FIG. 1) may be configured to determine a new gaze adjustment latency. The new gaze adjustment latency may quantify latency in determining changes in the gaze direction between the second and third frames and making corresponding adjustments (e.g., positional and/or size adjustments) to the focal area 208 between the second and third frames. For illustrative purposes, the adjusted focal area 504 may have a size that may be increased relative to the focal area 208 in FIGS. 2-4 by virtue of the new gaze adjustment latency being higher than the gaze adjustment latency attributed to the preceding frames. However, if it is determined that gaze adjustment latency has decreased, the size of the focal area may be reduced (e.g., given that the focal area may now "keep up" faster with the user's tracked gaze direction). -
FIGS. 6-8 illustrate various frame images having focal areas rendered in accordance with a gaze-predictive implementation of system 100. For illustrative purposes, the frame images shown in the figures will be considered as sequentially presented frame images of an animation. For example, FIG. 6 illustrates a first image 602 of a first frame that corresponds to a first point in time. The first image 602 may include a view of a virtual space 600 corresponding to a field of view of the virtual space determined for the first point in time. The view of the virtual space may include one or more virtual objects and their positions determined from state information at the first point in time. The one or more virtual objects may include a first virtual object 604, a second virtual object 606, and/or other virtual objects. The first virtual object 604 may be assigned a main character role and/or other role. The second virtual object 606 may be assigned a combatant role and/or other role. - The
first image 602 may include a focal area 608 (having a peripheral edge shown by the dashed line and an imaginary center point shown by the "X"). The focal area 608 may comprise one or more of a foveal region 610 (having a peripheral edge shown by the dotted line and an imaginary center point shown by the "O"), an area 612 outside the foveal region 610, and/or other components. The center point "O" of the foveal region 610 may be a point of eye fixation. The focal area 608 may be aligned with the foveal region 610 (shown by the "X" overlaid on the "O"). -
FIG. 7 illustrates a second image 702 of a second frame that corresponds to a second point in time. The second point in time may occur temporally after the first point in time. The second image 702 may include a view of the virtual space 600 corresponding to a field of view of the virtual space determined for the second point in time. The second image 702 may include the focal area 608 comprising one or more of the foveal region 610, the area 612 outside the foveal region 610, and/or other components. The second image 702 depicts movement of the second virtual object 606 toward the first virtual object 604, for example, in accordance with one or more of a combat scenario, dialog, and/or other interaction. -
FIG. 8 illustrates a third image 802 of a third frame that corresponds to a third point in time. The third point in time may occur temporally after the second point in time. The third image 802 may include a view of the virtual space 600 corresponding to a field of view of the virtual space determined for the third point in time. The third image 802 may include the focal area 608 comprising one or more of the foveal region 610, the area 612 outside the foveal region 610, and/or other components. The third image 802 depicts the second virtual object 606 being positioned adjacent the first virtual object 604, for example, during combat, dialog, and/or other interaction. - The positional changes of the
focal area 608 between the frames of FIGS. 6-8 illustrate a result of a gaze-predictive implementation of system 100 (FIG. 1). By way of non-limiting example, the position of focal area 608 in FIG. 6 may correspond to the first virtual object 604 being determined (e.g., via machine learning) as a statistical target of eye fixation. This may be attributed to one or more of previous eye tracking sessions of one or more users that conveyed that users on average viewed the first virtual object 604 at the first point in time of the first image 602, the role assigned to the first virtual object 604, the position of the first virtual object 604 within a substantially central region of the image, and/or other factors. - The
focal area 608 may be adjusted in a first direction 614 (FIG. 6) toward the second virtual object 606. This adjustment may be based on a region that includes both the first virtual object 604 and second virtual object 606 in the second image 702 (FIG. 7) comprising a region of eye fixation. By way of non-limiting example, the position of the focal area 608 in FIG. 7 may correspond to the region embodying the focal area 608 being determined (e.g., via machine learning) as a statistical target of eye fixation. This may be attributed to one or more of previous eye tracking sessions of one or more users that conveyed that users on average shifted their attention toward the second virtual object 606, the movement of the second virtual object 606 toward the first virtual object 604 and away from the edge of the second image 702, the role assigned to the second virtual object 606, the position of the second virtual object 606, and/or other factors. It is noted that since the prediction of the focal area 608 may indeed be a prediction, the focal area 608 may not maintain a true alignment with the user's foveal region 610 (as illustrated by the offset center point "O" of the foveal region 610 with respect to the center point "X" of the focal area 608). However, as shown in FIG. 8, the user's foveal region 610 may catch up with the predicted focal area 608. - It is noted that the illustrations in
FIGS. 2-8 and corresponding descriptions were provided for illustrative purposes only and are not to be considered limiting. Instead, the illustrations were provided merely to show particular implementations of system 100 (FIG. 1) and the effect individual implementations may have on rendered frame images. In other implementations, the size and/or position of a focal area, the views of the virtual space, the virtual objects shown in the virtual space, and/or other aspects described specifically for FIGS. 2-8 may be different. - Returning to
FIG. 1, server 102, computing platform 124, external resources 122, gaze tracking device 126, and/or other entities participating in system 100 may be operatively linked via one or more electronic communication links. For example, such electronic communication links may be established, at least in part, via a network 120 such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting and that the scope of this disclosure includes implementations in which server 102, computing platform 124, external resources 122, and/or other entities participating in system 100 may be operatively linked via some other communication media. - The
external resources 122 may include sources of information, hosts, and/or providers of virtual spaces outside of system 100, external entities participating with system 100, external entities for player-to-player communications, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 122 may be provided by resources included in system 100. -
Server 102 may include electronic storage 119, one or more processors 104, and/or other components. Server 102 may include communication lines or ports to enable the exchange of information with network 120, computing platform 124, external resources 122, and/or other entities. Illustration of server 102 in FIG. 1 is not intended to be limiting. Server 102 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server 102. For example, server 102 may be implemented by a cloud of computing platforms operating together as server 102. -
Electronic storage 119 may comprise electronic storage media that electronically stores information. The electronic storage media of electronic storage 119 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with server 102 and/or removable storage that is removably connectable to server 102 via, for example, a port or a drive. A port may include a USB port, a firewire port, and/or other port. A drive may include a disk drive and/or other drive. Electronic storage 119 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storage 119 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage 119 may store software algorithms, information determined by processor(s) 104, information received from computing platform 124, and/or other information that enables server 102 to function as described herein. - Processor(s) 104 is configured to provide information-processing capabilities in
server 102. As such, processor(s) 104 may include one or more of a physical processor, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s) 104 is shown in FIG. 1 as a single entity, this is for illustrative purposes only. In some implementations, processor(s) 104 may include one or more processing units. These processing units may be physically located within the same device, or processor(s) 104 may represent processing functionality of a plurality of devices operating in coordination. Processor(s) 104 may be configured to execute components 108, 109, 110, 112, 114, 116, 118, and/or other components.
components FIG. 1 as being co-located within a single processing unit, in implementations in which processor(s) 104 includes multiple processing units, one or more ofcomponents different components components components components components -
FIG. 10 illustrates a method 1000 of latency-aware rendering of a focal area of an animation presented on a display. The operations of method 1000 presented below are intended to be illustrative. In some embodiments, method 1000 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1000 are illustrated in FIG. 10 and described below is not intended to be limiting. - In some implementations,
method 1000 may be implemented in a computer system comprising one or more of one or more processing devices (e.g., a physical processor, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information), storage media storing machine-readable instructions, and/or other components. The one or more processing devices may include one or more devices executing some or all of the operations of method 1000 in response to instructions stored electronically on electronic storage media. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1000.
- At an operation 1002, state information describing the state of a virtual space may be obtained. The state at an individual point in time may define one or more virtual objects within the virtual space, their positions, and/or other information. In some implementations, operation 1002 may be performed by one or more physical processors executing a space component the same as or similar to space component 108 (shown in FIG. 1 and described herein).
- At an operation 1004, a field of view of the virtual space may be determined. The frames of the animation may comprise images of the virtual space within the field of view. By way of non-limiting example, a first frame may comprise an image of the virtual space within the field of view at a point in time that corresponds to the first frame. In some implementations, operation 1004 may be performed by one or more physical processors executing a field of view component the same as or similar to field of view component 110 (shown in FIG. 1 and described herein).
- At an operation 1006, a gaze direction of a user within the field of view may be determined. The gaze direction may define a line of sight of the user. The user may view the animation via the display. In some implementations, operation 1006 may be performed by one or more physical processors executing a gaze component the same as or similar to the gaze component 112 (shown in FIG. 1 and described herein).
- At an operation 1008, gaze adjustment latency may be obtained. The gaze adjustment latency may quantify latency in one or more of determining changes in the gaze direction, making corresponding adjustments to a focal area within the field of view, and/or other operations of method 1000. In some implementations, operation 1008 may be performed by one or more physical processors executing a latency component the same as or similar to the latency component 114 (shown in FIG. 1 and described herein).
- At an operation 1010, a focal area within a field of view may be determined based on one or more of the gaze direction, gaze adjustment latency, and/or other information. The focal area may include one or more of a foveal region corresponding to the gaze direction, an area surrounding the foveal region, and/or other components. The foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight. In some implementations, operation 1010 may be performed by one or more physical processors executing a focal area component the same as or similar to the focal area component 116 (shown in FIG. 1 and described herein).
- At an operation 1012, individual images for individual frames of the animation may be rendered from the state information. Individual images may depict the virtual space within the field of view determined at individual points in time that correspond to individual frames. By way of non-limiting example, a first image may be rendered for the first frame. The first image may depict the virtual space within the field of view at the point in time corresponding to the first frame. A focal area within the field of view may be rendered at a higher resolution than an area outside the focal area. In some implementations, operation 1012 may be performed by one or more physical processors executing a render component the same as or similar to the render component 118 (shown in FIG. 1 and described herein). -
FIG. 11 illustrates a method 1100 of gaze-predictive rendering of a focal area of an animation presented on a display. The operations of method 1100 presented below are intended to be illustrative. In some embodiments, method 1100 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1100 are illustrated in FIG. 11 and described below is not intended to be limiting. - In some implementations,
method 1100 may be implemented in a computer system comprising one or more of one or more processing devices (e.g., a physical processor, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information), storage media storing machine-readable instructions, and/or other components. The one or more processing devices may include one or more devices executing some or all of the operations of method 1100 in response to instructions stored electronically on electronic storage media. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1100.
- At an operation 1102, state information describing the state of a virtual space may be obtained. The state at an individual point in time may define one or more virtual objects within the virtual space, their positions, and/or other information. In some implementations, operation 1102 may be performed by one or more physical processors executing a space component the same as or similar to space component 108 (shown in FIG. 1 and described herein).
- At an operation 1104, a field of view of the virtual space may be determined. The frames of the animation may comprise images of the virtual space within the field of view. By way of non-limiting example, a first frame may comprise an image of the virtual space within the field of view at a point in time that corresponds to the first frame. In some implementations, operation 1104 may be performed by one or more physical processors executing a field of view component the same as or similar to field of view component 110 (shown in FIG. 1 and described herein).
- At an operation 1106, a gaze direction of a user within the field of view may be predicted. The gaze direction may define a line of sight of the user. The user may view the animation via the display. In some implementations, operation 1106 may be performed by one or more physical processors executing a gaze component the same as or similar to the gaze component 112 (shown in FIG. 1 and described herein).
- At an operation 1108, a focal area within a field of view may be determined based on the predicted gaze direction and/or other information. The focal area may include one or more of a foveal region corresponding to the predicted gaze direction, an area surrounding the foveal region, and/or other components. The foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight. In some implementations, operation 1108 may be performed by one or more physical processors executing a focal area component the same as or similar to the focal area component 116 (shown in FIG. 1 and described herein).
- At an operation 1110, individual images for individual frames of the animation may be rendered from the state information. Individual images may depict the virtual space within the field of view determined at individual points in time that correspond to individual frames. By way of non-limiting example, a first image may be rendered for the first frame. The first image may depict the virtual space within the field of view at the point in time corresponding to the first frame. The focal area within the field of view may be rendered at a higher resolution than an area outside the focal area. In some implementations, operation 1110 may be performed by one or more physical processors executing a render component the same as or similar to the render component 118 (shown in FIG. 1 and described herein). -
FIG. 12 illustrates a method 1200 of bandwidth-sensitive rendering of a focal area of an animation presented on a display. The operations of method 1200 presented below are intended to be illustrative. In some embodiments, method 1200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1200 are illustrated in FIG. 12 and described below is not intended to be limiting. - In some implementations,
method 1200 may be implemented in a computer system comprising one or more of one or more processing devices (e.g., a physical processor, a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information), storage media storing machine-readable instructions, and/or other components. The one or more processing devices may include one or more devices executing some or all of the operations ofmethod 1200 in response to instructions stored electronically on electronic storage media. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations ofmethod 1200. - At an
operation 1202, state information describing state of a virtual space may be obtained. The state at an individual point in time may define one or more virtual objects within the virtual space, their positions, and/or other information. In some implementations,operation 1202 may be performed by one or more physical processors executing a space component the same as or similar to space component 108 (shown inFIG. 1 and described herein). - At an
operation 1204, a field of view of the virtual space may be determined. The frames of the animation may comprise images of the virtual space within the field of view. By way of non-limiting example, a first frame may comprise an image of the virtual space within the field of view at a point in time that corresponds to the first frame. In some implementations,operation 1204 may be performed by one or more physical processors executing a field of view component the same as or similar to field of view 110 (shown inFIG. 1 and described herein). - At an
operation 1206, a focal area within a field of view may be determined. The focal area may include one or more of a foveal region corresponding to a user's gaze direction, an area surrounding the foveal region, and/or other components. The foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight. In some implementations,operation 1206 may be performed by one or more physical processors executing a focal area component the same as or similar to the focal area component 116 (shown inFIG. 1 and described herein). - At an
At an operation 1208, individual images for individual frames of the animation may be rendered from the state information. Individual images may depict the virtual space within the field of view determined at individual points in time that correspond to individual frames. By way of non-limiting example, a first image may be rendered for the first frame. The first image may depict the virtual space within the field of view at the point in time corresponding to the first frame. The focal area within the field of view may be rendered with a higher color bit depth and/or higher luminance bit depth relative to the area outside the focal area. In some implementations, operation 1208 may be performed by one or more physical processors executing a render component the same as or similar to the render component 118 (shown in FIG. 1 and described herein).

Although the present technology has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the technology is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present technology contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
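As a closing illustration of the bit-depth-based variant described at operation 1208 above, the sketch below keeps full 8-bit color inside a focal-area mask and quantizes the periphery to a lower color bit depth. The 4-bit peripheral depth and the helper names are assumptions for demonstration, not the implementation of the render component.

```python
import numpy as np

def reduce_bit_depth(image, bits=4):
    """Quantize an 8-bit RGB image down to `bits` bits per channel."""
    step = 256 // (1 << bits)
    return ((image // step) * step + step // 2).astype(np.uint8)

def foveated_bit_depth(image, focal_mask, peripheral_bits=4):
    """Keep full 8-bit color inside the focal area, reduce bit depth outside.

    image is an (H, W, 3) uint8 array; focal_mask is an (H, W) boolean array
    that is True inside the focal area.
    """
    out = reduce_bit_depth(image, bits=peripheral_bits)
    out[focal_mask] = image[focal_mask]
    return out

# Toy example: a random frame with a rectangular focal area near the center.
frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
mask = np.zeros((1080, 1920), dtype=bool)
mask[400:680, 800:1120] = True
print(foveated_bit_depth(frame, mask).dtype)  # uint8
```

Only the peripheral pixels lose color precision, which is one way the bandwidth of the encoded frame could be reduced while the focal area stays visually intact.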
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/245,523 US10255714B2 (en) | 2016-08-24 | 2016-08-24 | System and method of gaze predictive rendering of a focal area of an animation |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180061116A1 true US20180061116A1 (en) | 2018-03-01 |
US10255714B2 US10255714B2 (en) | 2019-04-09 |
Family
ID=61243230
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/245,523 Active 2036-11-23 US10255714B2 (en) | 2016-08-24 | 2016-08-24 | System and method of gaze predictive rendering of a focal area of an animation |
Country Status (1)
Country | Link |
---|---|
US (1) | US10255714B2 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10979721B2 (en) * | 2016-11-17 | 2021-04-13 | Dolby Laboratories Licensing Corporation | Predicting and verifying regions of interest selections |
EP3602181A4 (en) * | 2017-03-27 | 2020-05-13 | Avegant Corp. | Steerable foveal display |
US10572764B1 (en) * | 2017-06-05 | 2020-02-25 | Google Llc | Adaptive stereo rendering to reduce motion sickness |
JP7235041B2 (en) * | 2018-03-26 | 2023-03-08 | ソニーグループ株式会社 | Information processing device, information processing method, and program |
US11169383B2 (en) | 2018-12-07 | 2021-11-09 | Avegant Corp. | Steerable positioning element |
CN109741463B (en) * | 2019-01-02 | 2022-07-19 | 京东方科技集团股份有限公司 | Rendering method, device and equipment of virtual reality scene |
CA3125739A1 (en) | 2019-01-07 | 2020-07-16 | Avegant Corp. | Control system and rendering pipeline |
CN217739617U (en) | 2019-03-29 | 2022-11-04 | 阿维甘特公司 | System for providing steerable hybrid display using waveguides |
KR20220120615A (en) | 2020-01-06 | 2022-08-30 | 아브간트 코포레이션 | Head-mounted system with color-specific modulation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120154277A1 (en) * | 2010-12-17 | 2012-06-21 | Avi Bar-Zeev | Optimized focal area for augmented reality displays |
US20140184475A1 (en) * | 2012-12-27 | 2014-07-03 | Andras Tantos | Display update time reduction for a near-eye display |
US20170123492A1 (en) * | 2014-05-09 | 2017-05-04 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5552805A (en) | 1994-11-25 | 1996-09-03 | Praxisoft, Inc. | Method and system for displaying blended colors |
EP1587329B1 (en) | 2003-01-20 | 2015-04-15 | Sanyo Electric Co., Ltd. | Three-dimensional video providing method and three-dimensional video display device |
US10536670B2 (en) | 2007-04-25 | 2020-01-14 | David Chaum | Video copy prevention systems with interaction and compression |
US20110091130A1 (en) | 2008-06-09 | 2011-04-21 | Universite De Montreal | Method and module for improving image fidelity |
US9823745B1 (en) | 2012-08-30 | 2017-11-21 | Atheer, Inc. | Method and apparatus for selectively presenting content |
US20140176591A1 (en) | 2012-12-26 | 2014-06-26 | Georg Klein | Low-latency fusing of color image data |
US9684976B2 (en) | 2013-03-13 | 2017-06-20 | Qualcomm Incorporated | Operating system-resident display module parameter selection system |
EP3537324B1 (en) | 2013-03-15 | 2022-03-16 | Intel Corporation | Technologies for secure storage and use of biometric authentication information |
EP3048955A2 (en) | 2013-09-25 | 2016-08-03 | MindMaze SA | Physiological parameter measurement and feedback system |
CN109597202B (en) | 2013-11-27 | 2021-08-03 | 奇跃公司 | Virtual and augmented reality systems and methods |
US9766463B2 (en) | 2014-01-21 | 2017-09-19 | Osterhout Group, Inc. | See-through computer display systems |
US9753288B2 (en) | 2014-01-21 | 2017-09-05 | Osterhout Group, Inc. | See-through computer display systems |
US10043281B2 (en) | 2015-06-14 | 2018-08-07 | Sony Interactive Entertainment Inc. | Apparatus and method for estimating eye gaze location |
EP4090034A1 (en) | 2015-08-07 | 2022-11-16 | Apple Inc. | System and method for displaying a stream of images |
US11010956B2 (en) | 2015-12-09 | 2021-05-18 | Imagination Technologies Limited | Foveated rendering |
US10372205B2 (en) | 2016-03-31 | 2019-08-06 | Sony Interactive Entertainment Inc. | Reducing rendering computation and power consumption by detecting saccades and blinks |
US11024014B2 (en) | 2016-06-28 | 2021-06-01 | Microsoft Technology Licensing, Llc | Sharp text rendering with reprojection |
2016-08-24: US application US15/245,523 filed; issued as patent US10255714B2 (status: Active)
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11699266B2 (en) * | 2015-09-02 | 2023-07-11 | Interdigital Ce Patent Holdings, Sas | Method, apparatus and system for facilitating navigation in an extended scene |
US20170270383A1 (en) * | 2016-03-18 | 2017-09-21 | Fuji Jukogyo Kabushiki Kaisha | Search assisting apparatus, search assisting method, and computer readable medium |
US10176394B2 (en) * | 2016-03-18 | 2019-01-08 | Subaru Corporation | Search assisting apparatus, search assisting method, and computer readable medium |
US10296940B2 (en) * | 2016-08-26 | 2019-05-21 | Minkonet Corporation | Method of collecting advertisement exposure data of game video |
US10274734B2 (en) * | 2016-08-31 | 2019-04-30 | Lg Display Co., Ltd. | Personal immersive display device and driving method thereof |
US10330935B2 (en) * | 2016-09-22 | 2019-06-25 | Apple Inc. | Predictive, foveated virtual reality system |
US10739599B2 (en) * | 2016-09-22 | 2020-08-11 | Apple Inc. | Predictive, foveated virtual reality system |
US10564715B2 (en) * | 2016-11-14 | 2020-02-18 | Google Llc | Dual-path foveated graphics pipeline |
US11222397B2 (en) * | 2016-12-23 | 2022-01-11 | Qualcomm Incorporated | Foveated rendering in tiled architectures |
US11330979B2 (en) | 2016-12-29 | 2022-05-17 | Intel Corporation | Focus adjustment method and apparatus |
US10448824B2 (en) * | 2016-12-29 | 2019-10-22 | Intel Corporation | Focus adjustment method and apparatus |
US11941169B2 (en) | 2017-04-07 | 2024-03-26 | Intel Corporation | Apparatus and method for foveated rendering, bin comparison and TBIMR memory-backed storage for virtual reality implementations |
US11269409B2 (en) * | 2017-04-07 | 2022-03-08 | Intel Corporation | Apparatus and method for foveated rendering, bin comparison and TBIMR memory-backed storage for virtual reality implementations |
US20180307311A1 (en) * | 2017-04-21 | 2018-10-25 | Accenture Global Solutions Limited | Multi-device virtual reality, artificial reality and mixed reality analytics |
US10712814B2 (en) * | 2017-04-21 | 2020-07-14 | Accenture Global Solutions Limited | Multi-device virtual reality, augmented reality and mixed reality analytics |
US10942575B2 (en) * | 2017-06-07 | 2021-03-09 | Cisco Technology, Inc. | 2D pointing indicator analysis |
US10643581B2 (en) * | 2017-10-16 | 2020-05-05 | Samsung Display Co., Ltd. | Head mount display device and operation method of the same |
US10977859B2 (en) * | 2017-11-24 | 2021-04-13 | Frederic Bavastro | Augmented reality method and system for design |
US10580207B2 (en) * | 2017-11-24 | 2020-03-03 | Frederic Bavastro | Augmented reality method and system for design |
US11341721B2 (en) | 2017-11-24 | 2022-05-24 | Frederic Bavastro | Method for generating visualizations |
US10764581B2 (en) * | 2018-05-24 | 2020-09-01 | Lockheed Martin Corporation | Multi-resolution regionalized data transmission |
US11418782B2 (en) * | 2018-05-24 | 2022-08-16 | Lockheed Martin Corporation | Multi-resolution regionalized data transmission |
CN109242943A (en) * | 2018-08-21 | 2019-01-18 | 腾讯科技(深圳)有限公司 | A kind of image rendering method, device and image processing equipment, storage medium |
WO2020040865A1 (en) * | 2018-08-22 | 2020-02-27 | Microsoft Technology Licensing, Llc | Foveated color correction to improve color uniformity of head-mounted displays |
US11347056B2 (en) | 2018-08-22 | 2022-05-31 | Microsoft Technology Licensing, Llc | Foveated color correction to improve color uniformity of head-mounted displays |
WO2020099046A1 (en) * | 2018-11-15 | 2020-05-22 | Bayerische Motoren Werke Aktiengesellschaft | Dynamic information protection for display devices |
US11954920B2 (en) | 2018-11-15 | 2024-04-09 | Bayerische Motoren Werke Aktiengesellschaft | Dynamic information protection for display devices |
US20220292548A1 (en) * | 2018-12-24 | 2022-09-15 | Infilect Technologies Private Limited | System and method for generating a modified design creative |
US11699162B2 (en) * | 2018-12-24 | 2023-07-11 | Infilect Technologies Private Limited | System and method for generating a modified design creative |
US20220129911A1 (en) * | 2019-02-05 | 2022-04-28 | Infilect Technologies Private Limited | System and method for quantifying brand visibility and compliance metrics for a brand |
US11966929B2 (en) * | 2019-02-05 | 2024-04-23 | Infilect Technologies Private Limited | System and method for quantifying brand visibility and compliance metrics for a brand |
CN110009714A (en) * | 2019-03-05 | 2019-07-12 | 重庆爱奇艺智能科技有限公司 | The method and device of virtual role expression in the eyes is adjusted in smart machine |
US11662807B2 (en) * | 2020-01-06 | 2023-05-30 | Tectus Corporation | Eye-tracking user interface for virtual tool control |
GB2599900A (en) * | 2020-10-09 | 2022-04-20 | Sony Interactive Entertainment Inc | Data processing system and method for image enhancement |
US20220113795A1 (en) * | 2020-10-09 | 2022-04-14 | Sony Interactive Entertainment Inc. | Data processing system and method for image enhancement |
GB2599900B (en) * | 2020-10-09 | 2023-01-11 | Sony Interactive Entertainment Inc | Data processing system and method for image enhancement |
CN113362449A (en) * | 2021-06-01 | 2021-09-07 | 聚好看科技股份有限公司 | Three-dimensional reconstruction method, device and system |
US20230206656A1 (en) * | 2021-12-24 | 2023-06-29 | Vinai Artificial Intelligence Application And Research Joint Stock Company | Method and system for training a machine learning model for point of gaze prediction |
US20230381645A1 (en) * | 2022-05-27 | 2023-11-30 | Sony Interactive Entertainment LLC | Methods and systems to activate selective navigation or magnification of screen content |
US12145060B2 (en) * | 2022-05-27 | 2024-11-19 | Sony Interactive Entertainment LLC | Methods and systems to activate selective navigation or magnification of screen content |
Also Published As
Publication number | Publication date |
---|---|
US10255714B2 (en) | 2019-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10255714B2 (en) | System and method of gaze predictive rendering of a focal area of an animation | |
US10042421B2 (en) | System and method of latency-aware rendering of a focal area of an animation | |
US20180061084A1 (en) | System and method of bandwidth-sensitive rendering of a focal area of an animation | |
US12118676B2 (en) | Sensory stimulus management in head mounted display | |
US11199705B2 (en) | Image rendering responsive to user actions in head mounted display | |
US10210666B2 (en) | Filtering and parental control methods for restricting visual activity on a head mounted display | |
EP3265864B1 (en) | Tracking system for head mounted display | |
US9551873B2 (en) | Head mounted device (HMD) system having interface with mobile computing device for rendering virtual reality content | |
EP3241207A2 (en) | Scanning display system in head-mounted display for virtual reality | |
CN114967917B (en) | Method and system for determining a current gaze direction | |
US11270475B2 (en) | Variable rendering system and method | |
Ponto et al. | Online real-time presentation of virtual experiences for external viewers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: THE WALT DISNEY COMPANY LIMITED, UNITED KINGDOM. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: MITCHELL, KENNETH J.; ANDREWS, SHELDON; SIGNING DATES FROM 20160817 TO 20160822; REEL/FRAME: 039523/0094. Owner name: UNIVERSITY OF BATH, UNITED KINGDOM. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: COSKER, DARREN; SWAFFORD, NICHOLAS T.; SIGNING DATES FROM 20160817 TO 20160823; REEL/FRAME: 039523/0183. Owner name: DISNEY ENTERPRISES, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: UNIVERSITY OF BATH; REEL/FRAME: 039798/0243. Effective date: 20160817 |
 | AS | Assignment | Owner name: DISNEY ENTERPRISES, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: THE WALT DISNEY COMPANY LIMITED; REEL/FRAME: 039818/0349. Effective date: 20160909 |
 | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
 | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |