WO2019217262A1 - Dynamic foveated rendering - Google Patents

Dynamic foveated rendering

Info

Publication number
WO2019217262A1
Authority
WO
WIPO (PCT)
Prior art keywords
resolution function
eye tracking
rendering
rendering resolution
user
Prior art date
Application number
PCT/US2019/030820
Other languages
French (fr)
Original Assignee
Zermatt Technologies Llc
Priority date
Filing date
Publication date
Application filed by Zermatt Technologies Llc
Publication of WO2019217262A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167 Position within a video image, e.g. region of interest [ROI]
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0147 Head-up displays characterised by optical features comprising a device modifying the resolution of the displayed image
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • the present disclosure generally relates to image rendering, and in particular, to systems, methods, and devices for rendering images for simulated reality with a varying amount of detail.
  • a physical setting refers to a world that individuals can sense and/or with which individuals can interact without assistance of electronic systems.
  • Physical settings (e.g., a physical forest) include physical elements (e.g., physical trees, physical structures, and physical animals). Individuals can directly interact with and/or sense the physical setting, such as through touch, sight, smell, hearing, and taste.
  • a simulated reality (SR) setting refers to an entirely or partly computer-created setting that individuals can sense and/or with which individuals can interact via an electronic system.
  • In SR, a subset of an individual’s movements is monitored, and, responsive thereto, one or more attributes of one or more virtual objects in the SR setting are changed in a manner that conforms with one or more physical laws.
  • a SR system may detect an individual walking a few paces forward and, responsive thereto, adjust graphics and audio presented to the individual in a manner similar to how such scenery and sounds would change in a physical setting. Modifications to attribute(s) of virtual object(s) in a SR setting also may be made responsive to representations of movement (e.g., audio instructions).
  • An individual may interact with and/or sense a SR object using any one of his senses, including touch, smell, sight, taste, and sound.
  • an individual may interact with and/or sense aural objects that create a multi-dimensional (e.g., three dimensional) or spatial aural setting, and/or enable aural transparency.
  • Multi-dimensional or spatial aural settings provide an individual with a perception of discrete aural sources in multi-dimensional space.
  • Aural transparency selectively incorporates sounds from the physical setting, either with or without computer-created audio.
  • an individual may interact with and/or sense only aural objects.
  • a VR setting refers to a simulated setting that is designed only to include computer-created sensory inputs for at least one of the senses.
  • a VR setting includes multiple virtual objects with which an individual may interact and/or sense. An individual may interact and/or sense virtual objects in the VR setting through a simulation of a subset of the individual’s actions within the computer-created setting, and/or through a simulation of the individual or his presence within the computer-created setting.
  • a MR setting refers to a simulated setting that is designed to integrate computer-created sensory inputs (e.g., virtual objects) with sensory inputs from the physical setting, or a representation thereof.
  • a mixed reality setting falls between, but does not include, a VR setting at one end and an entirely physical setting at the other end.
  • computer-created sensory inputs may adapt to changes in sensory inputs from the physical setting.
  • some electronic systems for presenting MR settings may monitor orientation and/or location with respect to the physical setting to enable interaction between virtual objects and real objects (which are physical elements from the physical setting or representations thereof). For example, a system may monitor movements so that a virtual plant appears stationary with respect to a physical building.
  • An AR setting refers to a simulated setting in which at least one virtual object is superimposed over a physical setting, or a representation thereof.
  • an electronic system may have an opaque display and at least one imaging sensor for capturing images or video of the physical setting, which are representations of the physical setting. The system combines the images or video with virtual objects, and displays the combination on the opaque display.
  • An individual using the system views the physical setting indirectly via the images or video of the physical setting, and observes the virtual objects superimposed over the physical setting.
  • When a system uses image sensor(s) to capture images of the physical setting and presents the AR setting on the opaque display using those images, the displayed images are called video pass-through.
  • an electronic system for displaying an AR setting may have a transparent or semi-transparent display through which an individual may view the physical setting directly.
  • the system may display virtual objects on the transparent or semi-transparent display, so that an individual, using the system, observes the virtual objects superimposed over the physical setting.
  • a system may comprise a projection system that projects virtual objects into the physical setting.
  • the virtual objects may be projected, for example, on a physical surface or as a holograph, so that an individual, using the system, observes the virtual objects superimposed over the physical setting.
  • An augmented reality setting also may refer to a simulated setting in which a representation of a physical setting is altered by computer-created sensory information.
  • a portion of a representation of a physical setting may be graphically altered (e.g., enlarged), such that the altered portion may still be representative of but not a faithfully-reproduced version of the originally captured image(s).
  • a system may alter at least one of the sensor images to impose a particular viewpoint different than the viewpoint captured by the image sensor(s).
  • a representation of a physical setting may be altered by graphically obscuring or excluding portions thereof.
  • An AV setting refers to a simulated setting in which a computer-created or virtual setting incorporates at least one sensory input from the physical setting.
  • the sensory input(s) from the physical setting may be representations of at least one characteristic of the physical setting.
  • a virtual object may assume a color of a physical element captured by imaging sensor(s).
  • a virtual object may exhibit characteristics consistent with actual weather conditions in the physical setting, as identified via imaging, weather-related sensors, and/or online weather data.
  • an augmented reality forest may have virtual trees and structures, but the animals may have features that are accurately reproduced from images taken of physical animals.
  • a head mounted system may have an opaque display and speaker(s).
  • a head mounted system may be designed to receive an external display (e.g., a smartphone).
  • the head mounted system may have imaging sensor(s) and/or microphones for taking images/video and/or capturing audio of the physical setting, respectively.
  • a head mounted system also may have a transparent or semi-transparent display.
  • the transparent or semi-transparent display may incorporate a substrate through which light representative of images is directed to an individual’s eyes.
  • the display may incorporate LEDs, OLEDs, a digital light projector, a laser scanning light source, liquid crystal on silicon, or any combination of these technologies.
  • the substrate through which the light is transmitted may be a light waveguide, optical combiner, optical reflector, holographic substrate, or any combination of these substrates.
  • the transparent or semi-transparent display may transition selectively between an opaque state and a transparent or semi-transparent state.
  • the electronic system may be a projection-based system.
  • a projection-based system may use retinal projection to project images onto an individual’s retina.
  • a projection system also may project virtual objects into a physical setting (e.g., onto a physical surface or as a holograph).
  • SR systems include heads up displays, automotive windshields with the ability to display graphics, windows with the ability to display graphics, lenses with the ability to display graphics, headphones or earphones, speaker arrangements, input mechanisms (e.g., controllers having or not having haptic feedback), tablets, smartphones, and desktop or laptop computers.
  • Rendering an image for an SR experience can be computationally expensive.
  • portions of the image are rendered on a display panel with different resolutions. For example, in various implementations, portions corresponding to a user’s field of focus are rendered with higher resolution than portions corresponding to a user’s periphery.
  • Figure 1 is a block diagram of an example operating environment in accordance with some implementations.
  • Figure 2 illustrates an SR pipeline that receives SR content and displays an image on a display panel based on the SR content in accordance with some implementations.
  • Figures 3A-3D illustrate various rendering resolution functions in a first dimension in accordance with various implementations.
  • Figures 4A-4D illustrate various two-dimensional rendering resolution functions in accordance with various implementations.
  • Figure 5A illustrates an example rendering resolution function that characterizes a resolution in a display space as a function of angle in a warped space in accordance with some implementations.
  • Figure 5B illustrates the integral of the example rendering resolution function of Figure 5A in accordance with some implementations.
  • Figure 5C illustrates the tangent of the inverse of the integral of the example rendering resolution function of Figure 5A in accordance with some implementations.
  • Figure 6A illustrates an example rendering resolution function for performing static foveation in accordance with some implementations.
  • Figure 6B illustrates an example rendering resolution function for performing dynamic foveation in accordance with some implementations.
  • Figure 7 is a flowchart representation of a method of rendering an image based on a rendering resolution function in accordance with some implementations.
  • Figure 8A illustrates an example image representation, in a display space, of SR content to be rendered in accordance with some implementations.
  • Figure 8B illustrates a warped image of the SR content of Figure 8A in accordance with some implementations.
  • Figure 9 is a flowchart representation of a method of rendering an image in one of a plurality of foveation modes in accordance with some implementations.
  • Figures 10A-10C illustrate various constrained rendering resolution functions in accordance with various implementations.
  • Figure 11 is a flowchart representation of a method of rendering an image with a constrained rendering resolution function in accordance with some implementations.
  • Figure 12 is a flowchart representation of a method of rendering an image based on eye tracking metadata in accordance with some implementations.
  • Figures 13A-13B illustrate various confidence-based rendering resolution functions in accordance with various implementations.
  • Figure 14 is a flowchart representation of a method of rendering an image based on SR content in accordance with some implementations.
  • Various implementations disclosed herein include devices, systems, and methods for rendering an image based on a rendering resolution function.
  • the method includes obtaining SR content to be rendered into a display space and obtaining a rendering resolution function defining a mapping between the display space and a warped space, the rendering resolution function depending on a fixation point of a user in the display space.
  • the method further includes generating a rendered image based on the SR content and the rendering resolution function, wherein the rendered image includes a warped image with a plurality of pixels at respective locations uniformly spaced in a grid pattern in the warped space, wherein the plurality of pixels are respectively associated with a plurality of respective pixel values based on the SR content and a plurality of respective scaling factors defining an area in the display space based on the rendering resolution function.
  • Various implementations disclosed herein include devices, systems, and methods for rendering an image in one of a plurality of foveation modes.
  • the method includes obtaining eye tracking data indicative of a gaze of a user and obtaining SR content to be rendered.
  • the method includes determining a foveation mode to apply to rendering the SR content.
  • the method further includes, in accordance with a determination that the foveation mode is a dynamic foveation mode, rendering the SR content according to dynamic foveation based on the eye tracking data.
  • Various implementations disclosed herein include devices, systems, and methods for rendering an image based on a constrained rendering resolution function.
  • the method includes obtaining eye tracking data indicative of a gaze of a user.
  • the method includes generating a rendering resolution function based on the eye tracking data, the rendering resolution function having a maximum value dependent on the eye tracking data and a summation value independent of the eye tracking data.
  • the method includes generating a rendered image based on SR content and the rendering resolution function.
  • Various implementations disclosed herein include devices, systems, and methods for rendering an image based on eye tracking metadata.
  • the method includes obtaining eye tracking data indicative of a gaze of a user and obtaining eye tracking metadata indicative of a characteristic of the eye tracking data.
  • the method includes generating a rendering resolution function based on the eye tracking data and the eye tracking metadata.
  • the method further includes generating a rendered image based on SR content and the rendering resolution function.
  • Various implementations disclosed herein include devices, systems, and methods for rendering an image based on SR content.
  • the method includes obtaining eye tracking data indicative of a gaze of a user and obtaining SR content to be rendered.
  • the method includes generating a rendering resolution function based on the eye tracking data and the SR content.
  • the method includes generating a rendered image based on the SR content and the rendering resolution function.
  • a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein.
  • a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein.
  • a device includes: one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
  • FIG. 1 is a block diagram of an example operating environment 100 in accordance with some implementations. While pertinent features are shown, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, as a non-limiting example, the operating environment 100 includes a controller 110 and a head-mounted device (HMD) 120.
  • the controller 110 is configured to manage and coordinate a simulated reality (SR) experience for the user.
  • the controller 110 includes a suitable combination of software, firmware, and/or hardware.
  • the controller 110 is a computing device that is local or remote relative to the scene 105.
  • the controller 110 is a local server located within the scene 105.
  • the controller 110 is a remote server located outside of the scene 105 (e.g., a cloud server, central server, etc.).
  • the controller 110 is communicatively coupled with the HMD 120 via one or more wired or wireless communication channels 144 (e.g., BLUETOOTH, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.). In another example, the controller 110 is included within the enclosure of HMD 120.
  • the HMD 120 is configured to present the SR experience to the user.
  • the HMD 120 includes a suitable combination of software, firmware, and/or hardware.
  • the functionalities of the controller 110 are provided by and/or combined with the HMD 120.
  • the HMD 120 provides an SR experience to the user while the user is virtually and/or physically present within the scene 105.
  • the HMD 120 is configured to present AR content (e.g., one or more virtual objects) and to enable optical see-through of the scene 105.
  • the HMD 120 is configured to present AR content (e.g., one or more virtual objects) overlaid or otherwise combined with images or portions thereof captured by the scene camera of HMD 120.
  • while presenting AV content, the HMD 120 is configured to present elements of the real world, or representations thereof, combined with or superimposed over a user’s view of a computer-simulated environment.
  • the HMD 120 is configured to present VR content.
  • the user wears the HMD 120 on his/her head.
  • the HMD 120 includes one or more SR displays provided to display the SR content, optionally through an eyepiece or other optical lens system.
  • the HMD 120 encloses the field-of-view of the user.
  • the HMD 120 is replaced with a handheld device (such as a smartphone or tablet) configured to present SR content in which the user does not wear the HMD 120, but holds the device with a display directed towards the field-of-view of the user and a camera directed towards the scene 105.
  • the handheld device can be placed within an enclosure that can be worn on the head of the user.
  • the HMD 120 is replaced with an SR chamber, enclosure, or room configured to present SR content, wherein the user does not wear or hold the HMD 120.
  • the HMD 120 includes an SR pipeline that presents the SR content.
  • Figure 2 illustrates an SR pipeline 200 that receives SR content and displays an image on a display panel 240 based on the SR content.
  • the SR pipeline 200 includes a rendering module 210 that receives the SR content (and eye tracking data from an eye tracker 260) and renders an image based on the SR content.
  • SR content includes definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), and other information describing content to be represented in the rendered image.
  • An image includes a matrix of pixels, each pixel having a corresponding pixel value and a corresponding pixel location.
  • the pixel values range from 0 to 255.
  • each pixel value is a color triplet including three values corresponding to three color channels.
  • an image is an RGB image and each pixel value includes a red value, a green value, and a blue value.
  • an image is a YUV image and each pixel value includes a luminance value and two chroma values.
  • the image is a YUV444 image in which each chroma value is associated with one pixel.
  • the image is a YUV420 image in which each chroma value is associated with a 2x2 block of pixels (e.g., the chroma values are downsampled).
  • an image includes a matrix of tiles, each tile having a corresponding tile location and including a block of pixels with corresponding pixel values.
  • each tile is a 32x32 block of pixels. While specific pixel values, image formats, and tile sizes are provided, it should be appreciated that other values, formats, and tile sizes may be used.
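  • As a concrete, non-normative illustration of the tiled image layout described above, the following Python sketch splits an 8-bit RGB frame into 32x32 tiles; the frame dimensions and helper name are assumptions for illustration, not values or code from the patent.

```python
import numpy as np

TILE_SIZE = 32  # each tile is a 32x32 block of pixels, as described above

def split_into_tiles(image: np.ndarray, tile: int = TILE_SIZE) -> np.ndarray:
    """Split an (M, N, 3) image into a matrix of tile x tile blocks.

    Assumes M and N are multiples of the tile size.
    """
    m, n, c = image.shape
    return image.reshape(m // tile, tile, n // tile, tile, c).swapaxes(1, 2)

frame = np.zeros((1024, 2048, 3), dtype=np.uint8)  # 8-bit RGB: pixel values range from 0 to 255
tiles = split_into_tiles(frame)
print(tiles.shape)  # (32, 64, 32, 32, 3): a 32x64 matrix of 32x32 RGB tiles
```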
  • the image rendered by the rendering module 210 (e.g., the rendered image) is provided to a transport module 220 that couples the rendering module 210 to a display module 230.
  • the transport module 220 includes a compression module 222 that compresses the rendered image (resulting in a compressed image), a communications channel 224 that carries the compressed image, and a decompression module 226 that decompresses the compressed image (resulting in a decompressed image).
  • the decompressed image is provided to a display module 230 that converts the decompressed image into panel data.
  • the panel data is provided to a display panel 240 that displays a displayed image as described by (e.g., according to) the panel data.
  • the display module 230 includes a lens compensation module 232 that compensates for distortion caused by an eyepiece 242 of the HMD.
  • the lens compensation module 232 predistorts the decompressed image in an inverse relationship to the distortion caused by the eyepiece 242 such that the displayed image, when viewed through the eyepiece 242 by a user 250, appears undistorted.
  • the display module 230 also includes a panel compensation module 234 that converts image data into panel data to be read by the display panel 240.
  • the eyepiece 242 limits the resolution that can be perceived by the user 250.
  • the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of distance from an origin of the display space.
  • the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of an angle between the optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the eyepiece 242.
  • the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of an angle between the optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240.
  • the display panel 240 includes a matrix of MxN pixels located at respective locations in a display space.
  • the display panel 240 displays the displayed image by emitting light from each of the pixels as described by (e.g., according to) the panel data.
  • the SR pipeline 200 includes an eye tracker 260 that generates eye tracking data indicative of a gaze of the user 250.
  • the eye tracking data includes data indicative of a fixation point of the user 250 on the display panel 240.
  • the eye tracking data includes data indicative of a gaze angle of the user 250, such as the angle between the current optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240.
  • in order to render an image for display on the display panel 240, the rendering module 210 generates MxN pixel values for each pixel of an MxN image.
  • each pixel of the rendered image corresponds to a pixel of the display panel 240 with a corresponding location in the display space.
  • the rendering module 210 generates a pixel value for MxN pixel locations uniformly spaced in a grid pattern in the display space.
  • the rendering module 210 generates a tile of TxT pixels, each pixel having a corresponding pixel value, at M/TxN/T tile locations uniformly spaced in a grid pattern in the display space.
  • Rendering MxN pixel values can be computationally expensive. Further, as the size of the rendered image increases, so does the amount of processing needed to compress the image at the compression module 222, the amount of bandwidth needed to transport the compressed image across the communications channel 224, and the amount of processing needed to decompress the compressed image at the decompression module 226.
  • Foveation (e.g., foveated imaging) is a digital image processing technique in which the image resolution, or amount of detail, varies across an image.
  • a foveated image has different resolutions at different parts of the image.
  • Humans typically have relatively weak peripheral vision.
  • the resolvable resolution for a user is at a maximum over a field of fixation (e.g., where the user is gazing) and falls off in an inverse linear fashion with distance from the field of fixation.
  • the displayed image displayed by the display panel 240 is a foveated image having a maximum resolution at a field of focus and a resolution that decreases in an inverse linear fashion in proportion to the distance from the field of focus.
  • the foveated image perceptually matches a non-foveated image, e.g., the processing is “lossless.”
  • the foveated image is perceptually better than a non-foveated image, e.g., the quality of the image is greater at the gaze location than a non-foveated image of greater size.
  • the foveated image is perceptually degraded as compared to a non-foveated image, but more efficient in power/bandwidth, e.g., the processing is “lossy.”
  • an MxN foveated image includes less information than an MxN unfoveated image.
  • the rendering module 210 generates, as a rendered image, a foveated image.
  • the rendering module 210 can generate an MxN foveated image more quickly and with less processing power (and battery power) than the rendering module 210 can generate an MxN unfoveated image.
  • an MxN foveated image can be expressed with less data than an MxN unfoveated image.
  • an MxN foveated image file is smaller in size than an MxN unfoveated image file.
  • compressing an MxN foveated image using various compression techniques results in fewer bits than compressing an MxN unfoveated image.
  • a foveation ratio, R, can be defined as the amount of information in the MxN unfoveated image divided by the amount of information in the MxN foveated image.
  • the foveation ratio is between 1.5 and 10.
  • the foveation ratio is 2.
  • the foveation ratio is 3 or 4.
  • the foveation ratio is constant among images.
  • the foveation ratio is selected based on the image being rendered.
  • in order to render an image for display on the display panel 240, the rendering module 210 generates M/RxN/R pixel values for each pixel of an M/RxN/R warped image. Each pixel of the warped image corresponds to an area greater than a pixel of the display panel 240 at a corresponding location in the display space. Thus, the rendering module 210 generates a pixel value for each of M/RxN/R locations in the display space that are not uniformly distributed in a grid pattern.
  • the rendering module 210 generates a tile of TxT pixels, each pixel having a corresponding pixel value, at each of M/(RT)xN/(RT) locations in the display space that are not uniformly distributed in a grid pattern.
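  • A small worked example of the pixel and tile counts implied by the description above, using illustrative values of M, N, R, and T (these numbers are assumptions, not values from the patent):

```python
M, N = 2048, 2048   # display panel resolution (illustrative)
R = 2               # foveation ratio (the text above gives typical values of roughly 1.5 to 10)
T = 32              # tile size

# Without foveation: MxN pixel values at M/T x N/T uniformly spaced tile locations.
full_pixels = M * N                                # 4,194,304 pixel values
full_tiles = (M // T) * (N // T)                   # 64 * 64 = 4,096 tiles

# With foveation into a warped image: M/R x N/R pixel values at
# M/(RT) x N/(RT) tile locations that are non-uniformly spaced in display space.
warped_pixels = (M // R) * (N // R)                # 1,048,576 pixel values
warped_tiles = (M // (R * T)) * (N // (R * T))     # 32 * 32 = 1,024 tiles
```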
  • the respective area in the display space corresponding to each pixel value (or each tile) is defined by the corresponding location in the display space (a rendering location) and a scaling factor (or a set of a horizontal scaling factor and a vertical scaling factor).
  • the rendering module 210 generates, as a rendered image, a warped image.
  • the warped image includes a matrix of M/RxN/R pixel values for M/RxN/R locations uniformly spaced in a grid pattern in a warped space that is different than the display space.
  • the warped image includes a matrix of M/RxN/R pixel values for M/RxN/R locations in the display space that are not uniformly distributed in a grid pattern.
  • while the resolution of the warped image is uniform in the warped space, the resolution varies in the display space. This is described in greater detail below with respect to Figures 8A and 8B.
  • the rendering module 210 determines the rendering locations and the corresponding scaling factors based on a rendering resolution function that generally characterizes the resolution of the rendered image in the displayed space.
  • the rendering resolution function, S(x), is a function of a distance from an origin of the display space (which may correspond to the center of the display panel 240).
  • the rendering resolution function, S(θ), is a function of an angle between an optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240.
  • the rendering resolution function, S(θ), is expressed in pixels per degree (PPD).
  • the rendering resolution function (in a first dimension) is defined in terms of the following parameters:
  • S_max is the maximum of the rendering resolution function (e.g., approximately 60 PPD),
  • S_min is the asymptote of the rendering resolution function, θ_fof characterizes the size of the field of focus, and
  • w characterizes the width of the rendering resolution function.
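  • The equation itself is not reproduced in this text. One plausible inverse-linear form consistent with the listed parameters (a plateau of S_max over the field of focus, an asymptote of S_min, and a width parameter w, as in Figure 3A) would be the following; this is a hedged reconstruction, not the formula as published:

```latex
S(\theta) =
\begin{cases}
  S_{\max}, & |\theta| \le \theta_{fof} \\[4pt]
  S_{\min} + \dfrac{S_{\max} - S_{\min}}{1 + w\left(|\theta| - \theta_{fof}\right)}, & |\theta| > \theta_{fof}
\end{cases}
```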
  • Figure 3A illustrates a rendering resolution function 310 (in a first dimension) which falls off in an inverse linear fashion from a field of focus.
  • Figure 3B illustrates a rendering resolution function 320 (in a first dimension) which falls off in a linear fashion from a field of focus.
  • Figure 3C illustrates a rendering resolution function 330 (in a first dimension) which is approximately Gaussian.
  • Figure 3D illustrates a rendering resolution function 340 (in a first dimension) which falls off in a rounded stepwise fashion.
  • Each of the rendering resolution functions 310-340 of Figures 3A-3D is in the form of a peak including a peak height (e.g., a maximum value) and a peak width.
  • the peak width can be defined in a number of ways.
  • the peak width is defined as the size of the field of focus (as illustrated by width 311 of Figure 3A and width 321 of Figure 3B).
  • the peak width is defined as the full width at half maximum (as illustrated by width 331 of Figure 3C).
  • the peak width is defined as the distance between the two inflection points nearest the origin (as illustrated by width 341 of Figure 3D).
  • Figures 3A-3D illustrate rendering resolution functions in a single dimension
  • the rendering resolution function used by the rendering module 210 can be a two-dimensional function.
  • Figure 4A illustrates a two-dimensional rendering resolution function 410 in which the rendering resolution function 410 is independent in a horizontal dimension (θ) and a vertical dimension (φ).
  • Figure 4C illustrates a two-dimensional rendering resolution function 430 in which the rendering resolution function 430 is different in a horizontal dimension (θ) and a vertical dimension (φ).
  • Figure 4D illustrates a two-dimensional rendering resolution function 440 based on a human vision model.
  • the rendering module 210 generates the rendering resolution function based on a number of factors, including biological information regarding human vision, eye tracking data, eye tracking metadata, the SR content, and various constraints (such as constraints imposed by the hardware of the HMD).
  • Figure 5A illustrates an example rendering resolution function 510, denoted S(θ).
  • the rendering resolution function 510 is a constant (e.g., S_max) within a field of focus (between -θ_fof and +θ_fof) and falls off in an inverse linear fashion outside this window.
  • Figure 5B illustrates the integral 520, denoted U(θ), of the rendering resolution function 510 of Figure 5A within a field of view, e.g., from -θ_fov to +θ_fov.
  • the integral 520 ranges from 0 at -θ_fov to a maximum value, denoted U_max, at +θ_fov.
  • Figure 5C illustrates the tangent 530, denoted V(x_R), of the inverse of the integral 520 of Figure 5B.
  • the tangent 530 illustrates a direct mapping from rendered space, in x_R, to display space, in x_D.
  • the uniform sampling points in the warped space (equally spaced along the x_R axis) correspond to non-uniform sampling points in the display space (non-equally spaced along the x_D axis).
  • Scaling factors can be determined by the distances between the non-uniform sampling points in the display space, as sketched below.
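  • The construction of Figures 5A-5C can be sketched numerically as follows. This Python sketch is illustrative only: the resolution-function shape, field of view, and sample counts are assumptions, and a trapezoidal sum stands in for the integral U(θ).

```python
import numpy as np

def resolution_fn(theta_deg, s_max=60.0, s_min=5.0, theta_fof=10.0, w=0.2):
    """Illustrative rendering resolution function S(theta), in pixels per degree:
    flat at s_max over the field of focus, inverse-linear fall-off toward s_min outside it."""
    t = np.abs(theta_deg)
    falloff = s_min + (s_max - s_min) / (1.0 + w * np.maximum(t - theta_fof, 0.0))
    return np.where(t <= theta_fof, s_max, falloff)

def warp_mapping(theta_fov=45.0, num_warped_samples=512):
    """Map uniformly spaced warped-space sample points to display-space positions.

    U(theta) is the running integral of S(theta) over the field of view, U^-1 maps a
    warped coordinate back to an angle, and tan of that angle gives the display-space
    coordinate x_D (display plane at unit distance). Scaling factors follow from the
    spacing of adjacent display-space samples.
    """
    theta = np.linspace(-theta_fov, theta_fov, 4096)
    s = resolution_fn(theta)
    u = np.concatenate(([0.0], np.cumsum(0.5 * (s[1:] + s[:-1]) * np.diff(theta))))  # U(theta)

    x_r = np.linspace(0.0, u[-1], num_warped_samples)  # uniform sampling in the warped space
    theta_of_xr = np.interp(x_r, u, theta)             # U^-1: warped coordinate -> angle
    x_d = np.tan(np.radians(theta_of_xr))              # V(x_R) = tan(U^-1(x_R)): angle -> display space

    scaling_factors = np.diff(x_d)  # display-space width covered by each warped-space sample
    return x_d, scaling_factors
```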
  • When performing static foveation, the rendering module 210 uses a rendering resolution function that does not depend on the gaze of the user. However, when performing dynamic foveation, the rendering module 210 uses a rendering resolution function that depends on the gaze of the user. In particular, when performing dynamic foveation, the rendering module 210 uses a rendering resolution function that has its peak at a location corresponding to a location in the display space at which the user is looking (e.g., the point of fixation as determined by the eye tracker 260).
  • Figure 6A illustrates a rendering resolution function 610 that may be used by the rendering module 210 when performing static foveation.
  • the rendering module 210 may also use the rendering resolution function 610 of Figure 6A when performing dynamic foveation and the user is looking at the center of the display panel 240.
  • Figure 6B illustrates a rendering resolution function 620 that may be used by the rendering module when performing dynamic foveation and the user is looking at an angle (θ_g) away from the center of the display panel 240.
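  • Ignoring the eyepiece constraints discussed below with respect to Figures 10A-10C, one simple way to relate the two figures is to shift the static profile so that its peak sits at the gaze angle; this is a simplification for illustration, not a definition taken from the patent:

```latex
S_{\mathrm{dynamic}}(\theta) \approx S_{\mathrm{static}}\!\left(\theta - \theta_g\right)
```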
  • Figure 7 is a flowchart representation of a method 700 of rendering an image in accordance with some implementations.
  • the method 700 is performed by a rendering module, such as the rendering module 210 of Figure 2.
  • the method 700 is performed by an HMD, such as the HMD 120 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2.
  • the method 700 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays.
  • the method 700 is performed by processing logic, including hardware, firmware, software, or a combination thereof.
  • the method 700 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
  • the method 700 begins at block 710 with the rendering module obtaining SR content to be rendered into a display space.
  • SR content can include definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), or other information describing content to be represented in the rendered image.
  • the method 700 continues at block 720 with the rendering module obtaining a rendering resolution function defining a mapping between the display space and a warped space, the rendering resolution function depending on a fixation point of a user in the display space.
  • Various rendering resolution functions are illustrated in Figures 3A-3D and Figures 4A- 4D. Various methods of generating a rendering resolution function are described further below.
  • the rendering resolution function generally characterizes the resolution of the rendered image in the display space.
  • the integral of the rendering resolution function provides a mapping between the display space and the warped space (as illustrated in Figures 5A-5C).
  • the rendering resolution function, S(x) is a function of a distance from an origin of the display space.
  • the rendering resolution function, S(θ), is a function of an angle between an optical axis of the user and the optical axis when the user is looking at the center of the display panel. Accordingly, the rendering resolution function characterizes a resolution in the display space as a function of angle (in the display space).
  • the rendering resolution function, S(θ), is expressed in pixels per degree (PPD).
  • the rendering module performs dynamic foveation and the rendering resolution function depends on the gaze of the user.
  • obtaining the rendering resolution function includes obtaining eye tracking data indicative of a gaze of a user, e.g., from the eye tracker 260 of Figure 2, determining the fixation point of the user in the display space based on the eye tracking data, and generating the rendering resolution function based on the fixation point of the user in the display space.
  • the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
  • generating the rendering resolution function based on the eye tracking data includes generating a rendering resolution function having a peak height at a location the user is looking at, as indicated by the eye tracking data.
  • the method 700 continues at block 730 with the rendering module generating a rendered image based on the SR content and the rendering resolution function.
  • the rendered image includes a warped image with a plurality of pixels at respective locations uniformly spaced in a grid pattern in the warped space.
  • the plurality of pixels are respectively associated with a plurality of respective pixel values based on the SR content.
  • the plurality of pixels are respectively associated with a plurality of respective scaling factors defining an area in the display space based on the rendering resolution function.
  • An image that is said to be in a display space has uniformly spaced regions (e.g., pixels or groups of pixels) that map to uniformly spaced regions (pixels or groups of pixels) of a display.
  • An image that is said to be in a warped space has uniformly spaced regions (e.g., pixels or groups of pixels) that map to non- uniformly spaced regions (e.g., pixels or groups of pixels) in the display space.
  • the relationship between uniformly spaced regions in the warped space to non-uniformly spaced regions in the display space is defined at least in part by the scaling factors.
  • the plurality of respective scaling factors (like the rendering resolution function) define a mapping between the warped space and the display space.
  • the warped image includes a plurality of tiles at respective locations uniformly spaced in a grid pattern in the warped space and each of the plurality of tiles is associated with a respective one or more scaling factors.
  • each tile (including a plurality of pixels) is associated with a single horizontal scaling factor and a single vertical scaling factor.
  • each tile is associated with a single scaling factor that is used for both horizontal and vertical scaling.
  • each tile is a 32x32 matrix of pixels.
  • the rendering module transmits the warped image including the plurality of pixel values in association with the plurality of respective scaling factors. Accordingly, the warped image and the scaling factors, rather than a foveated image which could be generated using this information, are propagated through the pipeline.
  • the rendering module 210 generates a warped image and a plurality of respective scaling factors that are transmitted by the rendering module 210.
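  • A hypothetical container for what is propagated through the pipeline; the field names and layout below are illustrative assumptions, not a structure defined by the patent.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class WarpedFrame:
    """Warped pixel data plus the per-tile scaling factors that define each tile's
    footprint in the display space (hypothetical container, for illustration)."""
    pixels: np.ndarray    # (M/R, N/R, 3) pixel values, uniformly spaced in the warped space
    scale_h: np.ndarray   # (M/(R*T), N/(R*T)) horizontal scaling factor per tile
    scale_v: np.ndarray   # (M/(R*T), N/(R*T)) vertical scaling factor per tile
    tile_size: int = 32   # T: each tile is a TxT block of pixels
```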
  • the warped image (or a processed version of the warped image) and the plurality of respective scaling factors are received (and used in processing the warped image) by the transport module 220 (and the compression module 222 and decompression module 226 thereof) as described in U.S. Patent App. No. 62/667,727, entitled “DYNAMIC FOVEATED COMPRESSION,” filed on May 7, 2018, and hereby incorporated by reference in its entirety.
  • the warped image (or a processed version of the warped image) and the plurality of respective scaling factors are received (and used in processing the warped image) by the display module 230 (and the lens compensation module 232 and the panel compensation module 234 thereof) as described in U.S. Patent App. No. 62/667,728, entitled “DYNAMIC FOVEATED DISPLAY,” filed on May 7, 2018, and hereby incorporated by reference in its entirety.
  • the rendering module generates the scaling factors based on the rendering resolution function.
  • the scaling factors are generated based on the rendering resolution function as described above with respect to Figures 5A-5C.
  • generating the scaling factors includes determining the integral of the rendering resolution function.
  • generating the scaling factors includes determining the tangent of the inverse of the integral of the rendering resolution function.
  • generating the scaling factors includes, determining, for each of the respective locations uniformly spaced in a grid pattern in the warped space, the respective scaling factors based on the tangent of the inverse of the integral of the rendering resolution function. Accordingly, for a plurality of locations uniformly spaced in the warped space, a plurality of locations non-uniformly spaced in the display space are represented by the scaling factors.
  • Figure 8A illustrates an image representation of SR content 810 to be rendered in a display space.
  • Figure 8B illustrates a warped image 820 generated according to the method 700 of Figure 7.
  • Because of the rendering resolution function, different parts of the SR content 810 corresponding to non-uniformly spaced regions (e.g., different amounts of area) in the display space are rendered into uniformly spaced regions (e.g., the same amount of area) in the warped image 820.
  • the rendering module 210 can perform static foveation or dynamic foveation.
  • the rendering module 210 determines a foveation mode to apply for rendering SR content and performs static foveation or dynamic foveation according to the determined foveation mode.
  • In a static foveation mode, the SR content is rendered independently of eye tracking data.
  • In a no-foveation mode, the rendered image is characterized by fixed resolutions per display region (e.g., a constant number of pixels per tile).
  • In a dynamic foveation mode, the resolution of the rendered image depends on the gaze of a user.
  • Figure 9 is a flowchart representation of a method 900 of rendering an image in accordance with some implementations.
  • the method 900 is performed by a rendering module, such as the rendering module 210 of Figure 2.
  • the method 900 is performed by an HMD, such as the HMD 120 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2.
  • the method 900 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays.
  • the method 900 is performed by processing logic, including hardware, firmware, software, or a combination thereof.
  • the method 900 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
  • the method 900 begins in block 910 with the rendering module obtaining eye tracking data indicative of a gaze of a user (e.g., where a user is looking, such as gaze direction or a fixation point of a user).
  • the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
  • the method 900 continues in block 920 with the rendering module obtaining SR content to be rendered.
  • the SR content can include definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), or other information describing content to be represented in a rendered image.
  • the method 900 continues in block 930 with the rendering module determining a foveation mode to apply to rendering the SR content.
  • the rendering module determines the foveation mode based on various factors.
  • the rendering module determines the foveation mode based on a rendering processor characteristic. For example, in some implementations, the rendering module determines the foveation mode based on an available processing power, a processing speed, or a processor type of the rendering processor of the rendering module.
  • For example, when the rendering module has a large available processing power, the rendering module selects a dynamic foveation mode, and when the rendering module has a small available processing power (due to a small processing capacity or high usage of the processing capacity), the rendering module selects a static foveation mode or no-foveation mode.
  • In some implementations, when the rendering is performed by the controller 110 (e.g., the rendering processor is at the controller), the rendering module selects a dynamic foveation mode, and when the rendering is performed by the HMD 120 (e.g., the rendering processor is at the HMD), the rendering module selects a static foveation mode or a no-foveation mode.
  • switching between static and dynamic foveation modes occurs based on characteristics of the HMD 120, such as the processing power of the HMD 120 relative to the processing power of the controller 110.
  • the rendering module selects a static foveation or a no-foveation mode when eye tracking performance (e.g., reliability) becomes sufficiently degraded.
  • static foveation mode or no-foveation mode is selected when eye tracking is lost.
  • static foveation mode or no-foveation mode is selected when eye tracking performance breaches a threshold, such as when eye tracking accuracy falls too low (e.g., due to large gaps in eye tracking data) and/or latency related to eye tracking exceeds a value.
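  • A minimal sketch of this kind of mode selection, assuming hypothetical confidence and latency thresholds; the names, values, and fallback choice below are illustrative, not taken from the patent.

```python
from enum import Enum, auto

class FoveationMode(Enum):
    DYNAMIC = auto()
    STATIC = auto()
    NONE = auto()

def select_foveation_mode(tracking_available: bool, confidence: float, latency_ms: float,
                          min_confidence: float = 0.8, max_latency_ms: float = 20.0) -> FoveationMode:
    """Fall back from dynamic foveation when eye tracking quality degrades.

    A no-foveation mode could equally be chosen as the fallback, per the text above.
    """
    if not tracking_available:                 # eye tracking lost
        return FoveationMode.STATIC
    if confidence < min_confidence or latency_ms > max_latency_ms:
        return FoveationMode.STATIC            # accuracy or latency breached a threshold
    return FoveationMode.DYNAMIC
```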
  • the rendering module shifts focus to the center of the HMD 120 and, using static foveation, gradually increases the field of fixation (FoF) when diminishment of eye tracking performance during dynamic foveation (e.g., after a timeout, as indicated by a low prediction confidence) is suspected.
  • the rendering module selects a static foveation mode or no-foveation mode in order to account for other considerations. For example, in some implementations, the rendering module selects a static foveation mode or no-foveation mode where superior eye-tracking sensor performance is desirable. As another example, in some implementations, the rendering module selects a static foveation mode or no-foveation mode when the user wearing the HMD 120 has a medical condition that prevents eye tracking or makes it sufficiently ineffective.
  • a static foveation mode or no-foveation mode is selected because it provides better performance of various aspects of the rendering imaging system.
  • static foveation mode or no-foveation mode provides better rate control.
  • a static foveation mode or no-foveation mode provides better concealment of mixed foveated and non-foveated regions (e.g., by making the line demarcating the regions fainter).
  • a static foveation mode or no-foveation mode improves display panel bandwidth consumption, for instance by using static grouped compensation data to maintain similar power and/or bandwidth.
  • static foveation mode or no-foveation mode mitigates the risk of rendering undesirable visual aspects, such as flicker and/or artifacts (e.g., grouped rolling emission shear artifact).
  • the method 900 continues in decision block 935.
  • In accordance with a determination that the foveation mode is a dynamic foveation mode, the method 900 continues in block 940, wherein the rendering module renders the SR content according to dynamic foveation based on the eye tracking data (e.g., as described above with respect to Figure 7).
  • In accordance with a determination that the foveation mode is a static foveation mode, the method 900 continues in block 942, wherein the rendering module renders the SR content according to static foveation independent of the eye tracking data (e.g., as described above with respect to Figure 7).
  • In accordance with a determination that the foveation mode is a no-foveation mode, the method 900 continues in block 944, wherein the rendering module renders the SR content without foveation.
  • the method 900 returns to block 920 where additional SR content is received.
  • the rendering module renders different SR content with different foveation modes depending on changing circumstances. While shown in a particular order, it should be appreciated that blocks of method 900 can be performed in different orders or at the same time. For example, eye tracking data can be obtained (e.g., as in block 910) throughout the performance of method 900 and that blocks relying on that data can use any of the previously obtained (e.g., most recently obtained) eye tracking data or variants thereof (e.g., windowed average or the like).
  • Figure 10A illustrates an eyepiece resolution function 1020, E(θ), that varies as a function of angle.
  • the eyepiece resolution function 1020 has a maximum at the center of the eyepiece 242 and falls off towards the edges.
  • the eyepiece resolution function 1020 includes a portion of a circle, ellipse, parabola, or hyperbola.
  • Figure 10A also illustrates an unconstrained rendering resolution function 1010 that has a peak centered at a gaze angle (θ_g). Around the peak, the unconstrained rendering resolution function 1010 is greater than the eyepiece resolution function 1020.
  • the rendering module 210 generates a capped rendering resolution function 1030 (in bold), S_c(θ), equal to the lesser of the eyepiece resolution function 1020 and the unconstrained rendering resolution function 1010.
  • the amount of computational expense and delay associated with the rendering module 210 rendering the rendered image is kept relatively constant (e.g., normalized), irrespective of the gaze angle of the user 250. Accordingly, in various implementations, the rendering module 210 renders the rendered image using a rendering resolution function that has a fixed summation value indicative of the total amount of detail in the rendered image.
  • the summation value is generally equal to the integral of the rendering resolution function over the field of view. In other words, the summation value corresponds to the area under the rendering resolution function over the field of view. In various implementations, the summation value corresponds to the number of pixels, tiles, and/or (x,y) locations in the rendered image.
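  • Restating the relationship above as a formula, the summation value V is the area under the rendering resolution function over the field of view:

```latex
V = \int_{-\theta_{fov}}^{+\theta_{fov}} S(\theta)\, d\theta
```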
  • the summation value of the capped rendering resolution function 1030 is less than the summation value of the unconstrained rendering resolution function 1010.
  • Accordingly, to preserve the summation value, the rendering module increases values of the capped rendering resolution function 1030 that were not decreased as compared to the unconstrained rendering resolution function 1010.
  • Figure 10B illustrates a first constrained rendering resolution function 1032 in which the fall-off portions of the rendering resolution function are increased as compared to the fall-off portions of the capped rendering resolution function 1030.
  • Figure 10C illustrates a second constrained rendering resolution function 1034 in which the peak width of the rendering resolution function is increased as compared to the peak width of the capped rendering resolution function 1030.
  • Figure 11 is a flowchart representation of a method 1100 of rendering an image in accordance with some implementations.
  • the method 1100 is performed by a rendering module, such as the rendering module 210 of Figure 2.
  • the method 1100 is performed by an HMD, such as the HMD 120 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2.
  • the method 1100 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays.
  • the method 1100 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 1100 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
  • the method 1100 begins in block 1110 with the rendering module obtaining eye tracking data indicative of a gaze of a user (e.g., where the user is looking, such as gaze direction, and/or fixation point of the user).
  • the rendering module receives data indicative of performance characteristics of an eyepiece at least at the gaze of the user.
  • performance characteristics of the eyepiece at the gaze of the user can be determined from the eye tracking data.
  • the method 1100 continues in block 1120, with the rendering module generating a rendering resolution function based on the eye tracking data, the rendering resolution function having a maximum value dependent on the eye tracking data and a summation value independent of the eye tracking data.
  • generating the rendering resolution function includes generating an unconstrained rendering resolution function based on the eye tracking data (such as the unconstrained rendering resolution function 1010 of Figure 10A); determining the maximum value (of the rendering resolution function after constraining) based on the eye tracking data (and, optionally, an eyepiece resolution function such as the eyepiece resolution function 1020 of Figure 10A); decreasing values of the unconstrained rendering resolution function above the maximum value to the maximum value in order to generate a capped rendering resolution function (such as the capped rendering resolution function 1030 of Figure 10A); and increasing non-decreased values of the capped rendering resolution function in order to generate the rendering resolution function.
  • increasing the non-decreased values of the capped rendering resolution function includes increasing fall-off portions of the capped rendering resolution function.
  • peripheral portions of the rendering resolution function fall off in an inverse-linear fashion (e.g., hyperbolically).
  • increasing the non-decreased values of the capped rendering resolution function includes increasing a peak width of the capped rendering resolution function, such as increasing the size of the field of focus.
  • the maximum value is based on a mapping between the gaze of the user and lens performance characteristics.
  • the lens performance characteristics are represented by an eyepiece resolution function or a modulation transfer function (MTF).
  • the lens performance characteristics are determined by surface lens modeling.
  • the maximum value is determined as a function of gaze direction (because the eyepiece resolution function varies as a function of gaze direction). In various implementations, the maximum value is based on changes in the gaze of the user, such as gaze motion (e.g., changing gaze location). For example, in some implementations, the maximum value of the rendering resolution function is decreased when the user is looking around (because resolution perception decreases during eye motion). As another example, in some implementations, when the user blinks, the maximum value of the rendering resolution function is decreased (because resolution perception [and eye tracking confidence] decreases when the user blinks).
  • the maximum value is affected by the lens performance characteristics. For example, in some implementations, the maximum value is decreased when the lens performance characteristics indicate that the lens cannot support a higher resolution. In some implementations, the lens performance characteristics include a distortion introduced by a lens.
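A minimal sketch of how such a maximum value might be chosen is shown below; the eyepiece curve, the blink/saccade scale factors, and all names are assumptions for illustration only and are not taken from this disclosure.

```python
# Hypothetical sketch: pick the cap (maximum value) for the rendering
# resolution function from the gaze direction, an assumed eyepiece resolution
# curve, and eye-motion state (blinks and saccades lower perceivable resolution).

EYE_ACUITY_PPD = 60.0  # rough upper bound on foveal acuity, assumed

def eyepiece_resolution_ppd(gaze_angle_deg: float) -> float:
    """Stand-in for a measured eyepiece resolution function (e.g., derived
    from an MTF or surface lens modeling); best on-axis, worse toward the edge."""
    return 50.0 / (1.0 + 0.05 * abs(gaze_angle_deg))

def max_rrf_value(gaze_angle_deg: float, blinking: bool, saccading: bool) -> float:
    cap = min(EYE_ACUITY_PPD, eyepiece_resolution_ppd(gaze_angle_deg))
    if blinking:
        cap *= 0.5   # resolution perception (and tracking confidence) drops
    elif saccading:
        cap *= 0.7   # resolution perception drops during rapid eye motion
    return cap

print(max_rrf_value(0.0, blinking=False, saccading=False))   # on-axis cap
print(max_rrf_value(20.0, blinking=False, saccading=True))   # off-axis, moving
```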
  • the method 1100 continues in block 1130, with the rendering module generating a rendered image based on SR content and the rendering resolution function (e.g., as described above with respect to Figure 7).
  • the rendered image is a foveated image, such as an image having lower resolution outside the user’s field of fixation (FoF).
  • the rendered image is a warped image, such as an image transformed into a non-uniform space as compared to the SR content.
  • Figure 12 is a flowchart representation of a method 1200 of rendering an image in accordance with some implementations.
  • the method 1200 is performed by a rendering module, such as the rendering module 210 of Figure 2.
  • the method 1200 is performed by an HMD, such as the HMD 100 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2.
  • the method 1200 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays.
  • the method 1200 is performed by processing logic, including hardware, firmware, software, or a combination thereof.
  • the method 1200 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
  • the method 1200 begins at block 1210 with the rendering module obtaining eye tracking data indicative of a gaze of a user (e.g., where a user is looking, such as gaze direction or a fixation point of a user).
  • the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
  • the method 1200 continues at block 1220 with the rendering module obtaining eye tracking metadata indicative of a characteristic of the eye tracking data.
  • the eye tracking metadata is obtained in association with the corresponding eye tracking data.
  • the eye tracking data and the associated eye tracking metadata are received from an eye tracker, such as eye tracker 260 of Figure 2.
  • the eye tracking metadata includes data indicative of a confidence of the eye tracking data.
  • the eye tracking metadata provides a measurement of a belief that the eye tracking data correctly indicates the gaze of the user.
  • the data indicative of the confidence of the eye tracking data includes data indicative of an accuracy of the eye tracking data.
  • the rendering module generates the data indicative of the accuracy of the eye tracking data based on a series of recently captured images of the eye of the user, recent measurements of the gaze of the user, user biometrics, and/or other obtained data.
  • the data indicative of the confidence of the eye tracking data includes data indicative of a latency of the eye tracking data (e.g., a difference between the time the eye tracking data is generated and the time the eye tracking data is received by the rendering module).
  • the rendering module generates the data indicative of the latency of the eye tracking data based on timestamps of the eye tracking data.
  • the confidence of the eye tracking data is higher when the latency is lower than when the latency is higher.
  • the eye tracking data includes data indicative of a prediction of the gaze of the user, and the data indicative of a confidence of the eye tracking data includes data indicative of a confidence of the prediction.
  • the data indicative of a prediction of the gaze of the user is based on past measurements of the gaze of the user based on past captured images.
  • the prediction of the gaze of the user is based on classifying past motion of the gaze of the user as a continuous fixation, smooth pursuit, or saccade.
  • the confidence of the prediction is based on this classification. In particular, in various implementations, the confidence of the prediction is higher when past motion is classified as a continuous fixation or smooth pursuit than when the past motion is classified as a saccade.
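One way to realize this kind of motion classification and prediction confidence is sketched below; the velocity thresholds, confidence values, latency penalty, and function names are illustrative assumptions, not values from this disclosure.

```python
# Illustrative sketch (names and thresholds assumed): classify recent gaze
# motion to decide how much to trust a prediction of the next gaze position.
from typing import List, Tuple

def classify_motion(samples: List[Tuple[float, float]], dt: float) -> str:
    """samples: recent gaze angles (deg) as (theta, phi); dt: seconds/sample."""
    speeds = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        speeds.append(((t1 - t0) ** 2 + (p1 - p0) ** 2) ** 0.5 / dt)
    mean_speed = sum(speeds) / len(speeds)        # degrees per second
    if mean_speed < 1.0:
        return "fixation"
    if mean_speed < 30.0:
        return "smooth_pursuit"
    return "saccade"

def prediction_confidence(motion_class: str, latency_s: float) -> float:
    base = {"fixation": 0.9, "smooth_pursuit": 0.8, "saccade": 0.3}[motion_class]
    return base / (1.0 + 5.0 * latency_s)         # higher latency, lower confidence

gaze = [(10.0, 0.0), (10.1, 0.0), (10.1, 0.1)]    # recent gaze samples
cls = classify_motion(gaze, dt=1 / 120)
print(cls, prediction_confidence(cls, latency_s=0.008))
```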
  • the eye tracking metadata includes data indicative of one or more biometrics of the user, and, in particular, biometrics which affect the eye tracking metadata or its confidence.
  • biometrics of the user include one or more of eye anatomy, ethnicity/physiognomy, eye color, age, visual aids (e.g., corrective lenses), make-up (e.g., eyeliner or mascara), medical condition, historic gaze variation, input preferences or calibration, headset position/orientation, pupil dilation/center-shift, and/or eyelid position.
  • the eye tracking metadata includes data indicative of one or more environmental conditions of an environment of the user in which the eye tracking data was generated.
  • the environmental conditions include one or more of vibration, ambient temperature, IR light direction, or IR light intensity.
  • the method 1200 continues at block 1230 with the rendering module generating a rendering resolution function based on the eye tracking data and the eye tracking metadata.
  • the rendering module generates the rendering resolution function with a peak maximum based on the eye tracking data (e.g., the resolution is highest where the user is looking).
  • the rendering module generates the rendering resolution function with a peak width based on the eye tracking metadata (e.g., with a wider peak when the eye tracking metadata indicates less confidence in the correctness of the eye tracking data).
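For example, a one-dimensional rendering resolution function whose peak follows the tracked gaze and whose peak width grows as confidence drops might be sketched as follows; all parameter values and names are assumptions for illustration.

```python
# Hedged sketch: a 1-D rendering resolution function whose peak sits at the
# tracked gaze angle and whose peak width grows as confidence drops.
import numpy as np

def rrf(theta_deg: np.ndarray, gaze_deg: float, confidence: float,
        s_max: float = 60.0, base_fof_deg: float = 5.0, w: float = 0.2) -> np.ndarray:
    # Lower confidence -> wider field of fixation around the reported gaze.
    fof = base_fof_deg / max(confidence, 0.1)
    d = np.maximum(np.abs(theta_deg - gaze_deg) - fof, 0.0)
    return s_max / (1.0 + w * d)          # flat peak, inverse-linear fall-off

theta = np.linspace(-45, 45, 181)
high_conf = rrf(theta, gaze_deg=12.0, confidence=0.9)
low_conf = rrf(theta, gaze_deg=12.0, confidence=0.4)
print(int((high_conf >= 59.9).sum()), int((low_conf >= 59.9).sum()))  # wider peak at low confidence
```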
  • the method 1200 continues at block 1240 with the rendering module generating a rendered image based on the SR content and the rendering resolution function (e.g., as described above with respect to Figure 7).
  • the rendered image is a foveated image, such as an image having lower resolution outside the user’s field of fixation (FoF).
  • the rendered image is a warped image, such as an image transformed into a non-uniform space as compared to the SR content.
  • Figure 13A illustrates a rendering resolution function 1310 that may be used by the rendering module when performing dynamic foveation, when the eye tracking data indicates that the user is looking at an angle (θg) away from the center of the display panel, and when the eye tracking metadata indicates a first confidence resulting in a first peak width 1311.
  • Figure 13B illustrates a rendering resolution function 1320 that may be used by the rendering module when performing dynamic foveation, when the eye tracking data indicates that the user is looking at the angle (θg) away from the center of the display panel, and when the eye tracking metadata indicates a second confidence, less than the first confidence, resulting in a second peak width 1321 greater than the first peak width 1311.
  • the rendering module detects loss of an eye tracking stream including the eye tracking metadata and the eye tracking data. In response, the rendering module generates a second rendering resolution function based on detecting the loss of the eye tracking stream and generates a second rendered image based on the SR content and the second rendering resolution function.
  • detecting the loss of the eye tracking stream includes determining that the gaze of the user was static at a time of the loss of the eye tracking stream. Accordingly, in various implementations, generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a same location as a peak maximum of the rendering resolution function and with a peak width greater than a peak width of the rendering resolution function. Thus, in various implementations, in response to detecting the loss of an eye tracking stream, the rendering resolution function stays at the same location, but the size of the field of fixation increases.
  • detecting the loss of the eye tracking stream includes determining that the gaze of the user was moving at a time of the loss of the eye tracking stream.
  • generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a location displaced toward the center as compared to a peak maximum of the rendering resolution function, and with a peak width greater than a peak width of the rendering resolution function.
  • the rendering resolution function moves to the center of the display panel and the size of the field of fixation increases.
  • detecting the loss of the eye tracking stream includes determining that the gaze of the user was moving in a direction at a time of the loss of the eye tracking stream.
  • generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a location displaced in the direction as compared to a peak maximum of the rendering resolution function, and with a peak width greater than a peak width of the rendering resolution function.
  • the rendering resolution function moves to a predicted location and the size of the field of fixation increases.
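A compact sketch of such fallback behavior on loss of the eye tracking stream is given below; the velocity thresholds, widening factor, extrapolation interval, and type names are illustrative assumptions that combine the cases described above.

```python
# Illustrative fallback logic (assumed names) for loss of the eye tracking
# stream: keep, extrapolate, or re-center the peak, and widen it in all cases.
from dataclasses import dataclass

@dataclass
class RrfParams:
    peak_deg: float        # location of the peak maximum
    peak_width_deg: float  # size of the field of fixation

def on_tracking_loss(last: RrfParams, gaze_velocity_dps: float,
                     widen: float = 2.0, dt_s: float = 0.1) -> RrfParams:
    if abs(gaze_velocity_dps) < 1.0:
        # Gaze was static: keep the peak where it was, just widen it.
        peak = last.peak_deg
    elif abs(gaze_velocity_dps) < 30.0:
        # Gaze was moving smoothly: extrapolate a short distance in that direction.
        peak = last.peak_deg + gaze_velocity_dps * dt_s
    else:
        # Fast/unreliable motion: fall back toward the center of the display.
        peak = last.peak_deg * 0.5
    return RrfParams(peak_deg=peak, peak_width_deg=last.peak_width_deg * widen)

print(on_tracking_loss(RrfParams(15.0, 10.0), gaze_velocity_dps=0.0))
print(on_tracking_loss(RrfParams(15.0, 10.0), gaze_velocity_dps=20.0))
```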
  • Figure 14 is a flowchart representation of a method 1400 of rendering an image in accordance with some implementations.
  • the method 1400 is performed by a rendering module, such as the rendering module 210 of Figure 2.
  • the method 1400 is performed by an HMD, such as the HMD 100 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2.
  • the method 1400 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays.
  • the method 1400 is performed by processing logic, including hardware, firmware, software, or a combination thereof.
  • the method 1400 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
  • the method 1400 begins at block 1410 with the rendering module obtaining eye tracking data indicative of a gaze of a user (e.g., where a user is looking, such as gaze direction or a fixation point of a user).
  • the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
  • the method 1400 continues at block 1420 with the rendering module obtaining SR content to be rendered.
  • the SR content can include definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), or other information describing content to be represented in a rendered image.
  • the method 1400 continues at block 1430 with the rendering module generating a rendering resolution function based on the eye tracking data and the SR content.
  • the rendering module generates the rendering resolution function with a peak maximum based on the eye tracking data (e.g., the resolution is highest where the user is looking).
  • the rendering module generates the rendering resolution function based on the eye tracking data and adjusts the rendering resolution function based on the SR content.
  • the rendering module increases the rendering resolution function in one or more areas of import, such as a game objective or content that humans are particularly adapted to resolve (e.g., content to which humans are likely to pay attention), like a face or a high-resolution object.
  • the rendering module increases the rendering resolution function in one or more areas of motion (e.g., of objects of the SR content).
  • the rendering module generates the rendering resolution function based on a brightness of the SR content. For example, in some implementations, because peripheral vision is more light-sensitive than central vision, peripheral resolution is increased in darker conditions (as compared to brighter conditions). In various implementations, increasing the peripheral resolution includes increasing the peak width of the rendering resolution function and/or increasing the fall-off portions of the rendering resolution function.
  • the rendering module generates the rendering resolution function based on a color of the SR content. For example, in some implementations, because sensitivity to red-green color variations declines more steeply toward the periphery than sensitivity to luminance or blue-yellow colors, peripheral resolution is decreased when the SR content is primarily red-green as opposed to blue-yellow. In various implementations, decreasing the peripheral resolution includes decreasing the peak width of the rendering resolution function and/or decreasing the fall-off portions of the rendering resolution function.
  • generating the rendering resolution function based on the SR content includes generating different rendering resolution functions for different color channels (e.g., three different rendering resolution functions for three different color channels, such as red, green, and blue).
  • the rendering module generates a first rendering resolution function for a first color channel and a second rendering resolution function for a second color channel different than the first rendering resolution function for the first color channel.
  • the rendering module generates a first color channel image of the rendered image based on the first rendering resolution function and a second color channel image of the rendered image based on the second rendering resolution function.
  • the rendering module generates the rendering resolution function based on a complexity of the SR content. For example, in various implementations, the rendering module increases the rendering resolution function in areas with high resolution spatial changes and/or fast spatial changes.
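The content-driven adjustments described above (brightness, color, and areas of import) could be combined roughly as in the following sketch; the field-of-fixation boundary, scale factors, saliency mask, and names are assumptions for illustration.

```python
# Rough sketch (assumptions labeled): modulate an eye-tracking-based rendering
# resolution function using properties of the SR content.
import numpy as np

def content_adjusted_rrf(base_rrf: np.ndarray, theta_deg: np.ndarray,
                         mean_luminance: float,        # 0..1, frame brightness
                         red_green_dominance: float,   # 0..1, chromatic content
                         salient_mask: np.ndarray      # True near faces/objectives
                         ) -> np.ndarray:
    rrf = base_rrf.copy()
    periphery = np.abs(theta_deg) > 15.0               # assumed FoF boundary
    # Darker scenes: peripheral vision is more light-sensitive, so raise
    # peripheral resolution; primarily red-green content: lower it.
    rrf[periphery] *= 1.0 + 0.3 * (1.0 - mean_luminance)
    rrf[periphery] *= 1.0 - 0.2 * red_green_dominance
    # Boost areas of import (e.g., a face, a game objective, or motion).
    rrf[salient_mask] *= 1.25
    return rrf

theta = np.linspace(-45, 45, 181)
base = 60.0 / (1.0 + 0.2 * np.abs(theta))
salient = (theta > 25) & (theta < 30)
adjusted = content_adjusted_rrf(base, theta, 0.2, 0.1, salient)
print(round(float((adjusted / base).max()), 3))  # strongest boost: dark + salient periphery
```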
  • the method 1400 continues at block 1440 with the rendering module generating a rendered image based on the SR content and the rendering resolution function (e.g., as described above with respect to Figure 7).
  • the rendered image is a foveated image, such as an image having lower resolution outside the user’s field of fixation (FoF).
  • the rendered image is a warped image, such as an image transformed into a non-uniform space as compared to the SR content.
  • the terms “first,” “second,” etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the “first node” are renamed consistently and all occurrences of the “second node” are renamed consistently.
  • the first node and the second node are both nodes, but they are not the same node.
  • the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting” that a stated condition precedent is true, depending on the context.
  • the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.

Abstract

In various implementations, methods of rendering simulated reality (SR) content using foveation are described. In one implementation, a static or dynamic foveation mode is used to render the SR content. In one implementation, a rendering resolution function is generated depending on a fixation point of a user and a warped image is generated based on the rendering resolution function. In one implementation, the rendering resolution function is based on eye tracking metadata. In one implementation, the rendering resolution function is based on the SR content.

Description

DYNAMIC FOVEATED RENDERING
TECHNICAL FIELD
[0001] The present disclosure generally relates to image rendering, and in particular, to systems, methods, and devices for rendering images for simulated reality with a varying amount of detail.
BACKGROUND
[0002] A physical setting refers to a world that individuals can sense and/or with which individuals can interact without assistance of electronic systems. Physical settings (e.g., a physical forest) include physical elements (e.g., physical trees, physical structures, and physical animals). Individuals can directly interact with and/or sense the physical setting, such as through touch, sight, smell, hearing, and taste.
[0003] In contrast, a simulated reality (SR) setting refers to an entirely or partly computer-created setting that individuals can sense and/or with which individuals can interact via an electronic system. In SR, a subset of an individual’s movements is monitored, and, responsive thereto, one or more attributes of one or more virtual objects in the SR setting is changed in a manner that conforms with one or more physical laws. For example, a SR system may detect an individual walking a few paces forward and, responsive thereto, adjust graphics and audio presented to the individual in a manner similar to how such scenery and sounds would change in a physical setting. Modifications to attribute(s) of virtual object(s) in a SR setting also may be made responsive to representations of movement (e.g., audio instructions).
[0004] An individual may interact with and/or sense a SR object using any one of his senses, including touch, smell, sight, taste, and sound. For example, an individual may interact with and/or sense aural objects that create a multi-dimensional (e.g., three dimensional) or spatial aural setting, and/or enable aural transparency. Multi-dimensional or spatial aural settings provide an individual with a perception of discrete aural sources in multi-dimensional space. Aural transparency selectively incorporates sounds from the physical setting, either with or without computer-created audio. In some SR settings, an individual may interact with and/or sense only aural objects.
[0005] One example of SR is virtual reality (VR). A VR setting refers to a simulated setting that is designed only to include computer-created sensory inputs for at least one of the senses. A VR setting includes multiple virtual objects with which an individual may interact and/or sense. An individual may interact and/or sense virtual objects in the VR setting through a simulation of a subset of the individual’s actions within the computer-created setting, and/or through a simulation of the individual or his presence within the computer-created setting.
[0006] Another example of SR is mixed reality (MR). A MR setting refers to a simulated setting that is designed to integrate computer-created sensory inputs (e.g., virtual objects) with sensory inputs from the physical setting, or a representation thereof. On a reality spectrum, a mixed reality setting is between, and does not include, a VR setting at one end and an entirely physical setting at the other end.
[0007] In some MR settings, computer-created sensory inputs may adapt to changes in sensory inputs from the physical setting. Also, some electronic systems for presenting MR settings may monitor orientation and/or location with respect to the physical setting to enable interaction between virtual objects and real objects (which are physical elements from the physical setting or representations thereof). For example, a system may monitor movements so that a virtual plant appears stationary with respect to a physical building.
[0008] One example of mixed reality is augmented reality (AR). An AR setting refers to a simulated setting in which at least one virtual object is superimposed over a physical setting, or a representation thereof. For example, an electronic system may have an opaque display and at least one imaging sensor for capturing images or video of the physical setting, which are representations of the physical setting. The system combines the images or video with virtual objects, and displays the combination on the opaque display. An individual, using the system, views the physical setting indirectly via the images or video of the physical setting, and observes the virtual objects superimposed over the physical setting. When a system uses image sensor(s) to capture images of the physical setting, and presents the AR setting on the opaque display using those images, the displayed images are called a video pass-through. Alternatively, an electronic system for displaying an AR setting may have a transparent or semi-transparent display through which an individual may view the physical setting directly. The system may display virtual objects on the transparent or semi-transparent display, so that an individual, using the system, observes the virtual objects superimposed over the physical setting. In another example, a system may comprise a projection system that projects virtual objects into the physical setting. The virtual objects may be projected, for example, on a physical surface or as a holograph, so that an individual, using the system, observes the virtual objects superimposed over the physical setting. [0009] An augmented reality setting also may refer to a simulated setting in which a representation of a physical setting is altered by computer-created sensory information. For example, a portion of a representation of a physical setting may be graphically altered (e.g., enlarged), such that the altered portion may still be representative of but not a faithfully- reproduced version of the originally captured image(s). As another example, in providing video pass-through, a system may alter at least one of the sensor images to impose a particular viewpoint different than the viewpoint captured by the image sensor(s). As an additional example, a representation of a physical setting may be altered by graphically obscuring or excluding portions thereof.
[0010] Another example of mixed reality is augmented virtuality (AV). An AV setting refers to a simulated setting in which a computer-created or virtual setting incorporates at least one sensory input from the physical setting. The sensory input(s) from the physical setting may be representations of at least one characteristic of the physical setting. For example, a virtual object may assume a color of a physical element captured by imaging sensor(s). In another example, a virtual object may exhibit characteristics consistent with actual weather conditions in the physical setting, as identified via imaging, weather-related sensors, and/or online weather data. In yet another example, an augmented reality forest may have virtual trees and structures, but the animals may have features that are accurately reproduced from images taken of physical animals.
[0011] Many electronic systems enable an individual to interact with and/or sense various SR settings. One example includes head mounted systems. A head mounted system may have an opaque display and speaker(s). Alternatively, a head mounted system may be designed to receive an external display (e.g., a smartphone). The head mounted system may have imaging sensor(s) and/or microphones for taking images/video and/or capturing audio of the physical setting, respectively. A head mounted system also may have a transparent or semi transparent display. The transparent or semi-transparent display may incorporate a substrate through which light representative of images is directed to an individual’s eyes. The display may incorporate LEDs, OLEDs, a digital light projector, a laser scanning light source, liquid crystal on silicon, or any combination of these technologies. The substrate through which the light is transmitted may be a light waveguide, optical combiner, optical reflector, holographic substrate, or any combination of these substrates. In one embodiment, the transparent or semi transparent display may transition selectively between an opaque state and a transparent or semi-transparent state. In another example, the electronic system may be a projection-based system. A projection-based system may use retinal projection to project images onto an individual’s retina. Alternatively, a projection system also may project virtual objects into a physical setting (e.g., onto a physical surface or as a holograph). Other examples of SR systems include heads up displays, automotive windshields with the ability to display graphics, windows with the ability to display graphics, lenses with the ability to display graphics, headphones or earphones, speaker arrangements, input mechanisms (e.g., controllers having or not having haptic feedback), tablets, smartphones, and desktop or laptop computers.
[0012] Rendering an image for an SR experience can be computationally expensive.
Accordingly, to reduce this computational burden, advantage is taken of the fact that humans typically have relatively weak peripheral vision. Accordingly, different portions of the image are rendered on a display panel with different resolutions. For example, in various implementations, portions corresponding to a user’s field of focus are rendered with higher resolution than portions corresponding to a user’s periphery.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
[0014] Figure 1 is a block diagram of an example operating environment in accordance with some implementations.
[0015] Figure 2 illustrates an SR pipeline that receives SR content and displays an image on a display panel based on the SR content in accordance with some implementations.
[0016] Figures 3A-3D illustrate various rendering resolution functions in a first dimension in accordance with various implementations.
[0017] Figures 4A-4D illustrate various two-dimensional rendering resolution functions in accordance with various implementations.
[0018] Figure 5A illustrates an example rendering resolution function that characterizes a resolution in a display space as a function of angle in a warped space in accordance with some implementations.
[0019] Figure 5B illustrates the integral of the example rendering resolution function of Figure 5A in accordance with some implementations.
[0020] Figure 5C illustrates the tangent of the inverse of the integral of the example rendering resolution function of Figure 5A in accordance with some implementations.
[0021] Figure 6A illustrates an example rendering resolution function for performing static foveation in accordance with some implementations.
[0022] Figure 6B illustrates an example rendering resolution function for performing dynamic foveation in accordance with some implementations.
[0023] Figure 7 is a flowchart representation of a method of rendering an image based on a rendering resolution function in accordance with some implementations.
[0024] Figure 8A illustrates an example image representation, in a display space, of SR content to be rendered in accordance with some implementations.
[0025] Figure 8B illustrates a warped image of the SR content of Figure 8A in accordance with some implementations.
[0026] Figure 9 is a flowchart representation of a method of rendering an image in one of a plurality of foveation modes in accordance with some implementations.
[0027] Figures 10A-10C illustrate various constrained rendering resolution functions in accordance with various implementations.
[0028] Figure 11 is a flowchart representation of a method of rendering an image with a constrained rendering resolution function in accordance with some implementations.
[0029] Figure 12 is a flowchart representation of a method of rendering an image based on eye tracking metadata in accordance with some implementations.
[0030] Figures 13A-13B illustrate various confidence-based rendering resolution functions in accordance with various implementations.
[0031] Figure 14 is a flowchart representation of a method of rendering an image based on SR content in accordance with some implementations.
[0032] In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
SUMMARY
[0033] Various implementations disclosed herein include devices, systems, and methods for rendering an image based on a rendering resolution function. The method includes obtaining SR content to be rendered into a display space and obtaining a rendering resolution function defining a mapping between the display space and a warped space, the rendering resolution function depending on a fixation point of a user in the display space. The method further includes generating a rendered image based on the SR content and the rendering resolution function, wherein the rendered image includes a warped image with a plurality of pixels at respective locations uniformly spaced in a grid pattern in the warped space, wherein the plurality of pixels are respectively associated with a plurality of respective pixel values based on the SR content and a plurality of respective scaling factors defining an area in the display space based on the rendering resolution function.
[0034] Various implementations disclosed herein include devices, systems, and methods for rendering an image in one of a plurality of foveation modes. The method includes obtaining eye tracking data indicative of a gaze of a user and obtaining SR content to be rendered. The method includes determining a foveation mode to apply to rendering the SR content. The method further includes, in accordance with a determination that the foveation mode is a dynamic foveation mode, rendering the SR content according to dynamic foveation based on the eye tracking data.
[0035] Various implementations disclosed herein include devices, systems, and methods for rendering an image based on a constrained rendering resolution function. The method includes obtaining eye tracking data indicative of a gaze of a user. The method includes generating a rendering resolution function based on the eye tracking data, the rendering resolution function having a maximum value dependent on the eye tracking data and a summation value independent of the eye tracking data. The method includes generating a rendered image based on SR content and the rendering resolution function.
[0036] Various implementations disclosed herein include devices, systems, and methods for rendering an image based on eye tracking metadata. The method includes obtaining eye tracking data indicative of a gaze of a user and obtaining eye tracking metadata indicative of a characteristic of the eye tracking data. The method includes generating a rendering resolution function based on the eye tracking data and the eye tracking metadata. The method further includes generating a rendered image based on SR content and the rendering resolution function.
[0037] Various implementations disclosed herein include devices, systems, and methods for rendering an image based on SR content. The method includes obtaining eye tracking data indicative of a gaze of a user and obtaining SR content to be rendered. The method includes generating a rendering resolution function based on the eye tracking data and the SR content. The method includes generating a rendered image based on the SR content and the rendering resolution function.
[0038] In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes: one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
DESCRIPTION
[0039] Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
[0040] As noted above, in various implementations, different portions of an image are rendered on a display panel with different resolutions. Various methods of determining the resolution for different portions of an image based on a number of factors are described below.
[0041] Figure 1 is a block diagram of an example operating environment 100 in accordance with some implementations. While pertinent features are shown, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, as a non-limiting example, the operating environment 100 includes a controller 110 and a head-mounted device (HMD) 120.
[0042] In some implementations, the controller 110 is configured to manage and coordinate a simulated reality (SR) experience for the user. In some implementations, the controller 110 includes a suitable combination of software, firmware, and/or hardware. In some implementations, the controller 110 is a computing device that is local or remote relative to the scene 105. For example, the controller 110 is a local server located within the scene 105. In another example, the controller 110 is a remote server located outside of the scene 105 (e.g., a cloud server, central server, etc.). In some implementations, the controller 110 is communicatively coupled with the HMD 120 via one or more wired or wireless communication channels 144 (e.g., BLUETOOTH, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.). In another example, the controller 110 is included within the enclosure of HMD 120.
[0043] In some implementations, the HMD 120 is configured to present the SR experience to the user. In some implementations, the HMD 120 includes a suitable combination of software, firmware, and/or hardware. In some implementations, the functionalities of the controller 110 are provided by and/or combined with the HMD 120.
[0044] According to some implementations, the HMD 120 provides an SR experience to the user while the user is virtually and/or physically present within the scene 105. In some implementations, while presenting an AR experience, the HMD 120 is configured to present AR content (e.g., one or more virtual objects) and to enable optical see-through of the scene 105. In some implementations, while presenting an AR experience, the HMD 120 is configured to present AR content (e.g., one or more virtual objects) overlaid or otherwise combined with images or portions thereof captured by the scene camera of HMD 120. In some implementations, while presenting AV content, the HMD 120 is configured to present elements of the real world, or representations thereof, combined with or superimposed over a user’s view of a computer-simulated environment. In some implementations, while presenting a VR experience, the HMD 120 is configured to present VR content.
[0045] In some implementations, the user wears the HMD 120 on his/her head. As such, the HMD 120 includes one or more SR displays provided to display the SR content, optionally through an eyepiece or other optical lens system. For example, in various implementations, the HMD 120 encloses the field-of-view of the user. In some implementations, the HMD 120 is replaced with a handheld device (such as a smartphone or tablet) configured to present SR content in which the user does not wear the HMD 120, but holds the device with a display directed towards the field-of-view of the user and a camera directed towards the scene 105. In some implementations, the handheld device can be placed within an enclosure that can be worn on the head of the user. In some implementations, the HMD 120 is replaced with an SR chamber, enclosure, or room configured to present SR content, wherein the user does not wear or hold the HMD 120.
[0046] In various implementations, the HMD 120 includes an SR pipeline that presents the SR content. Figure 2 illustrates an SR pipeline 200 that receives SR content and displays an image on a display panel 240 based on the SR content.
[0047] The SR pipeline 200 includes a rendering module 210 that receives the SR content (and eye tracking data from an eye tracker 260) and renders an image based on the SR content. In various implementations, SR content includes definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), and other information describing content to be represented in the rendered image.
[0048] An image includes a matrix of pixels, each pixel having a corresponding pixel value and a corresponding pixel location. In various implementations, the pixel values range from 0 to 255. In various implementations, each pixel value is a color triplet including three values corresponding to three color channels. For example, in one implementation, an image is an RGB image and each pixel value includes a red value, a green value, and a blue value. As another example, in one implementation, an image is a YUV image and each pixel value includes a luminance value and two chroma values. In various implementations, the image is a YUV444 image in which each chroma value is associated with one pixel. In various implementations, the image is a YUV420 image in which each chroma value is associated with a 2x2 block of pixels (e.g., the chroma values are downsampled). In some implementations, an image includes a matrix of tiles, each tile having a corresponding tile location and including a block of pixels with corresponding pixel values. In some implementations, each tile is a 32x32 block of pixels. While specific pixel values, image formats, and tile sizes are provided, it should be appreciated that other values, format, and tile sizes may be used.
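As an illustrative aside (not part of this disclosure), the pixel and tile organization described above might be modeled as in the following sketch, with YUV420 chroma stored at quarter resolution and 32x32 luma tiles; the class and constant names are assumptions.

```python
# Not from the patent: a small data-structure sketch of a tiled YUV420 image,
# where each chroma value covers a 2x2 block of luma pixels.
import numpy as np
from dataclasses import dataclass

TILE = 32  # assumed tile edge, matching the 32x32 blocks described above

@dataclass
class YUV420Image:
    y: np.ndarray   # (H, W) luminance, one value per pixel
    u: np.ndarray   # (H/2, W/2) chroma, one value per 2x2 pixel block
    v: np.ndarray   # (H/2, W/2) chroma

    def tile(self, ty: int, tx: int) -> np.ndarray:
        """Return the 32x32 luma block at tile location (ty, tx)."""
        return self.y[ty * TILE:(ty + 1) * TILE, tx * TILE:(tx + 1) * TILE]

h, w = 128, 160
img = YUV420Image(
    y=np.zeros((h, w), dtype=np.uint8),
    u=np.full((h // 2, w // 2), 128, dtype=np.uint8),
    v=np.full((h // 2, w // 2), 128, dtype=np.uint8),
)
print(img.tile(1, 2).shape)   # (32, 32); the image is a 4x5 matrix of tiles
```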
[0049] The image rendered by the rendering module 210 (e.g., the rendered image) is provided to a transport module 220 that couples the rendering module 210 to a display module 230. The transport module 220 includes a compression module 222 that compresses the rendered image (resulting in a compressed image), a communications channel 224 that carries the compressed image, and a decompression module 226 that decompresses the compressed image (resulting in a decompressed image).
[0050] The decompressed image is provided to a display module 230 that converts the decompressed image into panel data. The panel data is provided to a display panel 240 that displays a displayed image as described by (e.g., according to) the panel data. The display module 230 includes a lens compensation module 232 that compensates for distortion caused by an eyepiece 242 of the HMD. For example, in various implementations, the lens compensation module 232 predistorts the decompressed image in an inverse relationship to the distortion caused by the eyepiece 242 such that the displayed image, when viewed through the eyepiece 242 by a user 250, appears undistorted. The display module 230 also includes a panel compensation module 234 that converts image data into panel data to be read by the display panel 240.
[0051] The eyepiece 242 limits the resolution that can be perceived by the user 250. In various implementations, the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of distance from an origin of the display space. In various implementations, the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of an angle between the optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the eyepiece 242. In various implementations, the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of an angle between the optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240.
[0052] The display panel 240 includes a matrix of MxN pixels located at respective locations in a display space. The display panel 240 displays the displayed image by emitting light from each of the pixels as described by (e.g., according to) the panel data.
[0053] In various implementations, the SR pipeline 200 includes an eye tracker 260 that generates eye tracking data indicative of a gaze of the user 250. In various implementations, the eye tracking data includes data indicative of a fixation point of the user 250 on the display panel 240. In various implementations, the eye tracking data includes data indicative of a gaze angle of the user 250, such as the angle between the current optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240.
[0054] In one implementation, in order to render an image for display on the display panel 240, the rendering module 210 generates MxN pixel values for each pixel of an MxN image. Thus, each pixel of the rendered image corresponds to a pixel of the display panel 240 with a corresponding location in the display space. Thus, the rendering module 210 generates a pixel value for MxN pixel locations uniformly spaced in a grid pattern in the display space. Equivalently, the rendering module 210 generates a tile of TxT pixels, each pixel having a corresponding pixel value, at M/TxN/T tile locations uniformly spaced in a grid pattern in the display space.
[0055] Rendering MxN pixel values can be computationally expensive. Further, as the size of the rendered image increases, so does the amount of processing needed to compress the image at the compression module 222, the amount of bandwidth needed to transport the compressed image across the communications channel 224, and the amount of processing needed to decompress the compressed image at the decompression module 226.
[0056] In various implementations, in order to decrease the size of the rendered image without degrading the user experience, foveation (e.g. foveated imaging) is used. Foveation is a digital image processing technique in which the image resolution, or amount of detail, varies across an image. Thus, a foveated image has different resolutions at different parts of the image. Humans typically have relatively weak peripheral vision. According to one model, resolvable resolution for a user is maximum over a field of fixation (e.g., where the user is gazing) and falls off in an inverse linear fashion. Accordingly, in one implementation, the displayed image displayed by the display panel 240 is a foveated image having a maximum resolution at a field of focus and a resolution that decreases in an inverse linear fashion in proportion to the distance from the field of focus.
[0057] In various implementations, the foveated image perceptually matches a non- foveated image, e.g., the processing is“lossless.” In various implementations, the foveated image is perceptually better than a non-foveated image, e.g., the quality of the image is greater at the gaze location than a non-foveated image of greater size. In various implementations, the foveated image is perceptually degraded as compared to a non-foveated image, but more efficient in power/bandwidth, e.g., the processing is“lossy.” [0058] Because some portions of the image have a lower resolution, an MxN foveated image includes less information than an MxN unfoveated image. Thus, in various implementations, the rendering module 210 generates, as a rendered image, a foveated image. The rendering module 210 can generate an MxN foveated image more quickly and with less processing power (and battery power) than the rendering module 210 can generate an MxN unfoveated image. Also, an MxN foveated image can be expressed with less data than an MxN unfoveated image. In other words, an MxN foveated image file is smaller in size than an MxN unfoveated image file. In various implementations, compressing an MxN foveated image using various compression techniques results in fewer bits than compressing an MxN unfoveated image.
[0059] A foveation ratio, R, can be defined as the amount of information in the MxN unfoveated image divided by the amount of information in the MxN foveated image. In various implementations, the foveation ratio is between 1.5 and 10. For example, in some implementations, the foveation ratio is 2. In some implementations, the foveation ratio is 3 or 4. In some implementations, the foveation ratio is constant among images. In some implementations, the foveation ratio is selected based on the image being rendered.
[0060] In some implementations, in order to render an image for display on the display panel 240, the rendering module 210 generates M/RxN/R pixel values for each pixel of an M/RxN/R warped image. Each pixel of the warped image corresponds to an area greater than a pixel of the display panel 240 at a corresponding location in the display space. Thus, the rendering module 210 generates a pixel value for each of M/RxN/R locations in the display space that are not uniformly distributed in a grid pattern. Similarly, in some implementations, the rendering module 210 generates a tile of TxT pixels, each pixel having a corresponding pixel value, at each of M/(RT)xN/(RT) locations in the display space that are not uniformly distributed in a grid pattern. The respective area in the display space corresponding to each pixel value (or each tile) is defined by the corresponding location in the display space (a rendering location) and a scaling factor (or a set of a horizontal scaling factor and a vertical scaling factor).
[0061] In various implementations, the rendering module 210 generates, as a rendered image, a warped image. In various implementations, the warped image includes a matrix of M/RxN/R pixel values for M/RxN/R locations uniformly spaced in a grid pattern in a warped space that is different than the display space. Particularly, the warped image includes a matrix of M/RxN/R pixel values for M/RxN/R locations in the display space that are not uniformly distributed in a grid pattern. Thus, whereas the resolution of the warped image is uniform in the warped space, the resolution varies in the display space. This is described in greater detail below with respect to Figures 8 A and 8B.
[0062] The rendering module 210 determines the rendering locations and the corresponding scaling factors based on a rendering resolution function that generally characterizes the resolution of the rendered image in the displayed space.
[0063] In one implementation, the rendering resolution function, S(x), is a function of a distance from an origin of the display space (which may correspond to the center of the display panel 240). In another implementation, the rendering resolution function, S(θ), is a function of an angle between an optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240. Thus, in one implementation, the rendering resolution function, S(θ), is expressed in pixels per degree (PPD).
[0064] Humans typically have relatively weak peripheral vision. According to one model, resolvable resolution for a user is maximum over a field of focus (where the user is gazing) and falls off in an inverse linear fashion as the angle increases from the optical axis. Accordingly, in one implementation, the rendering resolution function (in a first dimension) is defined as:
[0065] [equation reproduced as an image in the original]
[0066] where Smax is the maximum of the rendering resolution function (e.g., approximately 60 PPD), Smin is the asymptote of the rendering resolution function, θfof characterizes the size of the field of focus, and w characterizes the width of the rendering resolution function.
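The equation of paragraph [0065] is reproduced only as an image in the source document. One form consistent with the parameters described in the surrounding text (maximum Smax, asymptote Smin, field-of-focus size θfof, width w, inverse-linear fall-off) would be the following; this is an assumed reconstruction, not the verbatim equation of the disclosure.

```latex
% Assumed reconstruction, consistent with the parameters described above;
% the exact expression appears only as an image in the source document.
S(\theta) =
\begin{cases}
  S_{max}, & |\theta| \le \theta_{fof},\\[4pt]
  S_{min} + \dfrac{S_{max} - S_{min}}{1 + w\,\bigl(|\theta| - \theta_{fof}\bigr)}, & |\theta| > \theta_{fof}.
\end{cases}
```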
[0067] Figure 3A illustrates a rendering resolution function 310 (in a first dimension) which falls off in an inverse linear fashion from a field of focus. Figure 3B illustrates a rendering resolution function 320 (in a first dimension) which falls off in a linear fashion from a field of focus. Figure 3C illustrates a rendering resolution function 330 (in a first dimension) which is approximately Gaussian. Figure 3D illustrates a rendering resolution function 340 (in a first dimension) which falls off in a rounded stepwise fashion.
[0068] Each of the rendering resolution functions 310-340 of Figures 3A-3D is in the form of a peak including a peak height (e.g., a maximum value) and a peak width. The peak width can be defined in a number of ways. In one implementation, the peak width is defined as the size of the field of focus (as illustrated by width 311 of Figure 3A and width 321 of Figure 3B). In one implementation, the peak width is defined as the full width at half maximum (as illustrated by width 331 of Figure 3C). In one implementation, the peak width is defined as the distance between the two inflection points nearest the origin (as illustrated by width 341 of Figure 3D).
[0069] Whereas Figures 3A-3D illustrate rendering resolution functions in a single dimension, it is to be appreciated that the rendering resolution function used by the rendering module 210 can be a two-dimensional function. Figure 4A illustrates a two-dimensional rendering resolution function 410 in which the rendering resolution function 410 is independent in a horizontal dimension (θ) and a vertical dimension (φ). Figure 4B illustrates a two-dimensional rendering resolution function 420 in which the rendering resolution function 420 is a function of a single variable (e.g., D = √(θ² + φ²)). Figure 4C illustrates a two-dimensional rendering resolution function 430 in which the rendering resolution function 430 is different in a horizontal dimension (θ) and a vertical dimension (φ). Figure 4D illustrates a two-dimensional rendering resolution function 440 based on a human vision model.
[0070] As described in detail below, the rendering module 210 generates the rendering resolution function based on a number of factors, including biological information regarding human vision, eye tracking data, eye tracking metadata, the SR content, and various constraints (such as constraints imposed by the hardware of the HMD).
[0071] Figure 5A illustrates an example rendering resolution function 510, denoted S(θ), which characterizes a resolution in the display space as a function of angle in the warped space. The rendering resolution function 510 is a constant (e.g., Smax) within a field of focus (between -θfof and +θfof) and falls off in an inverse linear fashion outside this window.
[0072] Figure 5B illustrates the integral 520, denoted U(θ), of the rendering resolution function 510 of Figure 5A within a field of view, e.g., from -θfov to +θfov. Thus, U(θ) = ∫_{-θfov}^{θ} S(φ) dφ. The integral 520 ranges from 0 at -θfov to a maximum value, denoted Umax, at +θfov.
[0073] Figure 5C illustrates the tangent 530, denoted V(xR), of the inverse of the integral 520 of the rendering resolution function 510 of Figure 5A. Thus, V(xR) = tan(U⁻¹(xR)). The tangent 530 illustrates a direct mapping from rendered space, in xR, to display space, in xD. According to the foveation indicated by the rendering resolution function 510, the uniform sampling points in the warped space (equally spaced along the xR axis) correspond to non-uniform sampling points in the display space (non-equally spaced along the xD axis). Scaling factors can be determined by the distances between the non-uniform sampling points in the display space.
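The construction of Figures 5A-5C can be sketched numerically as follows. This illustrative snippet (all constants and names assumed) integrates a sampled rendering resolution function, inverts the cumulative integral at uniformly spaced warped-space samples, and reads scaling factors off the spacing of the resulting display-space angles; for simplicity it works in angle units and omits the tangent step that converts angles to panel positions.

```python
# Hedged numerical sketch of the mapping in Figures 5A-5C: integrate the
# rendering resolution function, invert the cumulative integral at uniform
# warped-space samples, and take scaling factors from the resulting spacing.
import numpy as np

theta_fov, theta_fof, s_max, w = 45.0, 5.0, 60.0, 0.2
theta = np.linspace(-theta_fov, theta_fov, 9001)          # display-space angles
S = s_max / (1.0 + w * np.maximum(np.abs(theta) - theta_fof, 0.0))

# U(theta): cumulative integral of S from -theta_fov; U_max = U(+theta_fov).
U = np.concatenate(([0.0], np.cumsum((S[1:] + S[:-1]) / 2 * np.diff(theta))))
U_max = U[-1]

# Uniformly spaced samples along the warped axis x_R in [0, U_max] map to
# non-uniformly spaced display-space angles theta_k = U^-1(x_R).
n_warped = 64
x_R = np.linspace(0.0, U_max, n_warped + 1)
theta_k = np.interp(x_R, U, theta)                        # invert U numerically

# Scaling factor for each warped pixel: the display-space angle it covers
# (wider in the periphery, narrower in the field of fixation).
scaling = np.diff(theta_k)
print(round(float(scaling.min()), 3), round(float(scaling.max()), 3))
```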
[0074] When performing static foveation, the rendering module 210 uses a rendering resolution function that does not depend on the gaze of the user. However, when performing dynamic foveation, the rendering module 210 uses a rendering resolution function that depends on the gaze of the user. In particular, when performing dynamic foveation, the rendering module 210 uses a rendering resolution function that has a peak height at a location corresponding to a location in the display space at which the user is looking (e.g., the point of fixation as determined by the eye tracker 260).
[0075] Figure 6A illustrates a rendering resolution function 610 that may be used by the rendering module 210 when performing static foveation. The rendering module 210 may also use the rendering resolution function 610 of Figure 6A when performing dynamic foveation and the user is looking at the center of the display panel 240. Figure 6B illustrates a rendering resolution function 620 that may be used by the rendering module when performing dynamic foveation and the user is looking at an angle (θg) away from the center of the display panel 240.
[0076] Figure 7 is a flowchart representation of a method 700 of rendering an image in accordance with some implementations. In some implementations (and as detailed below as an example), the method 700 is performed by a rendering module, such as the rendering module 210 of Figure 2. In various implementations, the method 700 is performed by an HMD, such as the HMD 100 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2. In various implementations, the method 700 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays. In some implementations, the method 700 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 700 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
[0077] The method 700 begins at block 710 with the rendering module obtaining SR content to be rendered into a display space. In various implementations, SR content can include definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), or other information describing content to be represented in the rendered image.
[0078] The method 700 continues at block 720 with the rendering module obtaining a rendering resolution function defining a mapping between the display space and a warped space, the rendering resolution function depending on a fixation point of a user in the display space. Various rendering resolution functions are illustrated in Figures 3A-3D and Figures 4A-4D. Various methods of generating a rendering resolution function are described further below.
[0079] In various implementations, the rendering resolution function generally characterizes the resolution of the rendered image in the display space. Thus, the integral of the rendering resolution function provides a mapping between the display space and the warped space (as illustrated in Figures 5A-5C). In one implementation, the rendering resolution function, S(x), is a function of a distance from an origin of the display space. In another implementation, the rendering resolution function, S(θ), is a function of an angle between an optical axis of the user and the optical axis when the user is looking at the center of the display panel. Accordingly, the rendering resolution function characterizes a resolution in the display space as a function of angle (in the display space). Thus, in one implementation, the rendering resolution function, S(θ), is expressed in pixels per degree (PPD).
[0080] In various implementations, the rendering module performs dynamic foveation and the rendering resolution function depends on the gaze of the user. Accordingly, in some implementations, obtaining the rendering resolution function includes obtaining eye tracking data indicative of a gaze of a user, e.g., from the eye tracker 260 of Figure 2, determining the fixation point of the user in the display space based on the eye tracking data, and generating the rendering resolution function based on the fixation point of the user in the display space. In various implementations, the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user. In particular, in various implementations, generating the rendering resolution function based on the eye tracking data includes generating a rendering resolution function having a peak height at a location the user is looking at, as indicated by the eye tracking data.
[0081] The method 700 continues at block 730 with the rendering module generating a rendered image based on the SR content and the rendering resolution function. The rendered image includes a warped image with a plurality of pixels at respective locations uniformly spaced in a grid pattern in the warped space. The plurality of pixels are respectively associated with a plurality of respective pixel values based on the SR content. The plurality of pixels are respectively associated with a plurality of respective scaling factors defining an area in the display space based on the rendering resolution function.
[0082] An image that is said to be in a display space has uniformly spaced regions (e.g., pixels or groups of pixels) that map to uniformly spaced regions (pixels or groups of pixels) of a display. An image that is said to be in a warped space has uniformly spaced regions (e.g., pixels or groups of pixels) that map to non-uniformly spaced regions (e.g., pixels or groups of pixels) in the display space. The relationship between uniformly spaced regions in the warped space and non-uniformly spaced regions in the display space is defined at least in part by the scaling factors. Thus, the plurality of respective scaling factors (like the rendering resolution function) define a mapping between the warped space and the display space.
[0083] In various implementations, the warped image includes a plurality of tiles at respective locations uniformly spaced in a grid pattern in the warped space and each of the plurality of tiles is associated with a respective one or more scaling factors. For example, in some implementations, each tile (including a plurality of pixels) is associated with a single horizontal scaling factor and a single vertical scaling factor. In some implementations, each tile is associated with a single scaling factor that is used for both horizontal and vertical scaling. In various implementations, each tile is a 32x32 matrix of pixels.
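For illustration, the per-tile bookkeeping described above might be sketched as follows; the warped-image dimensions, the resolution profile, and the reciprocal relationship between resolution and scaling factor are assumptions of the sketch rather than requirements of the implementations.

    import numpy as np

    TILE = 32  # each tile is a 32x32 matrix of pixels

    def tile_scaling_factors(warped_width, warped_height, resolution_profile):
        # One (horizontal, vertical) scaling-factor pair per tile. The profile maps a
        # normalized tile-center coordinate in [-1, 1] to a relative resolution, and the
        # scaling factor is taken as its reciprocal: higher resolution means a tile
        # covers a smaller area in the display space.
        tiles_x, tiles_y = warped_width // TILE, warped_height // TILE
        scales = np.empty((tiles_y, tiles_x, 2), dtype=np.float32)
        for ty in range(tiles_y):
            for tx in range(tiles_x):
                cx = (tx + 0.5) / tiles_x * 2.0 - 1.0   # tile center, normalized
                cy = (ty + 0.5) / tiles_y * 2.0 - 1.0
                scales[ty, tx, 0] = 1.0 / resolution_profile(cx)  # horizontal scaling factor
                scales[ty, tx, 1] = 1.0 / resolution_profile(cy)  # vertical scaling factor
        return scales

    # Assumed profile: full resolution at the center, inverse-linear fall-off outward.
    profile = lambda c: 1.0 / (1.0 + 2.0 * abs(c))
    scales = tile_scaling_factors(1024, 1024, profile)
    print(scales.shape)                  # (32, 32, 2): one pair per 32x32-pixel tile
    print(scales[16, 16], scales[0, 0])  # center tile vs. corner tile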
[0084] In various implementations, the rendering module transmits the warped image including the plurality of pixel values in association with the plurality of respective scaling factors. Accordingly, the warped image and the scaling factors, rather than a foveated image which could be generated using this information, are propagated through the pipeline.
[0085] In particular, with respect to Figure 2, in various implementations, the rendering module 210 generates a warped image and a plurality of respective scaling factors that are transmitted by the rendering module 210. At various stages in the pipeline 200, the warped image (or a processed version of the warped image) and the plurality of respective scaling factors are received (and used in processing the warped image) by the transport module 220 (and the compression module 222 and decompression module 226 thereof) as described in U.S. Patent App. No. 62/667,727, entitled “DYNAMIC FOVEATED COMPRESSION,” filed on May 7, 2018, and hereby incorporated by reference in its entirety. At various stages in the pipeline 200, the warped image (or a processed version of the warped image) and the plurality of respective scaling factors are received (and used in processing the warped image) by the display module 230 (and the lens compensation module 232 and the panel compensation module 234 thereof) as described in U.S. Patent App. No. 62/667,728, entitled “DYNAMIC FOVEATED DISPLAY,” filed on May 7, 2018, and hereby incorporated by reference in its entirety.
[0086] In various implementations, the rendering module generates the scaling factors based on the rendering resolution function. For example, in some implementations, the scaling factors are generated based on the rendering resolution function as described above with respect to Figures 5A-5C. In various implementations, generating the scaling factors includes determining the integral of the rendering resolution function. In various implementations, generating the scaling factors includes determining the tangent of the inverse of the integral of the rendering resolution function. In various implementations, generating the scaling factors includes determining, for each of the respective locations uniformly spaced in a grid pattern in the warped space, the respective scaling factors based on the tangent of the inverse of the integral of the rendering resolution function. Accordingly, for a plurality of locations uniformly spaced in the warped space, a plurality of locations non-uniformly spaced in the display space are represented by the scaling factors.
[0087] Figure 8 A illustrates an image representation of SR content 810 to be rendered in a display space. Figure 8B illustrates a warped image 820 generated according to the method 700 of Figure 7. In accordance with a rendering resolution function, different parts of the SR content 810 corresponding to non-uniformly spaced regions (e.g., different amounts of area) in the display space are rendered into uniformly spaced regions (e.g., the same amount of area) in the warped image 820.
[0088] For example, the area at the center of the image representation of SR content
810 of Figure 8A is represented by an area in the warped image 820 of Figure 8B including K pixels (and K pixel values). Similarly, the area on the corner of the image representation of SR content 810 of Figure 8A (a larger area than the area at the center of Figure 8A) is also represented by an area in the warped image 820 of Figure 8B including K pixels (and K pixel values).
[0089] As noted above, the rendering module 210 can perform static foveation or dynamic foveation. In various implementations, the rendering module 210 determines a foveation mode to apply for rendering SR content and performs static foveation or dynamic foveation according to the determined foveation mode. In a static foveation mode, the SR content is rendered independently of eye tracking data. In a no-foveation mode, the rendered image is characterized by fixed resolutions per display regions (e.g., a constant number of pixels per tile). In a dynamic foveation mode, the resolution of the rendered image depends on the gaze of a user.
[0090] Figure 9 is a flowchart representation of a method 900 of rendering an image in accordance with some implementations. In some implementations (and as detailed below as an example), the method 900 is performed by a rendering module, such as the rendering module 210 of Figure 2. In various implementations, the method 900 is performed by an HMD, such as the HMD 100 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2. In various implementations, the method 900 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays. In some implementations, the method 900 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 900 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
[0091] The method 900 begins in block 910 with the rendering module obtaining eye tracking data indicative of a gaze of a user (e.g., where a user is looking, such as gaze direction or a fixation point of a user). In various implementations, the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
[0092] The method 900 continues in block 920 with the rendering module obtaining
SR content to be rendered. In various implementations, the SR content can include definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), or other information describing content to be represented in a rendered image.
[0093] The method 900 continues in block 930 with the rendering module determining a foveation mode to apply to rendering the SR content. In various implementations, the rendering module determines the foveation mode based on various factors. In some implementations, the rendering module determines the foveation mode based on a rendering processor characteristic. For example, in some implementations, the rendering module determines the foveation mode based on an available processing power, a processing speed, or a processor type of the rendering processor of the rendering module. When the rendering module has a large available processing power (due to a large processing capacity or low usage of the processing capacity), the rendering module selects a dynamic foveation mode, and when the rendering module has a small available processing power (due to a small processing capacity or high usage of the processing capacity), the rendering module selects a static foveation mode or no-foveation mode. Referring to Figure 1, when the rendering is performed by the controller 110 (e.g., the rendering processor is at the controller), the rendering module selects a dynamic foveation mode, and when the rendering is performed by the HMD 120 (e.g., the rendering processor is at the HMD), the rendering module selects a static foveation mode or a no-foveation mode. In various implementations, switching between static and dynamic foveation modes occurs based on characteristics of the HMD 120, such as the processing power of the HMD 120 relative to the processing power of the controller 110.
[0094] In some implementations, the rendering module selects a static foveation mode or a no-foveation mode when eye tracking performance (e.g., reliability) becomes sufficiently degraded. For example, in some implementations, static foveation mode or no-foveation mode is selected when eye tracking is lost. As another example, in some implementations, static foveation mode or no-foveation mode is selected when eye tracking performance breaches a threshold, such as when eye tracking accuracy falls too low (e.g., due to large gaps in eye tracking data) and/or latency related to eye tracking exceeds a value. In some implementations, when diminished eye tracking performance is suspected during dynamic foveation (e.g., after a timeout or as indicated by a low prediction confidence), the rendering module shifts focus to the center of the HMD 120 and, using static foveation, gradually increases the field of fixation (FoF).
[0095] In various implementations, the rendering module selects a static foveation mode or no-foveation mode in order to account for other considerations. For example, in some implementations, the rendering module selects a static foveation mode or no-foveation mode where superior eye-tracking sensor performance is desirable. As another example, in some implementations, the rendering module selects a static foveation mode or no-foveation mode when the user wearing the HMD 120 has a medical condition that prevents eye tracking or makes it sufficiently ineffective.
[0096] In various implementations, a static foveation mode or no-foveation mode is selected because it provides better performance of various aspects of the rendering imaging system. For example, in some implementations, static foveation mode or no-foveation mode provides better rate control. As another example, in some implementations, static foveation mode or no-foveation mode provides better concealment of mixed foveated and non-foveated regions (e.g., by making the line demarcating the regions fainter). As another example, in some implementations, a static foveation mode or no-foveation mode provides better display panel bandwidth consumption by, for instance, using static grouped compensation data to maintain similar power and/or bandwidth. As yet another example, in some implementations, static foveation mode or no-foveation mode mitigates the risk of rendering undesirable visual aspects, such as flicker and/or artifacts (e.g., grouped rolling emission shear artifact).
[0097] The method 900 continues in decision block 935. In accordance with a determination that the foveation mode is a dynamic foveation mode, the method 900 continues in block 940, wherein the rendering module renders the SR content according to dynamic foveation based on the eye tracking data (e.g., as described above with respect to Figure 7). In accordance with a determination that the foveation mode is a static foveation mode, the method 900 continues in block 942, wherein the rendering module renders the SR content according to static foveation independent of the eye tracking data (e.g., as described above with respect to Figure 7). In accordance with a determination that the foveation mode is a no-foveation mode, the method 900 continues in block 944, wherein the rendering module renders the SR content without foveation.
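By way of illustration only, blocks 930-944 can be read as a simple policy followed by a dispatch. In the sketch below, the thresholds, the field names, and the decision order are assumptions made for the sketch; they are not part of the described implementations.

    from dataclasses import dataclass

    @dataclass
    class RenderingContext:
        # All fields and thresholds below are illustrative assumptions.
        available_gflops: float         # headroom of the rendering processor
        eye_tracking_available: bool    # False if the eye tracking stream is lost
        eye_tracking_accuracy: float    # 0..1 confidence-like score
        eye_tracking_latency_ms: float

    def determine_foveation_mode(ctx: RenderingContext) -> str:
        # Returns 'dynamic', 'static', or 'none' (no foveation).
        # Degraded or missing eye tracking rules out dynamic foveation.
        if (not ctx.eye_tracking_available
                or ctx.eye_tracking_accuracy < 0.5         # accuracy fell too low
                or ctx.eye_tracking_latency_ms > 50.0):    # latency exceeds a value
            return "static"
        # Little rendering-processor headroom falls back to static or no foveation.
        if ctx.available_gflops < 10.0:
            return "none"
        if ctx.available_gflops < 50.0:
            return "static"
        return "dynamic"

    mode = determine_foveation_mode(RenderingContext(
        available_gflops=120.0, eye_tracking_available=True,
        eye_tracking_accuracy=0.9, eye_tracking_latency_ms=8.0))
    print(mode)  # dynamic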
[0098] In various implementations, the method 900 returns to block 920 where additional SR content is received. In various implementations, the rendering module renders different SR content with different foveation modes depending on changing circumstances. While shown in a particular order, it should be appreciated that blocks of method 900 can be performed in different orders or at the same time. For example, eye tracking data can be obtained (e.g., as in block 910) throughout the performance of method 900, and blocks relying on that data can use any of the previously obtained (e.g., most recently obtained) eye tracking data or variants thereof (e.g., windowed average or the like).
[0099] Figure 10A illustrates an eyepiece resolution function 1020, E(θ), that varies as a function of angle. The eyepiece resolution function 1020 has a maximum at the center of the eyepiece 242 and falls off towards the edges. In various implementations, the eyepiece resolution function 1020 includes a portion of a circle, ellipse, parabola, or hyperbola.
[00100] Figure 10A also illustrates an unconstrained rendering resolution function 1010, denoted Su(θ), that has a peak centered at a gaze angle (θg). Around the peak, the unconstrained rendering resolution function 1010 is greater than the eyepiece resolution function 1020. Thus, if the rendering module 210 were to render an image having the resolution indicated by the unconstrained rendering resolution function 1010, details at those angles would be stripped by the eyepiece 242. Accordingly, in order to avoid the computational expense and delay in rendering those details, in various implementations, the rendering module 210 generates a capped rendering resolution function 1030 (in bold), Sc(θ), equal to the lesser of the eyepiece resolution function 1020 and the unconstrained rendering resolution function 1010. Thus,
[00101] Sc(θ) = min(E(θ), Su(θ)).
[00102] In various implementations, the amount of computational expense and delay associated with the rendering module 210 rendering the rendered image is kept relatively constant (e.g., normalized), irrespective of the gaze angle of the user 250. Accordingly, in various implementations, the rendering module 210 renders the rendered image using a rendering resolution function that has a fixed summation value indicative of the total amount of detail in the rendered image. In various implementations, the summation value is generally equal to the integral of the rendering resolution function over the field of view. In other words, the summation value corresponds to the area under the rendering resolution function over the field of view. In various implementations, the summation value corresponds to the number of pixels, tiles, and/or (x,y) locations in the rendered image.
[00103] The summation value of the capped rendering resolution function 1030 is less than the summation value of the unconstrained rendering resolution function 1010. In order to generate a rendering resolution function with a greater summation value, e.g., equal to a fixed summation value, the rendering module increases values of the capped rendering resolution function 1030 that were not decreased as compared to the unconstrained rendering resolution function 1010. For example, Figure 10B illustrates a first constrained rendering resolution function 1032 in which the fall-off portions of the rendering resolution function are increased as compared to the fall-off portions of the capped rendering resolution function 1030. As another example, Figure 10C illustrates a second constrained rendering resolution function 1034 in which the peak width of the rendering resolution function is increased as compared to the peak width of the capped rendering resolution function 1030.
[00104] Figure 11 is a flowchart representation of a method 1100 of rendering an image in accordance with some implementations. In some implementations (and as detailed below as an example), the method 1100 is performed by a rendering module, such as the rendering module 210 of Figure 2. In various implementations, the method 1100 is performed by an HMD, such as the HMD 100 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2. In various implementations, the method 1100 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays. In some implementations, the method 1100 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 1100 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
[00105] The method 1100 begins in block 1110 with the rendering module obtaining eye tracking data indicative of a gaze of a user (e.g., where the user is looking, such as gaze direction, and/or fixation point of the user). In various implementations, the rendering module receives data indicative of performance characteristics of an eyepiece at least at the gaze of the user. In various implementations, performance characteristics of the eyepiece at the gaze of the user can be determined from the eye tracking data.
[00106] The method 1100 continues in block 1120, with the rendering module generating a rendering resolution function based on the eye tracking data, the rendering resolution function having a maximum value dependent on the eye tracking data and a summation value independent of the eye tracking data.
[00107] In various implementations, generating the rendering resolution function includes generating an unconstrained rendering resolution function based on the eye tracking data (such as the unconstrained rendering resolution function 1010 of Figure 10A); determining the maximum value (of the rendering resolution function after constraining) based on the eye tracking data (and, optionally, an eyepiece resolution function such as the eyepiece resolution function 1020 of Figure 10A); decreasing values of the unconstrained rendering resolution function above the maximum value to the maximum value in order to generate a capped rendering resolution function (such as the capped rendering resolution function 1030 of Figure 10A); and increasing non-decreased values of the capped rendering resolution function in order to generate the rendering resolution function. In various implementations, increasing the non-decreased values of the capped rendering resolution function includes increasing fall-off portions of the capped rendering resolution function. In some implementations, peripheral portions of the rendering resolution function fall off in an inverse-linear fashion (e.g., hyperbolically). In various implementations, increasing the non-decreased values of the capped rendering resolution function includes increasing a peak width of the capped rendering resolution function, such as increasing the size of the field of focus.
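A minimal sketch of this cap-and-redistribute approach follows, assuming a peaked unconstrained function, a simple eyepiece model, and a proportional boost of the non-decreased values; all shapes and constants are illustrative assumptions rather than values from the described implementations.

    import numpy as np

    def constrain(unconstrained, eyepiece, iterations=32):
        # Cap the unconstrained rendering resolution function by the eyepiece
        # resolution function, then repeatedly boost the values that are still below
        # the cap so that the summation value (discrete integral over the field of
        # view) is preserved. The proportional boost is one assumed strategy.
        target = unconstrained.sum()                 # fixed summation value
        s = np.minimum(unconstrained, eyepiece)      # capped rendering resolution function
        for _ in range(iterations):
            deficit = target - s.sum()
            if deficit <= 1e-9:
                break
            free = s < eyepiece                      # non-decreased (uncapped) values
            if not free.any():
                break                                # eyepiece cannot support the target detail
            s[free] += deficit * s[free] / s[free].sum()   # raise the fall-off portions
            s = np.minimum(s, eyepiece)              # re-apply the cap
        return s

    theta = np.linspace(-1.0, 1.0, 401)              # angle across the field of view (assumed units)
    gaze = 0.6                                       # user looking away from the eyepiece center
    unconstrained = 60.0 * np.exp(-((theta - gaze) / 0.15) ** 2)  # assumed peaked profile
    eyepiece = 60.0 / (1.0 + 3.0 * theta ** 2)       # assumed eyepiece resolution fall-off

    constrained = constrain(unconstrained, eyepiece)
    # The constrained total approaches the fixed summation value of the unconstrained function.
    print(round(unconstrained.sum(), 1), round(constrained.sum(), 1))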
[00108] In various implementations, the maximum value is based on a mapping between the gaze of the user and lens performance characteristics. In some implementations, the lens performance characteristics are represented by an eyepiece resolution function or a modulation transfer function (MTF). In some implementations, the lens performance characteristics are determined by surface lens modeling.
[00109] In various implementations, the maximum value is determined as a function of gaze direction (because the eyepiece resolution function varies as a function of gaze direction). In various implementations, the maximum value is based on changes in the gaze of the user, such as gaze motion (e.g., changing gaze location). For example, in some implementations, the maximum value of the rendering resolution function is decreased when the user is looking around (because resolution perception decreases during eye motion). As another example, in some implementations, when the user blinks, the maximum value of the rendering resolution function is decreased (because resolution perception [and eye tracking confidence] decreases when the user blinks).
[00110] In various implementations, the maximum value is affected by the lens performance characteristics. For example, in some implementations, the maximum value is decreased when the lens performance characteristics indicate that the lens cannot support a higher resolution. In some implementations, the lens performance characteristics include a distortion introduced by a lens.
[00111] The method 1100 continues in block 1130, with the rendering module generating a rendered image based on SR content and the rendering resolution function (e.g., as described above with respect to Figure 7). In various implementations, the rendered image is a foveated image, such as an image having lower resolution outside the user’s field of fixation (FoF). In various implementations, the rendered image is a warped image, such as an image transformed into a non-uniform space as compared to the SR content.
[00112] Figure 12 is a flowchart representation of a method 1200 of rendering an image in accordance with some implementations. In some implementations (and as detailed below as an example), the method 1200 is performed by a rendering module, such as the rendering module 210 of Figure 2. In various implementations, the method 1200 is performed by an HMD, such as the HMD 100 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2. In various implementations, the method 1200 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays. In some implementations, the method 1200 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 1200 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
[00113] The method 1200 begins at block 1210 with the rendering module obtaining eye tracking data indicative of a gaze of a user (e.g., where a user is looking, such as gaze direction or a fixation point of a user). In various implementations, the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
[00114] The method 1200 continues at block 1220 with the rendering module obtaining eye tracking metadata indicative of a characteristic of the eye tracking data. In various implementations, the eye tracking metadata is obtained in association with the corresponding eye tracking data. In various implementations, the eye tracking data and the associated eye tracking metadata are received from an eye tracker, such as eye tracker 260 of Figure 2.
[00115] In various implementations, the eye tracking metadata includes data indicative of a confidence of the eye tracking data. For example, in various implementations, the eye tracking metadata provides a measurement of a belief that the eye tracking data correctly indicates the gaze of the user.
[00116] In various implementations, the data indicative of the confidence of the eye tracking data includes data indicative of an accuracy of the eye tracking data. In various implementations, the rendering module generates the data indicative of the accuracy of the eye tracking data based on a series of recently captured images of the eye of the user, recent measurements of the gaze of the user, user biometrics, and/or other obtained data.
[00117] In various implementations, the data indicative of the confidence of the eye tracking data includes data indicative of a latency of the eye tracking data (e.g., a difference between the time the eye tracking data is generated and the time the eye tracking data is received by the rendering module). In various implementations, the rendering module generates the data indicative of the latency of the eye tracking data based on timestamps of the eye tracking data. In various implementations, the confidence of the eye tracking data is higher when the latency is lower than when the latency is higher.
[00118] In various implementations, the eye tracking data includes data indicative of a prediction of the gaze of the user, and the data indicative of a confidence of the eye tracking data includes data indicative of a confidence of the prediction. In various implementations, the data indicative of a prediction of the gaze of the user is based on past measurements of the gaze of the user based on past captured images. In various implementations, the prediction of the gaze of the user is based on classifying past motion of the gaze of the user as a continuous fixation, smooth pursuit, or saccade. In various implementations, the confidence of the prediction is based on this classification. In particular, in various implementations, the confidence of the prediction is higher when past motion is classified as a continuous fixation or smooth pursuit than when the past motion is classified as a saccade.
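For illustration, such a classification might be derived from recent gaze velocity as in the following sketch; the velocity thresholds and the confidence values returned are assumptions, not values from the described implementations.

    def classify_gaze_motion(gaze_angles_deg, dt_s):
        # Classify recent gaze samples by peak angular velocity and return the class
        # together with an assumed prediction confidence. Thresholds (deg/s) and
        # confidence values are illustrative, not measured quantities.
        velocities = [abs(b - a) / dt_s for a, b in zip(gaze_angles_deg, gaze_angles_deg[1:])]
        peak = max(velocities)
        if peak < 5.0:
            return "continuous fixation", 0.9   # stable gaze: prediction is trusted
        if peak < 80.0:
            return "smooth pursuit", 0.7        # predictable motion: still fairly trusted
        return "saccade", 0.2                   # ballistic jump: prediction is not trusted

    motion, confidence = classify_gaze_motion([0.0, 0.1, 0.2, 0.2, 0.3], dt_s=0.1)
    print(motion, confidence)  # continuous fixation 0.9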
[00119] In various implementations, the eye tracking metadata includes data indicative of one or more biometrics of the user, and, in particular, biometrics which affect the eye tracking data or its confidence. In particular, in various implementations, the biometrics of the user include one or more of eye anatomy, ethnicity/physiognomy, eye color, age, visual aids (e.g., corrective lenses), make-up (e.g., eyeliner or mascara), medical condition, historic gaze variation, input preferences or calibration, headset position/orientation, pupil dilation/center-shift, and/or eyelid position.
[00120] In various implementations, the eye tracking metadata includes data indicative of one or more environmental conditions of an environment of the user in which the eye tracking data was generated. In particular, in various implementations, the environmental conditions include one or more of vibration, ambient temperature, IR direction light, or IR light intensity.
[00121] The method 1200 continues at block 1230 with the rendering module generating a rendering resolution function based on the eye tracking data and the eye tracking metadata. In various implementations, the rendering module generates the rendering resolution function with a peak maximum based on the eye tracking data (e.g., the resolution is highest where the user is looking). In various implementations, the rendering module generates the rendering resolution function with a peak width based on the eye tracking metadata (e.g., with a wider peak when the eye tracking metadata indicates less confidence in the correctness of the eye tracking data).
[00122] The method 1200 continues at block 1240 with the rendering module generating a rendered image based on the SR content and the rendering resolution function (e.g., as described above with respect to Figure 7). In various implementations, the rendered image is a foveated image, such as an image having lower resolution outside the user’s field of fixation (FoF). In various implementations, the rendered image is a warped image, such as an image transformed into a non-uniform space as compared to the SR content.
[00123] Figure 13A illustrates a rendering resolution function 1310 that may be used by the rendering module when performing dynamic foveation, when the eye tracking data indicates that the user is looking at an angle (θg) away from the center of the display panel, and when the eye tracking metadata indicates a first confidence resulting in a first peak width 1311. Figure 13B illustrates a rendering resolution function 1320 that may be used by the rendering module when performing dynamic foveation, when the eye tracking data indicates that the user is looking at the angle (θg) away from the center of the display panel, and when the eye tracking metadata indicates a second confidence, less than the first confidence, resulting in a second peak width 1321 greater than the first peak width 1311.
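A minimal sketch of such a confidence-dependent profile is given below; the Gaussian shape, the resolution floor, and the way confidence widens the peak are assumptions of the sketch.

    import numpy as np

    def rendering_resolution(theta, gaze_angle, confidence,
                             s_max=60.0, base_width=0.1, floor=5.0):
        # Peak location comes from the eye tracking data (gaze_angle); peak width grows
        # as the eye tracking metadata reports lower confidence. All constants and the
        # Gaussian shape are assumed.
        width = base_width / max(confidence, 0.1)   # lower confidence -> wider peak
        return floor + (s_max - floor) * np.exp(-((theta - gaze_angle) / width) ** 2)

    theta = np.linspace(-0.8, 0.8, 9)
    high_conf = rendering_resolution(theta, gaze_angle=0.3, confidence=0.9)
    low_conf = rendering_resolution(theta, gaze_angle=0.3, confidence=0.3)
    print(np.round(high_conf, 1))
    print(np.round(low_conf, 1))   # same peak location, noticeably wider peak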
[00124] In various implementations, the rendering module detects loss of an eye tracking stream including the eye tracking metadata and the eye tracking data. In response, the rendering module generates a second rendering resolution function based on detecting the loss of the eye tracking stream and generates a second rendered image based on the SR content and the second rendering resolution function.
[00125] In various implementations, detecting the loss of the eye tracking stream includes determining that the gaze of the user was static at a time of the loss of the eye tracking stream. Accordingly, in various implementations, generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a same location as a peak maximum of the rendering resolution function and with a peak width greater than a peak width of the rendering resolution function. Thus, in various implementations, in response to detecting the loss of an eye tracking stream, the rendering resolution function stays at the same location, but the size of the field of fixation increases.
[00126] In various implementations, detecting the loss of the eye tracking stream includes determining that the gaze of the user was moving at a time of the loss of the eye tracking stream. Accordingly, in various implementations, generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a location displaced toward the center as compared to a peak maximum of the rendering resolution function, and with a peak width greater than a peak width of the rendering resolution function. Thus, in various implementations, in response to detecting the loss of an eye tracking stream, the rendering resolution function moves to the center of the display panel and the size of the field of fixation increases.
[00127] In various implementations, detecting the loss of the eye tracking stream includes determining that the gaze of the user was moving in a direction at a time of the loss of the eye tracking stream. Accordingly, in various implementations, generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a location displaced in the direction as compared to a peak maximum of the rendering resolution function, and with a peak width greater than a peak width of the rendering resolution function. Thus, in various implementations, in response to detecting the loss of an eye tracking stream, the rendering resolution function moves to a predicted location and the size of the field of fixation increases.
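By way of illustration, the three fallback behaviors described above might be combined as in the following sketch; the widening factor, the drift amount, and the velocity threshold are assumed values.

    def fallback_peak(last_gaze_angle, gaze_velocity, direction_known,
                      widen=1.5, moving_threshold=0.05, drift=0.1):
        # Returns (peak location, peak-width multiplier) for the second rendering
        # resolution function after the eye tracking stream is lost. The widening
        # factor, drift amount, and velocity threshold are assumed values.
        if abs(gaze_velocity) < moving_threshold:
            # Gaze was static: keep the peak where it was, enlarge the field of fixation.
            return last_gaze_angle, widen
        if direction_known:
            # Gaze was moving in a known direction: displace the peak in that direction.
            return last_gaze_angle + drift * (1.0 if gaze_velocity > 0 else -1.0), widen
        # Gaze was moving but the direction is not relied upon: displace the peak toward the center.
        return last_gaze_angle * 0.5, widen

    print(fallback_peak(0.4, gaze_velocity=0.0, direction_known=False))  # (0.4, 1.5)
    print(fallback_peak(0.4, gaze_velocity=0.3, direction_known=True))   # (0.5, 1.5)
    print(fallback_peak(0.4, gaze_velocity=0.3, direction_known=False))  # (0.2, 1.5)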
[00128] Figure 14 is a flowchart representation of a method 1400 of rendering an image in accordance with some implementations. In some implementations (and as detailed below as an example), the method 1400 is performed by a rendering module, such as the rendering module 210 of Figure 2. In various implementations, the method 1400 is performed by an HMD, such as the HMD 100 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2. In various implementations, the method 1400 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays. In some implementations, the method 1400 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 1400 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
[00129] The method 1400 begins at block 1410 with the rendering module obtaining eye tracking data indicative of a gaze of a user (e.g., where a user is looking, such as gaze direction or a fixation point of a user). In various implementations, the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
[00130] The method 1400 continues at block 1420 with the rendering module obtaining SR content to be rendered. In various implementations, the SR content can include definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), or other information describing content to be represented in a rendered image.
[00131] The method 1400 continues at block 1430 with the rendering module generating a rendering resolution function based on the eye tracking data and the SR content. In various implementations, the rendering module generates the rendering resolution function with a peak maximum based on the eye tracking data (e.g., the resolution is highest where the user is looking). In various implementations, the rendering module generates the rendering resolution function based on the eye tracking data and adjusts the rendering resolution function based on the SR content. For example, in some implementations, the rendering module increases the rendering resolution function in one or more areas of import, such as a game objective or content that humans are particularly adapted to resolve (e.g., content to which humans are likely to pay attention), like a face or high resolution object. As another example, in some implementations, the rendering module increases the rendering resolution function in one or more areas of motion (e.g., of objects of the SR content).
[00132] In various implementations, the rendering module generates the rendering resolution function based on a brightness of the SR content. For example, in some implementations, because peripheral vision is more light-sensitive than central vision, peripheral resolution is increased in darker conditions (as compared to brighter conditions). In various implementations, increasing the peripheral resolution includes increasing the peak width of the rendering resolution function and/or increasing the fall-off portions of the rendering resolution function.
[00133] In various implementations, the rendering module generates the rendering resolution function based on a color of the SR content. For example, in some implementations, because sensitivity to red-green color variations declines more steeply toward the periphery than sensitivity to luminance or blue-yellow colors, peripheral resolution is decreased when the SR content is primarily red-green as opposed to blue-yellow. In various implementations, decreasing the peripheral resolution includes decreasing the peak width of the rendering resolution function and/or decreasing the fall-off portions of the rendering resolution function.
[00134] In various implementations, generating the rendering resolution function based on the SR content (e.g., a color of the SR content) includes generating different rendering resolution functions for different color channels (e.g., three different rendering resolution functions for three different color channels, such as red, green, and blue). In particular, in various implementations, the rendering module generates a first rendering resolution function for a first color channel and a second rendering resolution function for a second color channel different than the first rendering resolution function for the first color channel. Further, in generating the rendered image (as described below), the rendering module generates a first color channel image of the rendered image based on the first rendering resolution function and a second color channel image of the rendered image based on the second rendering resolution function.
[00135] In various implementations, the rendering module generates the rendering resolution function based on a complexity of the SR content. For example, in various implementations, the rendering module increases the rendering resolution function in areas with high resolution spatial changes and/or fast spatial changes.
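For illustration, the brightness-based and color-channel-based adjustments described above might be sketched as follows; the peak shape, the widening and narrowing factors, and the channel handling are assumptions of the sketch.

    import numpy as np

    def base_resolution(theta, gaze_angle, width):
        # Peaked profile centered on the gaze angle (shape and constants are assumed).
        return 5.0 + 55.0 * np.exp(-((theta - gaze_angle) / width) ** 2)

    def content_adjusted_resolution(theta, gaze_angle, mean_brightness, channel):
        # Darker content widens the peak (more peripheral detail, since peripheral vision
        # is more light-sensitive); the red and green channels use a narrower profile than
        # blue, reflecting the steeper peripheral fall-off of red-green sensitivity.
        # All adjustment factors are assumed values.
        width = 0.12
        width *= 1.0 + 0.5 * (1.0 - mean_brightness)   # darker scene -> wider peak
        if channel in ("red", "green"):
            width *= 0.8                               # narrower profile for red-green detail
        return base_resolution(theta, gaze_angle, width)

    theta = np.linspace(-0.6, 0.6, 7)
    print(np.round(content_adjusted_resolution(theta, 0.0, mean_brightness=0.2, channel="blue"), 1))
    print(np.round(content_adjusted_resolution(theta, 0.0, mean_brightness=0.9, channel="red"), 1))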
[00136] The method 1400 continues at block 1440 with the rendering module generating a rendered image based on the SR content and the rendering resolution function (e.g., as described above with respect to Figure 7). In various implementations, the rendered image is a foveated image, such as an image having lower resolution outside the user’s field of fixation (FoF). In various implementations, the rendered image is a warped image, such as an image transformed into a non-uniform space as compared to the SR content.
[00137] While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
[00138] It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the “first node” are renamed consistently and all occurrences of the “second node” are renamed consistently. The first node and the second node are both nodes, but they are not the same node.
[00139] The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[00140] As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.

Claims

What is claimed is:
1. A method comprising:
obtaining simulated reality (SR) content to be rendered into a display space;
obtaining a rendering resolution function defining a mapping between the display space and a warped space, the rendering resolution function depending on a fixation point of a user in the display space; and
generating a rendered image based on the SR content and the rendering resolution function, wherein the rendered image includes a warped image with a plurality of pixels at respective locations uniformly spaced in a grid pattern in the warped space, wherein the plurality of pixels are respectively associated with a plurality of respective pixel values based on the SR content and a plurality of respective scaling factors defining an area in the display space based on the rendering resolution function.
2. The method of claim 1, wherein obtaining the rendering resolution function includes: obtaining eye tracking data indicative of a gaze of a user;
determining the fixation point of the user in the display space; and
generating the rendering resolution function based on the fixation point of the user in the display space.
3. The method of claim 2, wherein the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
4. The method of any of claims 1-3, wherein the warped image includes a plurality of tiles at respective locations uniformly spaced in a grid pattern in the warped space, wherein each of the plurality of tiles is associated with a respective one or more scaling factors.
5. The method of claim 4, wherein each of the plurality of tiles is a 32x32 matrix of pixels.
6. The method of any of claims 1-5, wherein one or more of the plurality of respective scaling factors include a horizontal scaling factor and a vertical scaling factor.
7. The method of any of claims 1-6, further comprising transmitting the warped image including the plurality of respective pixel values in association with the plurality of respective scaling factors.
8. The method of any of claims 1-7, wherein the rendering resolution function characterizes a resolution in the display space as a function of angle.
9. The method of claim 8, further comprising:
determining the integral of the rendering resolution function;
determining the tangent of the inverse of the integral of the rendering resolution function; and
determining, for each of the respective locations uniformly spaced in a grid pattern in the warped space, the respective scaling factors based on the tangent of the inverse of the integral of the rendering resolution function.
10. A device comprising:
one or more processors;
a non-transitory memory;
one or more displays; and
one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to perform any of the methods of claims 1-9.
11. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with one or more displays, cause the device to perform any of the methods of claims 1-9.
12. A device comprising:
one or more processors;
a non-transitory memory;
one or more displays; and
means for causing the device to perform any of the methods of claims 1-9.
13. A device comprising:
a display; and one or more processors to:
obtain simulated reality (SR) content to be rendered into a display space; obtain a rendering resolution function defining a mapping between the display space and a warped space, the rendering resolution function depending on a fixation point of a user in the display space;
generate a rendered image based on the SR content and the rendering resolution function, wherein the rendered image includes a warped image with a plurality of pixels at respective locations uniformly spaced in a grid pattern in the warped space, wherein the plurality of pixels are respectively associated with a plurality of respective pixel values based on the SR content and a plurality of respective scaling factors defining an area in the display space based on the rendering resolution function.
14. A method comprising:
obtaining eye tracking data indicative of a gaze of a user;
obtaining simulated reality (SR) content to be rendered;
determining a foveation mode to apply to rendering the SR content; and
in accordance with a determination that the foveation mode is a dynamic foveation mode, rendering the SR content according to dynamic foveation based on the eye tracking data.
15. The method of claim 14, further comprising:
in accordance with a determination that the foveation mode is a static foveation mode, rendering the SR content according to static foveation independent of the eye tracking data.
16. The method of claims 14 or 15, further comprising:
in accordance with a determination that the foveation mode is a no-foveation mode, rendering the SR content without foveation.
17. The method of any of claims 14-16, wherein the foveation mode is determined based on a rendering processor characteristic.
18. The method of claim 17, wherein the rendering processor characteristic includes an available processing power.
19. The method of claim 17, wherein the rendering processor characteristic includes a processing speed.
20. The method of claim 17, wherein the rendering processor characteristic includes a processor type.
21. The method of any of claims 14-20, wherein the foveation mode is determined based on the eye tracking data.
22. The method of claim 21, wherein the foveation mode is determined based on gaps in the eye tracking data.
23. The method of claim 21 or 22, wherein the foveation mode is determined based on accuracy of the eye tracking data.
24. The method of any of claims 21-23, wherein the foveation mode is determined based on latency of the eye tracking data.
25. The method of any of claims 14-24, wherein the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
26. A device comprising:
one or more processors;
a non-transitory memory;
one or more displays; and
one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to perform any of the methods of claims 14-25.
26. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with one or more displays, cause the device to perform any of the methods of claims 14-25.
27. A device comprising:
one or more processors; a non-transitory memory;
one or more displays; and
means for causing the device to perform any of the methods of claims 14-25.
28. A device comprising:
a display; and
one or more processors to:
obtain eye tracking data indicative of a gaze of a user;
obtain simulated reality (SR) content to be rendered;
determine a foveation mode to apply to rendering the SR content; and in accordance with a determination that the foveation mode is a dynamic foveation mode, render the SR content according to dynamic foveation based on the eye tracking data.
29. A method comprising:
obtaining eye tracking data indicative of a gaze of a user;
generating a rendering resolution function based on the eye tracking data, the rendering resolution function having a maximum value dependent on the eye tracking data and a summation value independent of the eye tracking data; and
generating a rendered image based on simulated reality (SR) content and the rendering resolution function.
30. The method of claim 29, wherein generating the rendering resolution function includes: generating an unconstrained rendering resolution function based on the eye tracking data;
determining the maximum value based on the eye tracking data;
decreasing values of the unconstrained rendering resolution function above the maximum value to the maximum value in order to generate a capped rendering resolution function; and
increasing non-decreased values of the capped rendering resolution function in order to generate the rendering resolution function.
31. The method of claim 30, wherein increasing the non-decreased values of the capped rendering resolution function includes increasing fall-off portions of the capped rendering resolution function.
32. The method of claim 30 or 31, wherein increasing the non-decreased values of the capped rendering resolution function includes increasing a peak width of the capped rendering resolution function.
33. The method of any of claims 29-32, wherein the maximum value is based on a mapping between the gaze of the user and lens performance characteristics.
34. The method of claim 33, wherein the lens performance characteristics include a resolution supportable by a lens.
35. The method of claims 33 or 34, wherein the lens performance characteristics include a distortion introduced by a lens.
36. The method of any of claims 33-35, wherein the lens performance characteristics are determined by surface lens modelling.
37. The method of any of claims 29-36, wherein the maximum value is based on motion of the gaze of the user.
38. The method of any of claims 29-37, wherein the maximum value is based on gaps in the eye tracking data.
39. The method of any of claims 29-38, wherein the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
40. The method of any of claims 29-39, wherein the rendered image is a foveated image.
41. The method of any of claims 29-39, wherein the rendered image is a warped image.
42. A device comprising: one or more processors;
a non-transitory memory;
one or more displays; and
one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to perform any of the methods of claims 29-41.
43. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with one or more displays, cause the device to perform any of the methods of claims 29-41.
44. A device comprising:
one or more processors;
a non-transitory memory;
one or more displays; and
means for causing the device to perform any of the methods of claims 29-41.
45. A device comprising:
a display; and
one or more processors to:
obtain eye tracking data indicative of a gaze of a user;
generate a rendering resolution function based on the eye tracking data, the rendering resolution function having a maximum value dependent on the eye tracking data and a summation value independent of the eye tracking data; and
generate a rendered image based on simulated reality (SR) content and the rendering resolution function.
46. A method comprising:
obtaining eye tracking data indicative of a gaze of a user;
obtaining eye tracking metadata indicative of a characteristic of the eye tracking data; generating a rendering resolution function based on the eye tracking data and the eye tracking metadata; and
generating a rendered image based on simulated reality (SR) content and the rendering resolution function.
47. The method of claim 46, wherein the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
48. The method of claims 46 or 47, wherein the eye tracking metadata includes data indicative of a confidence of the eye tracking data.
49. The method of claim 48, wherein the data indicative of a confidence of the eye tracking data includes data indicative of an accuracy of the eye tracking data.
50. The method of claims 48 or 49, wherein the data indicative of a confidence of the eye tracking data includes data indicative of a latency of the eye tracking data.
51. The method of any of claims 48-50, wherein the eye tracking data includes data indicative of a prediction of the gaze of the user and the data indicative of a confidence of the eye tracking data includes data indicative of a confidence of the prediction.
52. The method of any of claims 46-51, wherein the eye tracking metadata includes data indicative of one or more biometrics of the user.
53. The method of any of claims 46-52, wherein the eye tracking metadata includes data indicative of one or more environmental conditions.
54. The method of any of claims 46-53, wherein generating the rendering resolution function includes generating the rendering resolution function with a peak maximum based on the eye tracking data and a peak width based on the eye tracking metadata.
55. The method of any of claims 46-54, wherein the rendered image is a foveated image.
56. The method of any of claims 46-54, wherein the rendered image is a warped image.
57. The method of any of claims 46-56, further comprising:
detecting loss of an eye tracking stream including the eye tracking data and the eye tracking metadata; generating a second rendering resolution function based on detecting the loss of the eye tracking stream; and
generating a second rendered image based on SR content and the second rendering resolution function.
58. The method of claim 57, wherein detecting the loss of the eye tracking stream includes determining that the gaze of the user was static at a time of the loss of the eye tracking stream, and wherein generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a same location as a peak maximum of the rendering resolution function and with a peak width greater than a peak width of the rendering resolution function.
59. The method of claim 57, wherein detecting the loss of the eye tracking stream includes determining that the gaze of the user was moving at a time of the loss of the eye tracking stream, and wherein generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a location displaced toward the center as compared to a peak maximum of the rendering resolution function and with a peak width greater than a peak width of the rendering resolution function.
60. The method of claim 57, wherein detecting the loss of the eye tracking stream includes determining that the gaze of the user was moving in a direction at a time of the loss of the eye tracking stream, and wherein generating the second rendering resolution function includes generating the second rendering resolution function with a peak maximum at a location displaced in the direction as compared to a peak maximum of the rendering resolution function and with a peak width greater than a peak width of the rendering resolution function.
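A minimal sketch of the stream-loss fallback of claims 57-60, assuming a one-dimensional gaze angle and a simple velocity threshold: a static gaze keeps the peak where it was but widens it, while a moving gaze also displaces the peak in the direction of motion (the displacement toward the center in claim 59 would be handled analogously). The threshold, drift interval, and widening factor are illustrative assumptions.

def fallback_resolution(last_fixation_deg, gaze_velocity_dps,
                        widen_factor=2.0, drift_s=0.1):
    """Sketch: choose a peak location and a width scale for the second
    rendering resolution function after the eye tracking stream is lost."""
    if abs(gaze_velocity_dps) < 1.0:
        # Gaze was effectively static: keep the peak at the same location.
        peak = last_fixation_deg
    else:
        # Gaze was moving: displace the peak in the direction of motion.
        peak = last_fixation_deg + gaze_velocity_dps * drift_s
    # In both cases the peak is made wider than before the loss.
    return peak, widen_factor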
61. A device comprising:
one or more processors;
a non-transitory memory;
one or more displays; and
one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to perform any of the methods of claims 46-60.
62. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with one or more displays, cause the device to perform any of the methods of claims 46-60.
63. A device comprising:
one or more processors;
a non-transitory memory;
one or more displays; and
means for causing the device to perform any of the methods of claims 46-60.
64. A device comprising:
a display; and
one or more processors to:
obtain eye tracking data indicative of a gaze of a user;
obtain eye tracking metadata indicative of a characteristic of the eye tracking data;
generate a rendering resolution function based on the eye tracking data and the eye tracking metadata; and
generate a rendered image based on simulated reality (SR) content and the rendering resolution function.
65. A method comprising:
obtaining eye tracking data indicative of a gaze of a user;
obtaining simulated reality (SR) content to be rendered;
generating a rendering resolution function based on the eye tracking data and the SR content; and
generating a rendered image based on the SR content and the rendering resolution function.
66. The method of claim 65, wherein generating the rendering resolution function includes generating the rendering resolution function based on a brightness of the SR content.
67. The method of claims 65 or 66, wherein generating the rendering resolution function includes generating the rendering resolution function based on a color of the SR content.
68. The method of any of claims 65-67, wherein generating the rendering resolution function includes generating a first rendering resolution function for a first color channel and a second rendering resolution function for a second color channel different than the first rendering resolution function for the first color channel, wherein generating the rendered image includes generating a first color channel image of the rendered image based on the first rendering resolution function and generating a second color channel image of the rendered image based on the second rendering resolution function.
69. The method of any of claims 65-68, wherein generating the rendering resolution function includes generating the rendering resolution function based on a complexity of the SR content.
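To illustrate the per-channel variant of claim 68, the sketch below derives a separate rendering resolution function for each color channel from a single gaze-based function; halving the blue channel's sampling density is an assumed heuristic (motivated by the eye's lower acuity for blue detail), not something specified by the claims, and the function name is hypothetical.

import numpy as np

def per_channel_resolution(ppd):
    """Sketch: one rendering resolution function per color channel,
    with the blue channel sampled at half the density of red and green."""
    ppd = np.asarray(ppd, dtype=float)
    return {"red": ppd, "green": ppd, "blue": 0.5 * ppd}

The rendered image would then be assembled from a first color channel image generated with one function and a second color channel image generated with the other, as the claim describes.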
70. The method of any of claims 65-69, wherein generating the rendering resolution function includes:
generating the rendering resolution function based on the eye tracking data; and
adjusting the rendering resolution function based on the SR content.
71. The method of claim 70, wherein adjusting the rendering resolution function based on the SR content includes increasing the rendering resolution function in one or more areas of import.
72. The method of claims 70 or 71, wherein adjusting the rendering resolution function based on the SR content includes increasing the rendering resolution function in one or more areas of motion.
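A hedged sketch of the content-based adjustment of claims 70-72: a gaze-based resolution function is raised in regions flagged as important or in motion. The boolean maps and boost factors are hypothetical inputs chosen for illustration.

import numpy as np

def adjust_for_content(ppd, importance_map, motion_map,
                       importance_boost=1.5, motion_boost=1.25):
    """Sketch: increase the rendering resolution function in areas of
    import and in areas of motion within the SR content."""
    ppd = np.asarray(ppd, dtype=float)
    adjusted = ppd * np.where(importance_map, importance_boost, 1.0)
    adjusted = adjusted * np.where(motion_map, motion_boost, 1.0)
    return adjusted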
73. The method of any of claims 65-72, wherein the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
74. The method of any of claims 65-73, wherein the rendered image is a foveated image.
75. The method of any of claims 65-73, wherein the rendered image is a warped image.
76. A device comprising:
one or more processors;
a non-transitory memory;
one or more displays; and
one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to perform any of the methods of claims 65-75.
77. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with one or more displays, cause the device to perform any of the methods of claims 65-75.
78. A device comprising:
one or more processors;
a non-transitory memory;
one or more displays; and
means for causing the device to perform any of the methods of claims 65-75.
79. A device comprising:
a display; and
one or more processors to:
obtain eye tracking data indicative of a gaze of a user;
obtain simulated reality (SR) content to be rendered;
generate a rendering resolution function based on the eye tracking data and the SR content; and
generate a rendered image based on the SR content and the rendering resolution function.
PCT/US2019/030820 2018-05-07 2019-05-06 Dynamic foveated rendering WO2019217262A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862667723P 2018-05-07 2018-05-07
US62/667,723 2018-05-07

Publications (1)

Publication Number Publication Date
WO2019217262A1 true WO2019217262A1 (en) 2019-11-14

Family

ID=66625278

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/030820 WO2019217262A1 (en) 2018-05-07 2019-05-06 Dynamic foveated rendering

Country Status (1)

Country Link
WO (1) WO2019217262A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110142138A1 (en) * 2008-08-20 2011-06-16 Thomson Licensing Refined depth map
EP3111640A1 (en) * 2014-02-26 2017-01-04 Sony Computer Entertainment Europe Limited Image encoding and display
US20170285736A1 (en) * 2016-03-31 2017-10-05 Sony Computer Entertainment Inc. Reducing rendering computation and power consumption by detecting saccades and blinks
WO2018041244A1 (en) * 2016-09-02 2018-03-08 Mediatek Inc. Incremental quality delivery and compositing processing

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210142443A1 (en) * 2018-05-07 2021-05-13 Apple Inc. Dynamic foveated pipeline
US11836885B2 (en) * 2018-05-07 2023-12-05 Apple Inc. Dynamic foveated pipeline
US11436698B2 (en) 2020-01-28 2022-09-06 Samsung Electronics Co., Ltd. Method of playing back image on display device and display device
WO2022066341A1 (en) * 2020-09-22 2022-03-31 Sterling Labs Llc Attention-driven rendering for computer-generated objects

Similar Documents

Publication Publication Date Title
US11836885B2 (en) Dynamic foveated pipeline
US11900578B2 (en) Gaze direction-based adaptive pre-filtering of video data
US10643307B2 (en) Super-resolution based foveated rendering
WO2019217260A1 (en) Dynamic foveated display
GB2533553A (en) Image processing method and apparatus
US10948730B2 (en) Dynamic panel masking
WO2019217262A1 (en) Dynamic foveated rendering
JP2023525191A (en) Displays that use light sensors to generate environmentally consistent artificial reality content
WO2013191120A1 (en) Image processing device, method, and program, and storage medium
US10997741B2 (en) Scene camera retargeting
CN112470484A (en) Partial shadow and HDR
US11749024B2 (en) Graphics processing method and related eye-tracking system
US11543655B1 (en) Rendering for multi-focus display systems
US11726320B2 (en) Information processing apparatus, information processing method, and program
WO2019217264A1 (en) Dynamic foveated compression
US20240095879A1 (en) Image Generation with Resolution Constraints
US20220180473A1 (en) Frame Rate Extrapolation
US20230067584A1 (en) Adaptive Quantization Matrix for Extended Reality Video Encoding
CN114120934A (en) Head-mounted display device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19725469

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry into the European phase

Ref document number: 19725469

Country of ref document: EP

Kind code of ref document: A1