CN112558902A - Gaze-independent dithering for dynamic foveal displays - Google Patents

Gaze-independent dithering for dynamic foveal displays

Info

Publication number
CN112558902A
Authority
CN
China
Prior art keywords
dither
foveal
blocks
block
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010888754.0A
Other languages
Chinese (zh)
Inventor
王令韬 (Lingtao Wang)
张晟 (Sheng Zhang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from U.S. application 16/928,870 (US11435821B2)
Application filed by Apple Inc filed Critical Apple Inc
Publication of CN112558902A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 - General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present disclosure relates to gaze-independent dithering for dynamic foveal displays. An electronic device provided herein includes a display and an eye tracker configured to collect eye tracking data regarding a gaze of one or more eyes of a user on the display. The electronic device also includes processing circuitry operatively coupled to the display and configured to generate pixel data for a frame of content based at least in part on the eye tracking data such that the content is configured to be displayed on the display in a dynamic foveal manner. The processing circuitry is further configured to apply a dithering pattern to the frame of content independently of the gaze of the one or more eyes of the user.

Description

Gaze-independent dithering for dynamic foveal displays
Cross Reference to Related Applications
This application claims the benefit of U.S. Provisional Patent Application No. 62/906,510, entitled "GAZE-INDEPENDENT DITHERING FOR DYNAMICALLY FOVEATED DISPLAYS," filed September 26, 2019, and U.S. Non-Provisional Patent Application No. 16/928,870, entitled "GAZE-INDEPENDENT DITHERING FOR DYNAMICALLY FOVEATED DISPLAYS," filed July 14, 2020, which are hereby incorporated by reference in their entireties for all purposes.
Disclosure of Invention
The following sets forth a summary of certain embodiments disclosed herein. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these particular embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, the present disclosure may encompass a variety of aspects that may not be set forth below.
The present disclosure relates to dithering techniques that may be used with foveal content, such as dynamic foveal content. Foveation refers to a technique that varies the amount of detail, or resolution, across an image based on a fixation point (such as a point or region within the image itself, or a point or region of the image on which the viewer's eyes are focused) or based on movement of the viewer's gaze. More specifically, the amount of detail can be varied by using different resolutions in various portions of the image. For example, with static foveation, the sizes and locations of the various resolution areas of the electronic display are fixed. As another example, with dynamic foveation, the areas of the electronic display that use the various resolutions may change between two or more images based on the viewer's gaze. For instance, content that uses multiple images (such as video and video games) may be presented to a viewer by displaying the images in rapid succession, and the portions of the electronic display that display content at relatively high resolution and relatively low resolution may change from frame to frame.
Dithering generally refers to a technique that applies noise to image data. For example, a dithering pattern may be applied to the image data to be displayed by pixels of an electronic display to prevent color banding from occurring in a frame of content. When dynamic foveal content (e.g., an image or a frame of content) is being presented and a dither pattern for that content is determined based on the user's gaze, many different dither patterns may be used over multiple frames of image content. Visual artifacts may occur due to the dither pattern changing over time during dynamic foveation. Visual artifacts remaining on the display may be referred to as image retention, image persistence, sticking, and/or ghosting. That is, visual artifacts may cause an image to appear, to the human eye, to remain on the electronic display for a period of time after the display no longer provides that image content. For example, the human eye may perceive that a frame of content, or a portion thereof, is still being displayed on the display when the display is actually displaying a subsequent frame of content.
Accordingly, to reduce and/or eliminate such visual artifacts, a gaze-independent dithering technique is provided. More specifically, by defining dither blocks (e.g., groups of pixels whose image data is to be dithered in the same manner) based on the original locations of the pixels within the display, rather than on the locations of pixels in foveal groupings that may move between frames, a more uniform dither pattern may be achieved between frames of content. By providing a more uniform dither pattern, image artifacts due to dithering that would otherwise be perceptible to the human eye may be reduced or eliminated.
Various modifications to the above-described features may be made to various aspects of the present disclosure. Other features may also be added to these various aspects. These refinements and additional features may exist individually or in any combination. For example, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure alone or in any combination. The brief summary presented above is intended to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure without limitation to the claimed subject matter.
Drawings
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
FIG. 1 is a block diagram of an electronic device having an electronic display according to one embodiment;
FIG. 2 is a perspective view of a notebook computer representing an embodiment of the electronic device of FIG. 1;
FIG. 3 is a front view of a handheld device showing another embodiment of the electronic device of FIG. 1;
FIG. 4 is a front view of another handheld device showing another embodiment of the electronic device of FIG. 1;
FIG. 5 is a front view of a desktop computer showing another embodiment of the electronic device of FIG. 1;
FIG. 6 is a perspective view of a wearable electronic device representing another embodiment of the electronic device of FIG. 1;
FIG. 7A is an illustration of the display of FIG. 1 in which a static fovea is utilized;
FIG. 7B is an illustration of the display of FIG. 1 in which a dynamic fovea is utilized, according to one embodiment;
FIG. 8 is a diagram representing gaze-related dithering, according to one embodiment;
FIG. 9 is an image illustrating dither patterns from two content frames overlaid on top of each other when gaze-related dithering is used, according to one embodiment;
FIG. 10 is a diagram representing gaze-independent dithering, according to one embodiment;
FIG. 11 is an image illustrating dither patterns from two content frames overlaid on top of each other when gaze-independent dithering is used, according to one embodiment;
FIG. 12 is a flow diagram of a process for generating a gaze-independent dithering pattern, according to one embodiment;
FIG. 13 illustrates foveal grouping regions, according to one embodiment;
FIG. 14 shows the blocks of FIG. 13 in the original pixel domain, according to one embodiment;
FIG. 15 is a diagram illustrating a comparison of dither block boundaries with foveal grouping region boundaries in which no foveal boundary mismatch occurs, according to one embodiment;
FIG. 16 is a diagram illustrating a comparison of dither block boundaries with foveal grouping region boundaries in which a foveal boundary mismatch occurs, according to one embodiment;
FIG. 17 is a diagram illustrating correction of a foveal grouping mismatch, according to one embodiment; and
FIG. 18 is another diagram illustrating correction of a foveal grouping mismatch, according to one embodiment.
Detailed Description
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
Accordingly, FIG. 1 illustrates a block diagram of an electronic device 10 that may provide gaze-independent dithering for foveal content, such as dynamic foveal content. As will be described in more detail below, the electronic device 10 may represent any suitable electronic device, such as a computer, mobile phone, portable media device, tablet computer, television, virtual reality headset, vehicle dashboard, and the like. For example, electronic device 10 may represent a notebook computer 10A as shown in FIG. 2, a handheld device 10B as shown in FIG. 3, a handheld device 10C as shown in FIG. 4, a desktop computer 10D as shown in FIG. 5, a wearable electronic device 10E as shown in FIG. 6, or any suitable similar device.
The electronic device 10 shown in FIG. 1 may include, for example, a processor core complex 12, a local memory 14, a main memory storage device 16, an electronic display 18, input structures 22, an input/output (I/O) interface 24, a network interface 26, a power supply 29, image processing circuitry 30, and an eye tracker 32. Image processing circuitry 30 may prepare image data (e.g., pixel data) from processor core complex 12 for display on electronic display 18. Although the image processing circuitry 30 is shown as a component within the processor core complex 12, the image processing circuitry 30 may represent any suitable hardware and/or software that may occur between the initial creation of image data and its preparation for display on the electronic display 18. Thus, the image processing circuitry 30 may be located entirely or partially within the processor core complex 12, entirely or partially as a separate component between the processor core complex 12 and the electronic display 18, or entirely or partially as a component of the electronic display 18.
The various functional blocks shown in fig. 1 may include hardware elements (including circuitry), software elements (including machine-executable instructions stored on a tangible, non-transitory medium, such as local memory 14 or main memory storage device 16), or a combination of hardware and software elements. It should be noted that fig. 1 is only one example of a particular implementation and is intended to illustrate the types of components that may be present in electronic device 10. Indeed, the various depicted components may be combined into fewer components or separated into additional components. For example, the local memory 14 and the main memory storage device 16 may be included in a single component.
The processor core complex 12 may perform various operations on the electronic device 10, such as generating image data to be displayed on the electronic display 18 and applying a dithering pattern to the image data. The processor core complex 12 may include any suitable data processing circuitry for performing these operations, such as one or more microprocessors, one or more application-specific integrated circuits (ASICs), or one or more programmable logic devices (PLDs). In some cases, the processor core complex 12 may execute programs or instructions (e.g., an operating system or an application program) stored on a suitable article of manufacture, such as the local memory 14 and/or the main memory storage device 16. In addition to instructions for the processor core complex 12, the local memory 14 and/or the main memory storage device 16 may also store data to be processed by the processor core complex 12. By way of example, local memory 14 may comprise random access memory (RAM), and main memory storage device 16 may comprise read-only memory (ROM) or rewritable non-volatile memory (such as flash memory, hard drives, optical discs, etc.).
The electronic display 18 may display image frames, such as a graphical user interface (GUI) or application interface for an operating system, still images, or video content. The processor core complex 12 may provide at least some of these image frames. The electronic display 18 may be a self-emissive display, such as an organic light emitting diode (OLED) display, an LED display, or a μLED display, or may be a liquid crystal display (LCD) illuminated by a backlight. In some embodiments, the electronic display 18 may include a touch screen that may allow a user to interact with a user interface of the electronic device 10. In addition, the electronic display 18 may display foveal content.
The input structures 22 of the electronic device 10 may enable a user to interact with the electronic device 10 (e.g., press a button or icon to increase or decrease the volume level). Just as with the network interface 26, the I/O interface 24 may enable the electronic device 10 to interact with various other electronic devices. The network interface 26 may include, for example, interfaces for a personal area network (PAN) such as a Bluetooth network, for a local area network (LAN) or wireless local area network (WLAN) such as an 802.11x Wi-Fi network, and/or for a wide area network (WAN) such as a cellular network. The network interface 26 may also include, for example, interfaces for a broadband fixed wireless access network (WiMAX), a mobile broadband wireless network (mobile WiMAX), an asymmetric digital subscriber line (e.g., ADSL, VDSL), digital video broadcasting-terrestrial (DVB-T) and its extension DVB-handheld (DVB-H), ultra-wideband (UWB), alternating current (AC) power lines, and the like. Power supply 29 may include any suitable power source, such as a rechargeable lithium polymer (Li-poly) battery and/or an alternating current (AC) power converter.
The eye tracker 32 may measure the position and movement of one or both eyes of a person viewing the electronic display 18 of the electronic device 10. For example, the eye tracker 32 may be a camera that may record the movement of the viewer's eyes as the viewer views the electronic display 18. However, several different operations may be employed to track the movement of the observer's eyes. For example, different types of infrared/near-infrared eye movement tracking techniques may be utilized, such as bright pupil tracking and dark pupil tracking. In both types of eye movement tracking, infrared or near-infrared light is reflected from one or both eyes of an observer to produce corneal reflections. The vector between the center of the eye pupil and the corneal reflection can be used to determine the point on the electronic display 18 that the viewer is looking at. Further, as described below, different portions of the electronic display 18 may be used to display content in a high resolution portion and a low resolution portion based on the position on the electronic display 18 that the viewer's eyes are viewing.
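By way of illustration only, the mapping from a pupil/glint measurement to a display coordinate might be sketched in Python as follows; the linear calibration model and all names here are assumptions for illustration (real eye trackers typically fit a per-user, often higher-order, calibration), not details taken from this disclosure.

import numpy as np

def estimate_gaze_point(pupil_center, corneal_reflection, calib_matrix, calib_offset):
    # Vector between the pupil center and the corneal reflection (glint),
    # both measured in camera image coordinates.
    gaze_vector = np.asarray(pupil_center, float) - np.asarray(corneal_reflection, float)
    # Affine map from the glint-vector space to display pixel coordinates,
    # assumed to have been fitted during a prior calibration step.
    return calib_matrix @ gaze_vector + calib_offset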
In some embodiments, the electronic device 10 may take the form of a computer, a portable electronic device, a wearable electronic device, or another type of electronic device. Such computers may include computers that are generally portable (e.g., laptop, notebook, and tablet computers) as well as computers that are generally used in one place (e.g., conventional desktop computers, workstations, and/or servers). In some embodiments, the electronic device 10 in the form of a computer may be a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® mini, or Mac Pro® available from Apple Inc. (Cupertino, California). By way of example, an electronic device 10 in the form of a laptop computer 10A is shown in FIG. 2, according to one embodiment of the present disclosure. The illustrated computer 10A may include a housing or case 36, an electronic display 18, input structures 22, and ports for the I/O interface 24. In one embodiment, the input structures 22 (such as a keyboard and/or touchpad) may be used to interact with the computer 10A, such as to start, control, or operate a GUI or applications running on the computer 10A. For example, a keyboard and/or touchpad may allow a user to navigate a user interface or application interface displayed on the electronic display 18. In addition, the computer 10A may also include an eye tracker 32, such as a camera.
FIG. 3 depicts a front view of a handheld device 10B that represents one embodiment of the electronic device 10. The handheld device 10B may represent, for example, a cellular telephone, a media player, a personal data organizer, a handheld game platform, or any combination of such devices. By way of example, the handheld device 10B may be a model of an iPod® or iPhone® available from Apple Inc. (Cupertino, California). The handheld device 10B may include a housing 36 to protect internal components from physical damage and from electromagnetic interference. The housing 36 may enclose the electronic display 18. The I/O interfaces 24 may open through the housing 36 and may include, for example, an I/O port for a hardwired connection for charging and/or content manipulation using a standard connector and protocol, such as the Lightning connector provided by Apple Inc. In addition, the handheld device 10B may include an eye tracker 32.
User input structures 22 in conjunction with electronic display 18 may allow a user to control handheld device 10B. For example, input structures 22 may activate or deactivate handheld device 10B, navigate a user interface to a home screen, a user-configurable application screen, and/or activate a voice recognition feature of handheld device 10B. Other input structures 22 may provide volume control or may switch between vibration and ring modes. Input structures 22 may also include a microphone to capture a user's voice for various voice-related features, and a speaker that may enable audio playback and/or certain telephony functions. The input structure 22 may also include a headphone input that may provide a connection to an external speaker and/or headphones.
FIG. 4 depicts a front view of another handheld device 10C that represents another embodiment of the electronic device 10. The handheld device 10C may represent, for example, a tablet computer or one of various portable computing devices. By way of example, the handheld device 10C may be a tablet-sized embodiment of the electronic device 10, such as a model of an iPad® available from Apple Inc. (Cupertino, California). Like the handheld device 10B, the handheld device 10C may also include an eye tracker 32.
Referring to FIG. 5, a computer 10D may represent another embodiment of the electronic device 10 of FIG. 1. The computer 10D may be any computer, such as a desktop computer, a server, or a notebook computer, but may also be a standalone media player or video game console. By way of example, the computer 10D may be an iMac®, a Mac® mini, or a Mac Pro® available from Apple Inc. (Cupertino, California). It should be noted that the computer 10D may also represent a personal computer (PC) from another manufacturer. A similar housing 36 may be provided to protect and enclose internal components of the computer 10D, such as the electronic display 18. In some embodiments, a user of the computer 10D may interact with the computer 10D using various peripheral input devices connected to the computer 10D, such as input structures 22A and 22B (e.g., a keyboard and a mouse). Further, the computer 10D may include an eye tracker 32.
Similarly, FIG. 6 depicts a wearable electronic device 10E, representative of another embodiment of the electronic device 10 of FIG. 1, that may be configured to operate using the techniques described herein. For example, the wearable electronic device 10E may be virtual reality glasses. However, in other embodiments, the wearable electronic device 10E may include other wearable electronic devices, such as augmented reality glasses. When the user is wearing the wearable electronic device 10E, the electronic display 18 of the wearable electronic device 10E is visible to the user. Additionally, the eye tracker 32 of the wearable electronic device 10E may track movement of one or both of the user's eyes while the user is wearing the wearable electronic device 10E. In some cases, the handheld device 10B may be used in the wearable electronic device 10E. For example, a portion 37 of a headset 38 of the wearable electronic device 10E may allow a user to hold the handheld device 10B in place and view virtual reality content using the handheld device 10B.
The electronic display 18 of the electronic device 10 is capable of displaying content, such as photographs, videos, and images or frames of video games, in a foveal manner. Foveation refers to a technique that varies the amount of detail, or resolution, across an image based on a fixation point (such as a point or region within the image itself, or a point or region of the image on which the viewer's eyes are focused) or based on movement of the viewer's gaze. More specifically, the amount of detail can be varied by using different resolutions in various portions of the image. For example, in one area of the electronic display 18, one pixel resolution may be used to display a portion of an image, while a lower or higher pixel resolution may be used to display another portion of the image in another area of the electronic display 18.
To display foveal content, the electronic display 18 may display content in foveal regions, meaning that the resolution of the content displayed on the electronic display 18 may differ across various portions of the electronic display 18. For example, FIG. 7A is a diagram 60 showing the electronic display 18 utilizing static foveation. With static foveation, the sizes and locations of the various resolution areas of the electronic display 18 are fixed. In the illustrated embodiment, the electronic display 18 includes a high resolution region 62, a medium resolution region 64, and a low resolution region 66. However, in other embodiments, there may be two or more foveal regions (e.g., a high resolution region and a low resolution region).
As noted above, electronic displays such as the electronic display 18 may also use dynamic foveation. With dynamic foveation, the areas of the electronic display 18 that use the various resolutions may change between two or more images based on the viewer's gaze. For example, content that uses multiple images (such as video and video games) may be presented to a viewer by displaying the images in rapid succession. The portions of the electronic display 18 that display content at relatively high resolution and relatively low resolution may change based on, for example, data collected by the eye tracker 32 that indicates the location on the electronic display 18 at which the viewer's gaze is focused. Accordingly, FIG. 7B shows a diagram 70 illustrating portions of the electronic display 18 associated with a first frame of content 72, a second frame of content 74, and a third frame of content 76. For each of the frames 72, 74, 76, a high resolution region 78, a medium resolution region 80, and a low resolution region 82 are utilized. During the transition from the first frame 72 to the second frame 74, the high resolution region 78 and the medium resolution region 80 shift from being positioned near the lower left corner of the electronic display 18 to the top central portion of the electronic display 18 as the viewer's gaze similarly shifts. Likewise, when the third frame 76 is displayed, the high resolution region 78 and the medium resolution region 80 shift toward the lower right corner of the electronic display 18 as the viewer's gaze shifts.
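As a way to picture how such regions might be recomputed for every frame, the following Python sketch centers a high and a medium resolution rectangle on the gaze point reported by the eye tracker; the region sizes and the rectangle representation are illustrative assumptions, not dimensions from this disclosure.

def foveal_regions(gaze_x, gaze_y, display_w, display_h, high=512, medium=1024):
    # Everything outside the returned rectangles is treated as the
    # low resolution region, mirroring regions 78, 80, and 82 above.
    def centered_rect(size):
        x0 = min(max(gaze_x - size // 2, 0), display_w - size)
        y0 = min(max(gaze_y - size // 2, 0), display_h - size)
        return (x0, y0, x0 + size, y0 + size)
    return {"high": centered_rect(high), "medium": centered_rect(medium)}

Recomputing these rectangles per frame is what makes the foveation dynamic: as the gaze point moves, so do the high and medium resolution regions.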
With the foregoing in mind, the present disclosure provides techniques that may be used when dithering foveal content, such as dynamic foveal content. Dithering generally refers to the application of noise to image data. For example, to prevent banding (e.g., color banding) in an image (e.g., in consecutive frames of image content), a dithering pattern may be applied in which the image data to be displayed by some pixels of the electronic display 18 is modified. As a more specific example, a gray level (e.g., a value indicating the brightness of a pixel when illuminated) may be increased (to produce relatively brighter content) or decreased (to produce relatively darker content). There are many dither patterns and dithering algorithms that may be used to dither content. Examples include the Floyd-Steinberg dithering algorithm, threshold or average dithering, random dithering, pattern dithering, ordered dithering (e.g., using a dither matrix), and error diffusion dithering. The techniques discussed herein may be incorporated into or applied in conjunction with such dither patterns or algorithms.
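For concreteness, the following Python sketch implements one of the standard techniques named above, ordered dithering with a tiled Bayer threshold matrix; it is a generic illustration rather than the specific dither hardware of this disclosure.

import numpy as np

# Classic 4 x 4 Bayer matrix, normalized to thresholds in [0, 1).
BAYER_4X4 = np.array([[ 0,  8,  2, 10],
                      [12,  4, 14,  6],
                      [ 3, 11,  1,  9],
                      [15,  7, 13,  5]]) / 16.0

def ordered_dither(gray, levels=4):
    # Quantize a grayscale image with values in [0, 1] down to `levels`
    # gray levels, using the tiled matrix as a per-pixel threshold so the
    # quantization error is spread spatially instead of forming bands.
    h, w = gray.shape
    threshold = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    scaled = gray * (levels - 1)
    return np.floor(scaled + threshold).clip(0, levels - 1) / (levels - 1)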
Continuing with the figures, FIG. 8 is a diagram 100 illustrating gaze-related dithering. In other words, the diagram 100 represents a dither pattern that is based on the user's gaze (e.g., as tracked by the eye tracker 32). For example, at a first time, the user's gaze 102A may be directed to an area of the electronic display 18. Based on the user's gaze 102A, the processor core complex 12 may determine a foveal grouping 104A, that is, the regions of the electronic display 18 in which content of various resolutions will be displayed. For example, the high resolution region 78, the medium resolution region 80, and the low resolution region 82 of FIG. 7B may be considered different foveal groupings. Based on the foveal grouping 104A, the processor core complex 12 may determine grouped pixel locations 106A, which may be groups of pixels within the foveal grouping 104A. Based on the grouped pixel locations 106A, the processor core complex 12 may determine and apply a dither pattern 108A. Accordingly, dithering may be performed based on the user's gaze or a shift in the user's gaze.
When gaze-related dithering techniques are used, the dither pattern presented on the electronic display 18 may shift over time as the user's gaze moves to different areas of the electronic display 18. Continuing with the example above, at a second time, such as when the user's gaze 102A shifts to gaze 102B, the user may be viewing another portion of the electronic display 18. For example, based on the tracking of one or both of the user's eyes by the eye tracker 32, the processor core complex 12 may determine that the user's gaze has moved from one area of the electronic display 18 to another. The processor core complex 12 may determine a foveal grouping 104B based on the user having gaze 102B. Further, based on the foveal grouping 104B, the processor core complex 12 may determine grouped pixel locations 106B, and a different dither pattern 108B may be applied to content presented on the electronic display 18 based on the grouped pixel locations 106B. Thus, when gaze-related dithering techniques are utilized, the dither pattern that appears when dynamic foveal content is displayed on the electronic display 18 may change as the area of the electronic display 18 on which the user's gaze is focused changes.
To help illustrate the variation of the dither pattern, FIG. 9 is presented. In particular, FIG. 9 includes an image 120 showing the dither patterns from two consecutive frames of image content overlaid on top of each other. Image 120 includes various regions 122A-E. The relatively dark region 122A may indicate a similar foveal grouping between the two frames forming the image 120, with the same dither pattern applied to both frames of content in that region. Regions 122B-E (and any other portion of image 120 that generally appears brighter than region 122A) indicate that two different dither patterns were used in the two frames. For example, when the foveal grouping 104 changes between frames, different pixels of the electronic display 18 may be darker or brighter in one frame than in the other. Because the dither patterns are based on the foveal groupings 104, different dither patterns may be used when the user's gaze shifts and different foveal groupings are used. When two content frames having different dither patterns overlap, the resulting appearance (e.g., image 120) may include a relatively large number of brighter areas (e.g., regions 122B-E) indicating the difference in dither patterns between the two content frames.
Using gaze-related dithering may result in visual artifacts that the user can perceive. For example, as the foveal grouping 104 changes, the user may be able to see visual artifacts associated with the change in foveal grouping between content frames (e.g., because different dither patterns are applied to different content frames). The perceptibility of such visual artifacts may be reduced or eliminated using the gaze-independent dithering techniques described below.
Gaze-independent dithering may also be performed, meaning that the dither pattern applied to a frame of image content may be provided independently of the user's gaze (e.g., as detected via the eye tracker 32). Turning to FIG. 10, a diagram 140 represents the application of gaze-independent dithering. Similar to the diagram 100 of FIG. 8, the user's gaze may shift (e.g., as illustrated by the shift from gaze 102C to gaze 102D), and the processor core complex 12 may determine foveal groupings 104C, 104D based on each of the gazes 102C, 102D. However, when a gaze-independent dithering technique is used, the original locations of the pixels may be used to determine the grouped pixel locations 106C. In other words, the processor core complex 12 may determine the grouped pixel locations 106C based on the locations of the pixels on the electronic display 18 rather than based on the foveal groupings (e.g., foveal grouping regions); the dither pattern is thereby decoupled from the foveal grouping. Because the positions of the pixels on the electronic display 18 are fixed, the same or similar grouped pixel locations 106C may be used for each frame of image content.
For example, the processor core complex 12 may apply the dither pattern 108C based on the grouped pixel locations 106C. Because the grouped pixel locations 106C are fixed, the dither pattern 108C may be substantially the same across multiple frames of image content. Accordingly, dithering can be performed in a gaze-independent manner when displaying dynamic foveal content (e.g., gaze-related content).
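To make the decoupling concrete, the following Python sketch contrasts the two indexing choices: looking up a dither threshold by a pixel group's original (native) display coordinates versus by its position within a foveal grouping region. The function names and the threshold-matrix representation are illustrative assumptions.

def gaze_independent_offset(orig_x, orig_y, dither_matrix):
    # Index by the group's original top-left pixel coordinates on the
    # display. These never change, so the same display pixels receive the
    # same threshold in every frame, however the foveal groupings move.
    n = dither_matrix.shape[0]
    return dither_matrix[orig_y % n, orig_x % n]

def gaze_related_offset(x_in_region, y_in_region, dither_matrix):
    # For contrast: indexing by position *within* a foveal grouping region
    # shifts the pattern whenever the region moves with the user's gaze,
    # producing the frame-to-frame differences shown in FIG. 9.
    n = dither_matrix.shape[0]
    return dither_matrix[y_in_region % n, x_in_region % n]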
To help illustrate a gaze-independent dither pattern, FIG. 11 is presented. In particular, FIG. 11 includes an image 160 showing the dither patterns from two consecutive frames of image content overlaid on top of each other. Because the same dither pattern (or two very similar dither patterns) is applied in both content frames used to form image 160, image 160 includes regions 162A-D that are relatively more pronounced compared with regions 122A-E of image 120. In other words, even though different foveal groupings 104 may be used in the two content frames, regions 162A-D indicate that the same or a similar dithering scheme is used across the content frames. For example, the relatively dark regions 162A, 162B may correspond to different foveal groupings 104 in the two frames of content in which the same, or substantially the same, dither pattern is applied. The relatively bright regions 162C, 162D may indicate areas where the dither patterns of the two frames differ. For example, the bright regions 162C, 162D may correspond to regions of the content frames where differently sized foveal grouping regions meet (e.g., boundaries between foveal grouping regions with different resolutions or boundaries between foveal grouping regions and dither blocks) or may indicate shifts in the foveal groupings between content frames. More specifically, the bright regions 162C, 162D may appear at or near boundaries between foveal grouping regions that include different numbers of pixels (e.g., the boundary between one foveal grouping region associated with relatively high resolution content and another foveal grouping region associated with relatively low resolution content).
Continuing with the gaze-independent dithering techniques, FIG. 12 is a flow diagram of a process 200 for generating a dither pattern independent of the user's gaze. The process 200 may be performed by the processor core complex 12, the image processing circuitry 30, or a combination thereof executing instructions stored in the local memory 14 or the main memory storage device 16. Further, while the operations of the process 200 are described below in a particular order, it should be noted that, in other embodiments, the operations of the process 200 may be performed in a different order than described below. The process 200 generally includes receiving a first set of eye tracking data (e.g., process block 202), receiving a second set of eye tracking data (e.g., process block 204), determining a change in position of the user's eyes on the electronic display 18 (e.g., process block 206), determining foveal grouping regions based on the change in position of the user's eyes (e.g., process block 208), generating a dither phase index based on the foveal grouping regions (e.g., process block 210), comparing dither block boundaries to foveal grouping region boundaries (e.g., process block 212), determining whether there is a foveal boundary mismatch (e.g., decision block 214), and, when there is no foveal boundary mismatch, returning to comparing dither block boundaries to foveal grouping region boundaries (e.g., process block 212). When there is a foveal boundary mismatch, the process 200 may include resetting the dither block (e.g., process block 216) and returning to comparing dither block boundaries to foveal grouping region boundaries (e.g., process block 212).
At process block 202, a first set of data regarding where the user's eyes are focused on the electronic display 18 at a first time may be received. This data may be acquired and transmitted via an eye tracking component of electronic device 10, such as eye tracker 32. Similarly, at block 204, a second set of data may be received regarding where the user's eyes are focused on the electronic display 18 at a second time. Based on the first and second sets of data, at block 206, a change in position of the user's eye between the first time and the second time may be determined.
At process block 208, foveal grouping regions may be determined based on the change in position of the user's eyes. For example, because the user's gaze may have shifted, the various portions of the electronic display 18 in which different resolution portions of the content are to be displayed may be determined. The foveal grouping regions correspond to the various regions of the electronic display 18 in which content of different resolutions will be displayed. To help illustrate the foveal grouping regions, FIG. 13 is provided. In particular, FIG. 13 illustrates various foveal grouping regions 230A-F. Region 230A corresponds to a low resolution portion of the electronic display 18. For example, region 230A may be relatively far from the point on the electronic display 18 at which the user's eyes are focused. Regions 230B-F may correspond to portions of the electronic display 18 in which progressively higher resolution content is to be displayed (e.g., based on the detected gaze of the user). For example, region 230F may be the highest resolution region, and the user's gaze may have been detected at or near the center point of region 230F.
When gaze-independent dithering is used, a dither block, or grouping of pixels, may have the same or similar dithering characteristics (e.g., a random number indicating how the pixels are dithered) independent of the foveal grouping regions (e.g., regions 230A-F). In effect, the dither blocks may be associated with the original pixel locations on the electronic display 18. However, because the content being displayed on the electronic display 18 is determined based on the foveal grouping, there may be portions of the electronic display 18 where a dither block includes pixels from different foveal grouping regions. When the pixels in one dither block include pixels from different foveal grouping regions, a "foveal boundary mismatch" is said to exist. A foveal boundary mismatch may cause the dither pattern to change between content frames. For example, in some cases, when the images of consecutive frames overlap, the resulting image may appear more similar to image 120 (associated with gaze-related dithering techniques) than to image 160 (associated with gaze-independent dithering techniques). Thus, to increase the uniformity of the dither pattern between frames, the techniques discussed below may be utilized to correct foveal boundary mismatches.
Returning to FIG. 12 and the discussion of the process 200, at process block 210, a dither phase index may be determined based on the foveal grouping regions (e.g., regions 230A-F). The dither phase index may enable detection of a foveal boundary mismatch. To determine or generate the dither phase index, the processor core complex 12 (or image processing circuitry 30) may use a multi-order linear feedback shift register in which the step size is determined based on the foveal grouping region in each portion of the electronic display 18. For example, FIG. 13 shows several blocks 232A-C being scanned and used to fill the linear feedback shift register. In the foveal domain, the first block 232A is a four-by-four (4 × 4) block, the second block 232B is a two-by-four (2 × 4) block, and the third block 232C is a one-by-four (1 × 4) block. That is, the sizes of the blocks 232A-C correspond to their foveal grouping regions.
To help further illustrate the blocks 232A-C, FIG. 14 is provided. In particular, FIG. 14 shows the blocks 232A-C in the original pixel domain. Each of the blocks 232A-C includes a number of smaller blocks 240, which may be referred to as pixel groups or pixel blocks. These pixel blocks may be part of the grouped pixel locations 106C. For example, block 232A includes pixel blocks 240 that each correspond to sixteen pixels of the electronic display 18 (e.g., a block four pixels wide by four pixels long). Because block 232A is a 4 × 4 block, block 232A includes sixteen pixel blocks 240 that correspond to 256 pixels of the electronic display 18 (e.g., a region sixteen pixels wide by sixteen pixels long). Block 232A may also correspond to one four-step entry in the linear feedback shift register. Block 232B (e.g., a 2 × 4 block) may include eight pixel blocks 240, or 128 pixels of the electronic display 18, and may be scanned with a step size of two pixel blocks 240, which corresponds to two steps in the linear feedback shift register. Block 232C is a 1 × 4 block that includes four pixel blocks 240 corresponding to 64 pixels of the electronic display 18, and may be scanned using a step size of one pixel block 240 wide, which corresponds to one step in the linear feedback shift register.
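One plausible software rendering of such a stepped shift register is sketched below in Python; the register width and tap positions are illustrative assumptions (any maximal-length configuration would serve), not values from this disclosure.

def lfsr_step(state, taps=(16, 14, 13, 11), width=16):
    # One step of a Fibonacci linear feedback shift register.
    bit = 0
    for t in taps:
        bit ^= (state >> (t - 1)) & 1
    return ((state << 1) | bit) & ((1 << width) - 1)

def advance_for_block(state, width_in_pixel_blocks):
    # Advance the register once per pixel-block column the foveal block
    # covers: four steps for block 232A, two for 232B, one for 232C. This
    # keeps the dither phase aligned to original pixel positions no matter
    # which foveal grouping region is being scanned.
    for _ in range(width_in_pixel_blocks):
        state = lfsr_step(state)
    return state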
Returning to FIG. 12 and the discussion of the process 200, at process block 212, dither block boundaries may be compared to foveal grouping region boundaries. FIG. 15 includes a diagram 250 illustrating such a comparison. In particular, the actual positions of the sub-blocks 252 of grouped pixels included in the dither blocks 254A, 254B may be compared to the expected positions of the grouped pixels within the dither blocks 254A, 254B. In other words, the step size (e.g., the number of pixel blocks 240) associated with the linear feedback shift register for a particular row of pixel blocks 240 within the dither blocks 254A, 254B may be compared to the expected row position. For example, returning briefly to FIG. 14, each sub-block 252 of a dither block may correspond to a row 260 of pixel blocks 240 (e.g., sub-block 252A corresponds to row 260A, sub-block 252B corresponds to row 260B, sub-block 252C corresponds to row 260C, and sub-block 252D corresponds to row 260D).
Returning to FIG. 15, the diagram 250 also includes columns 270A, 270B, which indicate the actual row position within the dither block 254 and the expected row position within the dither block 254, respectively. For example, in a dither block 254 formed from four rows of pixel blocks 240, column 270A may indicate whether a row is actually the first, second, third, or fourth row of the dither block 254. Column 270B may indicate the expected row number, which may be determined based on the size of the foveal grouping region in which the row (e.g., the row of pixel blocks 240) is located.
The expected row number Nexp of a row of pixel blocks 240 may be determined by dividing Npixel, the row number (in the pixel domain) of the first pixel row in the row of pixel blocks 240, by the foveal grouping size G of the foveal grouping region in which the pixel blocks 240 are located, applying a modulo operation (e.g., mod 4) to the quotient, and adding 1 to the result: Nexp = ((Npixel / G) mod 4) + 1. In a dither block having n rows, the expected row number takes a value between 1 and n (inclusive). Npixel takes a value between 0 and x - 1 (inclusive), where x is the number of rows of pixels included in the electronic display 18.
An example of determining the actual row number and the expected row number for a row of pixel blocks 240 will now be provided with reference to sub-block 252C. The sub-block 252C may correspond to row 260C of the first block 232A in FIG. 14, which is four pixel blocks 240 wide. Since row 260C is the third row of the first block 232A, the actual row number in this case is 3, as indicated in column 270A. For the expected row number, Npixel is 8 because the first pixel row within row 260C is the ninth row of pixels (e.g., pixel rows 0-7 are included in rows 260A and 260B), and G is 4. Dividing 8 by 4 yields a quotient of 2, and 2 mod 4 is 2. Adding 1 gives 3. Thus, the Nexp value of sub-block 252C is 3, as indicated by column 270B.
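Rendered as Python, the expected-row computation and the mismatch test it enables might look like the following sketch (the function names are illustrative):

def expected_row(n_pixel, g, rows_per_block=4):
    # n_pixel: row number, in the pixel domain, of the first pixel row in
    # the row of pixel blocks; g: foveal grouping size of its region.
    return (n_pixel // g) % rows_per_block + 1

def has_foveal_boundary_mismatch(actual_row, n_pixel, g):
    # A mismatch means this row of pixel blocks lies in a different foveal
    # grouping region than the dither block's position assumes.
    return actual_row != expected_row(n_pixel, g)

# Worked example from the text: row 260C starts at pixel row 8 with G = 4.
assert expected_row(8, 4) == 3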
Returning to FIG. 12 and the discussion of the process 200, at decision block 214, it may be determined whether there is a foveal boundary mismatch. For example, referring to FIG. 15, the values of columns 270A, 270B may be stored in separate registers, and the values of the registers may be compared to each other to determine whether there is a foveal boundary mismatch. As shown in FIG. 15, each of the actual row values in column 270A matches its corresponding expected row value provided in column 270B. Thus, there is no foveal boundary mismatch in FIG. 15. The absence of a detected foveal boundary mismatch may correspond to a dither block 254 whose rows of pixel blocks 240 all fall within a common foveal grouping region. For example, for the dither block 254A, each of the sub-blocks 252A-D is four pixel blocks 240 wide (e.g., as shown by "4 x").
Referring back to FIG. 12, when no foveal boundary mismatch is detected at decision block 214, the processor core complex 12 or image processing circuitry 30 may return to process block 212 and continue to compare dither block boundaries to foveal grouping region boundaries. However, when a foveal boundary mismatch is detected, the dither block may be reset at process block 216.
FIG. 16 shows an example of a foveal boundary mismatch. More specifically, FIG. 16 includes a diagram 280 in which dither block 254C includes four sub-blocks 252E-H indicating rows of pixel blocks 240 that are not all positioned within the same foveal grouping region. For example, the position of the dither block 254C within the electronic display 18 may correspond to block 290 in FIG. 13. As shown in FIG. 13, a first portion 292 of the block 290 is positioned within a 4 × 4 foveal grouping region (e.g., foveal grouping region 230A), while a second portion 294 of the block 290 is positioned within a 4 × 2 foveal grouping region (e.g., foveal grouping region 230B). Expanding on this example, the first portion 292 may include three rows of pixel blocks 240 located within the foveal grouping region 230A, and the second portion 294 may include one row of pixel blocks 240 located within the foveal grouping region 230B.
Referring back to FIG. 16, the actual row values in column 270C of the dither block 254C correspond to the rows of pixel blocks 240 found in block 290. The values in column 270D indicate the expected values associated with the dither block 254C. As shown by block 300, a foveal boundary mismatch is determined to exist. More specifically, the foveal boundary mismatch indicated by block 300 corresponds to the second portion 294 of block 290 (e.g., the fourth row of pixel blocks 240, corresponding to sub-block 252H of dither block 254C) being located in a different foveal grouping region than the first portion 292 of block 290. Although the dither block 254C is four pixel blocks 240 wide, the foveal grouping region 230B in which the second portion 294 of block 290 is located corresponds to a width of two pixel blocks 240. If not corrected, more foveal grouping mismatches may continue to occur in subsequent dither blocks, as indicated by the differing values in columns 270C, 270D for each subsequent sub-block 252. As described above, a foveal grouping mismatch may result in different dither patterns being used in different content frames. For example, the greater the amount of foveal grouping mismatch, the greater the difference between the dither patterns of two content frames may be, which may increase the amount of perceptible visual artifacts on the electronic display 18.
To help illustrate how a reset is performed to correct a foveal grouping mismatch, FIGS. 16, 17, and 18 are provided. In particular, FIG. 17 includes a diagram 320 that illustrates how software (such as algorithms or instructions that may be stored on the local memory 14 or main memory storage device 16 and executed by the processor core complex 12 or image processing circuitry 30) may be used to correct a foveal grouping mismatch. Similar to FIG. 16, a foveal grouping mismatch may be detected in the first dither block 254D (e.g., as shown by block 300). A second dither block 254E may be used, and when the second dither block 254E is utilized, the processor core complex 12 or the image processing circuitry 30 may cause a reset to occur by initiating a new dither block (e.g., the third dither block 254F) at the next row of pixel blocks 240 for which the expected row value is equal to 1. The row of pixel blocks 240 corresponding to sub-block 252I may be included in both the second dither block 254E and the third dither block 254F (e.g., as the last row in the second dither block 254E and the first row in the third dither block 254F). In other words, when a reset is performed, the processor core complex 12 or the image processing circuitry 30 may cause the value of the actual row number index to be modified to match the expected row number (e.g., 1), and the new dither block 254 may be used. As shown in FIG. 17, after the reset occurs, the actual row numbers (e.g., as shown in column 270E) and the expected row numbers (e.g., as shown in column 270F) match, which represents the elimination of the detected foveal boundary mismatch.
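The following Python sketch is one possible rendering of this software reset, reusing expected_row() from the earlier sketch; the input layout (a scan-order sequence of (n_pixel, g) pairs, one per row of pixel blocks) is an illustrative assumption.

def assign_rows(rows, rows_per_block=4):
    # Walk rows of pixel blocks in scan order, tracking the actual row
    # counter within the current dither block. After a mismatch, a new
    # dither block is started at the next row whose expected row value is
    # 1, as in FIG. 17, re-synchronizing the actual and expected values.
    actual, mismatch_pending, out = 1, False, []
    for n_pixel, g in rows:
        exp = expected_row(n_pixel, g, rows_per_block)
        if exp != actual:
            mismatch_pending = True
        if mismatch_pending and exp == 1:
            actual = 1            # reset: this row opens a new dither block
            mismatch_pending = False
        out.append((actual, exp))
        actual = actual % rows_per_block + 1
    return out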
FIG. 18 shows a diagram 340 representing a foveal boundary mismatch reset 342 performed with hardware included in the electronic device 10, such as buffers that may be included in the local memory 14 (or the main memory storage device 16). In this approach, the dithering operation proceeds by saving a first row (e.g., the row of pixel blocks 240 corresponding to sub-block 252K of dither block 254G) to a first buffer and applying a dither pattern to a second row (e.g., the next row of pixel blocks 240, corresponding to sub-block 252L of dither block 254G) together with the row saved in the buffer. The application of the dither pattern may continue in this manner until a foveal grouping mismatch is detected, in which case the next row of pixel blocks 240 with an expected row position of 1 (e.g., sub-block 252M, which corresponds to both dither blocks 254H and 254I) may be saved to a different, second buffer. Dithering may then be applied to the following row (e.g., the second row of pixel blocks 240, corresponding to the second sub-block). For example, similar to FIG. 17, at sub-block 252N in FIG. 18, a foveal grouping mismatch may be detected due to the difference between the actual and expected row values. Sub-block 252O (with an expected row value of four), the first sub-block 252 of dither block 254H, may be stored in the first buffer. The next sub-block 252M, which is included in both dither blocks 254H and 254I, may be dithered with the sub-block 252O stored in the first buffer, and the row of pixel blocks 240 corresponding to sub-block 252M is saved to the second buffer. The next sub-block 252P, which may be the second sub-block 252 of dither block 254I, may be dithered with the row of pixel blocks 240 stored in the second buffer, and the index of the actual position may be reset.
Returning to FIG. 12 and the discussion of the process 200, after resetting the dither block (e.g., at process block 216), the process 200 may return to process block 212 and continue to compare dither block boundaries to foveal grouping region boundaries. The process 200 may be complete when each dither block boundary and foveal grouping region in an image (e.g., a frame of content) has been compared and/or when each detected foveal boundary mismatch has been corrected. For example, as described above, a foveal boundary mismatch can be corrected by resetting the dither block in accordance with the discussion of FIGS. 17 and 18 above.
Although process 200 is discussed above as being performed based on a change in location of a user's gaze, it should be noted that in other embodiments, process 200 may be performed based on a user's gaze detected at a time associated with a particular frame. In other words, a foveal grouping region of a dynamic foveal image content frame may be determined based on eye tracking data associated with the content frame, and a dither pattern may be generated for such image content frames.
Accordingly, the present disclosure provides gaze-independent dithering techniques that may be used to dither foveal content, such as dynamic foveal content. For example, as described above, the dithering pattern may be applied based on the original locations of pixels within the electronic display rather than on groups of pixels determined by foveal grouping regions, as may be done when utilizing gaze-related dithering techniques. In addition, the dithering techniques disclosed herein may be used to correct for foveal grouping mismatches that may occur when pixels included in a group of pixels (e.g., a number of pixels defined based on an original position within an electronic display) are positioned in more than one foveal grouping region. Thus, the techniques described herein increase the uniformity of the dither pattern applied when rendering the foveal content on the display.
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
The technology described and claimed herein is referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. Furthermore, if any claims appended to the end of this specification contain one or more elements designated as "means for [perform]ing [a function]..." or "step for [perform]ing [a function]...", such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, such elements are not to be interpreted under 35 U.S.C. 112(f).

Claims (22)

1. An electronic device, comprising:
a display;
an eye tracker configured to collect eye tracking data regarding a gaze of one or more eyes of a user on the display; and
processing circuitry operatively coupled to the display and configured to:
generate pixel data for each frame of a plurality of frames of content based at least in part on the eye tracking data, wherein each frame of the plurality of frames comprises a plurality of foveal grouping regions comprising a relatively high resolution grouping region and a relatively low resolution grouping region, the relatively high resolution grouping region being associated with a first region of the display and the relatively low resolution grouping region being associated with a second, different region of the display; and
apply a dithering pattern to the frames of the plurality of frames of content independently of the gaze of the one or more eyes of the user.
2. The electronic device of claim 1, wherein:
the display includes a plurality of pixels; and
the processing circuitry is configured to:
determine a plurality of dither blocks, wherein each dither block of the plurality of dither blocks corresponds to a subset of the plurality of pixels; and
apply the dither pattern based at least in part on the plurality of dither blocks.
3. The electronic device of claim 2, wherein the processing circuitry is configured to determine a plurality of pixel blocks, wherein each pixel block of the plurality of pixel blocks corresponds to a portion of the plurality of pixels and is defined based at least in part on an original location of the portion of the plurality of pixels within the display.
4. The electronic device of claim 3, wherein the processing circuit is configured to:
determining whether a dither block of the plurality of dither blocks includes pixel blocks of the plurality of pixel blocks that are positioned within more than one of the plurality of foveal grouping regions of a single frame of the plurality of frames; and
resetting the dither block when the processing circuitry determines that the dither block includes pixel blocks positioned within more than one of the plurality of foveal grouping regions.
5. The electronic device of claim 4, wherein the processing circuit is configured to determine whether the dither block includes pixel blocks positioned within more than one of the plurality of foveal grouping regions by determining whether an expected line value of a portion of the dither block matches an actual line value of the portion of the dither block.
6. The electronic device of claim 5, wherein the portion of the dither block corresponds to a line of the plurality of pixel blocks or a portion thereof.
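By way of illustration of claims 4-6, the bookkeeping below compares the line value a dither block expects next against the line value actually produced by the foveal grouping; a disagreement indicates the block straddles more than one grouping region, and the block is reset. The class layout and per-line counter are assumptions for illustration.

class DitherBlock:
    def __init__(self, lines_per_pixel_block: int = 4):
        self.lines_per_pixel_block = lines_per_pixel_block
        self.expected_line = 0

    def on_line(self, actual_line: int) -> bool:
        """Process one line; returns True if a mismatch forced a reset."""
        if actual_line != self.expected_line:  # foveal grouping mismatch
            self.reset()
            return True
        self.expected_line = (self.expected_line + 1) % self.lines_per_pixel_block
        return False

    def reset(self):
        # Restart the dithering state for this block (claim 4).
        self.expected_line = 0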
7. The electronic device of any of claims 1-5, wherein, when a first dither pattern associated with a first frame of the plurality of frames of content overlaps with a second dither pattern associated with a second frame of the plurality of frames of content, a resulting image pattern appears substantially similar to FIG. 11.
8. The electronic device of any of claims 1-5, wherein the electronic device comprises a computer, a mobile phone, a portable media device, a tablet, a television, or a virtual reality headset having reduced power consumption due to power savings from using the plurality of foveal grouping regions while reducing image artifacts using the dither pattern.
9. An electronic device, comprising:
a display;
an eye tracker configured to collect eye tracking data regarding a gaze of one or more eyes of a user on the display; and
a processing circuit operatively coupled to the display and configured to:
receiving the eye tracking data;
generating pixel data for each frame of a plurality of frames of content based at least in part on the eye tracking data such that the content is configured to be displayed on the display in a dynamic foveal manner; and
applying a dither pattern to each frame of the plurality of frames of content independently of the gaze of the one or more eyes of the user.
10. The electronic device of claim 9, wherein the processing circuitry is configured to:
determining a plurality of dither blocks for each frame of the plurality of frames of content; and
applying the dither pattern based at least in part on the plurality of dither blocks.
11. The electronic device of claim 10, wherein the processing circuitry is configured to determine whether a foveal boundary mismatch exists in a frame of the plurality of frames of content, wherein the foveal boundary mismatch corresponds to a dither block of the plurality of dither blocks that includes pixels positioned in more than one foveal grouping region of a plurality of foveal grouping regions, wherein each foveal grouping region of the plurality of foveal grouping regions is associated with a resolution of the content and a different portion of the display.
12. The electronic device of claim 11, wherein the processing circuitry is configured to determine whether the foveal boundary mismatch exists based at least in part on a linear feedback shift register that is populated based at least in part on the plurality of foveal grouping regions.
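By way of illustration of claim 12, the sketch below seeds a 16-bit Galois LFSR from the foveal grouping layout; because the sequence is reproducible for a given layout, an incoming value that disagrees with the predicted one can flag a boundary mismatch. The tap polynomial and the seeding scheme are assumptions for illustration.

def lfsr16(seed: int):
    """16-bit Galois LFSR, taps 16/14/13/11 (toggle mask 0xB400); yields bits."""
    state = (seed & 0xFFFF) or 1  # avoid the all-zero lock-up state
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        yield lsb

def seed_from_groupings(region_heights: list[int]) -> int:
    # Fold the grouping-region layout into a 16-bit seed (illustrative).
    seed = 0
    for height in region_heights:
        seed = ((seed << 3) ^ height) & 0xFFFF
    return seed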
13. The electronic device of any of claims 9-12, wherein, when a first dither pattern associated with a first frame of the plurality of frames of content overlaps with a second dither pattern associated with a second frame of the plurality of frames of content, the resulting image includes a plurality of first regions and a plurality of second regions, wherein:
the plurality of first regions correspond to portions of the first frame and the second frame in which the first dither pattern and the second dither pattern are substantially the same; and
the plurality of second regions correspond to portions of the first frame and the second frame in which different dither patterns are applied.
14. The electronic device of claim 13, wherein the plurality of first regions are relatively darker in appearance than the plurality of second regions.
15. The electronic device of claim 13, wherein a region of the plurality of second regions indicates one or more shifts in a foveal grouping region between the first frame and the second frame.
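By way of illustration of claims 13-15, overlaying two frames' dither patterns as an absolute difference leaves matching portions at zero (the darker first regions) and marks differing portions with nonzero values (the brighter second regions), which trace any shift of the foveal grouping regions between the frames. This visualization is an assumption for illustration, not a recited procedure.

import numpy as np

def overlay_shift_map(dither_a: np.ndarray, dither_b: np.ndarray) -> np.ndarray:
    """Nonzero pixels mark where the two frames' dither patterns differ."""
    return np.abs(dither_a.astype(np.int16) - dither_b.astype(np.int16)).astype(np.uint8)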
16. A non-transitory computer readable medium comprising instructions that, when executed, are configured to cause a processing circuit to:
receiving eye tracking data regarding a gaze of one or more eyes of a user on a display;
generating pixel data for each frame of a plurality of frames of content based at least in part on the eye tracking data such that the content is configured to be displayed on the display in a dynamic foveal manner; and
applying a dither pattern to each frame of the plurality of frames of content based at least in part on a plurality of dither blocks and a plurality of pixel blocks, wherein each dither block of the plurality of dither blocks comprises a portion of the plurality of pixel blocks, wherein each pixel block of the plurality of pixel blocks comprises a subset of a plurality of pixels of the display, and wherein the plurality of pixel blocks are determined independently of the gaze of the one or more eyes of the user.
17. The non-transitory computer readable medium of claim 16, wherein the instructions, when executed, are configured to cause the processing circuit to:
determining whether a foveal boundary mismatch exists in a frame of the plurality of frames of content, wherein the foveal boundary mismatch corresponds to a dither block of the plurality of dither blocks that includes pixels positioned in more than one foveal grouping region of a plurality of foveal grouping regions, wherein each foveal grouping region of the plurality of foveal grouping regions is associated with a resolution of the content and a different portion of the display; and
causing the dither block to reset in response to determining that the foveal boundary mismatch exists for the dither block.
18. The non-transitory computer readable medium of claim 17, wherein the instructions, when executed, are configured to cause the processing circuit to determine whether the foveal boundary mismatch exists in the dither block in a manner comprising:
determining an actual line value for a sub-block of the dither block, wherein the actual line value corresponds to a line of a pixel block of a subset of the plurality of pixel blocks within the dither block;
determining an expected line value for the sub-block; and
determining that the foveal boundary mismatch exists when the actual line value and the expected line value are different.
19. The non-transitory computer readable medium of claim 18, wherein the instructions, when executed, are configured to cause the processing circuit to cause the dither block to reset by causing a new dither block to be used.
20. The non-transitory computer readable medium of claim 19, wherein the instructions, when executed, are configured to cause the processing circuit to cause the new dither block to be used when a pixel block of the plurality of pixel blocks has a second expected line number equal to a lowest expected line number.
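By way of illustration of claims 19 and 20, the reset may be realized by switching to a fresh dither block, deferred until a pixel block's next expected line number wraps back to the lowest line number so that the switch lands on a pixel-block boundary. The allocator interface below is an assumption for illustration.

def advance_dither_block(pool, current, mismatch: bool,
                         next_line: int, lowest_line: int):
    # Swap in a new dither block only at a pixel-block boundary (claim 20).
    if mismatch and next_line == lowest_line:
        current = pool.allocate_new()  # hypothetical allocator
    return current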
21. An electronic device, comprising:
a display;
an eye tracker configured to collect eye tracking data regarding a gaze of one or more eyes of a user on the display; and
a processing circuit operatively coupled to the display and configured to:
generating pixel data for each frame of a plurality of frames of content based at least in part on the eye tracking data, wherein each frame of the plurality of frames comprises a plurality of foveal grouping regions comprising a relatively high resolution grouping region and a relatively low resolution grouping region, the relatively high resolution grouping region being associated with a first portion of the display and the relatively low resolution grouping region being associated with a second, different portion of the display; and
applying a dither pattern to each frame of the plurality of frames of content independently of the gaze of the one or more eyes of the user, wherein the dither pattern is applied to each frame of the plurality of frames based at least in part on a plurality of dither blocks and a plurality of pixel blocks, wherein each dither block of the plurality of dither blocks comprises a portion of the plurality of pixel blocks, and wherein each pixel block of the plurality of pixel blocks comprises a subset of a plurality of pixels of the display.
22. A system, comprising:
means for tracking a user's gaze on a display;
means for generating pixel data for a plurality of frames of content based at least in part on eye tracking data such that the content is displayed on the display in a dynamic foveal manner; and
means for applying a dither pattern to each frame of the plurality of frames of content based at least in part on a plurality of dither blocks and a plurality of pixel blocks, wherein each dither block of the plurality of dither blocks comprises a portion of the plurality of pixel blocks, wherein each pixel block of the plurality of pixel blocks comprises a subset of a plurality of pixels of the display, and wherein the plurality of pixel blocks are determined independently of the gaze.

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962906510P 2019-09-26 2019-09-26
US62/906,510 2019-09-26
US16/928,870 2020-07-14
US16/928,870 US11435821B2 (en) 2019-09-26 2020-07-14 Gaze-independent dithering for dynamically foveated displays

Publications (1)

Publication Number Publication Date
CN112558902A (en) 2021-03-26

Family

ID=75040961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010888754.0A Pending CN112558902A (en) 2019-09-26 2020-08-28 Gaze-independent dithering for dynamic foveal displays

Country Status (1)

Country Link
CN (1) CN112558902A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012078207A1 (en) * 2010-12-08 2012-06-14 Sony Computer Entertainment Inc. Adaptive displays using gaze tracking
CN103559006A (en) * 2010-12-08 2014-02-05 索尼电脑娱乐公司 Adaptive displays using gaze tracking
US20120229497A1 (en) * 2011-03-08 2012-09-13 Apple Inc. Devices and methods for dynamic dithering
CN108136258A (en) * 2015-10-28 2018-06-08 微软技术许可有限责任公司 Picture frame is adjusted based on tracking eye motion
CN107833262A (en) * 2016-09-05 2018-03-23 Arm 有限公司 Graphic system and graphics processor
US20180082626A1 (en) * 2016-09-22 2018-03-22 Apple Inc. Dithering techniques for electronic displays

Similar Documents

Publication Publication Date Title
US11435821B2 (en) Gaze-independent dithering for dynamically foveated displays
US11194391B2 (en) Visual artifact mitigation of dynamic foveated displays
US9286658B2 (en) Image enhancement
US10298840B2 (en) Foveated camera for video augmented reality and head mounted display
CN109643517B (en) Display adjustment
US20230333649A1 (en) Recovery from eye-tracking loss in foveated displays
CN116348947A (en) Backlight reconstruction and compensation
Lin et al. ShiftMask: Dynamic OLED power shifting based on visual acuity for interactive mobile applications
US11256097B2 (en) Image generation apparatus, image display system, image generation method, and computer program
US20240045502A1 (en) Peripheral luminance or color remapping for power saving
CN107635132B (en) Display control method and device of naked eye 3D display terminal and display terminal
CN109783043B (en) Method and device for displaying frequency of display and display
US11373270B1 (en) Axis based compression for remote rendering
CN108604367B (en) Display method and handheld electronic device
CN112558902A (en) Gaze-independent dithering for dynamic foveal displays
CN110944194B (en) System and method for toggling display links
US20210097909A1 (en) Intra-Frame Interpolation Based Line-by-Line Tuning for Electronic Displays
US11922867B1 (en) Motion corrected interleaving
US11605330B1 (en) Mitigation of tearing from intra-frame pause
US11929021B1 (en) Optical crosstalk compensation for foveated display
US10839738B2 (en) Interlaced or interleaved variable persistence displays
WO2024119848A1 (en) Eye protection method, display device, eye protection apparatus, and storage medium
CN117351856A (en) Display method, display device, electronic equipment and readable storage medium
CN118015969A (en) Method, device, medium and display equipment for reducing power consumption of display module
CN111045764A (en) Interface adaptation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination