WO2020033875A1 - Apparatus, systems, and methods for foveated display - Google Patents


Info

Publication number
WO2020033875A1
Authority
WO
WIPO (PCT)
Prior art keywords
zone
data
foveated
display
row
Prior art date
Application number
PCT/US2019/045975
Other languages
French (fr)
Inventor
Stephen John Hart
Aaron L. Boyce
Original Assignee
Compound Photonics Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Compound Photonics Limited filed Critical Compound Photonics Limited
Publication of WO2020033875A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory
    • G09G5/393Arrangements for updating the contents of the bit-mapped memory
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory
    • G09G5/399Control of the bit-mapped memory using two or more bit-mapped memories, the operations of which are switched in time, e.g. ping-pong buffers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3188Scale or resolution adjustment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191Testing thereof
    • H04N9/3194Testing thereof including sensor feedback
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0407Resolution change, inclusive of the use of different resolutions for different screen areas

Definitions

  • Some systems reduce the image transport bandwidth by having the host add compression hardware/algorithms, which in turn requires the display to add decompression capability.
  • These forms of compression do not reduce the data transmit bandwidth between the driver and the display device, nor do they reduce the processing time among the internal components of the display device during the writing of data to the pixel array, which may limit the maximum frame rate, maximum bit-depth and/or maximum array size.
  • Some systems attempt to mimic a form of foveated imaging by splitting the video into two channels going to different display devices: one channel for the high-resolution area-of-interest and the other channel for the lower-resolution periphery/background. These systems accomplish this by mechanically steering the projection optics for the high-resolution area-of-interest image in the direction of the gaze.
  • Conventional systems of foveated imaging lack efficiency and effectiveness for various reasons.
  • Conventional foveation systems and methods include redundant data that wastes bandwidth, limits frame update rates, and limits native bit-depth and/or maximum pixels per display.
  • Because the zone size and offset parameters are not included in the video data, the use of a static display resolution/configuration does not enable updating of the foveated image in real time, on a frame-by-frame basis, as the gaze point changes.
  • Standard video protocols define neither a mixed-resolution frame nor a handshake of foveation parameters against the display hardware's capabilities.
  • Multiple-row writing of the same data is not supported; in some cases, multiple-row writing causes timing errors and/or local over-loading for replications of more than 2-to-1.
  • Embodiments of an apparatus, system, and method for foveated display are provided. It should be appreciated that the present embodiment can be implemented in numerous ways, such as a process, an apparatus, a system, a device, or a method. Several inventive embodiments are described below.
  • a system for foveated display is provided.
  • Foveated rendering is a type of image processing or image generation that takes advantage of the eye’s exponential reduction in acuity/resolution moving outward from the center of the retina (a user’s gaze direction) to the outer periphery of an image by providing a plurality of display zones (e.g., Zone 0, Zone 1, Zone 2, Zone 3, etc.).
  • the system of foveated display described herein provides a unique method and protocol of encapsulation of a foveated image frame and an innovative way of processing a foveated write for displaying the foveated image upon a display device.
  • a system may include a processor that couples to receive input image data and foveation zone definitions to generate a rendered foveated image, which is then processed using a selected protocol, in accordance with the present invention, into a foveated image frame having both image data and header packet data, which identifies two or more zones of differing resolution.
  • Each zone may be defined by a plurality of macropixels, having corresponding macropixel ratios.
  • the first zone Z0 may have horizontal and vertical ratios of 1 to 1, while the second and third zones (Z1 and Z2) may have respective macropixel ratios of 1 to 2 and 1 to 4 (where a Z1 macropixel is a 2x2 matrix of display pixels and a Z2 macropixel is a 4x4 matrix of display pixels).
  • the system may further include a driver controller circuit that couples to receive the foveated image frame to generate foveated bit plane data and convert the bit plane data into modulation planes.
  • One or more modulation devices may couple to receive the modulation planes to generate an expanded dataset, such that the foveated image is produced upon a display or output to the display.
  • a single bit of an associated plurality of macropixels may be copied to a set of display pixels based upon the corresponding macropixel ratios associated with the zone.
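The single-bit copy described above can be sketched in a few lines. This is an illustrative model only (the function name and list-of-lists layout are assumptions, not from the disclosure), under the patent's constraint that both macropixel ratios are integers:

```python
def expand_zone(macropixel_bits, h_ratio, v_ratio):
    """Copy each single-bit macropixel value to the h_ratio x v_ratio
    block of display pixels it covers (hypothetical helper)."""
    out = []
    for row in macropixel_bits:
        # Horizontal copy: each bit drives h_ratio adjacent columns.
        expanded = [bit for bit in row for _ in range(h_ratio)]
        # Vertical copy: the expanded row is written to v_ratio adjacent rows.
        out.extend(list(expanded) for _ in range(v_ratio))
    return out

# A 2x2 block of zone Z2 macropixel bits (ratio 1 to 4) expands to an
# 8x8 block of display pixels.
pixels = expand_zone([[1, 0],
                      [0, 1]], 4, 4)
```

For zone Z0 (ratios 1 to 1) the function degenerates to a plain copy, so the same raster path can serve every zone.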
  • a method and protocol of foveated display may include receiving image data relating to the image and foveation zones.
  • a processor may receive image input data and tracking data, based upon retina and/or head location of a user in real-time.
  • the method may further include generating a foveated image frame based upon the image data, wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is compressed, being defined by a plurality of macropixels and a corresponding macropixel ratio.
  • the method may further include transmitting the foveated image frame to one or more modulation devices having raster logic coupled to a display circuit including an array of pixels.
  • the foveated image frame having header packet data may be sent to a driver controller circuit that generates foveated bit plane data and converts the same into modulation planes based upon a modulation scheme and header packet data.
  • the modulation planes may be sent to the one or more modulation devices.
  • the method may include writing the modulation plane data to the array of display pixels based upon the header packet data and the corresponding macropixel ratios using the raster logic, wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratios associated with each zone.
  • a tangible, non-transitory, computer-readable media having instructions whereupon which, when executed by a processor, cause the processor to perform the foveated display method described herein.
  • the foveated display method may include receiving image data relating to the image and foveation zones.
  • a processor may receive image input data and tracking data, based upon retina and/or head location of a user in real-time, to produce a rendered foveated image as is done in the industry according to foveated rendering methods.
  • the method may further include generating a foveated image frame based upon the rendered foveated image data and the selected transmit protocol, wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is defined by a plurality of macropixels and corresponding macropixel ratios.
  • the method may further include transmitting the foveated image frame to one or more modulation devices and decoding the frame into an expanded dataset for controlling a display circuit, having an array of pixels.
  • the foveated image frame, having header packet data may be used to generate foveated bit plane data, which is converted into modulation planes based upon a modulation scheme and header packet data.
  • the modulation planes may be sent to the one or more modulation devices and decoded into the expanded dataset.
  • the method may include writing the modulation plane data to the array of display pixels based upon the expanded dataset, using the header packet data and each corresponding macropixel ratio with the raster logic; wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratios associated with each zone.
  • a foveated display device may include a driver controller circuit coupled to receive a foveated image frame to generate modulation planes, wherein the foveated image frame includes header packet data identifying two or more zones of differing resolution and wherein each zone is defined by a plurality of macropixels and corresponding macropixel ratios. Further, the foveated display device may include one or more modulation devices coupled to receive the modulation planes, such that a foveated image is produced upon a display, wherein, for zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a set of display pixels based upon the corresponding macropixel ratios associated with the zone.
  • FIG. 1 A is a system diagram of a foveated electromagnetic radiation modulation system, having processor circuitry, display driver circuitry and foveated modulation plane device circuitry, in accordance with some embodiments.
  • FIG. 1B is a system diagram of a foveated electromagnetic radiation modulation system, having processor circuitry and foveated grayscale device circuitry, in accordance with some embodiments.
  • FIG. 2A is a flow diagram of a method for foveated image frame generation by processor circuitry 110 as in FIG. 1A in accordance with some embodiments.
  • FIG. 2B is a flow diagram of a method for foveated modulation plane generation by display driver circuitry 120 as in FIG. 1A in accordance with some embodiments.
  • FIG. 2C is a flow diagram of a method or process for writing foveated modulation plane data to a display pixel array by a foveated modulation display device 130 as in FIG. 1A in accordance with some embodiments.
  • FIG. 2D is a flow diagram of a method or process for writing the foveated image frame data to the array of display pixels of a foveated grayscale display device 162 as in FIG. 1B, in accordance with some embodiments.
  • FIG. 3A is a flow diagram of a method or process for writing a modulation plane of foveated data into the pixel array of a foveated modulation display device 130 of FIG. 1A during a center-detail mode of operation, wherein four zones exist in accordance with some embodiments.
  • FIG. 3B is a flow diagram of a method or process for writing a modulation plane of foveated data into the pixel array of a foveated modulation display device 130 of FIG. 1A during a periphery-detail mode of operation, wherein four zones exist in accordance with some embodiments.
  • FIG. 4A is a multiple level block diagram of the expansion of the foveation data in the method of FIG. 3A, showing contents of a foveation data block and the contents of a row buffer, in accordance with some embodiments.
  • FIG. 4B is a multiple level block diagram of the continuation of the expansion of the foveation data of FIG. 4A, in accordance with some embodiments.
  • FIG. 4C is a multiple step diagram of the continuation of the expansion of the macropixel data of FIG. 4A.
  • FIG. 5 illustrates a timing diagram of a Zone-order frame or plane, showing the zone buffer write and read sequences according to one embodiment of the present disclosure.
  • FIG. 6 is an illustration showing an exemplary computing device, which may implement some of the embodiments described herein.
  • FIG. 7 illustrates a timing diagram of transport illumination, showing color sequential images and planes with respect to buffer type, in accordance with some embodiments.
  • FIG. 8A illustrates a data format of foveated image in host memory, in accordance with some embodiments.
  • FIG. 8B illustrates a data format for image frames or modulation planes, where the data is sent via Zone-order with zone pad, in accordance with some embodiments.
  • FIG. 8C illustrates a data format for image frames or modulation planes, where the data is sent via Zone-order with row pad, in accordance with some embodiments.
  • FIG. 8D illustrates a data format for image frames or modulation planes, where the data is sent via Raster-order and zone-set-order with per-zone and line-set row-timing padding, in accordance with some embodiments.
  • FIG. 8E illustrates a data format for image frames or modulation planes, where the data is sent via Raster-order and JIT-order with per-zone and line-set row-timing padding, in accordance with some embodiments.
  • FIG. 9A illustrates the physical layout of column multi-driver configurations, in accordance with some embodiments.
  • FIG. 9B illustrates the number of row write times required for the multi-driver arrangements of FIG. 9A for certain line-set conditions and simultaneous groupings, in accordance with some embodiments.
  • FIG. 10 illustrates the diameter, visual field width and cone density of various regions of the human eye relative to the fovea, as available from public sources.
  • a system for foveated display is provided.
  • Foveated rendering is a type of image processing or image generation that takes advantage of the eye’s exponential reduction in acuity/resolution moving outward from the center of the retina (a user’s gaze direction) to the outer periphery of an image by providing a plurality of display zones [e.g., Zone 0 (Z0), Zone 1 (Z1), Zone 2 (Z2), Zone 3 (Z3), and the like].
  • the system of foveated display described herein provides a method, system, and protocol of encapsulation of a foveated image frame and an innovative way of processing a foveated write for displaying the foveated image upon a display device.
  • a system may include a processor that couples to receive input image data and foveation zone definition data to generate a rendered foveated image, which is then processed using a selected protocol, in accordance with the present invention, into a foveated image frame having both image data and header packet data, which identifies two or more zones of differing resolution.
  • Each zone may be defined by a plurality of macropixels, having corresponding macropixel ratios.
  • the first zone Z0 may have horizontal and vertical ratios of 1 to 1, while the second and third zones (Z1 and Z2) may have respective macropixel ratios of 1 to 2 and 1 to 4 (where a Z1 macropixel is a 2x2 matrix of display pixels and a Z2 macropixel is a 4x4 matrix of display pixels).
  • the system may further include a driver controller circuit that couples to receive the foveated image frame to generate foveated bit plane data and convert the bit plane data into modulation planes.
  • One or more modulation devices, e.g., a display or a liquid crystal-on-silicon (LCoS) display, may couple to receive the modulation planes to generate an expanded dataset, which is written to the pixel array such that the foveated image is produced upon a display.
  • a single bit of an associated plurality of macropixels may be copied to a set of display pixels based upon the corresponding macropixel ratios associated with the zone.
  • a method and protocol of foveated display is provided. The method may include receiving image data relating to the image and foveation zones.
  • a processor may receive image input data and tracking data, based upon retina and/or head location of a user in real-time.
  • the method may further include generating a foveated image frame based upon the image data, wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is compressed, being defined by a plurality of macropixels and a corresponding macropixel ratio.
  • the method may further include transmitting the foveated image frame to one or more modulation devices having raster logic coupled to a display circuit including an array of pixels.
  • the foveated image frame having header packet data may be sent to a driver controller circuit that generates foveated bit plane data and converts the same into modulation planes based upon a modulation scheme and header packet data.
  • the modulation planes may be sent to the one or more modulation devices.
  • the method may include outputting the foveated image to the array of display pixels based upon the header packet data and each corresponding macropixel ratio using the raster logic, wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratio associated with each zone.
  • This system and method of foveated display offers two protocol approaches for Foveated Transport: Zone-Order and Raster-Order. Each approach can have multiple formats to define specific protocol standards (more detail presented below).
  • This system and method of foveated display uses two protocol stages: an image frame interface (per-pixel data between the host and the display driver; e.g., 24 bits/pixel) and a modulation plane interface (discrete data between the driver and the display; e.g., 1 bit/pixel).
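The difference between the two stages is easiest to see in raw numbers. A back-of-the-envelope sketch, in which the 2K x 2K array size and the 24-plane modulation scheme are assumed figures for illustration:

```python
# Illustrative arithmetic only; the 24-bit pixel depth matches the text
# above, while the array size and plane count are assumed examples.
pixels = 2048 * 2048                  # display pixels in a 2K x 2K array

frame_bits = pixels * 24              # image frame interface: 24 bits/pixel
plane_bits = pixels * 1               # modulation plane interface: 1 bit/pixel
planes_per_frame = 24                 # assumed modulation scheme

# The driver-to-display link moves single-bit planes; over an assumed
# 24-plane frame the totals match, but each transfer is 1/24 the size.
total_plane_bits = planes_per_frame * plane_bits
```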
  • the system of foveated display disclosed herein applies the protocols to two applications: center-detail and periphery-detail. Further, this system of foveated display provides multiple foveated write embodiments: single-column, dual-column, quad-column, and the like.
  • this system and method of foveated display proposes changing the display device to accept a novel foveated protocol; and thus, realize the savings of reduced transmit bandwidth and reduced write time to the display device’s pixel array.
  • system and method of foveated display described herein enables higher video rates, higher bit depths and/or higher display resolutions.
  • the system and method of foveated display proposes a few discrete constraints for the foveated processing to match or ease the implementation in the display device.
  • this system and method of foveated display improves upon existing systems by keeping the zone shapes in rectangular regions, matched to the physical array structure of the display device’s pixels (projection onto the visual field may make these regions non-rectangular/non-linear due to optical characteristics).
  • the foveated image processing method of the present invention typically results in a reduction of transmitted image or image frame data to one-sixth to one-tenth (1/6 to 1/10) of the total display pixels.
  • a 2Kx2K display with 4 Megapixels may only need 0.5 Megapixels of transmitted foveated data.
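The 1/6-to-1/10 figure can be sanity-checked with a worked example; the zone sizes below are illustrative assumptions, not dimensions given in the disclosure:

```python
# A 2048x2048 display with three concentric zones (assumed sizes):
# Z0 512x512 at 1-to-1, a Z1 ring out to 1024x1024 at 2x2 macropixels,
# and a Z2 ring covering the rest at 4x4 macropixels.
FULL = 2048 * 2048                       # 4,194,304 display pixels

z0 = 512 * 512                           # 262,144 macropixels (1x1)
z1 = (1024 * 1024 - 512 * 512) // 4      # 196,608 macropixels (2x2)
z2 = (2048 * 2048 - 1024 * 1024) // 16   # 196,608 macropixels (4x4)

transmitted = z0 + z1 + z2               # 655,360 macropixels
reduction = FULL / transmitted           # 6.4x, inside the 1/6-1/10 range
```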
  • the present invention also relates to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • Referring to FIG. 1A, a system diagram of a foveated electromagnetic radiation modulation system, having foveated light modulation, in accordance with some embodiments is shown.
  • Foveated rendering is a type of image processing or image generation that takes advantage of the eye’s exponential reduction in acuity/resolution moving outward from the center of the retina (a user’s gaze direction) to the outer periphery of an image by providing a plurality of display zones (e.g., zones Z0-Z3).
  • the zones may be rectangular in shape. However, it should be understood by one of ordinary skill in the art that the shape of the zones may vary.
  • the system of foveated display described herein provides a method and protocol of encapsulation of a foveated image frame and a way of processing a foveated write for displaying the foveated image upon a display device.
  • the foveated electromagnetic radiation modulation system 100 (e.g., a foveated light modulation system) includes processor circuitry 110, display driver circuitry 120, and foveated modulation device circuitry 130.
  • The processor circuitry 110, having a zone definition module 111, a foveated rendering module 112, foveated image memory 114 and image protocol encode logic 115, is generally configured to receive input from one or more input data sources 105 to generate a foveated image frame 103, having header packet data that defines two or more concentric zones of differing resolution, wherein each zone is defined by a plurality of macropixels and corresponding macropixel ratios: one ratio for the horizontal direction and another for the vertical direction. In an embodiment of the present invention, each macropixel ratio is an integer.
  • the input from the one or more input data sources 105 includes one or more images and associated foveated zone data.
  • the term “foveated image” as used herein, generally means an image or video frame that is divided into two or more resolution zones.
  • the processor 110 may be included within a host computer (not shown), whereby the host computer sends the foveated image frame 103 to the driver controller circuitry 120 associated with the display 146.
  • the driver controller 120 is generally configured to control modulation device circuitry 130 such that the foveated image is output to the display 146 based on input data 105 (e.g., image data).
  • the driver circuitry 120 and/or processor circuitry 110 may also include other known and/or proprietary circuitry and/or logic structures, including for example, frame buffer memory/cache, timing circuitry, vertical/horizontal scan line circuitry, processor circuitry, and the like.
  • the foveated image stored in memory 114 (for example a projected foveated image or a direct view foveated image), which is output to the display and/or produced or rendered upon display 146, may include a plurality of resolution zones Z0, Z1, Z2, Z3, ... ZN, where each zone has a differing resolution.
  • the zones may be generated using a center-detail mode, in accordance with the present invention, where the zone having the highest resolution, Z0, tracks the user’s gaze direction and the other zones (Z1, Z2, Z3 ... ZN) have a lower resolution than zone Z0, such that the resolution of each zone decreases in descending order away from the user’s fixation point. That is, the resolution of the second zone Z1 is lower than that of zone Z0; the resolution of the third zone Z2 is lower than that of zone Z1; the resolution of the fourth zone Z3 is lower than that of zone Z2; and the like.
  • the foregoing description of four zones is provided only as an example, and the numbering of the zones and/or the size of each zone relative to the adjacent zone may vary.
  • the teachings of the present disclosure may be equally applied to a system having N number of foveated image zones.
  • the display 146 in accordance with the present system and method of foveated display may be an amplitude and/or phase display.
  • Applications for the foveated light modulation system 100 of the present disclosure may generally include, for example, target applications such as holography for heads up displays (HUDs), head-mounted displays (HMDs) for augmented reality (AR), mixed reality (MR) or virtual reality (VR), etc.
  • the data in a macropixel is no different than the data in a pixel.
  • when the foveated rendering module 112 creates pixels and puts them in an image memory, they may be, for example, 24 bits (e.g., full color bits).
  • Macropixels correspond to multiple display pixels, and the image rendering process creates pixels that can also be referred to as macropixels.
  • the encoding module 115 does not create the macropixels, but rather rearranges them to an order indicated by a selected protocol or a particular protocol.
  • Modulation plane pixels are one single bit or correspond to one single bit.
  • the zones may be rectangular in shape, as correlates with the row-column structure of the display’s pixel array.
  • shape of the zones may vary according to the structure and features of the foveated display to facilitate copying of the data according to the macropixel ratios.
  • macropixel ratios in both the horizontal and vertical dimensions are integers that allow the copying of macropixel data to whole and/or individual display pixels. It would be understood by one skilled in the art that grayscale devices may allow scaling of pixel data or otherwise filtering/processing pixel values as the macropixel value is applied or written to multiple or other display pixels.
  • the foveated image frame 103 generated by the processor circuitry 110 generally includes foveation zones and may be encapsulated using known and/or proprietary image transport protocols (e.g., Display Serial Interface (DSI) from the Mobile Industry Processor Interface (MIPI) Alliance, High-Definition Multimedia Interface (HDMI), DisplayPort, and the like).
  • the processor circuitry 110 may embed header and/or command information (e.g., mode, action and/or format selection information) into the foveated image frame 103.
  • the header information may include, for example, the number of foveation zones being used, the resolution of each zone, the size and location of the foveation zones, data order, packing format, expected display capabilities, etc.
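A header carrying those fields might look like the sketch below; every field name here is hypothetical, chosen to mirror the kinds of information the text lists (zone count, per-zone size and location, ratios, data order, packing format):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ZoneDef:
    x: int          # left edge of the rectangular zone, in display pixels
    y: int          # top edge
    width: int      # zone width, in display pixels
    height: int     # zone height
    h_ratio: int    # horizontal macropixel ratio (integer, e.g. 1, 2, 4)
    v_ratio: int    # vertical macropixel ratio

@dataclass
class FoveatedHeader:
    num_zones: int
    zones: List[ZoneDef]
    data_order: str = "zone"     # "zone" or "raster", the two transports
    packing: str = "zone-pad"    # padding/packing format variant

# A two-zone center-detail frame: a 512x512 Z0 region inside a full-array
# low-resolution zone.
header = FoveatedHeader(
    num_zones=2,
    zones=[ZoneDef(768, 768, 512, 512, 1, 1),
           ZoneDef(0, 0, 2048, 2048, 4, 4)])
```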
  • the size and position of each zone may be selected by the processor circuitry 110 and may be based on, for example, the array size and other properties of the modulation device circuitry 130, the optical characteristics of the system, tracking logic 107, rendering algorithms, operating environment, and the like.
  • the processor circuitry 110 may generate a foveated image frame 103 based on one or more input data source(s) 105.
  • the image data source(s) may include, for example, image sensors (e.g., camera devices) to capture environmental image data, image overlay data, and the like.
  • the input data source(s) 105 may include, for example, a plurality of image sensors to capture image data having different resolutions.
  • the foveated image frame 103 may include three zones: a first zone Z0, a second zone Z1 and a third zone Z2.
  • the first zone Z0 may have the highest resolution (e.g., a 1-to-1 correspondence of foveated image data to display pixels)
  • the second zone Z1 may have a lower resolution than the first zone, for example, a 4-to-1 pixel resolution having 1/4 the resolution of the first zone (1/2 in horizontal and 1/2 in vertical)
  • the third zone Z2 may have a lower resolution than the second zone, for example, a 16-to-1 pixel resolution having 1/16th the resolution of the first zone (1/4 in horizontal and 1/4 in vertical).
  • the zones may be defined as a binary multiple of the highest resolution zone (e.g., 2-to-1, 4-to-1, 16-to-1, and the like).
  • the processor circuitry 110 may refresh the zones of the encoded foveated image frame 103 on a frame-by-frame basis, sub-frame basis, and/or a predefined basis, such as for example, every other frame.
  • the processor 110 may couple to receive user retina and/or head tracking data from tracking logic module 107, having tracking software and electrical, electronic, and/or mechanical components, whether in real-time or stored.
  • the zone definition module 111 may use system optical parameters along with the data from tracking logic 107 which may sense the retina position of the user and generate the foveation zone parameter data corresponding to the sensed retina position.
  • the foveated rendering module 112 may couple to the tracking logic to receive a user’s fixation point based upon retina gaze; wherein the foveated rendering module can calculate the size and location of each zone using a foveated rendering algorithm based upon one or more of the parameters: total field-of-view, optical system distortion, fovea acuity, tolerance of tracking logic, latency of tracking logic, rate of motion, and the like.
  • tracking logic 107 may be included with system 100 to define the location of each zone within an image frame, where the tracking logic 107 is configured to track and locate a position of an eye and/or head. It should be understood by one of ordinary skill in the art that any logic in accordance with the present invention may be implemented via electrical, electronic, and/or mechanical components.
  • the driver controller circuitry 120 may include memory 122, a first conversion unit 124, and a second conversion unit 126.
  • the first conversion unit 124 may couple to receive the foveated image frame and, in response, generate foveated bit plane data based upon the foveated image frame.
  • the second conversion unit 126 may couple to receive the foveated bit plane data and, in response, generate modulation planes 127 based upon the foveated bit plane data and an associated modulation scheme.
  • memory 122 can store the foveated image frame, the foveated bit plane data, or the modulation planes 127.
  • the one or more modulation devices 130, each having an array of display pixels 144, may couple to receive the modulation planes 127 and output the foveated image upon the display 146.
  • the one or more modulation devices 130 may expand each line-set of a modulation plane based upon the header packet data; wherein, for zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratio associated with the zone.
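A minimal sketch of that expansion step (the function name and list representation are illustrative, not the device's raster logic): each macropixel bit is copied across an mpix-wide, mpix-tall block of display pixels.

```python
def expand_macropixels(bits, mpix, out_width):
    """bits: one row of macropixel values for a zone; mpix: macropixel edge
    length. Returns mpix identical display rows, with each bit copied to an
    mpix-wide run of display pixels (horizontal then vertical replication)."""
    row = []
    for b in bits:
        row.extend([b] * mpix)            # horizontal replication
    row = row[:out_width]                 # clip to the zone's pixel width
    return [row[:] for _ in range(mpix)]  # vertical replication

# A 4-to-1 zone (2x2 macropixels): 3 macropixel bits drive a 6x2 pixel block.
rows = expand_macropixels([1, 0, 1], mpix=2, out_width=6)
```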
  • the modulation device circuitry 130 may include, for example, protocol decode logic 133, display circuitry 140 (e.g., an LCoS display device, panel, display panel, or spatial light modulator), raster logic 150, and memory 132.
  • the decode logic 133 couples to receive the modulation planes from driver controller circuitry 120 to parse the header packet data and raster logic 150 generates expanded datasets based upon controls from the decode logic 133, which may include the header packet data and corresponding macropixel ratios.
  • the raster logic 150 may include a row buffer 156 for holding each row of each line-set during the decoding stage of operations; and a row queue 158 for holding the expanded dataset to be written to the pixel array 144 resulting in an effect displayed upon display 146.
  • the raster logic 150 may further include either a line-set gather circuitry 152 or direct write logic 154.
  • the display circuitry 140 may couple to receive the expanded dataset and, in response, to display the foveated image upon display 146.
  • display circuitry 140 may include a control unit 142 that couples to receive the expanded dataset and generate a plurality of respective binary values to be applied upon pixel array circuitry 144, wherein the plurality of respective binary values control the amplitude and phase of electromagnetic radiation propagating through each pixel.
  • the display circuitry 140 may include, for example, liquid crystal on silicon (LCoS) display circuitry (not shown) such as those provided by Compound Photonics.
  • the display circuitry 140 may include phase-type and/or amplitude-type, depending on what is required for a given application.
  • the line-set gather circuitry 152 is further detailed in the method of FIG. 3 A and FIG. 3B as part of the memory read from zone buffer data or immediately from received raster order data.
  • the display device may provide direct write logic 154 to enable writing a portion of the row corresponding to one zone area without affecting the other portions of the row. This feature allows directly writing each zone data in zone order without having to gather macropixel data from different zones in line-sets. However, it uses multiple row write times to write all portions of the row, which may be a limiting factor of that embodiment.
  • the header packet data may include a resolution-order toggle bit enabling a center-detail mode and a periphery-detail mode.
  • the foveated image frame includes a plurality of concentric zones (i.e., zones of any shape that share the same center) having a zone of highest resolution centered at a user fixation point, whereby the resolution of the adjacent zone is lower than full resolution by a predetermined value, and the resolution of each zone decreases in descending order away from the user fixation point.
  • the foveated image frame includes a plurality of concentric zones having a zone of highest resolution located at a periphery of the plurality of concentric zones, whereby resolution of the adjacent second interior zone is lower than full resolution by a predetermined value, and the resolution of each concentric interior zone decreases in descending order.
  • the header packet data may further include a transmission-mode toggle bit enabling a raster-order mode and a zone-order mode.
  • data transmission comprises a plurality of line-sets representing rows of data from the plurality of concentric zones corresponding with a display order of the original image.
  • data transmission comprises a sending of each one of the plurality of concentric zones in its entirety before data for the next zone.
  • the header packet data may further include a zone number segment defining the number of the plurality of concentric zones; a zone-size segment defining horizontal and vertical size of each one of the plurality of concentric zones; a zone-offset segment defining horizontal and vertical offset associated with each one of the plurality of concentric zones; and a plurality of display parameters.
  • the plurality of display parameters may include a word-size segment defining a plurality of pixels-bits transferred relative to a clock cycle associated with the driver controller circuit. Further, the display parameters may include an x-offset size segment defining a plurality of pixels-bits per horizontal offset Least Significant Bit (LSB) and a line-set size segment defining a maximum number of rows to be simultaneously written.
  • the display parameters may include a row-time segment defining a plurality of clocking segments required to write a row and a dual column-drive mode indicator enabling simultaneous writing of two rows.
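Collecting the header packet fields described above in one place, the sketch below models them as a plain data structure (the field names, types, and grouping are shorthand assumptions, not the protocol's wire format):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ZoneDescriptor:
    h_size: int    # horizontal zone size, display pixels
    v_size: int    # vertical zone size, display pixels
    h_offset: int  # horizontal offset from the display origin (upper-left)
    v_offset: int  # vertical offset from the display origin
    h_mpix: int    # macropixel width (H-mpix)
    v_mpix: int    # macropixel height (V-mpix)

@dataclass
class FoveatedHeader:
    center_detail: bool          # resolution-order toggle bit
    raster_order: bool           # transmission-mode toggle bit
    zones: List[ZoneDescriptor]  # zone-number segment = len(zones)
    word_size: int               # pixels-bits transferred per clock cycle
    x_offset_size: int           # pixels-bits per horizontal-offset LSB
    line_set_size: int           # max rows written simultaneously
    row_time: int                # clocking segments required to write a row
    dual_column_drive: bool      # simultaneous writing of two rows

# Hypothetical two-zone header: a 512x512 full-resolution zone offset into a
# 2048x2048 display-sized zone of 4x4 macropixels.
hdr = FoveatedHeader(
    center_detail=True, raster_order=True,
    zones=[ZoneDescriptor(512, 512, 768, 768, 1, 1),
           ZoneDescriptor(2048, 2048, 0, 0, 4, 4)],
    word_size=128, x_offset_size=64, line_set_size=4,
    row_time=16, dual_column_drive=False)
```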
  • at least one of the zones has a different center from at least one of the other zones.
  • the system for foveated display 100 may further include foveated display protocol logic to encode and decode the foveated image frame and the foveated modulation plane data, whether defined in software or hardware, including: image protocol encode 115, image protocol decode 123, plane protocol encode 125, and plane protocol decode 133.
  • processor 110 may include image protocol encode 115.
  • Driver controller 120 may include image protocol decode 123, plane protocol encode 125.
  • the one or more modulation devices 130 may include the plane protocol decode 133.
  • the processor 110 couples to receive image data relating to the image and foveation zones.
  • processor 110 may receive image input data from the one or more input data sources 105; and tracking data from tracking logic 107, based upon retina and/or head location of a user in real-time.
  • processor 110 may couple to receive foveation data from one of the input data sources 105.
  • the processor 110 may generate a foveated image frame based upon the image data, wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is compressed, being defined by a plurality of macropixels and a corresponding macropixel ratio.
  • the processor 110 may transmit the foveated image frame to one or more modulation devices 130 having raster logic 150 coupled to a display circuit 140 including an array of pixels 144.
  • the foveated image frame having header packet data may be sent to the driver controller circuit 120 that generates foveated bit plane data using conversion unit 124; and converts the same into modulation planes based upon a modulation scheme and header packet data using conversion unit 126.
  • the driver controller circuit 120 may send the modulation planes 127 to the one or more modulation devices 130.
  • the one or more modulation devices 130 may output the foveated image to the display or array of display pixels by decoding and expanding each modulation plane based upon the header packet data and each corresponding macropixel ratio using the raster logic 150.
  • the system and method of foveated display may comprise a center-detail mode and a periphery-detail mode.
  • zone 0 possesses the highest resolution, at the center of gaze, where the other concentric zones have a differing resolution.
  • zone Z1 surrounds zone Z0 and only applies to pixels outside of zone Z0.
  • Zone Z1 possesses a resolution that is lower than zone Z0.
  • Zone Z2 surrounds zone Z1 and possesses a resolution that is lower than zone Z1.
  • Zone Z3 surrounds zone Z2 and possesses a resolution that is lower than zone Z2.
  • zone 0 is still the highest resolution but may be sized to the length and width of the entire display.
  • the concentric zones within zone 0 possess a resolution that is lower than zone 0.
  • zone Z1 possesses a resolution that is lower than zone Z0; zone Z2 possesses a resolution that is lower than zone Z1; and zone Z3 possesses a resolution that is lower than zone Z2.
  • the foveated rendering module 112 may determine the size and location of each zone according to its algorithm, which may include factors of total field-of-view, optical system distortion, fovea acuity, tracking-logic tolerance and latency, rate of motion, and the like.
  • the one or more input data sources 105 or the tracking logic may provide the processor 110 with the size and location of each zone.
  • Periphery-detail modes may be more applicable to displays using interference light steering, which do not follow gaze tracking. Consequently, the foveation protocol, method, and system in accordance with the present invention reduce the data bandwidth.
  • in the center-detail mode, when the gaze moves toward the edge of the display, the edges of the zones may align such that zone Z0 has "0" offset from the edge.
  • zone Z3 would not be used if it was the same size and offset as zone Z2.
  • the largest zone may be the same size as the display and thus its offsets would be 0. However, if the display supports partial display mode, the largest zone may be smaller than the display and a fill value would be used for the remaining regions of the display.
  • in periphery-detail mode, the sizes and locations of the zones may be constant, but they are not required to be.
  • the foveated protocol described herein can allow either dynamic or static sizes and offsets. Applications using periphery mode may not be able to use the large macropixels of zone Z3, but the concept and method described herein do allow it. Yet, this capability is dependent upon the display device's capability to support it. If zone Z2 or Z3 had a size of 0x0, it would be the same as saying that the zone is not used. In at least one embodiment implementing periphery-detail mode, it would be an error for a higher-numbered zone to extend outside of a lower-numbered zone.
  • in zone-order mode, the data for each zone is sent in its entirety before data for the next zone.
  • Zone Z3 data, if used, is sent first, while zone Z0 data is sent last.
  • Each zone’s data is sent in raster line order: horizontal left to right, then vertical top to bottom. If a raster’s data is less than an integer multiple of words and row padding is enabled, it is padded to fill the last word; wherein, each raster starts on a word boundary. For rasters that cross a void area of another zone, the data on either side of the zone is packed together. Padding is only added at the end of each raster.
  • Each line-set starts with its zone Z3 data, then the 1st row of zone Z2 data, followed by the first row of each remaining zone (Z1, Z0). Following these rows are the consecutive second, third, etc. rows of each zone pursuant to the formatted order within the line-set. Rows of data are added as each raster moves down the display rows, with the highest-numbered zone's data always first. Padding is added at the end of each zone's data to make it an integer number of words, so each new row and zone of data starts on a word boundary. Additional padding at the end of each line-set may need to be added to meet the display's minimum row-timing requirement.
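The interleaving just described might be sketched as follows (simplified and hypothetical: each zone contributes a pre-split list of row payloads per line-set, and padding is omitted):

```python
def line_set_order(zone_rows):
    """zone_rows: dict mapping zone number -> list of row payloads within one
    line-set. Yields (zone, payload) with the highest-numbered zone's data
    always first, then each zone's consecutive 2nd, 3rd, ... rows."""
    depth = max(len(rows) for rows in zone_rows.values())
    for i in range(depth):                         # 1st rows, then 2nd rows...
        for z in sorted(zone_rows, reverse=True):  # e.g., Z3, Z2, Z1, Z0
            if i < len(zone_rows[z]):
                yield (z, zone_rows[z][i])

# One line-set crossing zones Z3, Z2, and Z0 (Z0 contributes more rows
# because its macropixels are smaller).
order = list(line_set_order({0: ["Z0r1", "Z0r2"], 2: ["Z2r1"], 3: ["Z3r1"]}))
# -> [(3, 'Z3r1'), (2, 'Z2r1'), (0, 'Z0r1'), (0, 'Z0r2')]
```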
  • Raster-order mode may provide lower latency and minimal plane storage, compared to zone-order mode.
  • Zone-order may be simpler to implement and use less padding, but more buffer space and latency may be incurred. For example, if the zone Z0 region is at the top of the display, the first row cannot be written until all of zones Z3, Z2, and Z1 are received. Even then, the zone-order implementation may not be able to keep up with the data rate of zone Z0.
  • Zone-order control can still calculate the total time to write the display array using sizes and row-times (see below) and not send planes closer together than that. If the display does not have dual buffers for the zone data, then additional time between planes may be needed to prevent overlap.
  • capabilities/constraints may include: word-size, x-offset-size, line-set-size, row-time, and dual-column-drive.
  • Word-size is the number of pixels-bits transferred with each clock of the interface (i.e., a 64-signal DDR bus would send 128-bit words). This may be the granularity of the horizontal zone size.
  • X-offset-size is the number of pixels-bits per horizontal offset Least Significant Bit (LSB), also referred to as the step-size of X-offsets.
  • the offset size may be one half of the word-size to allow centering of a zone.
  • Line-set-size represents the max number of rows that can be written at the same time.
  • the row-time parameter represents the minimum number of words per row. This is the number of clocks that the display requires to write a row or a multi-row. A line-set that only crosses the highest zone (with a matching line-set-size), will only use one row-time to finish the line-set. Thus, the packet of data needs to be padded to that number of words if the X-resolution would require fewer words. If the line-set crosses 3 zones, then it will require 4 row-times to write the line-set. Thereby, the packet of data needs to be padded to 4 times the row-time (number of words), if the active data would be fewer words.
  • the row-time may be unique per zone level. Using staggered row pulses for high simultaneous row counts can implement this feature. Only the row-time for zone Z3 data can be longer than the row-time for zone Z0 data.
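A small sketch of that padding rule (the row-time counts come straight from the examples above; the helper name and formula framing are assumptions):

```python
def padded_words(active_words, row_times_needed, row_time):
    """Pad a line-set's word count up to the number of row-times the display
    needs to write it (row_time = minimum words/clocks per row write)."""
    return max(active_words, row_times_needed * row_time)

# A line-set crossing only the highest zone uses one row-time; one crossing
# three zones requires four row-times, so with a 16-word row-time the packet
# carries at least 64 words even if the active data is fewer.
single_zone = padded_words(10, 1, 16)  # padded to 16 words
three_zone = padded_words(40, 4, 16)   # padded to 64 words
long_row = padded_words(70, 4, 16)     # active data already exceeds minimum
```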
  • each modulation-plane may include a header that includes: (a) the modes used, the number of zones; (b) the size of each zone in both x and y; (c) the offset of each zone in both x and y; and (d) display parameters/constraints used. Following the header packet data, the modulation-plane data is included, which may
  • display circuitry 140 may generally include an array (X-Y) of individually addressable (controllable) pixel elements (where each pixel is formed at least in part from a liquid crystal material or substance) through electrodes formed on a semiconductor material.
  • the array size may be generally considered an upper limit to the resolution of a foveated image 114, while a foveated image will have two or more zones, where at least one zone has a resolution that is less than the maximum resolution of the displayed image.
  • the array size may be, for example: 2048x2048, 4096x4096 or 6144x6144 pixel elements.
  • the foveation zone sizes may be, for example: 512x512(1x1), 1024x1024(2x2), 1536x1536(3x3), 2048x2048(4x4), 3072x3072(6x6), etc.
  • the corresponding macropixel array size is 512x512.
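The listed zone sizes are each consistent with a single 512x512 macropixel array; an illustrative arithmetic check (zone/macropixel pairings inferred from the parentheticals above):

```python
# (zone edge length, macropixel edge) pairs from the sizes listed above;
# dividing each zone size by its macropixel dimensions yields the same
# 512x512 macropixel array in every case.
zone_configs = [(512, 1), (1024, 2), (1536, 3), (2048, 4), (3072, 6)]

macropixel_arrays = {(size // mpix, size // mpix) for size, mpix in zone_configs}
# -> {(512, 512)}
```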
  • the control of a pixel may include controlling the amplitude and/or delay (i.e., the phase) of electromagnetic radiation (e.g., light) propagating through a pixel (e.g., transmissive and/or reflective propagation); and thus may control, for example, the nature of the displayed foveated image 146.
  • the modulation device circuitry 130 may be configured to receive electromagnetic radiation (e.g., light, such as laser light) and cause a phase shift of the electromagnetic radiation to generate a desired result.
  • the system 100 may include a plurality of modulation devices 130 that can generate, for example, a color foveated image 146 or non-colored foveated image 165 (to be described further in detail with reference to FIG. 1B).
  • each modulation device 130 may be configured to control a color saturation of the projected image, for example, a system that includes three modulation devices to separately control red, green, and blue (RGB) color saturation of the projected image 146.
  • a single modulation device may be used, for example, to generate a monochrome projected image or a color sequential image.
  • Each resolution of each zone of the foveated image frame 103 may be defined by a unique macropixel.
  • A “macropixel,” as used herein, generally means a grouping of two or more pixels/physical pixels, which are identically controlled by the modulation device circuitry 130 to generate a portion of an image that has a defined resolution. For example, if a first zone Z0 is defined as a zone having the highest resolution of the modulation device circuitry 130, the macropixel for the first zone Z0 may be defined as 1x1 (a 1-to-1 correspondence)
  • each pixel of the first zone Z0 of the foveated image frame 103 corresponds to a single physical pixel of the modulation device circuitry 130.
  • the macropixel of the second zone Z1 may be defined as 2x2, meaning that one macropixel of the reduced-resolution second zone Z1 corresponds to 4 physical pixels (2 physical pixels x 2 physical pixels) of the modulation device circuitry 130, and so on for other defined zones.
  • since the encoded foveated image frame 103 includes a plurality of lower-resolution zones, the overall data size of the foveated image frame 103 will be substantially less than that of a full-resolution image frame. As a result, the bandwidth requirements of a communications interface between the processor circuitry 110 and the driver controller circuitry 120 are reduced, in addition to the reduction in memory/buffer size to store foveated image frame data.
  • the driver controller circuitry 120 is generally configured to receive the foveated image frame data 103 from the processor circuitry 110 and generate foveated bit plane data using the conversion unit 124.
  • Bit plane data may include header information (similar to header information in the foveated image frame 103), an array of binary values for macropixels in each foveated zone (to control the electrodes of corresponding display pixels), and/or pad data between macropixel data, as will be described in further detail below.
  • a number of bit planes are sequentially generated for each frame.
  • the binary values of the bit planes may be generated via the use of, for example, pulse width modulation (PWM) and/or pulse frequency modulation (PFM) techniques as controlled by functions loaded in the driver control circuitry 120.
  • the series of binary values is based on the saturation value of each pixel of the foveated image frame 103 along with the selected modulation technique.
  • the bit plane generated is an array of binary values, each binary value corresponding to a pixel or macropixel.
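As one concrete, simplified way to picture this, the sketch below splits 8-bit saturation values into binary-weighted bit planes; the actual modulation scheme may instead be PWM/PFM as noted above, so this is illustrative only.

```python
def to_bit_planes(frame, depth=8):
    """frame: 2-D list of saturation values. Returns depth planes, LSB first;
    planes[k][y][x] is bit k of frame[y][x] (one binary value per pixel or
    macropixel, matching the frame's resolution)."""
    return [[[(v >> k) & 1 for v in row] for row in frame]
            for k in range(depth)]

planes = to_bit_planes([[0, 255], [5, 128]])
# planes[7] (the MSB plane) is [[0, 1], [0, 1]]
```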
  • each binary value corresponds to a physical pixel of the modulation device circuitry 130.
  • each binary value corresponds to a macropixel, which controls a collection of physical pixels in the modulation device circuitry 130 that are defined by the macropixel size.
  • the bandwidth and data throughput requirements of a communication interface between the drive controller circuitry 120 and the modulation device circuitry 130 are substantially reduced for the system and method of foveated display as described herein.
  • the driver controller circuitry 120 may generate a header having one or more defined fields to define the foveated zone size/position, macropixel size, and the like.
  • the header may be formed as the first line or lines of the first modulation plane per frame/sub-frame set of modulation planes or every modulation plane.
  • the header packet data defines the number of zones and the size and location of each of the zones.
  • the last zone may be the same size as the display. In these cases, no offset exists for this zone, and its location (offset from the origin, i.e., the upper-left (UL) corner) will be (0,0).
  • the largest zone may be smaller than the whole display, thus allowing the offsets to be non-zero.
  • the display logic may fill the surrounding pixels with a predetermined value.
  • H-Offsets are a multiple of H-step-size
  • V-sizes and offsets are a multiple of line-set-size (typically 4 display rows)
  • all offsets are relative to the display origin (i.e., the upper-left corner)
  • H-mpix and V-mpix are the sizes of the macropixel for that zone in the horizontal and vertical directions (allowing for non-square macropixels if desired). All fields are in units of display device pixels.
  • the display driver controller 120 is configured to transmit data to the display 146.
  • Each word is a horizontal line of bits from one zone; they are all of the same resolution, on the same row.
  • the data can be sent in different orders and formats, depending on what is supported in the driver and display device.
  • the header may also include the following fields:
  • An exemplary configuration for the system of foveated display may include a foveated image having three zones and 640 Kbits per modulation plane, covering a 4-megapixel display; the following tables represent the associated header packet data.
  • zone Z2, which includes a right and a left segment of data (Z2A and Z2B), each having four words (whereby each word is 128 bits).
  • rows of zone Z1, which each include a right and left segment (Z1Ar1, Z1Ar2, Z1Br1, Z1Br2) of two words.
  • zone Z0 data Z0r1, Z0r2, Z0r3, Z0r4 of four words.
  • a line-set is displayed that represents data through zones 2 and 3 only. As shown, there is one row of zone Z2, which includes a right and a left segment of data (Z2A and Z2B), each having four words (whereby each word is 128 bits). Additionally, there are two rows of zone Z1 (Z1r1, Z1r2) of eight words.
  • a line-set is displayed that represents data through zone 3 only. As shown, there is one row of zone Z2 (Z2r1) of sixteen words (whereby each word is 128 bits).
  • the components of exemplary operating environment 100 are exemplary, and more or fewer components may be present in various configurations. It is appreciated that the operating environment may be part of a distributed computing environment, a cloud computing environment, a client-server environment, and the like.
  • Referring to FIG. 1B, a system diagram of a foveated electromagnetic radiation modulation system, having grayscale device circuitry, in accordance with some embodiments is shown. Similar to the foveated electromagnetic radiation modulation system 100 of FIG. 1A, the foveated display system 160 of FIG. 1B may include a processor 110, grayscale device circuitry 162, and a display 165.
  • the processor circuitry 110, having a zone definition module 111, a foveated rendering module 112, a foveated image memory 114 and an image protocol encode module 115, is generally configured to receive input from one or more input data sources 105 to generate a foveated image frame 103, having header packet data that defines two or more concentric zones of differing resolution, wherein each zone is compressed, being defined by a plurality of macropixels and a corresponding macropixel ratio.
  • the input from the one or more input data sources 105 includes one or more images and associated foveated zone data.
  • the processor 110 may be included within a host computer (not shown), whereby the host computer sends the foveated image frame 103 to the grayscale device circuitry 162 associated with the display 165.
  • the grayscale device circuitry 162 may include image protocol decode logic 163, display circuitry 170, raster logic 180, and memory 164.
  • the display circuitry 170 may include control unit 172 and an array of pixels 174; while the raster logic 180 may include a line-set gather logic 182, direct write logic 184, a row buffer 186 and a row queue 188.
  • the grayscale device circuitry 162 and/or processor circuitry 110 may also include other known and/or proprietary circuitry and/or logic structures, including, for example, frame buffer memory/cache, timing circuitry, and the like.
  • the foveated image stored in memory 114 may include a plurality of resolution zones Z0, Z1, Z2, Z3, ... ZN, where each zone has a differing resolution.
  • the zones may be generated using a Center-Detail mode, where the zone having the highest resolution Z0 tracks the user’s gaze direction and the other zones (Z1, Z2, Z3 ... ZN) have a lower resolution than zone Z0, such that the resolution of each zone decreases in descending order away from the user fixation point.
  • the line-set gather circuitry 182 is further detailed in the method of FIG. 3 A and FIG. 3B as part of the memory read from zone buffer data or immediately from received raster order data.
  • the display device may provide direct write logic 184 to enable writing a portion of the row corresponding to one zone area without affecting the other portions of the row. This feature allows directly writing each zone data in zone order without having to gather macropixel data from different zones in line-sets. However, it uses multiple row write times to write all portions of the row, which may be a limiting factor of that embodiment.
  • FIG. 2A is a flow diagram of a method 200 for foveated display in accordance with some embodiments.
  • the method and protocol of foveated display includes receiving eye tracking data relating to generating the foveation zone definitions in an action 210.
  • a processor may receive image input data and tracking data, based upon retina and/or head location of a user in real-time.
  • the method may further include generating a rendered foveated image based upon the image data and parameters defining the foveation zones according to foveated rendering techniques (in an action 215), wherein the rendered foveated image contains macropixel image data corresponding to two or more concentric zones of differing resolution and corresponding macropixel ratios.
  • the method may further include generating a foveated image frame based upon the rendered foveated image and protocol selection parameters (in an action 220), wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is defined by a plurality of macropixels and corresponding macropixel ratios.
  • FIG. 2B is a flow diagram of a method 230 for foveated modulation plane generation by display driver circuitry 120 as in FIG. 1A, in accordance with some embodiments.
  • the foveated image frame is decoded into controlling parameters from the header and macropixel data (e.g., valid macropixel data) of different zones, rows and words according to zones or line-sets as selected by the header parameters.
  • the method may further process the image pixel data in action 235 to transform the pixel data into a format that is compatible or more compatible to the display, which may involve dithering, scaling/attenuating values and/or splitting into bit planes and optionally storing the result in memory.
  • the system may read the data from memory in action 240 according to a modulation scheme to generate a modulation plane.
  • action 244 may encode the modulation plane data into a modulation plane format/protocol with header data according to a selected plane protocol and transmit the result to one or more modulation devices.
  • FIG. 2C is a flow diagram 250 of a method or process for writing foveated modulation plane data to the array of display pixels in a foveated modulation display device 130 as in FIG. 1A, in accordance with some embodiments.
  • the method may include parsing the header packet data from the modulation plane and identifying the foveation zone information.
  • the method may write the data to memory 258 or directly process the data for writing to the pixel array.
  • the method may further include an address management function according to action 260 using the foveation zone parameters to determine the size and order of the corresponding line-set macropixel data, which may include padding.
  • action 264 may gather macropixel data by reading from memory 262 or direct input data to expand and combine macropixel data, according to corresponding macropixel ratios and zone offsets, into display pixel data in a row buffer (see FIG. 4) and transfer to a row queue for writing to the pixel array, which may write one or multiple rows simultaneously with the same data according to the corresponding macropixel ratio.
  • the method may include advancing the control counters and indexes for the next line-set.
  • the method may include repeating the actions of 260, 264 and 268 until each line-set of modulation plane has been written into the pixel array in an action 269.
  • FIG. 2D is a flow diagram 270 of a method or process for writing foveated image frame data to the array of display pixels in a grayscale display device 162 as in FIG. 1B, in accordance with some embodiments.
  • the method may include parsing the header packet data from the foveated image frame and identifying the foveation zone information and valid macropixel data.
  • the method may write the data to memory 278 or directly process the data for writing to the pixel array.
  • the method may further include an address management function according to action 280 using the foveation zone parameters to determine the size and order of the corresponding line-set macropixel data, which may include padding.
  • in response, action 284 may gather macropixel data by reading from memory 282 or direct input data to expand and combine macropixel data, according to corresponding macropixel ratios and zone offsets, into display pixel data in a row buffer (see FIG. 4) and transfer to a row queue for writing to the pixel array, which may write one or multiple rows simultaneously with the same data according to the corresponding macropixel ratio.
  • the method may include advancing the control counters and indexes for the next line-set or the next sub-frame. As a looped operation, the method may include repeating the actions of 280, 284 and 290 until each line- set of the image frame has been written into the pixel array in an action 295.
  • FIG. 3A is a flow diagram of a method or process for writing a modulation plane of foveated data into the pixel array of a foveated modulation display device 130 of FIG. 1A during a center-detail mode of operation, wherein four zones exist in accordance with some embodiments.
  • operations 300 comprise the process on the display device to decode and act on incoming modulation planes with center-detail (Z0 in the center).
  • the flowchart 300 illustrates operations of a modulation plane state diagram for a center-detailed, with a four-zone foveated image.
  • the method 300 for writing a modulation plane of foveated data into the pixel array of a foveated display device may include parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, in an action 301.
  • the one or more modulation devices 130 may wait for the modulation plane data to be sent in an effort to parse the header packet data.
  • a strobe or data signature may be used to indicate the start of a new header packet data/modulation plane.
  • the plane decode logic 133 may capture all the header fields to control the modes and parameters of the current modulation-plane. These can be used in the following decision actions.
  • the method of writing a modulation plane of foveated data into the pixel array of a foveated display device may also include detecting whether the transmission-mode toggle bit is set to enable a raster-order mode in a decision action 302. If this is a raster-order mode modulation plane, proceed to processing the incoming data as line-sets in the predetermined data order starting by initializing the counters/indexes for the first row (in an action 310). In response to no detected raster-order mode, the method may proceed to the zone-order mode fork in an action 304. During this phase, two independent processes will begin to deal with writing (380) and reading (306) the zone buffers.
  • the read side will be delayed until near or after the end of the writing sequence so that the data for zone Z0 (with zone 0 last) is ready when needed for the read (to be described with reference to the timing diagram 500 of FIG. 5).
  • Buffer read time may generally be longer than buffer write, because of any needed pad timing added to large macropixel line-sets for array write.
  • the method may include writing respective zone data of the incoming data into a respective zone buffer {Z(n-1), ..., Z3, Z2, Z1, Z0} (in actions 380-388) and waiting for a read delay of predetermined time in an action 306.
  • the method may include toggling a read pointer corresponding to each respective zone buffer and starting the read data flow to match the order needed for raster-order (to write line-sets to the array). Some devices may optimize the timing in zone-order mode, since the internal buffer read path is not IO-bandwidth limited.
  • zone Z0 data can be read with a wider and/or faster bus to match the minimum row write timing to make up for pad timing added to large macropixel line-sets. For this reason, array write line-set timing differs for zone-order in comparison to raster-order. This step enables zone-order modulation planes to be written faster than raster-order, since any need for pad data timing has been removed.
  • the method may include identifying a first row of a line-set of data in action 310.
  • Row and line-set counts keep track of the corresponding location on the display as it relates to size and location of each zone, in an effort to know if the present line-set crosses or intersects with each of the zones.
  • Some displays may provide a feature that defines the modulation plane size smaller than the total display size, and then position the active data of the modulation plane with an offset in the display area. This initialization step 310 would account for such offsets.
  • the method may further include in a decision action 320 detecting whether a high-resolution zone (Z0) of a plurality of concentric zones is present in the identified line-set of data.
  • the decoding logic will detect whether the line-set includes part of zone Z0. In other words, does the row selected from the line-set cross or intersect with zone Z0?
  • the current location of the pointer is compared to the zone Z0 size and offset, along with any global offsets. If the answer is affirmative, this line-set will also intersect all the upper zones, with zone Z0 in the center. That is, there will be no need to detect whether the other zones are present in the row upon the affirmative detection of zone Z0.
  • the method 300 may proceed to processing the line-set data in actions 322 and 324.
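The zone-intersection decision of action 320 can be approximated as a row-span overlap test. The sketch below is a simplified model with assumed parameter names; the disclosure compares the pointer location to the zone size and offsets, but exact register semantics are not given.

```python
def line_set_intersects_zone(ls_start_row, ls_size,
                             zone_v_offset, zone_v_size,
                             global_v_offset=0):
    """True when the line-set's row span overlaps the zone's vertical extent."""
    zone_top = zone_v_offset + global_v_offset
    zone_bottom = zone_top + zone_v_size
    ls_bottom = ls_start_row + ls_size
    # half-open interval overlap test
    return ls_start_row < zone_bottom and ls_bottom > zone_top
```

An analogous horizontal test against H-size and H-offset would determine where within the row each zone's data lands.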
  • the method may include detecting whether a next consecutive zone (Zl, Z2, Z3, ... Z(n-l)) is present in the identified line-set of data until one zone is detected, in decision action steps 330, 340, and 350.
  • the method may include expanding the identified line-set of data, based upon the number of zones, horizontal zone-size, line-set-size, x-offset-size, row-time, and word-size; storing the expanded data in a row buffer based upon the x-offset- size in action steps 322, 332, 342, and 352.
  • the method may include transferring the row buffer to a row queue, wherein k row(s) are written when the respective zones {Z0, Z1, Z2, Z3, ... Z(n-1)} are detected (in action steps 324, 334, 344, and 354).
  • the decoding logic will be able to identify how many words will be received for each zone.
  • zone Z0 is last in the row and zone Z3 data is first in the row.
  • in action 322, for zone Z3 data each word is expanded 8 times horizontally, wherein the final result of combined/concatenated words is stored in the row buffer with the correct offsets.
  • zone Z2 data will be next; each word will be expanded 4 times horizontally and stored in the row buffer with the correct offsets.
  • zone Z1 data is expanded 2 times horizontally and stored in the row buffer with the correct offsets.
  • zone Z0 data will be last, and stored directly in the row buffer with the correct offsets.
  • for zone Z0, one row at a time is written, eight times. Additionally, the method 300 may include transferring the row buffer to the row queue. Row writes to the array can begin before all 8 rows are in the queue. Each row is written individually for a total of 8 write cycles. Write cycles will be spaced to match the minimum row timing for one row at a time (x1 mode), hence the need for the queue.
  • Following the example of the detection of zone Z0 data, during the detection of the other respective zones Z1-Z3 (in action steps 330, 340, and 350), the zone data is expanded for each respective zone and combined within the row buffer in actions 322, 332, 342, and 352 in a similar fashion. That is, for zone Z3 data each word is expanded 8 times horizontally, wherein the final result of combined/concatenated words is stored in the row buffer with the correct offsets. Following this same example, zone Z2 data will be next; each word will be expanded 4 times horizontally and stored in the row buffer with the correct offsets. Next, zone Z1 data is expanded 2 times horizontally and stored in the row buffer with the correct offsets.
  • zone Z0 data will be last, and stored directly in the row buffer with the correct offsets.
  • for zone Z1, two rows are written four times in action 334.
  • for zone Z2, four rows are written two times in action 344.
  • for zone Z3, eight rows are written one time in action 354.
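For a four-zone, 8-row line-set, the write multiplicities (one row eight times, two rows four times, four rows twice, eight rows once) follow a power-of-two relationship. This sketch assumes zone n uses a 2^n-pixel macropixel, which matches the 8x/4x/2x/1x expansions described; the function name is illustrative:

```python
def zone_parameters(zone, line_set_size=8):
    """Return (horizontal expansion ratio, rows written simultaneously,
    write cycles per line-set) for a given zone number, assuming the
    macropixel edge doubles with each zone."""
    ratio = 2 ** zone                     # macropixel edge in display pixels
    rows_per_write = ratio                # identical rows written per cycle
    write_cycles = line_set_size // ratio # cycles to cover the line-set
    return ratio, rows_per_write, write_cycles
```

Note how the product of rows-per-write and write-cycles is always the line-set size, so every zone fills the same 8 display rows.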
  • if in raster-order mode, the method will branch back to checking for the next modulation plane (in action 301). If not, then zone-order mode exists and the process ends with nothing more to do for this thread. In some embodiments, another thread writing buffer data would return to detecting the next header packet data. In the alternative, this other thread may currently be writing the next modulation plane’s buffer data. Some implementations may perform the buffer read-side A/B pointer toggle at the end of the read sequence instead of at the beginning.
  • in action steps 380, 382, 384, 386, and 388, data from the respective zones Z3, Z2, Z1, and Z0 are written into a respective zone buffer (zone 3 buffer, zone 2 buffer, zone 1 buffer, and zone 0 buffer). Some macropixel rows will be full zone width; others that intersect with another adjacent zone may be shorter. Padding modes can be enabled to adjust data steering as needed to match buffer addressing. The expected number of words for each zone data is calculated from the sizes and mode controls.
  • the method may include moving on to the next zone data in actions 382, 384, and 386.
  • in an action 388, the method ends the zone-order processing by toggling the buffer A/B write-side pointers and returning to waiting for more modulation planes having header packet data.
  • FIG. 3B is a flow diagram of a method or process for writing a modulation plane of foveated data into the pixel array of a foveated modulation display device 130 of FIG. 1A during a periphery-detail mode of operation, wherein four zones exist in accordance with some embodiments.
  • the flowchart 400 illustrates operations of a modulation plane state diagram for the mode of operation in which the display device can decode and act upon incoming modulation planes with periphery-detail, where zone Z0 is located on the outer periphery of the display.
  • the method 400 for writing a modulation plane of foveated data into display array pixels may include parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, in an action 401.
  • the one or more modulation devices may wait for the modulation plane data to be sent in an effort to parse the header packet data.
  • a strobe or data signature may be used to indicate the start of a new header packet data/modulation plane.
  • the plane decode logic may capture all the header fields to control the modes and parameters of the current modulation-plane. These can be used in the following decision actions.
  • the method of writing a modulation plane of foveated data into the pixel array of a foveated display device may also include detecting whether the transmission-mode toggle bit is set to enable a raster-order mode in a decision action 402. If this is a raster-order mode modulation plane, proceed to processing the incoming data as line-sets in the predetermined data order starting by initializing the counters/indexes for the first row (in an action 410). In response to no detected raster-order mode, the method may proceed to the zone-order mode fork in an action 404. During this phase, two independent processes will begin to deal with writing (480) and reading (406) the zone buffers.
  • the read side will be delayed until near or after the end of the writing sequence so that the data for zone Z0 (with zone 0 last) is ready when needed for the read (to be described with reference to the timing diagram 500 of FIG. 5).
  • Buffer read time may generally be longer than buffer write, because of any needed pad timing added to large macropixel line-sets for array write.
  • the method may include writing respective zone data of the incoming data into a respective zone buffer {Z(n-1), ..., Z3, Z2, Z1, Z0} (in actions 480-486) and waiting for a read delay of predetermined time in an action 406.
  • the method may include toggling a read pointer corresponding to each respective zone buffer and starting the read data flow to match the order needed for raster-order (to write line-sets to the array). Some devices may optimize the timing in zone-order mode, since the internal buffer read path is not IO-bandwidth limited.
  • zone Z0 data can be read with a wider and/or faster bus to match the minimum row write timing to make up for pad timing added to large macropixel line-sets. For this reason, array write line-set timing differs for zone-order in comparison to raster-order. This step enables zone-order modulation planes to be written faster than raster-order, since any need for pad data timing has been removed.
  • the method may include identifying a first row of a line-set of data in action 410.
  • Row and line-set counts keep track of the corresponding location on the display as it relates to size and location of each zone, in an effort to know if the present line-set crosses or intersects with each of the zones.
  • Some displays may provide a feature that defines the modulation plane size smaller than the total display size, and then position the active data of the modulation plane with an offset in the display area. This initialization step 410 would account for such offsets.
  • the method may further include in a decision action 420 detecting whether the lowest-resolution zone (Z3) centered within the plurality of concentric zones during the peripheral-detailed mode is present in the identified line-set of data.
  • the decoding logic will detect whether the line-set includes part of zone Z3. In other words, does the row selected from the line-set cross or intersect with zone Z3?
  • the current location of the pointer is compared to the zone Z3 size and offset, along with any global offsets. If the answer is affirmative, this line-set will also intersect all the upper zones, with zone Z3 in the center. That is, there will be no need to detect whether the other zones are present in the row upon the affirmative detection of zone Z3.
  • the method 400 may proceed to processing the line-set data in actions 422 and 424.
  • the method may include detecting whether a next consecutive zone (Z2, Z1, Z0) is present in the identified line-set of data until one zone is detected, in decision action steps 430, 440, and 450.
  • the method may include expanding the identified line-set of data, based upon the number of zones, horizontal zone-size, line-set-size, x-offset-size, row-time, and word-size; storing the expanded data in a row buffer based upon the x-offset-size in action steps 422, 432, and 442.
  • for zone Z3 data, each word is expanded 8 times horizontally, wherein the final result of combined/concatenated words is stored in the row buffer with the correct offsets.
  • zone Z2 data will be next; each word will be expanded 4 times horizontally and stored in the row buffer with the correct offsets.
  • zone Z1 data is expanded 2 times horizontally and stored in the row buffer with the correct offsets.
  • zone Z0 data will be last, and stored directly in the row buffer with the correct offsets.
  • one row at a time is written, eight times.
  • the method 400 may include transferring the row buffer to the row queue. Row writes to the array can begin before all 8 rows are in the queue. Each row is written individually for a total of 8 write cycles. Write cycles will be spaced to match minimum row timing for one row at a time (xl mode), thus the need for the queue.
  • zone data is expanded for each respective zone and combined within the row buffer in actions 422, 432, and 442 in a similar fashion. That is, for zone Z2 data each word is expanded 4 times horizontally, wherein the final result of combined/concatenated words is stored in the row buffer with the correct offsets.
  • zone Z1 data will be next; each word will be expanded 2 times horizontally and stored in the row buffer with the correct offsets.
  • zone Z0 data will be last, and stored directly in the row buffer with the correct offsets (as pass-through data in step 452).
  • for zones Z3, Z2, Z1, and Z0, one row at a time is written, eight times, in actions 424, 434, 444, and 454, respectively.
  • if in raster-order mode, the method will branch back to checking for the next modulation plane (in action 401). If not, then zone-order mode exists and the process ends with nothing more to do for this thread. In some embodiments, another thread writing buffer data would return to detecting the next header packet data. In the alternative, this other thread may currently be writing the next modulation plane’s buffer data. Some implementations may perform the buffer read-side A/B pointer toggle at the end of the read sequence instead of at the beginning.
  • in action steps 480, 482, 484, and 486, data from the respective zones Z3, Z2, Z1, and Z0 are written into a respective zone buffer (zone 3 buffer, zone 2 buffer, zone 1 buffer, and zone 0 buffer). Some macropixel rows will be full zone width; others that intersect with another adjacent zone may be shorter. Padding modes can be enabled to adjust data steering as needed to match buffer addressing. The expected number of words for each zone data is calculated from the sizes and mode controls.
  • the method may include moving on to the next zone data in actions 482, 484, and 486.
  • the method ends the zone-order processing by toggling the buffer A/B write-side pointers and returning to waiting for more modulation planes having header packet data.
  • a diagram 490 of a display with 4 zones in center-detail mode indicates a line-set slice 494 of 8 rows across the middle of the display such that the line-set crosses all 4 zones.
  • This figure applies to the Just-In-Time (JIT) data order of a raster order protocol or the line-set gather function as it relates to the expand and combine steps corresponding to action 322 outlined in FIG. 4B and FIG. 4C.
  • In FIG. 4B, a multiple-step diagram of the expansion of the macropixel data in the method of FIG. 3A, correlating to action 322 and showing the contents of a macropixel data block and the contents of a row buffer, in accordance with some embodiments, is shown.
  • the method includes matching using JIT-order for immediate writes.
  • the method may include internal buffer reading to gather data for the line-set writing. For this given example, there is one row of zone-3 macropixels, two rows of zone-2 macropixels, four rows of zone-1 macropixels, and eight rows of zone-0 pixels.
  • the method may include expanding zone-3 macropixels 8 times horizontally for the separate A and B regions representing the left and right sides of the zone within the row of the current line-set. This data is written into a row buffer with the zone 3 offsets (z3HoffsetA&B).
  • the method may include expanding zone-2, where row 1 macropixels are copied four times horizontally into the right-side and left-side segments of the row buffer with zone 2 offsets (z2HoffsetA&B).
  • the method may include expanding zone-1, where row 1 macropixels are copied two times horizontally into the right-side and left-side segments of the row buffer with zone 1 offsets (z1HoffsetA&B).
  • the method may include writing zone-0, where row 1 pixels are copied into the row buffer at the zone 0 offset (z0Hoffset). The method may include copying the row buffer to the row queue using the steering horizontal offset (Hoffset).
  • the method may include writing the second row (row 2) of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset).
  • the row buffer is copied to the row queue using the steering horizontal offset (Hoffset), where outside pixels are filled with the same data.
  • the method may include expanding the second row of zone-1 two times horizontally into the right-side and left-side segments of the row buffer with zone 1 offsets (z1Hoffset).
  • the method may include writing the third row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset).
  • the method may include writing the fourth row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset).
  • the method may include expanding the second row of zone-2, where row 2 macropixels are copied four times horizontally into the right-side and left-side segments of the row buffer with zone 2 offsets (z2Hoffset).
  • the method may include expanding the third row of zone-1, where row 3 macropixels are copied two times horizontally into the right-side and left-side segments of the row buffer with zone 1 offsets (z1Hoffset).
  • the method may include writing the fifth row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset).
  • the method may include writing the sixth row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset).
  • the method may include expanding the fourth row of zone-1 two times horizontally into the right-side and left-side segments of the row buffer with zone 1 offsets (z1Hoffset).
  • the method may include writing the seventh row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset).
  • the method may include writing the eighth row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset).
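The FIG. 4B sequence follows a just-in-time ordering: each zone's macropixel row arrives immediately before the first display row it covers. The sketch below reproduces that ordering under the assumption that a zone-n macropixel row covers 2^n display rows; the function name and tuple encoding are illustrative, not from the disclosure.

```python
def jit_order(zones=4, line_set_size=8):
    """Yield (zone, zone_row) tuples in just-in-time transmission order:
    for each display row, emit the zone rows first needed at that row,
    lowest-resolution zone first."""
    sent = set()
    order = []
    for r in range(1, line_set_size + 1):
        for z in range(zones - 1, -1, -1):       # Z3 (lowest-res) first
            zone_row = (r - 1) // (2 ** z) + 1   # zone row covering display row r
            if (z, zone_row) not in sent:
                sent.add((z, zone_row))
                order.append((z, zone_row))
    return order
```

Running this for four zones reproduces the sequence described above: Z3 row 1, Z2 row 1, Z1 row 1, Z0 rows 1-2, Z1 row 2, Z0 rows 3-4, Z2 row 2, and so on, for 15 transfers per 8-row line-set.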
  • the buffer for each zone is divided into A and B halves/sides representing two consecutive planes (or frames); wherein, the data for one plane should fit in one half/side.
  • the data from the first plane is written into the A side of the buffers as it is received, zone-by- zone.
  • the header indicates the delay time from the header to the start of reading from the buffers, from the A side for the first plane.
  • the read collects data for each line-set in order (line-set 0 first, then set 1, etc.) as needed from all of the zones, then writes them to the array.
  • the read can overlap the end of the write sequence so long as the writes for zone 0 line-sets stay ahead of the reads.
  • the next plane write to the B-side can begin before the A side reads are finished.
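The A/B double-buffering described above can be modeled as follows. This is a simplified sketch; the class and method names are assumptions, and real hardware would maintain per-zone write pointers toggled as in action 388.

```python
class ZoneBuffer:
    """Two-sided (A/B) buffer: one plane is written while the prior
    plane is read from the opposite side."""

    def __init__(self, depth):
        self.sides = {"A": [None] * depth, "B": [None] * depth}
        self.write_side = "A"

    def write_plane(self, data):
        side = self.sides[self.write_side]
        for i, word in enumerate(data):
            side[i] = word
        # toggle so the next plane lands on the other side
        self.write_side = "B" if self.write_side == "A" else "A"

    def read_side(self):
        # reads drain the side most recently written
        return "B" if self.write_side == "A" else "A"
```

With this model, plane k+1 can be written to side B while side A is still being read, provided the zone-0 writes stay ahead of the reads as noted above.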
  • a module might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the present invention.
  • a module might be implemented utilizing any form of hardware, software, or a combination thereof.
  • processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a module.
  • the various modules described herein might be implemented as discrete modules or the functions and features described can be shared in part or in total among one or more modules.
  • FIG. 6 is an illustration showing an exemplary computing device 600 which may implement the embodiments described herein.
  • the computing device of FIG. 6 may be used to perform embodiments of the functionality for performing the foveated image display in accordance with some embodiments.
  • the computing device 600 includes a central processing unit (CPU) 602, which is coupled through a bus 606 to a memory 604, video driver 607 and mass storage device 608.
  • Mass storage device 608 represents a persistent data storage device such as a floppy disc drive or a fixed disc drive, which may be local or remote in some embodiments.
  • the mass storage device 608 could implement a backup storage, in some embodiments.
  • Memory 604 may include read only memory, random access memory, etc.
  • Applications resident on the computing device may be stored on or accessed through a computer readable medium such as memory 604 or mass storage device 608 in some embodiments. Applications may also be in the form of modulated electronic signals accessed through a network modem or other network interface of the computing device.
  • CPU 602 may be embodied in a general-purpose processor, a special purpose processor, or a specially programmed logic device in some embodiments.
  • Display 612 is in communication with CPU 602, memory 604, and mass storage device 608, through video driver 607 and bus 606. Display 612 is configured to display any visualization tools or reports associated with the system described herein.
  • Input/output device 610 is coupled to bus 606 in order to communicate information in command selections to CPU 602. It should be appreciated that data to and from external devices may be communicated through the input/output device 610.
  • CPU 602 can be defined to execute the functionality described herein to enable the functionality described with reference to Figs.
  • the code embodying this functionality may be stored within memory 604 or mass storage device 608 for execution by a processor such as CPU 602 in some embodiments.
  • the operating system on the computing device may be iOS™, MS-WINDOWS™, OS/2™, UNIX™, LINUX™, VXWORKS™, or other known operating systems. It should be appreciated that the embodiments described herein may be integrated with a virtualized computing system as well.
  • image, image-related, and/or other data characteristics represent data, protocol parameters, elements, and/or header components for transferring or sending image, image-related, and/or other data to a display: H-step-size (Horizontal size/offset step size)
  • H-step-size represents an increment in horizontal offset or horizontal size. Zones are defined by multiples of H-step-size, when defining the size or offset. If a value for H-size or H-offset is not a multiple of H-step-size, it will typically cause an error as it indicates a conflict between the sent data format and the expected decoding of the data at the receiver.
  • two different H-step-size parameters may be used (e.g., to enable a finer step size for H-offset than for H-size). This value may be selected based on, for example, the display device’s capability to decode and combine macropixel data from different zones and minimizing the display multiplexing logic.
  • a system control algorithm utilized by the processor 110 can adjust/move each zone’s size/offset to match these step-size increments.
  • the size of the zone may need to be larger than the required area of interest by one step to ensure that the required area of interest is included within the zone for small gaze offsets.
  • Z0Hoffset may be any of {0, 32, 64, 96, ... 32X (a multiple of 32)}.
  • the H-step-size can be a multiple of the largest macropixel’s horizontal size to prevent defining an H-size or H-offset that is a fraction of a macropixel.
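A minimal validation sketch of the H-step-size constraint follows; the function name and the example step size of 32 are assumptions for illustration, matching the Z0Hoffset example above.

```python
def validate_zone_geometry(h_size, h_offset, h_step_size=32):
    """Raise if a zone's horizontal size or offset is not a multiple
    of H-step-size, mirroring the decode error described above."""
    if h_size % h_step_size != 0:
        raise ValueError(f"H-size {h_size} is not a multiple of {h_step_size}")
    if h_offset % h_step_size != 0:
        raise ValueError(f"H-offset {h_offset} is not a multiple of {h_step_size}")
    return True
```

A system control algorithm (such as the one run by processor 110) would snap each zone's computed size and offset to these increments before transmission rather than rely on the receiver's error.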
  • the system of foveated display having the foveated protocol or method includes two options of data order: zone-order or raster-order.
  • in the zone-order mode of operation, all of the data for a zone is sent first, with any cutout pixels inside the zone removed. If more than one zone is used, all of the data for a second zone is sent next, and so on until the last zone.
  • the order of zones may vary. That is, data for the zone having the highest resolution Z0 can be first or last. Compared to raster-order mode, zone-order may have the least amount of padding, because it may avoid display device dependent timing.
  • the one or more modulation devices may be required to buffer the data, and then collect data for one row (or row-set) from each zone as appropriate before it can send it or write it.
  • words are sent in sets of line data corresponding to Line-Set-Size (LSS) output rows.
  • the LSS is also configurable (as described in further detail below).
  • zone-set-order with Zone 0 last
  • JIT (just-in-time) order
  • the data may flow from the top/start of the image to the bottom/end in a raster like fashion, row-by-row, which may correlate to the display device in either top-to-bottom order or bottom-to-top order.
  • the display device keeps track of the row number and line-set number; and then, compares the present row/line-set to the header information for each zone, to determine if the present row/line-set intersects with one or more zone.
  • the raster logic 150 inspects the corresponding number of words and format of the line-set according to the header parameters (see details given in the disclosure relating to FIGS. 3A and 3B).
  • the one or more modulation devices 130 may flip or reverse the data processing in the horizontal or vertical direction.
  • the modulation devices 130 may steer the entire image or at least a portion of an image by some number of pixels to match system needs. These functions may be orthogonal to the foveation technique and may work together without interference.
  • the data order may be different at the image frame format than at the modulation plane format.
  • the host processor may prefer zone-order format while the display device may prefer raster-order format. Considering these two interfaces as a pair, it may be zone-to-zone if the display device accepts zone-order. It may be zone-to-raster if the driver is able to translate from one to the other. It may be raster-to-raster if the host processor can output raster-order.
  • the system and method of foveated display disclosed herein may include four different types of padding: per-zone padding, per-row padding, per-set padding, and min- row-time padding.
  • Different embodiments may implement different forms of padding.
  • a raster-order format may have all four padding types enabled.
  • a zone-order format may only have per-zone padding enabled. Each can be enabled/required on its own, so one or all padding types may be used in the same frame or plane.
  • the system and method of foveated display disclosed herein implements padding for the last word of a block, when the data block size is not a multiple of word size (Wsize).
  • no additional padding may be required to meet this constraint.
  • when this padding is enabled with the plane format, dummy pad pixels are added, rather than image pixels as with the image frame format. A single bit per pixel puts more pixels per word in modulation plane format.
  • if the driver controller has the ability to insert per-zone padding for zone-order modulation plane output and can accept non-padded image frame input data, the image frame data may not need to have zone-order padding.
  • a system with per-row padding enabled can insert padding between the rows (at the end of the first row), such that the new row’s data starts on a word boundary.
  • a device may allow the partial data at the end of the first row to wrap into the first word of the next row, shifting all of the data for the next row over by the wrap amount.
  • per-row padding may be avoided by not enabling per-row padding.
  • padding at the end of one zone data before the next zone for partial words may still be required, as selected by per-zone padding.
  • Per-set padding is an option that is only available in raster-order format. Only raster-order format (not zone-order) uses line-sets. Per-set padding adds padding at the end of each line-set, to end on a word boundary. Line-sets will end on a word boundary even without per-set padding enabled, if per-zone or per-row padding is enabled.
  • the amount of whole word padding required is calculated by first padding the data words to whole word boundaries as selected by other padding modes, and then subtracting the total number of words with active data for the row or row-set from the total number of word-times required by the row or row-set.
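The padding arithmetic described above reduces to ceiling division to whole-word boundaries, followed by a subtraction against the required word-times. A sketch with assumed names:

```python
def words_for_pixels(pixels, wsize):
    """Round a pixel count up to whole words (per-zone/per-row padding)."""
    return -(-pixels // wsize)          # ceiling division

def pad_words(active_words, required_word_times):
    """Whole pad words appended so a row or row-set fills the total
    number of word-times it must occupy."""
    return max(0, required_word_times - active_words)
```

For example, 100 one-bit pixels with a 32-bit word size occupy 4 words; if the row-set requires 6 word-times, 2 whole pad words are appended.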
  • min-row-time padding is only added after the end of the data for the line-set.
  • Different amounts of total time can be required based on the number of row-time periods required by the line-set, which is determined by zone crossing conditions. If there is only one set of unique row data (e.g. when the entire line-set is covered by one macropixel row), then only one row time is needed. If there are two or more sets of unique row data, then the row-times for each set are added together to get the total time required for the line-set. When a line-set is written in only one row time (e.g.
  • a corollary pixel padding may be inserted in the foveated image frame to facilitate a direct translation to the modulation plane format, when the two formats are using the same modes and order.
  • This also indicates a maximum number of rows that can be written at the same time for a zone/macropixel row that is the same size as LSS.
  • a display device may have a fixed LSS, which may require a system and driver to use this LSS or smaller.
  • the value for LSS may also define a matching horizontal expansion capability. When this is not the case, another parameter may be used.
  • Hres (horizontal resolution) and Vres (vertical resolution) denote the full display size.
  • Z0Hsize = Hres and Z0Vsize = Vres. This is a normal display mode at full resolution; every line is the same size: Hres bits.
  • the remaining line-sets will cross both zone Z0 and zone Zl.
  • zone Z2 data is sent first (to ease implementation of writing into the display line buffer); zone Z1 and zone Z0 data follow in order of rows for just-in-time (JIT) order.
  • Each word only contains data from the same zone; there may be unused bits in the last word of a zone set.
  • pad words may not be needed to meet the minimum row timing.
  • Each row will be written one at-a-time because of the unique data per row in zone 0.
  • zone Z2: roundup((Z2Hsize-Z1Hsize)/Z2Hmpix/Wsize).
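The word-count expression above can be written out as a small helper; the zone sizes and ratios in the example are hypothetical, not values from the specification:

```python
import math

def zone_words(outer_hsize, inner_hsize, mpix_ratio, word_size):
    """Words needed for one macropixel row of a zone that surrounds an
    inner cut-out: the zone's horizontal pixels minus the inner zone's,
    reduced by the macropixel ratio, rounded up to whole words."""
    return math.ceil((outer_hsize - inner_hsize) / mpix_ratio / word_size)

# Illustrative numbers: Z2 spans 1920 pixels, the Z1 cut-out is 960 wide,
# zone Z2 uses 4x4 macropixels, and the interface word size is 32 bits.
z2_words = zone_words(1920, 960, 4, 32)  # (1920-960)/4 = 240 macropixels -> 8 words
```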
  • some non-standard formats may also be used, where a low-res overlay over the entire display is desired. Such a format may be defined with multiple zones but with the higher resolution zones set to size 0. It may also be defined with just one zone (zone 0), but with its macropixel size larger than 1. In this case, the LSS is also set equal to the larger macropixel size to enable simultaneous row writes. The display device would have to be compatible with this format and system configuration.
  • Min Row Clocks The display device has timing requirements to write a row to its array. This can be translated into a minimum number of clocks (“n”) at the given clock speed. When only writing high-res data for an entire display, there are plenty of clocks/time to write each row. Yet, when transmitting a single row of macropixels for a line-set all in one low-res zone, the number of data words is much less and may be shorter than the time required to write a row to the array. For example, with three zones and a line-set all in zone Z2, only 1/4 of the clocks/words are provided, which may be too short for the row write timing.
  • the transmitter can pad extra words at the end of the line- set data to meet the required timing.
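A minimal sketch of this timing pad, assuming a hypothetical `min_row_clocks` requirement and illustrative word counts:

```python
def timing_pad_words(data_words: int, min_row_clocks: int, rows_written: int) -> int:
    """Extra pad words the transmitter appends after a line-set so that
    the transmitted data spans at least the display's row-write time."""
    required = min_row_clocks * rows_written
    return max(0, required - data_words)

# A line-set entirely in zone Z2 (4x4 macropixels) carries roughly 1/4 of
# the words of a full-resolution line-set, so padding is usually needed:
pad = timing_pad_words(data_words=16, min_row_clocks=60, rows_written=1)  # -> 44
```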
  • For zone Z1 and Z2 line-sets, 2n (or 2*MRCx2) row times are needed, whereby the rows are written in pairs.
  • For line-sets crossing zone Z0, 4n (or 4*MRCx1) row times are needed, whereby the rows are written independently.
  • the methodology described herein does not use a data enable or wait signal for dynamic flow control; this is a known predictable timing requirement and can be accommodated by padding the transmitted data.
  • FIG. 8A illustrates a data format of foveated image in host memory, in accordance with some embodiments.
  • the foveated image in the host memory may include a full color image of macropixels, for example, for three zones. As shown, the length of each row and number of rows in each group depend upon the zone sizes and offsets. In an embodiment of the present invention, even within one zone, a system and/or method, in accordance with the present invention, may have shorter rows, corresponding to areas that have a cut-out of an inner zone, where the data for the parts of the zone on either side of the cut-out are concatenated to save space and bandwidth/time.
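The cut-out concatenation described above can be sketched as follows; the segment contents are illustrative only:

```python
def concatenate_cutout_row(left_segment, right_segment):
    """Rows of an outer zone that overlap an inner-zone cut-out carry only
    the pixels on either side of the cut-out, concatenated together to
    save space and bandwidth (illustrative sketch)."""
    return left_segment + right_segment

# An outer-zone row whose middle is covered by the inner zone: only the
# left and right remnants are transmitted, back to back.
row = concatenate_cutout_row([1, 2, 3], [7, 8, 9])  # -> [1, 2, 3, 7, 8, 9]
```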
  • FIG. 8B illustrates a data format for image frames or modulation planes, where the data is sent via Zone-order with zone pad, in accordance with some embodiments.
  • the end-of-zone pad, when used, causes the macropixel data to end on a display word boundary, as shown. However, in accordance with an embodiment of the present invention, the macropixel data may not end on a video row boundary.
  • the video format of standard image transport interfaces may use a common row length, which does not match zone row lengths. Macropixel rows are packed and wrapped inside the video row active data. In an embodiment of the present invention, there may be blanking time between each video interface row.
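One possible sketch of packing variable-length macropixel rows into fixed-length video interface rows; zero-fill for the tail is an assumption, not mandated by the text:

```python
def wrap_into_video_rows(macropixel_rows, video_row_len, fill=0):
    """Concatenate variable-length macropixel rows into one stream, then
    re-slice the stream into fixed-length video rows; the final partial
    row, if any, is padded with a fill value."""
    stream = [w for row in macropixel_rows for w in row]
    rows = [stream[i:i + video_row_len] for i in range(0, len(stream), video_row_len)]
    if rows and len(rows[-1]) < video_row_len:
        rows[-1] += [fill] * (video_row_len - len(rows[-1]))
    return rows

# Three macropixel rows of lengths 5, 7 and 4 wrapped into video rows of 8 words:
vr = wrap_into_video_rows([[1] * 5, [2] * 7, [3] * 4], video_row_len=8)
```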
  • Plane data formats are usually continuous data and can be very similar to image data formats: just 1 bit per macropixel instead of n bits per macropixel, with no horizontal sync (H-sync) or horizontal blanking (H-blanking).
  • the data formats shown in FIGS. 8B, 8C, 8D and 8E could be formats for full- color macropixels.
  • For color-sequential formats the data order after the header is repeated three times. The amount of padding for word boundaries would probably be different for color-sequential data.
  • FIG. 8C illustrates a data format for image frames or modulation planes, where the data is sent via Zone-order with row pad, in accordance with some embodiments. It is to be noted that zone Z0 rows could need padding to end on a display word boundary, but often the zone Z0 horizontal size (H-size) is on a word boundary.
  • FIG. 8D illustrates a data format for image frames or modulation planes, where the data is sent via Raster-order and zone-set-order with per-zone and line-set row-timing padding, in accordance with some embodiments. It should be noted that depending on zone vertical offsets, the first line-set usually just contains data for the outside zone, as implied in the figure. But it could have multiple zone data.
  • the H-size of the display will be wider than any macropixel row. Macropixel rows are padded to word boundaries then packed together in the data stream. It should be noted that zone Z0 rows could need padding to end on a display word boundary; but often the zone Z0 H-size is on a word boundary.
  • FIG. 8E illustrates a data format for image frames or modulation planes, where the data is sent via Raster-order and Just-In-Time (JIT) order with per-zone and line-set row timing padding, in accordance with some embodiments.
  • Macropixel data is packed as row-sets and padded for word boundaries and timing needs, then packed together in the data stream. It should be noted that depending on zone vertical offsets, the first line-set usually just contains data for the outside zone, as implied in the figure. Yet, in some embodiments, there could be multiple zone data.
  • FIG. 9A illustrates the physical layout of column multi-driver configurations, in accordance with some embodiments; modulation planes are particularly pressed to meet pixel array write timing, and line-sets that have little data (for example those that only cross the highest macropixel ratio zone) may need additional time to finish the multi-row write operation. Rather than adding padding pixels/bits to meet timing, multiple column drivers may be used in some embodiments to simultaneously write multiple line-sets at the same time. One, two and four drivers per column configurations are shown. Various arrangements of the multiple drivers are shown where 1, 2, 4 or 8 adjacent rows share the same driver; extending these options to larger sets should be understood by someone skilled in the art.
  • FIG. 9B illustrates the number of row write times required for the multi-driver arrangements of FIG. 9A for certain line-set conditions and simultaneous groupings, in accordance with some embodiments.
  • Some systems or embodiments may also provide single pixel steering (vertical and horizontal) of the foveated image which causes non-alignment of the line-set with the adjacent driver arrangement, which further compounds the advantage of multiple drivers and requires larger sets to achieve the time savings as compared to the single driver per column case.
  • display devices may have multiple column drivers per display column. Each of these drivers may be routed to different sets of adjacent rows, so that multiple different rows with different data can be written at the same time; this allows min row timing to apply to multiple row- sets.
  • the data from the first row/line-set can be buffered up to be used with the 2nd row/line-set, and the like.
  • These levels may be defined and may be included in a header or with data sent by a system, method, and/or protocol, in accordance with the present invention, to a display, using the following parameters:
  • 4ph This represents the use of four drivers per column, one per half line-set: one is connected to all the even line-set top half rows, the next to the even bottom half rows, the next is connected to all the odd line-set top half rows and the last is connected to the odd bottom half rows. This allows more time for both the largest zone and the largest two zones. Again, V-step-size should be 2*LSS.
  • 4ps This represents the use of four drivers per column, one per line-set: one is connected to the first of four line-sets, the next to the second of four line-sets, the next is connected to the third and the last is connected to the fourth. This allows the data time from 4 line-sets to be applied to needed timing for one line-set (V-step-size should be 4*LSS).
  • n applies to the number of clocks for the combined line-sets in zone 2 only; wherein, the number of clocks is 2n, if in zones 1 & 2; and 4n, if the line-sets cross zone 0. It should be noted that this also restricts the size and offsets of the zones in the vertical direction to be a multiple of eight display rows. In this way, both groups of four rows have the same zone crossing condition.
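The n / 2n / 4n rule above can be captured directly; `deepest_zone_crossed` is a hypothetical parameter naming the innermost zone the combined line-sets touch:

```python
def line_set_clocks(n: int, deepest_zone_crossed: int) -> int:
    """Minimum clocks for a combined line-set per the rule above:
    n if only zone Z2 is crossed, 2n if zones Z1 & Z2 are crossed,
    and 4n if the line-sets cross zone Z0."""
    return {2: n, 1: 2 * n, 0: 4 * n}[deepest_zone_crossed]

# With a hypothetical n = 60 clocks:
clocks = [line_set_clocks(60, z) for z in (2, 1, 0)]  # [60, 120, 240]
```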
  • FIG. 10 illustrates the diameter, visual field width and cone density of various regions of the human eye relative to the fovea, as available from public sources.
  • the data is used as an example of optical design considerations for a foveated display system.
  • a horizontal macropixel ratio is the same as the vertical macropixel ratio for the zone, and, for example, may result in a square array of display pixels per macropixel; however rectangular macropixel sizes could be used.
  • each macropixel ratio may be an integer that enables the display device hardware to copy (or write) each macropixel bit or value to the corresponding display pixels’ bit or value, in the horizontal and/or vertical direction.
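A minimal sketch of this macropixel copy operation, assuming a square macropixel ratio as in the typical case described above:

```python
def expand_macropixels(mp_rows, ratio):
    """Copy each macropixel value to a ratio x ratio block of display
    pixels, in both the horizontal and vertical directions."""
    out = []
    for row in mp_rows:
        expanded = [v for v in row for _ in range(ratio)]   # horizontal copy
        out.extend([list(expanded) for _ in range(ratio)])  # vertical copy
    return out

# One 2x2 block of macropixels expanded with a 2:1 ratio -> 4x4 display pixels:
px = expand_macropixels([[1, 0], [0, 1]], ratio=2)
```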
  • An imaging application will often pick a resolution for the central area that matches an average resolution for an area larger than the Foveola or FAZ; thus, the application is not using the highest fovea density.
  • a display typically does not include the entire periphery of the user’s FOV; thus, the lowest sensitivity of the human vision in the outer periphery is not used.
  • Each of the lower resolution zones selects its resolution based on the highest sensitivity portion of the zone, which is at the inner boundary of the zone. In other words, the inner zone boundary is selected based on the vision sensitivity dropping to the selected resolution for the zone. This means the lowest system resolution will be based on the sensitivity at the boundary between the two lowest resolution zones, which is an angle much less than the total FOV. Thus, considering all three of these factors, the ratio of highest to lowest resolution may only be 4 to 1.
  • the number of cones per display pixel in the central high-resolution zone is another way to select the optical system to fit the resolution to the FOV.
  • This system and method of foveated display does not select a cone- to-pixel threshold or sensitivity; that is controlled by the application.
  • This system and method of foveated display just relates the ratio of the central zone resolution to the lower resolution zones.
  • the peripheral vision needs less than 1/4 of the linear resolution of the central region.
  • the use of a 4x4 macropixel (1 macropixel represents 16 display pixels, which equals a 1/16 reduction in area resolution) is a corresponding option.
  • the resolution for the outer foveation zone would be defined as 4x4.
  • three zones are defined, where one zone coincides with each of the binary resolution steps. Other numbers of zones and steps in resolution may be supported, but these three would be typical for this system and method of foveated display.
  • each transition point can be defined in terms of degrees VF from the center of gaze.
  • a transition from the peripheral low-resolution (macropixels of 4x4) to a mid-resolution (macropixels of 2x2) may be made at the boundary between the Perifovea and the Mid Peripheral (because its cone density is approximately 1/4, linearly, of the central region), which is about 9° from the center.
  • the system may add to that some tolerance to allow for eye tracking accuracy and latency. If that tolerance is 5°, the transition would be pushed out to ~14°.
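The zone-selection arithmetic above (acuity ratio to a binary macropixel ratio, plus a tracking tolerance added to the boundary angle) might be sketched as follows; the heuristic and numbers are illustrative, not prescribed by the text:

```python
import math

def macropixel_ratio(relative_linear_sensitivity: float) -> int:
    """Pick a binary macropixel ratio (1, 2, 4, ...) for a zone whose
    linear acuity has dropped to the given fraction of the central
    acuity (illustrative heuristic)."""
    return 2 ** max(0, math.ceil(math.log2(1 / relative_linear_sensitivity)))

def transition_angle(boundary_deg: float, tracking_tolerance_deg: float) -> float:
    """Push a zone-transition boundary outward by the eye-tracking
    accuracy/latency tolerance."""
    return boundary_deg + tracking_tolerance_deg

# Peripheral acuity ~1/4 of central -> 4x4 macropixels; a 9 deg boundary
# plus a 5 deg tracking tolerance -> transition at ~14 deg.
ratio = macropixel_ratio(0.25)       # -> 4
angle = transition_angle(9.0, 5.0)   # -> 14.0
```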
  • Display devices typically use a memory array structure to control the pixels of the display; each column in the display has a column data driver and each row has a write enable. Thus, whole rows of pixels can be written at once. Consequently, data from different zones that overlap the same row can be combined together before the row can be written.
  • Some display devices may have block enables to allow only a portion of a row to be written at a time, which may allow writing only the portion of the row matching a foveation zone, enabling direct zone-order writing. Yet, this would require additional time to write some rows multiple times to address all pixels in the row.
  • the block enable boundaries may put unacceptable constraints on foveation zone boundaries to make them align, which would make zones much larger and thus limit data bandwidth reduction.
  • Very low resolution zones will likely be unable to write each row-set in the short time available to receive their reduced data set, thus requiring padding/throttling of the lowest zones and reducing their benefit. For these reasons, block enables for partial row writing will have limited benefit. Some devices may still segment the rows and have multiple enables; the concepts in this system and method of foveated display still apply.
  • a header may be used at the beginning of a plane to define the total image or plane size, the size and location of each area-of-interest zone, as well as parameters controlling the order and packing of data.
  • This meta data could also be sent as side-band data outside the normal video data (i.e. command packets during the vertical blanking interval).
  • the header method at the beginning of video data could be packed as one parameter per pixel or overlaid as one bit of a parameter per pixel (the pixel data is all zeroes or all ones) so that it passes easily to the modulation plane format.
  • each word of data contains data of just one resolution from one zone (word size is defined by the physical interface to the display device and used by the host in formatting the packed data).
  • the number of words needed for each 4-row set varies, according to whether the rows are only in zone 3 or if they cross multiple zones.
  • the transmitted data size per row-set will be larger or smaller as the image is sent from top to bottom; there is no constant line size.
  • the display device can include the ability to write up to 4 rows of pixels at the same time from one short line of input macropixels; it can also be able to save and expand the low-resolution macropixels, mix them with high-resolution pixels for multiple rows, then self-time the write to each row.
  • the total image/display area may be divided into 3 zones, as shown below:
  • Some displays are for monochrome, some are simultaneous full color and others color sequential. Some displays inherently represent the full intensity range as a fractional steady-state intensity (i.e. analog displays or self-modulating digital displays); others are inherently digital and only drive to binary stable levels, which require frequent updates of modulation planes to modulate the intensity.
  • Displays often have a frame buffer memory for storing image data to then drive and illuminate the display using the data from the buffer; there are usually two buffers for ping-pong operation (during one frame one buffer is being written with new frame data, while the other buffer is being read to display the previous frame data; the next frame they swap read and write); however optimized designs benefit from a single buffer if the transport data format/protocol/bandwidth supports the display sequence. The most demanding of these on transfer and write timing is a color-sequential, binary display with a single frame buffer. The remaining description will focus on this configuration, but the foveation concepts in this system and method of foveated display can also be applied to the other display configurations.
  • FIG. 7 shows a timing diagram of transport and illumination, showing color-sequential images and planes with respect to buffer type, in accordance with some embodiments. As shown, timing diagrams are illustrated for various protocol options of these configurations. Specifically, the timing diagrams of Image Transport, Plane Transport, Display Write and Illumination are shown for different configurations of Frame Buffer, Image data-order and Display Type. All of these are represented as applied to a color-sequential illumination display. Particularly, a display and driver hardware
  • In a system with one frame buffer, the buffer should not be read while data is being written at the same time; doing so would cause new and old data to be mixed (or corrupted); the read must wait for the image write to finish.
  • For full-color pixel images, the entire image must finish before any CSF reads can start (as shown in the first two drawings).
  • the illumination usually starts soon after the modulation planes begin (the corresponding frame buffer data is usually read multiple times; once for each modulation plane).
  • the illumination is not activated while the display is being updated with the new CSF data. It is often desirable to utilize the illumination system with the highest duty factor (minimize the time when there is no active illumination).
  • the bandwidth of a plane or grayscale transport interface is usually much higher than the bandwidth of an image transport interface (because it is usually a short and wide chip-to-chip interface or an internal on-chip interface).
  • the host usually uses most of the frame time to transmit the image; for a system with dual frame buffers used in a ping-pong fashion (swap the write vs read buffer at each Vsync), there is no advantage to writing the image faster in a small portion of the frame. However, motion sensitive applications need faster response and would benefit from starting the buffer read sooner. If the host can send the image data in color-sequential format, then only one frame buffer is needed; one color is written to the frame buffer while the other colors are read & illuminated.
  • This provides high duty cycle illumination and low latency (time from transport start to matching start of illumination).
  • 3 CSF primary illumination cycles
  • nearly the whole frame time can be used to transmit the image.
  • Higher sets of CSF’s require the transmit time to be reduced to smaller fractions of the frame time and still fit in one frame buffer (i.e. image write must fit in the read time of 4 CSF’s); the reduced data of foveated protocols enables these without significantly increasing the bandwidth of the physical interface.
  • If the display device supports direct grayscale write and only 3 CSF’s are used, then no frame buffer is needed (the last diagram); this does require pauses in the image transmit between color sub-frames to allow for illumination.
  • the illumination duty cycle depends on the image transport time and frame rate; this can still be relatively high using foveated transport and foveated write. This is mostly applicable to systems with high frame rates or not motion sensitive (that can tolerate just 3 CSF’s).
  • Digital displays may receive modulation plane data (1 bit per pixel) repeated multiple times per frame or sub-frame with various pulse-style patterns to integrate to the desired grayscale. This protocol and method is aimed primarily at that modulation plane interface level although it can also be applied to displays that accept grayscale data per pixel and select their own pulse-style pattern and are still constrained by the internal array structure.
  • each pixel of the display is updated rapidly (by each modulation plane), e.g., at frequencies such as, for example, 10 kHz to 100 kHz, etc.
  • the pixel array is written row(s) at-a-time, using column drivers for each pixel of the row (shared among all/many rows), and a unique row strobe for each row.
  • the crucial timing constraint on writing to the display is the time to write one row and how many rows can be written at the same time. This method defines high- and low-resolution zones that easily merge and overlay onto writing this row structure, so that the entire active area is updated by each modulation plane.
  • the display device in accordance with the present system and method of foveated display, is used in an amplitude mode to provide a visible image in which each pixel on the display corresponds to one pixel in the image as viewed or projected (or to a small group of adjacent pixels).
  • the foveation technique enabled by the present disclosure may be used to achieve high spatial-frequency and/or temporal-frequency updates to a foveation region in the innermost zone, corresponding to the region of the image which the observer is, or is believed to be, observing most closely and high temporal-frequency updates to the lower resolution/spatial-frequency regions in the outer zones corresponding to the peripheral vision of the observer.
  • the high-sensitivity region or regions of an observer’s visual system may map to a substantially larger (in pixel count) region of the display.
  • CGHs Computer Generated Holograms
  • interference patterns and phase patterns
  • this pattern, or a version derived from it is displayed on the display device as a distribution of phase values, amplitude values, or a complex combination of phase and amplitude values, such that light diffracts or scatters or otherwise propagates from said displayed pattern to form, at some actual or optical distance from the display device, a second pattern corresponding to the desired image or an image from which the desired image can be obtained by further optical means.
  • some or all of the lower spatial-frequency content of the desired image is encoded or included on the displayed image within a region which is substantially or completely enclosed within a larger region of the displayed image within which some or all of the higher spatial-frequency content of the desired image is encoded or included.
  • the systems and methods of the present system and method of foveated display may beneficially be used to provide higher spatial-frequency and/or temporal-frequency updates to a region in the outermost zone or intermediate zones.
  • the region of the display which most beneficially can receive higher spatial-frequency and/or temporal- frequency updates may be an outer or the outermost region rather than the innermost region or inner regions.
  • the greatest visual sensitivity for motion, color, amplitude or other visual parameters
  • a foveated light modulation system comprising: a processor coupled to receive input image data and foveation zone definition data to generate a foveated image frame having header packet data that identifies a first zone having a first resolution and a second zone having a second resolution, and wherein the second resolution is less than the first resolution, and wherein at least one of the first zone and the second zone is compressed based on a macropixel ratio; a driver controller circuit coupled to receive the foveated image frame from the processor that generates modulation planes based at least in part on the foveated bit plane data; and a modulation device, having pixels comprising at least one macropixel, is coupled to the driver controller circuit that receives the modulation planes, and wherein each of the modulation planes is expanded based upon the header packet data.
  • a foveated light modulation system wherein for the second zone having the second resolution, a single bit in a modulation plane represents a macropixel of the modulation device, and the single bit is copied to a subset of the display pixels defined by the macropixel ratio.
  • a foveated light modulation system wherein the modulation device is a display.
  • a foveated light modulation system wherein the modulation device is an LCOS display.
  • the foveated light modulation system wherein said modulation device comprises a decode logic module coupled thereto that receives the modulation planes.
  • a foveated light modulation system wherein the modulation device further comprises raster logic, and wherein the decode logic module parses the header packet data and the raster logic.
  • a foveated light modulation system wherein the decode logic generates a dataset based on the header packet data and the macropixel ratios in the modulation planes.
  • a foveated light modulation system further comprising: tracking logic module coupled to the processor, wherein the tracking logic module senses retina gaze data and head position data of a user and generates foveated zone data corresponding to the sensed retina gaze data and head position data.
  • a foveated light modulation system wherein the processor further comprises: a foveated rendering module that couples to the tracking logic module and receives the sensed retina gaze data and head position data and determines the size and location of each zone using a foveated rendering algorithm.
  • a foveated light modulation system wherein the foveation rendering module determines the size and location of each zone using a foveated rendering algorithm based on at least one of total field-of-view data, optical system distortion data, fovea acuity data, tolerance of tracking logic data, latency of tracking logic data, and rate of motion data.
  • header packet data comprises: a resolution-order toggle bit enabling a center-detail mode and a periphery-detail mode, wherein when the center-detail mode is active, the foveated image frame includes a plurality of concentric zones comprising the first and second zones having a zone of highest resolution located at a center of a user fixation point, whereby resolution of a zone adjacent to at least one of the first zone and the second zone is lower than full resolution of the other adjacent zone by a predetermined value, and the resolution of one of the concentric zones decreases in descending order away from the user fixation point, and wherein when the periphery-detail mode is active, the foveated image frame includes a plurality of concentric zones having a zone of highest resolution located at a periphery of the plurality of concentric zones, whereby resolution of the second zone is lower than full resolution than the first zone by a
  • a transmission-mode toggle bit enabling a raster-order mode and a zone- order mode, wherein when the raster-order mode is active, data transmission comprises a plurality of line-sets representing rows of data from the plurality of concentric zones corresponding with a display order of the original image, and wherein when the zone-order mode is active, data transmission comprises a sending of each one of the plurality of concentric zones in its entirety before data transmission of an adjacent zone is sent; a zone number segment defining number of the plurality of concentric zones; a zone-size segment defining horizontal and vertical size of each one of the plurality of concentric zones; a zone- offset segment defining horizontal and vertical offset associated with each one of the plurality of concentric zones; and a plurality of display parameters.
  • a foveated light modulation system wherein the plurality of display parameters comprises: a word-size segment defining a plurality of pixels-bits transferred relative to a clock cycle associated with the driver controller circuit; an x-offset size segment defining a plurality of pixels-bits per horizontal offset Least Significant Bit (LSB); a line-set size segment defining a maximum number of rows to be simultaneously written; a row-time segment defining a plurality of clocking segments required to write a row; and a dual column-drive mode indicator enabling simultaneous writing of two rows.
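The parameter list above could be collected into a header structure like the following sketch; the field names, types, and example values are assumptions for illustration, not the actual packet layout:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FoveatedHeader:
    """Illustrative header based on the parameters listed above."""
    center_detail: bool                  # resolution-order toggle bit
    raster_order: bool                   # transmission-mode toggle bit
    zone_sizes: List[Tuple[int, int]]    # (Hsize, Vsize) per concentric zone
    zone_offsets: List[Tuple[int, int]]  # (Hoffset, Voffset) per zone
    word_size: int                       # pixel-bits transferred per clock cycle
    x_offset_size: int                   # pixel-bits per horizontal offset LSB
    line_set_size: int                   # max rows simultaneously written (LSS)
    row_time: int                        # clocks required to write one row
    dual_column_drive: bool              # simultaneous two-row writing

    @property
    def num_zones(self) -> int:
        return len(self.zone_sizes)

# Hypothetical three-zone configuration:
hdr = FoveatedHeader(True, True,
                     [(960, 540), (1440, 810), (1920, 1080)],
                     [(480, 270), (240, 135), (0, 0)],
                     32, 1, 4, 60, False)
```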
  • LSB Least Significant Bit
  • a method of producing a foveated image on a display screen comprising:
  • a method wherein the foveated image frame identifies two or more concentric zones of differing resolution, whereby each of the concentric zones is defined by a plurality of macropixels and a corresponding macropixel ratio.
  • a method, wherein the at least one grayscale device and modulation device comprises decode logic and raster logic.
  • a method of claim, wherein at least one grayscale device and modulation device comprises pixels, and further comprising: producing a foveated image upon at least some of the pixels of the at least one grayscale device and modulation device based on the header packet data and each corresponding macropixel ratio utilizing the raster logic.
  • a method wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratio associated with each zone.
  • a method, wherein the receiving image data comprises: receiving tracking data based upon retina and at least one of head gaze direction data and position/location data of a user in real-time.
  • a method wherein the foveated image data is generated based on the input image data, the foveation zone parameters, and the tracking data.
  • a method, wherein generating a rendered foveated image comprises: generating image macropixels using 3D to 2D rendering techniques of foveated rendering or other foveated rendering techniques as known in the industry to represent projected text or graphics in the foveated image space, using the input image data, observer gaze direction data and vantage point data and the foveation zone definition parameters.
  • a method, wherein the generating a foveated image frame comprises: generating header packet data based upon the foveation zone parameters, the selected protocol parameters and display device capabilities; and encapsulating the rendered foveated image data with the header packet data to form a foveated image frame.
  • a method, wherein the transmitting the foveated image frame comprises:
  • transmitting the foveated image frame to a driver controller circuit; generating foveated bit plane data; converting the foveated bit plane data into modulation planes based upon an associated modulation scheme and the header packet data; and transmitting the modulation planes to one or more modulation devices having foveated modulation plane raster logic coupled to a display circuit having an array of pixels or transmitting the foveated image frame to a grayscale display device.
  • a method, wherein the producing of the foveated image upon the array of display pixels comprises: parsing the header packet data and foveated data from the foveated image frame or foveated modulation plane; translating a modulation plane of the foveated data into foveated display data; and applying a corresponding binary value of the foveated display data representing the line set to each sub-set of the array of pixels associated with each foveated zone; and repeating the translating and applying until each line-set of foveated data is displayed.
  • a method, wherein the detecting the enabling of a raster-order mode and a zone order mode comprises: parsing the header packet data to detect a transmission mode toggle bit; and detecting whether the transmission-mode toggle bit is set to enable the raster-order mode and the zone-order mode, and detecting, in response to no detected raster-order mode, respective zone data of the incoming data based upon number of zones, horizontal-zone-size, line-set-size, word-size and x-offset-size.
  • a method, wherein the writing in response to the enabled zone-order mode comprises: writing, in response to no detected raster-order mode, respective zone data of the incoming data into a respective zone buffer (Z(n-1), ..., Z3, Z2, Z1, Z0); waiting, in response to no detected raster-order mode, for a read delay of predetermined time; and toggling, in response to end of read delay, a read pointer corresponding to each respective zone buffer.
  • a method, wherein the identifying whether data from a respective zone exists comprises: parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, wherein row-time is the number of clocks required for writing a row based upon the display circuit; selecting a row of the line-set of data; detecting whether a high resolution zone (Z0) of a plurality of concentric zones is present in the row based upon header packet data; and detecting, in response to no detected presence of the high resolution zone, whether a next consecutive zone (Z1, Z2, Z3, ..., Z(n-1)) is present in the identified line-set of data until one zone is detected.
  • a non-transitory computer-readable medium including code for performing a method, the method comprising: receiving image input data; receiving tracking data based upon retina location of user in real-time; generating header packet data based upon the image input data and the tracking data to define foveated zone information; encapsulating the image input data within the header packet data to form a foveated image frame; transmitting the foveated image frame to one or more modulation devices each having an array of pixels; converting the foveated image frame into modulation planes; parsing the header packet data and foveated image data from each modulation plane; translating a modulation plane of foveated image data into foveated display data; and applying a corresponding binary value of the foveated display data representing the line set to each sub-set of the array of pixels associated with each foveated zone.
  • a computer-readable medium wherein the translating a modulation plane of foveated image data comprises: parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, wherein row-time is the number of clocks required for writing a row based upon the display circuit; detecting whether the transmission-mode toggle bit is set to enable a raster-order mode; detecting, in response to no detected raster-order mode, respective zone data of the incoming data based upon number of zones, horizontal-zone-size, line-set-size, word-size and x-offset-size;
  • n number of zones, when respective zones {Z0, Z1, Z2, Z3} are detected; retrieving a next line-set of data; and repeating the detecting, expanding, storing, transferring, and retrieving until every line-set of data is retrieved.
  • a computer-readable medium wherein the expanding data, in response to a detected zone, based upon the number of zones, horizontal zone-size, line-set-size, row-time, and word-size comprises: identifying a left-side segment and a right-side segment of respective zone data; writing the left-side segment into a row buffer shifted by x-offset-size corresponding to the respective zone data, wherein the left-side segment is written 2(r-1) times; adding horizontal zone-size and x-offset-sizes of each respective zone data to define a right-side pointer for each right-side segment of the respective zone data; writing the right-side segment into the row buffer shifted by the right-side pointer of the respective zone data; storing the row buffer into a row queue; and repeating the identifying, writing, adding, writing, and storing for each row of the respective zone data.
  • the term “and/or” and the “/” symbol include any and all combinations of one or more of the associated listed items.
  • the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
  • the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Therefore, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
  • the embodiments also relate to a device or an apparatus for performing these operations.
  • the apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer.
  • various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
  • a module, an application, a layer, an agent or other method-operable entity could be implemented as hardware, firmware, or a processor executing software, or combinations thereof.
  • a controller could include a first module and a second module.
  • a controller could be configured to perform various actions, e.g., of a method, an application, a layer or an agent.
  • the embodiments can also be embodied as computer readable code on a non-transitory computer readable medium.
  • the computer readable medium is any data storage device that can store data, which can be thereafter read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, flash memory devices, and other optical and non-optical data storage devices.
  • the computer readable medium can also be distributed over a network coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
  • Embodiments described herein may be practiced with various computer system configurations including hand-held devices, tablets, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like.
  • the embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.
  • one or more portions of the methods and mechanisms described herein may form part of a cloud-computing environment.
  • resources may be provided over the Internet as services according to one or more various models.
  • models may include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
  • SaaS typically includes a service provider licensing software as a service on demand. The service provider may host the software, or may deploy the software to a customer for a given period of time. Numerous combinations of the above models are possible and are contemplated.
  • unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on).
  • units/circuits/components used with the “configured to” language include hardware; for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. 112, sixth paragraph, for that unit/circuit/component.
  • “configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
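The segment-expansion claims above (a left-side segment and a right-side segment written into a row buffer at zone-dependent offsets) can be illustrated with a minimal sketch. The function name, parameter names, and list-based data layout below are assumptions made for illustration only; they are not the claimed implementation.

```python
def expand_zone_row(left_seg, right_seg, x_offset, zone_hsize, ratio, width):
    """Expand one row of an outer zone's macropixel bits into a
    full-resolution row buffer. Each macropixel bit is replicated
    `ratio` times horizontally; the gap between the two segments is
    left untouched because it belongs to an inner, higher-resolution
    zone whose data is written separately."""
    row = [None] * width
    # Left-side segment: written starting at the zone's x-offset.
    pos = x_offset
    for bit in left_seg:
        for _ in range(ratio):
            row[pos] = bit
            pos += 1
    # Right-side pointer: x-offset plus horizontal zone size, minus the
    # expanded width of the right-side segment.
    pos = x_offset + zone_hsize - len(right_seg) * ratio
    for bit in right_seg:
        for _ in range(ratio):
            row[pos] = bit
            pos += 1
    return row
```

For example, with a 16-pixel-wide zone at ratio 2, two macropixel bits per segment expand to four display pixels on each side, leaving the center eight pixels for the inner zone's own data.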

Abstract

Systems and methods of foveated light modulation may take advantage of the eye's exponential reduction in acuity/resolution as it moves outward from the center of gaze by providing a plurality of display zones (Zone 0, 1, 2, … N). Particularly, the system may include a processor that couples to receive image data relating to an image and foveation zones to generate a foveated image frame having header packet data, which identifies two or more zones of differing resolution. Each zone may be defined by a plurality of macropixels, having a corresponding macropixel ratio. The system may further include a driver controller circuit that couples to receive the foveated image frame to generate modulation planes. One or more modulation devices may couple to receive the modulation planes to generate an expanded dataset based upon the corresponding macropixel ratio, such that the foveated image is output to a display.

Description

APPARATUS, SYSTEMS, AND METHODS
FOR FOVEATED DISPLAY
BACKGROUND
[0001] Software imaging applications that employ eye tracking can concentrate the pixel density for a display device within an area-of-interest, where the center region coincides with the user’s eye gaze direction, and can generate zones around the area of interest at lower resolutions for the surrounding area of the eye’s periphery. This is known as foveated imaging or foveated rendering. Even some applications without eye tracking that use a Head Mounted Display (HMD) can utilize foveated imaging to concentrate the pixel density in the center of the display and use lower resolutions to fill the periphery of the Field of View (FOV) in the display. By decreasing image detail and/or resolution at the periphery, foveated imaging can reduce the number of pixels to be processed and transmitted, resulting in faster processing by the host and ideally faster imaging at the display device.
[0002] Current display devices, however, usually require a video source with constant resolution, where the same number of pixels per row must exist upon every row. In addition, in order to match the capabilities of the display device, a host must transform the foveated image into a constant resolution image; and then, transmit this image to the display device using a standard video protocol. This means the lower resolution areas in the periphery are scaled up or replicated to match the high-resolution area in the center. Therefore, although foveated imaging reduces the processing requirements on the host, the corresponding savings at the display device are not realized. Further, the bandwidth required to transmit an image to the display device remains high, while the time to write to the pixel array within the display device remains substantially the same. Some systems reduce the image transport bandwidth by having the host add compression hardware/algorithms, which requires the display to add decompression capability. Although there are several forms of compression that may be used to reduce the data bandwidth between the host and the display driver, these forms of compression do not reduce the data transmit bandwidth between the driver and the display device or reduce the processing time amongst the internal components of the display device during the writing of data to the pixel array, which may limit the maximum frame rate, maximum bit-depth and/or maximum array size.
[0003] Some systems attempt to mimic a form of foveated imaging by splitting the video into two channels going to different display devices: one channel for the high-resolution area-of-interest and the other channel for the lower resolution periphery/background. These systems accomplish this by mechanically steering the projection optics for the high-resolution area-of-interest image in the direction of the gaze.
[0004] Conventional display devices without foveation capability generally accept a single homogeneous resolution to cover the entire display, typically a 1-to-1 match of the display’s physical layout (X horizontal pixels by Y vertical pixels). Even if the display can be configured to operate in data scaling or replication modes to fill the display with a lower input resolution, the input resolution is still constant across the image (e.g., a mode to map an input with ¼ of the native area resolution wherein each input pixel is represented as a square of 4 display pixels of the same color). Thus, conventional systems with a foveated rendering host connected to non-foveation capable displays must configure the host to transmit the highest resolution matching the display; replicate the low-resolution areas to match the display resolution; and, then, transmit high resolution data for the entire image/display.
[0005] As a disadvantage, conventional systems of foveated imaging lack efficiency and effectiveness for various reasons. First, conventional foveation systems and methods include redundant data that wastes bandwidth, limits frame update rates, and limits native bit-depth and/or maximum pixels per display. Second, since the zone size and offset parameters are not included in the video data, the use of a static display resolution/configuration does not enable the updating of the foveated image in real-time, on a frame-by-frame basis as the gaze point changes. Third, standard video protocols do not define a mixed resolution frame nor handshake foveation parameters of the display hardware capabilities. Finally, multiple row writing of the same data is not supported. In some cases, multiple row writing causes timing errors and/or local over-loading for replications of more than 2-to-1.
[0006] It is within this context that the embodiments arise.
SUMMARY
[0007] Embodiments of an apparatus, system, and method for foveated display are provided. It should be appreciated that the present embodiment can be implemented in numerous ways, such as a process, an apparatus, a system, a device, or a method. Several inventive embodiments are described below.
[0008] In some embodiments, a system for foveated display is provided. Foveated rendering is a type of image processing or image generation that takes advantage of the eye’s exponential reduction in acuity/resolution moving outward from the center of the retina (a user’s gaze direction) to the outer periphery of an image by providing a plurality of display zones (e.g., Zone 0, Zone 1, Zone 2, Zone 3, etc.). The system of foveated display described herein provides a unique method and protocol of encapsulation of a foveated image frame and an innovative way of processing a foveated write for displaying the foveated image upon a display device. In particular, a system may include a processor that couples to receive input image data and foveation zone definitions to generate a rendered foveated image, which is then processed using a selected protocol, in accordance with the present invention, into a foveated image frame having both image data and header packet data, which identifies two or more zones of differing resolution. Each zone may be defined by a plurality of macropixels, having corresponding macropixel ratios. For example, in a foveated image having three zones, the first zone Z0 may have horizontal and vertical ratios of 1 to 1, while the second and third zones (Z1 and Z2) may have respective macropixel ratios of 1 to 2, and 1 to 4 (where a Z1 macropixel is a 2x2 matrix of display pixels and a Z2 macropixel is a 4x4 matrix of display pixels). The system may further include a driver controller circuit that couples to receive the foveated image frame to generate foveated bit plane data and convert the bit plane data into modulation planes. One or more modulation devices may couple to receive the modulation planes to generate an expanded dataset, such that the foveated image is produced upon a display or output to the display.
During the expansion of the modulation plane data for zones having decreased resolution, a single bit of an associated plurality of macropixels may be copied to a set of display pixels based upon the corresponding macropixel ratios associated with the zone.
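As a simple illustration of this expansion step, the following sketch copies each macropixel bit of a modulation plane into a square block of display pixels according to the zone's macropixel ratio. The function name and list-of-lists representation are illustrative assumptions, not the claimed hardware logic.

```python
def expand_macropixels(plane, ratio):
    """Copy each macropixel bit into a ratio x ratio block of display
    pixels, e.g., ratio 2 for Z1 (2x2 blocks) or ratio 4 for Z2 (4x4
    blocks); ratio 1 (Z0) leaves the data unchanged."""
    expanded_rows = []
    for macro_row in plane:
        # Replicate each bit horizontally, then the whole row vertically.
        display_row = [bit for bit in macro_row for _ in range(ratio)]
        expanded_rows.extend([list(display_row) for _ in range(ratio)])
    return expanded_rows
```

For example, `expand_macropixels([[1, 0]], 2)` yields two identical display rows `[1, 1, 0, 0]`, i.e., each transmitted bit drives a 2x2 block of display pixels.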
[0009] In some embodiments, a method and protocol of foveated display is provided. The method may include receiving image data relating to the image and foveation zones. For example, a processor may receive image input data and tracking data, based upon retina and/or head location of a user in real-time. In response, the method may further include generating a foveated image frame based upon the image data, wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is compressed being defined by a plurality of macropixels and a corresponding macropixel ratio. The method may further include transmitting the foveated image frame to one or more modulation devices having raster logic coupled to a display circuit including an array of pixels. For example, the foveated image frame having header packet data may be sent to a driver controller circuit that generates foveated bit plane data and converts the same into modulation planes based upon a modulation scheme and header packet data. The modulation planes may be sent to the one or more modulation devices. Next, the method may include writing the modulation plane data to the array of display pixels based upon the header packet data and the corresponding macropixel ratios using the raster logic, wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratios associated with each zone.
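The encapsulation step described above can be sketched as follows. The header fields mirror parameters named elsewhere in this disclosure (number of zones, horizontal-zone-size, x-offset-size, line-set-size, macropixel ratio), but the specific field order, names, and Python types are illustrative assumptions, not a defined byte format.

```python
from dataclasses import dataclass

@dataclass
class ZoneDescriptor:
    hsize: int     # horizontal-zone-size, in display pixels
    x_offset: int  # x-offset-size of the zone's left edge
    ratio: int     # macropixel ratio (1 = native resolution)

def make_header(zones, line_set_size):
    """Build header packet data describing the foveated zones; the
    foveated image frame is then this header followed by the per-zone
    macropixel payload."""
    header = [len(zones), line_set_size]
    for z in zones:
        header += [z.hsize, z.x_offset, z.ratio]
    return header
```

For a hypothetical two-zone frame, `make_header([ZoneDescriptor(512, 768, 1), ZoneDescriptor(1024, 512, 2)], 4)` produces a flat header the receiving raster logic could parse to locate and expand each zone.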
[0010] In some embodiments, a tangible, non-transitory, computer-readable medium is provided having instructions which, when executed by a processor, cause the processor to perform the foveated display method described herein. The foveated display method may include receiving image data relating to the image and foveation zones. For example, a processor may receive image input data and tracking data, based upon retina and/or head location of a user in real-time, to produce a rendered foveated image as is done in the industry according to foveated rendering methods. In response, the method may further include generating a foveated image frame based upon the rendered foveated image data and the selected transmit protocol, wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is defined by a plurality of macropixels and corresponding macropixel ratios. The method may further include transmitting the foveated image frame to one or more modulation devices and decoding the frame into an expanded dataset for controlling a display circuit, having an array of pixels. For example, the foveated image frame, having header packet data, may be used to generate foveated bit plane data, which is converted into modulation planes based upon a modulation scheme and header packet data. The modulation planes may be sent to the one or more modulation devices and decoded into the expanded dataset. Next, the method may include writing the modulation plane data to the array of display pixels based upon the expanded dataset, using the header packet data and each corresponding macropixel ratio with the raster logic; wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratios associated with each zone.
[0011] In some embodiments, a foveated display device is provided. The foveated display device may include a driver controller circuit coupled to receive a foveated image frame to generate modulation planes, wherein the foveated image frame includes header packet data identifying two or more zones of differing resolution and wherein each zone is defined by a plurality of macropixels and corresponding macropixel ratios. Further, the foveated display device may include one or more modulation devices coupled to receive the modulation planes, such that a foveated image is produced upon a display, wherein, for zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a set of display pixels based upon the corresponding macropixel ratios associated with the zone.
[0012] Other aspects and advantages of the embodiments will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the described embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The described embodiments and the advantages thereof may best be understood by reference to the following description taken in conjunction with the accompanying drawings. These drawings in no way limit any changes in form and detail that may be made to the described embodiments by one so skilled in the art without departing from the spirit and scope of the described embodiments.
[0014] FIG. 1 A is a system diagram of a foveated electromagnetic radiation modulation system, having processor circuitry, display driver circuitry and foveated modulation plane device circuitry, in accordance with some embodiments.
[0015] FIG. 1B is a system diagram of a foveated electromagnetic radiation modulation system, having processor circuitry and foveated grayscale device circuitry, in accordance with some embodiments.
[0016] FIG. 2A is a flow diagram of a method for foveated image frame generation by processor circuitry 110 as in FIG. 1A in accordance with some embodiments.
[0017] FIG. 2B is a flow diagram of a method for foveated modulation plane generation by display driver circuitry 120 as in FIG. 1A in accordance with some embodiments.
[0018] FIG. 2C is a flow diagram of a method or process for writing foveated modulation plane data to a display pixel array by a foveated modulation display device 130 as in FIG. 1A, in accordance with some embodiments.
[0019] FIG. 2D is a flow diagram of a method or process for writing the foveated image frame data to the array of display pixels of a foveated grayscale display device 162 as in FIG. 1B, in accordance with some embodiments.
[0020] FIG. 3A is a flow diagram of a method or process for writing a modulation plane of foveated data into the pixel array of a foveated modulation display device 130 of FIG. 1A during a center-detail mode of operation, wherein four zones exist in accordance with some embodiments.
[0021] FIG. 3B is a flow diagram of a method or process for writing a modulation plane of foveated data into the pixel array of a foveated modulation display device 130 of FIG. 1A during a periphery-detail mode of operation, wherein four zones exist in accordance with some embodiments.
[0022] FIG. 4A is a multiple level block diagram of the expansion of the foveation data in the method of FIG. 3A, showing contents of a foveation data block and the contents of a row buffer, in accordance with some embodiments.
[0023] FIG. 4B is a multiple level block diagram of the continuation of the expansion of the foveation data of FIG. 4A, in accordance with some embodiments.
[0024] FIG. 4C is a multiple step diagram of the continuation of the expansion of the macropixel data of FIG. 4A.
[0025] FIG. 5 illustrates a timing diagram of a Zone-order frame or plane, showing the zone buffer write and read sequences according to one embodiment of the present disclosure.
[0026] FIG. 6 is an illustration showing an exemplary computing device, which may implement some of the embodiments described herein.
[0027] FIG. 7 illustrates a timing diagram of transport illumination, showing color sequential images and planes with respect to buffer type, in accordance with some embodiments.
[0028] FIG. 8A illustrates a data format of foveated image in host memory, in accordance with some embodiments.
[0029] FIG. 8B illustrates a data format for image frames or modulation planes, where the data is sent via Zone-order with zone pad, in accordance with some embodiments.
[0030] FIG. 8C illustrates a data format for image frames or modulation planes, where the data is sent via Zone-order with row pad, in accordance with some embodiments.
[0031] FIG. 8D illustrates a data format for image frames or modulation planes, where the data is sent via Raster-order and zone-set-order with per-zone and line-set row-timing padding, in accordance with some embodiments.
[0032] FIG. 8E illustrates a data format for image frames or modulation planes, where the data is sent via Raster-order and JIT-order with per-zone and line-set row-timing padding, in accordance with some embodiments.
[0033] FIG. 9A illustrates the physical layout of column multi-driver configurations, in accordance with some embodiments.
[0034] FIG. 9B illustrates the number of row write times required for the multi-driver arrangements of FIG. 9A for certain line-set conditions and simultaneous groupings, in accordance with some embodiments.
[0035] FIG. 10 illustrates the diameter, visual field width and cone density of various regions of the human eye relative to the fovea, as available from public sources.
DETAILED DESCRIPTION
[0036] The following embodiments describe apparatus, systems, and methods of foveated display. It can be appreciated by one skilled in the art, that the embodiments may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the embodiments.
[0037] In some embodiments, a system for foveated display is provided. Foveated rendering is a type of image processing or image generation that takes advantage of the eye’s exponential reduction in acuity/resolution moving outward from the center of the retina (a user’s gaze direction) to the outer periphery of an image by providing a plurality of display zones [e.g., Zone 0 (Z0), Zone 1 (Z1), Zone 2 (Z2), Zone 3 (Z3), and the like]. The system of foveated display described herein provides a method, system, and protocol of encapsulation of a foveated image frame and an innovative way of processing a foveated write for displaying the foveated image upon a display device. The foveated write is unique to this system and method of foveated display and solves the final bandwidth bottleneck. In particular, a system may include a processor that couples to receive input image data and foveation zone definition data to generate a rendered foveated image, which is then processed using a selected protocol, in accordance with the present invention, into a foveated image frame having both image data and header packet data, which identifies two or more zones of differing resolution. Each zone may be defined by a plurality of macropixels, having corresponding macropixel ratios. For example, in a foveated image having three zones, the first zone Z0 may have horizontal and vertical ratios of 1 to 1, while the second and third zones (Z1 and Z2) may have respective macropixel ratios of 1 to 2, and 1 to 4 (where a Z1 macropixel is a 2x2 matrix of display pixels and a Z2 macropixel is a 4x4 matrix of display pixels). The system may further include a driver controller circuit that couples to receive the foveated image frame to generate foveated bit plane data and convert the bit plane data into modulation planes. One or more modulation devices (e.g.,
a display or a liquid crystal-on-silicon (LCoS) display) may couple to receive the modulation planes to generate an expanded dataset, which is written to the pixel array such that the foveated image is produced upon a display. During the expansion of the modulation plane data for zones having decreased resolution, a single bit of an associated plurality of macropixels may be copied to a set of display pixels based upon the corresponding macropixel ratios associated with the zone.
[0038] In some embodiments, a method and protocol of foveated display is provided. The method may include receiving image data relating to the image and foveation zones. For example, a processor may receive image input data and tracking data, based upon retina and/or head location of a user in real-time. In response, the method may further include generating a foveated image frame based upon the image data, wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is compressed, being defined by a plurality of macropixels and a corresponding macropixel ratio. The method may further include transmitting the foveated image frame to one or more modulation devices having raster logic coupled to a display circuit including an array of pixels. For example, the foveated image frame having header packet data may be sent to a driver controller circuit that generates foveated bit plane data and converts the same into modulation planes based upon a modulation scheme and header packet data. The modulation planes may be sent to the one or more modulation devices.
Next, the method may include outputting the foveated image to the array of display pixels based upon the header packet data and each corresponding macropixel ratio using the raster logic, wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratio associated with each zone.
[0039] This system and method of foveated display offers two protocol approaches for Foveated Transport: Zone-Order and Raster-Order. Each approach can have multiple formats to define specific protocol standards (more detail presented below). This system and method of foveated display uses two protocol stages: an image frame interface (per-pixel data between the host and the display driver; i.e., 24 bits/pixel) and a modulation plane interface (discrete data between the driver and the display; i.e., 1 bit/pixel). The system of foveated display disclosed herein applies the protocols to two applications: center-detail and periphery-detail. Further, this system of foveated display provides multiple foveated write embodiments: single-column, dual-column, quad-column, and the like.
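The two protocol stages can be related by standard bit-plane slicing: multi-bit pixel data at the image frame interface is decomposed into 1-bit planes for the modulation plane interface. The sketch below is a generic illustration of that decomposition; the function name and list layout are assumptions, not the driver controller circuit's actual implementation.

```python
def to_bit_planes(gray_rows, bit_depth):
    """Slice multi-bit grayscale rows (image frame interface, e.g., 8
    bits per color channel of a 24-bit pixel) into single-bit planes
    (modulation plane interface, 1 bit/pixel), least-significant plane
    first."""
    return [
        [[(pixel >> b) & 1 for pixel in row] for row in gray_rows]
        for b in range(bit_depth)
    ]
```

For example, `to_bit_planes([[5]], 3)` gives planes `[[[1]], [[0]], [[1]]]`, since 5 is 0b101: one plane per bit of depth, each sized like the source rows.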
[0040] Advantageously, this system and method of foveated display proposes changing the display device to accept a novel foveated protocol; and thus, realize the savings of reduced transmit bandwidth and reduced write time to the display device’s pixel array.
Additionally, the system and method of foveated display described herein enables higher video rates, higher bit depths and/or higher display resolutions. Furthermore, the system and method of foveated display proposes a few discrete constraints for the foveated processing to match or ease the implementation in the display device. In addition, this system and method of foveated display improves upon existing systems by keeping the zone shapes in rectangular regions, matched to the physical array structure of the display device’s pixels (projection onto the visual field may make these regions non-rectangular/non-linear due to optical characteristics). Depending on the relative sizes of each zone, the foveated image processing method of the present invention, described herein, typically results in a reduction in transmitted image or image frame data of one sixth- to one-tenth (1/6 to 1/10) of the total display pixels. For example, a 2Kx2K display with 4 Megapixels may only need 0.5
Megapixels of input macropixels in the foveated image frame.
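The reduction cited above can be checked with simple arithmetic. The three-zone layout below is a hypothetical example on a 2048x2048 display; the zone sizes and ratios are assumptions chosen for illustration, not values fixed by this disclosure.

```python
def macropixel_count(zones):
    """Count transmitted macropixels for concentric rectangular zones.
    Each zone is (width, height, ratio), ordered inner (Z0) to outer;
    the area already covered by inner zones is excluded before dividing
    by the ratio squared (one macropixel covers ratio x ratio pixels)."""
    total, inner_area = 0, 0
    for w, h, ratio in zones:
        area = w * h - inner_area  # exclude the higher-resolution hole
        total += area // (ratio * ratio)
        inner_area = w * h
    return total

# Hypothetical layout: Z0 native, Z1 at 1-to-2, Z2 at 1-to-4.
zones = [(512, 512, 1), (1024, 1024, 2), (2048, 2048, 4)]
native_pixels = 2048 * 2048                # 4,194,304 display pixels
foveated_pixels = macropixel_count(zones)  # 655,360 macropixels
```

With this layout the foveated frame carries about 0.63 Mpixels instead of 4 Mpixels, a reduction of 6.4x, consistent with the one-sixth to one-tenth range noted above.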
[0041] In the following description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
[0042] Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
[0043] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “retrieving,” “generating,” “detecting,” “writing,” “translating,” “receiving,” “expanding,” “parsing,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0044] The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
[0045] Reference in the description to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The phrase “in one embodiment” located in various places in this description does not necessarily refer to the same
embodiment. Like reference numbers signify like elements throughout the description of the figures.
[0046] Referring to FIG. 1A, a system diagram of a foveated electromagnetic radiation modulation system, having foveated light modulation, in accordance with some embodiments is shown. Foveated rendering is a type of image processing or image generation that takes advantage of the eye’s exponential reduction in acuity/resolution moving outward from the center of the retina (a user’s gaze direction) to the outer periphery of an image by providing a plurality of display zones (e.g., zones Z0-Z3). In an embodiment of the present invention the zones may be rectangular in shape. However, it should be understood by one of ordinary skill in the art that the shape of the zones may vary. The system of foveated display described herein provides a method and protocol of encapsulation of a foveated image frame and a way of processing a foveated write for displaying the foveated image upon a display device. In particular, the foveated electromagnetic radiation modulation system 100 (e.g., a foveated light modulation system) may include a processor (central processing and/or graphics processing circuitry) 110, driver controller circuitry 120, one or more modulation devices 130 and a display 146. The processor circuitry 110, having a zone definition module 111, a foveated rendering module 112, foveated image memory 114 and image protocol encode logic 115, is generally configured to receive input from one or more input data sources 105 to generate a foveated image frame 103, having header packet data that defines two or more concentric zones of differing resolution, wherein each zone is defined by a plurality of macropixels and corresponding macropixel ratios: one ratio for the horizontal direction and another for the vertical direction. In an embodiment of the present invention, each macropixel ratio is an integer. In some embodiments, the input from the one or more input data sources 105 includes one or more images and associated foveated zone data.
The term “foveated image” as used herein, generally means an image or video frame that is divided into two or more resolution zones. In some embodiments, the processor 110 may be included within a host computer (not shown), whereby the host computer sends the foveated image frame 103 to the driver controller circuitry 120 associated with the display 146. The driver controller 120 is generally configured to control modulation device circuitry 130 such that the foveated image is output to the display 146 based on input data 105 (e.g., image data). Of course, it should be understood that the driver circuitry 120 and/or processor circuitry 110 may also include other known and/or proprietary circuitry and/or logic structures, including for example, frame buffer memory/cache, timing circuitry, vertical/horizontal scan line circuitry, processor circuitry, and the like. The foveated image stored in memory 114 (for example a projected foveated image or a direct view foveated image), which is output to the display and/or produced or rendered upon display 146, may include a plurality of resolution zones Z0, Z1, Z2, Z3, ... ZN, where each zone has a differing resolution. For example, as shown, the zones may be generated using a center-detail mode, in accordance with the present invention, where the zone having the highest resolution Z0 tracks the user’s gaze direction and the other zones (Z1, Z2, Z3 ... ZN) have a lower resolution than zone Z0, such that the resolution of each zone decreases in descending order away from the user fixation point. That is, the resolution of the second zone Z1 is lower than that of zone Z0; the resolution of the third zone Z2 is lower than that of zone Z1; the resolution of the fourth zone Z3 is lower than that of zone Z2; and the like.
As would be appreciated by those skilled in the art, the foregoing description of four zones is provided only as an example, and the numbering of the zones, and/or the size of the zone relative to the adjacent zone may vary. The teachings of the present disclosure may be equally applied to a system having N number of foveated image zones. The display 146 in accordance with the present system and method of foveated display may be an amplitude and/or phase display. Applications for the foveated light modulation system 100 of the present disclosure may generally include, for example, target applications such as holography for heads-up displays (HUDs), head-mounted displays (HMDs) for augmented reality (AR), mixed reality (MR) or virtual reality (VR), etc. Of course, these applications are provided only as examples, not as a limitation of the present disclosure. In an embodiment of the present invention, the data in a macropixel is no different than the data in a pixel. In an embodiment of the present invention, when the foveated rendering module 112 creates pixels and puts them in an image memory, they may be, for example, 24 bits (e.g., full color bits). Macropixels correspond to multiple display pixels, and the image rendering process creates pixels that can also be referred to as macropixels. The encoding module 115 does not create the macropixels, but rather rearranges them to an order indicated by a selected protocol or a particular protocol.
Modulation plane pixels are a single bit each, or correspond to a single bit.
[0047] In an embodiment of the present invention the zones may be rectangular in shape, which correlates with the row-column structure of the display’s pixel array. However, it should be understood by one of ordinary skill in the art that the shape of the zones may vary according to the structure and features of the foveated display to facilitate copying of the data according to the macropixel ratios. Furthermore, in an embodiment of the present invention the macropixel ratios in both the horizontal and vertical dimensions are integers that allow the copying of macropixel data to whole and/or individual display pixels. It would be understood by one skilled in the art that grayscale devices may allow scaling of pixel data or otherwise filtering/processing pixel values as the macropixel value is applied or written to multiple or other display pixels.
[0048] The foveated image frame 103, generated by the processor circuitry 110, generally includes foveation zones and may be encapsulated using known and/or proprietary image transport protocols (e.g., Display Serial Interface (DSI) from the Mobile Industry Processor Interface (MIPI) Alliance, High-Definition Multimedia Interface (HDMI), DisplayPort, and the like). To enable communication with a display, foveated display or foveated image capable display device, the processor circuitry 110 may embed header and/or command information (e.g., mode, action and/or format selection information) into the foveated image frame 103. The header information may include, for example, the number of foveation zones being used, the resolution of each zone, the size and location of the foveation zones, data order, packing format, expected display capabilities, etc. The size and position of each zone may be selected by the processor circuitry 110 and may be based on, for example, the array size and other properties of the modulation device circuitry 130, the optical characteristics of the system, tracking logic 107, rendering algorithms, operating environment, and the like.
The processor circuitry 110 may generate a foveated image frame 103 based on one or more input data source(s) 105. The image data source(s) may include, for example, image sensors (e.g., camera devices) to capture environmental image data, image overlay data, and the like. The input data source(s) 105 may include, for example, a plurality of image sensors to capture image data having different resolutions. For example, the foveated image frame 103 may include three zones: a first zone Z0, a second zone Z1 and a third zone Z2. In some embodiments, for example, the first zone Z0 may have the highest resolution (e.g., a 1-to-1 correspondence of foveated image to pixels), the second zone Z1 may have a lower resolution than the first zone, for example, a 4-to-1 pixel resolution having ¼ the resolution of the first zone (½ in horizontal and ½ in vertical), and the third zone Z2 may have a lower resolution than the second zone, for example, a 16-to-1 pixel resolution having 1/16th the resolution of the first zone (¼ in horizontal and ¼ in vertical). The zones may be defined as a binary multiple of the highest resolution zone (e.g., 2-to-1, 4-to-1, 16-to-1, and the like). In some embodiments, the processor circuitry 110 may refresh the zones of the encoded foveated image frame 103 on a frame-by-frame basis, sub-frame basis, and/or a predefined basis, such as for example, every other frame.
[0049] In some embodiments, the processor 110 may couple to receive user retina and/or head tracking data from tracking logic module 107, having tracking software and electrical, electronic, and/or mechanical components, whether in real-time or stored. In particular, the zone definition module 111 may use system optical parameters along with the data from tracking logic 107, which may sense the retina position of the user and generate the foveation zone parameter data corresponding to the sensed retina position. In accordance with this particular embodiment, the foveated rendering module 112 may couple to the tracking logic to receive a user’s fixation point based upon retina gaze; wherein the foveated rendering module can calculate the size and location of each zone using a foveated rendering algorithm based upon one or more of the parameters: total field-of-view, optical system distortion, fovea acuity, tolerance of tracking logic, latency of tracking logic, rate of motion, and the like. In some implementations, tracking logic 107 may be included with system 100 to define the location of each zone within an image frame, where the tracking logic 107 is configured to track and locate a position of an eye and/or head. It should be understood by one of ordinary skill in the art that any logic, in accordance with the present invention, may be implemented via electrical, electronic, and/or mechanical components.
[0050] In some embodiments, the driver controller circuitry 120 may include memory 122, a first conversion unit 124, and a second conversion unit 126. The first conversion unit 124 may couple to receive the foveated image frame and, in response, generate foveated bit plane data based upon the foveated image frame. The second conversion unit 126 may couple to receive the foveated bit plane data and, in response, generate modulation planes 127 based upon the foveated bit plane data and an associated modulation scheme. In some
embodiments, memory 122 can store the foveated image frame, the foveated bit plane data, or the modulation planes 127. The one or more modulation devices 130, each having an array of display pixels 144, may couple to receive the modulation planes 127, and output the foveated image upon the display 146. In particular, the one or more modulation devices 130 may expand each line-set of a modulation plane based upon the header packet data; wherein, for zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratio associated with the zone.
[0051] In some embodiments, the modulation device circuitry 130 may include, for example, protocol decode logic 133, display circuitry 140 (e.g., an LCoS display device, panel, display panel, or spatial light modulator), raster logic 150, and memory 132. In some embodiments, the decode logic 133 couples to receive the modulation planes from driver controller circuitry 120 to parse the header packet data and raster logic 150 generates expanded datasets based upon controls from the decode logic 133, which may include the header packet data and corresponding macropixel ratios. The raster logic 150 may include a row buffer 156 for holding each row of each line-set during the decoding stage of operations; and a row queue 158 for holding the expanded dataset to be written to the pixel array 144 resulting in an effect displayed upon display 146. The raster logic 150 may further include either a line-set gather circuitry 152 or direct write logic 154. The display circuitry 140 may couple to receive the expanded dataset and, in response, to display the foveated image upon display 146. In particular, display circuitry 140 may include a control unit 142 that couples to receive the expanded dataset and generate a plurality of respective binary values to be applied upon pixel array circuitry 144, wherein the plurality of respective binary values control the amplitude and phase of electromagnetic radiation propagating through each pixel. In some embodiments, the display circuitry 140 may include, for example, liquid crystal on silicon (LCoS) display circuitry (not shown) such as those provided by Compound Photonics. The display circuitry 140 may include phase-type and/or amplitude-type, depending on what is required for a given application.
[0052] The line-set gather circuitry 152 is further detailed in the method of FIG. 3A and FIG. 3B as part of the memory read from zone buffer data or immediately from received raster-order data. In some embodiments the display device may provide direct write logic 154 to enable writing a portion of the row corresponding to one zone area without affecting the other portions of the row. This feature allows directly writing each zone's data in zone order without having to gather macropixel data from different zones in line-sets. However, it uses multiple row write times to write all portions of the row, which may be a limiting factor of that embodiment.
[0053] In some embodiments, the header packet data may include a resolution-order toggle bit enabling a center-detail mode and a periphery-detail mode. During operation when the center-detail mode is active, the foveated image frame includes a plurality of concentric zones (i.e., zones of any shape that share the same center) having a zone of highest resolution located at a center of a user fixation point, whereby resolution of the adjacent zone is lower than full resolution by a predetermined value, and the resolution of each zone decreases in descending order away from the user fixation point. During operation when the periphery-detail mode is active, the foveated image frame includes a plurality of concentric zones having a zone of highest resolution located at a periphery of the plurality of concentric zones, whereby resolution of the adjacent second interior zone is lower than full resolution by a predetermined value, and the resolution of each concentric interior zone decreases in descending order. The header packet data may further include a transmission-mode toggle bit enabling a raster-order mode and a zone-order mode. During operation when the raster-order mode is active, data transmission comprises a plurality of line-sets representing rows of data from the plurality of concentric zones corresponding with a display order of the original image. During operation when the zone-order mode is active, data transmission comprises a sending of each one of the plurality of concentric zones in its entirety before data transmission of an adjacent zone is sent. The header packet data may further include a zone number segment defining the number of the plurality of concentric zones; a zone-size segment defining horizontal and vertical size of each one of the plurality of concentric zones; a zone-offset segment defining horizontal and vertical offset associated with each one of the plurality of concentric zones; and a plurality of display parameters. The plurality of display parameters may include a word-size segment defining a plurality of pixels-bits transferred relative to a clock cycle associated with the driver controller circuit. Further, the display parameters may include an x-offset size segment defining a plurality of pixels-bits per horizontal offset Least Significant Bit (LSB) and a line-set size segment defining a maximum number of rows to be simultaneously written. Additionally, the display parameters may include a row-time segment defining a plurality of clocking segments required to write a row and a dual column-drive mode indicator enabling simultaneous writing of two rows. In an embodiment of the present invention at least one of the zones has a different center from at least one of the other zones.
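The header fields enumerated above can be collected into a structure such as the following. The field names, types, and the validation checks are assumptions for illustration; the description specifies which quantities the header carries, not their encoding.

```python
# Illustrative container for the foveated header packet fields described
# in paragraph [0053]. Encoding widths and field names are assumed.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FoveatedHeader:
    center_detail: bool                  # resolution-order toggle: True = center-detail
    raster_order: bool                   # transmission-mode toggle: True = raster-order
    zone_count: int                      # number of concentric zones
    zone_sizes: List[Tuple[int, int]]    # (width, height) per zone
    zone_offsets: List[Tuple[int, int]]  # (x, y) offset per zone
    # display parameters / constraints
    word_size: int                       # pixel-bits transferred per interface clock
    x_offset_size: int                   # pixel-bits per horizontal-offset LSB
    line_set_size: int                   # max rows written simultaneously
    row_time: int                        # words (clocks) needed to write a row
    dual_column_drive: int               # 0=off, 1=half-line-sets, 2=even/odd-line-sets

    def validate(self) -> bool:
        # per-zone segments must match the declared zone count, and each
        # horizontal offset must land on an x-offset-size step
        assert len(self.zone_sizes) == self.zone_count
        assert len(self.zone_offsets) == self.zone_count
        assert all(x % self.x_offset_size == 0 for x, _ in self.zone_offsets)
        return True

hdr = FoveatedHeader(True, True, 3,
                     [(512, 512), (1024, 1024), (2048, 2048)],
                     [(768, 768), (512, 512), (0, 0)],
                     128, 64, 4, 16, 0)
```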
[0054] The system for foveated display 100 may further include foveated display protocol logic to encode and decode the foveated image frame and the foveated modulation plane data, whether defined in software or hardware, including: image protocol encode 115, image protocol decode 123, plane protocol encode 125, and plane protocol decode 133. In particular, processor 110 may include image protocol encode 115. Driver controller 120 may include image protocol decode 123, plane protocol encode 125. The one or more modulation devices 130 may include the plane protocol decode 133.
[0055] In operation, the processor 110 couples to receive image data relating to the image and foveation zones. In particular, processor 110 may receive image input data from the one or more input data sources 105; and tracking data from tracking logic 107, based upon retina and/or head location of a user in real-time. In the alternative, processor 110 may couple to receive foveation data from one of the input data sources 105. In response, the processor 110 may generate a foveated image frame based upon the image data, wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is compressed, being defined by a plurality of macropixels and a corresponding macropixel ratio. The processor 110 may transmit the foveated image frame to one or more modulation devices 130 having raster logic 150 coupled to a display circuit 140 including an array of pixels 144. For example, the foveated image frame having header packet data may be sent to the driver controller circuit 120 that generates foveated bit plane data using conversion unit 124; and converts the same into modulation planes based upon a modulation scheme and header packet data using conversion unit 126. The driver controller circuit 120 may send the modulation planes 127 to the one or more modulation devices 130. Next, the one or more modulation devices 130 may output the foveated image to the display or array of display pixels by decoding and expanding each modulation plane based upon the header packet data and each corresponding macropixel ratio using the raster logic 150. That is, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels 144 based upon the corresponding macropixel ratio associated with each zone. [0056] The system and method of foveated display may comprise a center-detail mode and a periphery-detail mode.
For example, in one embodiment having four foveation zones (Z0-Z3) each zone may correspond to a power of 2 where an input pixel is replicated in x and y to fill a display pixel region 2^n x 2^n (zone Z0=1x1 pixels, zone Z1=2x2 pixels, zone Z2=4x4 pixels, zone Z3=8x8 pixels). For center-detail mode in some embodiments, zone Z0 possesses the highest resolution, at the center of gaze, where the other concentric zones have a differing resolution. In particular, zone Z1 surrounds zone Z0 and only applies to pixels outside of zone Z0. Zone Z1 possesses a resolution that is lower than zone Z0. Zone Z2 surrounds zone Z1 and possesses a resolution that is lower than zone Z1. Zone Z3 surrounds zone Z2 and possesses a resolution that is lower than zone Z2. For periphery-detail mode, however, zone Z0 is still the highest resolution but may be sized to the length and width of the entire display. The concentric zones within zone Z0 possess a resolution that is lower than zone Z0. That is, zone Z1 possesses a resolution that is lower than zone Z0; zone Z2 possesses a resolution that is lower than zone Z1; and zone Z3 possesses a resolution that is lower than zone Z2. The foveated rendering module 112 may determine the size and location of each zone according to its algorithm which may include factors of total field-of-view, optical
system distortion, fovea acuity, tolerance/latency of gaze tracker, rate of motion, and the like. In the alternative, the one or more input data sources 105 or the tracking logic may provide the processor 110 with the size and location of each zone. Periphery-detail modes may be more applicable for displays using interference light steering; and hence, do not follow gaze tracking. Consequently, the foveation protocol, method, and system, in accordance with the present invention, reduces the data bandwidth. In the center-detail mode, when the gaze moves toward the edge of the display, the edges of the zones may align such that zone Z0 has “0” offset from the edge. [0057] There are several possible modes of operation: center-detail vs. periphery-detail; zone-order vs. raster-order. In at least one embodiment implementing center-detail mode, it would be an error to have a lower-numbered zone extend outside the border of a larger-numbered zone. Therefore, zone Z3 would not be used if it was the same size and offset as zone Z2. The largest zone may be the same size as the display and thus its offsets would be 0. However, if the display supports partial display mode, the largest zone may be smaller than the display and a fill value would be used for the remaining regions of the display.
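The center-detail nesting rule above (a lower-numbered, higher-resolution zone must not extend outside a larger-numbered zone) can be sketched as a containment check. Zone rectangles as (x, y, width, height) tuples in display-pixel coordinates are an assumption of this sketch.

```python
# Validate the center-detail nesting constraint: each zone must lie
# entirely inside the next-higher-numbered zone. zones[0] is Z0
# (innermost); zones[-1] is the largest zone.
def center_detail_valid(zones):
    for inner, outer in zip(zones, zones[1:]):
        ix, iy, iw, ih = inner
        ox, oy, ow, oh = outer
        contained = (ox <= ix and oy <= iy and
                     ix + iw <= ox + ow and iy + ih <= oy + oh)
        if not contained:
            return False
    return True

zones_ok = [(768, 768, 512, 512),     # Z0 inside Z1
            (512, 512, 1024, 1024),   # Z1 inside Z2
            (0, 0, 2048, 2048)]       # Z2 = full display, offsets 0
zones_bad = [(1800, 768, 512, 512),   # Z0 spills past Z1's right edge
             (512, 512, 1024, 1024),
             (0, 0, 2048, 2048)]
```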
[0058] In periphery-detail mode, the sizes and locations of the zones may be constant, but they are not required to be. The foveated protocol described herein can allow either dynamic or static sizes and offsets. Applications using periphery mode may not be able to use the large macropixels of zone Z3, but the concept and method described herein do allow it. Yet, this capability is dependent upon the display device’s capability to support it. If zone Z2 or Z3 had a size of 0x0, it would be the same as saying that zone is not used. In at least one embodiment implementing periphery-detail mode, it would be an error to have a higher-numbered zone extend outside of a lower-numbered zone.
[0059] In zone-order mode, the data for each zone is sent in its entirety before data for the next zone. Zone Z3 data, if it is used, is sent first, while zone Z0 data is last. Each zone’s data is sent in raster line order: horizontal left to right, then vertical top to bottom. If a raster’s data is less than an integer multiple of words and row padding is enabled, it is padded to fill the last word, wherein each raster starts on a word boundary. For rasters that cross a void area of another zone, the data on either side of the zone is packed together. Padding is only added at the end of each raster.
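The packing rule above, skipping the void where an inner zone cuts through a raster and padding only at the end of the raster so the next one starts on a word boundary, can be sketched as follows. The word size expressed in macropixels is an assumption of this sketch.

```python
# Pack one raster row of a zone for zone-order transmission: drop
# macropixels covered by inner-zone void ranges, then pad the tail to a
# word boundary.
def pack_raster(row_macropixels, void_ranges, word_size):
    """row_macropixels: macropixel values for one raster row of a zone.
    void_ranges: (start, stop) index ranges occupied by inner zones."""
    packed = [v for i, v in enumerate(row_macropixels)
              if not any(a <= i < b for a, b in void_ranges)]
    pad = (-len(packed)) % word_size      # pad only at the end of the raster
    return packed + [0] * pad

row = list(range(20))                                # 20 macropixels in this raster
packed = pack_raster(row, [(8, 12)], word_size=8)    # inner zone voids indices 8..11
# 16 active macropixels remain, already a multiple of 8 words, so no padding
```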
[0060] In raster-order mode, the data is sent by line-sets (see below for more detail).
Each line-set starts with its zone Z3 data, then the 1st row of zone Z2 data. Consequently, the first row of each zone (Z1, Z0) is sent. Following these rows are the consecutive second, third, etc. rows of each zone pursuant to the formatted order within the line-set. Rows of data are added as each raster moves down the display rows, with the highest-numbered zone’s data always first. Padding is added at the end of each zone’s data to be an integer number of words, so each new row and zone of data starts on a word boundary. Additional padding at the end of each line-set may need to be added to meet the display's minimum row-timing requirement.
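One plausible reading of the row ordering just described can be sketched as an interleave: within a line-set, the coarsest zone contributes its single row first, then the first rows of the finer zones, then the second rows, and so on. The per-zone row counts used in the example (a Z3 macropixel spanning 8 display rows) are assumptions for illustration.

```python
# Emit (zone, row) pairs in an assumed raster-order sequence for one
# line-set: at each row index, the highest-numbered zone that still has
# a row at that index goes first.
def line_set_order(rows_per_zone):
    """rows_per_zone: dict {zone_number: rows of that zone in this line-set}."""
    order = []
    for r in range(max(rows_per_zone.values())):
        for z in sorted(rows_per_zone, reverse=True):
            if r < rows_per_zone[z]:
                order.append((z, r))
    return order

# An 8-display-row line-set crossing all four zones: Z3 macropixels are
# 8 rows tall (1 row), Z2 4 rows tall (2 rows), Z1 2 rows (4), Z0 1 row (8).
order = line_set_order({3: 1, 2: 2, 1: 4, 0: 8})
```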
[0061] Raster-order mode may provide lower latency and minimal plane storage, compared to zone-order mode. Zone-order may be simpler to implement and use less padding, but more buffer space and latency may be incurred. For example, if the zone Z0 region is at the top of the display, the first row cannot be written until all of zones Z3, Z2 and Z1 are received. Even then, the zone-order implementation may not be able to keep up with the data rate of zone Z0. Zone-order control can still calculate the total time to write the display array using sizes and row-times (see below) and not send planes closer together than that. If the display does not have dual buffers for the zone data, then additional time between planes may be needed to prevent overlap.
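The plane-spacing calculation mentioned above can be sketched as totaling the row-times across all line-sets. The row-time counts per line-set and the display geometry below are illustrative assumptions, not values fixed by the protocol.

```python
# Minimum clocks between modulation planes: the sum of every line-set's
# row-time count (1 if it only crosses the coarsest zone, more if it
# crosses several zones) times the clocks per row-time.
def min_plane_interval_clocks(row_times_per_line_set, row_time):
    return sum(row_times_per_line_set) * row_time

# Assumed: 2048-row display, 8-row line-sets -> 256 line-sets, of which
# 192 cross only zone Z3 (1 row-time each) and 64 cross several zones
# (4 row-times each); 16 clocks (words) per row-time.
interval = min_plane_interval_clocks([1] * 192 + [4] * 64, row_time=16)
```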
[0062] Parameters controlling the data format to match the display’s capabilities/constraints may include: word-size, x-offset-size, line-set-size, row-time, and dual-column-drive. Word-size is the number of pixels-bits transferred with each clock of the interface (e.g., a 64-signal DDR bus would send 128-bit words). This may be the granularity of the horizontal zone size. X-offset-size is the number of pixels-bits per horizontal offset Least Significant Bit (LSB), also referred to as the step-size of X-offsets. The offset size may be one half of the word-size to allow centering of a zone. Line-set-size represents the maximum number of rows that can be written at the same time.
[0063] The row-time parameter represents the minimum number of words per row. This is the number of clocks that the display requires to write a row or a multi-row. A line-set that only crosses the highest zone (with a matching line-set-size), will only use one row-time to finish the line-set. Thus, the packet of data needs to be padded to that number of words if the X-resolution would require fewer words. If the line-set crosses 3 zones, then it will require 4 row-times to write the line-set. Thereby, the packet of data needs to be padded to 4 times the row-time (number of words), if the active data would be fewer words. The row-time may be unique per zone level. Using staggered row pulses for high simultaneous row counts can implement this feature. Only the row-time for zone Z3 data can be longer than the row-time for zone Z0 data.
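The padding rule above, a line-set packet spanning a whole number of row-times, can be sketched as follows. The word values and counts are illustrative assumptions.

```python
# Pad a line-set's word list out to (row_times * row_time) words, the
# minimum the display needs to finish writing that line-set.
def pad_line_set(active_words, row_times, row_time):
    target = row_times * row_time
    assert len(active_words) <= target, "active data exceeds allotted row-times"
    return active_words + [0] * (target - len(active_words))

# A line-set crossing 3 zones needs 4 row-times; with a row-time of 16
# words, 10 active words are padded out to 64.
packet = pad_line_set([1] * 10, row_times=4, row_time=16)
```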
[0064] Regarding dual-column-drive, the system can allow two rows with unique data to be written at the same time. This feature possesses two options: 1=half-line-sets (the first half of the line-set rows are driven by the first set of column drivers, which allows the pad savings to be resolved within one line-set, but does not allow adjacent line-sets to save time if they are each only using 1 row-time); 2=even/odd-line-sets (the even line-sets are driven by the first set of column drivers, which saves time when writing 1 row per line-set, but requires padding to be calculated over 2 line-sets). [0065] The format of each modulation-plane may include a header that includes: (a) the modes used and the number of zones; (b) the size of each zone in both x and y; (c) the offset of each zone in both x and y; and (d) the display parameters/constraints used. Following the header packet data, the modulation-plane data is included, which may be terminated by error-detection protocol markers. For example, cyclic redundancy checking (CRC), and the like, may be added to the data sent to the one or more modulation devices 130.
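The modulation-plane framing just described, header bytes, then plane data, then an error-detection marker, can be sketched as below. The text names CRC only as one example; the choice of CRC-32 via `zlib` and the 4-byte trailer are assumptions of this sketch.

```python
# Frame a modulation plane as header + data + CRC-32 trailer, and verify
# a received frame by recomputing the CRC over everything before it.
import zlib

def frame_plane(header: bytes, plane_data: bytes) -> bytes:
    body = header + plane_data
    crc = zlib.crc32(body).to_bytes(4, "big")
    return body + crc

def check_plane(frame: bytes) -> bool:
    body, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == trailer

f = frame_plane(b"\x03\x02", b"\x00\xff" * 8)  # toy header and plane data
corrupted = f[:-1] + bytes([f[-1] ^ 1])        # flip one trailer bit
```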
[0066] More particularly shown in FIG. 1A, as is known, display circuitry (e.g., LCoS circuitry) 140 may generally include an array (X-Y) of individually addressable (controllable) pixel elements (where each pixel is formed at least in part from liquid crystal material or substance) through electrodes formed on a semiconductor material. The array size may be generally considered an upper limit to the resolution of a foveated image 114, while a foveated image will have two or more zones, where at least one zone has a resolution that is less than the maximum resolution of the displayed image. The array size may be, for example: 2048x2048, 4096x4096 or 6144x6144 pixel elements. The foveation zone sizes may be, for example: 512x512(1x1), 1024x1024(2x2), 1536x1536(3x3), 2048x2048(4x4), 3072x3072(6x6), etc. For each of these foveation zone size examples, the corresponding macropixel array size is 512x512. In an embodiment of the present disclosure, the control of a pixel may include controlling the amplitude and/or delay (i.e., the phase) of electromagnetic radiation (e.g., light) propagating through a pixel (e.g., transmissive and/or reflective propagation); and thus may control, for example, the nature of the displayed foveated image 146. For example, the modulation device circuitry 130 may be configured to receive electromagnetic radiation (e.g., light, such as laser light) and cause a phase shift of the electromagnetic radiation to generate a desired result.
[0067] In some embodiments, the system 100 may include a plurality of modulation devices 130 that can generate, for example, a color foveated image 146 or non-colored foveated image 165 (to be described further in detail with reference to FIG. 1B). In this embodiment, each modulation device 130 may be configured to control a color saturation of the projected image, for example a system that includes three modulation devices to separately control red, green, blue (RGB) color saturation of the projected image 146. In other embodiments, a single modulation device may be used, for example, to generate a monochrome projected image or a color sequential image. The number of saturation levels for each pixel may be based on, for example, the number of levels defined by a foveated image, and may be expressed in binary form. For example, a 6-bit image dataset may have 2^6 = 64 levels per pixel per color.
[0068] Each resolution of each zone of the foveated image frame 103 may be defined by a unique macropixel. A “macropixel,” as used herein, generally means a grouping of two or more pixels/physical pixels, which are identically controlled by the modulation device circuitry 130 to generate a portion of an image that has a defined resolution. For example, if a first zone Z0 is defined as a zone having the highest resolution of the modulation device circuitry 130, the macropixel for the first zone Z0 may be defined as 1x1 (a 1-to-1 correspondence in both X and Y), meaning that each pixel of the first zone Z0 of the foveated image frame 103 corresponds to a single physical pixel of the modulation device circuitry 130. If the second zone Z1 is defined as having a resolution that is ¼ the resolution of the first zone Z0 (a 1-to-2 correspondence in each of X and Y), the macropixel of the second zone Z1 may be defined as 2x2, meaning that one macropixel of the reduced-resolution second zone Z1 corresponds to 4 physical pixels (2 physical pixels x 2 physical pixels) of the modulation device circuitry 130, and so on for other defined zones. Advantageously, since the encoded foveated image frame 103 includes a plurality of lower-resolution zones, the overall data size of the foveated image frame 103 will be substantially less than that of a full-resolution image frame. As a result, the bandwidth requirements of a communications interface between the processor circuitry 110 and the driver controller circuitry 120 are reduced, in addition to the reduction in memory/buffer size to store foveated image frame data.
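As a rough illustration of this data-size reduction, the sketch below sums per-zone macropixel counts for an assumed three-zone layout (the zone sizes and the 6-bit depth are assumptions, not the patent's example; overlap between concentric zones is ignored, so the true reduction is at least as large):

```python
def frame_bits(zones, bits_per_pixel=6):
    """Data size of an encoded frame: one value per macropixel in each zone.
    Concentric-zone overlap is ignored here, so this slightly overstates the
    foveated size; the reduction is therefore at least this large."""
    return sum((w // hm) * (h // vm) * bits_per_pixel
               for (w, h), (hm, vm) in zones)

# Illustrative three-zone layout on a 2048x2048 array (sizes are assumptions):
foveated = frame_bits([((512, 512), (1, 1)),     # Z0, full resolution
                       ((1024, 1024), (2, 2)),   # Z1
                       ((2048, 2048), (4, 4))])  # Z2
full = frame_bits([((2048, 2048), (1, 1))])
assert foveated * 16 == full * 3  # foveated frame is 3/16 of the full frame size
```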
[0069] The driver controller circuitry 120 is generally configured to receive the foveated image frame data 103 from the processor circuitry 110 and generate foveated bit plane data using the conversion unit 124. Bit plane data, as used herein, may include header information (similar to header information in the foveated image frame 103), an array of binary values for macropixels in each foveated zone (to control the electrodes of corresponding display pixels), and/or pad data between macropixel data, as will be described in further detail below. In some embodiments, a number of bit planes are sequentially generated for each frame. The binary values of the bit planes may be generated via the use of, for example, pulse width modulation (PWM) and/or pulse frequency modulation (PFM) techniques as controlled by functions loaded in the driver controller circuitry 120. The series of binary values is based on the saturation value of each pixel of the foveated image frame 103 along with the selected modulation technique. The bit plane generated is an array of binary values, each binary value corresponding to a pixel or macropixel. In an embodiment of the present system and method of foveated display, each binary value corresponds to a physical pixel of the modulation device circuitry 130. In the alternative, each binary value corresponds to a macropixel, which controls a collection of physical pixels in the modulation device circuitry 130 that are defined by the macropixel size. Advantageously, the bandwidth and data throughput requirements of a communication interface between the driver controller circuitry 120 and the modulation device circuitry 130 are substantially reduced for the system and method of foveated display as described herein. In some embodiments, the driver controller circuitry 120 may generate a header having one or more defined fields to define the foveated zone size/position, macropixel size, and the like.
The header may be formed as the first line or lines of the first modulation plane per frame/sub-frame set of modulation planes or every modulation plane.
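A minimal sketch of splitting saturation values into bit planes (illustrative only; the actual PWM/PFM sequencing in the driver controller is not reproduced here):

```python
def to_bit_planes(values, bits=6):
    """Split per-pixel saturation values (0 .. 2**bits - 1) into `bits` binary
    planes, least-significant plane first. A simple PWM-style scheme would then
    display plane b for a duration proportional to 2**b."""
    return [[(v >> b) & 1 for v in values] for b in range(bits)]

planes = to_bit_planes([0, 63, 42])
assert len(planes) == 6
assert [p[2] for p in planes] == [0, 1, 0, 1, 0, 1]  # 42 = 0b101010
```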
[0070] In general, the header packet data defines the number of zones and the size and location of each of the zones. Regarding offsets, in some embodiments the last zone may be the same size as the display. In these cases, no offset exists for this zone and its location (offset from the origin, i.e. the upper-left (UL) corner) will be (0,0). In some embodiments, the largest zone may be smaller than the whole display, thus allowing the offsets to be non-zero. In such embodiments, the display logic may fill the surrounding pixels with a predetermined value. An example header is shown below, where H-Sizes are a multiple of H-step-size (H-Size may also be a multiple of Word Size, where H-step-size <= Word Size; see definitions below), H-Offsets are a multiple of H-step-size, V-Sizes and V-Offsets are a multiple of line-set-size (typically 4 display rows), all offsets are relative to the display origin (i.e. upper-left corner), and H-mpix and V-mpix are the sizes of the macropixel for that zone in the horizontal and vertical directions (allowing for non-square macropixels if desired). All fields are in units of display device pixels:
[Table (image in original publication): example header with per-zone H/V sizes, offsets, and macropixel sizes]
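The per-zone header fields above can be modeled as a simple structure (field names follow the text; the bit-level packing of the real header packet is not reproduced and any defaults are assumptions):

```python
from dataclasses import dataclass

@dataclass
class ZoneHeader:
    """One zone's header fields, named after the text. All values are in
    display-device pixels, with offsets relative to the upper-left origin."""
    h_size: int    # multiple of H-step-size
    v_size: int    # multiple of line-set-size (typically 4 display rows)
    h_offset: int  # multiple of H-step-size
    v_offset: int  # multiple of line-set-size
    h_mpix: int    # horizontal macropixel size for this zone
    v_mpix: int    # vertical macropixel size (may differ: non-square allowed)

# A last zone that matches the whole display has no offset:
z_last = ZoneHeader(h_size=2048, v_size=2048, h_offset=0, v_offset=0,
                    h_mpix=4, v_mpix=4)
assert (z_last.h_offset, z_last.v_offset) == (0, 0)
```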
[0071] In particular, the display driver controller 120 is configured to transmit data to the display 146. Data is transmitted in words, where one word is the block size of the data bus transmitted during one clock (i.e., a 32-bit DDR bus would have a block or Word Size of 64 bits, Wsize=64). Each word is a horizontal line of bits from one zone; all of the bits in a word are of the same resolution and on the same row. The data can be sent in different orders and formats, depending on what is supported in the driver and display device. Thus, to define the format used to create the frame or plane, in some embodiments, the header may also include the following fields:
[Table (image in original publication): additional header fields defining the data order/format]
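The word-size rule from paragraph [0071] can be expressed as follows (a trivial sketch; the function name is illustrative):

```python
def word_size_bits(bus_width_bits, ddr=True):
    """Block ('word') size transferred per clock: a DDR bus moves data on both
    clock edges, so a 32-bit DDR bus has a 64-bit word."""
    return bus_width_bits * (2 if ddr else 1)

assert word_size_bits(32) == 64           # the text's example, Wsize=64
assert word_size_bits(32, ddr=False) == 32
```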
[0072] An exemplary configuration for the system of foveated display may include a foveated image having three zones and 640 Kbits per modulation plane, covering a 4 Megapixel display. The following tables represent the associated header packet data.
[Tables (images in original publication): header packet data for the three-zone, 4 Megapixel example]
The following represents a line-set through the middle of the image, which crosses all 3 zones. As shown, there is one row of zone Z2, which includes a right and a left segment of data (Z2A and Z2B), each having four words (whereby each word is 128 bits). Additionally, there are two rows of zone Z1, which each include a right and a left segment (Z1Ar1, Z1Ar2, Z1Br1, Z1Br2) of two words. Further, there are four rows of zone Z0 data (Z0r1, Z0r2, Z0r3, Z0r4) of four words.
[Table (image in original publication): line-set data crossing all three zones]
Accordingly, the above line-set would transmit a total of 22 words in the following order:
[Tables (images in original publication): transmission order of the 22 words]
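The row counts in these examples follow directly from the line-set size and each zone's vertical macropixel size, as this sketch shows (the function name is illustrative):

```python
def rows_in_line_set(line_set_size, v_mpix):
    """Macropixel rows a zone contributes to one line-set: a zone whose
    macropixels span v_mpix display rows supplies line_set_size // v_mpix
    rows of data per line-set."""
    return line_set_size // v_mpix

lss = 4  # line-set of 4 display rows, as in the example
assert rows_in_line_set(lss, 4) == 1  # one row of Z2 (4x4 macropixels)
assert rows_in_line_set(lss, 2) == 2  # two rows of Z1 (2x2 macropixels)
assert rows_in_line_set(lss, 1) == 4  # four rows of Z0 (1x1)
```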
In a second example, a line-set is displayed that represents data through zones Z1 and Z2 only. As shown, there is one row of zone Z2, which includes a right and a left segment of data (Z2A and Z2B), each having four words (whereby each word is 128 bits). Additionally, there are two rows of zone Z1 (Z1r1, Z1r2) of eight words.
[Table (image in original publication): line-set data for the second example]
Accordingly, the above line-set of this second example would transmit a total of 10 words in the following order:
[Tables (images in original publication): transmission order of the 10 words]
In a third example, a line-set is displayed that represents data through zone Z2 only. As shown, there is one row of zone Z2 (Z2r1) of sixteen words (whereby each word is 128 bits).
[Tables (images in original publication): line-set data for the third example]
Accordingly, the above line-set of this third example would transmit a total of 5 words in the following order:
[Table (image in original publication): transmission order of the 5 words]
[0073] Regarding display implementation, in order to meet row timing of low-res zone 2, all 4 rows need to be written at the same time. The column drivers are connected to 4 consecutive rows, then all 4 row strobes are asserted at the same time. These 4 row strobes are asserted individually when writing zone 0 data, as the column drivers provide unique data for each.
[0074] In some displays, when writing the data for zone 2 only, with LSS=4, Vmpix=4 and Hmpix=4, the large macropixels reduce the data bandwidth so much that an equal amount of padding can be added to meet the minimum row timing. These displays can resolve this problem by using the multi-driver architecture of MD=2ps; the data is buffered for 8 rows; there are 2x Hres column drivers. The first 4 rows are connected to the same column drivers; the 2nd 4 rows are connected to different column drivers; the 3rd set of rows is connected to the same column drivers as the 1st, and so on. This allows the 8 rows to be written together and gives 2x the data time to be applied for 1 row write time. In the above example 1, if LSS=4, MD=2ps and n=8, there is no need for any pad words in any of the cases.
[0075] It is appreciated that the components of exemplary operating environment 100 are exemplary and that more or fewer components may be present in various configurations. It is appreciated that the operating environment may be part of a distributed computing environment, a cloud computing environment, a client-server environment, and the like.
[0076] Referring now to FIG. 1B, a system diagram of a foveated electromagnetic radiation modulation system, having grayscale device circuitry, in accordance with some embodiments is shown. Similar to the foveated electromagnetic radiation modulation system 100 of FIG. 1A, the foveated display system 160 of FIG. 1B may include a processor 110, grayscale device circuitry 162, and a display 165. The processor circuitry 110, having a zone definition module 111, a foveated rendering module 112, a foveated image memory 114 and an image protocol encode module 115, is generally configured to receive input from one or more input data sources 105 to generate a foveated image frame 103, having header packet data that defines two or more concentric zones of differing resolution, wherein each zone is compressed, being defined by a plurality of macropixels and a corresponding macropixel ratio. In some embodiments, the input from the one or more input data sources 105 includes one or more images and associated foveated zone data. In some embodiments, the processor 110 may be included within a host computer (not shown), whereby the host computer sends the foveated image frame 103 to the grayscale device circuitry 162 associated with the display 165. The grayscale device circuitry 162 may include image protocol decode logic 163, display circuitry 170, raster logic 180, and memory 164. The display circuitry 170 may include control unit 172 and an array of pixels 174; while the raster logic 180 may include a line-set gather logic 182, direct write logic 184, a row buffer 186 and a row queue 188. Of course, it should be understood that the grayscale device circuitry 162 and/or processor circuitry 110 may also include other known and/or proprietary circuitry and/or logic structures, including for example, frame buffer memory/cache, timing circuitry,
vertical/horizontal scan line circuitry, processor circuitry, and the like. The foveated image stored in memory 114 (for example a foveated image or a direct view foveated image), which is output to or rendered upon display 165, may include a plurality of resolution zones Z0, Z1, Z2, Z3, ... ZN, where each zone has a differing resolution. For example, as shown, the zones may be generated using a Center-Detail mode, where the zone having the highest resolution Z0 tracks the user’s gaze direction and the other zones (Z1, Z2, Z3 ... ZN) have a lower resolution than zone Z0, such that the resolution of each zone decreases in descending order away from the user fixation point.
[0077] The line-set gather circuitry 182 is further detailed in the methods of FIG. 3A and FIG. 3B as part of the memory read from zone buffer data or immediately from received raster-order data. In some embodiments the display device may provide direct write logic 184 to enable writing a portion of the row corresponding to one zone area without affecting the other portions of the row. This feature allows directly writing each zone's data in zone order without having to gather macropixel data from different zones into line-sets. However, it uses multiple row write times to write all portions of the row, which may be a limiting factor of that embodiment.
[0078] FIG. 2A is a flow diagram of a method 200 for foveated display in accordance with some embodiments. In some embodiments, the method and protocol of foveated display includes receiving eye tracking data relating to generating the foveation zone definitions in an action 210. For example, a processor may receive image input data and tracking data, based upon retina and/or head location of a user in real-time. In response, the method may further include generating a rendered foveated image based upon the image data and parameters defining the foveation zones according to foveated rendering techniques (in an action 215), wherein the rendered foveated image contains macropixel image data corresponding to two or more concentric zones of differing resolution and corresponding macropixel ratios. In response, the method may further include generating a foveated image frame based upon the rendered foveated image and protocol selection parameters (in an action 220), wherein the foveated image frame includes header packet data that identifies two or more concentric zones of differing resolution, whereby each zone is defined by a plurality of macropixels and corresponding macropixel ratios.
[0079] FIG. 2B is a flow diagram of a method 230 for foveated modulation plane generation by display driver circuitry 120 as in FIG. 1A, in accordance with some embodiments. In an action 233 the foveated image frame is decoded into controlling parameters from the header and macropixel data (e.g., valid macropixel data) of different zones, rows and words according to zones or line-sets as selected by the header parameters.
In response, the method may further process the image pixel data in action 235 to transform the pixel data into a format that is compatible or more compatible with the display, which may involve dithering, scaling/attenuating values and/or splitting into bit planes, and optionally storing the result in memory. In response, the system may read the data from memory in action 240 according to a modulation scheme to generate a modulation plane. In response, action 244 may encode the modulation plane data into a modulation plane format/protocol with header data according to a selected plane protocol and transmit the result to one or more modulation devices. In response, as a looped operation 248, the method determines if the modulation scheme is finished and, if not, repeats actions 240 and 244 until it is finished for that foveated image frame, whereupon it waits for the next start of image frame to begin again at action 233.

[0080] FIG. 2C is a flow diagram 250 of a method or process for writing foveated modulation plane data to the array of display pixels in a foveated modulation display device 130 as in FIG. 1A, in accordance with some embodiments. In an action 252, the method may include parsing the header packet data from the modulation plane and identifying the foveation zone information. In response, the method, according to action 254, may write the data to memory 258 or directly process the data for writing to the pixel array. The method may further include an address management function according to action 260 using the foveation zone parameters to determine the size and order of the corresponding line-set macropixel data, which may include padding. In response, action 264 may gather macropixel data by reading from memory 262 or direct input data to expand and combine macropixel data, according to corresponding macropixel ratios and zone offsets, into display pixel data in a row buffer (see FIG.
4) and transfer to a row queue for writing to the pixel array, which may write one or multiple rows simultaneously with the same data according to the corresponding macropixel ratio. Further in an action 268, the method may include advancing the control counters and indexes for the next line-set. As a looped operation, the method may include repeating the actions of 260, 264 and 268 until each line-set of modulation plane has been written into the pixel array in an action 269.
[0081] FIG. 2D is a flow diagram 270 of a method or process for writing foveated image frame data to the array of display pixels in a grayscale display device 162 as in FIG. 1B, in accordance with some embodiments. In an action 274, the method may include parsing the header packet data from the foveated image frame and identifying the foveation zone information and valid macropixel data. In response, the method, according to action 276, may write the data to memory 278 or directly process the data for writing to the pixel array. The method may further include an address management function according to action 280 using the foveation zone parameters to determine the size and order of the corresponding line-set macropixel data, which may include padding. In response, action 284 may gather macropixel data by reading from memory 282 or direct input data to expand and combine macropixel data, according to corresponding macropixel ratios and zone offsets, into display pixel data in a row buffer (see FIG. 4) and transfer it to a row queue for writing to the pixel array, which may write one or multiple rows simultaneously with the same data according to the corresponding macropixel ratio. Further, in an action 290, the method may include advancing the control counters and indexes for the next line-set or the next sub-frame. As a looped operation, the method may include repeating the actions of 280, 284 and 290 until each line-set of the image frame has been written into the pixel array in an action 295.
[0082] FIG. 3A is a flow diagram of a method or process for writing a modulation plane of foveated data into the pixel array of a foveated modulation display device 130 of FIG. 1A during a center-detail mode of operation, wherein four zones exist, in accordance with some embodiments. In general, operations 300 comprise the process on the display device to decode and act on incoming modulation planes with center-detail (Z0 in the center). In particular, the flowchart 300 illustrates operations of a modulation plane state diagram for a center-detail mode with a four-zone foveated image. As an example, the four zones each use a square macropixel with a size equal to a power of two for that zone (i.e., zone 3 mpix size is 2^3 = 8, or 8x8).
[0083] In some embodiments, the method 300 for writing a modulation plane of foveated data into the pixel array of a foveated display device may include parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, in an action 301. For example, the one or more modulation devices 130 may wait for the modulation plane data to be sent in an effort to parse the header packet data. A strobe or data signature may be used to indicate the start of a new header packet
data/modulation plane.
[0084] In an embodiment of the present invention, the plane decode logic 133 may capture all the header fields to control the modes and parameters of the current modulation-plane. These can be used in the following decision actions.
[0085] The method of writing a modulation plane of foveated data into the pixel array of a foveated display device may also include detecting whether the transmission-mode toggle bit is set to enable a raster-order mode in a decision action 302. If this is a raster-order mode modulation plane, the method proceeds to processing the incoming data as line-sets in the predetermined data order, starting by initializing the counters/indexes for the first row (in an action 310). In response to no detected raster-order mode, the method may proceed to the zone-order mode fork in an action 304. During this phase, two independent processes will begin to deal with writing (380) and reading (306) the zone buffers. It should be noted that the read side will be delayed until near or after the end of the writing sequence so that the data for zone Z0 (with zone 0 last) is ready when needed for the read (to be described with reference to the timing diagram 500 of FIG. 5). Buffer read time may generally be longer than buffer write time, because of any needed pad timing added to large macropixel line-sets for array write. In the zone-order mode, the method may include writing respective zone data of the incoming data into a respective zone buffer {Z(n-1), ..., Z3, Z2, Z1, Z0} (in actions 380-388) and waiting for a read delay of predetermined time in an action 306.
[0086] At the end of the read delay of action 306, the method may include toggling a read pointer corresponding to each respective zone buffer and starting the read data flow to match the order needed for raster-order (to write line-sets to the array). Some devices may optimize the timing in zone-order mode: since the internal buffer read path is not IO-bandwidth constrained, zone Z0 data can be read with a wider and/or faster bus to match the minimum row write timing to make up for pad timing added to large macropixel line-sets. For this reason, array write line-set timing differs for zone-order in comparison to raster-order. This step enables zone-order modulation planes to be written faster than raster-order, since any need for pad data timing has been removed.
[0087] At the end of the read delay and in response to a detected raster-order mode, the method may include identifying a first row of a line-set of data in action 310. Row and line-set counts keep track of the corresponding location on the display as it relates to the size and location of each zone, in an effort to know if the present line-set crosses or intersects with each of the zones. Some displays may provide a feature that defines the modulation plane size smaller than the total display size, and then position the active data of the modulation plane with an offset in the display area. This initialization step 310 would account for such offsets.
[0088] The method may further include, in a decision action 320, detecting whether a high-resolution zone (Z0) of a plurality of concentric zones is present in the identified line-set of data. In particular, the decoding logic will detect whether the line-set includes part of zone Z0. In other words, does the row selected from the line-set cross or intersect with zone Z0? Specifically, the current location of the pointer is compared to the zone Z0 size and offset, along with any global offsets. If the answer is affirmative, this line-set will also intersect all the upper zones, since zone Z0 is in the center. That is, there will be no need to detect whether the other zones are present in the row with the affirmative detection of zone Z0. The method 300 may proceed to processing the line-set data in actions 322 and 324.
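Decision action 320's intersection test can be sketched as follows (the names and the half-open row-interval convention are assumptions):

```python
def line_set_intersects_zone(ls_first_row, ls_rows, zone_v_offset, zone_v_size,
                             global_v_offset=0):
    """Does the line-set of display rows [ls_first_row, ls_first_row + ls_rows)
    cross the vertical extent of a zone, given the zone's size and offset plus
    any global offset? Mirrors the comparison described for decision action 320."""
    top = zone_v_offset + global_v_offset
    bottom = top + zone_v_size
    return ls_first_row < bottom and ls_first_row + ls_rows > top

# A centered 512-row Z0 on a 2048-row display starts at row 768:
assert line_set_intersects_zone(768, 8, 768, 512)
assert not line_set_intersects_zone(0, 8, 768, 512)
```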
[0089] In response to no detected presence of the high resolution zone, the method may include detecting whether a next consecutive zone (Z1, Z2, Z3, ... Z(n-1)) is present in the identified line-set of data until one zone is detected, in decision action steps 330, 340, and 350. In response to a detected zone, the method may include expanding the identified line-set of data, based upon the number of zones, horizontal zone-size, line-set-size, x-offset-size, row-time, and word-size; storing the expanded data in a row buffer based upon the x-offset-size in action steps 322, 332, 342, and 352. Further, the method may include transferring the row buffer to a row queue, wherein k row(s) are written 2^(r-1) times, where r = {n, n-1, n-2, n-3, ... 1}, k = {1, 2, 4, 8, ... 2^(n-1)}, n = number of zones, when respective zones {Z0, Z1, Z2, Z3, ... Z(n-1)} are detected (in action steps 324, 334, 344, and 354).
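The row-write pattern can be expressed for power-of-two macropixels as follows (a sketch; the function name is not from the patent):

```python
def array_write_pattern(zone_index, n_zones):
    """Row-queue write pattern when zone Zi is the innermost zone a line-set
    crosses (center-detail, power-of-two macropixels): k = 2**i consecutive
    display rows are strobed together per write cycle, and 2**(n-1-i) such
    write cycles cover the line-set of 2**(n-1) display rows."""
    k = 2 ** zone_index
    write_cycles = 2 ** (n_zones - 1 - zone_index)
    return k, write_cycles

# The text's four-zone example: Z0 -> one row written eight times, through
# Z3 -> eight rows written one time.
assert [array_write_pattern(i, 4) for i in range(4)] == [(1, 8), (2, 4), (4, 2), (8, 1)]
```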
[0090] Regarding the expanding of the line-set, since the header packet data contains the details of the horizontal sizes of each zone, the decoding logic will be able to identify how many words will be received for each zone. For the particular example displayed in FIG. 3A, JIT order is assumed, where zone Z0 is last in the row and zone Z3 data is first in the row. In action 322, for zone Z3 data each word is expanded 8 times horizontally, wherein the final result of combined/concatenated words is stored in the row buffer with the correct offsets. Following this same example, zone Z2 data will be next; each word will be expanded 4 times horizontally and stored in the row buffer with the correct offsets. Next, zone Z1 data is expanded 2 times horizontally and stored in the row buffer with the correct offsets. Further, zone Z0 data will be last, and stored directly in the row buffer with the correct offsets. In an action 324, one row is written eight times. Additionally, the method 300 may include transferring the row buffer to the row queue. Row writes to the array can begin before all 8 rows are in the queue. Each row is written individually for a total of 8 write cycles. Write cycles will be spaced to match minimum row timing for one row at a time (x1 mode), thus the need for the queue.
[0091] Following the example of the detection of zone Z0 data, during the detection of the other respective zones Z1-Z3 (in action steps 330, 340, and 350), the zone data is expanded for each respective zone and combined within the row buffer in actions 322, 332, 342, and 352 in a similar fashion. That is, for zone Z3 data each word is expanded 8 times horizontally, wherein the final result of combined/concatenated words is stored in the row buffer with the correct offsets. Following this same example, zone Z2 data will be next; each word will be expanded 4 times horizontally and stored in the row buffer with the correct offsets. Next, zone Z1 data is expanded 2 times horizontally and stored in the row buffer with the correct offsets. Further, zone Z0 data will be last, and stored directly in the row buffer with the correct offsets. During the detection of zone Z1, two rows are written four times in action 334. During the detection of zone Z2, four rows are written two times in action 344. During the detection of zone Z3, eight rows are written one time in action 354.
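The horizontal expansion described above can be sketched at the bit level (illustrative only; real hardware would operate on whole words):

```python
def expand_words(bits, factor):
    """Horizontal macropixel expansion: repeat every bit `factor` times so a
    zone's word covers factor times as many display columns (factor = 8 for
    Z3, 4 for Z2, 2 for Z1, and 1 for Z0 in the four-zone example)."""
    out = []
    for b in bits:
        out.extend([b] * factor)
    return out

assert expand_words([1, 0], 4) == [1, 1, 1, 1, 0, 0, 0, 0]
# A row buffer concatenates expanded segments in arrival order at the correct
# offsets; Z0 data is stored directly (factor 1):
assert expand_words([1, 0, 1], 1) == [1, 0, 1]
```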
[0092] Moreover, the method may include retrieving a next line-set of data in an action 360, and repeating the detecting, expanding, storing, transferring, and retrieving until every line-set of data is retrieved (actions 362 and 320-360). That is, for example, the process continues as data from zones Z0-Z2 are received for the remaining seven rows of the line-set (where line-set = 8 rows) and these are transferred into the row queue. Accordingly, in a decision action 362, the method may detect whether the current row is the last row. If not, the process will proceed to action 320 and process the next line-set. If the current row is the last row, then the method proceeds to decision action 364 to detect if the raster mode is enabled. When the raster-order mode is enabled, the method will branch back to checking for the next modulation plane (in action 301). If not, then zone-order mode exists and the process ends with nothing more to do for this thread. In some embodiments, another thread writing buffer data would return to detecting the next header packet data. In the alternative, this other thread may be currently writing the next modulation plane’s buffer data. Some implementations may perform the buffer read-side A/B pointer toggle at the end of the read sequence instead of at the beginning.
[0093] During action steps 380, 382, 384, 386, and 388, data from the respective zones of Z3, Z2, Zl, and Z0 are written into a respective zone buffer (zone 3 buffer, zone 2 buffer, zone 1 buffer, and zone 0 buffer). Some macropixel rows will be full zone width; others that intersect with another adjacent zone may be shorter. Padding modes can be enabled to adjust data steering as needed to match buffer addressing. The expected number of words for each zone data is calculated from the sizes and mode controls. When each zone data is written, the method may include moving on to the next zone data in actions 382, 384, and 386. In an action 388, the method ends the zone-order processing by toggling the buffer A/B write-side pointers and returning to waiting for more modulation planes having header packet data.
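The A/B write-side toggle can be sketched as a double buffer (a minimal illustration; class and method names are assumptions, not from the patent):

```python
class DoubleBufferedZone:
    """A/B zone-buffer sketch: the write side fills while the read side drains
    the previously completed plane; toggling the write-side pointer at the end
    of zone-order processing hands the new plane to the reader."""
    def __init__(self):
        self.sides = [[], []]
        self.write_side = 0

    def write(self, words):
        self.sides[self.write_side].extend(words)

    def toggle(self):
        """End of plane: swap sides and clear the stale one for reuse."""
        self.write_side ^= 1
        self.sides[self.write_side].clear()

    def readable(self):
        return self.sides[self.write_side ^ 1]

buf = DoubleBufferedZone()
buf.write([0b1010, 0b0101])
buf.toggle()
assert buf.readable() == [0b1010, 0b0101]  # the completed plane is now readable
```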
[0094] FIG. 3B is a flow diagram of a method or process for writing a modulation plane of foveated data into the pixel array of a foveated modulation display device 130 of FIG. 1A during a periphery-detail mode of operation, wherein four zones exist, in accordance with some embodiments. Similar to the example of FIG. 3A, the flowchart 400 illustrates operations of a modulation plane state diagram, with the distinction that in this mode of operation the display device can decode and act upon incoming modulation planes with periphery-detail, where zone Z0 is located on the outer periphery of the display. For this example, a four-zone foveated image is to be displayed, where the four zones each use a square macropixel with a size equal to a power of two for that zone (i.e., zone 3 mpix size is 2^3 = 8, or 8x8).
[0095] In some embodiments, the method 400 for writing a modulation plane of foveated data into display array pixels may include parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, in an action 401. For example, the one or more modulation devices may wait for the modulation plane data to be sent in an effort to parse the header packet data. A strobe or data signature may be used to indicate the start of a new header packet data/modulation plane. Effectively, the plane decode logic may capture all the header fields to control the modes and parameters of the current modulation-plane. These can be used in the following decision actions.
[0096] During the periphery-detail mode, the method of writing a modulation plane of foveated data into the pixel array of a foveated display device may also include detecting whether the transmission-mode toggle bit is set to enable a raster-order mode in a decision action 402. If this is a raster-order mode modulation plane, the method proceeds to processing the incoming data as line-sets in the predetermined data order, starting by initializing the counters/indexes for the first row (in an action 410). In response to no detected raster-order mode, the method may proceed to the zone-order mode fork in an action 404. During this phase, two independent processes will begin to deal with writing (480) and reading (406) the zone buffers. It should be noted that the read side will be delayed until near or after the end of the writing sequence so that the data for zone Z0 (with zone 0 last) is ready when needed for the read (to be described with reference to the timing diagram 500 of FIG. 5). Buffer read time may generally be longer than buffer write time, because of any needed pad timing added to large macropixel line-sets for array write. In the zone-order mode, the method may include writing respective zone data of the incoming data into a respective zone buffer {Z(n-1), ..., Z3, Z2, Z1, Z0} (in actions 480-486) and waiting for a read delay of predetermined time in an action 406.
[0097] At the end of the read delay of action 406, the method may include toggling a read pointer corresponding to each respective zone buffer and starting the read data flow to match the order needed for raster-order (to write line-sets to the array). Some devices may optimize the timing in zone-order mode: since the internal buffer read path is not IO-bandwidth constrained, zone Z0 data can be read with a wider and/or faster bus to match the minimum row write timing to make up for pad timing added to large macropixel line-sets. For this reason, array write line-set timing differs for zone-order in comparison to raster-order. This step enables zone-order modulation planes to be written faster than raster-order, since any need for pad data timing has been removed.
[0098] At the end of the read delay and in response to a detected raster-order mode, the method may include identifying a first row of a line-set of data in action 410. Row and line-set counts keep track of the corresponding location on the display as it relates to the size and location of each zone, in an effort to know if the present line-set crosses or intersects with each of the zones. Some displays may provide a feature that defines the modulation plane size smaller than the total display size, and then position the active data of the modulation plane with an offset in the display area. This initialization step 410 would account for such offsets.
[0099] The method may further include, in a decision action 420, detecting whether the lowest-resolution zone (Z3), centered within the plurality of concentric zones during the peripheral-detail mode, is present in the identified line-set of data. In particular, the decoding logic will detect whether the line-set includes part of zone Z3. In other words, does the row selected from the line-set cross or intersect with zone Z3? Specifically, the current location of the pointer is compared to the zone Z3 size and offset, along with any global offsets. If the answer is affirmative, this line-set will also intersect all the upper zones, since zone Z3 is in the center. That is, there will be no need to detect whether the other zones are present in the row upon the affirmative detection of zone Z3. The method 400 may proceed to processing the line-set data in actions 422 and 424.
[00101] In response to no detected presence of the lowest-resolution zone, the method may include detecting whether a next consecutive zone (Z2, Z1, Z0) is present in the identified line-set of data until one zone is detected, in decision action steps 430, 440, and 450. In response to a detected zone, the method may include expanding the identified line-set of data, based upon the number of zones, horizontal zone-size, line-set-size, x-offset-size, row-time, and word-size; and storing the expanded data in a row buffer based upon the x-offset-size in action steps 422, 432, and 442. Further, the method may include transferring the row buffer to a row queue, wherein each row is written 2^(n-1) times (or, for this example, 2^3 = 8 times), where n = number of zones (for this example, n-1 = 3), in action steps 424, 434, 444, and 454. [00101] Regarding the expanding of the line-set, since the header packet data contains the details of the horizontal sizes of each zone, the decoding logic will be able to identify how many words will be received for each zone. For the particular example displayed in FIG. 3B, JIT order is assumed, where zone Z3 is last in the row and zone Z0 data is first in the row. In action 422, each word of zone Z3 data is expanded 8 times horizontally, wherein the final result of combined/concatenated words is stored in the row buffer with the correct offsets. Following this same example, zone Z2 data will be next; each word will be expanded 4 times horizontally and stored in the row buffer with the correct offsets. Next, zone Z1 data is expanded 2 times horizontally and stored in the row buffer with the correct offsets. Further, zone Z0 data will be last, and is stored directly in the row buffer with the correct offsets. In an action 424, one row is written eight times. Additionally, the method 400 may include transferring the row buffer to the row queue. Row writes to the array can begin before all 8 rows are in the queue.
Each row is written individually for a total of 8 write cycles. Write cycles will be spaced to match the minimum row timing for one row at a time (x1 mode), hence the need for the queue.
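The per-word horizontal expansion and row-buffer combining described above can be sketched as follows. This is an illustrative reconstruction in Python, not the claimed hardware implementation; the function names, the list-of-pixel-values representation, and the fill value are assumptions for illustration.

```python
# Illustrative sketch: expand one row of macropixel data for each zone
# horizontally (8x/4x/2x/1x as in the 4-zone example) and combine the
# results into one full-resolution row buffer at the correct offsets.

def expand_zone_row(macropixels, factor):
    """Repeat each macropixel value `factor` times horizontally."""
    out = []
    for mp in macropixels:
        out.extend([mp] * factor)
    return out

def build_row_buffer(width, zone_rows, fill=0):
    """zone_rows: list of (macropixels, factor, offset) tuples, with the
    lowest-resolution zone first so higher-resolution zones overwrite
    the center region, mirroring the expand-and-combine steps."""
    row = [fill] * width
    for macropixels, factor, offset in zone_rows:
        expanded = expand_zone_row(macropixels, factor)
        row[offset:offset + len(expanded)] = expanded
    return row
```

For example, an 8-pixel row with one outer macropixel expanded 8x and two inner macropixels expanded 2x at offset 2 yields the outer value on both edges with the inner values in the center.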
[00102] Following the example of the detection of zone Z3 data, during the detection of the other respective zones Z2-Z0 (in action steps 430, 440, and 450), the zone data is expanded for each respective zone and combined within the row buffer in actions 422, 432, and 442 in a similar fashion. That is, each word of zone Z2 data is expanded 4 times horizontally, wherein the final result of combined/concatenated words is stored in the row buffer with the correct offsets. Following this same example, zone Z1 data will be next; each word will be expanded 2 times horizontally and stored in the row buffer with the correct offsets. Further, zone Z0 data will be last, and is stored directly in the row buffer with the correct offsets (as pass-through data in step 452). During the detection of zone Z2, Z1, and Z0, one row is written eight times in actions 424, 434, 444, and 454, respectively.
[00103] Moreover, the method may include retrieving a next line-set of data in an action 460, and repeating the detecting, expanding, storing, transferring, and retrieving until every line-set of data is retrieved (actions 462 and 420-460). For example, the process continues as data from zones Z3-Z0 is received for the remaining seven rows of the line-set (where line-set = 8 rows) and is transferred into the row queue. Accordingly, in a decision action 462, the method may detect whether the current row is the last row. If not, the process will proceed to action 420 and process the next line-set. If the current row is the last row, then the method proceeds to decision action 464 to detect whether the raster mode is enabled. When the raster-order mode is enabled, the method will branch back to checking for the next modulation plane (in action 401). If not, then zone-order mode exists and the process ends with nothing more to do for this thread. In some embodiments, another thread writing buffer data would return to detecting the next header packet data. In the alternative, this other thread may be currently writing the next modulation plane's buffer data. Some implementations may perform the buffer read-side A/B pointer toggle at the end of the read sequence instead of at the beginning.
[00104] During action steps 480, 482, and 484, data from the respective zones Z3, Z2, Z1, and Z0 is written into a respective zone buffer (zone 3 buffer, zone 2 buffer, zone 1 buffer, and zone 0 buffer). Some macropixel rows will be full zone width; others that intersect with another adjacent zone may be shorter. Padding modes can be enabled to adjust data steering as needed to match buffer addressing. The expected number of words for each zone's data is calculated from the sizes and mode controls. When each zone's data is written, the method may include moving on to the next zone's data in actions 482, 484, and 486. In an action 488, the method ends the zone-order processing by toggling the buffer A/B write-side pointers and returning to waiting for more modulation planes having header packet data.
[00105] Referring to FIG. 4A, a diagram 490 of a display with 4 zones in center-detail mode indicates a line-set slice 494 of 8 rows across the middle of the display, such that the line-set crosses all 4 zones. This figure applies to the Just-In-Time (JIT) data order of a raster-order protocol or the line-set gather function as it relates to the expand and combine steps corresponding to action 322 outlined in FIG. 4B and FIG. 4C.
[00106] Referring to FIG. 4B, a multiple-step diagram of the expansion of the macropixel data in the method of FIG. 3A, correlating to action 322, showing the contents of a macropixel data block and the contents of a row buffer, in accordance with some embodiments, is shown. During raster-order mode, the method includes matching using JIT order for immediate writes. During zone-order mode, the method may include internal buffer reading to gather data for the line-set writing. For this given example, there is one row of zone-3 macropixels, two rows of zone-2 macropixels, four rows of zone-1 macropixels, and eight rows of zone-0 pixels. As shown at step 1, the method may include expanding zone-3 macropixels 8 times horizontally for the separate A and B regions representing the left and right sides of the zone within the row of the current line-set. This data is written into a row buffer with the zone 3 offsets (z3HoffsetA&B). As shown at step 2, the method may include expanding zone-2, where row 1 macropixels are copied four times horizontally into the right-side and left-side segments of the row buffer with zone 2 offsets (z2HoffsetA&B). As shown at step 3, the method may include expanding zone-1, where row 1 macropixels are copied two times horizontally into the right-side and left-side segments of the row buffer with zone 1 offsets (z1HoffsetA&B). As shown at step 4, the method may include writing zone-0, where row 1 pixels are copied into the row buffer at the zone 0 offset (z0Hoffset). The method may include copying the row buffer to the row queue using the steering horizontal offset (Hoffset), where outside pixels are filled with a predetermined fill value. As shown at step 5, the method may include writing the second row (row 2) of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset). Next, the row buffer is copied to the row queue using the steering horizontal offset (Hoffset), where outside pixels are filled with the same predetermined fill value.
[00107] Referring now to FIG. 4C, a multiple-step diagram of the continuation of the expansion of the macropixel data of FIG. 4A, in accordance with some embodiments, is shown. As shown at step 6, the method may include expanding the second row of zone-1 two times horizontally into the right-side and left-side segments of the row buffer with zone 1 offsets (z1Hoffset). As shown at step 7, the method may include writing the third row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset). As shown at step 8, the method may include writing the fourth row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset). As shown at step 9, the method may include expanding the second row of zone-2, where row 2 macropixels are copied four times horizontally into the right-side and left-side segments of the row buffer with zone 2 offsets (z2Hoffset). As shown at step 10, the method may include expanding the third row of zone-1, where row 3 macropixels are copied two times horizontally into the right-side and left-side segments of the row buffer with zone 1 offsets (z1Hoffset). As shown at step 11, the method may include writing the fifth row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset). As shown at step 12, the method may include writing the sixth row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset). As shown at step 13, the method may include expanding the fourth row of zone-1 two times horizontally into the right-side and left-side segments of the row buffer with zone 1 offsets (z1Hoffset). As shown at step 14, the method may include writing the seventh row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset). As shown at step 15, the method may include writing the eighth row of zone-0 into the row buffer at the offset for zone 0 (z0Hoffset). [00108] Referring now to FIG. 5, a timing diagram 500 of a zone-order image frame or modulation plane, showing the zone buffer write and read sequences according to one embodiment of the present disclosure, is shown. The following is according to a modulation plane embodiment with a modulation display device and could apply similarly to an image frame embodiment of a grayscale display device. The buffer for each zone is divided into A and B halves/sides representing two consecutive planes (or frames); the data for one plane should fit in one half/side. Assuming the write pointers are selecting the A side, the data from the first plane is written into the A side of the buffers as it is received, zone-by-zone.
The header indicates the delay time from the header to the start of reading from the buffers, from the A side for the first plane. During the read, the device collects data for each line-set in order (line-set 0 first, then line-set 1, etc.) as needed from all of the zones, then writes them to the array. The read can overlap the end of the write sequence so long as the writes for zone 0 line-sets stay ahead of the reads. The next plane's write to the B side can begin before the A-side reads are finished. At the end of the array writes, some displays require additional actions to apply the data to the active pixel elements.
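A minimal Python sketch of the A/B double-buffer pointer discipline described for FIG. 5 follows. The class and method names are illustrative only; the actual device toggles hardware write-side and read-side pointers and overlaps reads with the tail of the write sequence, which this simplified model does not capture.

```python
# Hedged sketch of the A/B double-buffered zone-buffer scheme: writes for
# plane k fill one side while reads for the previously completed plane
# drain the other; the write-side pointer toggles at the end of each plane
# (action 488 in the text).

class ZoneBuffer:
    def __init__(self):
        self.sides = {"A": None, "B": None}
        self.write_side = "A"

    def write_plane(self, zone_data):
        """Write a full plane's zone data, then toggle the write pointer."""
        self.sides[self.write_side] = zone_data
        self.write_side = "B" if self.write_side == "A" else "A"

    def read_plane(self):
        """Read from the side most recently written (the opposite of the
        current write side)."""
        read_side = "B" if self.write_side == "A" else "A"
        return self.sides[read_side]
```

After each completed plane write, reads drain the just-written side while the next plane's data can begin filling the other side.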
[00109] As used herein, the term module might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the present invention. As used herein, a module might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a module. In implementation, the various modules described herein might be implemented as discrete modules or the functions and features described can be shared in part or in total among one or more modules. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared modules in various combinations and permutations. Even though various features or elements of functionality may be individually described or claimed as separate modules, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common software and hardware elements, and such description shall not require or imply that separate hardware or software components are used to implement such features or functionality.
[00110] It should be appreciated that the methods described herein may be performed with a digital processing system, such as a conventional, general-purpose computer system. Special-purpose computers, which are designed or programmed to perform only one function, may be used in the alternative. FIG. 6 is an illustration showing an exemplary computing device 600 which may implement the embodiments described herein. The computing device of FIG. 6 may be used to perform embodiments of the functionality for performing the foveated image display in accordance with some embodiments. The computing device 600 includes a central processing unit (CPU) 602, which is coupled through a bus 606 to a memory 604, video driver 607, and mass storage device 608. Mass storage device 608 represents a persistent data storage device such as a floppy disc drive or a fixed disc drive, which may be local or remote in some embodiments. The mass storage device 608 could implement a backup storage, in some embodiments. Memory 604 may include read only memory, random access memory, etc. Applications resident on the computing device may be stored on or accessed through a computer readable medium such as memory 604 or mass storage device 608 in some embodiments. Applications may also be in the form of modulated electronic signals accessed through a network modem or other network interface of the computing device. It should be appreciated that CPU 602 may be embodied in a general-purpose processor, a special purpose processor, or a specially programmed logic device in some embodiments.
[00111] Display 612 is in communication with CPU 602, memory 604, and mass storage device 608, through video driver 607 and bus 606. Display 612 is configured to display any visualization tools or reports associated with the system described herein. Input/output device 610 is coupled to bus 606 in order to communicate information and command selections to CPU 602. It should be appreciated that data to and from external devices may be communicated through the input/output device 610. CPU 602 can be defined to execute the functionality described herein to enable the functionality described with reference to FIGS. 1A-4B. The code embodying this functionality may be stored within memory 604 or mass storage device 608 for execution by a processor such as CPU 602 in some embodiments. The operating system on the computing device may be iOS™, MS-WINDOWS™, OS/2™, UNIX™, LINUX™, VXWORKS™, or other known operating systems. It should be appreciated that the embodiments described herein may also be integrated with virtualized computing systems.
[00112] The following image, image-related, and/or other data characteristics represent data, protocol parameters, elements, and/or header components for transferring or sending image, image-related, and/or other data to a display: [00113] H-step-size (Horizontal size/offset step size)
[00114] H-step-size represents an increment in horizontal offset or horizontal size. Zones are defined by multiples of H-step-size, when defining the size or offset. If a value for H-size or H-offset is not a multiple of H-step-size, it will typically cause an error as it indicates a conflict between the sent data format and the expected decoding of the data at the receiver.
In some embodiments, two different H-step-size parameters may be used (e.g., to enable a finer step size for H-offset than for H-size). This value may be selected based on, for example, the display device's capability to decode and combine macropixel data from different zones while minimizing the display multiplexing logic. A system control algorithm utilized by the processor 110 can adjust/move each zone's size/offset to match these step-size increments. In some implementations, the size of the zone may need to be larger than the required area of interest by one step, to ensure that the required area of interest is included within the zone for small gaze offsets. For example, given H-step-size=32 and Z0Hsize=512, then Z0Hoffset may be any of {0, 32, 64, 96, ...}, i.e., any multiple of 32. The area of interest with eye tracking tolerance used to calculate Z0Hsize would be greater than or equal to 480 (480 + 32 = 512, which is a multiple of 32). Having a small step size reduces the amount of extra high-resolution bandwidth. In some embodiments, the H-step-size can be a multiple of the largest macropixel's horizontal size to prevent defining an H-size or H-offset that is a fraction of a macropixel.
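The alignment rule above can be sketched as follows. This is an illustrative reconstruction, assuming the "larger by one step" margin described in the text; the function names are not part of the protocol definition.

```python
import math

# Sketch of the H-step-size alignment rule: zone sizes and offsets must be
# multiples of H-step-size, and a zone is sized one step larger than the
# area of interest so the area stays inside the zone for small gaze offsets.

def zone_size_with_margin(area_of_interest, h_step_size):
    """Round the required width up to a multiple of H-step-size, plus one
    extra step of margin (an assumption drawn from the example above)."""
    return (math.ceil(area_of_interest / h_step_size) + 1) * h_step_size

def is_valid_offset(h_offset, h_step_size):
    """Offsets that are not a multiple of H-step-size indicate a conflict
    between the sent format and the receiver's decoding (an error)."""
    return h_offset % h_step_size == 0
```

With H-step-size=32 and a 480-pixel area of interest, this reproduces the Z0Hsize=512 figure from the example.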
[00115] Data Order
[00116] The system of foveated display having the foveated protocol or method, in accordance with the present invention, includes two options of data order: zone-order or raster-order. In the zone-order mode of operation, all of the data for a zone is sent first, with any cutout pixels inside the zone removed. If more than one zone is used, all of the data for a second zone is sent next, and so on until the last zone. Further, in the zone-order mode of operation, the order of zones may vary. That is, data for the zone having the highest resolution, Z0, can be first or last. Compared to raster-order mode, zone-order may have the least amount of padding, because it may avoid display-device-dependent timing. In the zone-order mode, the one or more modulation devices may be required to buffer the data, and then collect data for one row (or row-set) from each zone as appropriate before it can send or write it. [00117] When using the raster-order mode of operation, words are sent in sets of line data corresponding to Line-Set-Size (LSS) output rows. For example, the LSS may be four (LSS=4). Yet, the LSS is also configurable (as described in further detail below). Within a line-set, at least two options can be available: zone-set-order (with zone 0 last) or just-in-time (JIT) order. JIT makes the buffering the easiest by collecting all the data needed for the first row first (with zone Z0 data last), then just the new data for row 2, then the new data for row 3, and so on.
[00118] In both the raster-order and zone-order modes of operation, the data may flow from the top/start of the image to the bottom/end in a raster-like fashion, row-by-row, which may correlate to the display device in either top-to-bottom order or bottom-to-top order. In raster-order mode, the display device keeps track of the row number and line-set number, and then compares the present row/line-set to the header information for each zone, to determine if the present row/line-set intersects with one or more zones. The raster logic 150 inspects the corresponding number of words and format of the line-set according to the header parameters (see details given in the disclosure relating to FIGS. 3A and 3B).
[00119] The one or more modulation devices 130 may flip or reverse the data processing in the horizontal or vertical direction. In addition, the modulation devices 130 may steer the entire image or at least a portion of an image by some number of pixels to match system needs. These functions may be orthogonal to the foveation technique and may work together without interference.
[00120] The data order may be different in the image frame format than in the modulation plane format. In some embodiments, the host processor may prefer zone-order format while the display device may prefer raster-order format. Considering these two interfaces as a pair, the combination may be zone-to-zone if the display device accepts zone-order. It may be zone-to-raster if the driver is able to translate from one to the other. It may be raster-to-raster if the host processor can output raster-order.
[00121] Padding
[00122] The system and method of foveated display disclosed herein may include four different types of padding: per-zone padding, per-row padding, per-set padding, and min-row-time padding. Different embodiments may implement different forms of padding. In at least one embodiment, a raster-order format may have all four padding types enabled. In another embodiment, a zone-order format may only have per-zone padding enabled. Each can be enabled/required on its own, so one or all padding types may be used in the same frame or plane.
[00123] Per-zone padding
[00124] Transitions between data of different resolutions from different zones may require different controls to expand data and/or change destinations. For this reason, zones of differing resolutions should not be mixed within the same transfer word. As a consequence, the system and method of foveated display disclosed herein implements padding for the last word of a block, when the data block size is not a multiple of the word size (Wsize). Depending on other configuration options (e.g., word size, zone size, and bit-depth), no additional padding may be required to meet this constraint. When this padding is enabled with the plane format, dummy pad pixels are added, rather than pixels as with the image frame format. A single bit per pixel puts more pixels per word in modulation plane format. In embodiments where the driver controller has the ability to insert per-zone padding for zone-order modulation plane output and can accept non-padded image frame input data, the image frame data may not need to have zone-order padding.
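The per-zone padding arithmetic above can be sketched as follows. This is an illustrative reconstruction; the function and parameter names are assumptions, not protocol identifiers.

```python
import math

# Sketch of per-zone padding: when a zone's data block is not a whole
# number of transfer words, the last word is completed with dummy pad
# bits/pixels so that zones of different resolutions never share a word.

def words_with_per_zone_pad(block_pixels, bits_per_pixel, word_bits):
    """Whole transfer words needed for one zone's block, with the last
    word padded up to the word boundary."""
    total_bits = block_pixels * bits_per_pixel
    return math.ceil(total_bits / word_bits)

def pad_bits(block_pixels, bits_per_pixel, word_bits):
    """Dummy pad bits added to the last word (zero when the block already
    ends on a word boundary)."""
    rem = (block_pixels * bits_per_pixel) % word_bits
    return 0 if rem == 0 else word_bits - rem
```

For example, 100 single-bit modulation-plane pixels in 64-bit words occupy two words, with 28 pad bits completing the second word.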
[00125] Per-row padding
[00126] When the Hsize is not a multiple of Hmpix * Wsize and data for two rows of the same zone are adjacent, a system with per-row padding enabled can insert padding between the rows (at the end of the first row), such that the new row’s data starts on a word boundary. In some embodiments, a device may allow the partial data at the end of the first row to wrap into the first word of the next row, shifting all of the data for the next row over by the wrap amount. In this case, per-row padding may be avoided by not enabling per-row padding. Yet, padding at the end of one zone data before the next zone for partial words may still be required, as selected by per-zone padding.
[00127] Per-set padding
[00128] Per-set padding is an option that is only available in raster-order format. Only raster-order format (not zone-order) uses line-sets. Per-set padding adds padding at the end of each line-set, to end on a word boundary. Line-sets will end on a word boundary even without per-set padding enabled, if per-zone or per-row padding is enabled.
[00129] Min-row-time padding [00130] Min-row-time padding may add whole or partial words of padding at the end of a row or row-set to allow the display device sufficient time to meet the row strobe timing. If enabled, a parameter is used to define how many word times are needed to write a row or multi-row. Simultaneous writes of multiple rows may use a staggered write approach. The amount of stagger depends on the number of rows being written simultaneously. A different value may be defined for each size of the multi-row write options available. For example, the Minimum-Row-Words (i.e., Minimum-Row-Clocks = MRC) for 1 row-at-a-time may be 6 (MRCx1=6). The MRC for two rows-at-a-time may be 6 (MRCx2=6); the MRC for four rows-at-a-time may be 7 (MRCx4=7); and the MRC for eight rows-at-a-time may be 9 (MRCx8=9). The amount of whole-word padding required is calculated by first padding the data words to whole word boundaries as selected by other padding modes, and then subtracting the total number of words with active data for the row or row-set from the total number of word-times required by the row or row-set.
[00131] If partial rows are being written directly in zone-order, this may apply to each line of zone data (which may involve a single row write or a multi-row simultaneous write, depending on the macropixel size for that zone).
[00132] For raster-order modes where data is grouped in line-sets, min-row-time padding is only added after the end of the data for the line-set. Different amounts of total time can be required based on the number of row-time periods required by the line-set, which is determined by zone crossing conditions. If there is only one set of unique row data (e.g., when the entire line-set is covered by one macropixel row), then only one row time is needed. If there are two or more sets of unique row data, then the row-times for each set are added together to get the total time required for the line-set. When a line-set is written in only one row time (e.g., when the line-set is covered by just one macropixel row), all rows may be written simultaneously, and then the corresponding min-row-time can be used to calculate the amount of pad needed. For example, if four rows are being written at the same time given MRCx4=7 and there are four words of macropixels, then three words of pad can be inserted/expected. If the line-set crosses zone Z0, where all the rows are written individually, LSS=4, MRCx1=6 and there are 22 words to transfer the line-set data, then two words of pad are needed to make a total of 6*4=24 words/clocks.
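The min-row-time arithmetic in the two examples above can be reproduced with a short sketch. The MRC values and word counts come from the text; the function itself is an illustrative reconstruction, not the claimed method.

```python
# Sketch of min-row-time padding for a raster-order line-set: the line-set
# must occupy at least (row-time periods) * (MRC for that write width)
# word-times, so pad words fill any shortfall after the active data.

def min_row_time_pad(active_words, row_time_periods, mrc_per_row):
    """Pad words needed so the line-set meets the minimum write timing."""
    required_word_times = row_time_periods * mrc_per_row
    return max(0, required_word_times - active_words)
```

Four rows written simultaneously (one row-time period) with MRCx4=7 and 4 data words need 3 pad words; a line-set crossing zone Z0 with rows written individually (4 row-time periods, MRCx1=6) and 22 data words needs 2 pad words to reach 24 words/clocks.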
[00133] In all these cases, which apply to the modulation plane protocol, a corollary pixel padding may be inserted in the foveated image frame to facilitate a direct translation to the modulation plane format, when the two formats are using the same modes and order. First calculate the padding needed for the modulation plane words. Then include a pad pixel in the image frame for every pad bit in a modulation plane word.
[00134] In some embodiments, the use of horizontal sizes (Hsizes) that are a multiple of the word size (Wsize) times the macropixel size for a corresponding zone may reduce the need for padding between zone data in most cases. For example: if Wsize=128, Z2Hsize=2048, Z1Hsize=1024, and Z2Hmpix=4, then the number of Z2 words outside of zone 1 is: (2048-1024)/4/128 = 2.0; however, if zone 2 is smaller: (1920-1024)/4/128 = 1.75; or if zone 1 is smaller (if the eye gaze is near the edge): (2048-896)/4/128 = 2.25. This may also cause padding in zone 1 data; there may be overall less padding (fewer words) if a zone size is not reduced as the gaze approaches the edge.
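The word-count arithmetic from this example can be checked with a one-line helper; the function name is illustrative.

```python
# Illustrative check of the zone word-count formula: the number of
# outer-zone words outside an inner zone is
# (outer Hsize - inner Hsize) / outer Hmpix / Wsize,
# which is a whole number only when the sizes are suitable multiples.

def zone_words_outside(outer_hsize, inner_hsize, hmpix, wsize):
    """Fractional word count; a non-integer result implies per-zone
    padding for the partial last word."""
    return (outer_hsize - inner_hsize) / hmpix / wsize
```

With Wsize=128 and Z2Hmpix=4: sizes 2048/1024 give exactly 2.0 words (no padding), while 1920/1024 give 1.75 and 2048/896 give 2.25, each requiring a padded partial word.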
[00135] Line-Set-Size (LSS)
[00136] The number of display rows which are written as a set (in Raster-order mode).
This also indicates a maximum number of rows that can be written at the same time for a zone/macropixel row that is the same size as LSS. Due to physical routing layout, a display device may have a fixed LSS, which may require a system and driver to use this LSS or smaller. The value for LSS may also define a matching horizontal expansion capability. When this is not the case, another parameter may be used. Thus, if a display can support a max LSS=4 (max Vmpix=4), then the display can also support a max Hmpix=4.
[00137] As a non-limiting example, if there is just one zone, the line-set size may be one (LSS=1), implying there is no foveated reduction; zone 0 fills the display, with Hres (horizontal resolution) pixels in each row, and Vres (vertical resolution) rows of pixels. Put differently, Z0Hsize=Hres; Z0Vsize=Vres. This is a normal display mode at full resolution; every line is the same size: Hres bits.
[00138] As another non-limiting example, if there are only 2 zones (e.g., zone 0 and zone 1), where the zone 1 vertical and horizontal macropixel size is 2^1=2, and LSS=2, then a line-set covers 2 display rows. Z1Hsize=Hres; Z1Vsize=Vres; Z1Hbits = Z1Hsize/Z1Hmpix = Hres/2. Some line-sets will only be in zone 1: one line of Z1Hbits bits corresponds to two rows of display pixels. It will take x=Z1Hbits/Wsize words to transmit the data for that line-set. If this is a non-integer number of words and per-zone padding is enabled (or per-set padding is enabled, since there is only one zone in this line-set), then it is rounded up. The last word in the set will have unused/pad data in it. In addition, with min-row-time padding enabled, if x < n, then pad words are added to meet n, where n = the minimum number of clocks to write a single row. In this example, the line-set is just one macropixel row.
[00139] The remaining line-sets will cross both zone Z0 and zone Z1. In this case, the zone Z1 words are sent first, followed by both rows of zone Z0 data. If there is zone Z1 data on each side of the zone Z0 data in the display, the zone Z1 data is packed together in the zone Z1 word group. The display will separate these using the offset parameters from the header. The total number of words for this line-set is x0 (zone Z0 words) + x1 (zone Z1 words) + xp (pad words); x0+x1+xp >= 2n.
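The two-zone line-set word count above can be sketched with illustrative numbers; the function and parameter names are assumptions, and the bit counts stand in for whatever the header-derived sizes would yield.

```python
import math

# Hedged sketch of the two-zone (Z0/Z1) line-set budget: total words are
# zone Z0 words + zone Z1 words + pad words, and the total must reach at
# least 2n word-times (n = minimum clocks per row; two rows are written
# individually because zone Z0 has unique data per row).

def lineset_words_two_zones(z0_bits, z1_bits, wsize, n):
    x0 = math.ceil(z0_bits / wsize)   # both rows of full-res zone Z0 data
    x1 = math.ceil(z1_bits / wsize)   # packed zone Z1 macropixel data
    xp = max(0, 2 * n - (x0 + x1))    # min-row-time pad up to 2n
    return x0 + x1 + xp
```

For instance, 512 bits of Z0 data and 256 bits of Z1 data with Wsize=128 give x0=4 and x1=2; with n=6 the 2n=12 word-time floor forces 6 pad words, for 12 words total.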
[00140] As another non-limiting example, if there are 3 zones and LSS=4, then the line-set covers four display lines. Additionally, Z2Hsize=Hres, Z2Hmpix=4, and Z1Hmpix=2. Some line-sets will only be in zone Z2 (outside of zone Z1): one line of Hres/4 bits corresponds to Z2Vmpix=4 rows of display pixels. It will take x2=Hres/4/Wsize words to transmit the data for that line-set. If this is a non-integer number of words and corresponding padding is enabled, then it is rounded up. The last word in the set will have unused/pad data in it. In addition, with min-row-time padding enabled, if x2 < n, then pad words are added to meet n.
[00141] If the line-set crosses all 3 zones, then zone Z2 data is sent first (to ease implementation of writing into the display line buffer), and zone Z1 and zone Z0 data follow in order of rows for just-in-time (JIT) order. Each word only contains data from the same zone; there may be unused bits in the last word of a zone set. Depending on zone sizes, pad words may not be needed to meet the minimum row timing. Each row will be written one at a time because of the unique data per row in zone 0. The total words for the line-set are: x = x0+x1+x2+xp, where x>=4n. The number of words for zone Z2 data is less than it was for the previous example because it does not include the portion of the line in zones Z1 and Z0. Therefore, x2 = roundup((Z2Hsize-Z1Hsize)/Z2Hmpix/Wsize).
[00142] In some embodiments, some non-standard formats may also be used, where a low-res overlay over the entire display is desired. Such a format may be defined with multiple zones, but with the higher-resolution zones set to size 0. A format may also be defined with just one zone (zone 0), but with its macropixel size defined to be larger than 1. In this case, the LSS is also set equal to the larger macropixel size to enable simultaneous row writes. The display device would have to be compatible with this format and system configuration.
[00143] Min Row Clocks [00144] The display device has timing requirements to write a row to its array. This can be translated into a minimum number of clocks ("n") at the given clock speed. When only writing high-res data for an entire display, there are plenty of clocks/time to write each row. Yet, when transmitting a single row of macropixels for a line-set all in one low-res zone, the number of data words is much less and may be shorter than the time required to write a row to the array. For example, with three zones and a line-set all in zone Z2, only ¼ of the clocks/words are provided, which may be too short for the row write timing. By defining the minimum number of clocks needed, the transmitter can pad extra words at the end of the line-set data to meet the required timing. For zone Z2-only line-sets with LSS=4, only 1n (or 1*MRCx4) row time is needed, and thus all four rows can be written simultaneously. For zone Z1 and Z2 line-sets, 2n (or 2*MRCx2) row times are needed, whereby the rows are written in pairs. For line-sets crossing all three zones, 4n (or 4*MRCx1) row times are needed, whereby the rows are written independently.
[00145] In some embodiments, the methodology described herein does not use a data enable or wait signal for dynamic flow control; this is a known predictable timing requirement and can be accommodated by padding the transmitted data.
[00146] Data Formats
[00147] FIG. 8A illustrates a data format of a foveated image in host memory, in accordance with some embodiments. The foveated image in the host memory may include a full-color image of macropixels, for example, for three zones. As shown, the length of each row and the number of rows in each group depend upon the zone sizes and offsets. In an embodiment of the present invention, even within one zone, a system and/or method in accordance with the present invention may have shorter rows, corresponding to areas that have a cut-out of an inner zone, where the data for both sides of the zone on either side of the cut-out are concatenated to save space and bandwidth/time.
[00148] FIG. 8B illustrates a data format for image frames or modulation planes, where the data is sent via Zone-order with zone pad, in accordance with some embodiments. It is to be noted that the end-of-zone pad, when used, causes the macropixel data to end on a display word boundary, as shown. However, in accordance with an embodiment of the present invention, the macropixel data may not end on a video row boundary.
[00149] The video format of standard image transport interfaces may use a common row length, which does not match zone row lengths. Macropixel rows are packed and wrapped inside the video row active data. In an embodiment of the present invention, there may be blanking time between each video interface row.
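The packing and wrapping of macropixel rows into fixed-length video rows can be sketched as follows. This is an illustrative, non-normative example with hypothetical names; blanking between video rows is handled by the transport interface itself and is not modeled here:

```python
def wrap_into_video_rows(macropixel_words, video_row_len):
    """Pack a flat stream of packed macropixel data words into the
    fixed-length active rows of a standard video-transport interface.

    Sketch only: macropixel rows are assumed already concatenated into
    one word stream; the last video row is zero-padded to full length.
    """
    rows = [macropixel_words[i:i + video_row_len]
            for i in range(0, len(macropixel_words), video_row_len)]
    if rows and len(rows[-1]) < video_row_len:
        # The macropixel data need not end on a video row boundary,
        # so the final active row is padded out.
        rows[-1] = rows[-1] + [0] * (video_row_len - len(rows[-1]))
    return rows
```

Because the common video row length does not match any zone row length, zone rows simply straddle video row boundaries inside the active data.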
[00150] Plane data formats are usually continuous data and can be very similar to image data formats; they are just 1 bit per macropixel instead of n bits per macropixel, with no horizontal sync (H-sync) or horizontal blanking (H-blanking).
[00151] The data formats shown in FIGS. 8B, 8C, 8D and 8E could be formats for full- color macropixels. For color-sequential formats, the data order after the header is repeated three times. The amount of padding for word boundaries would probably be different for color-sequential data.
[00152] FIG. 8C illustrates a data format for image frames or modulation planes, where the data is sent via Zone-order with row pad, in accordance with some embodiments. It is to be noted that zone Z0 rows could need padding to end on a display word boundary, but often the zone Z0 horizontal size (H-size) is on a word boundary.
[00153] FIG. 8D illustrates a data format for image frames or modulation planes, where the data is sent via Raster-order and zone-set-order with per-zone and line-set row-timing padding, in accordance with some embodiments. It should be noted that depending on zone vertical offsets, the first line-set usually just contains data for the outside zone, as implied in the figure. But it could have multiple zone data.
[00154] The H-size of the display will be wider than any macropixel row. Macropixel rows are padded to word boundaries then packed together in the data stream. It should be noted that zone Z0 rows could need padding to end on a display word boundary; but often the zone Z0 H-size is on a word boundary.
[00155] FIG. 8E illustrates a data format for image frames or modulation planes, where the data is sent via Raster-order and Just-In-Time (JIT) order with per-zone and line-set row timing padding, in accordance with some embodiments. Macropixel data is packed as row- sets and padded for word boundaries and timing needs then packed together in the data stream. It should be noted that depending on zone vertical offsets, the first line-set usually just contains data for the outside zone, as implied in the figure. Yet, in some embodiments, there could be multiple zone data.
[00156] Multi-Drivers Per Array Column

[00157] For multi-driver configurations, FIG. 9A illustrates the physical layout of column multi-driver configurations, in accordance with some embodiments. Modulation planes are particularly pressed to meet pixel array write timing, and line-sets that have little data (for example, those that only cross the highest macropixel-ratio zone) may need additional time to finish the multi-row write operation. Rather than adding padding pixels/bits to meet timing, multiple column drivers may be used in some embodiments to simultaneously write multiple line-sets at the same time. One, two and four drivers per column configurations are shown. Various arrangements of the multiple drivers are shown where 1, 2, 4 or 8 adjacent rows share the same driver; extending these options to larger sets should be understood by someone skilled in the art.
[00158] FIG. 9B illustrates the number of row write times required for the multi-driver arrangements of FIG. 9A for certain line-set conditions and simultaneous groupings, in accordance with some embodiments. There is a design trade-off in using multiple column drivers versus routing space and row write times. There is also another trade-off between larger adjacent driver arrangements and additional buffering to write more line-sets simultaneously. Some systems or embodiments may also provide single-pixel steering (vertical and horizontal) of the foveated image, which causes non-alignment of the line-set with the adjacent driver arrangement; this further compounds the advantage of multiple drivers and requires larger sets to achieve the time savings as compared to the single driver per column case.
[00159] To overcome the min-row-time padding associated with large macropixels, display devices may have multiple column drivers per display column. Each of these drivers may be routed to different sets of adjacent rows, so that multiple different rows with different data can be written at the same time; this allows min row timing to apply to multiple row-sets. The data from the first row/line-set can be buffered up to be used with the 2nd row/line-set, and so on. These levels may be defined and may be included in a header or with data sent by a system, method, and/or protocol, in accordance with the present invention, to a display, using the following parameters:
[00160] 1to1: This represents the use of one driver per column.
[00161] 2ps: This represents the use of two drivers per column, one per line-set: one is connected to all the even line-set rows, the other is connected to all the odd line-set rows.
This allows the data/clocks from two line-sets to be applied toward the timing required for the rows of one line-set. This assumes that both line-sets cross the same number of zones (V- step-size should be 2*LSS). Any padding that is still needed is applied after the 2nd line-set.
[00162] 2ph: This represents the use of two drivers per column, one per half line-set: one is connected to the top half of the line-set rows, the other is connected to the bottom half of the line-set rows. This allows line-sets crossing the 2 largest zones, which would normally require 2 row times, to be written at the same time; this uses the clocks for the whole line-set to meet the MRC. This option is probably less preferred than a full line-set mapping.
[00163] 4ph: This represents the use of four drivers per column, one per half line-set: one is connected to all the even line-set top half rows, the next to the even bottom half rows, the next is connected to all the odd line-set top half rows and the last is connected to the odd bottom half rows. This allows more time for both the largest zone and the largest two zones. Again, V-step-size should be 2*LSS.
[00164] 4ps: This represents the use of four drivers per column, one per line-set: one is connected to the first of four line-sets, the next to the second of four line-sets, the next is connected to the third and the last is connected to the fourth. This allows the data time from 4 line-sets to be applied to needed timing for one line-set (V-step-size should be 4*LSS).
[00165] As a non-limiting example, if a display utilizes 2ps multi-drivers (the header selects MD=2ps) and LSS=4, this is like two line-sets being treated as one, for min-row-time padding. Yet, the display buffers the data to write both line-sets simultaneously. Therefore, n applies to the number of clocks for the combined line-sets in zone 2 only; wherein, the number of clocks is 2n, if in zones 1 & 2; and 4n, if the line-sets cross zone 0. It should be noted that this also restricts the size and offsets of the zones in the vertical direction to be a multiple of eight display rows. In this way, both groups of four rows have the same zone crossing condition.
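The multi-driver options above and the vertical-alignment constraint each implies can be tabulated as a sketch. The dictionary keys follow the parameter labels in the text; the "line-sets combined" values are read from the V-step-size statements (2*LSS for 2ps and 4ph, 4*LSS for 4ps), and the derived alignment reproduces the eight-row multiple noted for the 2ps, LSS=4 example:

```python
# Illustrative table (not a normative encoding) of the multi-driver modes.
MULTI_DRIVER = {
    "1to1": {"drivers_per_column": 1, "linesets_combined": 1},
    "2ps":  {"drivers_per_column": 2, "linesets_combined": 2},  # one per line-set
    "2ph":  {"drivers_per_column": 2, "linesets_combined": 1},  # one per half line-set
    "4ph":  {"drivers_per_column": 4, "linesets_combined": 2},  # V-step = 2*LSS
    "4ps":  {"drivers_per_column": 4, "linesets_combined": 4},  # V-step = 4*LSS
}

def zone_v_alignment(md_mode, lss):
    """Required multiple (in display rows) for zone vertical sizes and
    offsets, so that all combined line-sets share the same zone-crossing
    condition (hypothetical helper name)."""
    return MULTI_DRIVER[md_mode]["linesets_combined"] * lss
```

For MD=2ps with LSS=4 this yields 8, matching the restriction that zone vertical sizes and offsets be multiples of eight display rows.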
[00166] Macropixel ratios, Pixels-per-Degree and Cones-per-Pixel
[00167] FIG. 10 illustrates the diameter, visual field width and cone density of various regions of the human eye relative to the fovea, as available from public sources. The data is used as exemplary of optical design consideration of a foveated display system.
[00168] The most straightforward approach to analyzing the human vision response in a way that matches display hardware characteristics is to use binary multiples in pixel resolution (the rationale for integer steps and other options is discussed later). For example, let the central high-resolution macropixels be 1x1 display pixels; the next step in reduced resolution would be a macropixel that is equivalent in size to 2x2 of the high-resolution pixels, the next would be 4x4, then 8x8 and so forth. However, since the greatest ratio between central and peripheral cone density is 10-14 to one (see FIG. 10), there is no binary multiple above 8x8 that stays within the human vision sensitivity. In accordance with an embodiment of the present invention, a horizontal macropixel ratio is the same as the vertical macropixel ratio for the zone, and, for example, may result in a square array of display pixels per macropixel; however, rectangular macropixel sizes could be used. In an embodiment of the present invention, each macropixel ratio may be an integer that enables the display device hardware to copy (or write) each macropixel bit or value to the corresponding display pixels’ bit or value, in the horizontal and/or vertical direction.
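The integer copy/replicate write just described can be sketched for one macropixel row. This is an illustrative example (hypothetical function name) for square integer macropixel ratios:

```python
def expand_macropixel_row(macro_row, ratio):
    """Expand one row of macropixel values into `ratio` identical display
    rows, each `ratio` times wider -- the integer copy (replicate) write
    described above. Sketch for square macropixel ratios (1, 2, 4, ...)."""
    # Horizontal copy: each macropixel value repeated `ratio` times.
    display_row = [v for v in macro_row for _ in range(ratio)]
    # Vertical copy: the same display row written `ratio` times.
    return [list(display_row) for _ in range(ratio)]
```

Because the expansion is pure replication, all `ratio` display rows share identical column-driver data, which is what allows them to be written simultaneously.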
[00169] An imaging application will often pick a resolution for the central area that matches an average resolution for an area larger than the Foveola or FAZ; thus, the application is not using the highest fovea density. A display typically does not include the entire periphery of the user’s FOV; thus, the lowest sensitivity of the human vision in the outer periphery is not used. Each of the lower resolution zones select its resolution based on the highest sensitivity portion of the zone, which is at the inner boundary of the zone. Or in other words, the inner zone boundary is selected based on the vision sensitivity dropping to the selected resolution for the zone. This means the lowest system resolution will be based on the sensitivity at the boundary between the two lowest resolution zones, which is an angle much less than the total FOV. Thus, considering all three of these factors, the ratio of highest to lowest resolution may only be 4 to 1.
[00170] The number of cones per display pixel in the central high-resolution zone is another way to select the optical system to fit the resolution to the FOV. The industry considers a “Retina Display” to have 60 pixels per degree of VF for the central region, which equates to roughly 5-8 cones per pixel (where, Foveola: 500 cones/mm * 0.35 mm/degree / 60 pixels/degree = 2.9 cones/pixel in one dimension; 2.9² = 8.5 cones/pixel in area. FAZ: 400 cones/mm * 0.5 mm / 1.5 degree / 60 pixels/degree = 2.22 cones/pixel in one dimension; 2.22² = 4.9 cones/pixel in area). This system and method of foveated display does not select a cone-to-pixel threshold or sensitivity; that is controlled by the application. This system and method of foveated display just relates the ratio of the central zone resolution to the lower resolution zones.
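The cones-per-pixel arithmetic in the parenthetical above can be checked directly; the retina figures come from the public data cited for FIG. 10, and the 60 pixels/degree value is the "Retina Display" convention:

```python
# Cones-per-pixel check for a 60 pixels/degree ("Retina Display") system.
PIXELS_PER_DEGREE = 60

# Foveola: 500 cones/mm at 0.35 mm/degree.
foveola_linear = 500 * 0.35 / PIXELS_PER_DEGREE        # ~2.9 cones/pixel (1-D)
# FAZ: 400 cones/mm over 0.5 mm spanning 1.5 degrees.
faz_linear = 400 * 0.5 / 1.5 / PIXELS_PER_DEGREE       # ~2.22 cones/pixel (1-D)

# Squaring the linear densities gives cones per pixel by area.
foveola_area = foveola_linear ** 2                     # ~8.5 cones/pixel (area)
faz_area = faz_linear ** 2                             # ~4.9 cones/pixel (area)
```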
[00171] As discussed above, using power-of-2 steps in resolution, the peripheral vision needs less than ¼ of the linear resolution of the central region. Hence, the use of a 4x4 macropixel (1 macropixel represents 16 display pixels, which equals a 1/16 reduction in area resolution) is a corresponding option. Thereby, the resolution for the outer foveation zone would be defined as 4x4. There is only one binary step between 4x4 and 1x1, which is 2x2. Accordingly, three zones are defined, where one zone coincides with each of the binary resolution steps. Other numbers of zones and steps in resolution may be supported, but these three would be typical for this system and method of foveated display.
[00172] Sizes of Zones or Transition Between Zones
[00173] The next consideration may be where to transition between zones and how much would be the total reduction in bandwidth. In an embodiment of the present invention, where to transition between zones is dependent on the optical system parameters and the needs of the host processor. This system and method of foveated display is agnostic to the sizes of each zone (with minor constraints of overlapping regions and step sizes in area, as discussed later). It is instructive to observe the profile of the fovea regions to appreciate the magnitude of the resolution reductions that the host will control. Analyzing the resolution in terms of the user’s visual field, each transition point can be defined in terms of degrees VF from the center of gaze. A transition from the peripheral low resolution (macropixels of 4x4) to a mid resolution (macropixels of 2x2) may be made at the boundary between the Perifovea and the Mid Peripheral (because its cone density is ~1/4, linearly, of the central region), which is about 9° from the center. The system may add to that some tolerance to allow for eye tracking accuracy and latency. If that tolerance is 5°, the transition would be pushed out to ±14°. Similarly, there would be a transition from the mid resolution (macropixels of 2x2) to full resolution (1x1 pixels) at the boundary between the Fovea Centralis and the Parafovea (because its cone density is ~1/2 of the central region), which is about ±3°; with the same tracking tolerance, this makes the zone transition at about ±8°. Some systems with a very wide field of view (FOV) may even accept a third transition from 1/4 resolution to 1/8 resolution.
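The worked transition angles above (3° and 9° boundaries pushed out by the tracking tolerance) can be sketched as a tiny helper. This is illustrative only, with hypothetical names; the specification leaves the actual zone sizes to the host and optical designer:

```python
def zone_transitions(tracking_tolerance_deg):
    """Zone-boundary angles in degrees from the center of gaze for the
    example above: the 1x1 -> 2x2 transition near +/-3 deg and the
    2x2 -> 4x4 transition near +/-9 deg, each pushed out by the
    eye-tracking accuracy/latency tolerance."""
    return {
        "full_to_2x2": 3 + tracking_tolerance_deg,   # Fovea Centralis/Parafovea
        "2x2_to_4x4": 9 + tracking_tolerance_deg,    # Perifovea/Mid Peripheral
    }
```

With a 5° tracking tolerance this reproduces the ±8° and ±14° transitions in the text.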
[00174] All these transition considerations have little impact on this system and method of foveated display. The system designer will have to determine the worst-case zone sizes to calculate the maximum bandwidth requirements. As will be shown later, a typical configuration of 3 zones with a 60-degree FOV per eye cuts the bandwidth to 1/6th; a wider FOV and tighter tracking tolerance will reduce it even further.
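The bandwidth reduction can be estimated for assumed zone sizes. The zone dimensions below are hypothetical (the specification leaves them to the system designer); the calculation only illustrates how three zones with 1x1/2x2/4x4 macropixel ratios reach roughly the 1/6th figure cited:

```python
def foveated_fraction(z0, z1, z2):
    """Fraction of full-resolution bandwidth consumed by a 3-zone foveated
    frame, assuming square concentric zones of side z0 <= z1 <= z2 display
    pixels and macropixel ratios 1x1 / 2x2 / 4x4 (a sketch with
    hypothetical zone sizes)."""
    full = z2 * z2
    data = (z0 * z0                       # Z0 at full resolution
            + (z1 * z1 - z0 * z0) / 4    # Z1 annulus as 2x2 macropixels
            + (z2 * z2 - z1 * z1) / 16)  # Z2 annulus as 4x4 macropixels
    return data / full
```

With z0=512, z1=1024, z2=2048, the fraction is 0.15625, i.e. about 1/6.4 of the full-resolution bandwidth, consistent with the roughly 1/6th reduction stated above.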
[00175] Digital Display Device

[00176] Display devices typically use a memory array structure to control the pixels of the display; each column in the display has a column data driver and each row has a write enable. Thus, whole rows of pixels can be written at once. Consequently, data from different zones that overlap the same row can be combined before the row is written. Some display devices may have block enables to allow only a portion of a row to be written at a time, which may allow writing only the portion of the row matching a foveation zone, which would enable direct zone-order writing. Yet, this would require additional time to write to some rows multiple times to address all pixels in the row. The block enable boundaries may put unacceptable constraints on foveation zone boundaries to make them align, which would make zones much larger and thus limit the data bandwidth reduction. Very low resolution zones will likely be unable to write each row-set in the short time to receive their reduced data set, thus requiring padding/throttling of the lowest zones and reducing their benefit. For these reasons, block enables for partial row writing will have limited benefit. Some devices may still segment the rows and have multiple enables; the concepts in this system and method of foveated display still apply.
[00177] To communicate zone definitions and other metadata, a header may be used at the beginning of a plane to define the total image or plane size, the size and location of each area-of-interest zone, as well as parameters controlling the order and packing of data. This metadata could also be sent as side-band data outside the normal video data (i.e., command packets during the vertical blanking interval). The header method at the beginning of video data could be packed as one parameter per pixel or overlaid as one bit of a parameter per pixel (the pixel data is all zeroes or all ones) so that it passes easily to the modulation plane format. For example, in one protocol configuration for the plane interface there are three (3) zones and data is grouped in sets of 4 output rows, where each word of data contains data of just one resolution from one zone (word size is defined by the physical interface to the display device and used by the host in formatting the packed data). The number of words needed for each 4-row set varies according to whether the rows are only in the outermost zone or whether they cross multiple zones. Thus, the transmitted data size per row-set will be larger or smaller as the image is sent from top to bottom; there is no constant line size. In this same example, the display device can include the ability to write up to 4 rows of pixels at the same time from one short line of input macropixels; it can also save and expand the low-resolution macropixels, mix them with high-resolution pixels for multiple rows, then self-time the write to each row.

[00178] As an example, the total image/display area may be divided into 3 zones, as shown below:
[Table: example 3-zone division of the image/display area, reproduced as images (imgf000057_0001, imgf000057_0002) in the original publication.]
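The header contents described in paragraph [00177] might be modeled as follows. This is a non-normative sketch; all field names are illustrative, not the specification's, and no wire encoding is implied:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Zone:
    """One area-of-interest zone (illustrative field names)."""
    h_size: int       # horizontal size in display pixels
    v_size: int       # vertical size in display pixels
    h_offset: int     # horizontal offset of the zone
    v_offset: int     # vertical offset of the zone
    mpix_ratio: int   # macropixel ratio: 1, 2 or 4

@dataclass
class FoveationHeader:
    """Per-plane metadata of the kind paragraph [00177] describes."""
    image_h: int              # total image/plane width
    image_v: int              # total image/plane height
    zone_order: bool          # True: zone-order; False: raster-order
    line_set_size: int        # LSS, e.g. 4 output rows per set
    zones: Tuple[Zone, ...]   # innermost zone (Z0) first
```

Such a structure could equally be carried as side-band command packets during vertical blanking, as the text notes.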
[00179] There are multiple types of display devices applicable to this system and method of foveated display. Some displays are monochrome, some are simultaneous full color, and others are color-sequential. Some displays inherently represent the full intensity range as a fractional steady-state intensity (i.e., analog displays or self-modulating digital displays); others are inherently digital and only drive to binary stable levels, which require frequent updates of modulation planes to modulate the intensity. Displays often have a frame buffer memory for storing image data to then drive and illuminate the display using the data from the buffer; there are usually two buffers for ping-pong operation (during one frame, one buffer is being written with new frame data while the other buffer is being read to display the previous frame data; the next frame, they swap read and write); however, optimized designs benefit from a single buffer if the transport data format/protocol/bandwidth supports the display sequence. The most demanding of these on transfer and write timing is a color-sequential, binary display with a single frame buffer. The remaining description will focus on this configuration, but the foveation concepts in this system and method of foveated display can also be applied to the other display configurations.
[00180] Referring now to FIG. 7, a timing diagram of transport illumination, showing color-sequential images and planes with respect to buffer type, in accordance with some embodiments, is shown. As shown, timing diagrams are illustrated for various protocol options of these configurations. Specifically, the timing diagrams of Image Transport, Plane Transport, Display Write and Illumination for different configurations of Frame Buffer, Image data-order and Display Type are shown. All of these are represented as applied to a color-sequential illumination display. Particularly, a display and driver hardware
configuration could have one, two (or more) or no frame buffer memories. Most display drivers use a dual ping-pong frame buffer architecture. Color display architectures are usually either simultaneous 3-path illumination & optics or color-sequential; color-sequential is the more economical and timing challenged approach; these diagrams and descriptions are focused on it although the foveated protocol improvements could also apply to monochrome or 3-path systems. The sequence is generally the same: at the start of the Vertical Sync, the image must first be transported/transmitted from the host/source to the display driver, processed as needed and stored in the frame buffer(s). Color-sequential systems often have more than 3 color-sub-frames (CSF) per input image frame, to provide multiple sets of primary pulses, to improve image quality. After the image is received, the frame buffer can be read to form the data for the CSF and transmitted over the plane (or grayscale) transport interface; the data is then stored/used in the native display elements to represent that CSF portion of the image; this is repeated for each CSF.
[00181] In a system with one frame buffer, the buffer should not be read while the data is being written at the same time; this would cause new and old data to get mixed (or corrupted); the read must wait for the image write to finish. For full-color pixel images, the entire image must finish before any CSF reads can start (as shown in the first two drawings). For modulated plane displays, the illumination usually starts soon after the modulation planes begin (the corresponding frame buffer data is usually read multiple times; once for each modulation plane). For grayscale displays, the illumination is not activated while the display is being updated with the new CSF data. It is often desirable to utilize the illumination system with the highest duty factor (minimize the time when there is no active illumination). Thus, these full-color single-frame-buffer systems try to minimize the image transport time and the grayscale transport time; this is where the reduced bandwidth of a foveated data protocol helps greatly. Furthermore, the modulation plane transport systems are also limited by the time to transmit each plane, which limits how many modulation pulses will fit in an illumination cycle, or it limits the gamma level that can be achieved, which requires packing some modulation pulses closer together than other pulses. Foveated plane transport solves this bottleneck too by greatly reducing the data (size) of the transmitted planes, so the modulation pulses can be shorter.
[00182] The bandwidth of a plane or grayscale transport interface is usually much higher than the bandwidth of an image transport interface (because it is usually a short and wide chip-to-chip interface or an internal on-chip interface). The host usually uses most of the frame time to transmit the image; for a system with dual frame buffers used in a ping-pong fashion (swapping the write vs. read buffer at each Vsync), there is no advantage to writing the image faster in a small portion of the frame. However, motion-sensitive applications need faster response and would benefit from starting the buffer read sooner. If the host can send the image data in color-sequential format, then only one frame buffer is needed; one color is written to the frame buffer while the other colors are read and illuminated. This provides high duty-cycle illumination and low latency (time from transport start to matching start of illumination). With only one set of primary illumination cycles (3 CSF’s), nearly the whole frame time can be used to transmit the image. Higher sets of CSF’s require the transmit time to be reduced to smaller fractions of the frame time and still fit in one frame buffer (i.e., the image write must fit in the read time of 4 CSF’s); the reduced data of foveated protocols enables these modes without significantly increasing the bandwidth of the physical interface.
[00183] If the display device supports direct grayscale write and only 3 CSF’s are used, then no frame buffer is needed (the last diagram); this does require pauses in the image transmit between color sub frames to allow for illumination. The illumination duty cycle depends on the image transport time and frame rate; this can still be relatively high using foveated transport and foveated write. This is mostly applicable to systems with high frame rates or not motion sensitive (that can tolerate just 3 CSF’s).
[00184] Digital displays may receive modulation plane data (1 bit per pixel) repeated multiple times per frame or sub-frame with various pulse-style patterns to integrate to the desired grayscale. This protocol and method is aimed primarily at that modulation plane interface level although it can also be applied to displays that accept grayscale data per pixel and select their own pulse-style pattern and are still constrained by the internal array structure.
[00185] To integrate the pulses such that the human visual response sees each pixel as a constant grayscale or color level, each pixel of the display is updated rapidly (by each modulation plane), e.g., at frequencies such as, for example, 10 kHz to 100 kHz, etc. The pixel array is written row(s) at-a-time, using column drivers for each pixel of the row (shared among all/many rows), and a unique row strobe for each row. The crucial timing constraint on writing to the display is the time to write one row and how many rows can be written at the same time. This method defines high- and low-resolution zones that easily merge and overlay onto writing this row structure, so that the entire active area is updated by each modulation plane.
[00186] At the modulation-plane interface, since pixels are only 1 bit deep, there is no opportunity to interpolate. Alternatively, averaging over multiple pixels would undo the benefits of spatial dithering, which may already be encoded in the data. Even if multi-bit data was available per pixel, making unique pixel data in the expanded low-resolution areas would prevent the time savings of writing multiple rows at the same time with the same column driver data. Most displays will not modify the input data and are thus limited to just replicate in the low-resolution areas.
[00187] Amplitude v. Phase Mode
[00188] If the display device, in accordance with the present system and method of foveated display, is used in an amplitude mode to provide a visible image in which each pixel on the display corresponds to one pixel in the image as viewed or projected (or to a small group of adjacent pixels), then the foveation technique enabled by the present disclosure may be used to achieve high spatial-frequency and/or temporal-frequency updates to a foveation region in the innermost zone, corresponding to the region of the image which the observer is, or is believed to be, observing most closely, and high temporal-frequency updates to the lower resolution/spatial-frequency regions in the outer zones, corresponding to the peripheral vision of the observer. In such an amplitude mode display, there is a substantially one-to-one relationship between display pixels/macropixels and image pixels, so there is a similar region-to-region mapping for the region of the display which can beneficially receive more information to match the high-sensitivity region of a user’s visual system.
[00189] However, in a phase mode display many display pixels (in some implementations, all the display pixels) contribute to each image pixel (in some implementations, all image pixels). In this case, the high-sensitivity region or regions of an observer’s visual system may map to a substantially larger (in pixel count) region of the display. In such a system, many different algorithms or methods can be used to generate patterns that are variously known as, for example, Computer Generated Holograms (CGHs), interferograms, holograms, interference patterns, and phase patterns, wherein this pattern, or a version derived from it, is displayed on the display device as a distribution of phase values, amplitude values, or a complex combination of phase and amplitude values, such that light diffracts or scatters or otherwise propagates from said displayed pattern to form, at some actual or optical distance from the display device, a second pattern corresponding to the desired image or an image from which the desired image can be obtained by further optical means. In at least some of these algorithms or methods, some or all of the lower spatial-frequency content of the desired image is encoded or included on the displayed image within a region which is substantially or completely enclosed within a larger region of the displayed image within which some or all of the higher spatial-frequency content of the desired image is encoded or included. In such cases, the systems and methods of the present system and method of foveated display may beneficially be used to provide higher spatial-frequency and/or temporal-frequency updates to a region in the outermost zone or intermediate zones.
[00190] Furthermore, there are also amplitude mode applications wherein the region of the display which most beneficially can receive higher spatial-frequency and/or temporal- frequency updates may be an outer or the outermost region rather than the innermost region or inner regions. For example, for low brightness scenes viewed in scotopic or mesopic rather than photopic mode by the observer (also known as“night vision”), the greatest visual sensitivity (for motion, color, amplitude or other visual parameters) can be outside or surrounding the foveal region.
[00191] Below are embodiments of the present invention. However other embodiments are described above. Thus, the following embodiments are not intended to limit the embodiments of the present invention.
[00192] A foveated light modulation system comprising: a processor coupled to receive input image data and foveation zone definition data to generate a foveated image frame having header packet data that identifies a first zone having a first resolution and a second zone having a second resolution, and wherein the second resolution is less than the first resolution, and wherein at least one of the first zone and the second zone is compressed based on a macropixel ratio; a driver controller circuit coupled to receive the foveated image frame from the processor that generates modulation planes based at least in part on the foveated image frame; and a modulation device, having pixels comprising at least one macropixel, coupled to the driver controller circuit that receives the modulation planes, and wherein each of the modulation planes is expanded based upon the header packet data.
[00193] A foveated light modulation system wherein for the second zone having the second resolution, a single bit in a modulation plane represents a macropixel of the modulation device, and the single bit is copied to a subset of the display pixels defined by the macropixel ratio.
[00194] A foveated light modulation system, wherein the modulation device is a display.
[00195] A foveated light modulation system, wherein the modulation device is an LCOS display.
[00196] The foveated light modulation system, wherein said modulation device comprises a decode logic module coupled to receive the modulation plane.

[00197] A foveated light modulation system, wherein the modulation device further comprises raster logic, and wherein the decode logic module parses the header packet data and the raster logic.
[00198] A foveated light modulation system, wherein the decode logic generates a dataset based on the header packet data and the macropixel ratios in the modulation planes.
[00199] A foveated light modulation system, further comprising: tracking logic module coupled to the processor, wherein the tracking logic module senses retina gaze data and head position data of a user and generates foveated zone data corresponding to the sensed retina gaze data and head position data.
[00200] A foveated light modulation system, wherein the processor further comprises: a foveated rendering module that couples to the tracking logic module and receives the sensed retina gaze data and head position data and determines the size and location of each zone using a foveated rendering algorithm.
[00201] A foveated light modulation system, wherein the foveation rendering module determines the size and location of each zone using a foveated rendering algorithm based on at least one of total field-of-view data, optical system distortion data, fovea acuity data, tolerance of tracking logic data, latency of tracking logic data, and rate of motion data.
[00202] A foveated light modulation system, wherein header packet data comprises: a resolution-order toggle bit enabling a center-detail mode and a periphery-detail mode, wherein when the center-detail mode is active, the foveated image frame includes a plurality of concentric zones comprising the first and second zones having a zone of highest resolution located at a center of a user fixation point, whereby resolution of a zone adjacent to at least one of the first zone and the second zone is lower than full resolution of the other adjacent zone by a predetermined value, and the resolution of one of the concentric zones decreases in descending order away from the user fixation point, and wherein when the periphery-detail mode is active, the foveated image frame includes a plurality of concentric zones having a zone of highest resolution located at a periphery of the plurality of concentric zones, whereby resolution of the second zone is lower than full resolution of the first zone by a predetermined value, and the resolution of each concentric interior zone decreases in descending order; a transmission-mode toggle bit enabling a raster-order mode and a zone-order mode, wherein when the raster-order mode is active, data transmission comprises a plurality of line-sets representing rows of data from the plurality of concentric zones corresponding with a display order of the original image, and wherein when the zone-order mode is active, data transmission comprises a sending of each one of the plurality of concentric zones in its entirety before data transmission of an adjacent zone is sent; a zone number segment defining number of the plurality of concentric zones; a zone-size segment defining horizontal and vertical size of each one of the plurality of concentric zones; a zone-offset segment defining horizontal and vertical offset associated with each one of the plurality of concentric zones; and a plurality of display parameters.
[00203] A foveated light modulation system, wherein the plurality of display parameters comprises: a word-size segment defining a plurality of pixels-bits transferred relative to a clock cycle associated with the driver controller circuit; an x-offset size segment defining a plurality of pixels-bits per horizontal offset Least Significant Bit (LSB); a line-set size segment defining a maximum number of rows to be simultaneously written; a row-time segment defining a plurality of clocking segments required to write a row; and a dual column-drive mode indicator enabling simultaneous writing of two rows.
[00204] A method of producing a foveated image on a display screen, comprising:
receiving input image data relating to the image, foveation zone parameters, and protocol parameters; generating rendered foveated image data based on the input image data and the foveation zone parameters; generating a foveated image frame based on the rendered foveated image data and the protocol parameters, wherein the foveated image frame includes header packet data; and transmitting the foveated image frame to at least one of a grayscale device and a modulation device.
[00205] A method, wherein the foveated image frame identifies two or more concentric zones of differing resolution, whereby each of the concentric zones is defined by a plurality of macropixels and a corresponding macropixel ratio.
[00206] A method, wherein the at least one grayscale device and modulation device comprises decode logic and raster logic.
[00207] A method, wherein the at least one grayscale device and modulation device comprises pixels, and further comprising: producing a foveated image upon at least some of the pixels of the at least one grayscale device and modulation device based on the header packet data and each corresponding macropixel ratio utilizing the raster logic.

[00208] A method, wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratio associated with each zone.
[00209] A method, wherein the receiving image data comprises: receiving tracking data based upon retina and at least one of head gaze direction data and position/location data of a user in real-time.
[00210] A method, wherein the foveated image data is based on the input image data, the foveation zone parameters, and the tracking data.
[00211] A method, wherein generating a rendered foveated image comprises: generating image macropixels using 3D to 2D rendering techniques of foveated rendering or other foveated rendering techniques as known in the industry to represent projected text or graphics in the foveated image space, using the input image data, observer gaze direction data and vantage point data and the foveation zone definition parameters.
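One way to picture the rendering step of paragraph [00211] is as resolution reduction of peripheral zones by their macropixel ratios. The sketch below is an assumption-laden illustration: zone geometry, averaging as the reduction method, and every name are hypothetical, not taken from the disclosure.

```python
# Sketch: the inner (fixation) zone keeps full resolution while an outer
# zone is reduced by its macropixel ratio. Averaging is one plausible choice.
import numpy as np

def downsample(region: np.ndarray, ratio: int) -> np.ndarray:
    """Reduce a region by `ratio` in each axis via block averaging."""
    h, w = region.shape
    return region.reshape(h // ratio, ratio, w // ratio, ratio).mean(axis=(1, 3))

frame = np.arange(64, dtype=float).reshape(8, 8)
inner = frame[2:6, 2:6]        # fixation zone, kept at full resolution
outer = downsample(frame, 4)   # periphery compressed 4:1 per axis
assert inner.shape == (4, 4) and outer.shape == (2, 2)
```

The rendered zones, each tagged with its size, offset, and macropixel ratio, are what the header packet then describes to the downstream decode logic.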
[00212] A method, wherein the generating a foveated image frame comprises: generating header packet data based upon the foveation zone parameters, the selected protocol parameters and display device capabilities; and encapsulating the rendered foveated image data with the header packet data to form a foveated image frame.
[00213] A method, wherein the transmitting the foveated image frame comprises:
transmitting the foveated image frame to a driver controller circuit; generating foveated bit plane data; converting the foveated bit plane data into modulation planes based upon an associated modulation scheme and the header packet data; and transmitting the modulation planes to one or more modulation devices having foveated modulation plane raster logic coupled to a display circuit having an array of pixels or transmitting the foveated image frame to a grayscale display device.
[00214] A method, wherein the producing of the foveated image upon the array of display pixels comprises: parsing the header packet data and foveated data from the foveated image frame or foveated modulation plane; translating a modulation plane of the foveated data into foveated display data; applying a corresponding binary value of the foveated display data representing the line-set to each sub-set of the array of pixels associated with each foveated zone; and repeating the translating and applying until each line-set of foveated data is displayed.

[00215] A method, wherein the detecting the enabling of a raster-order mode and a zone-order mode comprises: parsing the header packet data to detect a transmission-mode toggle bit; detecting whether the transmission-mode toggle bit is set to enable the raster-order mode or the zone-order mode; and detecting, in response to no detected raster-order mode, respective zone data of the incoming data based upon number of zones, horizontal-zone-size, line-set-size, word-size and x-offset-size.
[00216] A method, wherein the writing in response to the enabled zone-order mode comprises: writing, in response to no detected raster-order mode, respective zone data of the incoming data into a respective zone buffer {Z(n-1), ..., Z3, Z2, Z1, Z0}; waiting, in response to no detected raster-order mode, for a read delay of predetermined time; and toggling, in response to end of read delay, a read pointer corresponding to each respective zone buffer.
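The zone-order buffering of paragraph [00216] might be modeled as follows; the per-zone buffer representation, the `ZoneOrderReceiver` name, and the clock model are illustrative assumptions rather than the patent's hardware.

```python
# Sketch of the zone-order path: incoming zone data goes into per-zone
# buffers; after a fixed read delay, every zone's read pointer toggles.
from collections import deque

class ZoneOrderReceiver:
    def __init__(self, zone_count: int, read_delay: int):
        self.buffers = {z: deque() for z in range(zone_count)}  # Z0 .. Z(n-1)
        self.read_delay = read_delay
        self.clock = 0
        self.read_enabled = {z: False for z in range(zone_count)}

    def write(self, zone: int, data: bytes) -> None:
        """Write respective zone data into its zone buffer."""
        self.buffers[zone].append(data)

    def tick(self) -> None:
        """Advance one clock; at end of read delay, toggle read pointers."""
        self.clock += 1
        if self.clock == self.read_delay:
            for z in self.read_enabled:
                self.read_enabled[z] = True

rx = ZoneOrderReceiver(zone_count=4, read_delay=2)
rx.write(0, b"z0-row")
rx.tick(); rx.tick()
assert rx.read_enabled[0] and rx.buffers[0][0] == b"z0-row"
```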
[00217] A method, wherein the identifying whether data from a respective zone exists comprises: parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, wherein row-time is the number of clocks required for writing a row based upon the display circuit; selecting a row of the line-set of data; detecting whether a high resolution zone (Z0) of a plurality of concentric zones is present in the row based upon header packet data; and detecting, in response to no detected presence of the high resolution zone, whether a next consecutive zone (Z1, Z2, Z3, ... Z(n-1)) is present in the identified line-set of data until one zone is detected.
[00218] A method, wherein the expanding the line-set into foveated display data comprises: parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, wherein row-time is the number of clocks required for writing a row based upon the display circuit; selecting a row of the identified line-set of foveated data based on line-set-size and row-time; identifying a left-side segment and a right-side segment associated with each respective zone; writing the left-side segment into a row buffer shifted by x-offset-size corresponding to the respective zone data, wherein the left-side segment is written 2^(r-1) times, whereby 2^(r-1) represents the corresponding macropixel ratio, where r = {n, n-1, n-2, n-3, ... 1}, n = number of zones, when respective zones {Z0, Z1, Z2, Z3, ... Z(n-1)} are detected; summing the horizontal zone-size and x-offset-size of each respective zone data to define a right-side pointer for each right-side segment of the respective zone data; writing the right-side segment into the row buffer shifted by the right-side pointer of the respective zone data; storing the row buffer into a row queue, wherein k rows are written 2^(r-1) times, where k = {1, 2, 4, 8, ... 2^(n-1)}; and repeating the selecting, identifying, writing, summing, writing, and storing until each row of the identified line-set is selected.
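The left-side/right-side row expansion of paragraph [00218] can be illustrated for a single zone. The sketch treats the macropixel ratio as horizontal bit replication into a row buffer at the zone's x-offset, with the right-side pointer formed by summing zone width and offset; all widths, offsets, and names are assumptions, not values from the disclosure.

```python
# Sketch: expand one zone's left and right segments into a row buffer.
def expand_row(left: list, right: list, x_offset: int, zone_width: int,
               ratio: int, row_width: int) -> list:
    row = [0] * row_width
    # Left-side segment, each bit replicated `ratio` times, shifted by x-offset.
    for i, bit in enumerate(left):
        for r in range(ratio):
            row[x_offset + i * ratio + r] = bit
    # Right-side pointer = horizontal zone-size + x-offset.
    rptr = x_offset + zone_width
    for i, bit in enumerate(right):
        for r in range(ratio):
            row[rptr + i * ratio + r] = bit
    return row

row = expand_row(left=[1, 0], right=[0, 1], x_offset=2, zone_width=8,
                 ratio=2, row_width=16)
assert row[2:6] == [1, 1, 0, 0]     # left segment at the x-offset
assert row[10:14] == [0, 0, 1, 1]   # right segment at the right-side pointer
```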
[00219] A non-transitory computer-readable medium including code for performing a method, the method comprising: receiving image input data; receiving tracking data based upon retina location of user in real-time; generating header packet data based upon the image input data and the tracking data to define foveated zone information; encapsulating the image input data within the header packet data to form a foveated image frame; transmitting the foveated image frame to one or more modulation devices each having an array of pixels; converting the foveated image frame into modulation planes; parsing the header packet data and foveated image data from each modulation plane; translating a modulation plane of foveated image data into foveated display data; and applying a corresponding binary value of the foveated display data representing the line set to each sub-set of the array of pixels associated with each foveated zone.
[00220] A computer-readable medium, wherein the translating a modulation plane of foveated image data comprises: parsing the header packet data to detect number of zones, horizontal-zone-size, line-set-size, row-time, word-size and x-offset-size, wherein row-time is the number of clocks required for writing a row based upon the display circuit; detecting whether the transmission-mode toggle bit is set to enable a raster-order mode; detecting, in response to no detected raster-order mode, respective zone data of the incoming data based upon number of zones, horizontal-zone-size, line-set-size, word-size and x-offset-size;
writing, in response to no detected raster-order mode, respective zone data of the incoming data into a respective zone buffer {Z3, Z2, Z1, Z0}; waiting, in response to no detected raster-order mode, for a read delay of predetermined time; toggling, in response to end of read delay, a read pointer corresponding to each respective zone buffer; identifying, in response to a detected raster-order mode and in response to end of read delay, a first row of a line-set of data; detecting whether a high resolution zone (Z0) of a plurality of concentric zones is present in the identified line-set of data; detecting, in response to no detected presence of the high resolution zone, whether a next consecutive zone (Z1, Z2, Z3) is present in the identified line-set of data until one zone is detected; expanding data, in response to a detected zone, based upon the number of zones, horizontal zone-size, line-set-size, x-offset-size, row-time, and word-size; storing the expanded data in a row buffer based upon the x-offset-size; transferring the row buffer to a row queue, wherein k row(s) are written 2^(r-1) times, where r = {n, n-1, n-2, n-3, ... 1}, k = {1, 2, 4, 8, ... 2^(n-1)}, n = number of zones, when respective zones {Z0, Z1, Z2, Z3} are detected; retrieving a next line-set of data; and repeating the detecting, expanding, storing, transferring, and retrieving until every line-set of data is retrieved.
[00221] A computer-readable medium, wherein the expanding data, in response to a detected zone, based upon the number of zones, horizontal zone-size, line-set-size, row-time, and word-size comprises: identifying a left-side segment and a right-side segment of respective zone data; writing the left-side segment into a row buffer shifted by x-offset-size corresponding to the respective zone data, wherein the left-side segment is written 2^(r-1) times; adding horizontal zone-size and x-offset-size of each respective zone data to define a right-side pointer for each right-side segment of the respective zone data; writing the right-side segment into the row buffer shifted by the right-side pointer of the respective zone data; storing the row buffer into a row queue; and repeating the identifying, writing, adding, writing, and storing for each row of the respective zone data.
[00222] In the above description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
[00223] It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
[00224] Detailed illustrative embodiments are disclosed herein. However, specific functional details disclosed herein are merely representative for purposes of describing embodiments. Embodiments may be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.

[00225] It should be understood that although the terms first, second, etc. may be used herein to describe various steps or calculations, these steps or calculations should not be limited by these terms. These terms are only used to distinguish one step or calculation from another. For example, a first calculation could be termed a second calculation, and, similarly, a second step could be termed a first step, without departing from the scope of this disclosure. As used herein, the term "and/or" and the "/" symbol include any and all combinations of one or more of the associated listed items. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Therefore, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
[00226] It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved. With the above embodiments in mind, it should be understood that the embodiments might employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms such as producing, identifying, determining, or comparing. Any of the operations described herein that form part of the embodiments are useful machine operations. The embodiments also relate to a device or an apparatus for performing these operations. The apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.

[00227] A module, an application, a layer, an agent or other method-operable entity could be implemented as hardware, firmware, or a processor executing software, or combinations thereof. It should be appreciated that, where a software-based embodiment is disclosed herein, the software can be embodied in a physical machine such as a controller. For example, a controller could include a first module and a second module. A controller could be configured to perform various actions, e.g., of a method, an application, a layer or an agent.
[00228] The embodiments can also be embodied as computer readable code on a non- transitory computer readable medium. The computer readable medium is any data storage device that can store data, which can be thereafter read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, flash memory devices, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network coupled computer system so that the computer readable code is stored and executed in a distributed fashion. Embodiments described herein may be practiced with various computer system configurations including hand-held devices, tablets, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.
[00229] Although the method operations were described in a specific order, it should be understood that other operations may be performed in between described operations, described operations may be adjusted so that they occur at slightly different times or the described operations may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing.
[00230] In various embodiments, one or more portions of the methods and mechanisms described herein may form part of a cloud-computing environment. In such embodiments, resources may be provided over the Internet as services according to one or more various models. Such models may include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In IaaS, computer infrastructure is delivered as a service. In such a case, the computing equipment is generally owned and operated by the service provider. In the PaaS model, software tools and underlying equipment used by developers to develop software solutions may be provided as a service and hosted by the service provider. SaaS typically includes a service provider licensing software as a service on demand. The service provider may host the software, or may deploy the software to a customer for a given period of time. Numerous combinations of the above models are possible and are contemplated.
[00231] Various units, circuits, or other components may be described or claimed as "configured to" perform a task or tasks. In such contexts, the phrase "configured to" is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the "configured to" language include hardware; for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is "configured to" perform one or more tasks is expressly intended not to invoke 35 U.S.C. 112, sixth paragraph, for that unit/circuit/component. Additionally, "configured to" can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. "Configured to" may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
[00232] The foregoing description, for the purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the embodiments and their practical applications, to thereby enable others skilled in the art to best utilize the embodiments and various modifications as may be suited to the particular use contemplated. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.


CLAIMS

What is claimed is:
1. A foveated light modulation system comprising:
a processor coupled to receive input image data and foveation zone definition data to generate a foveated image frame having header packet data that identifies a first zone having a first resolution and a second zone having a second resolution, and wherein the second resolution is less than the first resolution, and wherein at least one of the first zone and the second zone is compressed based on a macropixel ratio;
a driver controller circuit coupled to receive the foveated image frame from the processor and generate modulation planes based at least in part on foveated bit plane data; and
a modulation device, having pixels comprising at least one macropixel, coupled to the driver controller circuit to receive the modulation planes, wherein each modulation plane of the modulation planes is expanded based upon the header packet data.
2. The foveated light modulation system of claim 1, wherein for the second zone having the second resolution, a single bit in a modulation plane represents a macropixel of the modulation device, and the single bit is copied to a subset of display pixels defined by the macropixel ratio.
3. The foveated light modulation system of claim 1, wherein the modulation device is a display.
4. The foveated light modulation system of claim 1, wherein the modulation device is an LCOS display.
5. The foveated light modulation system of claim 1, wherein said modulation device comprises a decode logic module coupled to receive the modulation plane.
6. The foveated light modulation system of claim 5, wherein the modulation device further comprises raster logic, and wherein the decode logic module parses the header packet data and controls the raster logic.
7. The foveated light modulation system of claim 6, wherein the decode logic module generates a dataset based on the header packet data and the macropixel ratios in the modulation planes.
8. The foveated light modulation system of claim 1, further comprising:
a tracking logic module coupled to the processor, wherein the tracking logic module senses retina gaze data and head position data of a user and generates foveated zone data corresponding to the sensed retina gaze data and head position data.
9. The foveated light modulation system of claim 1, wherein the processor further comprises:
a foveated rendering module that couples to the tracking logic module and receives the sensed retina gaze data and head position data and determines the size and location of each zone using a foveated rendering algorithm.
10. The foveated light modulation system of claim 9, wherein the foveation rendering module determines the size and location of each zone using a foveated rendering algorithm based on at least one of total field-of-view data, optical system distortion data, fovea acuity data, tolerance of tracking logic data, latency of tracking logic data, and rate of motion data.
11. The foveated light modulation system of claim 1, wherein header packet data comprises: a resolution-order toggle bit enabling a center-detail mode and a periphery-detail mode, wherein when the center-detail mode is active, the foveated image frame includes a plurality of concentric zones comprising the first and second zones having a zone of highest resolution located at a center of a user fixation point, whereby resolution of a zone adjacent to at least one of the first zone and the second zone is lower than full resolution of the other adjacent zone by a predetermined value, and the resolution of one of the concentric zones decreases in descending order away from the user fixation point, and wherein when the periphery-detail mode is active, the foveated image frame includes a plurality of concentric zones having a zone of highest resolution located at a periphery of the plurality of concentric zones, whereby resolution of the second zone is lower than full resolution of the first zone by a predetermined value, and the resolution of each concentric interior zone decreases in descending order;
a transmission-mode toggle bit enabling a raster-order mode and a zone-order mode, wherein when the raster-order mode is active, data transmission comprises a plurality of line-sets representing rows of data from the plurality of concentric zones corresponding with a display order of the original image, and wherein when the zone-order mode is active, data transmission comprises a sending of each one of the plurality of concentric zones in its entirety before data transmission of an adjacent zone is sent;
a zone number segment defining number of the plurality of concentric zones;
a zone-size segment defining horizontal and vertical size of each one of the plurality of concentric zones;
a zone-offset segment defining horizontal and vertical offset associated with each one of the plurality of concentric zones; and
a plurality of display parameters.
12. The foveated light modulation system of claim 11, wherein the plurality of display parameters comprises:
a word-size segment defining a plurality of pixels-bits transferred relative to a clock cycle associated with the driver controller circuit;
an x-offset size segment defining a plurality of pixels-bits per horizontal offset Least Significant Bit (LSB);
a line-set size segment defining a maximum number of rows to be simultaneously written;
a row-time segment defining a plurality of clocking segments required to write a row; and
a dual column-drive mode indicator enabling simultaneous writing of two rows.
13. A method of producing a foveated image on a display screen, comprising: receiving input image data relating to the image and foveation zone parameters, and protocol parameters;
generating rendered foveated image data based on the input image data, and the foveation zone parameters;
generating a foveated image frame based on the rendered foveated image data and the protocol parameters, wherein the foveated image frame includes header packet data; and transmitting the foveated image frame to at least one of a grayscale device and a modulation device.
14. The method of claim 13, wherein the foveated image frame identifies two or more concentric zones of differing resolution, whereby each of the concentric zones is defined by a plurality of macropixels and a corresponding macropixel ratio.
15. The method of claim 13, wherein the at least one of a grayscale device and a modulation device comprises decode logic and raster logic.
16. The method of claim 15, wherein the at least one of a grayscale device and a modulation device comprises pixels, and further comprising:
producing a foveated image upon at least some of the pixels of the at least one grayscale device and modulation device based on the header packet data and each corresponding macropixel ratio utilizing the raster logic.
17. The method of claim 16, wherein, for foveation zones having decreased resolution, a single bit of an associated plurality of macropixels is copied to a subset of the array of display pixels based upon the corresponding macropixel ratio associated with each zone.
18. The method of claim 13, wherein the receiving image data comprises:
receiving tracking data based upon retina and at least one of head gaze direction data and position data of a user in real-time.
19. The method of claim 18, wherein the foveated image data is based on the input image data, the foveation zone parameters, and the tracking data.
20. The method of claim 13, wherein generating a rendered foveated image comprises: generating image macropixels using 3D to 2D rendering techniques of foveated rendering or other foveated rendering techniques as known in the industry to represent projected text or graphics in the foveated image space, using the input image data, observer gaze direction data and vantage point data and the foveation zone definition parameters.
PCT/US2019/045975 2018-08-10 2019-08-09 Apparatus, systems, and methods for foveated display WO2020033875A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862717698P 2018-08-10 2018-08-10
US62/717,698 2018-08-10

Publications (1)

Publication Number Publication Date
WO2020033875A1 2020-02-13

Family

ID=67766359

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/045975 WO2020033875A1 (en) 2018-08-10 2019-08-09 Apparatus, systems, and methods for foveated display

Country Status (2)

Country Link
TW (1) TWI813736B (en)
WO (1) WO2020033875A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11249314B1 (en) * 2020-08-04 2022-02-15 Htc Corporation Method for switching input devices, head-mounted display and computer readable storage medium
US11778322B2 (en) * 2020-08-17 2023-10-03 Mediatek Inc. Method and apparatus for performing electronic image stabilization with dynamic margin

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050018911A1 (en) * 2003-07-24 2005-01-27 Eastman Kodak Company Foveated video coding system and method
US20180090052A1 (en) * 2016-09-01 2018-03-29 Innovega Inc. Non-Uniform Resolution, Large Field-of-View Headworn Display
US20180137602A1 (en) * 2016-11-14 2018-05-17 Google Inc. Low resolution rgb rendering for efficient transmission

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KORTUM P ET AL: "Implementation of a foveated image coding system for image bandwidth reduction", Visual Communications and Image Processing, San Jose, vol. 2657, 1 January 1996, pages 350-360, XP002636638, ISBN: 978-1-62841-730-2, DOI: 10.1117/12.238732 *
YUN XIE ET AL: "ROI coding with separated code block", Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18-21 Aug. 2005, Piscataway, NJ, USA, page 5447, XP055642263, ISBN: 978-0-7803-9091-1, DOI: 10.1109/ICMLC.2005.1527907 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11644669B2 (en) 2017-03-22 2023-05-09 Magic Leap, Inc. Depth based foveated rendering for display systems
US11238836B2 (en) 2018-03-16 2022-02-01 Magic Leap, Inc. Depth based foveated rendering for display systems
US11710469B2 (en) 2018-03-16 2023-07-25 Magic Leap, Inc. Depth based foveated rendering for display systems
EP4092521A1 (en) * 2021-05-21 2022-11-23 Varjo Technologies Oy Method of transmitting a frame
US11863786B2 (en) 2021-05-21 2024-01-02 Varjo Technologies Oy Method of transporting a framebuffer
CN114565664A (en) * 2021-12-27 2022-05-31 北京控制工程研究所 Modulation-based centering method and system
CN114565664B (en) * 2021-12-27 2023-08-11 北京控制工程研究所 Centering method and system based on modulation
CN115236871A (en) * 2022-05-17 2022-10-25 北京邮电大学 Desktop type light field display system and method based on human eye tracking and bidirectional backlight
CN117092415A (en) * 2023-10-18 2023-11-21 深圳市城市公共安全技术研究院有限公司 Regional electromagnetic environment monitoring method, device, equipment and medium
CN117092415B (en) * 2023-10-18 2024-01-19 深圳市城市公共安全技术研究院有限公司 Regional electromagnetic environment monitoring method, device, equipment and medium

Also Published As

Publication number Publication date
TW202023272A (en) 2020-06-16
TWI813736B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
WO2020033875A1 (en) Apparatus, systems, and methods for foveated display
EP3538986B1 (en) Dual-path foveated graphics pipeline
KR101785027B1 (en) Image distortion compensation display device and image distortion compensation method using the same
US10262387B2 (en) Early sub-pixel rendering
US20200143516A1 (en) Data processing systems
US20180137602A1 (en) Low resolution rgb rendering for efficient transmission
US10878527B2 (en) Variable resolution graphics processing
KR100959470B1 (en) Scalable high performance 3d graphics
US8907987B2 (en) System and method for downsizing video data for memory bandwidth optimization
EP3818515A2 (en) Display processing circuitry
KR100932805B1 (en) Apparatus and Method for Edge Handling in Image Processing
JP2011059694A (en) Image data set with embedded pre-subpixel rendered image
KR20180100486A (en) Data processing systems
JP2022543729A (en) System and method for foveated rendering
US20200193563A1 (en) Image processing apparatus and method, and related circuit
WO2019182869A1 (en) Controlling image display via mapping of pixel values to pixels
KR20200002626A (en) Data Processing Systems
US8384722B1 (en) Apparatus, system and method for processing image data using look up tables
JP2004536388A (en) Multi-channel, demand-driven display controller
EP0895166A2 (en) Method and apparatus for interfacing with ram
US11496720B2 (en) Image slicing to generate in put frames for a digital micromirror device
CA2962512C (en) Accelerated image gradient based on one-dimensional data
CN116959344A (en) Image display method, device, projection equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19759194

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19759194

Country of ref document: EP

Kind code of ref document: A1