US20200160818A1 - Systems and methods for head-mounted display adapted to human visual mechanism - Google Patents

Systems and methods for head-mounted display adapted to human visual mechanism

Info

Publication number
US20200160818A1
US20200160818A1 (Application US16/584,357)
Authority
US
United States
Prior art keywords
foveal
data
extra
pixels
subset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/584,357
Inventor
Brendan BARRY
David Moloney
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MOVIDIUS LIMITED
Movidius Ltd (Netherlands)
Original Assignee
MOVIDIUS LIMITED
Movidius Ltd (Netherlands)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MOVIDIUS LIMITED (Movidius Ltd, Netherlands)
Priority to US16/584,357
Publication of US20200160818A1
Assigned to MOVIDIUS LIMITED reassignment MOVIDIUS LIMITED MERGER (SEE DOCUMENT FOR DETAILS). Assignors: LINEAR ALGEBRA TECHNOLOGIES LIMITED
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39 - Control of the bit-mapped memory
    • G09G5/391 - Resolution modifying circuits, e.g. variable screen formats
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003 - Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G5/006 - Details of the interface to the display terminal
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/366 - Image reproducers using viewer tracking
    • H04N13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/398 - Synchronisation thereof; Control thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167 - Position within a video image, e.g. region of interest [ROI]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements
    • H04N19/426 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by memory arrangements using memory downsizing methods
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0132 - Head-up displays characterised by optical features comprising binocular systems
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/014 - Head-up displays characterised by optical features comprising information/image processing systems
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 - Aspects of display data processing
    • G09G2340/04 - Changes in size, position or resolution of an image
    • G09G2340/0407 - Resolution change, inclusive of the use of different resolutions for different screen areas
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00 - Aspects of interface with display user
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 - Aspects of the architecture of display systems
    • G09G2360/18 - Use of a frame buffer in a display terminal, inclusive of the display panel

Definitions

  • The present application relates generally to computer vision processing and, more specifically, to efficient image simulation in a head-mounted display.
  • the human retina has a high resolution at the center or fovea, a small region which is packed densely with photoreceptors and responsible for half the nerve fibers in the optic nerve. The rest of the retina provides lower resolution with a reduced density of photoreceptors.
  • the human visual system gives the illusion of uniform high resolution by scanning the fovea around the scene and maintaining an internal mental model of the scene in the visual cortex.
  • systems and methods are provided for rendering of a dual eye-specific display. Recognizing the resolution limits of extra-foveal human vision provides an opportunity for resource savings by minimizing resources spent on extra-foveal rendering.
  • the system tracks the user's eye movements and/or positions, in some implementations, based on electroencephalography (EEG) of the user, to correctly label the central (foveal) and peripheral (extra-foveal) areas of the display.
  • Foveal data is fully rendered while extra-foveal data is reduced in resolution and, in some implementations, shared between the two displays.
  • the disclosure relates to a head-mounted display device with eye-tracking of a user's two eyes.
  • the device comprises a first display for the left eye and a second display for the right eye.
  • the device may further comprise a memory, coupled to the first display and the second display.
  • the memory may comprise a first frame buffer for the first display and a second frame buffer for the second display, and the first frame buffer may comprise a first foveal frame buffer and a first extra-foveal frame buffer and the second frame buffer may comprise a second foveal frame buffer and a second extra-foveal frame buffer.
  • the device may further comprise one or more processors, coupled to the memory.
  • the one or more processors may be configured to receive eye-tracking information of the user wearing the head mounted display.
  • the eye-tracking information may comprise at least one of an eye movement and an eye position of the user.
  • the one or more processors may be further configured to determine, based on the eye-tracking information, a first foveal region of the first display, a first extra-foveal region of the first display, a second foveal region of the second display, and a second extra-foveal region of the second display.
  • the one or more processors may be further configured to load, into the memory, first foveal region pixels of the first foveal region, first extra-foveal region pixels of the first extra-foveal region, second foveal region pixels of the second foveal region, second extra-foveal region pixels of the second extra-foveal region.
  • the first foveal region pixels may be represented with full resolution and loaded into the first foveal frame buffer.
  • the first extra-foveal region pixels may be represented with reduced resolution and loaded into the first extra-foveal frame buffer.
  • the second foveal region pixels may be represented with full resolution and loaded into the second foveal frame buffer.
  • the second extra-foveal region pixels may be represented with reduced resolution and loaded into the second extra-foveal frame buffer.
  • the eye-tracking information may be based on infrared light projection onto the user's eyes, either directly or obliquely.
  • the eye-tracking information may be based on electroencephalography (EEG) of the user.
  • At least one of the first foveal region, the first extra-foveal region, the second foveal region, or the second extra-foveal region is adjustable.
  • the one or more processors may be configured to duplicate a pixel value.
  • the one or more processors may be further configured to duplicate the pixel value across at least one of 2×2 pixels, 3×3 pixels, 4×4 pixels, or 5×5 pixels.
  • the device may further comprise a Display Serial Interface coupled to the memory and at least one of the first display or the second display.
  • the one or more processors may be configured to duplicate the pixel value after receiving the pixel value from the Display Serial Interface.
  • the device may further comprise a plurality of Display Serial Interfaces. Some of the plurality of Display Serial Interfaces are configured to transfer, in parallel, display data corresponding to at least one of the first foveal region or the second foveal region.
  • the first extra-foveal frame buffer and the second extra-foveal frame buffer may be shared.
  • the one or more processors may be further configured to load the first foveal frame buffer and the first extra-foveal frame buffer at different rates and the second foveal frame buffer and the second extra-foveal frame buffer at different rates.
  • the one or more processors may be further configured to apply alpha-blending between depth-planes of the first foveal region pixels and depth-planes of the first extra-foveal region pixels and between depth-planes of the second foveal region pixels and depth-planes of the second extra-foveal region pixels.
  • the disclosure relates to a method for a head-mounted display device with eye-tracking of a user's two eyes.
  • eye-tracking information of the user wearing the head-mounted display may be received.
  • the eye-tracking information may comprise at least one of an eye movement and an eye position of the user.
  • a first foveal region of a first display for the left eye, a first extra-foveal region of the first display, a second foveal region of a second display for the right eye, and a second extra-foveal region of the second display may be determined.
  • First foveal region pixels that are represented with full resolution of the first foveal region may be loaded into a first foveal frame buffer of a first frame buffer for the first display.
  • First extra-foveal region pixels that are represented with reduced resolution of the first extra-foveal region may be loaded into a first extra-foveal frame buffer of the first frame buffer for the first display.
  • Second foveal region pixels that are represented with full resolution of the second foveal region may be loaded into a second foveal frame buffer of a second frame buffer for the second display.
  • Second extra-foveal region pixels that are represented with reduced resolution of the second extra-foveal region may be loaded into a second extra-foveal frame buffer of the second frame buffer for the second display.
  • the disclosure relates to a non-transitory computer readable medium storing a computer-readable program for a head-mounted display device with eye-tracking of a user's two eyes.
  • the program may include computer-readable instructions to receive eye-tracking information of the user wearing the head mounted display.
  • the eye-tracking information may comprise at least one of an eye movement and an eye position of the user.
  • the program may include computer-readable instructions to determine, based on the eye-tracking information, a first foveal region of a first display for the left eye, a first extra-foveal region of the first display, a second foveal region of a second display for the right eye, and a second extra-foveal region of the second display.
  • the program may include computer-readable instructions to load, into a first foveal frame buffer of a first frame buffer for the first display, first foveal region pixels that are represented with full resolution of the first foveal region.
  • the program may include computer-readable instructions to load, into a first extra-foveal frame buffer of the first frame buffer for the first display, first extra-foveal region pixels that are represented with reduced resolution of the first extra-foveal region.
  • the program may include computer-readable instructions to load, into a second foveal frame buffer of a second frame buffer for the second display, second foveal region pixels that are represented with full resolution of the second foveal region.
  • the program may include computer-readable instructions to load, into a second extra-foveal frame buffer of the second frame buffer for the second display, second extra-foveal region pixels that are represented with reduced resolution of the second extra-foveal region.
  • FIG. 1 illustrates an example scene with a foveal region and an extra-foveal region.
  • FIG. 2 illustrates a head-mounted display in an example state-of-the-art implementation.
  • FIG. 3 is a block diagram illustrating an improved head-mounted display design, in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a block diagram illustrating an improved head-mounted display design, in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a block diagram illustrating alpha-blending between foveal and extra-foveal depth-planes to produce composite image for transmission to display panel(s), in accordance with an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating data processing of the output of a Display Serial Interface, in accordance with an embodiment of the present disclosure.
  • FIG. 7 shows a flowchart for graphical rendering in accordance with an embodiment of the present disclosure.
  • the present invention makes use of a head-mounted display with eye-tracking.
  • Display panels backed with frame-buffers are used to render each pixel in the output frame.
  • Eye/pupil movement and/or position is measured and the information is fed to a frame-buffer manager to control the loading of the frame-buffer, as well as control the representation of the data in the frame-buffer.
  • In the foveal region, pixels are represented with full precision, while in the extra-foveal region, compromises in detail are made to increase efficiency.
  • Extra-foveal rendering techniques may include reduced resolution, reduced frame rate, and duplication between the two displays. Notably, these techniques are applied to the region identified as peripheral (extra-foveal) while the central (foveal) region of each display includes more detailed rendering.
  • the compressed representation can be transmitted across the interface, thus saving power.
  • the logic required to duplicate pixels can then be provided on the output of the DSI receiver.
  • FIG. 1 illustrates an example scene with a foveal region and an extra-foveal region.
  • Region 100 represents an example scene.
  • Region 100 may be an image of a virtual reality scene.
  • Region 104 represents an extra-foveal region. As described above, this portion of the visual field projects to the region of the retina with reduced receptor and ganglion density. A human may see this portion with low resolution.
  • Region 104 may be a portion within region 100 . Humans may typically see about 135 degrees vertically and 160 degrees horizontally. Accordingly, portions of objects 106 and 108 may not be rendered in a virtual reality image for a user.
  • Region 102 represents a foveal region. This portion of the visual field (e.g., a central circle of about 5 degrees, or another extent) projects to the retinal region called the fovea, tightly packed with color cone receptors. As described above, region 102 is a region where a user may see with great detail.
  • FIG. 2 illustrates a head-mounted display in an example state-of-the-art implementation.
  • in FIG. 2, a state-of-the-art head-mounted display is shown where images to be interpreted by the left and right eyes of the observer are output at full pixel resolution from a pair of display buffers. As shown in FIG. 2, a head-mounted display 200 comprises a display panel 201 for a user's left eye, a display panel 202 for the user's right eye, a frame buffer 203 which is connected to the display panel 201 via a serial/parallel interface 208, a frame buffer 204 which is connected to the display panel 202 via a serial/parallel interface 210, a memory 205 which comprises the frame buffer 203 and the frame buffer 204, and a processor 206 which may control the loading of pixels/image data into the frame buffers 203/204.
  • all pixels for the display panels 201 and 202 are rendered with the same resolution. In some embodiments, all pixels for display panels 201 and 202 are rendered with full resolution.
  • One disadvantage of this method is that the amount of data memory required to store the pixels to be output to the two displays is large, especially as the resolution of these displays climbs rapidly towards 4K and beyond.
  • Another disadvantage is that since a large amount of data is transmitted via the serial/parallel interfaces 208 / 210 , transmission via the serial/parallel interfaces 208 / 210 may become a performance bottleneck.
  • FIG. 3 is a block diagram illustrating an improved head-mounted display design with separate left and right foveal display buffers and lower-resolution extra-foveal display buffers, combined with alpha-blending, where each buffer is controlled by eye tracking using at least one of infrared projection and detection or EEG-based eye-tracking, in accordance with an embodiment of the present disclosure.
  • a head-mounted display 300 comprises devices 301 / 302 that are used to track the eye movements and/or positions of the user who is wearing the head-mounted display.
  • the devices 301/302 project infrared light onto the user's eyes to track the user's eye movements and positions.
  • the devices use cameras to take pictures or videos of the user's eyes.
  • the devices 301/302 detect the user's electroencephalogram to track the user's eye movements and/or positions.
  • the devices 301 / 302 may track and record brain wave patterns.
  • the user may have electrodes attached to his/her scalp, which then send signals to a processor 304 to record signals and detect movements and/or positions of the user's eyes (e.g., detect movements and/or positions of the pupils).
  • a user's eye movements and positions are tracked based on both infrared light tracking and EEG tracking. In other words, both results are combined to more accurately detect the user's eye movements and/or positions.
  • the processor 304 may load into memory 306 images that correspond to what a user may see.
  • pixels that correspond to the extra-foveal region of the user's left eye are loaded into an extra-foveal frame buffer 308 and are loaded with lower resolution (e.g., lower resolution compared to original resolution of available image data). Pixels that correspond to the foveal region of the user's left eye are loaded into a foveal frame buffer 310 and are loaded with full resolution (e.g., original full resolution of available image data). Pixels that correspond to the extra-foveal region of the user's right eye are loaded into an extra-foveal frame buffer 312 and are loaded with lower resolution. Pixels that correspond to the foveal region of the user's right eye are loaded into a foveal frame buffer 314 and are loaded with full resolution.
  • alpha blending 316 / 318 is applied to pixels in respective frame buffers 308 / 310 / 312 / 314 to create the appearance of partial or full transparency.
  • Alpha blending is used to display an alpha bitmap, which is a bitmap that has transparent or semi-transparent pixels.
  • each pixel in an alpha bitmap has a transparency component known as its alpha channel.
  • the alpha channel typically contains as many bits as a color channel. For example, an 8-bit alpha channel can represent 256 levels of transparency, from 0 (fully transparent) to 255 (fully opaque), as in the sketch below.
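  • as a concrete illustration of the arithmetic above, a minimal Python sketch of per-component blending with an 8-bit alpha channel follows; the function name blend_8bit and the integer rounding are illustrative assumptions, not part of the disclosure.

      def blend_8bit(src, dst, alpha):
          """Blend one 8-bit color component of a source pixel over a
          destination pixel: alpha 0 keeps dst (fully transparent),
          alpha 255 keeps src (fully opaque)."""
          # integer form of out = a*src + (1 - a)*dst with a = alpha / 255
          return (src * alpha + dst * (255 - alpha)) // 255

      print(blend_8bit(200, 40, 255))  # 200: opaque source wins
      print(blend_8bit(200, 40, 0))    # 40: transparent source keeps dst
      print(blend_8bit(200, 40, 128))  # 120: mid-level transparency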
  • Display panels 320 / 322 may display images comprising lower resolution and full resolution pixels.
  • the visual field of an extra-foveal region projects to the region of the retina with reduced receptor and ganglion density.
  • a human may see this portion with low resolution.
  • the visual field of a foveal region projects to the retinal region called the fovea, tightly packed with color cone receptors.
  • a human may see this portion with high resolution.
  • the foveal region may be a central circle of about 5 degrees (or another extent). Humans may typically see about 135 degrees vertically and 160 degrees horizontally.
  • the extra-foveal region may be outside the foveal region and within 135 degrees vertically and 160 degrees horizontally.
  • the extents of the foveal region and/or the extra-foveal region may be determined empirically and/or programmable via registers in hardware. In other words, the foveal region and/or the extra-foveal region may not need to be determined at design time.
  • Hardware may be designed such that the foveal region can extend to the entire visible field, which may be the worst-case.
  • software may be implemented which controls the registers that determine the extent of the foveal and extra foveal regions and/or their relative frame rates.
  • the system may gradually test and adjust the foveal region to a point where the user experience is not degraded.
  • the foveal region may be automatically adjusted, for example, to conserve battery energy: some display may be better than no display once the battery is depleted. A sketch of such an adjustment follows.
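  • the following Python sketch illustrates one way such run-time adjustment could look: a foveal bounding box derived from the tracked gaze point, with an extent that shrinks as the battery drains. The parameter names (fovea_deg, px_per_deg) and the linear shrink policy are assumptions for illustration, not values from the disclosure.

      def foveal_rect(gaze_x, gaze_y, width, height, fovea_deg=5.0, px_per_deg=40.0):
          """Clamped bounding box of the foveal region around the gaze point.
          fovea_deg and px_per_deg stand in for values that could live in
          hardware registers and be tuned empirically at run time."""
          r = int(fovea_deg / 2.0 * px_per_deg)  # foveal radius in pixels
          x0, y0 = max(0, gaze_x - r), max(0, gaze_y - r)
          x1, y1 = min(width, gaze_x + r), min(height, gaze_y + r)
          return x0, y0, x1, y1

      def adjusted_fovea_deg(fovea_deg, battery_frac, floor_deg=2.0):
          """Shrink the full-detail extent as the battery drains, keeping
          some display rather than no display."""
          return max(floor_deg, fovea_deg * battery_frac)

      print(foveal_rect(1080, 1080, 2160, 2160))  # box around a centred gaze
      print(adjusted_fovea_deg(5.0, 0.5))          # 2.5 degrees at half charge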
  • pixels that correspond to the foveal region(s) are loaded into foveal frame buffer(s) with full resolution (i.e., the original resolution of the available image data). Pixels that correspond to the extra-foveal region(s) are loaded into extra-foveal frame buffer(s) with reduced resolution. For example, as described in detail below, not all pixels from the available image data are loaded into an extra-foveal frame buffer.
  • reduced resolution is resolution that is lower than the original full resolution.
  • Resolution and/or frame-rate of the foveal region may be different than that of the extra-foveal region.
  • the extra-foveal region may be represented at lower resolution and/or lower frame rate.
  • multiple Display Serial Interfaces may be used in parallel. Some of the Display Serial Interfaces may be configured to transfer, in parallel, display data corresponding to the foveal region at a relatively higher frame rate and/or resolution, while others may be configured to transfer the extra-foveal data at a relatively lower frame rate and/or resolution.
  • pixel resolutions may be represented in various (e.g., more than two) levels. Pixels for the foveal region (e.g., a central circle of about 5 degrees) may be represented with full resolution (e.g., the original full resolution of the available image data). Pixels for the extra-foveal region that are close to the foveal region may be represented with medium resolution, which may be just slightly lower than the full resolution. Pixels for the extra-foveal region that are not close to the foveal region (e.g., a peripheral portion or an edge of the image) may be represented with lower resolution, which may be lower than the medium resolution.
  • the processor 304 can be a general purpose processor executing computer executable instructions stored on a tangible computer readable medium.
  • the processor 304 can be or can include special purpose circuitry such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
  • the processor 304 is part of hardware for accelerated graphics processing.
  • the processor 304 is a system on a chip—an integrated circuit (IC) that integrates all components of a computer or other electronic system into a single chip. The processor 304 may integrate all components to enable efficient graphics processing and loading of frame buffers.
  • pixel values are duplicated across 2×2 pixels, 3×3 pixels, 4×4 pixels, 5×5 pixels, or other multiple-pixel blocks.
  • the processor 304 may load a subset of the pixels that correspond to the extra-foveal regions into the extra-foveal frame buffers 308/312, duplicating each loaded pixel value to its neighboring pixels and storing the result in the extra-foveal frame buffers.
  • extra-foveal frame buffers 308 / 312 and foveal frame buffers 310 / 314 are loaded with different rates (e.g., different frame rates).
  • extra-foveal frame buffers 308 / 312 may be loaded at 60 Hz and foveal frame buffers 310 / 314 may be loaded at 120 Hz.
  • the extra-foveal frame buffers 308/312 and the foveal frame buffers 310/314 may be loaded at any other suitable rates; a scheduling sketch follows.
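  • as a sketch of such dual-rate loading, the Python below derives, for each tick of a common 120 Hz clock, which buffers to reload when the foveal buffers run at 120 Hz and the extra-foveal buffers at 60 Hz; the buffer names and the simple modulo schedule are illustrative assumptions.

      FOVEAL_HZ, EXTRA_FOVEAL_HZ = 120, 60  # example rates from the text

      def frame_schedule(n_ticks, base_hz=FOVEAL_HZ):
          """Yield, per base-rate tick, the frame buffers to reload."""
          extra_every = base_hz // EXTRA_FOVEAL_HZ  # every 2nd tick here
          for tick in range(n_ticks):
              buffers = ["foveal_left", "foveal_right"]
              if tick % extra_every == 0:
                  buffers += ["extra_foveal_left", "extra_foveal_right"]
              yield tick, buffers

      for tick, buffers in frame_schedule(4):
          print(tick, buffers)  # ticks 0 and 2 also reload the extra-foveal pair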
  • display panels 320 / 322 may be LCD panels, LED panels, or display panels using other display technologies.
  • FIG. 4 is a block diagram illustrating an improved head-mounted display design with separate full resolution left and right foveal frame buffer and shared lower resolution extra-foveal frame buffer, in accordance with an embodiment of the present disclosure.
  • a head-mounted display 400 comprises eye track devices 401 / 402 , a processor 404 , a memory 406 , foveal frame buffers 410 / 414 , a shared extra-foveal frame buffer 408 , and alpha-blending 416 / 418 .
  • the eye track devices 401/402, the processor 404, the memory 406, the foveal frame buffers 410/414, the shared extra-foveal frame buffer 408, and the alpha-blending 416/418 are similar to the eye track devices 301/302, the processor 304, the memory 306, the foveal frame buffers 310/314, and the alpha-blending 316/318 in FIG. 3. As shown in FIG. 4, the extra-foveal frame buffer 408 is shared between the two display panels.
  • a pixel that corresponds to a unit of image data for the extra-foveal region may be allocated a single corresponding memory space, rather than one memory space for the display panel for the left eye, and another memory space for the display panel for the right eye.
  • the shared extra-foveal frame buffer 408 may reduce the memory space requirement, as the rough sketch below illustrates.
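  • a back-of-the-envelope Python sketch of that saving follows; the 4 bytes per pixel, the foveal tile at 1/16 of the panel, and the 3×3 subsampling factor are assumed figures for illustration only.

      def framebuffer_bytes(w, h, bytes_per_px=4, block=3, shared_extra=True):
          """Per-frame storage: two full-resolution foveal tiles plus
          extra-foveal storage subsampled by block x block, optionally
          shared between the left and right displays."""
          panel = w * h * bytes_per_px
          foveal_tile = panel // 16            # assumed foveal extent
          extra = panel // (block * block)     # one of every 9 pixels kept
          return 2 * foveal_tile + (extra if shared_extra else 2 * extra)

      # For an assumed 2160x2160 panel, sharing saves about 2 MB per frame:
      saving = (framebuffer_bytes(2160, 2160, shared_extra=False)
                - framebuffer_bytes(2160, 2160))
      print(saving)  # 2073600 bytes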
  • FIG. 5 is a block diagram illustrating alpha-blending between foveal and extra-foveal depth-planes to produce composite image for transmission to display panel(s), in accordance with an embodiment of the present disclosure.
  • layer 502 comprises pixels in full resolution for the foveal region.
  • Layer 504 comprises pixels in lower resolution for the extra-foveal region.
  • layer 502 and layer 504 overlap with each other.
  • the foveal region is rendered with full resolution in layer 502 and with lower resolution in layer 504 .
  • depth-planes for pixels in layer 502 are combined with depth-planes for pixels in layer 504 via alpha-blending 503 to produce image 501 to be transmitted to display panel(s).
  • Such blending may enable a smooth-step transition such that an image comprising a foveal region and an extra-foveal region for a display panel is smoothly composited. Smoothstep is an interpolation function; a compositing sketch follows.
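  • a minimal NumPy sketch of such a composite follows: it feathers a full-resolution foveal layer over an (already upscaled) extra-foveal layer using a smoothstep mask centred on the gaze point. The blend radii, array shapes, and function names are illustrative assumptions.

      import numpy as np

      def smoothstep(edge0, edge1, x):
          """Smoothstep interpolation: 0 below edge0, 1 above edge1."""
          t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
          return t * t * (3.0 - 2.0 * t)

      def composite(foveal, extra_up, gaze, r_inner, r_outer):
          """Alpha-blend an HxWx3 foveal layer over an HxWx3 upscaled
          extra-foveal layer, feathering the seam between r_inner and
          r_outer pixels from the gaze point."""
          h, w = foveal.shape[:2]
          ys, xs = np.mgrid[0:h, 0:w]
          dist = np.hypot(xs - gaze[0], ys - gaze[1])
          alpha = 1.0 - smoothstep(r_inner, r_outer, dist)  # 1 in the fovea
          alpha = alpha[..., None]                          # broadcast to RGB
          return alpha * foveal + (1.0 - alpha) * extra_up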
  • FIG. 6 is a block diagram illustrating data processing of the output of a Display Serial Interface, in accordance with an embodiment of the present disclosure.
  • not all pixels that correspond to the extra-foveal region are loaded into the extra-foveal frame buffer 602 .
  • For example, for each 3×3 block of pixels 620, only one pixel value "x," for pixel 622, is loaded into the extra-foveal frame buffer 602. In other implementations, for each 2×2, 4×4, or 5×5 block of pixels, only one pixel value is loaded into the extra-foveal frame buffer. Alternatively, two or more pixels from each 2×2, 4×4, or 5×5 block may be loaded into the extra-foveal buffer.
  • the pixel data/values are transmitted via serial/parallel interface 604 to display panel 610 .
  • the serial/parallel interface 604 may be a Display Serial Interface.
  • processor 608 receives the output of the serial/parallel interface 604.
  • the processor 608 may duplicate a pixel value to its neighboring pixels. For example, as shown in FIG. 6, value "x" of pixel 622 may be duplicated to the neighboring pixels 620, and the duplicated pixel values may be used to render an image on a display panel 610. Accordingly, not all pixels need to be stored in memory; the memory only needs to store a compressed representation of the image. For example, as shown in FIG. 6, if only one pixel of each 3×3 block is loaded into the memory, the memory only needs to store 1/9 of the total pixels, and the other 8 of every 9 pixels may be duplicated by the processor 608, as sketched below.
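  • the Python sketch below mirrors the 3×3 case: one pixel per 3×3 tile is kept for the compressed frame-buffer representation, and each stored value is duplicated across its tile on readout, as the duplication logic on a DSI receiver output could do. The helper names and the choice of the top-left sample per tile are assumptions.

      import numpy as np

      def subsample_pixels(full, block=3):
          """Keep one representative (top-left) pixel per block x block
          tile, so only 1/block**2 of the pixels are stored."""
          return full[::block, ::block]

      def duplicate_pixels(stored, block=3):
          """Expand the compressed representation by duplicating each
          stored value across its block x block neighborhood."""
          return np.repeat(np.repeat(stored, block, axis=0), block, axis=1)

      full = np.arange(81, dtype=np.float32).reshape(9, 9)
      stored = subsample_pixels(full)      # shape (3, 3): 9 of 81 pixels kept
      restored = duplicate_pixels(stored)  # shape (9, 9): 8 of 9 duplicated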
  • FIG. 7 shows a flowchart for graphical rendering in accordance with an embodiment of the present disclosure.
  • Method 700 may include receiving eye-tracking information of the user wearing the head-mounted display (702); determining, based on the eye-tracking information, a first foveal region of a first display for the left eye, a first extra-foveal region of the first display, a second foveal region of a second display for the right eye, and a second extra-foveal region of the second display (704); and loading, into a memory, first foveal region pixels of the first foveal region with full resolution, first extra-foveal region pixels of the first extra-foveal region with reduced resolution, second foveal region pixels of the second foveal region with full resolution, and second extra-foveal region pixels of the second extra-foveal region with reduced resolution (706).
  • Method 700 may include receiving eye-tracking information of the user wearing the head-mounted display (702).
  • the eye-tracking information may be based on infrared light projection onto the user's pupils.
  • the eye-tracking information may be based on EEG of the user (e.g., electrodes may be attached to the user's scalp, which then may send signals to a processor).
  • eye-tracking information may also be obtained from a special contact lens, worn by the user, with an embedded mirror or magnetic field sensor.
  • foveal and extra-foveal regions for the display panel for the user's left eye, and foveal and extra-foveal regions for the display panel of the user's right eye may be determined.
  • pixels that correspond to foveal regions may be loaded into one or more foveal frame buffers and these pixels may be represented with full resolution; pixels that correspond to extra-foveal regions may be loaded into one or more extra-foveal frame buffers and these pixels may be represented with reduced resolution.
  • the extra-foveal buffer may be shared between the display panel for the left eye and the display panel for the right eye.
  • a pixel value may be duplicated across blocks including, but not limited to, 2×2 pixels, 3×3 pixels, 4×4 pixels, or 5×5 pixels. Other block sizes are possible.
  • the duplicated values may be stored in a frame buffer (e.g., an extra-foveal frame buffer).
  • the output of a Display Serial Interface may be processed to duplicate pixel values, which may enable compressed storage in a memory.
  • connections may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise the connections may for example be direct connections or indirect connections.
  • any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermediate components.
  • any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of other elements or steps than those listed in a claim.
  • the terms “a” or “an,” as used herein, are defined as one or more than one.

Abstract

Systems and methods are provided for rendering of a dual eye-specific display. The system tracks the user's eye movements and/or positions, in some implementations, based on electroencephalography (EEG) of the user, to correctly label the central (foveal) and peripheral (extra-foveal) areas of the display. Foveal data is fully rendered while extra-foveal data is reduced in resolution and, in some implementations, shared between the two displays.

Description

    FIELD OF THE APPLICATION
  • The present application relates generally to computer vision processing and, more specifically, to efficient image simulation in a head-mounted display.
  • BACKGROUND
  • Continued advances in the speed and acuity of computers, particularly as related to graphics hardware and software, have recently made practical the decades-old aspiration of immersive computer-generated environments. Virtual reality devices, with limb-tracking, haptic feedback, and head-mounted sensory equipment, are now being developed for consumer use. As cutting-edge systems attempt to provide more impressive and realistic artificial environments, the demands on the graphical systems remain intensive. In addition, efficient rendering and display, both in hardware and software, remain of paramount importance.
  • Advances in computer graphics and image technology have resulted in a new generation of head-mounted display devices with the intent to provide as realistic a visual experience as possible for the user. The typical approach for these devices is to use conventional LCD display panels backed by frame-buffers that represent each pixel in the output frame.
  • This approach is wasteful in terms of power and area because it does not take account of the human visual system. The human retina has high resolution only at the center, or fovea, a small region which is packed densely with photoreceptors and responsible for half the nerve fibers in the optic nerve. The rest of the retina provides lower resolution with a reduced density of photoreceptors. The human visual system gives the illusion of uniform high resolution by scanning the fovea around the scene and maintaining an internal mental model of the scene in the visual cortex.
  • SUMMARY
  • In accordance with the disclosed subject matter, systems and methods are provided for rendering of a dual eye-specific display. Recognizing the resolution limits of extra-foveal human vision provides an opportunity for resource savings by minimizing resources spent on extra-foveal rendering. The system tracks the user's eye movements and/or positions, in some implementations, based on electroencephalography (EEG) of the user, to correctly label the central (foveal) and peripheral (extra-foveal) areas of the display. Foveal data is fully rendered while extra-foveal data is reduced in resolution and, in some implementations, shared between the two displays.
  • In accordance with the disclosed subject matter, systems and methods are provided for a head-mounted display adapted to human visual mechanism. In one embodiment, the disclosure relates to a head-mounted display device with eye-tracking of a user's two eyes. The device comprises a first display for the left eye and a second display for the right eye. The device may further comprise a memory, coupled to the first display and the second display. The memory may comprise a first frame buffer for the first display and a second frame buffer for the second display, and the first frame buffer may comprise a first foveal frame buffer and a first extra-foveal frame buffer and the second frame buffer may comprise a second foveal frame buffer and a second extra-foveal frame buffer. The device may further comprise one or more processors, coupled to the memory. The one or more processors may be configured to receive eye-tracking information of the user wearing the head mounted display. The eye-tracking information may comprise at least one of an eye movement and an eye position of the user. The one or more processors may be further configured to determine, based on the eye-tracking information, a first foveal region of the first display, a first extra-foveal region of the first display, a second foveal region of the second display, and a second extra-foveal region of the second display. The one or more processors may be further configured to load, into the memory, first foveal region pixels of the first foveal region, first extra-foveal region pixels of the first extra-foveal region, second foveal region pixels of the second foveal region, second extra-foveal region pixels of the second extra-foveal region. The first foveal region pixels may be represented with full resolution and loaded into the first foveal frame buffer. The first extra-foveal region pixels may be represented with reduced resolution and loaded into the first extra-foveal frame buffer. The second foveal region pixels may be represented with full resolution and loaded into the second foveal frame buffer. The second extra-foveal region pixels may be represented with reduced resolution and loaded into the second extra-foveal frame buffer.
  • In accordance with other aspects of this embodiment, the eye-tracking information may be based on infrared light projection onto the user's eyes, either directly or obliquely.
  • In accordance with other aspects of this embodiment, the eye-tracking information may be based on electroencephalography (EEG) of the user.
  • In accordance with other aspects of this embodiment, at least one of the first foveal region, the first extra-foveal region, the second foveal region, or the second extra-foveal region is adjustable.
  • In accordance with other aspects of this embodiment, to represent the first extra-foveal region pixels and the second extra-foveal region pixel in reduced resolution, the one or more processors may be configured to duplicate a pixel value.
  • In accordance with other aspects of this embodiment, the one or more processors may be further configured to duplicate the pixel value across at least one of 2×2 pixels, 3×3 pixels, 4×4 pixels, or 5×5 pixels.
  • In accordance with other aspects of this embodiment, the device may further comprise a Display Serial Interface coupled to the memory and at least one of the first display or the second display. The one or more processors may be configured to duplicate the pixel value after receiving the pixel value from the Display Serial Interface.
  • In accordance with other aspects of this embodiment, the device may further comprise a plurality of Display Serial Interfaces. Some of the plurality of Display Serial Interfaces are configured to transfer, in parallel, display data corresponding to at least one of the first foveal region or the second foveal region.
  • In accordance with other aspects of this embodiment, the first extra-foveal frame buffer and the second extra-foveal frame buffer may be shared.
  • In accordance with other aspects of this embodiment, the one or more processors may be further configured to load the first foveal frame buffer and the first extra-foveal frame buffer at different rates and the second foveal frame buffer and the second extra-foveal frame buffer at different rates.
  • In accordance with other aspects of this embodiment, the one or more processors may be further configured to apply alpha-blending between depth-planes of the first foveal region pixels and depth-planes of the first extra-foveal region pixels and between depth-planes of the second foveal region pixels and depth-planes of the second extra-foveal region pixels.
  • In another embodiment, the disclosure relates to a method for a head-mounted display device with eye-tracking of a user's two eyes. According to the method, eye-tracking information of the user wearing the head-mounted display may be received. The eye-tracking information may comprise at least one of an eye movement and an eye position of the user. Based on the eye-tracking information, a first foveal region of a first display for the left eye, a first extra-foveal region of the first display, a second foveal region of a second display for the right eye, and a second extra-foveal region of the second display may be determined. First foveal region pixels that are represented with full resolution of the first foveal region may be loaded into a first foveal frame buffer of a first frame buffer for the first display. First extra-foveal region pixels that are represented with reduced resolution of the first extra-foveal region may be loaded into a first extra-foveal frame buffer of the first frame buffer for the first display. Second foveal region pixels that are represented with full resolution of the second foveal region may be loaded into a second foveal frame buffer of a second frame buffer for the second display. Second extra-foveal region pixels that are represented with reduced resolution of the second extra-foveal region may be loaded into a second extra-foveal frame buffer of the second frame buffer for the second display.
  • In still another embodiment, the disclosure relates to a non-transitory computer readable medium storing a computer-readable program for a head-mounted display device with eye-tracking of a user's two eyes. The program may include computer-readable instructions to receive eye-tracking information of the user wearing the head mounted display. The eye-tracking information may comprise at least one of an eye movement and an eye position of the user. The program may include computer-readable instructions to determine, based on the eye-tracking information, a first foveal region of a first display for the left eye, a first extra-foveal region of the first display, a second foveal region of a second display for the right eye, and a second extra-foveal region of the second display. The program may include computer-readable instructions to load, into a first foveal frame buffer of a first frame buffer for the first display, first foveal region pixels that are represented with full resolution of the first foveal region. The program may include computer-readable instructions to load, into a first extra-foveal frame buffer of the first frame buffer for the first display, first extra-foveal region pixels that are represented with reduced resolution of the first extra-foveal region. The program may include computer-readable instructions to load, into a second foveal frame buffer of a second frame buffer for the second display, second foveal region pixels that are represented with full resolution of the second foveal region. The program may include computer-readable instructions to load, into a second extra-foveal frame buffer of the second frame buffer for the second display, second extra-foveal region pixels that are represented with reduced resolution of the second extra-foveal region.
  • The present invention will now be described in more detail with reference to particular embodiments thereof as shown in the accompanying drawings. While the present disclosure is described below with reference to particular embodiments, it should be understood that the present disclosure is not limited thereto. Those of ordinary skill in the art having access to the teachings herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein, and with respect to which the present disclosure may be of significant utility.
  • DESCRIPTION OF DRAWINGS
  • Various objects, features, and advantages of the disclosed subject matter can be more fully appreciated with reference to the following detailed description of the disclosed subject matter when considered in connection with the following drawings, in which like reference numerals identify like elements. The accompanying figures are schematic and are not intended to be drawn to scale. For purposes of clarity, not every component is labelled in every figure. Nor is every component of each embodiment of the disclosed subject matter shown where illustration is not necessary to allow those of ordinary skill in the art to understand the disclosed subject matter.
  • FIG. 1 illustrates an example scene with a foveal region and an extra-foveal region.
  • FIG. 2 illustrates a head-mounted display in an example state-of-the-art implementation.
  • FIG. 3 is a block diagram illustrating an improved head-mounted display design, in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a block diagram illustrating an improved head-mounted display design, in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a block diagram illustrating alpha-blending between foveal and extra-foveal depth-planes to produce composite image for transmission to display panel(s), in accordance with an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating data processing of the output of a Display Serial Interface, in accordance with an embodiment of the present disclosure.
  • FIG. 7 shows a flowchart for graphical rendering in accordance with an embodiment of the present disclosure.
  • DETAILED DESCRIPTION
  • In the following description, numerous specific details are set forth regarding the systems and methods of the disclosed subject matter and the environment in which such systems and methods may operate, etc., in order to provide a thorough understanding of the disclosed subject matter. It will be apparent to one skilled in the art, however, that the disclosed subject matter may be practiced without such specific details, and that certain features, which are well known in the art, are not described in detail in order to avoid complication of the disclosed subject matter. In addition, it will be understood that the examples provided below are exemplary, and that it is contemplated that there are other systems and methods that are within the scope of the disclosed subject matter.
  • The present invention makes use of a head-mounted display with eye-tracking. Display panels backed with frame-buffers are used to render each pixel in the output frame. Eye/pupil movement and/or position is measured and the information is fed to a frame-buffer manager to control the loading of the frame-buffer, as well as to control the representation of the data in the frame-buffer. In the foveal region, pixels are represented with full precision, while in the extra-foveal region, compromises in detail are made to increase efficiency.
  • Extra-foveal rendering techniques may include reduced resolution, reduced frame rate, and duplication between the two displays. Notably, these techniques are applied to the region identified as peripheral (extra-foveal) while the central (foveal) region of each display includes more detailed rendering.
  • Where an intelligent panel is used, rather than duplicating pixels at the output of the frame-buffer in the extra-foveal region of the display, the compressed representation can be transmitted across the interface, thus saving power. The logic required to duplicate pixels can then be provided on the output of the DSI receiver.
  • FIG. 1 illustrates an example scene with a foveal region and an extra-foveal region. Region 100 represents an example scene. Region 100 may be an image of a virtual reality scene. Region 104 represents an extra-foveal region. As described above, this portion of the visual field projects to the region of the retina with reduced receptor and ganglion density. A human may see this portion with low resolution. Region 104 may be a portion within region 100. Humans may typically see about 135 degrees vertically and 160 degrees horizontally. Accordingly, portions of objects 106 and 108 may not be rendered in a virtual reality image for a user. Region 102 represents a foveal region. This portion of the visual field (e.g., a central circle of about 5 degrees, or another extent) projects to the retinal region called the fovea, tightly packed with color cone receptors. As described above, region 102 is a region where a user may see with great detail.
  • FIG. 2 illustrates a head-mounted display in an example state-of-the-art implementation. In FIG. 2, a state-of-the-art head-mounted display is shown where images to be interpreted by the left and right eyes of the observer are output at full pixel resolution from a pair of display buffers. As shown in FIG. 2, a head-mounted display 200 comprises a display panel 201 for a user's left eye, a display panel 202 for the user's right eye, a frame buffer 203 which is connected to the display panel 201 via a serial/parallel interface 208, a frame buffer 204 which is connected to the display panel 202 via a serial/parallel interface 210, a memory 205 which comprises the frame buffer 203 and the frame buffer 204, and a processor 206 which may control the loading of pixels/image data into the frame buffers 203/204. In this embodiment, all pixels for the display panels 201 and 202 are rendered with the same resolution. In some embodiments, all pixels for display panels 201 and 202 are rendered with full resolution. One disadvantage of this method is that the amount of data memory required to store the pixels to be output to the two displays is large, especially as the resolution of these displays climbs rapidly towards 4K and beyond. Another disadvantage is that since a large amount of data is transmitted via the serial/parallel interfaces 208/210, transmission via the serial/parallel interfaces 208/210 may become a performance bottleneck.
  • FIG. 3 is a block diagram illustrating an improved head-mounted display design with separate left and right foveal display buffers and lower resolution extra-foveal display buffers combined with alpha-blending, where each buffer is controlled by eye tracking using at least one of infrared projection and detection or EEG-based eye-tracking, in accordance with an embodiment of the present disclosure. As shown in FIG. 3, a head-mounted display 300 comprises devices 301/302 that are used to track the eye movements and/or positions of the user who is wearing the head-mounted display. In some embodiments, the devices 301/302 project infrared light onto the user's eyes to track the user's eye movements and positions. In some embodiments, the devices use cameras to take pictures or videos of the user's eyes. In other embodiments, the devices 301/302 detect an electroencephalogram (EEG) of the user to track the user's eye movements and/or positions. The devices 301/302 may track and record brain wave patterns. For example, the user may have electrodes attached to his/her scalp, which then send signals to a processor 304 to record the signals and detect movements and/or positions of the user's eyes (e.g., movements and/or positions of the pupils). In some embodiments, a user's eye movements and positions are tracked based on both infrared light tracking and EEG tracking; the two results are combined to more accurately detect the user's eye movements and/or positions. Based on the eye-tracking information, the processor 304 may load into memory 306 images that correspond to what the user may see. In one embodiment, pixels that correspond to the extra-foveal region of the user's left eye are loaded into an extra-foveal frame buffer 308 with lower resolution (e.g., lower resolution compared to the original resolution of the available image data). Pixels that correspond to the foveal region of the user's left eye are loaded into a foveal frame buffer 310 with full resolution (e.g., the original full resolution of the available image data). Pixels that correspond to the extra-foveal region of the user's right eye are loaded into an extra-foveal frame buffer 312 with lower resolution. Pixels that correspond to the foveal region of the user's right eye are loaded into a foveal frame buffer 314 with full resolution.
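  • As a minimal sketch of this buffer loading, the hypothetical C helpers below copy a foveal window at full resolution and subsample the extra-foveal field, keeping one source pixel per 3×3 block; all function and parameter names are invented for this example.

    #include <stdint.h>

    /* Copy a w-by-h foveal window, whose top-left corner is (x0, y0) in the
     * source image (stride src_w pixels), into a foveal frame buffer at
     * full resolution (e.g., buffer 310 or 314). */
    void load_foveal(uint32_t *fovea_fb, const uint32_t *src, int src_w,
                     int x0, int y0, int w, int h)
    {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                fovea_fb[y * w + x] = src[(y0 + y) * src_w + (x0 + x)];
    }

    /* Load an extra-foveal frame buffer (e.g., buffer 308 or 312) at
     * reduced resolution: keep only one source pixel per 3x3 block. */
    void load_extra_foveal(uint32_t *extra_fb, const uint32_t *src,
                           int src_w, int src_h)
    {
        int bw = src_w / 3;
        for (int y = 0; y < src_h / 3; y++)
            for (int x = 0; x < bw; x++)
                extra_fb[y * bw + x] = src[(y * 3) * src_w + (x * 3)];
    }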
  • In some embodiments, alpha blending 316/318 is applied to pixels in the respective frame buffers 308/310/312/314 to create the appearance of partial or full transparency. Alpha blending is used to display an alpha bitmap, which is a bitmap that has transparent or semi-transparent pixels. In addition to red, green, and blue color channels, each pixel in an alpha bitmap has a transparency component known as its alpha channel. The alpha channel typically contains as many bits as a color channel. For example, an 8-bit alpha channel can represent 256 levels of transparency, from 0 (fully transparent) to 255 (fully opaque).
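  • A minimal sketch of this per-pixel blend over 8-bit channels follows; the 0xAARRGGBB packing and the function name are assumptions of this example.

    #include <stdint.h>

    /* Blend a foreground pixel carrying an 8-bit alpha channel over an
     * opaque background pixel: 0 is fully transparent, 255 fully opaque. */
    uint32_t alpha_blend(uint32_t fg, uint32_t bg)   /* 0xAARRGGBB pixels */
    {
        uint32_t a   = fg >> 24;
        uint32_t out = 0xFFu << 24;                  /* result is opaque */
        for (int s = 0; s <= 16; s += 8) {           /* blue, green, red */
            uint32_t f = (fg >> s) & 0xFF;
            uint32_t b = (bg >> s) & 0xFF;
            out |= ((f * a + b * (255u - a)) / 255u) << s;
        }
        return out;
    }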
  • Display panels 320/322 may display images comprising lower resolution and full resolution pixels.
  • In some embodiments, as described in relation to FIG. 1, the visual field of an extra-foveal region projects to the region of the retina with reduced receptor and ganglion density. A human may see this portion with low resolution. The visual field of a foveal region projects to the retinal region called the fovea, which is tightly packed with color cone receptors. A human may see this portion with high resolution. The foveal region may be a central circle of about 5 degrees or another extent. Humans may typically see about 135 degrees vertically and 160 degrees horizontally. The extra-foveal region may be outside the foveal region and within 135 degrees vertically and 160 degrees horizontally. In some embodiments, the extents of the foveal region and/or the extra-foveal region may be determined empirically and/or be programmable via registers in hardware. In other words, the foveal region and/or the extra-foveal region need not be fixed at design time. Hardware may be designed such that the foveal region can extend to the entire visible field, which is the worst case. In some embodiments, software may be implemented which controls the registers that determine the extents of the foveal and extra-foveal regions and/or their relative frame rates, as sketched below.
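  • The register sketch below is purely hypothetical: the disclosure states only that the extents and relative frame rates are programmable, so every field name, width, and layout here is invented for illustration.

    #include <stdint.h>

    typedef struct {
        volatile uint32_t foveal_radius_px;  /* extent of the foveal circle */
        volatile uint32_t foveal_rate_hz;    /* foveal refresh rate         */
        volatile uint32_t extra_rate_hz;     /* extra-foveal refresh rate   */
    } fovea_regs_t;

    /* Software could widen the foveal circle until the reduced-detail
     * periphery is no longer noticeable, or shrink it to save power. */
    void set_foveal_extent(fovea_regs_t *regs, uint32_t radius_px)
    {
        regs->foveal_radius_px = radius_px;
    }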
  • In some embodiments, the system may gradually test and adjust the foveal region to a point where the user is satisfied (e.g., the adjustment does not degrade the user experience). In other embodiments, the foveal region may be automatically adjusted, for example, to conserve battery energy; a somewhat degraded display may be better than no display at all once the battery is depleted.
  • For example, in a virtual reality application where all image data are already available (e.g., a gaming virtual reality application), pixels that correspond to the foveal region(s) are loaded into foveal frame buffer(s) with full resolution (i.e., the original resolution of the available image data). Pixels that correspond to the extra-foveal region(s) are loaded into extra-foveal frame buffer(s) with reduced resolution. For example, as described in detail below, not all pixels from the available image data are loaded into an extra-foveal frame buffer. Rather, for each 3×3 block of pixels of the available image data that corresponds to an extra-foveal region, only one pixel is loaded into the extra-foveal frame buffer, and the value of that pixel is duplicated to its eight neighboring pixels. In effect, the pixel size is increased and the resolution is reduced. In some embodiments, reduced resolution is resolution that is lower than the original full resolution.
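  • The complementary step, duplicating each stored value across its 3×3 neighborhood at output time, might look like the following sketch; the names are invented for this example, and the caller must size the output buffer for bw*3 columns and bh*3 rows.

    #include <stdint.h>

    /* Expand a reduced-resolution buffer of bw-by-bh samples back to full
     * panel resolution by copying each value across its 3x3 block. */
    void expand_3x3(uint32_t *out, int out_w,
                    const uint32_t *extra_fb, int bw, int bh)
    {
        for (int by = 0; by < bh; by++)
            for (int bx = 0; bx < bw; bx++) {
                uint32_t v = extra_fb[by * bw + bx];
                for (int dy = 0; dy < 3; dy++)      /* duplicate value to */
                    for (int dx = 0; dx < 3; dx++)  /* the 8 neighbors    */
                        out[(by * 3 + dy) * out_w + (bx * 3 + dx)] = v;
            }
    }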
  • The resolution and/or frame rate of the foveal region may be different from that of the extra-foveal region. In some embodiments, the extra-foveal region may be represented with a lower resolution and/or a lower frame rate.
  • In some embodiments, multiple Display Serial Interfaces may be used in parallel. Some of the Display Serial Interfaces may be configured to transfer display data corresponding to the foveal region at a relatively higher frame rate and/or resolution, while others transfer, in parallel, display data corresponding to the extra-foveal region at a relatively lower frame rate and/or resolution.
  • In some implementations, pixels are represented in various (e.g., more than two) resolutions. For example, pixels for the foveal region (e.g., a central circle of about 5 degrees) may be represented with full resolution (e.g., the original full resolution of the available image data). Pixels for the extra-foveal region that are close to the foveal region may be represented with a medium resolution, slightly lower than full resolution. Pixels for the extra-foveal region that are far from the foveal region (e.g., a peripheral portion or an edge of the image) may be represented with a resolution lower still. A hypothetical tiering function is sketched below.
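  • One hedged reading of such tiering is a function from distance-to-gaze to duplication block size; the thresholds below are illustrative assumptions only.

    /* Map a pixel's distance from the gaze point (in pixels) to a
     * duplication block size: 1x1 (full) inside the fovea, 2x2 (medium)
     * nearby, 3x3 (lower) in the far periphery. */
    int block_size_for(double dist_px, double foveal_radius_px)
    {
        if (dist_px <= foveal_radius_px)        return 1;  /* full       */
        if (dist_px <= 2.0 * foveal_radius_px)  return 2;  /* medium     */
        return 3;                                          /* peripheral */
    }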
  • In some implementations, the processor 304 can be a general purpose processor executing computer executable instructions stored on a tangible computer readable medium. In some implementations, the processor 304 can be or can include special purpose circuitry such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). In some implementations, the processor 304 is part of hardware for accelerated graphics processing. In some implementations, the processor 304 is a system on a chip—an integrated circuit (IC) that integrates all components of a computer or other electronic system into a single chip. The processor 304 may integrate all components to enable efficient graphics processing and loading of frame buffers.
  • In some implementations, for the extra-foveal frame buffers 308/312, pixel values are duplicated across 2×2 pixels, 3×3 pixels, 4×4 pixels, 5×5 pixels, or other multiples of pixels. In other words, rather than loading every pixel into the extra-foveal frame buffers 308/312, the processor 304 may load a subset of the pixels that correspond to the extra-foveal regions into the extra-foveal frame buffers 308/312, duplicating each loaded pixel value to its neighboring pixels and storing the duplicates in the extra-foveal frame buffers. In some embodiments, the extra-foveal frame buffers 308/312 and the foveal frame buffers 310/314 are loaded at different rates (e.g., different frame rates). For example, the extra-foveal frame buffers 308/312 may be loaded at 60 Hz while the foveal frame buffers 310/314 are loaded at 120 Hz; any other suitable rates may be used. A hypothetical scheduling of these example rates is sketched below.
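  • The sketch below assumes a 120 Hz tick and reloads the foveal buffers on every tick and the extra-foveal buffers on every other tick (60 Hz); the function names are placeholders, and the stubs stand in for the actual buffer loads.

    static void reload_foveal_buffers(void)       { /* fill 310/314 */ }
    static void reload_extra_foveal_buffers(void) { /* fill 308/312 */ }

    void on_vsync_tick(unsigned tick)
    {
        reload_foveal_buffers();                /* 120 Hz: every tick      */
        if ((tick & 1u) == 0)
            reload_extra_foveal_buffers();      /* 60 Hz: every other tick */
    }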
  • In some implementations, display panels 320/322 may be LCD panels, LED panels, or display panels using other display technologies.
  • FIG. 4 is a block diagram illustrating an improved head-mounted display design with separate full resolution left and right foveal frame buffers and a shared lower resolution extra-foveal frame buffer, in accordance with an embodiment of the present disclosure. A head-mounted display 400 comprises eye track devices 401/402, a processor 404, a memory 406, foveal frame buffers 410/414, a shared extra-foveal frame buffer 408, and alpha-blending 416/418. The eye track devices 401/402, the processor 404, the memory 406, the foveal frame buffers 410/414, and the alpha-blending 416/418 are similar to the eye track devices 301/302, the processor 304, the memory 306, the foveal frame buffers 310/314, and the alpha-blending 316/318 in FIG. 3, while the shared extra-foveal frame buffer 408 takes the place of the two extra-foveal frame buffers 308/312. As shown in FIG. 4, rather than using two separate extra-foveal frame buffers (one for the display panel for the left eye and another for the display panel for the right eye), in this embodiment the extra-foveal frame buffer is shared between the two display panels. In other words, a pixel that corresponds to a unit of image data for the extra-foveal region may be allocated a single memory space, rather than one memory space for each display panel. The shared extra-foveal frame buffer 408 may reduce the memory space requirement.
  • FIG. 5 is a block diagram illustrating alpha-blending between foveal and extra-foveal depth-planes to produce a composite image for transmission to the display panel(s), in accordance with an embodiment of the present disclosure. As shown in FIG. 5, layer 502 comprises pixels in full resolution for the foveal region. Layer 504 comprises pixels in lower resolution for the extra-foveal region. In some embodiments, layer 502 and layer 504 overlap each other; in other words, the foveal region is rendered with full resolution in layer 502 and with lower resolution in layer 504. In this embodiment, depth-planes for pixels in layer 502 are combined with depth-planes for pixels in layer 504 via alpha-blending 503 to produce image 501, which is transmitted to the display panel(s). Such blending may enable a smooth-step transition such that an image comprising a foveal region and an extra-foveal region is smoothly composited for a display panel. Smoothstep is an interpolation function.
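  • Smoothstep has the standard closed form shown below; using distance-to-gaze to drive the blend weight is an assumption of this sketch, not a detail taken from the disclosure.

    /* Classic smoothstep: 0 for x <= e0, 1 for x >= e1, with a smooth
     * cubic ramp in between. */
    double smoothstep(double e0, double e1, double x)
    {
        double t = (x - e0) / (e1 - e0);
        if (t < 0.0) t = 0.0;
        if (t > 1.0) t = 1.0;
        return t * t * (3.0 - 2.0 * t);
    }

    /* Example blend weight: fully foveal inside r_inner, fully
     * extra-foveal beyond r_outer, smooth transition in between. */
    double foveal_alpha(double dist_px, double r_inner, double r_outer)
    {
        return 1.0 - smoothstep(r_inner, r_outer, dist_px);
    }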
  • FIG. 6 is a block diagram illustrating data processing of the output of a Display Serial Interface, in accordance with an embodiment of the present disclosure. In this embodiment, not all pixels that correspond to the extra-foveal region are loaded into the extra-foveal frame buffer 602. For example, for each 3×3 block of pixels 620, only one pixel value "x", for pixel 622, is loaded into the extra-foveal frame buffer 602. As another example, for each 2×2, 4×4, or 5×5 block of pixels, only one pixel value is loaded into the extra-foveal frame buffer. In some implementations, two or more pixels of each 2×2, 4×4, or 5×5 block may be loaded into the extra-foveal buffer. The pixel data/values are transmitted via a serial/parallel interface 604 to a display panel 610. The serial/parallel interface 604 may be a Display Serial Interface.
  • In some embodiments, a processor 608 receives the output of the serial/parallel interface 604. The processor 608 may duplicate a pixel value to its neighboring pixels. For example, as shown in FIG. 6, the value "x" of pixel 622 may be duplicated to the neighboring pixels 620, and the duplicated pixel values may be used to render an image on the display panel 610. Accordingly, not all pixels need to be stored in memory; the memory only needs to store a compressed representation of the image. For example, as shown in FIG. 6, if only one pixel of each 3×3 block is loaded into the memory, the memory needs to store only 1/9 of the total pixels, and the other 8 pixels of each 3×3 block may be duplicated by the processor 608. A streaming sketch of this receiver-side duplication follows.
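  • The sketch below assumes the receiver sees one compressed scanline of n samples at a time; the interface and all names are invented for this example and are not the actual API of a DSI receiver.

    #include <stdint.h>

    /* Expand one compressed scanline at the receiver side of the serial
     * interface: each sample is stretched 3x horizontally and the result
     * is written to three consecutive panel rows starting at out_row. */
    void dsi_rx_expand_line(uint32_t *panel, int panel_w, int out_row,
                            const uint32_t *line, int n)
    {
        for (int r = 0; r < 3; r++)                /* three panel rows    */
            for (int i = 0; i < n; i++)            /* each sample         */
                for (int c = 0; c < 3; c++)        /* three panel columns */
                    panel[(out_row + r) * panel_w + (i * 3 + c)] = line[i];
    }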
  • FIG. 7 shows a flowchart for graphical rendering in accordance with an embodiment of the present disclosure.
  • Method 700 may include receiving eye-tracking information of the user wearing the head-mounted display (702); determining, based on the eye-tracking information, a first foveal region of a first display for the left eye, a first extra-foveal region of the first display, a second foveal region of a second display for the right eye, and a second extra-foveal region of the second display (704); and loading, into a memory, first foveal region pixels of the first foveal region with full resolution, first extra-foveal region pixels of the first extra-foveal region with reduced resolution, second foveal region pixels of the second foveal region with full resolution, and second extra-foveal region pixels of the second extra-foveal region with reduced resolution (706).
  • Method 700 may include receiving eye-tracking information of the user wearing the head-mounted display (702). In some implementations, as described above, the eye-tracking information may be based on infrared light projected onto the user's pupils. In other implementations, the eye-tracking information may be based on an EEG of the user (e.g., electrodes may be attached to the user's scalp, which then send signals to a processor). In further implementations, eye-tracking information may be obtained from a special contact lens, worn by the user, with an embedded mirror or magnetic field sensor.
  • At step 704, based on the eye-tracking information, foveal and extra-foveal regions for the display panel for the user's left eye, and foveal and extra-foveal regions for the display panel of the user's right eye may be determined.
  • At step 706, pixels that correspond to foveal regions may be loaded into one or more foveal frame buffers and represented with full resolution; pixels that correspond to extra-foveal regions may be loaded into one or more extra-foveal frame buffers and represented with reduced resolution. As described above, the extra-foveal buffer may be shared between the display panel for the left eye and the display panel for the right eye. A pixel value may be duplicated across, for example, 2×2 pixels, 3×3 pixels, 4×4 pixels, or 5×5 pixels; other block sizes are possible. In some embodiments, the duplicated values may be stored in a frame buffer (e.g., an extra-foveal frame buffer). In other embodiments, the output of a Display Serial Interface may be processed to duplicate pixel values, which may enable compressed storage in a memory. An end-to-end sketch of the method follows.
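  • The sketch below strings steps 702-706 together for one eye, under the same assumptions as the earlier fragments (stubbed tracker, circular foveal region, one pixel kept per 3×3 extra-foveal block); it is illustrative only, not the claimed implementation.

    #include <stdint.h>

    typedef struct { int x, y; } gaze_t;

    static gaze_t read_eye_tracker(void)            /* step 702 (stub)    */
    {
        gaze_t g = { 960, 540 };                    /* pretend gaze point */
        return g;
    }

    void method_700(uint32_t *foveal_fb, uint32_t *extra_fb,
                    const uint32_t *src, int w, int h, int foveal_r)
    {
        gaze_t g = read_eye_tracker();              /* step 702           */
        /* Step 704: classify each pixel as foveal (inside the circle of
         * radius foveal_r around the gaze point) or extra-foveal.        */
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                long dx = x - g.x, dy = y - g.y;
                if (dx * dx + dy * dy <= (long)foveal_r * foveal_r)
                    foveal_fb[y * w + x] = src[y * w + x];  /* 706: full  */
                else if (x % 3 == 0 && y % 3 == 0)  /* 706: 1 of each 3x3 */
                    extra_fb[(y / 3) * (w / 3) + (x / 3)] = src[y * w + x];
            }
        }
    }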
  • It will be appreciated that, whilst several different arrangements have been described herein, the features of each may be advantageously combined together in a variety of forms.
  • In the foregoing specification, the application has been described with reference to specific examples. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims. For example, the connections may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise the connections may for example be direct connections or indirect connections.
  • It is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In an abstract, but still definite sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermediate components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
  • Furthermore, those skilled in the art will recognize that boundaries between the functionality of the above described operations are merely illustrative. The functionality of multiple operations may be combined into a single operation, and/or the functionality of a single operation may be distributed in additional operations. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.
  • However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
  • In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word “comprising” does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (20)

  21. A head-mounted display device comprising:
    memory to store data, the data to be displayed via a first display and via a second display; and
    one or more processors to:
    determine, based on eye-tracking information of a user wearing the head mounted display, a first foveal region of the first display, a second foveal region of the second display, and extra-foveal regions of the first and second displays, the eye-tracking information representative of at least one of an eye movement or an eye position of the user;
    identify (a) a first subset of the data as corresponding to first foveal region pixels associated with the first foveal region, (b) a second subset of the data as corresponding to second foveal region pixels associated with the second foveal region, and (c) a third subset of the data as corresponding to extra-foveal region pixels associated with the extra-foveal regions;
    load the first subset of the data at full resolution from the memory to a first foveal frame buffer;
    load the second subset of the data at the full resolution from the memory to a second foveal frame buffer; and
    load the third subset of the data at reduced resolution from the memory to a shared extra-foveal frame buffer.
  22. The device of claim 21, wherein the one or more processors are to adjust at least one of the first foveal region, the second foveal region, or the extra-foveal regions based on the eye-tracking information.
  23. The device of claim 21, wherein the one or more processors are to reduce resolution of a pixel value of the reduced resolution by:
    selecting the pixel value in a group of pixels; and
    duplicating the pixel value to neighboring pixels.
  24. The device of claim 23, wherein the one or more processors are to duplicate the pixel value in the group of pixels across at least one of 2×2 pixels, 3×3 pixels, 4×4 pixels, or 5×5 pixels.
  25. The device of claim 21, wherein the one or more processors are to, responsive to the data being rendered:
    load the first foveal frame buffer at a first rate;
    load the second foveal frame buffer at a second rate; and
    load the shared extra-foveal frame buffer at a third rate.
  26. The device of claim 25, wherein the data is rendered when the first subset of the data and the second subset of the data are generated at the full resolution and the third subset of the data is generated at the reduced resolution.
  27. The device of claim 25, wherein the first rate is equivalent to the second rate, the first rate and the second rate being different than the third rate.
  28. At least one storage device or storage disk comprising instructions that, when executed, cause at least one processor to, at least:
    determine, based on eye-tracking information of a user wearing a head mounted display, a first foveal region of a first display, a second foveal region of a second display, and extra-foveal regions of the first and second displays, the eye-tracking information representative of at least one of an eye movement or an eye position of the user;
    identify (a) a first subset of data as corresponding to first foveal region pixels associated with the first foveal region, (b) a second subset of the data as corresponding to second foveal region pixels associated with the second foveal region, and (c) a third subset of the data as corresponding to extra-foveal region pixels associated with the extra-foveal regions;
    load the first subset of the data at full resolution from memory to a first foveal frame buffer;
    load the second subset of the data at the full resolution from the memory to a second foveal frame buffer; and
    load the third subset of the data at reduced resolution from the memory to a shared extra-foveal frame buffer.
  29. The at least one storage device or storage disk of claim 28, wherein the instructions, when executed, cause the at least one processor to reduce a resolution of a pixel value of the reduced resolution by:
    selecting the pixel value in a group of pixels; and
    duplicating the pixel value to neighboring pixels.
  30. The at least one storage device or storage disk of claim 28, wherein the instructions, when executed, cause the at least one processor to adjust at least one of the first foveal region, the second foveal region, or the extra-foveal regions based on the eye-tracking information.
  31. The at least one storage device or storage disk of claim 28, wherein the instructions, when executed, cause the at least one processor to duplicate a pixel value after receiving the pixel value from a Display Serial Interface in communication with the memory and at least one of the first display or the second display.
  32. The at least one storage device or storage disk of claim 28, wherein the instructions, when executed, cause the at least one processor to, responsive to the data being rendered:
    load the first foveal frame buffer at a first rate;
    load the second foveal frame buffer at a second rate; and
    load the shared extra-foveal frame buffer at a third rate.
  33. The at least one storage device or storage disk of claim 32, wherein the data is rendered when the first subset of the data and the second subset of the data are generated at the full resolution and the third subset of the data is generated at the reduced resolution.
  34. The at least one storage device or storage disk of claim 28, wherein the instructions, when executed, cause the at least one processor to apply alpha-blending (a) between depth-planes of the first foveal region pixels and depth-planes of the extra-foveal region pixels and (b) between depth-planes of the second foveal region pixels and depth-planes of the extra-foveal region pixels.
  35. A method comprising:
    determining, based on eye-tracking information of a user wearing a head mounted display, a first foveal region of a first display, a second foveal region of a second display, and extra-foveal regions of the first and second displays, the eye-tracking information representative of at least one of an eye movement or an eye position of the user;
    identifying (a) a first subset of data as corresponding to first foveal region pixels associated with the first foveal region, (b) a second subset of the data as corresponding to second foveal region pixels associated with the second foveal region, and (c) a third subset of the data as corresponding to extra-foveal region pixels associated with the extra-foveal regions;
    loading the first subset of the data at full resolution from memory to a first foveal frame buffer;
    loading the second subset of the data at the full resolution from the memory to a second foveal frame buffer; and
    loading the third subset of the data at reduced resolution from the memory to a shared extra-foveal frame buffer.
  36. The method of claim 35, further including reducing a resolution of a pixel value of the reduced resolution by:
    selecting the pixel value in a group of pixels; and
    duplicating the pixel value to neighboring pixels.
  37. The method of claim 35, further including adjusting at least one of the first foveal region, the second foveal region, or the extra-foveal regions based on the eye-tracking information.
  38. The method of claim 35, further including duplicating a pixel value after receiving the pixel value from a Display Serial Interface in communication with the memory and at least one of the first display or the second display.
  39. The method of claim 35, further including, responsive to the data being rendered:
    loading the first foveal frame buffer at a first rate;
    loading the second foveal frame buffer at a second rate; and
    loading the shared extra-foveal frame buffer at a third rate.
  40. The method of claim 35, wherein the data is rendered when the first subset of the data and the second subset of the data are generated at the full resolution and the third subset of the data is generated at the reduced resolution.
US16/584,357 2016-04-01 2019-09-26 Systems and methods for head-mounted display adapted to human visual mechanism Abandoned US20200160818A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/584,357 US20200160818A1 (en) 2016-04-01 2019-09-26 Systems and methods for head-mounted display adapted to human visual mechanism

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/088,816 US10460704B2 (en) 2016-04-01 2016-04-01 Systems and methods for head-mounted display adapted to human visual mechanism
US16/584,357 US20200160818A1 (en) 2016-04-01 2019-09-26 Systems and methods for head-mounted display adapted to human visual mechanism

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/088,816 Continuation US10460704B2 (en) 2016-04-01 2016-04-01 Systems and methods for head-mounted display adapted to human visual mechanism

Publications (1)

Publication Number Publication Date
US20200160818A1 true US20200160818A1 (en) 2020-05-21

Family

ID=58692528

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/088,816 Active US10460704B2 (en) 2016-04-01 2016-04-01 Systems and methods for head-mounted display adapted to human visual mechanism
US16/584,357 Abandoned US20200160818A1 (en) 2016-04-01 2019-09-26 Systems and methods for head-mounted display adapted to human visual mechanism

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/088,816 Active US10460704B2 (en) 2016-04-01 2016-04-01 Systems and methods for head-mounted display adapted to human visual mechanism

Country Status (6)

Country Link
US (2) US10460704B2 (en)
EP (1) EP3437317A1 (en)
JP (1) JP2019512750A (en)
KR (1) KR102140389B1 (en)
CN (1) CN110140353A (en)
WO (1) WO2017168229A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10284840B2 (en) * 2013-06-28 2019-05-07 Electronics And Telecommunications Research Institute Apparatus and method for reproducing 3D image
CN104469344B (en) * 2014-12-03 2017-03-01 北京智谷技术服务有限公司 Light field display control method and device, light field display device
US10353202B2 (en) * 2016-06-09 2019-07-16 Microsoft Technology Licensing, Llc Wrapped waveguide with large field of view
US10373592B2 (en) * 2016-08-01 2019-08-06 Facebook Technologies, Llc Adaptive parameters in image regions based on eye tracking information
US10564715B2 (en) * 2016-11-14 2020-02-18 Google Llc Dual-path foveated graphics pipeline
US10262387B2 (en) 2016-11-14 2019-04-16 Google Llc Early sub-pixel rendering
WO2018183405A1 (en) 2017-03-27 2018-10-04 Avegant Corp. Steerable foveal display
US10553010B2 (en) * 2017-04-01 2020-02-04 Intel IP Corporation Temporal data structures in a ray tracing architecture
US11164352B2 (en) * 2017-04-21 2021-11-02 Intel Corporation Low power foveated rendering to save power on GPU and/or display
GB2568261B (en) 2017-11-08 2022-01-26 Displaylink Uk Ltd System and method for presenting data at variable quality
US10395624B2 (en) 2017-11-21 2019-08-27 Nvidia Corporation Adjusting an angular sampling rate during rendering utilizing gaze information
US10580207B2 (en) * 2017-11-24 2020-03-03 Frederic Bavastro Augmented reality method and system for design
US10977859B2 (en) * 2017-11-24 2021-04-13 Frederic Bavastro Augmented reality method and system for design
GB2569176B (en) * 2017-12-08 2022-04-13 Displaylink Uk Ltd Processing visual information for display on a screen
KR102532972B1 (en) * 2017-12-29 2023-05-16 엘지디스플레이 주식회사 Compensation Method for Display and the Display comprising a memory storing compensation values
US10949947B2 (en) 2017-12-29 2021-03-16 Intel Corporation Foveated image rendering for head-mounted display devices
CN110324601A (en) * 2018-03-27 2019-10-11 京东方科技集团股份有限公司 Rendering method, computer product and display device
CN110858896B (en) * 2018-08-24 2021-06-08 东方梦幻虚拟现实科技有限公司 VR image processing method
KR20210097190A (en) 2018-12-07 2021-08-06 아브간트 코포레이션 steerable positioning elements
KR20240042166A (en) 2019-01-07 2024-04-01 아브간트 코포레이션 Control system and rendering pipeline
GB2583061B (en) * 2019-02-12 2023-03-15 Advanced Risc Mach Ltd Data processing systems
CA3134149A1 (en) 2019-03-29 2020-10-08 Avegant Corp. Steerable hybrid display using a waveguide
GB2583741B (en) * 2019-05-08 2022-03-16 Univ Newcastle Kinase screening assays
CN110166758B (en) * 2019-06-24 2021-08-13 京东方科技集团股份有限公司 Image processing method, image processing device, terminal equipment and storage medium
US11307655B2 (en) 2019-09-19 2022-04-19 Ati Technologies Ulc Multi-stream foveal display transport
US11093033B1 (en) * 2019-10-28 2021-08-17 Facebook, Inc. Identifying object of user focus with eye tracking and visually evoked potentials
CN110855972B (en) * 2019-11-21 2021-07-27 Oppo广东移动通信有限公司 Image processing method, electronic device, and storage medium
EP4062225A4 (en) 2020-01-06 2023-12-27 Avegant Corp. A head mounted system with color specific modulation
US11568783B1 (en) * 2021-08-17 2023-01-31 Varjo Technologies Oy Display drivers, apparatuses and methods for improving image quality in foveated images
US20230065296A1 (en) * 2021-08-30 2023-03-02 Facebook Technologies, Llc Eye-tracking using embedded electrodes in a wearable device
CN115032797B (en) * 2022-06-30 2023-12-08 恒玄科技(上海)股份有限公司 Display method for wireless intelligent glasses and wireless intelligent glasses

Family Cites Families (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB710876A (en) 1951-04-27 1954-06-23 Chamberlain & Hookham Ltd Protective apparatus for electricity distributing systems
US3553651A (en) 1967-12-06 1971-01-05 Singer General Precision Memory storage system
US3919534A (en) 1974-05-17 1975-11-11 Control Data Corp Data processing system
US4281312A (en) 1975-11-04 1981-07-28 Massachusetts Institute Of Technology System to effect digital encoding of an image
GB1488538A (en) 1975-11-28 1977-10-12 Ibm Compressed refresh buffer
JPS6015771A (en) 1983-07-08 1985-01-26 Hitachi Ltd Memory controller
CA1236584A (en) 1984-12-03 1988-05-10 William E. Hall Parallel processing system
US5081573A (en) 1984-12-03 1992-01-14 Floating Point Systems, Inc. Parallel processing system
US5226171A (en) 1984-12-03 1993-07-06 Cray Research, Inc. Parallel vector processing system for individual and broadcast distribution of operands and control information
US4850027A (en) 1985-07-26 1989-07-18 International Business Machines Corporation Configurable parallel pipeline image processing system
US5021945A (en) 1985-10-31 1991-06-04 Mcc Development, Ltd. Parallel processor system for processing natural concurrencies and method therefor
JPH0731669B2 (en) 1986-04-04 1995-04-10 株式会社日立製作所 Vector processor
GB2190560B (en) 1986-05-08 1990-06-20 Gen Electric Plc Data compression
JPH02290626A (en) 1989-04-27 1990-11-30 Nhk Spring Co Ltd Method and device for manufacturing metallic bellows
US5434623A (en) 1991-12-20 1995-07-18 Ampex Corporation Method and apparatus for image data compression using combined luminance/chrominance coding
US5262973A (en) 1992-03-13 1993-11-16 Sun Microsystems, Inc. Method and apparatus for optimizing complex arithmetic units for trivial operands
JPH05297853A (en) * 1992-04-16 1993-11-12 Hitachi Ltd Display controller
US5861873A (en) 1992-06-29 1999-01-19 Elonex I.P. Holdings, Ltd. Modular portable computer with removable pointer device
US5586300A (en) 1994-07-20 1996-12-17 Emc Corporation Flexible addressing memory controller wherein multiple memory modules may be accessed according to comparison of configuration addresses
FI97096C (en) 1994-09-13 1996-10-10 Nokia Mobile Phones Ltd A video
JP3727961B2 (en) * 1994-10-28 2005-12-21 キヤノン株式会社 Head-mounted display device
GB2311882B (en) 1996-04-04 2000-08-09 Videologic Ltd A data processing management system
JPH1091441A (en) 1996-09-13 1998-04-10 Sanyo Electric Co Ltd Program execution method and device using the method
US5963642A (en) 1996-12-30 1999-10-05 Goldstein; Benjamin D. Method and apparatus for secure storage of data
US6252989B1 (en) 1997-01-07 2001-06-26 Board Of The Regents, The University Of Texas System Foveated image coding system and method for image bandwidth reduction
US6009511A (en) 1997-06-11 1999-12-28 Advanced Micro Devices, Inc. Apparatus and method for tagging floating point operands and results for rapid detection of special floating point numbers
JPH1185512A (en) 1997-09-03 1999-03-30 Fujitsu Ltd Data processor having instruction compression storage and instruction restoration function
US6173389B1 (en) 1997-12-04 2001-01-09 Billions Of Operations Per Second, Inc. Methods and apparatus for dynamic very long instruction word sub-instruction selection for execution time parallelism in an indirect very long instruction word processor
US6366999B1 (en) 1998-01-28 2002-04-02 Bops, Inc. Methods and apparatus to support conditional execution in a VLIW-based array processor with subword execution
US6717578B1 (en) 1998-02-17 2004-04-06 Sun Microsystems, Inc. Graphics system with a variable-resolution sample buffer
WO2000004484A2 (en) 1998-07-17 2000-01-27 Intergraph Corporation Wide instruction word graphics processor
WO2000013136A1 (en) 1998-08-31 2000-03-09 Digital Video Express, L.P. Watermarking system and methodology for digital multimedia content
US6839728B2 (en) 1998-10-09 2005-01-04 Pts Corporation Efficient complex multiplication and fast fourier transform (FFT) implementation on the manarray architecture
EP1190571A1 (en) 1999-04-08 2002-03-27 New York University Extremely high resolution foveated display
US20080007562A1 (en) 1999-04-09 2008-01-10 Dave Stuttard Parallel data processing apparatus
GB2348971B (en) 1999-04-09 2004-03-03 Pixelfusion Ltd Parallel data processing systems
US6535644B1 (en) * 1999-07-01 2003-03-18 Koninklijke Philips Electronics N.V. Hierarchical foveation based on wavelets
US6591019B1 (en) 1999-12-07 2003-07-08 Nintendo Co., Ltd. 3D transformation matrix compression and decompression
JP3262772B2 (en) 1999-12-17 2002-03-04 株式会社ナムコ Image generation system and information storage medium
US6859870B1 (en) 2000-03-07 2005-02-22 University Of Washington Method and apparatus for compressing VLIW instruction and sharing subinstructions
US6779066B2 (en) 2000-05-01 2004-08-17 Matsushita Electric Industrial Co., Ltd. Module having application-specific program stored therein
GB2362055A (en) 2000-05-03 2001-11-07 Clearstream Tech Ltd Image compression using a codebook
AU5235501A (en) 2000-05-03 2001-11-12 Clearstream Technologies Limited Video data transmission
GB2362733B (en) 2000-05-25 2002-02-27 Siroyan Ltd Processors having compressed instructions.
CA2357236C (en) 2000-10-17 2011-09-06 Spx Development Corporation Plug-in module for portable computing device
JP4046969B2 (en) 2000-11-09 2008-02-13 キヤノン株式会社 Image processing apparatus, method thereof, program, and storage medium
US7305092B2 (en) 2000-12-19 2007-12-04 Qualcomm Incorporated Method and system to accelerate cryptographic functions for secure e-commerce applications
EP1241892A1 (en) 2001-03-06 2002-09-18 Siemens Aktiengesellschaft Hardware accelerator for video signal processing system
US7395297B2 (en) 2001-05-25 2008-07-01 Sun Microsystems, Inc. Floating point system that represents status flag information within a floating point operand
US20030005261A1 (en) 2001-06-29 2003-01-02 Gad Sheaffer Method and apparatus for attaching accelerator hardware containing internal state to a processing core
US20030149822A1 (en) 2002-02-01 2003-08-07 Bryan Scott Method for integrating an intelligent docking station with a handheld personal computer
KR100464406B1 (en) 2002-02-08 2005-01-03 삼성전자주식회사 Apparatus and method for dispatching very long instruction word with variable length
US7088777B2 (en) 2002-11-22 2006-08-08 Microsoft Corp. System and method for low bit rate watercolor video
US7038687B2 (en) 2003-06-30 2006-05-02 Intel Corporation System and method for high-speed communications between an application processor and coprocessor
US20080074431A1 (en) 2003-11-19 2008-03-27 Reuven Bakalash Computing system capable of parallelizing the operation of multiple graphics processing units (GPUS) supported on external graphics cards
US7050068B1 (en) * 2003-12-02 2006-05-23 Nvidia Corporation Generation of jittered sub-pixel samples using programmable sub-pixel offsets
US8028164B2 (en) 2004-03-19 2011-09-27 Nokia Corporation Practical and secure storage encryption
JP2005321479A (en) * 2004-05-06 2005-11-17 Olympus Corp Head mounted type display device
EP1810249A4 (en) * 2004-10-15 2014-09-10 Seadragon Software Inc System and method for managing communication and/or storage of image data
US20070291571A1 (en) 2006-06-08 2007-12-20 Intel Corporation Increasing the battery life of a mobile computing system in a reduced power state through memory compression
KR100828128B1 (en) 2006-07-20 2008-05-09 에이디반도체(주) Method and apparatus for detecting capacitance using time division multi-frequency
JP2008085388A (en) 2006-09-25 2008-04-10 Fujifilm Corp Imaging apparatus
US8094965B2 (en) * 2006-12-19 2012-01-10 California Institute Of Technology Image processor
GB0700877D0 (en) 2007-01-17 2007-02-21 Linear Algebra Technologies Lt A device
GB2447494A (en) 2007-03-15 2008-09-17 Linear Algebra Technologies Lt A method and circuit for compressing data using a bitmap to identify the location of data values
GB2447428A (en) 2007-03-15 2008-09-17 Linear Algebra Technologies Lt Processor having a trivial operand register
US7755671B2 (en) 2007-04-23 2010-07-13 Hewlett-Packard Development Company, L.P. Correcting a captured image in digital imaging devices
JP2008277926A (en) 2007-04-25 2008-11-13 Kyocera Corp Image data processing method and imaging device using same
US7884823B2 (en) 2007-06-12 2011-02-08 Microsoft Corporation Three dimensional rendering of display information using viewer eye coordinates
US7973834B2 (en) 2007-09-24 2011-07-05 Jianwen Yang Electro-optical foveated imaging and tracking system
GB2457303A (en) 2008-02-11 2009-08-12 Linear Algebra Technologies Randomly accessing elements of compressed matrix data by calculating offsets from non-zero values of a bitmap
US7502918B1 (en) 2008-03-28 2009-03-10 International Business Machines Corporation Method and system for data dependent performance increment and power reduction
US20100149073A1 (en) 2008-11-02 2010-06-17 David Chaum Near to Eye Display System and Appliance
JP5079589B2 (en) * 2008-04-30 2012-11-21 パナソニック株式会社 Display control apparatus and display control method
US8200594B1 (en) 2008-09-10 2012-06-12 Nvidia Corporation System, method, and computer program product for accelerating a game artificial intelligence process
WO2010062479A1 (en) 2008-11-02 2010-06-03 David Chaum System and apparatus for eyeglass appliance platform
US8130292B2 (en) 2008-12-31 2012-03-06 Aptina Imaging Corporation Scene illumination adaptive lens shading correction for imaging devices
FR2944662B1 (en) * 2009-04-20 2011-06-03 Stmicroelectronics Wireless Sas VIDEO TRANSMISSION ON A SERIAL INTERFACE
JP5169997B2 (en) 2009-05-29 2013-03-27 ソニー株式会社 Filter circuit, image processing apparatus, imaging apparatus, image processing method, and program
JP2011124954A (en) 2009-12-14 2011-06-23 Canon Inc Image processing method and image processing apparatus
GB2476800A (en) 2010-01-07 2011-07-13 Linear Algebra Technologies Ltd Sparse matrix vector multiplier using a bit map of non-zero elements to control scheduling of arithmetic operations
US8493390B2 (en) * 2010-12-08 2013-07-23 Sony Computer Entertainment America, Inc. Adaptive displays using gaze tracking
US9690099B2 (en) * 2010-12-17 2017-06-27 Microsoft Technology Licensing, Llc Optimized focal area for augmented reality displays
US8464190B2 (en) 2011-02-17 2013-06-11 Maxeler Technologies Ltd. Method of, and apparatus for, stream scheduling in parallel pipelined hardware
US8711268B2 (en) 2011-05-17 2014-04-29 Samsung Electronics Co., Ltd. Methods and apparatuses for anti-shading correction with extended color correlated temperature dependency
JP2012256202A (en) 2011-06-09 2012-12-27 Sony Corp Image processing apparatus and method, and program
US8184069B1 (en) 2011-06-20 2012-05-22 Google Inc. Systems and methods for adaptive transmission of data
US9030583B2 (en) 2011-09-21 2015-05-12 Semiconductor Components Industries, Llc Imaging system with foveated imaging capabilites
US8914262B2 (en) 2011-11-08 2014-12-16 The Mathworks, Inc. Visualization of data dependency in graphical models
EP2786196A4 (en) * 2011-12-02 2015-11-11 Jerry G Aguren Wide field-of-view 3d stereo vision platform with dynamic control of immersive or heads-up display operation
US9080916B2 (en) 2012-08-30 2015-07-14 Apple Inc. Correction factor for color response calibration
US9300846B2 (en) 2012-09-10 2016-03-29 Apple Inc. Signal shaping for improved mobile video communication
US20140146394A1 (en) * 2012-11-28 2014-05-29 Nigel David Tout Peripheral display for a near-eye display device
US10514541B2 (en) * 2012-12-27 2019-12-24 Microsoft Technology Licensing, Llc Display update time reduction for a near-eye display
US9727991B2 (en) * 2013-03-01 2017-08-08 Microsoft Technology Licensing, Llc Foveated image rendering
US9934043B2 (en) 2013-08-08 2018-04-03 Linear Algebra Technologies Limited Apparatus, systems, and methods for providing computational imaging pipeline
US9196017B2 (en) 2013-11-15 2015-11-24 Linear Algebra Technologies Limited Apparatus, systems, and methods for removing noise from an image
US9270872B2 (en) 2013-11-26 2016-02-23 Linear Algebra Technologies Limited Apparatus, systems, and methods for removing shading effect from image
US9905046B2 (en) 2014-04-03 2018-02-27 Intel Corporation Mapping multi-rate shading to monolithic programs
JP2015222470A (en) * 2014-05-22 2015-12-10 ソニー株式会社 Video image display device, information processing device, and video image display system
WO2016094928A1 (en) 2014-12-18 2016-06-23 Halgo Pty Limited Replicating effects of optical lenses
WO2016102355A1 (en) 2014-12-22 2016-06-30 Thomson Licensing Apparatus and method for generating an extrapolated image using a recursive hierarchical process
EP3238213B1 (en) 2014-12-22 2023-06-21 InterDigital CE Patent Holdings Method and apparatus for generating an extrapolated image based on object detection
US10152764B2 (en) 2015-03-24 2018-12-11 Intel Corporation Hardware based free lists for multi-rate shader
US11010956B2 (en) 2015-12-09 2021-05-18 Imagination Technologies Limited Foveated rendering
US10109039B1 (en) 2017-04-24 2018-10-23 Intel Corporation Display engine surface blending and adaptive texel to pixel ratio sample rate system, apparatus and method
US10949947B2 (en) 2017-12-29 2021-03-16 Intel Corporation Foveated image rendering for head-mounted display devices

Also Published As

Publication number Publication date
KR20180131594A (en) 2018-12-10
WO2017168229A1 (en) 2017-10-05
KR102140389B1 (en) 2020-08-03
EP3437317A1 (en) 2019-02-06
JP2019512750A (en) 2019-05-16
WO2017168229A8 (en) 2018-11-22
CN110140353A (en) 2019-08-16
US10460704B2 (en) 2019-10-29
US20170287447A1 (en) 2017-10-05

Similar Documents

Publication Publication Date Title
US20200160818A1 (en) Systems and methods for head-mounted display adapted to human visual mechanism
JP7397777B2 (en) Virtual reality, augmented reality, and mixed reality systems and methods
US10338677B2 (en) Adjusting image frames based on tracking motion of eyes
CN105992965B (en) In response to the stereoscopic display of focus shift
US10082867B2 (en) Display control method and display control apparatus
EP3485350B1 (en) Foveated rendering
WO2019089094A1 (en) Multi-perspective eye-tracking for vr/ar systems
US10699673B2 (en) Apparatus, systems, and methods for local dimming in brightness-controlled environments
CN112470464B (en) In-field subcode timing in a field sequential display
US20200292825A1 (en) Attention direction on optical passthrough displays
US10209523B1 (en) Apparatus, system, and method for blur reduction for head-mounted displays
TW201913622A (en) Variable DPI across a display and control thereof
US11823343B1 (en) Method and device for modifying content according to various simulation characteristics
CN110010082B (en) Apparatus, system, and method for preventing display flicker
US11769465B1 (en) Identifying regions of visible media data that belong to a trigger content type
US11748956B2 (en) Device and method for foveated rendering
US20230282139A1 (en) Head-mountable display (hmd) image and foveal region brightness computation
CN117334145A (en) Display brightness adjusting method, device and equipment of VR equipment and storage medium
Swafford Visual Perception in Simulated Reality
GB2563832A (en) Display method and apparatus

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MOVIDIUS LIMITED, NETHERLANDS

Free format text: MERGER;ASSIGNOR:LINEAR ALGEBRA TECHNOLOGIES LIMITED;REEL/FRAME:061546/0001

Effective date: 20181207