WO2018136251A1 - Focal surface display - Google Patents

Focal surface display

Info

Publication number
WO2018136251A1
Authority
WO
WIPO (PCT)
Prior art keywords
focal surface
scene
focal
virtual scene
shape
Prior art date
Application number
PCT/US2018/012777
Other languages
French (fr)
Inventor
Alexander Jobe Fix
Nathan Seigo Matsuda
Douglas Robert Lanman
Original Assignee
Oculus Vr, Llc
Priority date
Filing date
Publication date
Application filed by Oculus Vr, Llc
Priority to CN201880018483.4A (CN110431470B)
Priority to CN202210220513.8A (CN114594603A)
Priority to EP18151916.6A (EP3351999A1)
Publication of WO2018136251A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B27/0172 Head mounted characterised by optical features
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21 LIGHTING
    • F21V FUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
    • F21V19/00 Fastening of light sources or lamp holders
    • F21V19/02 Fastening of light sources or lamp holders with provision for adjustment, e.g. for focusing
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B26/00 Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B26/06 Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the phase of light
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/28 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for polarising
    • G PHYSICS
    • G02 OPTICS
    • G02C SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C7/00 Optical parts
    • G02C7/02 Lenses; Lens systems; Methods of designing lenses
    • G02C7/022 Ophthalmic lenses having special refractive features achieved by special materials or material structures
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/011 Head-up displays characterised by optical features comprising device for correcting geometrical aberrations, distortion
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0145 Head-up displays characterised by optical features creating an intermediate image
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0185 Displaying image at variable distance
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00 Simple or compound lenses
    • G02B3/02 Simple or compound lenses with non-spherical faces
    • G02B3/08 Simple or compound lenses with non-spherical faces with discontinuous faces, e.g. Fresnel lens

Definitions

  • In the system environment of FIG. 2, HMD 200 is a head-mounted display that presents content to a user.
  • Example content includes images, video, audio, or some combination thereof. Audio content may be presented via a separate device (e.g., speakers and/or headphones) external to HMD 200 that receives audio information from HMD 200, console 250, or both.
  • HMD 200 includes electronic display 202, optics block 204, spatial light modulator (SLM) block 206, one or more locators 208, inertial measurement unit (IMU) 210, head tracking sensors 212, and scene rendering module 214.
  • Optics block 204 directs light from display 202 via SLM block 206 to an exit pupil of HMD 200 for viewing by a user using one or more optical elements, such as apertures, Fresnel lenses, convex lenses, concave lenses, filters, and so forth, and may include combinations of different optical elements.
  • one or more optical elements in optics block 204 may have one or more coatings, such as anti-reflective coatings. Magnification of the image light by optics block 204 allows electronic display 202 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification of the image light may increase a field of view of the displayed content. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., 150 degrees diagonal), and in some cases all, of the user's field of view.
  • Spatial Light Modulator (SLM) block 206 includes one or more drivers to control electronic display 202 and an SLM to generate and display images of the virtual scene with dynamic spatiotemporal focal surfaces.
  • SLM block 206, provided in optical series with optics block 204, operates in phase-only mode and, for a given frame, may generate multiple phase functions each corresponding to a focal pattern of a different range of depths within the virtual scene.
  • SLM block 206 could be integrated into optics block 204.
  • Adjusting the focal pattern for each focal surface varies the focal length (or optical power) of HMD 200 to keep a user's eyes in a zone of comfort while viewing content of the virtual scene.
  • The position of SLM block 206 within the optical system of HMD 200 is subject to design rules that limit certain performance parameters for a given configuration because of the limited resolution of the SLM.
  • For example, the focal range of HMD 200 is limited by the location of SLM block 206 relative to electronic display 202 and optics block 204: a larger focal range is achieved as the SLM is positioned closer to optics block 204 rather than display 202, and the focal range is reduced as the SLM is positioned closer to display 202.
  • A larger field of view is also achievable as the SLM is positioned closer to optics block 204 rather than display 202 and, thus, as the SLM is positioned closer to display 202, the field of view is limited.
  • However, as the SLM is positioned closer to optics block 204, edge boundary sharpness in the virtual scene is degraded; the closer the SLM is positioned to display 202, the sharper the edge boundaries. Accordingly, there are design tradeoffs and a balance to be sought between edge sharpness versus focal range and field of view.
  • Locators 208 are objects located in specific positions on HMD 200 relative to one another and relative to a specific reference point on HMD 200.
  • Locator 208 may be a light emitting diode (LED), a corner cube reflector, a reflective marker, a type of light source that contrasts with an environment in which HMD 200 operates, or some combination thereof.
  • Active locators 208 (i.e., an LED or other type of light emitting device) emit light that is detectable by imaging device 260.
  • Locators 208 can be located beneath an outer surface of HMD 200, which is transparent to the wavelengths of light emitted or reflected by locators 208 or is thin enough not to substantially attenuate the wavelengths of light emitted or reflected by locators 208. Further, the outer surface or other portions of HMD 200 can be opaque in the visible band of wavelengths of light. Thus, locators 208 may emit light in the IR band while under an outer surface of HMD 200 that is transparent in the IR band but opaque in the visible band.
  • IMU 210 is an electronic device that generates fast calibration data based on measurement signals received from one or more of head tracking sensors 212, which generate one or more measurement signals in response to motion of HMD 200.
  • head tracking sensors 212 include accelerometers, gyroscopes, magnetometers, other sensors suitable for detecting motion, correcting error associated with IMU 210, or some combination thereof. Head tracking sensors 212 may be located external to IMU 210, internal to IMU 210, or some combination thereof.
  • Based on the measurement signals from head tracking sensors 212, IMU 210 generates fast calibration data indicating an estimated position of HMD 200 relative to an initial position of HMD 200.
  • head tracking sensors 212 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, and roll).
  • IMU 210 can, for example, rapidly sample the measurement signals and calculate the estimated position of HMD 200 from the sampled data.
  • IMU 210 integrates measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated position of a reference point on HMD 200.
  • the reference point is a point that may be used to describe the position of HMD 200. While the reference point may generally be defined as a point in space, in various embodiments, reference point is defined as a point within HMD 200 (e.g., a center of the IMU 210). Alternatively, IMU 210 provides the sampled measurement signals to console 250, which determines the fast calibration data.
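  • As an illustration of the integration described above, the sketch below (a minimal Python example with an assumed sample rate and ideal, gravity-compensated data, not the IMU 210 implementation) dead-reckons a reference-point position by integrating acceleration into velocity and velocity into position; the drift error discussed next is why such estimates need periodic recalibration.

```python
import numpy as np

def integrate_imu(accel, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
    """Dead-reckon velocity and position from accelerometer samples.

    accel: (N, 3) linear acceleration in a world frame, gravity removed (m/s^2)
    dt:    sample interval in seconds
    Returns (velocity, position), each of shape (N, 3).
    """
    velocity = np.cumsum(accel * dt, axis=0) + np.asarray(v0)
    position = np.cumsum(velocity * dt, axis=0) + np.asarray(p0)
    return velocity, position

# Illustrative example: 1 kHz sampling, constant 0.1 m/s^2 acceleration along x for 0.5 s.
samples = np.tile([0.1, 0.0, 0.0], (500, 1))
vel, pos = integrate_imu(samples, dt=1e-3)
print(vel[-1], pos[-1])   # ~[0.05 0 0] m/s and ~[0.0125 0 0] m in the ideal, noise-free case
```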
  • IMU 210 can additionally receive one or more calibration parameters from console 250. As further discussed below, the one or more calibration parameters are used to maintain tracking of HMD 200. Based on a received calibration parameter, IMU 210 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, certain calibration parameters cause IMU 210 to update an initial position of the reference point to correspond to a next calibrated position of the reference point. Updating the initial position of the reference point as the next calibrated position of the reference point helps reduce accumulated error associated with determining the estimated position. The accumulated error, also referred to as drift error, causes the estimated position of the reference point to "drift" away from the actual position of the reference point over time.
  • Scene rendering module 214 receives content for the virtual scene from engine 256 and provides the content for display on electronic display 202.
  • scene rendering module 214 determines a portion of the content to be displayed on electronic display 202 based on one or more of tracking module 254, head tracking sensors 212, or IMU 210, as described further below.
  • Imaging device 260 generates slow calibration data in accordance with calibration parameters received from console 250.
  • Slow calibration data includes one or more images showing observed positions of locators 208 that are detectable by imaging device 260.
  • Imaging device 260 may include one or more cameras, one or more video cameras, other devices capable of capturing images including one or more locators 208, or some combination thereof. Additionally, imaging device 260 may include one or more filters (e.g., for increasing signal to noise ratio). Imaging device 260 is configured to detect light emitted or reflected from locators 208 in a field of view of imaging device 260.
  • imaging device 260 may include a light source that illuminates some or all of locators 208, which retro-reflect the light towards the light source in imaging device 260.
  • Slow calibration data is communicated from imaging device 260 to console 250, and imaging device 260 receives one or more calibration parameters from console 250 to adjust one or more imaging parameters (e.g., focal length, focus, frame rate, ISO, sensor temperature, shutter speed, aperture, etc.).
  • Input interface 270 is a device that allows a user to send action requests to console 250.
  • An action request is a request to perform a particular action.
  • an action request may be to start or end an application or to perform a particular action within the application.
  • Input interface 270 may include one or more input devices.
  • Example input devices include a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the received action requests to console 250.
  • An action request received by input interface 270 is communicated to console 250, which performs an action corresponding to the action request.
  • input interface 270 may provide haptic feedback to the user in accordance with instructions received from console 250. For example, haptic feedback is provided by input interface 270 when an action request is received, or console 250 communicates instructions to input interface 270 causing input interface 270 to generate haptic feedback when console 250 performs an action.
  • Console 250 provides content to HMD 200 for presentation to the user in accordance with information received from imaging device 260, HMD 200, or input interface 270.
  • console 250 includes application store 252, tracking module 254, and virtual reality (VR) engine 256.
  • Some embodiments of console 250 have different or additional modules than those described in conjunction with FIG. 2.
  • the functions further described below may be distributed among components of console 250 in a different manner than is described here.
  • Application store 252 stores one or more applications for execution by console 250.
  • An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of HMD 200 or input interface 270. Examples of applications include gaming applications, conferencing applications, video playback applications, or other suitable applications.
  • Tracking module 254 calibrates the system using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determining position of HMD 200. For example, tracking module 254 adjusts the focus of imaging device 260 to obtain a more accurate position for observed locators 208 on HMD 200. Moreover, calibration performed by tracking module 254 also accounts for information received from IMU 210. Additionally, if tracking of HMD 200 is lost (e.g., imaging device 260 loses line of sight of at least a threshold number of locators 208), tracking module 254 re-calibrates some or all of the system components.
  • tracking module 254 tracks the movement of HMD 200 using slow calibration information from imaging device 260 and determines positions of a reference point on HMD 200 using observed locators from the slow calibration information and a model of HMD 200. Tracking module 254 also determines positions of the reference point on HMD 200 using position information from the fast calibration information from IMU 210 on HMD 200. Additionally, tracking module 254 may use portions of the fast calibration information, the slow calibration information, or some combination thereof, to predict a future location of HMD 200, which is provided to engine 256.
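  • The disclosure does not specify the prediction model; as one illustrative possibility, a constant-acceleration extrapolation over roughly one frame could look like the sketch below (the function name and the 16 ms look-ahead are assumptions).

```python
import numpy as np

def predict_position(position, velocity, acceleration, lookahead_s):
    """Extrapolate an HMD reference-point position a short time into the future
    using a constant-acceleration model: p(t + dt) = p + v*dt + 0.5*a*dt^2."""
    dt = lookahead_s
    return (np.asarray(position)
            + np.asarray(velocity) * dt
            + 0.5 * np.asarray(acceleration) * dt ** 2)

# Predict ~16 ms ahead (about one 60 Hz frame).
print(predict_position([0.0, 1.6, 0.0], [0.2, 0.0, 0.0], [0.0, 0.0, 0.0], 0.016))
```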
  • Engine 256 executes applications within the system and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof for HMD 200 from tracking module 254. Based on the received information, engine 256 determines content to provide to HMD 200 for presentation to the user, such as a virtual scene. For example, if the received information indicates that the user has looked to the left, engine 256 generates content for HMD 200 that mirrors or tracks the user's movement in a virtual environment. Additionally, engine 256 performs an action within an application executing on console 250 in response to an action request received from the input interface 270 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via HMD 200 or haptic feedback via input interface 270.
  • FIG. 3 is a diagram of HMD 200, in accordance with at least one embodiment.
  • HMD 200 includes a front rigid body and a band that goes around a user's head.
  • the front rigid body includes one or more electronic display elements corresponding to electronic display 202, IMU 210, head tracking sensors 212, and locators 208.
  • head tracking sensors 212 are located within IMU 210.
  • Locators 208 are located in fixed positions on the front rigid body relative to one another and relative to reference point 300.
  • reference point 300 is located at the center of IMU 210.
  • Each of locators 208 emits light that is detectable by imaging device 260.
  • Locators 208, or portions of locators 208, are located on a front side, a top side, a bottom side, a right side, and a left side of the front rigid body, as shown in FIG. 3.
  • FIG. 4 shows an example process 400 for mitigating vergence-accommodation conflict, in accordance with at least one embodiment.
  • a head mounted display can provide focus accommodation and depth of field blur using a SLM.
  • virtual scene data for displaying a virtual scene is obtained 402.
  • each pixel of the electronic display is associated with an individual depth value, such as infinity for sky displayed in the virtual scene, one meter for an object on a table, a varying distance between half a meter and one meter for the surface of the table, 3 meters for a far wall in the virtual scene, and so forth.
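  • Because the segmentation below works with depth in diopters (reciprocal meters), a per-pixel depth map like the example above can be converted so that optical infinity maps to 0 D. A minimal sketch using the illustrative values from the example above:

```python
import numpy as np

# Per-pixel depths in meters for a toy 2x3 patch: sky, sky, far wall / table edge, table, object.
depth_m = np.array([[np.inf, np.inf, 3.0],
                    [0.5,    0.75,   1.0]])

# Diopters are reciprocal meters; optical infinity becomes 0 D.
depth_diopters = np.where(np.isinf(depth_m), 0.0, 1.0 / depth_m)
print(depth_diopters)   # [[0. 0. 0.333] [2. 1.333 1.]]
```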
  • the virtual scene is segmented 404 into a set of focal surfaces.
  • the range of depth values of the virtual scene is approximated to a set of one or more discrete depth values based on the scene geometry data. For example, given a target virtual scene, let d(θ_x, θ_y) be the depth (in diopters) along each viewing angle (θ_x, θ_y) ∈ Θ for chief rays passing through the center of a viewing user's pupil, with Θ being the discrete set of retinal image samples.
  • a depth map of the virtual scene is segmented 404 into k smooth focal surfaces d_1, . . . , d_k.
  • For example, if for every viewing angle (θ_x, θ_y) there is at least one focal surface d_i(θ_x, θ_y) close to the target depth map d(θ_x, θ_y), then every scene element can be depicted with near-correct retinal blur, as light from display 102 will appear to originate from the correct scene depth.
  • Optimized blending methods still benefit the rendition of occluding, semi-transparent, and reflective objects. Given this goal, the following optimization problem has been formulated.
  • generating a focal surface using a phase function φ may introduce optical aberrations. Observationally, aberrations are minimized if the second derivatives of the focal surface are small. This observation is reflected by the bound constraints in the above optimization problem. Note, however, that no explicit bound constraints are imposed on the optical powers d_i of the focal surfaces. This would appear to contradict the derivation of the minimum realizable focal length of SLM 104. Rather than adding these constraints directly, the target depth map d has been truncated to a realizable range of depths.
  • the shape of the focal plane is modified 406 to minimize the distance of the focal plane to each scene point in the cluster. This warps the shape of the focal plane to where it makes sense to start referring to the focal plane as a focal surface since the focal plane is bent, warped, and/or modified to conform to a set of scene points, components, or features nearest to the focal plane in the virtual scene.
  • the resulting shape of the focal surface is a continuous piecewise smooth three-dimensional curve, unlike multifocal displays with planar surfaces located at fixed focal depths.
  • Equations 1 and 2 (not reproduced in this excerpt) formalize this segmentation: Equation 1 states the decomposition problem with its bound constraints, and Equation 2 restates those bound constraints as soft constraints weighted by a conditioning parameter that is tuned for a given application.
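  • As an illustrative sketch of this fitting step (not a reproduction of Equations 1-2), the snippet below fits one smooth focal surface to the scene points assigned to it by combining a data term with a quadratic second-derivative (Laplacian) penalty, i.e. a soft version of the bound constraints, weighted by a tunable smoothness parameter:

```python
import numpy as np
from scipy.sparse import diags, identity, kron, vstack
from scipy.sparse.linalg import lsqr

def fit_focal_surface(target_diopters, mask, smoothness=2.0):
    """Fit one smooth focal surface d_i over an H x W grid of viewing angles.

    target_diopters: (H, W) target depth map in diopters
    mask:            (H, W) boolean, True where this surface's assigned scene points lie
    smoothness:      weight of the second-derivative (Laplacian) soft constraint
    """
    H, W = target_diopters.shape
    n = H * W

    # Data term: the surface should match the target depth at its assigned pixels.
    w = mask.ravel().astype(float)
    A_data = diags(w)
    b_data = w * target_diopters.ravel()

    # Smoothness term: discrete second derivatives of the surface should be small.
    def second_diff(k):
        return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(k, k))
    laplacian = kron(second_diff(H), identity(W)) + kron(identity(H), second_diff(W))

    A = vstack([A_data, smoothness * laplacian])
    b = np.concatenate([b_data, np.zeros(n)])
    return lsqr(A, b)[0].reshape(H, W)

# Toy example: a near object at 2 D occupies the left half; the rest of the scene sits at 1 D.
target = np.full((32, 32), 1.0)
target[:, :16] = 2.0
near_surface = fit_focal_surface(target, target > 1.5)
print(near_surface.min(), near_surface.max())   # both close to 2 D: the surface hugs its points
```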
  • a phase function is generated 408 for each focal surface.
  • the next step includes solving for a set of phase functions φ_1, . . . , φ_k to generate each focal surface.
  • the optical properties of a phase SLM must be understood. Variations in optical path length through a lens cause refraction. Similarly, differences in phase modulation across an SLM result in diffraction. Simulation of light propagation through a high-resolution SLM, via wave optics modeling, is currently computationally infeasible, but one can approximate these diffractive effects using geometric optics.
  • Under this geometric-optics approximation, a ray passing through the SLM is deflected in proportion to the local gradient of the displayed phase, scaled by λ/(2π), where λ is the illumination wavelength.
  • When the displayed phase varies linearly across the SLM, the SLM operates as a prism, adding a constant offset to the direction of every ray.
  • An SLM may also act as a thin lens by presenting a quadratically varying phase (Equation 5, not reproduced in this excerpt); Equation 6 is the Hessian of Equation 5.
  • In principle, the displayed phase can be any real-valued function. In practice, an SLM will have a bounded range, typically from [0, 2π]; phases outside this range are "wrapped", modulo 2π. In addition, achievable phase functions are restricted by the Nyquist limit: the phase can change by no more than 2π over a distance of 2δ_p, where δ_p is the SLM pixel pitch.
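  • A small sketch of these two hardware constraints, wrapping a real-valued phase into [0, 2π) and checking the Nyquist limit on a phase sampled at the SLM pixel centers (function names are illustrative):

```python
import numpy as np

def wrap_phase(phi):
    """Fold an unbounded, real-valued phase into the SLM's [0, 2*pi) range."""
    return np.mod(phi, 2.0 * np.pi)

def within_nyquist(phi):
    """True if the (unwrapped) phase changes by at most pi between adjacent pixels,
    i.e. by no more than 2*pi over a distance of two pixel pitches."""
    return all(np.max(np.abs(np.diff(phi, axis=a))) <= np.pi for a in (0, 1))

phi = np.fromfunction(lambda y, x: 0.5 * (x + y), (16, 16))  # gentle, prism-like phase ramp
print(within_nyquist(phi))       # True: at most 0.5 rad change per pixel
print(wrap_phase(phi)[:2, :4])   # same phase folded into [0, 2*pi)
```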
  • a phase function φ to best realize a given target focal surface d_i is determined.
  • By casting a viewing ray (θ_x, θ_y) from the viewer's pupil to SLM 104, and then by applying Equation 7, a target focal length f_p can be assigned to each SLM pixel (p_x, p_y) to create a virtual image 110 at the desired focal surface depth. To realize this focal length, Equation 6 requires a phase function φ whose Hessian corresponds to f_p at that pixel.
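  • For the special case of a uniform focal length, the lens phase can be written directly from the standard paraxial thin-lens formula φ(x, y) = −π(x² + y²)/(λ f); a focal surface display would instead prescribe a per-pixel focal length f_p, i.e. a spatially varying local curvature. The sketch below uses assumed SLM parameters and is not a reproduction of the patent's Equations 5-7; for the numbers shown, the per-pixel phase step happens to stay just under the π limit noted above.

```python
import numpy as np

def thin_lens_phase(shape, pixel_pitch, focal_length, wavelength=550e-9):
    """Unwrapped quadratic phase of an ideal thin lens: phi = -pi*(x^2 + y^2)/(lambda*f).

    shape:        (rows, cols) of the SLM
    pixel_pitch:  SLM pixel pitch in meters
    focal_length: focal length in meters (uniform here; a focal surface display
                  would assign a per-pixel f_p instead)
    """
    rows, cols = shape
    y = (np.arange(rows) - rows / 2.0) * pixel_pitch
    x = (np.arange(cols) - cols / 2.0) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    return -np.pi * (X ** 2 + Y ** 2) / (wavelength * focal_length)

# Illustrative 1080 x 1920 phase-only SLM with 8 um pixels acting as a 0.25 m lens.
phi = thin_lens_phase((1080, 1920), 8e-6, 0.25)
step = max(np.abs(np.diff(phi, axis=0)).max(), np.abs(np.diff(phi, axis=1)).max())
print(step < np.pi)   # True: the phase gradient respects the Nyquist limit for these numbers
```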
  • color images c_i are determined for presentation on display 102, to reproduce the target focal stack.
  • This focal stack is represented by a set of retinal images r_1, . . . , r_n.
  • a ray-traced model of retinal blur is described and then this model is applied to evaluate the forward and adjoint operators required to solve the linear least squares problem representing optimized blending.
  • An optical ray is traced through the system under a geometric optics model where each ray originates at a point within the viewer's pupil. The ray then passes through the front and back of the eyepiece 106, the SLM 104, and then impinges on the display 102. At the eyepiece 106 surfaces, rays are refracted using the radius of curvature of the lens, its optical index, and the paraxial approximation. Equation 4 models light transport through the SLM 104. Each ray is assigned the color interpolated at its coordinate of intersection with the display 102. The locations on the display are denoted by (q x , q y ) and the set of display pixel centers by Q q . Note that any rays that miss the bounds of the eyepiece 106, SLM 104, or display 102 are culled (i.e., are assigned a black color).
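  • A minimal paraxial sketch of this kind of trace, treating the eyepiece as a single ideal thin lens (the model above refracts at both eyepiece surfaces using the radius of curvature and optical index, so this is a simplification with assumed distances):

```python
import numpy as np

def propagate(ray, distance):
    """Free-space propagation of a paraxial ray (height y, angle u) over `distance` meters."""
    y, u = ray
    return np.array([y + distance * u, u])

def thin_lens(ray, focal_length):
    """Paraxial refraction through an ideal thin lens of the given focal length."""
    y, u = ray
    return np.array([y, u - y / focal_length])

# A ray leaving the pupil center at 2 degrees, traced 20 mm to the eyepiece,
# refracted by an assumed 40 mm eyepiece, then propagated 30 mm toward the SLM/display.
ray = np.array([0.0, np.deg2rad(2.0)])
ray = propagate(ray, 0.020)
ray = thin_lens(ray, 0.040)
ray = propagate(ray, 0.030)
print(ray)   # (height, angle) where the ray meets the SLM/display plane
```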
  • The forward operator T_φ(c) is linear in the color image c.
  • For a given set of color images c, the forward operator gives the focal stack that would be produced on the retina; minimizing ||T_φ(c) − r||_2^2 against the target focal stack r gives the color images best approximating the desired focal stack.
  • the transpose of T_φ, mapping retinal image samples to display pixels, can be similarly evaluated with ray tracing operations with accumulation in the color image c rather than the retinal image r. In conclusion, these forward and adjoint operators are applied with an iterative least squares solver.
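  • A minimal sketch of that final solve: given a linear forward operator (here a small random matrix standing in for the ray-traced T_φ) and its adjoint, an iterative least-squares solver such as LSQR recovers the color values that best reproduce a target focal stack. Sizes and data are illustrative only.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
n_retina, n_display = 400, 300                      # flattened retinal samples / display pixels
T = rng.random((n_retina, n_display)) / n_display   # stand-in for the ray-traced forward model

forward = LinearOperator(
    (n_retina, n_display),
    matvec=lambda c: T @ c,      # display colors c -> retinal focal-stack samples
    rmatvec=lambda r: T.T @ r,   # adjoint: accumulate retinal samples back onto display pixels
)

target_focal_stack = rng.random(n_retina)
c = lsqr(forward, target_focal_stack, iter_lim=200)[0]
print(np.linalg.norm(forward.matvec(c) - target_focal_stack))   # remaining least-squares residual
```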
  • the phase functions, when executed by the SLM, cause the SLM to reproduce a focal pattern corresponding to each focal surface. This is achieved by the SLM adding phase delays to a wavefront of the light from the electronic display. The phase delays cause the shape of the wavefront to be bent and warped into the shape of each focal surface to thereby produce a focal pattern that conforms to the scene geometry.
  • the SLM time-multiplexes the adjustment of the wavefront for each focal surface in order to provide focus for each of the different focal surfaces in the virtual scene to a viewing user.
  • For example, at a first time, the SLM adjusts the wavefront for a far depth; at a second time, the SLM adjusts the wavefront for an intermediate depth; and at a third time, the SLM adjusts the wavefront for a near depth.
  • the speed at which the time-multiplexed adjustment of these three depths occurs is generally too fast for the human eye to notice and, therefore, the viewing user observes the virtual scene in focus, or at least as modeled and/or approximated.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)

Abstract

A head mounted display (HMD) adjusts the phase of light of a virtual scene using a spatially programmable focusing element. Depths of the virtual scene are approximated to one or more focal surfaces and the shape of the focal surfaces is then adjusted to minimize the distance of the focal surface to features in the virtual scene. The resulting shape of the focal surface is a continuous piecewise smooth three-dimensional curve. A phase function is generated for each focal surface that, when executed by the spatially programmable focusing element, reproduces a focal pattern corresponding to each focal surface, which bends and shapes the wavefront to produce a focal pattern that conforms to the scene geometry.

Description

FOCAL SURFACE DISPLAY
BACKGROUND
[0001] The present disclosure generally relates to enhancing images from electronic displays, and specifically to varying the focal length of optics to enhance the images.
[0002] A head mounted display (HMD) can be used to simulate virtual environments. Conventional binocular HMDs vary the stimulus to vergence with the information being presented to a viewing user in a virtual scene, while the stimulus to accommodation remains fixed at the apparent distance of the display, as created by the viewing optics. Sustained vergence-accommodation conflict (VAC) has been associated with visual discomfort, motivating numerous proposals for delivering near-correct accommodation cues.
[0003] Vergence is the simultaneous movement or rotation of both eyes in opposite directions to obtain or maintain single binocular vision which is connected to accommodation of the eye. Under normal conditions, changing the focus of the eyes to look at an object at a different distance automatically causes vergence and accommodation. For example, as a real object moves closer to a user looking at the real object, the user's eyes rotate inward to stay verged on the object. As the object gets closer to the user, the eyes must "accommodate" for the closer distance by reducing the power or focal length, which is achieved automatically by each eye changing its shape. Thus, under normal conditions in the real world, the vergence depth corresponds to where the user is looking, which also equals the focal length of the user's eyes.
[0004] A conflict between vergence and accommodation, however, often occurs with some three-dimensional (3D) electronic displays. For example, as a virtual object is rendered on the 3D electronic display to move closer to a user looking at the object, each of the user's eyes rotates inward to stay verged on the object, but the power or focal length of each eye is not reduced; hence, the user's eyes do not accommodate as in the real world. Instead of reducing power or focal length to accommodate for the closer vergence depth, the eyes maintain accommodation at a distance associated with the 3D electronic display. Thus, the vergence depth often does not equal the focal length for the human eye for objects displayed on 3D electronic displays. This discrepancy between vergence depth and focal length is referred to as "vergence-accommodation conflict." A user experiencing only vergence or accommodation (and not both simultaneously) can experience some degree of fatigue or nausea, which is undesirable for virtual reality system creators.
SUMMARY
[0005] A head mounted display (HMD) adjusts the phase of light of a virtual scene received from an electronic display using a spatially programmable focusing element, such as a spatial light modulator operating in a phase-modulation mode. For example, the headset receives virtual scene data for the virtual scene that includes scene geometry data or depth values for different components of or points in the virtual scene. Before light of the virtual scene is received by an eye of a user viewing the virtual scene, a spatial light modulator (SLM) adjusts a focal pattern or wavefront of the light for the virtual scene. While conventional head mounted displays typically deliver a single fixed focal surface, the SLM operates as a programmable lens with a spatially varying focal length, allowing the virtual image of different pixels of the HMD's display to appear (from an exit pupil of the HMD) to be formed at different depths within the virtual scene, thereby shaping synthesized focal surfaces to conform to the virtual scene geometry.
[0006] To determine the positions of the focal surfaces in the virtual scene, the range of depth values of the virtual scene is approximated to a set of one or more discrete depth values based on the scene geometry data. In one embodiment, scene points in the virtual scene are clustered based on their associated depth values to identify the set of one or more discrete depth values corresponding to the mean depth value of each cluster. The depth of the virtual scene is then segmented into one or more focal planes at each of the one or more discrete depth values within the virtual scene. Accordingly, for each focal plane, the shape of the focal plane is adjusted to minimize the distance of the focal plane to each scene point in the cluster. This warps the shape of the focal plane to where it makes sense to start referring to the focal plane as a focal surface since the focal plane is bent, warped, and/or modified to conform to a set of scene points, components, or features nearest to the focal plane in the virtual scene. The resulting shape of the focal surface is a continuous piecewise smooth three-dimensional curve, unlike multifocal displays with planar surfaces located at fixed focal depths. Thus, for example, a scene could be segmented into three focal surfaces (near, intermediate, far) that are each differently bent or warped to respectively conform to (near, intermediate, far) objects in the virtual scene.
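As an illustrative sketch of the clustering step (the disclosure does not mandate a particular clustering algorithm), a one-dimensional k-means over per-pixel depths expressed in diopters picks the discrete depth values and assigns every scene point to its nearest focal plane; the cluster means are then the depths at which the focal planes are placed before their shapes are adjusted.

```python
import numpy as np

def cluster_depths(depth_diopters, k=3, iters=50):
    """Cluster per-pixel depths (in diopters) into k discrete focal-plane depths (1-D k-means).

    Returns (centers, labels): the mean depth of each cluster and, per pixel,
    the index of the focal plane the pixel is assigned to.
    """
    d = np.asarray(depth_diopters, dtype=float).ravel()
    centers = np.quantile(d, np.linspace(0.1, 0.9, k))   # deterministic initialization
    for _ in range(iters):
        labels = np.argmin(np.abs(d[:, None] - centers[None, :]), axis=1)
        centers = np.array([d[labels == j].mean() if np.any(labels == j) else centers[j]
                            for j in range(k)])
    labels = np.argmin(np.abs(d[:, None] - centers[None, :]), axis=1)
    return centers, labels

# Toy scene: sky at 0 D, a wall at ~0.33 D, and a tabletop spanning 1 D to 2 D.
depths = np.concatenate([np.zeros(500), np.full(300, 0.33), np.linspace(1.0, 2.0, 200)])
centers, labels = cluster_depths(depths, k=3)
print(centers)   # approximately [0.0, 0.33, 1.5]
```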
[0007] Given a set of focal surfaces, a phase function is generated for each focal surface. The phase function, when executed by the SLM, causes the SLM to reproduce a focal pattern corresponding to each focal surface. This is achieved by the SLM adding phase delays to a wavefront of the light from the electronic display. The phase delays cause the shape of the wavefront to be bent and warped into the shape of each focal surface to thereby produce a focal pattern that conforms to the scene geometry.
[0008] Embodiments according to the invention are in particular disclosed in the attached claims directed to a method, a storage medium, and a system, wherein any feature mentioned in one claim category, e.g. method, can be claimed in another claim category, e.g. system, computer program product, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof is disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.
[0009] In an embodiment a system comprises:
at least one processor;
an electronic display element configured to display a virtual scene; and memory including instructions that, when executed by the at least one processor, cause the at least one processor to:
segment the virtual scene into a set of focal surfaces in the virtual scene based on scene geometry obtained for the virtual scene, each focal surface being associated with a set of nearest scene points; adjust a shape of each focal surface to minimize a distance between each scene point of the set of nearest scene points to the focal surface and the focal surface;
generate, for each focal surface, a phase function for adjusting a wavefront of light of the virtual scene consistent with the adjusted shape of the focal surface; and
an optics block including a spatially programmable focusing element configured to:
receive the wavefront of light of the virtual scene from the electronic display element;
adjust, for each focal surface of the set of focal surfaces, the wavefront based on the phase function associated with the focal surface; and
provide the adjusted wavefront of the light of the virtual scene for each focal surface to a user via an exit pupil of the system.
[0010] In an embodiment the memory includes instructions that, when executed by the at least one processor, further cause the at least one processor to:
cluster scene points in the virtual scene, each scene point being associated with scene geometry data corresponding to a location of the scene point in the virtual scene; and determine a location for each of the set of focal surfaces based on the clustered scene points and associated scene geometry data.
[0011] In an embodiment the memory further includes instructions that, when executed by the at least one processor, cause the at least one processor to:
determine a color image of the virtual scene to be displayed on the electronic display element for each phase function.
[0012] In an embodiment the adjusted shape of each focal surface is a spatially varying piecewise-smooth curve.
[0013] In an embodiment adjusting the shape of each focal surface to minimize the distance between each scene point of the set of nearest scene points to the focal surface and the focal surface includes: applying a non-linear least squares optimization between each of the set of nearest scene points to the focal surface.
[0014] In an embodiment the spatially programmable focusing element time-multiplexes adjustment of the wavefront for each focal surface of the set of focal surfaces.
[0015] In an embodiment each phase function shifts the light from the electronic display element for each corresponding focal surface by introducing phase delays associated with the shape of the focal surface, and wherein each focal surface is at least one of a continuously-varying shape or a discontinuous surface.
[0016] In an embodiment a head mounted display (HMD) is provided comprising:
at least one processor;
an electronic display element configured to display a virtual scene;
a spatial light modulator (SLM) configured to:
receive a wavefront of light from the electronic display element for the virtual scene;
adjust, at one or more first times, the wavefront of light from the electronic display element to provide focus for a first focal surface positioned at a first depth in the virtual scene relative to an exit pupil of the HMD;
adjust, at one or more second times, the wavefront of light from the electronic display element to provide focus for a second focal surface positioned at a second depth in the virtual scene relative to the exit pupil of the HMD; and direct the adjusted wavefront of light providing focus for the first focal surface and the second focal surface to an exit pupil of the HMD, the wavefront of light for the first focal surface and the second focal surface combining.
[0017] In an embodiment the SLM time-multiplexes adjustment of the wavefront for the first focal surface and the second focal surface between the one or more first times and the one or more second times.
[0018] In an embodiment generating the first focal surface and the second focal surface comprises: segmenting the virtual scene into the first focal surface and the second focal surface based on scene geometry obtained for the virtual scene, the first focal surface associated with a first set of nearest scene points and the second focal surface associated with a second set of nearest scene points;
adjusting a first shape of the first focal surface to minimize first distances between each scene point of the first set of nearest scene points to the first focal surface and the first focal surface;
adjusting a second shape of the second focal surface to
minimize second distances between each scene point of the second set of nearest scene points to the second focal surface and the second focal surface; and generating, for each of the first focal surface and the second focal surface, a phase function for adjusting a wavefront of light of the virtual scene consistent with the adjusted shape of the first focal surface and the second focal surface.
[0019] In an embodiment the HMD further comprises:
clustering scene points in the virtual scene, each scene point being associated with scene geometry data
corresponding to a location of the scene point in the virtual scene; and
determining a first depth for the first focal surface and a second depth of the second focal surface based on the clustered scene points and associated scene geometry data.
[0020] In an embodiment the HMD further comprises:
determining a color image of the virtual scene to be displayed on the electronic display element for each phase function.
[0021] In an embodiment adjusting the first shape of the first focal surface and the second shape of the second focal surface includes: determining the first shape of the first focal surface by applying a nonlinear least squares optimization between each of the first set of nearest scene points to the first focal surface; and determining the second shape of the second focal surface by applying the non-linear least squares optimization between each of the second set of nearest scene points to the second focal surface.
[0022] In an embodiment of the HMD each phase function shifts the light from the electronic display element for each of the first focal surface and the second focal surface by introducing phase delays associated with the first shape of the first focal surface and the second shape of the second focal surface, and wherein each of the first focal surface and the second focal surface is at least one of a continuously-varying shape or a discontinuous surface.
[0023] In an embodiment a method comprises:
obtaining a virtual scene including scene geometry data identifying a depth associated with each scene point in the virtual scene;
segmenting the virtual scene into a set of focal surfaces based on the scene geometry data, each focal surface being associated with a set of nearest scene points;
adjusting a shape of each focal surface to minimize a distance between each scene point of the set of nearest scene points to the focal surface and the focal surface, the adjusted shape being a spatially varying piecewise-smooth curved surface; and generating, for each focal surface, a phase function for a spatial light modulator (SLM) to adjust a wavefront of light of the virtual scene received from an electronic display element, the phase function, when applied by the SLM, introducing phase delays in the wavefront that reproduce a focal pattern corresponding to the adjusted shape of the focal surface.
[0024] In an embodiment segmenting the virtual scene into a set of focal surfaces based on the scene geometry includes:
clustering the scene points in the virtual scene; and determining a depth for each of the set of focal surfaces based on the clustered scene points and associated scene geometry data.
[0025] In an embodiment the method further comprises:
determining, for each phase function, a color image of the virtual scene to be displayed on the electronic display element for each focal surface.
[0026] In an embodiment the SLM and the electronic display element are included in a head mounted display and the SLM time-multiplexes adjustment of the wavefront for each focal surface based on the corresponding phase function to cause a composite image of the virtual scene to be provided to a user viewing the virtual scene through an exit pupil of the head mounted display.
[0027] In an embodiment of the method each phase function shifts the light from the electronic display element for each focal surface by introducing phase delays associated with the shape of each focal surface, and wherein each focal surface is at least one of a continuously-varying shape or a discontinuous surface.
[0028] In an embodiment adjusting the shape of each focal surface to minimize the distance between each scene point of the set of nearest scene points to the focal surface and the focal surface includes:
applying a non-linear least squares optimization between each of the set of nearest scene points to the focal surface.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] FIG. 1 shows an example ray diagram for an optical system that can be incorporated into a head mounted display, in accordance with at least one embodiment.
[0030] FIG. 2 shows an example system, in accordance with at least one embodiment.
[0031] FIG. 3 shows a diagram of a head mounted display, in accordance with at least one embodiment.
[0032] FIG. 4 shows an example process for mitigating vergence-accommodation conflict, in accordance with at least one embodiment.
[0033] The figures depict embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.
DETAILED DESCRIPTION
[0034] Conventional head mounted displays contain an eyepiece and an electronic display that deliver a single, fixed focal surface. FIG. 1 shows a focal surface display 100 that includes electronic display element ("display") 102 and a phase-modifying spatial light modulator (SLM) 104 between eyepiece 106 and display 102. The SLM 104 operates as a programmable lens with a spatially varying focal length, allowing the virtual image 110 of different display pixels of the display 102 to appear to be formed at different depths within a virtual scene. Thus, the SLM 104 acts as a dynamic freeform lens, shaping synthesized focal surfaces that conform to the virtual scene geometry. Accordingly, a system and method for decomposing a virtual scene into one or more of these focal surfaces is disclosed.
[0035] A depth map, representing the scene geometry, and a focal stack, modeling the variation of retinal blur with changes in accommodation, are provided to the system as inputs. In one embodiment, both inputs are rendered from the perspective of a viewing user's entrance pupil, and the outputs of the system are k phase functions φ1, . . . , φk and color images c1, . . . , ck to be presented by the SLM 104 and display 102. Ideally, the phase functions and color images are jointly optimized; however, this results in a large, nonlinear problem. Accordingly, approximations are introduced to ensure that the method is computationally tractable. First, instead of accounting for every possible depth in the virtual scene, a depth map of the virtual scene is decomposed or segmented into a set of smooth focal surfaces to which most (if not all) depths of the virtual scene can be approximated. Then, the phase functions for the SLM are optimized to approximate these focal surfaces. Finally, the color images are optimized to reproduce the target focal stack. Thus, while the disclosed system provides for multiple focal surfaces, a single focal surface may theoretically achieve similar retinal blur fidelity; however, multiple focal surfaces can offer an advantageous trade-off between system complexity (e.g., the need for time multiplexing) and image quality (e.g., suppression of compression artifacts) relative to a single focal surface as provided by other prior multifocal display systems.
System Overview
[0036] FIG. 2 is a system environment in which a console 250 operates. In this example, the system environment includes HMD 200, imaging device 260, and input interface 270, which are each coupled to console 250. While FIG. 2 shows a single HMD 200, a single imaging device 260, and a single input interface 270, in other embodiments, any number of these components may be included in the system. For example, there may be multiple HMDs 200, each having an associated input interface 270 and being monitored by one or more imaging devices 260, with each HMD 200, input interface 270, and imaging device 260 communicating with the console 250. In alternative configurations, different and/or additional components may also be included in the system environment.
[0037] HMD 200 is a Head-Mounted Display (HMD) that presents content to a user. Example content includes images, video, audio, or some combination thereof. Audio content may be presented via a separate device (e.g., speakers and/or headphones) external to HMD 200 that receives audio information from HMD 200, console 250, or both. HMD 200 includes electronic display 202, optics block 204, spatial light modulator (SLM) block 206, one or more locators 208, inertial measurement unit (IMU) 210, head tracking sensors 212, and scene rendering module 214.
[0038] Optics block 204 directs light from display 202 via SLM block 206 to an exit pupil of HMD 200 for viewing by a user using one or more optical elements, such as apertures, Fresnel lenses, convex lenses, concave lenses, filters, and so forth, and may include combinations of different optical elements. In some embodiments, one or more optical elements in optics block 204 may have one or more coatings, such as anti-reflective coatings. Magnification of the image light by optics block 204 allows electronic display 202 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification of the image light may increase a field of view of the displayed content. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., 150 degrees diagonal), and in some cases all, of the user's field of view.
[0039] Spatial Light Modulator (SLM) block 206 includes one or more drivers to control electronic display 202 and an SLM to generate and display images of the virtual scene with dynamic spatiotemporal focal surfaces. SLM block 206, provided in optical series with optics block 204, operates in phase-only mode and, for a given frame, may generate multiple phase functions, each corresponding to a focal pattern of a different range of depths within the virtual scene. In various embodiments, SLM block 206 could be integrated into optics block 204. In one embodiment, each focal surface adjusts the focal pattern to vary the focal length (or optical power) of HMD 200 to keep a user's eyes in a zone of comfort while viewing content of the virtual scene.
[0040] The position of SLM block 206 within the optical system of HMD 200 is subject to design rules that limit certain performance parameters for each configuration because of the limited resolution of the SLM. Thus, there are tradeoffs between configurations and their associated performance. For example, the focal range of HMD 200 can be limited based on the location of SLM block 206 relative to electronic display 202 and optics block 204. In this example, a larger focal range is achieved as the SLM is positioned closer to optics block 204 rather than display 202. Thus, as the SLM is positioned closer to display 202, the focal range is limited. Additionally, a larger field of view is also achievable as the SLM is positioned closer to optics block 204 rather than display 202 and, thus, as the SLM is positioned closer to display 202, the field of view is limited. However, as the SLM is positioned closer to optics block 204, edge boundary sharpness in the virtual scene is degraded. Thus, the closer the SLM is positioned to display 202, the sharper the edge boundaries. Accordingly, there are design tradeoffs, and a balance must be struck between edge sharpness on one hand and focal range and field of view on the other.
[0041] Locators 208 are objects located in specific positions on HMD 200 relative to one another and relative to a specific reference point on HMD 200.
Locator 208 may be a light emitting diode (LED), a corner cube reflector, a reflective marker, a type of light source that contrasts with an environment in which HMD 200 operates, or some combination thereof. Active locators 208 (i.e., an LED or other type of light emitting device) may emit light in the visible band (~380 nm to 750 nm), in the infrared (IR) band (~750 nm to 1 mm), in the ultraviolet band (10 nm to 380 nm), some other portion of the electromagnetic spectrum, or some combination thereof.
[0042] Locators 208 can be located beneath an outer surface of HMD 200, which is transparent to the wavelengths of light emitted or reflected by locators 208 or is thin enough not to substantially attenuate the wavelengths of light emitted or reflected by locators 208. Further, the outer surface or other portions of HMD 200 can be opaque in the visible band of wavelengths of light. Thus, locators 208 may emit light in the IR band while under an outer surface of HMD 200 that is transparent in the IR band but opaque in the visible band.
[0043] IMU 210 is an electronic device that generates fast calibration data based on measurement signals received from one or more of head tracking sensors 212, which generate one or more measurement signals in response to motion of HMD 200. Examples of head tracking sensors 212 include accelerometers, gyroscopes, magnetometers, other sensors suitable for detecting motion, correcting error associated with IMU 210, or some combination thereof. Head tracking sensors 212 may be located external to IMU 210, internal to IMU 210, or some combination thereof.
[0044] Based on the measurement signals from head tracking sensors 212, IMU 210 generates fast calibration data indicating an estimated position of HMD 200 relative to an initial position of HMD 200. For example, head tracking sensors 212 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, and roll). IMU 210 can, for example, rapidly sample the measurement signals and calculate the estimated position of HMD 200 from the sampled data. For example, IMU 210 integrates measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated position of a reference point on HMD 200. The reference point is a point that may be used to describe the position of HMD 200. While the reference point may generally be defined as a point in space, in various embodiments, the reference point is defined as a point within HMD 200 (e.g., a center of the IMU 210). Alternatively, IMU 210 provides the sampled measurement signals to console 250, which determines the fast calibration data.
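As a rough illustration of the double integration just described, the following Python sketch estimates a reference-point position from sampled accelerometer signals. It assumes uniformly spaced samples already expressed in a common frame, ignores the gyroscope and orientation handling, and uses illustrative names (estimate_position, accel_samples) that do not appear in the disclosure.

```python
import numpy as np

def estimate_position(accel_samples, dt, p0=np.zeros(3), v0=np.zeros(3)):
    """Dead-reckon a reference-point position by integrating accelerometer
    samples twice: acceleration -> velocity -> position.

    accel_samples: (N, 3) array of sampled accelerations (m/s^2)
    dt:            sampling interval in seconds
    p0, v0:        initial position and velocity of the reference point
    """
    p, v = p0.astype(float).copy(), v0.astype(float).copy()
    for a in accel_samples:
        v = v + a * dt   # integrate acceleration into a velocity vector
        p = p + v * dt   # integrate velocity into an estimated position
    return p, v
```

Because every integration step compounds measurement noise, the drift error discussed in the next paragraph accumulates, which is why the console periodically resets the initial position of the reference point.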
[0045] IMU 210 can additionally receive one or more calibration parameters from console 250. As further discussed below, the one or more calibration parameters are used to maintain tracking of HMD 200. Based on a received calibration parameter, IMU 210 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, certain calibration parameters cause IMU 210 to update an initial position of the reference point to correspond to a next calibrated position of the reference point. Updating the initial position of the reference point as the next calibrated position of the reference point helps reduce accumulated error associated with determining the estimated position. The accumulated error, also referred to as drift error, causes the estimated position of the reference point to "drift" away from the actual position of the reference point over time.
[0046] Scene rendering module 214 receives content for the virtual scene from engine 256 and provides the content for display on electronic display 202.
Additionally, scene rendering module 214 determines a portion of the content to be displayed on electronic display 202 based on one or more of tracking module 254, head tracking sensors 212, or IMU 210, as described further below.
[0047] Imaging device 260 generates slow calibration data in accordance with calibration parameters received from console 250. Slow calibration data includes one or more images showing observed positions of locators 208 that are detectable by imaging device 260. Imaging device 260 may include one or more cameras, one or more video cameras, other devices capable of capturing images including one or more locators 208, or some combination thereof. Additionally, imaging device 260 may include one or more filters (e.g., for increasing signal to noise ratio). Imaging device 260 is configured to detect light emitted or reflected from locators 208 in a field of view of imaging device 260. In embodiments where locators 208 include passive elements (e.g., a retroreflector), imaging device 260 may include a light source that illuminates some or all of locators 208, which retro-reflect the light towards the light source in imaging device 260. Slow calibration data is communicated from imaging device 260 to console 250, and imaging device 260 receives one or more calibration parameters from console 250 to adjust one or more imaging parameters (e.g., focal length, focus, frame rate, ISO, sensor temperature, shutter speed, aperture, etc.).
[0048] Input interface 270 is a device that allows a user to send action requests to console 250. An action request is a request to perform a particular action. For example, an action request may be to start or end an application or to perform a particular action within the application. Input interface 270 may include one or more input devices. Example input devices include a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the received action requests to console 250. An action request received by input interface 270 is communicated to console 250, which performs an action corresponding to the action request. In some embodiments, input interface 270 may provide haptic feedback to the user in accordance with instructions received from console 250. For example, haptic feedback is provided by input interface 270 when an action request is received, or console 250 communicates instructions to input interface 270 causing input interface 270 to generate haptic feedback when console 250 performs an action.
[0049] Console 250 provides content to HMD 200 for presentation to the user in accordance with information received from imaging device 260, HMD 200, or input interface 270. In the example shown in FIG. 2, console 250 includes application store 252, tracking module 254, and virtual reality (VR) engine 256. Some embodiments of console 250 have different or additional modules than those described in conjunction with FIG. 2. Similarly, the functions further described below may be distributed among components of console 250 in a different manner than is described here.
[0050] Application store 252 stores one or more applications for execution by console 250. An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of HMD 200 or input interface 270. Examples of applications include gaming applications, conferencing applications, video playback applications, or other suitable applications.
[0051] Tracking module 254 calibrates the system using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determining the position of HMD 200. For example, tracking module 254 adjusts the focus of imaging device 260 to obtain a more accurate position for observed locators 208 on HMD 200. Moreover, calibration performed by tracking module 254 also accounts for information received from IMU 210. Additionally, if tracking of HMD 200 is lost (e.g., imaging device 260 loses line of sight of at least a threshold number of locators 208), tracking module 254 re-calibrates some or all of the system components.
[0052] Additionally, tracking module 254 tracks the movement of HMD 200 using slow calibration information from imaging device 260 and determines positions of a reference point on HMD 200 using observed locators from the slow calibration information and a model of HMD 200. Tracking module 254 also determines positions of the reference point on HMD 200 using position information from the fast calibration information from IMU 210 on HMD 200. Additionally, tracking module 254 may use portions of the fast calibration information, the slow calibration information, or some combination thereof, to predict a future location of HMD 200, which is provided to engine 256.
[0053] Engine 256 executes applications within the system and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof for HMD 200 from tracking module 254. Based on the received information, engine 256 determines content to provide to HMD 200 for presentation to the user, such as a virtual scene. For example, if the received information indicates that the user has looked to the left, engine 256 generates content for HMD 200 that mirrors or tracks the user's movement in a virtual environment. Additionally, engine 256 performs an action within an application executing on console 250 in response to an action request received from the input interface 270 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via HMD 200 or haptic feedback via input interface 270.
[0054] FIG. 3 is a diagram of HMD 200, in accordance with at least one embodiment. In this example, HMD 200 includes a front rigid body and a band that goes around a user's head. The front rigid body includes one or more electronic display elements corresponding to electronic display 202, IMU 210, head tracking sensors 212, and locators 208. In this example, head tracking sensors 212 are located within IMU 210.
[0055] Locators 208 are located in fixed positions on the front rigid body relative to one another and relative to reference point 300. In this example, reference point 300 is located at the center of IMU 210. Each of locators 208 emits light that is detectable by imaging device 260. Locators 208, or portions of locators 208, are located on a front side, a top side, a bottom side, a right side, and a left side of the front rigid body, as shown in FIG. 3.
Focal Surface Display Method
[0056] FIG. 4 shows an example process 400 for mitigating vergence-accommodation conflict, in accordance with at least one embodiment. As discussed above, a head mounted display can provide focus accommodation and depth of field blur using an SLM. Accordingly, in this example, virtual scene data for displaying a virtual scene is obtained 402. In one example, each pixel of the electronic display is associated with an individual depth value, such as infinity for sky displayed in the virtual scene, one meter for an object on a table, a varying distance between half a meter and one meter for the surface of the table, three meters for a far wall in the virtual scene, and so forth.
[0057] Using the scene geometry data, the virtual scene is segmented 404 into a set of focal surfaces. To determine the positions of the focal surfaces in the virtual scene, the range of depth values of the virtual scene is approximated by a set of one or more discrete depth values based on the scene geometry data. For example, given a target virtual scene, let d(θx, θy) be the depth (in diopters) along each viewing angle (θx, θy) ∈ Ωθ, for chief rays passing through the center of a viewing user's pupil, with Ωθ being the discrete set of retinal image samples. If it were possible for phase SLMs to produce focal surfaces of arbitrary topology, then no further optimization would be required; however, this is not the case, since focal surfaces are required to be smooth. Accordingly, a depth map of the virtual scene is segmented 404 into k smooth focal surfaces d1, . . . , dk. For example, if for every viewing angle (θx, θy) there is at least one focal surface di(θx, θy) close to the target depth map d(θx, θy), then every scene element can be depicted with near-correct retinal blur, as light from display 102 will appear to originate from the correct scene depth. (Optimized blending methods still benefit the rendition of occluding, semi-transparent, and reflective objects.) Given this goal, the following optimization problem has been formulated:
minimize over d1, . . . , dk:   Σ_{(θx, θy) ∈ Ωθ} ( min_i | d(θx, θy) − di(θx, θy) | )²,        (1)
subject to bound constraints on the second derivatives ∂²di(θx, θy) of each focal surface.
[0058] As further discussed below, generating a focal surface using a phase function φ may introduce optical aberrations. Observationally, aberrations are minimized if the second derivatives of the focal surface are small. This observation is reflected by the bound constraints in the above optimization problem. Note, however, that no explicit bound constraints are imposed on the optical powers di of the focal surfaces. This would appear to contradict the derivation of the minimum realizable focal length of SLM 104. Rather than adding these constraints directly, the target depth map d is truncated to a realizable range of depths.
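The truncation of the target depth map, together with one possible initialization of the k focal surface depths, might look like the Python sketch below. The quantile-based initialization and all function names are illustrative assumptions; the disclosure only specifies that the depth map is truncated to a realizable range and then fit by the nonlinear least squares procedure of the next paragraph.

```python
import numpy as np

def truncate_depth_map(d, d_min, d_max):
    """Clip the target depth map (in diopters) to the realizable range of the
    SLM instead of imposing explicit bounds on the optical powers d_i."""
    return np.clip(d, d_min, d_max)

def init_focal_surface_depths(d, k):
    """Initialize k focal surfaces as constant-depth planes spread over the
    depth distribution; each plane is subsequently warped toward its nearest
    scene points by the nonlinear least squares fit."""
    depths = np.sort(d.ravel())
    return np.array([np.quantile(depths, (i + 0.5) / k) for i in range(k)])
```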
[0059] Accordingly, for each focal plane, the shape of the focal plane is modified 406 to minimize the distance of the focal plane to each scene point in its cluster. This warps the shape of the focal plane to the point where it makes sense to refer to the focal plane as a focal surface, since the focal plane is bent, warped, and/or modified to conform to the set of scene points, components, or features nearest to it in the virtual scene. The resulting shape of the focal surface is a continuous, piecewise-smooth three-dimensional curved surface, unlike the planar surfaces located at fixed focal depths used by multifocal displays. A nonlinear least squares method, which scales to large problem sizes, is applied to solve Equation 1. Note that the objective involves the nonlinear residual g_{θx,θy}(d) = min_i | d(θx, θy) − di(θx, θy) | for each pixel (θx, θy). This residual is not differentiable, which is a problem for nonlinear least squares. However, a close approximation is obtained by replacing the min with a "soft minimum" (soft-min), defined as follows:
g̃_{θx,θy}(d) = −t · log Σ_{i=1..k} exp( −| d(θx, θy) − di(θx, θy) | / t ),        (2)
where t is a conditioning parameter that is tuned for a given application.
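A direct transcription of the soft-min of Equation 2 in Python, with a standard shift for numerical stability, could read as follows; the helper name and the example depth values are illustrative only.

```python
import numpy as np

def soft_min(x, t):
    """Differentiable surrogate for min(x): -t * log(sum_i exp(-x_i / t)).
    Smaller t tightens the approximation at the cost of conditioning."""
    x = np.asarray(x, dtype=float)
    m = x.min()  # subtract the true minimum to avoid exp underflow
    return m - t * np.log(np.sum(np.exp(-(x - m) / t)))

# Residual for one viewing angle: target depth d versus candidate focal
# surface depths d_i at that angle (all values hypothetical, in diopters).
d_target = 1.2
d_surfaces = np.array([0.4, 1.0, 2.5])
residual = soft_min(np.abs(d_target - d_surfaces), t=0.1)
```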
[0060] Applying Equation 2 to Equation 1, and re-expressing bound constraints as soft constraints, yields the following nonlinear least squares problem:
minimize over d1, . . . , dk:   Σ_{(θx, θy) ∈ Ωθ} ( g̃_{θx,θy}(d) )²  +  γ Σ_i Σ_{(θx, θy) ∈ Ωθ} ‖ ∂²di(θx, θy) ‖²,        (3)
where ∂²di(θx, θy) is the vector of second partial derivatives of di at (θx, θy) and γ is a weighting parameter.
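As one possible realization of Equation 3, the Python sketch below fits k focal surfaces to a small target depth map with scipy's least_squares. The finite-difference discretization of the second derivatives, the constant-plane initialization, and the solver settings are assumptions for illustration, not the implementation prescribed by the disclosure.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_focal_surfaces(d_target, k, t=0.1, gamma=1.0):
    """Solve Equation 3: soft-min data term plus a penalty on the second
    derivatives of each focal surface d_i (all depths in diopters)."""
    H, W = d_target.shape
    # initialize each surface as a constant-depth plane at a depth quantile
    init = np.stack([np.full((H, W), np.quantile(d_target, (i + 0.5) / k))
                     for i in range(k)])

    def residuals(x):
        d = x.reshape(k, H, W)
        diffs = np.abs(d_target[None] - d)        # |d - d_i| per surface
        m = diffs.min(axis=0)
        soft = m - t * np.log(np.exp(-(diffs - m) / t).sum(axis=0))  # soft-min
        # discrete second derivatives of each surface along both axes
        dxx = d[:, :, 2:] - 2.0 * d[:, :, 1:-1] + d[:, :, :-2]
        dyy = d[:, 2:, :] - 2.0 * d[:, 1:-1, :] + d[:, :-2, :]
        return np.concatenate([soft.ravel(),
                               np.sqrt(gamma) * dxx.ravel(),
                               np.sqrt(gamma) * dyy.ravel()])

    sol = least_squares(residuals, init.ravel(), method="trf")
    return sol.x.reshape(k, H, W)
```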
[0061] Given a set of focal surfaces, a phase function is generated 408 for each focal surface. Provided the set of focal surfaces di, the next step includes solving for a set of phase functions φi to generate each focal surface. To solve this problem, the optical properties of a phase SLM must be understood. Variations in optical path length through a lens cause refraction. Similarly, differences in phase modulation across an SLM result in diffraction. Simulation of light propagation through a high-resolution SLM, via wave optics modeling, is currently computationally infeasible, but one can approximate these diffractive effects using geometric optics.
[0062] Accordingly, let (px, py) denote SLM locations, with Ωp being the discrete set of SLM pixel centers. Optical rays intersecting an SLM are redirected depending on the phase φ. For small angles (i.e., under the paraxial approximation), the deflection is proportional to the gradient of φ. If an incident ray has direction vector (x, y, 1) and intersects the SLM at (px, py), then the outgoing ray has direction vector:
( x + (λ / 2π) · ∂φ/∂px(px, py),   y + (λ / 2π) · ∂φ/∂py(px, py),   1 ),        (4)
where λ is the illumination wavelength. Thus, if φ is a linear function, then the SLM operates as a prism, adding a constant offset to the direction of every ray. (Note that monochromatic illumination is assumed in this derivation, with practical considerations for broadband illumination sources presented later.) An SLM may also act as a thin lens by presenting a quadratically varying phase as follows:
φ(px, py) = −(π / (λ f)) ( px² + py² ),        (5)
where f is the focal length of the emulated lens.
[0063] Note that these optical properties are local. The deflection of a single ray only depends on the first-order Taylor series of the phase (i.e., the phase gradient) around the point of intersection with the SLM. Similarly, the change in focus of an ε-sized bundle of rays intersecting the SLM only depends on the second-order Taylor series. Specifically, if the Hessian of φ at a point (px, py) is given by
H φ(px, py) = −(2π / (λ f)) I,        (6)
where I is the 2×2 identity matrix, then the ε-sized neighborhood around (px, py) functions as a lens of focal length f (i.e., Equation 6 is the Hessian of Equation 5).
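Under this paraxial model, the quadratic lens phase of Equation 5 and the gradient-based ray deflection of Equation 4 can be sketched in Python as follows; the sign conventions match the reconstructed equations above and the function names are illustrative.

```python
import numpy as np

def lens_phase(px, py, f, wavelength):
    """Quadratic phase of an ideal thin lens of focal length f (Equation 5)."""
    return -np.pi / (wavelength * f) * (px**2 + py**2)

def deflect_ray(direction, grad_phi, wavelength):
    """Paraxial deflection at the SLM (Equation 4): add (lambda / 2*pi) times
    the phase gradient at the intersection point to the ray direction."""
    x, y, _ = direction
    gx, gy = grad_phi
    return np.array([x + wavelength / (2.0 * np.pi) * gx,
                     y + wavelength / (2.0 * np.pi) * gy,
                     1.0])

# Sanity check with hypothetical values: an axis-parallel ray hitting the SLM
# at height p should be deflected by roughly -p / f, as for a thin lens.
wavelength, f, p, eps = 532e-9, 0.05, 1e-3, 1e-6
grad = ((lens_phase(p + eps, 0.0, f, wavelength)
         - lens_phase(p - eps, 0.0, f, wavelength)) / (2.0 * eps), 0.0)
out = deflect_ray(np.array([0.0, 0.0, 1.0]), grad, wavelength)  # out[0] ~ -0.02
```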
[0064] To this point, we have allowed the phase to be any real-valued function. In practice, an SLM will have a bounded range, typically [0, 2π]. Phases outside this range are "wrapped", modulo 2π. In addition, achievable phase functions are restricted by the Nyquist limit. The phase can change by no more than 2π over a distance of 2δp, where δp is the SLM pixel pitch.
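A minimal sketch of the wrapping and Nyquist check described in this paragraph, assuming the phase is already sampled on the SLM pixel grid (function names are illustrative):

```python
import numpy as np

def wrap_phase(phi):
    """Wrap a real-valued phase function into the SLM's [0, 2*pi) range."""
    return np.mod(phi, 2.0 * np.pi)

def violates_nyquist(phi):
    """Flag samples where the unwrapped phase changes faster than the Nyquist
    limit: 2*pi over two pixels, i.e. more than pi per pixel."""
    g0, g1 = np.gradient(phi)   # per-pixel phase differences along each axis
    return (np.abs(g0) > np.pi) | (np.abs(g1) > np.pi)
```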
[0065] Accordingly, with this paraxial model of the SLM, a phase function φ that best realizes a given target focal surface d is determined. First, referring to FIG. 1, how the focal length fp (from Equation 5) of SLM 104 affects a focal surface distance zv is determined. As shown in FIG. 1, SLM 104 operates within a focal surface display 100 that is parameterized by the eyepiece 106 distance (z = 0), the SLM 104 distance zp, and the display 102 distance zd. Ignoring the eyepiece 106, the SLM 104 produces an intermediate image 108 of the display 102 at distance zv′. Intermediate image 108 is transformed into a virtual image 110 of the display 102, located at zv, depending on the eyepiece 106 focal length fe. These relations are compactly summarized by application of the thin lens equation:
1 / fp = 1 / (zd − zp) − 1 / (zv′ − zp),        1 / zv = 1 / zv′ − 1 / fe.        (7)
[0066] By casting a viewing ray (θx, θy) from the viewer's pupil to SLM 104, and then by applying Equation 7, a target focal length fp can be assigned for each SLM pixel (px, py) to create a virtual image 110 at the desired focal surface depth. To realize this focal length, Equation 6 requires a phase function φ with the Hessian
H φ(px, py) = −(2π / (λ fp(px, py))) I.        (8)
[0067] There may not be a φ that satisfies this expression. In fact, such a φ only exists when fp is constant and φ is quadratic (i.e., the phase represents a uniform lens). Since Equation 8 cannot be exactly satisfied, the following linear least squares problem is solved to obtain a phase function φ that is as close as possible:
minimize over φ:   Σ_{(px, py) ∈ Ωp} ‖ H[φ](px, py) + (2π / (λ fp(px, py))) I ‖²_F,        (9)
where ‖ · ‖F is the Frobenius norm and H[ ] is the discrete Hessian operator, given by finite differences of φ. Note that the phase function φ plus any linear function a + bx + cy has the same Hessian H, so we additionally constrain φ(0, 0) = 0 and ∇φ(0, 0) = 0.
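One way to pose the linear least squares problem of Equation 9 is with sparse finite-difference operators, as in the Python sketch below. It omits the mixed-derivative term and the constraints φ(0, 0) = 0 and ∇φ(0, 0) = 0, and the discretization choices are assumptions for illustration rather than the method fixed by the disclosure.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def second_diff(n):
    """1-D second-difference operator of shape (n - 2, n)."""
    return sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))

def solve_phase(f_target, pitch, wavelength):
    """Find the phase whose discrete Hessian diagonal best matches the target
    curvature -2*pi / (lambda * f_p) at every SLM pixel (Equation 9)."""
    H, W = f_target.shape
    curvature = -2.0 * np.pi / (wavelength * f_target) * pitch**2
    Dxx = sp.kron(sp.identity(H), second_diff(W))  # row-wise second differences
    Dyy = sp.kron(second_diff(H), sp.identity(W))  # column-wise second differences
    A = sp.vstack([Dxx, Dyy])
    b = np.concatenate([curvature[:, 1:-1].ravel(), curvature[1:-1, :].ravel()])
    phi = lsqr(A, b)[0]        # minimum-norm least squares solution
    return phi.reshape(H, W)
```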
[0068] Having determined the k phase functions φi corresponding to the focal surfaces di, color images ci are determined for presentation on display 102 to reproduce the target focal stack. This focal stack is represented by a set of l retinal images r1, . . . , rl. First, a ray-traced model of retinal blur is described, and then this model is applied to evaluate the forward and adjoint operators required to solve the linear least squares problem representing optimized blending.
[0069] An optical ray is traced through the system under a geometric optics model where each ray originates at a point within the viewer's pupil. The ray then passes through the front and back of the eyepiece 106, the SLM 104, and then impinges on the display 102. At the eyepiece 106 surfaces, rays are refracted using the radius of curvature of the lens, its optical index, and the paraxial approximation. Equation 4 models light transport through the SLM 104. Each ray is assigned the color interpolated at its coordinate of intersection with the display 102. The locations on the display are denoted by (qx, qy) and the set of display pixel centers by Ωq. Note that any rays that miss the bounds of the eyepiece 106, SLM 104, or display 102 are culled (i.e., are assigned a black color).
[0070] To model retinal blur, rays that span the viewer's pupil are accumulated; the pupil is sampled using a Poisson distribution. In this manner, the viewer's eye is approximated as an ideal lens focused at a depth z, which changes depending on the viewer's accommodative state. For each chief ray (θx, θy) and depth z, a bundle of rays R_{θx,θy,z} from the Poisson-sampled pupil is summed. This produces an estimate of the retinal blur when focused at a depth z. These preceding steps define the forward operator r = A_{z,φ}(c), which accepts a phase function φ and color image c and predicts the perceived retinal image r when focused at a distance z.
[0071] For a fixed phase function φ and accommodation depth z, the forward operator A_{z,φ}(c) is linear in the color image c. The rendering operators A_{z,φi}(ci) combine additively, so the combined forward operator, representing viewing of a multiple-component focal surface display, is A_z(c1, . . . , ck) = Σ_i A_{z,φi}(ci). The forward renders can be concatenated for multiple accommodation depths z1, . . . , zl to estimate the reconstructed focal stack, with corresponding linear operator A = [A_{z1}; . . . ; A_{zl}]. The forward operator, for a given set of color images c, gives the focal stack r that would be produced on the retina; minimizing ‖Ac − r‖² gives the color images best approximating the desired focal stack. The transpose of A_{z,φ}, mapping retinal image samples to display pixels, can be evaluated with similar ray tracing operations, with accumulation in the color image c rather than the retinal image r. Finally, these forward and adjoint operators are applied with an iterative least squares solver.
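Because the combined forward operator is linear in the color images, it can be handed to an off-the-shelf iterative least squares solver. The Python sketch below wraps assumed ray-traced forward and adjoint callables, which are placeholders rather than functions defined in the disclosure, in a scipy LinearOperator and runs LSQR.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def solve_color_images(forward, adjoint, r_target, n_display_px, n_retina_px, k, l):
    """Solve min_c ||A c - r||^2 for the k color images, where `forward` maps
    k color images to the l retinal images of the focal stack and `adjoint`
    maps retinal samples back to display pixels (both assumed ray traced)."""
    def matvec(c):
        return forward(c.reshape(k, n_display_px)).ravel()

    def rmatvec(r):
        return adjoint(r.reshape(l, n_retina_px)).ravel()

    A = LinearOperator((l * n_retina_px, k * n_display_px),
                       matvec=matvec, rmatvec=rmatvec, dtype=np.float64)
    c = lsqr(A, r_target.ravel(), iter_lim=50)[0]
    return np.clip(c.reshape(k, n_display_px), 0.0, 1.0)  # display-range colors
```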
[0072] The phase functions, when executed by the SLM, cause the SLM to reproduce a focal pattern corresponding to each focal surface. This is achieved by the SLM adding phase delays to a wavefront of the light from the electronic display. The phase delays cause the shape of the wavefront to be bent and warped into the shape of each focal surface, thereby producing a focal pattern that conforms to the scene geometry. In one embodiment, the SLM time-multiplexes the adjustment of the wavefront for each focal surface in order to provide focus for each of the different focal surfaces in the virtual scene to a viewing user. For example, at a first time, the SLM adjusts the wavefront for a far depth; at a second time, the SLM adjusts the wavefront for an intermediate depth; and at a third time, the SLM adjusts the wavefront for a near depth. The speed at which the time-multiplexed adjustment of these three depths occurs is generally too fast for the human eye to notice and, therefore, the viewing user observes the virtual scene in focus, or at least as modeled and/or approximated.
Additional Configuration Information
[0073] The foregoing description of the embodiments has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the patent rights to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
[0074] The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the patent rights be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the patent rights.

Claims

What is claimed is:
1. A system comprising:
at least one processor;
an electronic display element configured to display a virtual scene; and memory including instructions that, when executed by the at least one processor, cause the at least one processor to:
segment the virtual scene into a set of focal surfaces in the virtual scene based on scene geometry obtained for the virtual scene, each focal surface being associated with a set of nearest scene points;
adjust a shape of each focal surface to minimize a distance between each scene point of the set of nearest scene points to the focal surface and the focal surface;
generate, for each focal surface, a phase function for adjusting a wavefront of light of the virtual scene consistent with the adjusted shape of the focal surface; and
an optics block including a spatially programmable focusing element configured to:
receive the wavefront of light of the virtual scene from the electronic display element;
adjust, for each focal surface of the set of focal surfaces, the wavefront based on the phase function associated with the focal surface; and
provide the adjusted wavefront of the light of the virtual scene for each focal surface to a user via an exit pupil of the system.
2. The system of claim 1, wherein the memory including the instructions that, when executed by the at least one processor, further causes the at least one processor to:
cluster scene points in the virtual scene, each scene point being associated with scene geometry data corresponding to a location of the scene point in the virtual scene; and determine a location for each of the set of focal surfaces based on the clustered scene points and associated scene geometry data.
3. The system of claim 1, wherein the memory further includes instructions that, when executed by the at least one processor, cause the at least one processor to:
determine a color image of the virtual scene to be displayed on the electronic display element for each phase function.
4. The system of claim 1, wherein the adjusted shape of each focal surface is a spatially varying piecewise-smooth curve.
5. The system of claim 1, wherein adjusting the shape of each focal surface to minimize the distance between each scene point of the set of nearest scene points to the focal surface and the focal surface includes:
applying a non-linear least squares optimization between each of the set of nearest scene points to the focal surface.
6. The system of claim 1, wherein the spatially programmable focusing element time-multiplexes adjustment of the wavefront for each focal surface of the set of focal surfaces.
7. The system of claim 1, wherein each phase function shifts the light from the electronic display element for each corresponding focal surface by introducing phase delays associated with the shape of the focal surface, and wherein each focal surface is at least one of a continuously-varying shape or a discontinuous surface.
8. A head mounted display (HMD) comprising:
at least one processor;
an electronic display element configured to display a virtual scene;
a spatial light modulator (SLM) configured to:
receive a wavefront of light from the electronic display element for the virtual scene;
adjust, at one or more first times, the wavefront of light from the electronic display element to provide focus for a first focal surface positioned at first depth in the virtual scene relative to an exit pupil of the HMD; adjust, at one or more second times, the wavefront of light from the electronic display element to provide focus for a second focal surface positioned at second depth in the virtual scene relative to the exit pupil of the HMD; and direct the adjusted wavefront of light providing focus for the first focal surface and the second focal surface to an exit pupil of the HMD, the wavefront of light for the first focal surface and the second focal surface combining.
9. The HMD of claim 8, wherein the SLM time-multiplexes adjustment of the wavefront for the first focal surface and the second focal surface between the one or more first times and the one or more second times.
10. The HMD of claim 8, wherein generating the first focal surface and the second focal surface comprises:
segmenting the virtual scene into the first focal surface and the second focal surface based on scene geometry obtained for the virtual scene, the first focal surface associated with a first set of nearest scene points and the second focal surface associated with a second set of nearest scene points;
adjusting a first shape of the first focal surface to minimize first distances between each scene point of the first set of nearest scene points to the first focal surface and the first focal surface;
adjusting a second shape of the second focal surface to minimize second distances between each scene point of the second set of nearest scene points to the second focal surface and the second focal surface; and
generating, for each of the first focal surface and the second focal surface, a phase function for adjusting a wavefront of light of the virtual scene consistent with the adjusted shape of the first focal surface and the second focal surface.
11. The HMD of claim 10, further comprising:
clustering scene points in the virtual scene, each scene point being associated with scene geometry data corresponding to a location of the scene point in the virtual scene; and
determining a first depth for the first focal surface and a second depth of the second focal surface based on the clustered scene points and associated scene geometry data.
12. The HMD of claim 10, further comprising:
determining a color image of the virtual scene to be displayed on the electronic display element for each phase function.
13. The HMD of claim 10, wherein adjusting the first shape of the first focal surface and the second shape of the second focal surface includes:
determining the first shape of the first focal surface by applying a nonlinear least squares optimization between each of the first set of nearest scene points to the first focal surface; and determining the second shape of the second focal surface by applying the non-linear least squares optimization between each of the second set of nearest scene points to the second focal surface.
14. The HMD of claim 10, wherein each phase function shifts the light from the electronic display element for each of the first focal surface and the second focal surface by introducing phase delays associated with the first shape of the first focal surface and the second shape of the second focal surface, and wherein each of the first focal surface and the second focal surface is at least one of a continuously-varying shape or a discontinuous surface.
15. A method comprising:
obtaining a virtual scene including scene geometry data identifying a depth associated with each scene point in the virtual scene;
segmenting the virtual scene into a set of focal surfaces based on the scene geometry data, each focal surface being associated with a set of nearest scene points; adjusting a shape of each focal surface to minimize a distance between each scene point of the set of nearest scene points to the focal surface and the focal surface, the adjusted shape being a spatially varying piecewise-smooth curved surface; and generating, for each focal surface, a phase function for a spatial light modulator (SLM) to adjust a wavefront of light of the virtual scene received from an electronic display element, the phase function, when applied by the SLM, introducing phase delays in the wavefront that reproduce a focal pattern corresponding to the adjusted shape of the focal surface.
16. The method of claim 15, wherein segmenting the virtual scene into a set of focal surfaces based on the scene geometry includes:
clustering the scene points in the virtual scene; and
determining a depth for each of the set of focal surfaces based on the clustered scene points and associated scene geometry data.
17. The method of claim 15, further comprising:
determining, for each phase function, a color image of the virtual scene to be displayed on the electronic display element for each focal surface.
18. The method of claim 15, wherein the SLM and the electronic display element are included in a head mounted display and the SLM time-multiplexes adjustment of the wavefront for each focal surface based on the corresponding phase function to cause a composite image of the virtual scene to be provided to a user viewing the virtual scene through an exit pupil of the head mounted display.
19. The method of claim 15, wherein each phase function shifts the light from the electronic display element for each focal surface by introducing phase delays associated with the shape of each focal surface, and wherein each focal surface is at least one of a continuously-varying shape or a discontinuous surface.
20. The method of claim 15, wherein adjusting the shape of each focal surface to minimize the distance between each scene point of the set of nearest scene points to the focal surface and the focal surface includes:
applying a non-linear least squares optimization between each of the set of nearest scene points to the focal surface.
21. A system comprising:
at least one processor;
an electronic display element configured to display a virtual scene; and memory including instructions that, when executed by the at least one processor, cause the at least one processor to:
segment the virtual scene into a set of focal surfaces in the virtual scene based on scene geometry obtained for the virtual scene, each focal surface being associated with a set of nearest scene points;
adjust a shape of each focal surface to minimize a distance between each scene point of the set of nearest scene points to the focal surface and the focal surface;
generate, for each focal surface, a phase function for adjusting a wavefront of light of the virtual scene consistent with the adjusted shape of the focal surface; and
an optics block including a spatially programmable focusing element configured to:
receive the wavefront of light of the virtual scene from the electronic display element;
adjust, for each focal surface of the set of focal surfaces, the wavefront based on the phase function associated with the focal surface; and
provide the adjusted wavefront of the light of the virtual scene for each focal surface to a user via an exit pupil of the system.
22. The system of claim 21, the memory including instructions that, when executed by the at least one processor, further cause the at least one processor to: cluster scene points in the virtual scene, each scene point being associated with scene geometry data corresponding to a location of the scene point in the virtual scene; and determine a location for each of the set of focal surfaces based on the clustered scene points and associated scene geometry data
and/or
wherein the memory further includes instructions that, when executed by the at least one processor, cause the at least one processor to:
determine a color image of the virtual scene to be displayed on the electronic display element for each phase function.
23. The system of claim 21 or 22, wherein the adjusted shape of each focal surface is a spatially varying piecewise-smooth curve
and/or
wherein adjusting the shape of each focal surface to minimize the distance between each scene point of the set of nearest scene points to the focal surface and the focal surface includes:
applying a non-linear least squares optimization between each of the set of nearest scene points to the focal surface.
24. The system of any of claims 21 to 23, wherein the spatially programmable focusing element time-multiplexes adjustment of the wavefront for each focal surface of the set of focal surfaces
and/or
wherein each phase function shifts the light from the electronic display element for each corresponding focal surface by introducing phase delays associated with the shape of the focal surface, and wherein each focal surface is at least one of a continuously-varying shape or a discontinuous surface.
25. A head mounted display (HMD) comprising:
at least one processor;
an electronic display element configured to display a virtual scene;
a spatial light modulator (SLM) configured to:
receive a wavefront of light from the electronic display element for the virtual scene; adjust, at one or more first times, the wavefront of light from the electronic display element to provide focus for a first focal surface positioned at first depth in the virtual scene relative to an exit pupil of the HMD;
adjust, at one or more second times, the wavefront of light from the electronic display element to provide focus for a second focal surface positioned at second depth in the virtual scene relative to the exit pupil of the HMD; and direct the adjusted wavefront of light providing focus for the first focal surface and the second focal surface to an exit pupil of the HMD, the wavefront of light for the first focal surface and the second focal surface combining.
26. The HMD of claim 25, wherein the SLM time-multiplexes adjustment of the wavefront for the first focal surface and the second focal surface between the one or more first times and the one or more second times.
27. The HMD of claim 25 or 26, wherein generating the first focal surface and the second focal surface comprises:
segmenting the virtual scene into the first focal surface and the second focal surface based on scene geometry obtained for the virtual scene, the first focal surface associated with a first set of nearest scene points and the second focal surface associated with a second set of nearest scene points;
adjusting a first shape of the first focal surface to minimize first distances between each scene point of the first set of nearest scene points to the first focal surface and the first focal surface;
adjusting a second shape of the second focal surface to minimize second distances between each scene point of the second set of nearest scene points to the second focal surface and the second focal surface; and
generating, for each of the first focal surface and the second focal surface, a phase function for adjusting a wavefront of light of the virtual scene consistent with the adjusted shape of the first focal surface and the second focal surface.
28. The HMD of claim 27, further comprising:
clustering scene points in the virtual scene, each scene point being associated with scene geometry data
corresponding to a location of the scene point in the virtual scene; and
determining a first depth for the first focal surface and a second depth of the second focal surface based on the clustered scene points and associated scene geometry data.
and/or further comprising:
determining a color image of the virtual scene to be displayed on the electronic display element for each phase function.
29. The HMD of claim 27 or 28, wherein adjusting the first shape of the first focal surface and the second shape of the second focal surface includes:
determining the first shape of the first focal surface by applying a nonlinear least squares optimization between each of the first set of nearest scene points to the first focal surface; and determining the second shape of the second focal surface by applying the non-linear least squares optimization between each of the second set of nearest scene points to the second focal surface.
30. The HMD of any of claims 27 to 29, wherein each phase function shifts the light from the electronic display element for each of the first focal surface and the second focal surface by introducing phase delays associated with the first shape of the first focal surface and the second shape of the second focal surface, and wherein each of the first focal surface and the second focal surface is at least one of a continuously-varying shape or a discontinuous surface.
31. A method comprising:
obtaining a virtual scene including scene geometry data identifying a depth associated with each scene point in the virtual scene;
segmenting the virtual scene into a set of focal surfaces based on the scene geometry data, each focal surface being associated with a set of nearest scene points;
adjusting a shape of each focal surface to minimize a distance between each scene point of the set of nearest scene points to the focal surface and the focal surface, the adjusted shape being a spatially varying piecewise-smooth curved surface; and generating, for each focal surface, a phase function for a spatial light modulator (SLM) to adjust a wavefront of light of the virtual scene received from an electronic display element, the phase function, when applied by the SLM, introducing phase delays in the wavefront that reproduce a focal pattern corresponding to the adjusted shape of the focal surface.
32. The method of claim 31,
wherein segmenting the virtual scene into a set of focal surfaces based on the scene geometry includes:
clustering the scene points in the virtual scene; and
determining a depth for each of the set of focal surfaces based on the clustered scene points and associated scene geometry data
and/or
wherein adjusting the shape of each focal surface to minimize the distance between each scene point of the set of nearest scene points to the focal surface and the focal surface includes:
applying a non-linear least squares optimization between each of the set of nearest scene points to the focal surface.
33. The method of claim 31 or 32, further comprising: determining, for each phase function, a color image of the virtual scene to be displayed on the electronic display element for each focal surface.
34. The method of any of claims 31 to 33,
wherein the SLM and the electronic display element are included in a head mounted display and the SLM time-multiplexes adjustment of the wavefront for each focal surface based on the corresponding phase function to cause a composite image of the virtual scene to be provided to a user viewing the virtual scene through an exit pupil of the head mounted display
and/or
wherein each phase function shifts the light from the electronic display element for each focal surface by introducing phase delays associated with the shape of each focal surface, and wherein each focal surface is at least one of a continuously-varying shape or a discontinuous surface.
PCT/US2018/012777 2017-01-19 2018-01-08 Focal surface display WO2018136251A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201880018483.4A CN110431470B (en) 2017-01-19 2018-01-08 Focal plane display
CN202210220513.8A CN114594603A (en) 2017-01-19 2018-01-08 Focal plane display
EP18151916.6A EP3351999A1 (en) 2017-01-19 2018-01-16 Focal surface display

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201762448303P 2017-01-19 2017-01-19
US62/448,303 2017-01-19
US201762508002P 2017-05-18 2017-05-18
US62/508,002 2017-05-18
US15/786,386 US10330936B2 (en) 2017-01-19 2017-10-17 Focal surface display
US15/786,386 2017-10-17

Publications (1)

Publication Number Publication Date
WO2018136251A1 true WO2018136251A1 (en) 2018-07-26

Family

ID=62840758

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/012777 WO2018136251A1 (en) 2017-01-19 2018-01-08 Focal surface display

Country Status (2)

Country Link
US (2) US10330936B2 (en)
WO (1) WO2018136251A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109756726A (en) * 2019-02-02 2019-05-14 京东方科技集团股份有限公司 Display device and its display methods, virtual reality device

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3265866B1 (en) 2015-03-05 2022-12-28 Magic Leap, Inc. Systems and methods for augmented reality
IL294134B2 (en) 2016-08-02 2023-10-01 Magic Leap Inc Fixed-distance virtual and augmented reality systems and methods
US10330936B2 (en) * 2017-01-19 2019-06-25 Facebook Technologies, Llc Focal surface display
US10812936B2 (en) 2017-01-23 2020-10-20 Magic Leap, Inc. Localization determination for mixed reality systems
CA3054617A1 (en) 2017-03-17 2018-09-20 Magic Leap, Inc. Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
KR20230149347A (en) 2017-03-17 2023-10-26 매직 립, 인코포레이티드 Mixed reality system with color virtual content warping and method of generating virtual content using same
EP3596703A1 (en) 2017-03-17 2020-01-22 Magic Leap, Inc. Mixed reality system with virtual content warping and method of generating virtual content using same
US10529117B2 (en) * 2018-04-16 2020-01-07 Facebook Technologies, Llc Systems and methods for rendering optical distortion effects
WO2020023383A1 (en) * 2018-07-23 2020-01-30 Magic Leap, Inc. Mixed reality system with virtual content warping and method of generating virtual content using same
US20200049994A1 (en) * 2018-08-13 2020-02-13 Google Llc Tilted focal plane for near-eye display system
US10989927B2 (en) * 2019-09-19 2021-04-27 Facebook Technologies, Llc Image frame synchronization in a near eye display

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050007603A1 (en) * 2002-01-24 2005-01-13 Yoel Arieli Spatial wavefront analysis and 3d measurement
WO2006102201A1 (en) * 2005-03-18 2006-09-28 Cdm Optics, Inc. Imaging systems with pixelated spatial light modulators
US20060232665A1 (en) * 2002-03-15 2006-10-19 7Tm Pharma A/S Materials and methods for simulating focal shifts in viewers using large depth of focus displays
US20080117289A1 (en) * 2004-08-06 2008-05-22 Schowengerdt Brian T Variable Fixation Viewing Distance Scanned Light Displays
US9292973B2 (en) * 2010-11-08 2016-03-22 Microsoft Technology Licensing, Llc Automatic variable virtual focus for augmented reality displays

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7486341B2 (en) * 2005-11-03 2009-02-03 University Of Central Florida Research Foundation, Inc. Head mounted display with eye accommodation having 3-D image producing system consisting of, for each eye, one single planar display screen, one single planar tunable focus LC micro-lens array, one single planar black mask and bias lens
CN103439801B (en) * 2013-08-22 2016-10-26 北京智谷睿拓技术服务有限公司 Sight protectio imaging device and method
WO2016105281A1 (en) 2014-12-26 2016-06-30 Koc University Near-to-eye display device
US10330936B2 (en) * 2017-01-19 2019-06-25 Facebook Technologies, Llc Focal surface display

Also Published As

Publication number Publication date
US20180203235A1 (en) 2018-07-19
US10330936B2 (en) 2019-06-25
US10558049B2 (en) 2020-02-11
US20190265484A1 (en) 2019-08-29

Similar Documents

Publication Publication Date Title
US10558049B2 (en) Focal surface display
US10983354B2 (en) Focus adjusting multiplanar head mounted display
US10937129B1 (en) Autofocus virtual reality headset
US10317680B1 (en) Optical aberration correction based on user eye position in head mounted displays
US11106276B2 (en) Focus adjusting headset
JP7369147B2 (en) Homography Transformation Matrix-Based Temperature Calibration for Vision Systems
JP6502586B2 (en) Virtual reality headset to focus
US10241569B2 (en) Focus adjustment method for a virtual reality headset
US9984507B2 (en) Eye tracking for mitigating vergence and accommodation conflicts
US11776509B2 (en) Image correction due to deformation of components of a viewing device
US10432908B1 (en) Real-time multifocal displays with gaze-contingent rendering and optimization
EP3179289B1 (en) Focus adjusting virtual reality headset
EP3351999A1 (en) Focal surface display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18741029

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18741029

Country of ref document: EP

Kind code of ref document: A1