CN117561470A - Free-form surface light field display for VR/AR head-mounted device - Google Patents


Publication number: CN117561470A
Application number: CN202280043964.7A
Authority: CN (China)
Prior art keywords: user, view, lenslets, field, image
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 基兰·康纳·凯利, 布赖恩·惠尔赖特, 耿莹
Current Assignee: Meta Platforms Technologies LLC
Original Assignee: Meta Platforms Technologies LLC
Application filed by Meta Platforms Technologies LLC
Priority claimed from US 17/847,730 (published as US20220413297A1) and PCT/US2022/035020 (published as WO2022272148A1)


Abstract

A head-mounted display for virtual reality imaging includes a pixel array with a plurality of pixels arranged on a two-dimensional surface, each pixel providing a plurality of light beams to form an image presented to a user. The head-mounted display further includes a first optical element for providing a central portion of a field of view of the image through an eyebox, which circumscribes a volume including a pupil of the user, and a second optical element for providing a peripheral portion of the field of view of the image through the eyebox. The second optical element includes a lenslet array for providing a segmented view of the peripheral portion of the field of view, the lenslet array including at least one of a free-form surface lenslet, a liquid crystal lenslet, a Fresnel lenslet, and a wafer lenslet. A system and method for using the head-mounted display are also provided.

Description

Free-form surface light field display for VR/AR head-mounted device
Technical Field
The present disclosure relates to headsets for use in virtual reality (VR) applications that include peripheral displays. More particularly, the present disclosure relates to a head-mounted device that provides a peripheral view using a free-form surface multi-lenslet array (MLA) in a light field display.
Background
In the field of virtual reality head-mounted devices, attention is focused on the binocular field of view (FOV) of the user, which extends about 60° upward, about 50° toward the nose and periphery, and about 75° downward. This is about 2.5 steradians (sr). Current VR devices cover a large portion of this binocular (or "stereoscopic") portion of the field of view, but provide little service to the peripheral fields (visible to only one eye) or the lower binocular field. To provide a fully immersive experience to the viewer, a larger portion of the peripheral view is desired. Human vision has a peripheral field of view greater than 200° in the horizontal direction and greater than 115° in the vertical direction (about 5.3 sr total). Current optical solutions cannot integrate such a peripheral field of view (FOV) into a head-mounted device that is compact and lightweight enough for a viewer to wear comfortably and walk around in.
Disclosure of Invention
According to a first aspect of the present disclosure, there is provided an apparatus for virtual reality imaging, the apparatus comprising: a pixel array, a first optical element, and a second optical element, the pixel array comprising a plurality of pixels arranged on a two-dimensional surface, each pixel providing a plurality of light beams to form an image provided to a user; the first optical element is configured to provide a central portion of a field of view of the image through an eyebox that circumscribes a volume including a pupil of the user; the second optical element is configured to provide a peripheral portion of the field of view of the image through the eyebox, wherein the second optical element includes a lenslet array for providing a segmented view of the peripheral portion of the field of view, the lenslet array including a plurality of lenslets selected from at least one of: free-form surface lenslets, liquid crystal lenslets, Fresnel lenslets, and wafer lenslets.
In some embodiments, the central portion and the peripheral portion of the field of view together comprise at least one steradian of the user's field of view.
In some embodiments, the peripheral portion of the field of view has a higher angular resolution, of fifteen arcminutes, in a region adjacent to the central portion of the field of view.
In some embodiments, the second optical element provides an angular resolution that attenuates axially (i.e., decreases away from the axis) in the peripheral portion of the field of view.
In some embodiments, the second optical element is a free-form surface lenslet array, and wherein the two-dimensional surface of the pixel array is planar.
In some embodiments, the pixel array includes a tapered display surrounding the first optical element, and the second optical element is a lenslet array surrounding the first optical element to provide a peripheral portion of the field of view of the image through the eyebox.
In some embodiments, the two-dimensional surface follows a one-dimensional curvature.
In some embodiments, the pixel array comprises one of a flexible organic light emitting diode array, a flexible liquid crystal display, or a light emitting diode array.
In some embodiments, the second optical element comprises a lenslet array having a plurality of lenslets arranged at a pitch greater than one-fourth of the lenslet focal length, and wherein the light beams from individual pixels pass through the eyebox at unique angles.
In some embodiments, the pixel array includes segmented portions of active pixels separated by gaps of inactive pixels, wherein two sub-portions of a peripheral portion of a field of view of images from two adjacent segmented portions of the plurality of active pixels form a continuous image on a retina of a user, and the light beam from the segmented portions of active pixels passes through the eyebox at an angle unique to each pixel based on a position of a pupil of the user.
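As a toy illustration of the unique-angle condition above, a paraxial sketch (all dimensions hypothetical, not taken from the patent) shows that each active pixel under a lenslet maps to its own beam angle through the eyebox:

```python
import math

def beam_angle_deg(pixel_x_mm, lenslet_center_mm, gap_mm):
    # Chief ray from a pixel through its lenslet center; with the display
    # one focal length behind the lenslet, the beam leaves collimated at
    # this angle (paraxial toy model, illustrative numbers only).
    return math.degrees(math.atan2(lenslet_center_mm - pixel_x_mm, gap_mm))

gap = 6.0     # assumed display-to-lenslet distance (= focal length), mm
pitch = 0.5   # assumed pixel pitch, mm
angles = [beam_angle_deg(pitch * k, 0.0, gap) for k in range(-3, 4)]

assert len(set(angles)) == len(angles)   # one unique angle per pixel
assert max(angles) > 0 > min(angles)     # the segment spans a cone of angles
```

A larger gap (longer focal length) compresses the angular spacing between pixels, which is why the larger lenslets described later yield a higher angular resolution.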
According to a second aspect of the present disclosure, there is provided a display comprising: a pixel array, a multi-lenslet array, a memory, and one or more processors, the pixel array being configured in a two-dimensional surface; the multi-lenslet array is configured to provide light to a retina of a user of the display from at least one of the following: free-form surface lenslets, liquid crystal lenslets, Fresnel lenslets, and wafer lenslets; the memory stores a plurality of instructions; the one or more processors are configured to execute the plurality of instructions to activate each of a plurality of segments in the pixel array to emit a light beam forming a portion of a peripheral field of view of the image, the respective portions providing different fields of view of the image, wherein the image is projected onto the retina of the user of the head-mounted display through an eyebox that defines a location of the pupil of the user.
In some embodiments, the portion of the peripheral field of view includes at least one steradian of the user's field of view at a resolution of at least fifteen arcminutes.
In some embodiments, the plurality of instructions further cause the one or more processors to select a portion of the peripheral field of view for each of two adjacent segments to form a continuous image in the retina of the user through the eyebox.
In some embodiments, the gap of inactive pixels between two adjacent segments is selected such that the light beams provided by each of the two adjacent segments in the pixel array pass through the eyebox to form a continuous, cross-talk-free image on the retina of the user.
In some embodiments, the plurality of instructions further includes instructions indicating a position of a pupil of the user within the eyebox.
In some embodiments, the display further comprises a sensor configured to provide positional information of the pupil of the user within the eyebox.
In some embodiments, the memory includes calibration instructions to select a peripheral field of view of the image and modify an angular mapping of the pixel array to the retina of the user based on the gaze direction of the user and the position of the pupil.
According to a third aspect of the present disclosure, there is provided a method for digitally calibrating a light field display, the method comprising: acquiring, with a camera, an image of a pixel array through a multi-lenslet array in a light field display of a head-mounted display device, the multi-lenslet array comprising a plurality of lenslets selected from at least one of: free-form surface lenslets, liquid crystal lenslets, Fresnel lenslets, and wafer lenslets, the image being associated with a pupil position of a user of the head-mounted display device; obtaining an angular map of the pixel array from the image of the pixel array, wherein the angular map includes the angles of a plurality of light beams from each active pixel in the pixel array; and storing the angular map in a memory of the head-mounted display device based on the pupil position.
In some embodiments, the method further comprises storing instructions in a memory of the head mounted display device that activate segments of the pixel array based on the angle map and the pupil position.
In some embodiments, storing the angle map in the memory of the head-mounted display device comprises: storing a correction factor in the angle map based on fit parameters of the head-mounted display device on the user.
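The calibration flow of this third aspect can be sketched as follows; `capture_through_mla` and `fit_beam_angles` are hypothetical stand-ins for the camera capture and image-analysis steps, which the disclosure does not specify:

```python
# Hedged sketch of per-pupil-position calibration: capture an image of the
# pixel array through the MLA, derive an angle map, optionally apply a
# per-user fit correction, and store the map keyed by pupil position.
calibration_store = {}

def calibrate(pupil_position, capture_through_mla, fit_beam_angles, fit_correction=None):
    image = capture_through_mla(pupil_position)     # camera image through the MLA
    angle_map = fit_beam_angles(image)              # {pixel index: beam angle, deg}
    if fit_correction is not None:                  # correction from fit parameters
        angle_map = {px: ang + fit_correction for px, ang in angle_map.items()}
    calibration_store[pupil_position] = angle_map   # keyed by pupil position
    return angle_map

# Toy stand-ins for the hardware steps:
cal = calibrate((0.0, 0.0),
                capture_through_mla=lambda pos: "frame",
                fit_beam_angles=lambda img: {0: -10.0, 1: 0.0, 2: 10.0},
                fit_correction=0.5)
assert calibration_store[(0.0, 0.0)][2] == 10.5
```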
According to a fourth aspect of the present disclosure, there is provided a method for aligning a head mounted display, the method comprising: a multi-lenslet array is disposed adjacent to a pixel array that is arranged in a two-dimensional surface, each pixel providing a plurality of light beams to the multi-lenslet array to form an image. The method further includes rotating the multi-lenslet array about its center until the image projection displays a complete view of the non-overlapping features, and translating the multi-lenslet array from its center along the plane of the multi-lenslet array until the image projection displays a complete view of the non-overlapping features.
According to other aspects of the present disclosure, a non-transitory computer-readable medium is provided that stores a plurality of instructions that, when executed by a processor in a computer, cause the computer to perform a method of using a head-mounted display. The method comprises the following steps: activating one or more pixels in a first pixel array configured to provide a light beam to form a central portion of a field of view of an image provided to a user of the head-mounted display; activating at least one segment of a plurality of segments in a second pixel array configured to provide a light beam to form a peripheral portion of the field of view of the image provided to the user of the head-mounted display; and selecting a portion of the peripheral field of view for each of two adjacent segments in the second pixel array to form a continuous image on the retina of the user through an eyebox that circumscribes a volume including a location of the pupil of the user of the head-mounted display.
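A minimal sketch of the segment-selection step above, assuming each peripheral segment reports a raw (start, end) range of view angles in degrees; the rule of clipping overlaps so that each view angle is drawn by exactly one segment is an illustrative assumption:

```python
# Trim overlapping sub-FOVs of adjacent peripheral segments so that the
# tiled pieces form one continuous image on the retina without doubling.
def select_sub_fovs(segment_fovs):
    trimmed = []
    for i, (start, end) in enumerate(segment_fovs):
        if i > 0 and start < trimmed[-1][1]:
            start = trimmed[-1][1]          # butt against the previous segment
        trimmed.append((start, end))
    return trimmed

# Three adjacent peripheral segments with overlapping frustums (degrees):
fovs = [(45, 65), (60, 80), (75, 95)]
assert select_sub_fovs(fovs) == [(45, 65), (65, 80), (80, 95)]
```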
It will be understood that any feature described herein as being suitable for incorporation into one or more aspects or one or more embodiments of the present disclosure is intended to be generic in any and all aspects and embodiments of the present disclosure. Other aspects of the disclosure will be appreciated by those skilled in the art from the description, claims and drawings of the disclosure. The foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the claims.
Drawings
Fig. 1A-1B illustrate an exemplary head-mounted display (HMD) according to some embodiments.
Fig. 2A-2C illustrate FOVs of human vision including a central portion, a peripheral left portion, and a peripheral right portion, according to some embodiments.
Fig. 3A and 3B illustrate an HMD with a peripheral light-field display to provide a peripheral FOV to a user, according to some embodiments.
Fig. 4 illustrates an MLA for collecting light from a light field display to provide a peripheral FOV to an HMD user, the MLA including varying angular resolution, according to some embodiments.
Fig. 5A-5C illustrate freeform lenslets for MLA for collecting light from a light field display to provide a peripheral FOV to an HMD user, according to some embodiments.
Fig. 6A-6B illustrate wafer lenslets for MLA for collecting light from a light field display to provide a peripheral FOV to an HMD user, according to some embodiments.
Fig. 7 illustrates a liquid crystal MLA for collecting light from a light field display to provide a peripheral FOV to an HMD user, according to some embodiments.
Fig. 8A-8B illustrate Fresnel lenslets for an MLA for collecting light from a light field display to provide a peripheral FOV to an HMD user, according to some embodiments.
Fig. 9 is a flowchart illustrating steps in a method for mechanically aligning a multi-lenslet array with a light field display, according to some embodiments.
Fig. 10 is a flow chart illustrating steps in a method for digitally calibrating a light field display, according to some embodiments.
Fig. 11 is a flowchart illustrating steps in a method for providing a peripheral field of view to a user of an HMD device having a light-field display, in accordance with some embodiments.
FIG. 12 is a block diagram illustrating an exemplary computer system in which the methods of FIGS. 9, 10, and 11 may be implemented, according to some embodiments.
In the drawings, elements having the same or similar reference numbers share the same or similar features unless explicitly stated otherwise.
Detailed Description
Various embodiments of a peripheral display are described herein. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the technology described herein can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects.
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Embodiments disclosed herein may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before being presented to a user, and may include, for example, virtual reality (VR), augmented reality (AR), mixed reality (MR), hybrid reality, or some combination and/or derivative thereof. The artificial reality content may include entirely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of the foregoing may be presented in a single channel or in multiple channels (e.g., stereoscopic video producing a three-dimensional effect for the viewer). Further, in some embodiments, the artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, used, for example, to create content in the artificial reality and/or otherwise used in the artificial reality (e.g., performing an activity in the artificial reality). The artificial reality system providing artificial reality content may be implemented on a variety of platforms, including a head-mounted display (HMD) connected to a host computer system, a stand-alone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
In some embodiments of the present disclosure, "near-eye" may be defined to include an optical element that is configured to be placed within 35 mm of the user's eye when the user is wearing a near-eye optical device such as a head-mounted display (HMD).
In virtual reality (VR) displays, options for expanding the field of view to cover the human field of view are limited. Some options include filling the periphery with sparse LEDs or bare display panels, but the resolution of these options is lacking even compared to the low resolution of the human eye at large angles. Other methods may include tiling (e.g., a "split lens" architecture). With enough tiles, this approach provides excellent coverage and good resolution, but it is bulky.
To address the above, the HMDs disclosed herein include a first optical element for providing a central FOV through an eyebox of the HMD. The HMD also includes a second optical element for providing a peripheral FOV through the eyebox. The second optical element includes an MLA to provide a segmented view of the peripheral FOV. The MLA may be a free-form surface MLA, a liquid crystal MLA, a Fresnel MLA, or a wafer MLA, or any series combination of these MLAs. The MLA is located close to the display. Any two adjacent lenslets in the MLA form a continuous image on the retina of the user from two adjacent segmented portions of the active pixels in the display.
To provide a wide peripheral view in AR/VR applications, some embodiments use a light field display that has segmented portions of active pixels separated by gaps of inactive pixels. Light field displays are compact and provide a wide eyebox and FOV, but may sacrifice angular resolution. Thus, while the first optical element may desirably have a high resolution for the central FOV, the second optical element may allow a lower angular resolution for the wider peripheral FOV provided by the MLA and the light field display. In some embodiments, the resolution of the second optical element (as determined by the MLA) may gradually decrease from a boundary region adjacent to the first optical element toward the edge of the headset. Thus, some embodiments may include an MLA with a transition region proximate the first optical element, where the lenslets closest to the first optical element have a higher numerical aperture (e.g., a greater angular resolution).
In the present disclosure, some embodiments include a flat peripheral light field display having free-form surface lenslets tailored to the requirements of the periphery. Some embodiments include a curved peripheral light field display, extending from the viewer's outer eyebrow to the lower cheek, having free-form surface lenslets and a tapered display surrounding the central optic. This single display fills the entire (or substantially the entire) peripheral FOV.
Fig. 1A illustrates an exemplary HMD 100 according to some embodiments. For example, HMD 100 may be a virtual reality (VR) HMD. HMD 100 includes a front panel 101, a visor 103, and a strap 105. The front panel 101 holds and protects a display for the user, the visor 103 fits the HMD 100 on the user, and the strap 105 secures the HMD 100 snugly against the user's head. An audio device 107 provides sound to the user.
In some embodiments, HMD 100 may include processor circuitry 112 and memory circuitry 122. The memory circuit 122 may store instructions that, when executed by the processor circuit 112, cause the HMD 100 to perform the methods disclosed herein. Further, the HMD 100 may include a communication module 118. The communication module 118 may include radio-frequency software and hardware configured to wirelessly couple the processor 112 and memory 122 with an external network or some other device. Accordingly, the communication module 118 may include radio antennas, transceivers, and sensors, as well as digital processing circuitry for signal processing according to any of a number of wireless protocols, such as Wi-Fi, Bluetooth, near-field communication (NFC), and the like. In addition, the communication module 118 may also communicate with other input tools and accessories (e.g., a handle, joystick, mouse, wireless pointer, etc.) that cooperate with the HMD 100.
Fig. 1B shows a partial view of a left side view 102 of the HMD 100 corresponding to the left eye 60 of the user. HMD 100 may include two mirror-image copies of left side view 102, each with the same or similar elements as shown in left side view 102. The choice of the left side in Fig. 1B is arbitrary, and all of the components therein may appear on the right side of the HMD 100. HMD 100 includes pixel arrays 120-1 and 120-2 (hereinafter collectively referred to as "pixel arrays 120"). The pixel arrays 120 include a plurality of pixels arranged in a two-dimensional surface (e.g., a planar surface oriented in one direction as in pixel array 120-1, and one or two planar surfaces oriented in another direction as in pixel array 120-2). Each pixel in pixel arrays 120 provides a plurality of light beams 123-1 and 123-2 (hereinafter collectively referred to as "display light beams 123") for forming an image that is provided to the user. The optical element 130 is configured to provide a central portion of the FOV of the image through the eyebox 121. The central portion of the FOV of the image may include light beam 125-1. The optical element 153 provides a peripheral portion of the FOV of the image through the eyebox 121, which includes light beam 125-2. Light beams 125-1 and 125-2 will be collectively referred to hereinafter as "eyebox light beams 125". The eye 60 includes a pupil 61 for receiving at least some of the eyebox light beams 125 and a retina 63 onto which images are projected. The front panel 101 and the communication module 118 are also shown (see Fig. 1A).
In some embodiments, optical elements 130 and 153 may include one or more optical elements, such as diffractive elements (gratings and prisms), refractive elements (lenses), guiding elements (e.g., planar waveguides and/or optical fibers), and polarizing elements (e.g., polarizers, half-wave plates, quarter-wave plates, polarization rotators, Pancharatnam-Berry phase (PBP) lenses, etc.). In some embodiments, optical elements 130 and 153 may include one or more passive elements in combination with one or more active elements (e.g., liquid crystal (LC) variable waveplates or variable polarizers).
In some embodiments, pixel array 120-2 may be divided into a plurality of active pixel segments, and optical element 153 may include a multi-lenslet array, with each lenslet directing light beams 123-2 from at least one pixel segment to the eyebox 121. In some embodiments, optical element 153 may comprise a free-form surface multi-lenslet array. Accordingly, light beams 125-2 provide a segmented view of the peripheral FOV that forms a continuous projection of the image periphery on the retina 63, through the eyebox 121 and pupil 61, by overlapping FOV frustums from different active pixel segments. In some embodiments, processor 112 activates each segment in pixel array 120-2 to emit light beams 123-2, which form a portion of the peripheral FOV. The respective portions of the peripheral FOV from the respective segments may include different perspectives of the image.
In some embodiments, HMD 100 includes one or more sensors 160 to determine the position of pupil 61 within eyebox 121. The sensor 160 then sends information to the processor 112 regarding the position of the pupil 61 within the eyebox 121. Accordingly, the processor 112 may determine the gaze direction of the user based on the position of the pupil 61 within the eyebox 121. In some embodiments, the memory 122 includes instructions for the processor 112 to select a peripheral field of view of the image based on the gaze direction of the observer and the position of the pupil 61 within the eyebox 121. In some embodiments, the memory 122 includes display calibration instructions to change the manner in which the virtual image is mapped to the pixel array 120 based on pupil position and/or gaze direction.
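The pupil-dependent selection described above can be sketched as a lookup of the stored angle map calibrated nearest to the tracked pupil position; the data layout below is an assumption for illustration, not from the disclosure:

```python
# Pick the calibration angle map whose recorded pupil position is nearest
# to the pupil position reported by sensor 160 (toy nearest-neighbor lookup).
def nearest_angle_map(pupil_xy, calibrated):
    """calibrated: {(x, y) pupil position: angle map} from factory calibration."""
    key = min(calibrated,
              key=lambda p: (p[0] - pupil_xy[0]) ** 2 + (p[1] - pupil_xy[1]) ** 2)
    return calibrated[key]

maps = {(-2.0, 0.0): "map_left", (0.0, 0.0): "map_center", (2.0, 0.0): "map_right"}
assert nearest_angle_map((1.4, 0.3), maps) == "map_right"
```

A real implementation might instead interpolate between neighboring maps, but a nearest-neighbor lookup illustrates how the pixel-to-retina mapping can change with pupil position.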
Fig. 2A-2C show graphs 200A, 200B, and 200C of a field of view (FOV) 250 for human vision. According to some embodiments, FOV 250 includes a central portion 205, a peripheral left portion 210L, and a peripheral right portion 210R (hereinafter collectively referred to as "peripheral portion 210") measured in accordance with angular aperture 201. The angular aperture 201 is azimuthally measured with respect to a direction (corresponding to 0 °) perpendicular to and pointing straight out from the face of the user.
Fig. 2A shows a graph 200A in which left eye portion 210L and right eye portion 210R vary according to angular aperture 201 (in degrees). This represents the field of view of a person without eye rotation. The peripheral portion 210 may have some overlap in the binocular portion 215, the binocular portion 215 being included within the lower peripheral FOV. The central portion 205 includes a combined FOV from both eyes within 45 ° of normal, i.e., the central portion 205 includes a binocular FOV. According to chart 200A, peripheral portion 210 may include approximately 60% of total FOV 250.
Fig. 2B shows an approximate performance chart 200B of human vision over the entire FOV 250, with the abscissa (e.g., X-axis) representing angular aperture 201 and the ordinate (e.g., Y-axis) representing angular resolution 202, expressed in arcminutes (arcmin). Performance chart 200B assumes that the eye may casually rotate up to 30° away from center. Thus, the "foveal" resolution of 1 arcminute extends up to 30° in the radial direction, and human eye performance steadily drops from 30° to a resolution of about 1 degree at a 90° angular aperture (e.g., near the edge of FOV 250). The performance of the human eye in the central portion 205 may drop to about 6 arcminutes at its edges. Also shown is the peripheral portion 210 (see Fig. 2A).
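The performance curve of chart 200B can be approximated piecewise from the two endpoints given above (1 arcminute out to 30°, about 1 degree, i.e., 60 arcminutes, at 90°); the linear ramp between them is an assumption for illustration:

```python
# Piecewise-linear stand-in for the visual-performance curve of chart 200B:
# flat at 1 arcmin out to 30 deg (allowing for eye rotation), then ramping
# to ~60 arcmin (1 degree) at a 90 deg angular aperture.
def eye_resolution_arcmin(aperture_deg):
    if aperture_deg <= 30:
        return 1.0
    return 1.0 + (min(aperture_deg, 90) - 30) * (60.0 - 1.0) / (90 - 30)

assert eye_resolution_arcmin(0) == 1.0
assert eye_resolution_arcmin(30) == 1.0
assert eye_resolution_arcmin(90) == 60.0
```

Such a curve gives a display designer a budget: peripheral lenslets only need to match the (much coarser) eye resolution at their field angle.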
Fig. 2C shows a performance chart 200C comparing different optical configurations of an HMD against human visual performance. As in chart 200B, angular resolution 202 is plotted against angular aperture 201. The split-lens configuration 230 captures the peripheral portion 210 at a relatively high resolution. The dashed line represents the design-dependent performance range of the split lens. The split-lens configuration 230 is compromised by the form factor of the HMD application (including the weight of the lenses used, etc.).
The light field display configuration 220 is capable of maintaining performance comparable to normal visual performance over substantially the entire span of the peripheral portion 210. In some embodiments, the resolution of light field configuration 220 may be limited by the number of pixels per inch (PPI) in the pixel array (e.g., pixel array 120) and the focal length of the lenslets (e.g., optical element 153) in the multi-lenslet array.
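The PPI/focal-length limit noted above can be estimated from the angle a single pixel subtends at its lenslet, roughly atan(pixel pitch / focal length); the 1000 PPI and 5 mm values below are illustrative assumptions, not figures from the patent:

```python
import math

# Angular size of one pixel seen through a lenslet: the display cannot
# resolve finer than this, regardless of the eye's acuity at that angle.
def pixel_limit_arcmin(ppi, focal_mm):
    pitch_mm = 25.4 / ppi                      # pixel pitch from pixels per inch
    return math.degrees(math.atan(pitch_mm / focal_mm)) * 60.0

limit = pixel_limit_arcmin(ppi=1000, focal_mm=5.0)
assert 15.0 < limit < 20.0                     # ~17.5 arcmin per pixel
```

Note how a longer focal length (larger lenslet) improves the limit while a denser panel (higher PPI) does the same, matching the trade-off the varying-resolution MLA exploits.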
Fig. 3A and 3B illustrate an HMD 300 having peripheral light-field displays 350L and 350R (collectively, "light-field displays 350"). In some embodiments, the light field display includes a lenslet array with miniature lenses of about 1mm in size to provide an adjusted focus point to a user viewing the display. In some embodiments, the light field display described in this disclosure may include a lenslet array with microlenses having a size of about 3mm to 6mm, which may not necessarily provide accommodation focus to the eye.
Light field display 350L includes pixel array 320L and lenslet array 353L for directing peripheral display light emitted by pixel array 320L to the peripheral FOV of the left eye of the user. Light field display 350R includes pixel array 320R and lenslet array 353R for directing peripheral display light emitted by pixel array 320R to the peripheral FOV of the right eye of the user of HMD 300. For example, pixel arrays 320L and 320R (collectively "pixel arrays 320") may be OLED displays or LCDs. Lenslet arrays 353L and 353R (collectively "lenslet arrays 353") may be flat lenslet arrays configured with square tessellation, hexagonal tessellation, and/or hexapolar tessellation. An advantage of hexapolar tessellation is that the number of unique lens prescriptions can be reduced due to rotational symmetry (e.g., a lenslet array with 9 rings requires only 9 unique prescriptions). A main display (not shown) of HMD 300 is disposed behind central optics 330L and 330R.
Fig. 4 illustrates an MLA 453 for collecting light from a display to provide a peripheral FOV to an HMD user, the MLA including varying angular resolution, according to some embodiments. MLA 453 includes a series of lenslet rows 455-1, 455-2, and 455-3 (hereinafter collectively referred to as "lenslet rows 455"). Lenslet row 455-1 is closest to the first optical element providing the central FOV and includes slightly larger lenslets 401, which have a larger numerical aperture (NA) and/or a longer focal length (following from their larger lateral dimensions) and thus provide a higher angular resolution. Lenslet row 455-2 is farther from the first optical element, covering a peripheral portion of the FOV at larger angles from the HMD user's line of sight, and includes slightly smaller lenslets 401 that have a smaller NA and/or shorter focal length (following from their smaller lateral dimensions), and thus provide a lower angular resolution than the lenslets in lenslet row 455-1; this region is less critical to the HMD user (see peripheral portion 210). Lenslet row 455-3 is located on the outer edge of MLA 453, covering the area of the peripheral FOV where angular resolution matters least to the HMD user (see above). The lenslets 401 in lenslet row 455-3 are smaller still, and therefore have a smaller NA and/or shorter focal length, so the angular resolution is further reduced.
Fig. 5A-5C illustrate free-form surface lenslets 501 for an MLA 553 for collecting input light 523 from a display 520 to provide a peripheral FOV through an eyebox 521 of the HMD via output light 525, according to some embodiments.
FIG. 5A shows a distortion map 500A between display pixel coordinates [x, y] in the plane of display 520 and output angular coordinates [tan θx, tan θy] imaged through the freeform (e.g., non-axisymmetric) surface of lenslet 501, such that for each pixel (or for most pixels) the output light 525 passes through the eyebox 521.
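A paraxial stand-in for the distortion map of FIG. 5A can be written as a linear pixel-to-tangent-angle mapping; the freeform surface of lenslet 501 would replace this linear map with a fitted non-axisymmetric polynomial. All names and geometry values here are illustrative assumptions, not from the disclosure.

```python
def pixel_to_angle(x_px, y_px, lens_cx_px, lens_cy_px, focal_mm, pitch_mm):
    """Map display pixel coordinates [x, y] to output angular coordinates
    [tan(theta_x), tan(theta_y)] through a thin lenslet whose optical
    centre sits over pixel (lens_cx_px, lens_cy_px).
    Paraxial sketch of distortion map 500A."""
    tan_tx = (x_px - lens_cx_px) * pitch_mm / focal_mm
    tan_ty = (y_px - lens_cy_px) * pitch_mm / focal_mm
    return tan_tx, tan_ty

# A pixel 20 px right and 20 px below the lenslet centre,
# 10 um pitch, 5 mm focal length -> approximately (0.04, -0.04).
angles = pixel_to_angle(120, 80, 100, 100, focal_mm=5.0, pitch_mm=0.01)
```

In a real calibration the per-lenslet map would be measured (or ray-traced) rather than computed from this idealized geometry.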
FIG. 5B shows a three-way distortion map 500B in eye pupil coordinates [x_eye, y_eye], in a plane that includes the eyebox 521.
Fig. 5C shows a main display 520-1 that generates the central portion of the FOV, in which light 523-1 is directed through a central lens 530 and onto the eyebox 521 as light 525-1. A peripheral display 520-2 generates light 523-2 forming the peripheral portion of the FOV for the user, and an MLA 553 formed by a plurality of freeform lenslets 501 directs light 525-2 into the eyebox 521. Displays 520-1 and 520-2 will hereinafter be collectively referred to as "displays 520." Light beams 523-1 and 523-2 will hereinafter be collectively referred to as "input light 523." Light beams 525-1 and 525-2 will hereinafter be collectively referred to as "output light 525."
Fig. 6A-6B illustrate a wafer lenslet 601 of an MLA 653 for collecting light 623 from a display 620 to provide a peripheral FOV to an eye-ward region of an HMD, according to some embodiments. Pixel array 620 is adjacent to the MLA, of which only one lenslet 601 is shown for illustrative purposes. Light field display 620 includes a plurality of pixels 622 that produce light beams 623 directed toward wafer lenslet 601. In some embodiments, the distance 603 between pixel array 620 and wafer lenslet 601 may be approximately equal to the focal length of wafer lenslet 601, so the outgoing light beams 625 may be collimated in different directions depending on the specific location of the originating pixel 621. Wafer lenslet 601 includes two polarization-sensitive surfaces 650-1 and 650-2 (hereinafter collectively referred to as "polarization-sensitive surfaces 650"). Accordingly, a light beam 623 may be transmitted through the partially reflective surface 650-1 (e.g., due to a first polarization state imparted by a polarizer or by the emissive properties of pixel array 620) and reflected from surface 650-2, which includes a quarter-wave plate (QWP, for switching the polarization of incident light) and a reflective polarizer. Upon reflection, the polarization state of light beam 623 is reversed (e.g., by a double pass through the quarter-wave plate between the polarization-sensitive surfaces 650), such that the beam is reflected from surface 650-1 and then transmitted out as exit beam 625 at surface 650-2. The net effect of wafer lenslet 601 is to increase the optical path of beam 623 within the lenslet, thereby increasing optical power without increasing the width, weight, or volume of the optic.
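The folded path described above can be tallied step by step: a beam that transmits through surface 650-1, reflects at 650-2, reflects again at 650-1, and exits through 650-2 crosses the cavity three times, so a cavity of thickness t supplies roughly 3t of optical path. A qualitative sketch; the surface labels follow the figure, and the thickness value is illustrative.

```python
def folded_optical_path(cavity_thickness_mm):
    """Tally the cavity traversals of a beam in a wafer lenslet:
    transmit 650-1 -> reflect at 650-2 -> reflect at 650-1 -> exit 650-2."""
    traversals = [
        "650-1 to 650-2 (transmitted through partial mirror 650-1)",
        "650-2 to 650-1 (reflected at reflective polarizer 650-2)",
        "650-1 to 650-2 (reflected at 650-1, exits through 650-2)",
    ]
    return len(traversals) * cavity_thickness_mm, traversals

# A 4 mm cavity yields roughly 12 mm of optical path.
path_mm, hops = folded_optical_path(4.0)
```

This is why the text can claim increased optical power without increased volume: the focusing budget of a conventional element roughly three times as thick fits in the folded cavity.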
Fig. 7 illustrates liquid crystal (LC) lenslets 701-1, 701-2, and 701-3 (hereinafter collectively referred to as "LC lenslets 701") of an MLA 753 that collect light 723 from a light field display 720 to provide a peripheral FOV through an eyebox 721 of the HMD, according to some embodiments. Each LC lenslet 701 includes a liquid crystal layer. In some embodiments, light beams 723 may have a linear polarization provided by the light emitters in display 720. The LC molecules within LC lenslets 701 are reoriented according to the electric field provided by electrodes, which gives each of the lenslets 701 its optical power. As a result, the outgoing beams 725 converge on the eyebox 721. Light beams 723 and 725 may be linearly polarized along the alignment direction of the LC material in LC lenslets 701 by placing a linear polarizer before or after LC lenslets 701.
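The optical power such a voltage-controlled LC layer must supply can be expressed as the standard parabolic phase profile of a thin lens, with the electrode pattern driven to approximate that profile modulo 2π. This is a textbook thin-lens sketch under an assumed wavelength and focal length, not a profile taken from the disclosure.

```python
import math

def lc_lens_phase(r_mm, focal_mm, wavelength_nm=550.0):
    """Phase retardation (radians) an LC lenslet must impart at radius r
    to focus at focal_mm: phi(r) = pi * r^2 / (lambda * f)."""
    wavelength_mm = wavelength_nm * 1e-6
    return math.pi * r_mm ** 2 / (wavelength_mm * focal_mm)

def wrapped_phase(r_mm, focal_mm):
    """The drive electronics would wrap the profile into [0, 2*pi)
    zone by zone, Fresnel-lens fashion."""
    return lc_lens_phase(r_mm, focal_mm) % (2 * math.pi)
```

The quadratic growth of the unwrapped phase is what makes zone-wise wrapping (and hence a thin LC cell) practical for the lenslet diameters involved.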
Fig. 8A-8B illustrate fresnel lenslets 801 forming an MLA 853 for collecting light 823 from a light field display 820 to provide a peripheral FOV through an eyebox 821 of the HMD with a light beam 825, according to some embodiments.
FIG. 8B shows an MLA 853 that directs beam 823-2 into beam 825-2. In some embodiments, the first surface of the MLA 853 may comprise a Fresnel lenslet 801 and the second surface of the MLA may comprise a refractive surface, which may comprise a spherical element or a freeform surface element. The light beams 823-1 and 823-2 will be collectively referred to as "input light 823" hereinafter. The light beams 825-1 and 825-2 will be collectively referred to hereinafter as "output light 825".
Fig. 9 is a flowchart illustrating steps in a method 900 for mechanically aligning a multi-lenslet array with a light field display, according to some embodiments. According to some embodiments, a multi-lenslet array and a light field display may be included in an HMD device disclosed herein (e.g., HMD devices 100 and 300). The HMD may include a light-field display (e.g., pixel array and light-field displays 120, 320, 350, 520, 620, 720, and 820) having a plurality of pixels configured in a two-dimensional surface, each pixel providing a plurality of light beams (e.g., light beams 123, 125, 523, 525, 623, 625, 723, 725, 823, and 825) to form an image that passes through an eyebox (e.g., eyeboxes 121, 521, 721, and 821) of the HMD that circumscribes a volume that includes a pupil of a user. The HMD may also include optical elements (e.g., optical elements 130 and 330) configured to provide a central portion of the FOV of the image through the eyebox. In some embodiments, the HMD device further includes optical elements (e.g., optical element 153 and MLAs 353 and 453) configured to provide a peripheral portion of the field of view of the image through the eyebox. Methods consistent with the present disclosure may include at least one or more of the steps in method 900 performed in a different order, simultaneously, quasi-simultaneously, or overlapping in time.
Step 902 includes: the multi-lenslet array is disposed adjacent to an array of pixels arranged in a two-dimensional surface, each pixel providing a plurality of light beams to the multi-lenslet array to form an image.
Step 904 includes: the multi-lenslet array is rotated about its center until the image projection displays a complete view without overlapping features.
Step 906 includes: the multi-lenslet array is translated from its center along the plane of the multi-lenslet array until the image projection displays a complete view without overlapping features.
Fig. 10 is a flowchart illustrating steps in a method 1000 for digitally calibrating a light field display, according to some embodiments. Consistent with the present disclosure, a multi-lenslet array in a light field display may be included in an HMD device disclosed herein (e.g., HMD devices 100 and 300). The HMD may include a pixel array (e.g., pixel array and light-field displays 120, 320, 350, 520, 620, 720, and 820) having a plurality of pixels configured in a two-dimensional surface, each pixel providing a plurality of light beams (e.g., light beams 123, 125, 523, 525, 623, 625, 723, 725, 823, and 825) to form an image provided to a user. The HMD device may also include optical elements (e.g., optical elements 130 and 330) configured to provide a central portion of the FOV of the image passing through the eyebox (e.g., eyeboxes 121, 521, 721, and 821) that circumscribes a volume that includes the user's pupil. In some embodiments, the HMD device further includes optical elements (e.g., optical element 153 and MLAs 353 and 453) configured to provide a peripheral portion of the field of view of the image through the eyebox. In some embodiments, digital calibration of a light field display as disclosed herein may include creating a plurality of angle maps of a pixel array, each angle map associated with a pupil position and/or gaze direction of a user, and storing the angle maps in a memory of the HMD device (see memory 122). Methods consistent with the present disclosure may include at least one or more of the steps in method 1000 performed in a different order, simultaneously, quasi-simultaneously, or overlapping in time.
Step 1002 includes: acquiring, with a camera, an image of a pixel array through a multi-lenslet array in a light field display of a head-mounted display device, the multi-lenslet array comprising a plurality of lenslets, the lenslets selected from at least one of: free-form surface lenslets, liquid crystal lenslets, fresnel lenslets, and wafer lenslets, the image being associated with a pupil position of a user of the head-mounted display device.
Step 1004 includes: an angle map of the pixel array is obtained from an image of the pixel array, wherein the angle map includes angles of a plurality of light beams from each active pixel in the pixel array.
Step 1006 includes: based on the pupil position, the angle map is stored in a memory of the head mounted display device. In some embodiments, step 1006 includes: instructions are stored in a memory of the head mounted display device that activate segments of the pixel array based on the angle map and pupil position. In some embodiments, step 1006 includes storing the correction factor in the angle map based on the fit parameters of the head mounted display device on the user.
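The calibration of steps 1002-1006 amounts to a lookup table keyed by pupil position: for each measured pupil position, store the per-pixel beam-angle map (optionally adjusted by a fit correction), then retrieve the nearest map at run time from the eye-tracker reading. The names and the scalar fit correction below are hypothetical simplifications of the stored correction factors.

```python
calibration = {}  # pupil position (x_mm, y_mm) -> {pixel_id: (tan_x, tan_y)}

def store_angle_map(pupil_pos, angle_map, fit_correction=1.0):
    """Step 1006 sketch: store the angle map keyed by pupil position,
    applying a scalar stand-in for the headset-fit correction factor."""
    calibration[pupil_pos] = {
        pix: (tx * fit_correction, ty * fit_correction)
        for pix, (tx, ty) in angle_map.items()
    }

def angle_map_for(pupil_pos):
    """Retrieve the stored map nearest the current eye-tracker reading."""
    key = min(calibration, key=lambda p: (p[0] - pupil_pos[0]) ** 2
                                         + (p[1] - pupil_pos[1]) ** 2)
    return calibration[key]

# Two calibration points; at run time the nearest one is used.
store_angle_map((0.0, 0.0), {0: (0.10, 0.00)})
store_angle_map((2.0, 0.0), {0: (0.20, 0.00)}, fit_correction=1.1)
```

A denser calibration grid, or interpolation between neighboring maps, would replace the nearest-neighbour lookup in a production headset.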
Fig. 11 is a flowchart illustrating steps in a method 1100 for providing a peripheral field of view to a user of an HMD device having a light-field display, in accordance with some embodiments. According to some embodiments, an HMD device (e.g., HMD devices 100 and 300) may include a multi-lenslet array and a light-field display as disclosed herein. The HMD may include a pixel array (e.g., pixel array and light-field displays 120, 320, 350, 520, 620, 720, and 820) having a plurality of pixels configured in a two-dimensional surface, each pixel providing a plurality of light beams (e.g., light beams 123, 125, 523, 525, 623, 625, 723, 725, 823, and 825) to form an image provided to a user. The HMD device may also include optical elements (e.g., optical elements 130 and 330) configured to provide a central portion of the FOV of the image passing through the eyebox (e.g., eyeboxes 121, 521, 621, 721, and 821) that circumscribes a volume that includes the user's pupil. In some embodiments, the HMD device further includes optical elements (e.g., optical element 153 and MLAs 353 and 453) configured to provide a peripheral portion of the field of view of the image through the eyebox. Methods consistent with the present disclosure may include at least one or more of the steps in method 1100 performed in a different order, simultaneously, quasi-simultaneously, or overlapping in time.
Step 1102 includes: one or more pixels in a first array of pixels configured to provide a beam of light to form a central portion of a FOV of an image provided to a user of the HMD are activated.
Step 1104 includes: at least one segment of a plurality of segments in a second array of pixels configured to provide a light beam to form a peripheral portion of a FOV of an image provided to a user of the HMD is activated.
Step 1106 includes: a portion of the peripheral FOV is selected for each of two adjacent segments in the second pixel array to form a continuous image in the user's retina through an eyebox that circumscribes a volume that includes a location of the pupil of the user of the HMD.
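The selection in step 1106 can be checked with a small tangent-space tiling test: compute the angular interval each active segment covers through its lenslet and verify that adjacent intervals abut, so the selected sub-portions form a continuous image through the eyebox. The paraxial geometry and all values are illustrative assumptions.

```python
def segment_portions(segments, focal_mm, pitch_mm):
    """segments: (left_px, right_px, lens_center_px) per active segment.
    Returns the tangent-space interval each segment covers through its
    lenslet (paraxial sketch)."""
    return [((left - cx) * pitch_mm / focal_mm,
             (right - cx) * pitch_mm / focal_mm)
            for left, right, cx in segments]

def tiles_continuously(portions, tol=1e-9):
    """True when adjacent intervals abut: no gap and no overlapping
    features between the portions of two adjacent segments."""
    return all(abs(prev[1] - nxt[0]) < tol
               for prev, nxt in zip(portions, portions[1:]))

# Two adjacent segments whose portions abut at tan(theta) = 0.5:
ok = tiles_continuously(segment_portions(
    [(0, 10, 5), (12, 22, 7)], focal_mm=10.0, pitch_mm=1.0))
```

Shifting the second lenslet center by one pixel opens a gap between the intervals, which would appear as a seam in the peripheral image.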
Hardware overview
Fig. 12 is a block diagram illustrating an exemplary computer system 1200 with which the HMD device 100 of Fig. 1A, and the methods 900, 1000, and 1100, can be implemented. In certain aspects, computer system 1200 may be implemented using hardware or a combination of software and hardware, either in a dedicated server, integrated into another entity, or distributed across multiple entities. Computer system 1200 may include a desktop computer, a laptop computer, a tablet, a smartphone, a feature phone, or a server computer, among others. A server computer may be located remotely in a data center or locally.
Computer system 1200 includes a bus 1208 or other communication mechanism for communicating information, and a processor 1202 (e.g., processor 112) coupled with bus 1208 for processing information. By way of example, computer system 1200 may be implemented with one or more processors 1202. The processor 1202 may be a general purpose microprocessor, microcontroller, digital signal processor (digital signal processor, DSP), application specific integrated circuit (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA), programmable logic device (programmable logic device, PLD), controller, state machine, gating logic, discrete hardware components, or any other suitable entity that can perform information operations or other processing of information.
In addition to hardware, computer system 1200 may include code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them, stored in an included memory 1204 (e.g., memory 122), such as a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory (PROM), an erasable PROM (EPROM), registers, a hard disk, a removable disk, a CD-ROM, a DVD, or any other suitable storage device, coupled to bus 1208 for storing information and instructions to be executed by processor 1202. The processor 1202 and the memory 1204 can be supplemented by, or incorporated in, special purpose logic circuitry.
The instructions may be stored in the memory 1204 and implemented in one or more computer program products (e.g., one or more modules of computer program instructions encoded on a computer-readable medium) for execution by, or to control the operation of, the computer system 1200, according to any method well known to those skilled in the art, including, but not limited to, computer languages such as data-oriented languages (e.g., SQL, dBase), system languages (e.g., C, Objective-C, C++, assembly), architectural languages (e.g., Java, .NET), and application languages (e.g., PHP, Ruby, Perl, Python). The instructions may also be implemented in computer languages such as array languages, aspect-oriented languages, assembly languages, authoring languages, command line interface languages, compiled languages, concurrent languages, curly-bracket languages, dataflow languages, data-structured languages, declarative languages, esoteric languages, extension languages, fourth-generation languages, functional languages, interactive mode languages, interpreted languages, iterative languages, list-based languages, little languages, logic-based languages, machine languages, macro languages, metaprogramming languages, multiparadigm languages, numerical analysis languages, non-English-based languages, object-oriented class-based languages, object-oriented prototype-based languages, off-side rule languages, procedural languages, reflective languages, rule-based languages, scripting languages, stack-based languages, synchronous languages, syntax handling languages, visual languages, Wirth languages, and XML-based languages. Memory 1204 may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1202.
The computer programs discussed herein do not necessarily correspond to files in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
Computer system 1200 also includes a data storage device 1206 (e.g., a magnetic disk or an optical disk) coupled to bus 1208 for storing information and instructions. Computer system 1200 may be coupled to various devices via input/output module 1210. The input/output module 1210 may be any input/output module. Exemplary input/output modules 1210 include data ports (e.g., USB ports). The input/output module 1210 is configured to connect to a communications module 1212. Exemplary communications modules 1212 include networking interface cards, such as Ethernet cards and modems. In certain aspects, the input/output module 1210 is configured to connect to a plurality of devices, such as an input device 1214 and/or an output device 1216. Exemplary input devices 1214 include a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to computer system 1200. Other kinds of input devices 1214 may also be used to provide for interaction with the user, such as tactile input devices, visual input devices, audio input devices, or brain-computer interface devices. For example, feedback provided to the user may be any form of sensory feedback, such as visual, auditory, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, tactile, or brain-wave input. Exemplary output devices 1216 include display devices, such as a liquid crystal display (LCD) monitor, for displaying information to the user.
According to one aspect of the disclosure, the HMD 100 may be implemented, at least in part, using the computer system 1200 in response to the processor 1202 executing one or more sequences of one or more instructions contained in the memory 1204. Such instructions may be read into memory 1204 from another machine-readable medium, such as data storage device 1206. Execution of the sequences of instructions contained in main memory 1204 causes processor 1202 to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in memory 1204. In alternative aspects, hard-wired circuitry may be used in place of or in combination with software instructions to implement the various aspects of the disclosure. Thus, aspects of the present disclosure are not limited to any specific combination of hardware circuitry and software.
Aspects of the subject matter described in this specification can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the subject matter described in this specification), or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). The communication network (e.g., network 150) can include, for example, any one or more of a LAN, a WAN, the Internet, and the like. Further, the communication network can include, but is not limited to, for example, any one or more of the following network topologies: a bus network, a star network, a ring network, a mesh network, a star-bus network, a tree or hierarchical network, or the like. The communications modules can be, for example, modems or Ethernet cards.
Computer system 1200 may include clients and servers. The client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The computer system 1200 may be, for example, but is not limited to, a desktop computer, a laptop computer, or a tablet computer. Computer system 1200 may also be embedded in another device such as, but not limited to, a mobile phone, a PDA, a mobile audio player, a global positioning system (Global Positioning System, GPS) receiver, a video game console, and/or a television set top box.
The term "machine-readable storage medium" or "computer-readable medium" as used herein refers to any medium or media that participates in providing instructions to processor 1202 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, and transmission media. Non-volatile media includes optical or magnetic disks, such as data storage device 1206. Volatile media includes dynamic memory, such as memory 1204. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that form bus 1208. Common forms of machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. The machine-readable storage medium may be a machine-readable storage device, a machine-readable storage substrate, a storage device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
To illustrate the interchangeability of hardware and software, various illustrative blocks, modules, components, methods, operations, instructions, and algorithms have been described in general terms in terms of their functionality. Whether such functionality is implemented as hardware, software, or a combination of hardware and software depends upon the particular application and design constraints imposed on the overall system. Those skilled in the art can implement the described functionality in varying ways for each particular application.
As used herein, the phrase "at least one of," preceding a series of items, with the term "and" or "or" separating any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase "at least one of" does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases "at least one of A, B, and C" or "at least one of A, B, or C" each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The term "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Phrases such as "an aspect," "the aspect," "another aspect," "some aspects," "one or more aspects," "an implementation," "the implementation," "another implementation," "some implementations," "one or more implementations," "an example," "this example," "another example," "some examples," "one or more examples," "a configuration," "the configuration," "another configuration," "some configurations," "one or more configurations," "the subject technology," "the disclosure," "the present disclosure," and other variations thereof are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology, or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or to one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as "an aspect" or "some aspects" may refer to one or more aspects and vice versa, and this applies similarly to the other foregoing phrases.
Reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The term "some" refers to one or more. Underlined and/or italicized headings and subheadings are used for convenience only, do not limit the subject technology, and are not referred to in connection with the interpretation of the description of the subject technology. Relational terms such as first and second and the like may be used to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. All structural and functional equivalents to the elements of the various configurations described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the subject technology. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the above description.
While this specification contains many specifics, these should not be construed as limitations on the scope of what may be described, but rather as descriptions of specific embodiments of the subject matter. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a described combination can in some cases be excised from the combination, and the described combination may be directed to a subcombination or variation of a subcombination.
The subject matter of the present specification has been described with respect to particular aspects, but other aspects may be practiced and within the scope of the appended claims. For example, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. The actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the aspects described above should not be understood as requiring such separation in all aspects, and it should be understood that the described program components and systems can generally be integrated in a single software product or packaged into multiple software products.
The Title, Background, Brief Description of the Drawings, Abstract, and Drawings are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. It is to be understood that they will not be used to limit the scope or meaning of the claims. In addition, in the Detailed Description, it can be seen that the description provides illustrative examples for the purpose of streamlining the disclosure, and that various features are grouped together in various embodiments. This method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The claims are hereby incorporated into the Detailed Description, with each claim standing on its own as separately claimed subject matter.
The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language of the claims and to encompass all legal equivalents. Notwithstanding, none of the claims are intended to encompass subject matter that fails to satisfy the requirements of the applicable patent law, nor should they be interpreted in such a way.

Claims (15)

1. An apparatus for virtual reality imaging, the apparatus comprising:
a pixel array comprising a plurality of pixels arranged in a two-dimensional surface, each pixel providing a plurality of light beams to form an image provided to a user;
a first optical element configured to provide a central portion of a field of view of the image through an eyebox, the eyebox confining a volume including a pupil of the user; and
a second optical element configured to provide a peripheral portion of the field of view of the image through the eyebox, wherein the second optical element comprises a lenslet array for providing a segmented view of the peripheral portion of the field of view, the lenslet array comprising a plurality of lenslets selected from at least one of: free-form surface lenslets, liquid crystal lenslets, Fresnel lenslets, and wafer lenslets.
2. The apparatus of claim 1, wherein the central portion and the peripheral portion of the field of view comprise at least one steradian of a field of view of the user.
3. The apparatus of claim 1 or 2, wherein the peripheral portion of the field of view has a higher angular resolution, of fifteen arc minutes, in a region adjacent to the central portion of the field of view.
4. The apparatus of any preceding claim, wherein the second optical element provides an axially attenuated angular resolution in the peripheral portion of the field of view.
5. The apparatus of any preceding claim, wherein the second optical element is a free-form surface lenslet array, and wherein the two-dimensional surface of the pixel array is planar; or preferably wherein,
the pixel array includes a tapered display surrounding the first optical element, an
The second optical element is a lenslet array surrounding the first optical element to provide the peripheral portion of the field of view of the image through the eyebox.
6. The apparatus of any one of claims 1 to 4, wherein the two-dimensional surface follows a one-dimensional curvature.
7. The apparatus of any preceding claim, wherein the pixel array comprises one of a flexible organic light-emitting diode array, a flexible liquid crystal display, or a light-emitting diode array.
8. The apparatus of any preceding claim, wherein the second optical element comprises a lenslet array having a plurality of lenslets arranged at a pitch greater than one quarter of the focal length of the lenslet, and wherein the plurality of light beams from a single pixel pass through the eyebox at unique angles.
9. The apparatus of any preceding claim, wherein the array of pixels comprises segmented portions of active pixels separated by gaps of inactive pixels, wherein two sub-portions of the peripheral portion of the field of view of the image from two adjacent segmented portions of a plurality of active pixels form a continuous image on the retina of the user, and light beams from the segmented portions of active pixels pass through the eyebox at an angle unique to each pixel based on the position of the pupil of the user.
10. A display, the display comprising:
a pixel array configured in a two-dimensional surface;
a multi-lenslet array configured to provide light to a retina of a user of the display, the multi-lenslet array comprising at least one of: free-form surface lenslets, liquid crystal lenslets, Fresnel lenslets, and wafer lenslets;
a memory storing a plurality of instructions; and
one or more processors configured to execute the plurality of instructions to activate each of a plurality of segments in the pixel array to emit a light beam forming a portion of a peripheral field of view of an image, each portion providing a different field of view of the image, wherein the image is projected onto a retina of a user of a head mounted display through an eyebox, the eyebox defining a location of a pupil of the user.
11. The display of claim 10, wherein the portion of the peripheral field of view comprises at least one steradian of a user field of view at a resolution of at least fifteen arc minutes; and/or preferably wherein the plurality of instructions further cause the one or more processors to select a portion of the peripheral field of view for each of two adjacent segments to form a continuous image on the retina of the user through the eyebox.
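As a back-of-envelope check of the figures in claim 11 (my own arithmetic, not stated in the patent): at fifteen arc-minute resolution, one steradian of field of view corresponds to roughly fifty thousand resolvable spots under the small-angle approximation.

```python
import math

res_rad = (15 / 60) * math.pi / 180        # 15 arc minutes in radians
spots_per_steradian = 1.0 / res_rad ** 2   # small-angle approximation
# roughly 5.3e4 resolvable spots per steradian of field of view
```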
12. The display of claim 10 or 11, wherein a gap of inactive pixels between two adjacent segments is selected such that the light beam provided by each of the two adjacent segments in the pixel array passes through the eyebox to form a continuous, crosstalk-free image on the retina of the user; and/or preferably, wherein the plurality of instructions further comprises instructions indicating a position of the user's pupil within the eyebox.
13. The display of any one of claims 10 to 12, further comprising a sensor configured to provide positional information of the user's pupil within the eyebox; and/or preferably wherein the memory includes calibration instructions to select the peripheral field of view of the image and modify an angular mapping of the pixel array to the retina of the user based on the gaze direction of the user and the position of the pupil.
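One way the sensor data of claim 13 could drive the calibration selection is sketched below; this is a hypothetical illustration, and the names `AngleMap` and `nearest_map` are my own, not from the patent. The idea is to keep one calibrated angle map per measured pupil position and pick the one nearest the pupil position reported by the eye-tracking sensor.

```python
from dataclasses import dataclass

@dataclass
class AngleMap:
    pupil_position: tuple  # (x, y) in mm within the eyebox
    beam_angles: dict      # pixel index -> exit angle (radians)

def nearest_map(angle_maps, pupil_xy):
    """Pick the stored calibration measured closest to the pupil
    position reported by the eye-tracking sensor."""
    def dist2(m):
        dx = m.pupil_position[0] - pupil_xy[0]
        dy = m.pupil_position[1] - pupil_xy[1]
        return dx * dx + dy * dy
    return min(angle_maps, key=dist2)
```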
14. A method, the method comprising:
acquiring, with a camera, an image of a pixel array through a multi-lenslet array in a light field display of a head-mounted display device, the multi-lenslet array comprising a plurality of lenslets including at least one of: free-form surface lenslets, liquid crystal lenslets, Fresnel lenslets, and wafer lenslets, the image being associated with a pupil position of a user of the head-mounted display device;
obtaining an angle map of the pixel array from the image of the pixel array, wherein the angle map includes angles of a plurality of light beams from each active pixel in the pixel array; and
storing the angle map in a memory of the head-mounted display device based on the pupil position.
15. The method of claim 14, further comprising storing instructions in the memory of the head-mounted display device that activate segments of the pixel array based on the angle map and the pupil position; and/or preferably, wherein storing the angle map in the memory of the head-mounted display device comprises storing a correction factor in the angle map based on fit parameters of the head-mounted display device on the user.
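The capture-and-store flow of claims 14 and 15 might be sketched as follows. The centroid extraction and storage layout are my own assumptions; only the overall flow (camera image of the pixel array → per-pixel angle map → storage indexed by pupil position, with a fit correction) follows the claims.

```python
import math

def build_angle_map(spot_centroids, lens_centers, focal_length):
    """Turn measured spot centroids (one per active pixel, extracted
    from the camera image) into beam angles:
    theta = atan((centroid - lens_center) / focal_length)."""
    return {idx: math.atan2(c - lens_centers[idx], focal_length)
            for idx, c in spot_centroids.items()}

def store_calibration(storage, pupil_position, angle_map, fit_correction=0.0):
    """Index the angle map by pupil position; the per-user fit
    correction of claim 15 is folded in as an angular offset."""
    storage[pupil_position] = {k: v + fit_correction
                               for k, v in angle_map.items()}
```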
CN202280043964.7A 2021-06-24 2022-06-24 Free-form surface light field display for VR/AR head-mounted device Pending CN117561470A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/214,606 2021-06-24
US17/847,730 US20220413297A1 (en) 2021-06-24 2022-06-23 Free-form light field displays for vr/ar headsets
US17/847,730 2022-06-23
PCT/US2022/035020 WO2022272148A1 (en) 2021-06-24 2022-06-24 Free-form light field displays for vr/ar headsets

Publications (1)

Publication Number Publication Date
CN117561470A true CN117561470A (en) 2024-02-13

Family

ID=89813372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280043964.7A Pending CN117561470A (en) 2021-06-24 2022-06-24 Free-form surface light field display for VR/AR head-mounted device

Country Status (1)

Country Link
CN (1) CN117561470A (en)

Similar Documents

Publication Publication Date Title
US20230055420A1 (en) Virtual and augmented reality systems and methods having improved diffractive grating structures
US9915826B2 (en) Virtual and augmented reality systems and methods having improved diffractive grating structures
US10088689B2 (en) Light engine with lenticular microlenslet arrays
US9927614B2 (en) Augmented reality display system with variable focus
US10108014B2 (en) Waveguide display with multiple focal depths
US20230050117A1 (en) Polarization compensation for wire grid polarizer of head-mounted display system
US20230375842A1 (en) Polarization-multiplexed optics for head-mounted display systems
US11740473B2 (en) Flexible displays for VR/AR headsets
US20230273434A1 (en) Multilayer flat lens for ultra-high resolution phase delay and wavefront reshaping
CN117561470A (en) Free-form surface light field display for VR/AR head-mounted device
US11493772B1 (en) Peripheral light field display
CN117396793A (en) Flexible display for VR/AR head-mounted device
EP4359854A1 (en) Free-form light field displays for vr/ar headsets
TW202307517A (en) Flexible displays for vr/ar headsets
TW202331349A (en) Light guide combiner with increased field of view (fov) and eyebox efficiency for enhanced reality applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination