WO2019164745A1 - Multifocal optics for light field displays - Google Patents

Multifocal optics for light field displays

Info

Publication number
WO2019164745A1
WO2019164745A1 (PCT/US2019/018018)
Authority
WO
WIPO (PCT)
Prior art keywords
light
lens
display
multifocal
array
Prior art date
Application number
PCT/US2019/018018
Other languages
French (fr)
Inventor
Jukka-Tapani Makinen
Kai Ojala
Original Assignee
Pcms Holdings, Inc.
Priority date
Filing date
Publication date
Application filed by Pcms Holdings, Inc. filed Critical Pcms Holdings, Inc.
Publication of WO2019164745A1 publication Critical patent/WO2019164745A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/322Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using varifocal lenses or mirrors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B26/00Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B26/02Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the intensity of light
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00Optical elements other than lenses
    • G02B5/001Axicons, waxicons, reflaxicons

Definitions

  • the human mind perceives and determines depths of observed objects in part by receiving signals from muscles used to orient each eye.
  • the brain associates the relative angular orientations of the eyes with the determined depths of focus.
  • Correct focus cues give rise to a natural blur on objects outside of an observed focal plane and a natural dynamic parallax effect.
  • One type of 3D display capable of providing correct focus cues uses volumetric display techniques that can produce 3D images in true 3D space.
  • Each “voxel” of a 3D image is located physically at the spatial position where it is supposed to be and reflects or emits light from that position toward the observers to form a real image in the eyes of viewers.
  • the main problems with 3D volumetric displays are their low resolution, large physical size, and high manufacturing cost. These issues make them too cumbersome to use outside of special cases, e.g., product displays, museums, shows, etc.
  • Another type of 3D display device capable of providing correct retinal focus cues is the holographic display.
  • Holographic displays aim to reconstruct whole light wavefronts scattered from objects in natural settings.
  • SLM Spatial Light Modulator
  • LF display systems are designed to create so-called light fields that represent light rays travelling in space to all directions. LF systems aim to control light emissions both in spatial and angular domains, unlike the conventional stereoscopic 3D displays that can basically only control the spatial domain with higher pixel densities.
  • In the first approach, parallax is created across each individual eye of the viewer, producing the correct retinal blur corresponding to the 3D location of the object being viewed. This can be done by presenting multiple views per single eye.
  • the second approach is a multi-focal-plane approach, in which an object’s image is projected to an appropriate focal plane corresponding to its 3D location.
  • Many light field displays use one of these two approaches.
  • the first approach is usually more suitable for a head mounted single-user device as the locations of eye pupils are much easier to determine and the eyes are closer to the display making it possible to generate the required dense field of light rays.
  • the second approach is better suited for displays that are located at a distance from the viewer(s) and could be used without headgear.
  • the SMV condition can be met by reducing the interval between two views at the correct viewing distance to a smaller value than the size of the eye pupil.
  • the human pupil is generally estimated to be about 4 mm in diameter. If ambient light levels are high (e.g., in sunlight), the diameter can be as small as 1.5 mm and in dark conditions as large as 8 mm.
  • the maximum angular density that can be achieved with SMV displays is limited by diffraction and there is an inverse relationship between spatial resolution (pixel size) and angular resolution. Diffraction increases the angular spread of a light beam passing through an aperture and this effect may be taken into account in the design of very high density SMV displays.
  • Head-mounted devices occupy less space than goggleless solutions, which also means that they can be made with smaller components and less material, making them relatively low cost.
  • Because head-mounted VR goggles and smart glasses are single-user devices, they do not allow shared experiences as naturally as goggleless solutions.
  • Volumetric 3D displays take space from all three spatial directions and require a lot of physical material, making these systems heavy, expensive to manufacture, and difficult to transport. Due to the heavy use of materials, volumetric displays also tend to have small “windows” and a limited field-of-view (FOV).
  • FOV field-of view
  • Screen-based 3D displays typically have one large but flat component, the screen, and a system that projects the image(s) over free space from a distance. These systems can be made more compact for transportation and they also cover much larger FOVs than, for example, volumetric displays. These systems are complex and expensive as they require projector sub-assemblies and, for example, accurate alignment between the different parts, making them best suited for professional use cases.
  • Flat form-factor 3D displays may require a lot of space in two spatial directions, but as the 3rd direction is only virtual, they are relatively easy to transport to and assemble in different environments. As the devices are flat, at least some optical components used in them are more likely to be manufactured in sheet or roll format making them relatively low cost in large volumes.
  • Systems and methods set forth herein may control multifocal optical features of a multiview display structure to create multiple focal planes.
  • a method of operating a multiview display as a 3D light field display comprising controlling a spatial light modulator to adjust apertures in front of projector cells of the multiview display by selectively blocking parts of projected beams to create focal planes at different distances from the multiview display.
  • a method of operating a multiview display as a 3D light field display comprising operating a plurality of light field pixels of a multiview display to create a plurality of focal planes at a plurality of distances from the multiview display, wherein a first light field pixel comprises a multiview projector cell of the multiview display, a region of a spatial light modulator (SLM) of the multiview display, and a focusing lens of the multiview display.
  • SLM spatial light modulator
  • each light field pixel comprises: a multiview projector cell of the display structure; a portion of a spatial light modulator (SLM) of the display structure; and a focusing lens of the display structure.
  • SLM spatial light modulator
  • systems and methods set forth herein may be used for a multi-focal plane light field display.
  • a method of operating a multi-focal-plane light field display may comprise emitting light from a plurality of light emitting elements; collimating the emitted light from the plurality of light emitting elements into at least one light beam with at least one collimating optical element; operating a spatial light modulator to create a time synchronized aperture for the at least one light beam, the aperture controlling which portion of the at least one light beam is passed through the spatial light modulator; and focusing the controlled portion of the at least one light beam passed through the spatial light modulator with at least one optical element used as a focusing lens.
  • An example apparatus in accordance with some embodiments may include: an array of light- emitting elements configured to emit light; an array of multifocal lenses configured to focus the light from the array of light-emitting elements, each of the multifocal lenses having two or more portions, different portions being configured to focus the light to different distances; and a spatial light modulator configured to control the portion of each lens upon which the light falls.
  • Some embodiments of the example apparatus may further include an array of collimating optical elements between the array of light-emitting elements and the array of multifocal lenses, the array of collimating optical elements being configured to collimate the light from the array of light-emitting elements.
  • the portions of each multifocal lens are substantially annular in shape, each annular portion being configured to focus the light to a different respective focal distance.
  • the spatial light modulator includes, for each of the multifocal lenses, a plurality of concentric annular pixels, each pixel corresponding to a respective portion of the corresponding multifocal lens.
  • the apparatus includes a plurality of light field pixels, each light field pixel includes one of the multifocal lenses and a corresponding plurality of the light- emitting elements.
  • Some embodiments of the example apparatus may further include opaque boundary structures between adjacent light field pixels.
  • the spatial light modulator is a liquid crystal display (LCD).
  • LCD liquid crystal display
  • the multifocal lenses are axicon lenses.
  • the multifocal lenses are hyperbolic lenses.
  • a first portion of the two or more portions of each lens is a diffractive lens, and a second portion of the two or more portions of each lens is a refractive lens.
  • An example method in accordance with some embodiments may include: emitting light from a light-emitting element of an array of light-emitting elements toward a corresponding multifocal lens in an array of multifocal lenses; operating a spatial light modulator between the array of light-emitting elements and the array of multifocal lenses to control which portion of the multifocal lens the light is incident on; and focusing the light by the multifocal lens to a focal distance associated with the portion of the multifocal lens the light is incident on.
  • Some embodiments of the example method may further include collimating the emitted light before it reaches the spatial light modulator.
  • Some embodiments of the example method may further include displaying at least one voxel by focusing light from a plurality of multifocal lenses onto a common focal spot.
  • operating the spatial light modulator may include generating a substantially annular aperture such that the light is incident on a substantially annular portion of the multifocal lens.
  • a size of the substantially annular aperture is selected to determine the focal distance.
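  • As a hedged illustration of the embodiments above, the sketch below models one light field pixel as a set of annular multifocal-lens portions, each associated with a focal distance, and picks the SLM aperture that routes the beam through the portion whose focal distance best matches a requested voxel depth. The class names, zone radii, and focal distances are hypothetical and not taken from the disclosure.

```python
# Hypothetical sketch (names and values are not from the disclosure): one light
# field pixel modeled as annular multifocal-lens portions, each associated with
# a focal distance; the SLM opens only the annulus matching a requested depth.
from dataclasses import dataclass

@dataclass
class AnnularZone:
    inner_mm: float          # inner radius of the lens portion
    outer_mm: float          # outer radius of the lens portion
    focal_distance_m: float  # distance at which this portion focuses the beam

class LightFieldPixel:
    """One projector cell: light emitters + SLM region + multifocal focusing lens."""
    def __init__(self, zones):
        self.zones = zones

    def zone_for_depth(self, target_depth_m):
        # Choose the lens portion whose focal distance is closest to the voxel depth.
        return min(self.zones, key=lambda z: abs(z.focal_distance_m - target_depth_m))

    def slm_aperture(self, target_depth_m):
        # The SLM passes only the selected annulus and blocks the rest of the beam.
        z = self.zone_for_depth(target_depth_m)
        return {"open_annulus_mm": (z.inner_mm, z.outer_mm),
                "focal_plane_m": z.focal_distance_m}

# Example: a lens with three annular portions creating three focal planes.
pixel = LightFieldPixel([
    AnnularZone(0.00, 0.15, 0.5),
    AnnularZone(0.15, 0.30, 1.0),
    AnnularZone(0.30, 0.50, 2.0),
])
print(pixel.slm_aperture(target_depth_m=0.8))  # opens the 0.15-0.30 mm annulus
```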
  • FIG. 1A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented.
  • FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to some embodiments.
  • WTRU wireless transmit/receive unit
  • FIGs. 2A-2B are schematic plan views illustrating example focal distances and eye convergence angles according to some embodiments.
  • FIGs. 3A-3C are schematic plan views illustrating example eye focus angles (FA) and convergence angles (CA) together with pixel clusters on a flat Light Field (LF) display according to some embodiments.
  • FA eye focus angles
  • CA convergence angles
  • FIGs. 4A-4C depict schematic perspective views illustrating example levels of occlusion of Light Fields directed towards a pupil according to some embodiments.
  • FIG. 5 is a schematic plan view illustrating example light emission angles directed towards respective viewers according to some embodiments.
  • FIG. 6A depicts schematic plan views illustrating an example of increasing beam divergence caused by geometric factors according to some embodiments.
  • FIG. 6B depicts schematic plan views illustrating an example of increasing beam divergence caused by diffraction according to some embodiments.
  • FIG. 7 depicts schematic plan views illustrating three example lenses having various magnification ratios according to some embodiments.
  • FIGs. 8A-8D are schematic plan views illustrating example geometric and diffraction effects for one or two extended sources imaged to a fixed distance with a fixed magnification according to some embodiments.
  • FIG. 9A is a schematic plan view illustrating an example functioning of a standard lens according to some embodiments.
  • FIG. 9B is a schematic plan view illustrating an example functioning of a multifocal axicon lens according to some embodiments.
  • FIG. 10 is a schematic plan view illustrating an example ray trace picture of an imaging system with two spherical plano-convex lenses according to some embodiments.
  • FIG. 11 is a schematic plan view illustrating example focal points resulting from different apertures for imaging a point source according to some embodiments.
  • FIG. 12 is a schematic plan view illustrating example extended sources imaged with spherical lens pairs and apertures according to some embodiments.
  • FIG. 13 is a schematic plan view illustrating an example LF display structure according to some embodiments.
  • FIGs. 14A-14B are schematic plan views illustrating example display optical function scenarios if more than one light emitter is activated and more than one LF display pixel is used simultaneously according to some embodiments.
  • FIG. 15 is a schematic plan view illustrating an exemplary viewing geometry available with a 3D LF display structure according to some embodiments.
  • FIGs. 16A-16B are schematic plan views illustrating exemplary viewing geometry scenarios according to some embodiments.
  • FIG. 17 is a schematic perspective view illustrating an exemplary embodiment of a curved 3D LF display according to some embodiments.
  • FIG. 18 is a schematic plan view illustrating an example display structure using LF pixels according to some embodiments.
  • FIG. 19 is a schematic plan view illustrating an example horizontal viewing geometry available with the exemplary display structure of FIG. 18 according to some embodiments.
  • FIG. 20 is a schematic plan view illustrating exemplary simulation ray trace images according to some embodiments.
  • FIG. 21 is a flowchart showing an example process for operating a spatial light modulator to generate a focal plane image according to some embodiments.
  • a wireless transmit/receive unit may be used as a multiview display, a light field display, or a display structure in embodiments described herein.
  • FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented.
  • the communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
  • the communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth.
  • the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal FDMA
  • SC-FDMA single-carrier FDMA
  • ZT UW DTS-s OFDM zero-tail unique-word DFT-Spread OFDM
  • UW-OFDM unique word OFDM
  • FBMC filter bank multicarrier
  • the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
  • WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • UE user equipment
  • PDA personal digital assistant
  • HMD head-mounted display
  • a vehicle a drone
  • the communications systems 100 may also include a base station 114a and/or a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112.
  • the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
  • BSC base station controller
  • RNC radio network controller
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum.
  • a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell.
  • MIMO multiple-input multiple output
  • beamforming may be used to transmit and/or receive signals in desired spatial directions.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • RAT radio access technology
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
  • E-UTRA Evolved UMTS Terrestrial Radio Access
  • LTE Long Term Evolution
  • LTE-A LTE-Advanced
  • LTE-A Pro LTE-Advanced Pro
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
  • DC dual connectivity
  • the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • IEEE 802.11, i.e., Wireless Fidelity (WiFi)
  • WiMAX Worldwide Interoperability for Microwave Access
  • CDMA2000, CDMA2000 1X, CDMA2000 EV-DO Code Division Multiple Access 2000
  • IS-2000 Interim Standard 2000
  • IS-856 Interim Standard 856
  • GSM Global System for Mobile communications
  • EDGE Enhanced Data rates for GSM Evolution
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
  • the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR, etc.) to establish a picocell or femtocell.
  • a cellular-based RAT e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the CN 106/115.
  • the RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like.
  • QoS quality of service
  • the CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT.
  • the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
  • the CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • POTS plain old telephone service
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
  • Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
  • the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • FIG. 1B is a system diagram illustrating an example WTRU 102.
  • the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.
  • GPS global positioning system
  • the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • a base station e.g., the base station 114a
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • Although the transmit/receive element 122 is depicted in FIG. 1B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • transmit/receive elements 122 e.g., multiple antennas
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122.
  • the WTRU 102 may have multi-mode capabilities.
  • the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • SIM subscriber identity module
  • SD secure digital
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
  • the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium- ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
  • location information e.g., longitude and latitude
  • the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like.
  • FM frequency modulated
  • the peripherals 138 may include one or more sensors, the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor; a geolocation sensor; an altimeter, a light sensor, a touch sensor, a magnetometer, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
  • the WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous.
  • the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118).
  • the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)).
  • one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, and/or any other device(s) described herein may be performed by one or more emulation devices (not shown).
  • the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein.
  • the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
  • the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
  • the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
  • the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
  • the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
  • the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
  • RF circuitry e.g., which may include one or more antennas
  • Systems and methods set forth herein use multifocal optical features to create multiple focal planes with a multiview display structure based on projected beams. Such systems and methods may transform a standard multiview display into a true 3D LF display that is able to provide the necessary focus cues for the eyes of a viewer and address the VAC problem.
  • FIGs. 2A-2B are schematic plan views 200, 220 illustrating example focal distances and eye convergence angles according to some embodiments.
  • FIG. 2A depicts a retinal focus scenario 200 when viewing a natural scene.
  • FIG. 2B depicts a retinal focus scenario 220 when viewing an autostereoscopic 3D display 222.
  • Some parts of an image generated with the configuration of FIG. 2A are blurred, whereas in an image generated with the configuration of FIG. 2B, all parts of the image are in focus.
  • Current stereoscopic displays, commonly used in home theatres and cinemas, employ suboptimal technology for making 3D images. There is a neural connection in the human brain between light sensitive cells on the eye retinas and the cells sensing eye muscle movement. The associated areas work together when a perception of depth is created.
  • Autostereoscopic 3D displays lack correct retinal focus cues due to the fact that the image information is limited to the plane of the display.
  • When the eyes focus to a different point than where they converge, physiological signals in the brain get mixed up.
  • Depth cue mismatch of convergence and accommodation leads to, e.g., eye strain, fatigue, nausea, and slower eye accommodation to object distance. This phenomenon is called vergence-accommodation conflict (VAC) and is a result of non-proportional depth squeezing in artificial 3D images.
  • VAC vergence-accommodation conflict
  • FIGs. 3A-3C are schematic plan views illustrating example eye focus angles (FA) and convergence angles (CA) together with pixel clusters on a flat Light Field (LF) display according to some embodiments. Depicted are eye focus angles (FA) 308, 338 and convergence angles (CA) 306, 336 together with pixel clusters on a flat LF display 304, 334, 366 in three cases: an image point on the display surface 302 (FIG. 3A), a virtual image point behind the display surface 340 (FIG. 3B), and a virtual image at infinite distance behind the display surface 362 (FIG. 3C).
  • the various geometric differences in each scenario 300, 330, 360 may be understood visually from the provided cases in FIGs. 3A-3C.
  • the first category is volumetric display techniques that can produce 3D images in true 3D space. Each “voxel” of the 3D image is located physically at the spatial position where it is supposed to be and reflects or emits light from that position toward the observers to form a real image in the eyes of viewers.
  • the main problems with 3D volumetric displays are low resolution, large physical size, and high system complexity. They are expensive to manufacture and too cumbersome to use outside of special use cases such as product displays, museums, etc.
  • the second 3D display device category capable of providing correct retinal focus cues is the holographic display. These displays operate by reconstructing the whole light wavefronts scattered from objects in natural settings.
  • the third 3D display technology category capable of providing natural retinal focus cues is called the Light Field (LF) display, and the Light Field (LF) display is the dominant technological domain of this disclosure.
  • Vergence-accommodation conflict is a key driver for moving from current stereoscopic 3D displays to more advanced light field systems.
  • a flat form-factor LF 3D display is able to produce both the correct eye convergence and correct focus angles simultaneously.
  • FIGs. 3A-3C show these correct angles in three different 3D image content cases.
  • In FIG. 3A, an image point lies on the surface of a display 302 and only one illuminated pixel visible to both eyes is sufficient to represent the point correctly. Both eyes are focused and converged to the same point.
  • In FIG. 3B, the virtual image point is behind the display 340 and two clusters of pixels 332 are illuminated to represent the single point correctly.
  • the direction of the light rays from these two spatially separated pixel clusters are controlled in such a way that the emitted light is visible only to the correct eye, thus enabling the eyes to converge to the same single virtual point.
  • In FIG. 3C, the virtual image is at infinity 362 behind the screen and only parallel light rays 368, 370, 372, 374, 376, 378 are emitted from the display surface 366 from two spatially separated pixel clusters 364.
  • In this case, the minimum size for a pixel cluster is the size of the eye pupil, and this is also the maximum pixel cluster size called for on the display surface 366.
  • spatial and angular control of emitted light from the LF display device creates both the convergence and focus angles for natural eye responses to the 3D image content.
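  • The short sketch below works through the plan-view geometry of FIGs. 3A-3C under assumed numbers (64 mm interpupillary distance, 500 mm viewing distance): a point on the display needs a single pixel, a point behind the display needs two pixel clusters whose separation grows with depth, and towards infinity the separation approaches the interpupillary distance while the convergence angle approaches zero. It is an illustrative calculation, not a method from the disclosure.

```python
# Illustrative plan-view geometry for FIGs. 3A-3C; the 64 mm interpupillary
# distance and 500 mm viewing distance are assumed example values.
import math

def cluster_separation_mm(eye_sep_mm, view_dist_mm, depth_behind_mm):
    # Similar triangles between the eyes, the display plane and the virtual point.
    return eye_sep_mm * depth_behind_mm / (view_dist_mm + depth_behind_mm)

def convergence_angle_deg(eye_sep_mm, view_dist_mm, depth_behind_mm):
    return math.degrees(2 * math.atan(eye_sep_mm / 2 / (view_dist_mm + depth_behind_mm)))

for depth in (0, 250, 1000, 1e9):  # on the display, behind it, and near "infinity"
    print(f"depth {depth:>10.0f} mm: cluster separation "
          f"{cluster_separation_mm(64, 500, depth):5.1f} mm, "
          f"convergence {convergence_angle_deg(64, 500, depth):5.2f} deg")
# At zero depth a single pixel suffices; towards infinity the two clusters end up
# separated by roughly the interpupillary distance and the convergence angle -> 0.
```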
  • FIGs. 4A-4C depict schematic perspective views illustrating example levels of occlusion of Light Fields directed towards a pupil according to some embodiments.
  • LF systems aim to control light emissions both in spatial and angular domains, unlike the conventional stereoscopic 3D displays that can only control the spatial domain.
  • a second approach is a multi-focal-plane approach, in which the object’s image is projected to a focal plane corresponding to its 3D location.
  • the first approach is usually more suitable for a head-mounted single-user device, as the locations of the eye pupils are much easier to determine and the eyes are closer to the display, making it easier to provide the desired dense field of light rays.
  • the second approach is better suited for displays that are located at a distance from the viewer and may be used without headgear.
  • correct retinal blur is achieved by presenting multiple views per single eye.
  • FIGs. 4A-4C show occlusions of a scene caused by parallax across the pupil. In FIG. 4A, only a portion of a person's body (their foot) is visible and the rest of the person is blocked by an occlusion 402. This view 400 corresponds with a left field view from a left side of the pupil.
  • This view 420 corresponds with a central field view from a center of the pupil.
  • In FIG. 4C, the entirety of the person's body is visible, and an occlusion 442 does not block the view of the person.
  • This view 440 corresponds with a right field view from a right side of the pupil.
  • the resulting varied images represent views that could be presented in order to produce correct retinal blur. If the light from at least two images from slightly different viewpoints enters the eye pupil simultaneously, a more realistic visual experience follows. In this case, motion parallax effects better resemble natural conditions as the brain unconsciously predicts the image change due to motion.
  • a Super Multi View (SMV) effect can be achieved by ensuring the interval between two views at the correct viewing distance is a smaller value than the size of the eye pupil.
  • SMV Super Multi View
  • the human pupil is generally estimated to be about 4 mm in diameter. If the ambient light levels are high (e.g., in sunlight), the diameter can be as small as 1.5 mm and in dark conditions as large as 8 mm.
  • the maximum angular density that can be achieved with SMV displays is limited by diffraction and there is an inverse relationship between spatial resolution (pixel size) and angular resolution. Diffraction increases the angular spread of a light beam passing through an aperture and this effect should be taken into account in the design of very high density SMV displays.
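  • A minimal check of the SMV condition described above, assuming an illustrative angular step between neighbouring views and the 4 mm pupil estimate; the function name and numbers are hypothetical.

```python
# Minimal SMV-condition check: the lateral spacing between neighbouring views at
# the viewing distance should be smaller than the eye pupil. The 0.2 degree view
# step and 1 m viewing distance are illustrative assumptions.
import math

def view_spacing_mm(viewing_distance_m, angular_step_deg):
    """Lateral distance between neighbouring view directions at the viewer."""
    return 1000 * viewing_distance_m * math.tan(math.radians(angular_step_deg))

pupil_mm = 4.0  # typical pupil diameter estimate quoted above
spacing = view_spacing_mm(viewing_distance_m=1.0, angular_step_deg=0.2)
print(f"view spacing: {spacing:.2f} mm, SMV condition met: {spacing < pupil_mm}")
```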
  • FIG. 5 is a schematic plan view illustrating example light emission angles directed towards respective viewers according to some embodiments.
  • FIG. 5 shows a schematic view 500 of the geometry involved in creation of the light emission angles from a LF display 518.
  • the LF display 518 in FIG. 5 produces the desired retinal focus cues and multiple views of 3D content in a single flat form-factor panel.
  • a single 3D display surface projects at least two different views to the two eyes of a single user in order to create a coarse 3D perception effect.
  • the brain uses these two different eye images to determine 3D distance. Logically this is based on triangulation and the interpupillary distance. To provide this effect, at least two views are projected into a Single user Viewing Angle (SVA) 510, as shown in FIG. 5.
  • SVA Single user Viewing Angle
  • the LF display projects at least two different views inside a single eye pupil in order to provide the correct retinal focus cues.
  • an “eye-box” is usually defined around the viewer eye pupil when determining the volume of space within which a viewable image is formed.
  • the eye-box width 508 is the distance from the left-most part of the eye to the right-most part of the eye.
  • at least two partially overlapping views are projected inside an Eye-Box Angle (EBA) 514 covered by the eye-box at a certain viewing distance 516.
  • EBA Eye-Box Angle
  • the LF display is viewed by multiple viewers 502, 504, 506 looking at the display 518 from different viewing angles, creating images for each viewer corresponding to a virtual object point 520.
  • several different views of the same 3D content are projected to respective viewers covering a whole intended Multi user Viewing Angle (MVA) 512.
  • MVA Multi user Viewing Angle
  • If the display is intended to be used with multiple users, all positioned inside a moderate MVA of 90 degrees, and the display is positioned at a 1 m viewing distance, then a total of 300 different views is called for. Similar calculations for a display positioned at a 30 cm distance (e.g., a mobile phone display) would result in only 90 different views being sufficient for a horizontal multiview angle of 90 degrees. And if the display is positioned 3 m away from the viewers (e.g., a television screen), a total of 900 different views would be used to cover the same 90 degree multiview angle (see the sketch below).
  • 30cm distance e.g., a mobile phone display
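  • The snippet below reconstructs the view-count arithmetic quoted above under the assumption of a roughly 6.7 mm view spacing at the viewing distance; that spacing is chosen only so the numbers match, since the text states the resulting view counts rather than the spacing.

```python
# Reconstruction of the view-count arithmetic above. The ~6.7 mm view spacing is
# an assumption chosen so that the quoted 90 / 300 / 900 figures come out.
import math

def views_needed(viewing_distance_m, mva_deg=90.0, view_spacing_mm=6.7):
    width_mm = 2 * 1000 * viewing_distance_m * math.tan(math.radians(mva_deg / 2))
    return round(width_mm / view_spacing_mm)

for d in (0.3, 1.0, 3.0):
    print(f"{d} m viewing distance -> ~{views_needed(d)} views")
# ~90, ~299 and ~896 views, in line with the figures quoted above.
```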
  • FIG. 5 illustrates the fact that three different angular ranges are covered simultaneously by the LF display: one for covering the pupil of a single eye, one for covering the two eyes of a single user, and one for the multiuser case.
  • the latter two may be resolved by using either several light emitting pixels under a lenticular or parallax barrier structure or by using several projectors with a common screen.
  • a flat-panel-type multiview LF display may be based on spatial multiplexing alone.
  • a row or matrix of light emitting pixels (LF sub-pixels) may be located behind a lenticular lens sheet or microlens array and each pixel is projected to a unique view direction in front of the display structure.
  • the more pixels there are on the light emitting layer behind each lenticular feature, the more views can be generated. This leads to a direct trade-off between the number of unique views generated and spatial resolution. If a smaller LF pixel size is desired from the 3D display, the size of individual sub-pixels may be reduced; alternatively, a smaller number of viewing directions may be generated.
  • a high quality LF display has both high spatial and angular resolutions.
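  • A minimal numeric sketch of the spatial/angular trade-off described above, assuming purely spatial multiplexing; the 10 μm sub-pixel pitch is an illustrative assumption, not a value from the disclosure.

```python
# Spatial/angular trade-off for purely spatial multiplexing: the LF pixel pitch
# equals the sub-pixel pitch times the number of views generated under each
# lenticular feature. The 10 um sub-pixel pitch is an illustrative assumption.
def lf_pixel_pitch_um(subpixel_pitch_um, views_per_lens):
    return subpixel_pitch_um * views_per_lens

for views in (8, 16, 32):
    print(f"{views:2d} views x 10 um sub-pixels -> {lf_pixel_pitch_um(10, views)} um LF pixel pitch")
```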
  • Generating a high-resolution LF image in some embodiments comprises projecting a plurality of high-resolution, depth-dependent, 2D images onto different focal planes using crossing beams.
  • a distance between each focal plane should be kept inside the human visual system depth resolution.
  • a respective position at which two or more beams intersect is called a voxel.
  • Each beam of the voxel is tightly collimated and has a narrow diameter.
  • each beam waist is collocated with the position at which the beams intersect (i.e., the voxel). This helps to avoid contradicting focus cues from being received by an observer.
  • a large divergence value indicates at least two relationships: (i) beam diameter increases as the distance between a given voxel and an observer’s eye becomes smaller, and (ii) virtual focal plane spatial resolution decreases as the distance between a given voxel and an observer’s eye becomes smaller.
  • a native resolution at the eye increases as the distance between a given voxel and an observer’s eye becomes smaller.
  • FIG. 6A depicts schematic plan views illustrating an example 600 of increasing beam divergence caused by geometric factors according to some embodiments.
  • FIG. 6B depicts schematic plan views illustrating an example 650 of increasing beam divergence caused by diffraction according to some embodiments.
  • achievable light beam collimation is dependent on two geometric factors: (i) a size of the light source and (ii) a focal length of the lens.
  • Perfect collimation 608, that is collimation without any beam divergence, can only be achieved in a theoretical scenario in which a single color point source (PS) 602 is positioned exactly at a focal length distance from an ideal positive lens. This case is pictured in a top-most example of FIG. 6A.
  • PS single color point source
  • Another feature causing beam divergence is diffraction.
  • the term refers to various phenomena that occur when a wave (e.g., of light) encounters an obstacle or a slit (e.g., in a grating). It can be thought of as the bending of light around the corners of an aperture into a region of geometric shadow. Diffraction effects can be found in all imaging systems and they cannot be removed even with a perfect lens design that is able to balance out all optical aberrations. In fact, a lens that is able to reach the highest optical quality is often called “diffraction limited” as most of the blurring remaining in the image comes from diffraction. The angular resolution achievable with a diffraction limited lens can be calculated from the Rayleigh criterion, sin(θ) = 1.22 λ / D, where λ is the wavelength of the light and D is the diameter of the lens aperture.
  • FIG. 6B shows that beam divergence 658, 660, 662 is increased as lens aperture size is reduced for a single color point source (PS) 652, 654, 656.
  • PS point source
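  • The sketch below evaluates the diffraction-limited angular spread for a few illustrative aperture sizes using the Rayleigh criterion quoted above; the 550 nm wavelength, aperture values, and 1 m viewing distance are assumptions for illustration only.

```python
# Diffraction-limited angular spread (Rayleigh criterion) for a few illustrative
# aperture sizes; the 550 nm wavelength and 1 m viewing distance are assumptions.
import math

def diffraction_angle_rad(wavelength_nm, aperture_um):
    # Approximate half-angle to the first diffraction minimum of a circular aperture.
    return 1.22 * (wavelength_nm * 1e-9) / (aperture_um * 1e-6)

for aperture_um in (5, 10, 50, 250):
    theta = diffraction_angle_rad(550, aperture_um)
    spread_mm = 2 * theta * 1.0 * 1000  # extra beam spread over a 1 m viewing distance
    print(f"{aperture_um:4d} um aperture -> {theta * 1e3:7.2f} mrad, "
          f"~{spread_mm:.1f} mm added spread at 1 m")
# Smaller apertures diffract much more strongly, as illustrated in FIG. 6B.
```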
  • the size of an extended source has a substantial effect on achievable beam divergence.
  • the source geometry or spatial distribution is mapped by the projector lens to an angular distribution of the beam, and this can be observed in a resulting “far field pattern” of the source-lens system.
  • the size of the image can be ascertained from the system “magnification ratio”.
  • FIG. 7 depicts schematic plan views illustrating three example lenses having various magnification ratios according to some embodiments.
  • the system magnification ratio is calculated by dividing the distance between the lens and the image by the distance between the source and the lens, as illustrated in FIG. 7. If the distance 702, 732, 762 between the source 710, 740, 770 and lens 712, 742, 772 is fixed, different image distances 704, 734, 764 between the lens 712, 742, 772 and image 714, 744, 774 may be achieved by changing the optical power of the lens through adjustment of the lens curvature.
  • the source height 706, 736, 766 is fixed, resulting in progressively larger image heights 708, 738, 768 as the image distances 704, 734, 764 are increased.
  • As the image distance increases, the required changes in lens optical power become smaller. Extreme cases approach a situation wherein the lens is effectively collimating the emitted light into a beam that has the spatial distribution of the source mapped into the angular distribution. In these cases, the source image is formed without focusing.
  • In flat form factor goggle-less LF displays, LF pixel projection lenses have very small focal lengths in order to achieve the flat structure. Typically, the beams from a single LF pixel are projected to a relatively large viewing distance. This means that the sources are effectively imaged with high magnification as the beams of light propagate to a viewer. For example: if the source size is 50 μm x 50 μm, the projection lens focal length is 1 mm, and the viewing distance is 1 m, the resulting magnification ratio is 1000:1. Given these conditions, the source's image will be 50 mm x 50 mm in size. This indicates that the single light emitter can be seen only with one eye inside this 50 mm diameter eye box (see the sketch below).
  • With a 100 μm source under the same conditions, the resulting image would be 100 mm wide and the same pixel could be visible to both eyes simultaneously, as the average distance between eye pupils is only 64 mm. In the latter case, a stereoscopic 3D image would not be formed as both eyes would see the same image.
  • the example calculation above shows how the geometric parameters comprising light source size, lens focal length, and viewing distance are tied to each other and affect overall performance.
  • the LF system is designed such that the beam size at the viewing distance does not exceed the distance between the two eyes, as that would break the stereoscopic effect.
  • the spatial resolution achievable with the beams decreases as the divergence increases. It should also be noted that if the beam size at the viewing distance is larger than the size of the eye pupil, the pupil becomes the limiting aperture of the whole optical system.
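  • A worked version of the example above, assuming the same 50 μm source, 1 mm focal length and 1 m viewing distance; it also checks the 100 μm case against the 64 mm average interpupillary distance. Function and variable names are hypothetical.

```python
# Worked version of the example above: a 50 um source, a 1 mm focal length
# projection lens and a 1 m viewing distance give a 1000:1 magnification, so the
# source image is about 50 mm wide; a 100 um source would already be visible to
# both eyes (average interpupillary distance ~64 mm).
def source_image_size_mm(source_um, focal_length_mm, viewing_distance_m):
    magnification = (viewing_distance_m * 1000) / focal_length_mm
    return source_um * 1e-3 * magnification

eye_spacing_mm = 64
for source_um in (50, 100):
    size = source_image_size_mm(source_um, focal_length_mm=1.0, viewing_distance_m=1.0)
    print(f"{source_um} um source -> {size:.0f} mm image, "
          f"seen by a single eye only: {size < eye_spacing_mm}")
```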
  • FIGs. 8A-8D are schematic plan views illustrating example geometric and diffraction effects for one or two extended sources imaged to a fixed distance with a fixed magnification according to some embodiments.
  • in each of FIGs. 8A-8D, the distance 810, 836, 860, 886 from the extended source(s) (ES) 802 is fixed, as is the magnification.
  • FIG. 8A shows a scenario 800 wherein a lens aperture size 804 is relatively small (e.g., 5 μm) and the Geometric Image (GI) 806 is surrounded by blur from the Diffracted Image (DI) 808, which is much larger.
  • FIG. 8B shows a scenario 820 wherein two extended sources 822, 824 are placed side-by-side and imaged with the same small aperture lens 826 as in FIG. 8A. Even though the GIs 828, 830 of both sources are clearly separated, the two source images cannot be resolved because the diffracted images 832, 834 overlap.
  • FIG. 8C shows a scenario 850 wherein a lens with the same focal length as the one used in FIG. 8A, but with a larger aperture 854 (e.g., 10 μm), is used for imaging the extended source.
  • a larger aperture 854, e.g., 10 μm
  • FIGs. 8A-D together show that increasing an aperture size makes it possible to use two different sources and therefore improve spatial resolution of a voxel grid.
  • FIG. 9A is a schematic plan view illustrating an example functioning of a standard lens according to some embodiments.
  • FIG. 9B is a schematic plan view illustrating an example functioning of a multifocal axicon lens according to some embodiments.
  • multifocal lenses are designed to provide focus at more than one distance from the optical element, as illustrated in FIGs. 9A (functioning 900 of a standard lens 902) and 9B (functioning 950 of a multifocal axicon lens 952).
  • Some example embodiments use multifocal lenses.
  • Such multifocal lenses may be divided into two different types: diffractive and refractive.
  • Embodiments with diffractive designs may have a micro pattern at some part of the lens aperture that diffracts light creating more than two focal points (as discussed in European patent application EP2045648B1).
  • Embodiments with refractive designs may be based on, for example, a Fresnel lens design that uses different aperture zones for creation of the multiple focal spots (as discussed in US patent 4,210,391).
  • Both refractive and diffractive multifocal lenses use fairly complex optical surface features. Such lenses may be manufactured by, for example, injection molding from plastic optic materials like PMMA or polycarbonate, or they may be replicated with UV curable materials with the help of a mold.
  • One specific type of multifocal lens is called an axicon 952. It is an optical element which creates a narrow focal line 954 along the optical axis (FIG. 9B). Light rays entering the axicon are refracted towards the optical axis at the same angle independently of the radial distance from the axis. This creates a conical wavefront, which generates a focal line covering a certain distance.
  • the focal line can be used, for example, in part alignment or in extending the depth of focus in applications such as optical coherence tomography, astronomy, or light sectioning.
  • Axicons can be manufactured as refractive cones from glass or plastics by grinding or molding or as diffractive circular gratings made by UV curing or embossing. They can also be made as combinations of the different optical features and as a hybrid of refractive and reflective surfaces.
  • multifocal lenses may be axicon lenses. Some embodiments of multifocal lenses may be hyperbolic lenses.
  • FIG. 10 is a schematic plan view illustrating an example ray trace picture 1000 of an imaging system with two spherical plano-convex lenses according to some embodiments.
  • Imaging optical systems may have aberrations that cause errors to a generated image.
  • Spherical aberrations may occur when incoming light rays focus at different points after passing through a spherical optical surface.
  • Light rays passing through a lens near its optical axis are refracted less than rays closer to the edge of the lens, and as a result end up in different focal spots. This effect is illustrated in FIG. 10.
  • a point source 1002 is imaged through a collimator 1004, an aperture 1006, and a focusing lens 1008 into an image 1010 with a series of focal points 1012, 1014, 1016, 1018 on the optical axis, using two plano-convex lenses that have spherical surfaces.
  • Spherical aberration blurs the image and is usually an undesired feature that is corrected by using multiple lens elements or aspheric surfaces in the lens design.
  • the spherical aberrations make a simple lens behave like a multifocal lens, and this feature can be used advantageously if there is a desire for extended depth of focus.
  • By using a series of apertures with different radii to selectively block different parts of the overall lens optical aperture, a series of different focal points can be created with a single lens element.
  • Lenses that have a perfect spherical surface shape focus rays propagating closer to the lens center to a focal spot that is further away from the lens than the focal spot achieved with rays propagating closer to the lens edge. This effect is illustrated in FIG. 11, where three identical plano-convex spherical lens pairs are used for imaging point sources to different focal points with the help of three different apertures positioned in the widest part of the beam, located between the collimating and focusing lenses.
  • FIG. 11 is a schematic plan view illustrating example focal points resulting from different apertures for a point source according to some embodiments.
  • the aperture 1116 blocks the central part of the beam and allows an outer ring of the beam emitted from the source 1112 to pass from the collimating lens 1114 to the focusing lens 1118.
  • the aperture 1136 allows only the central part of the beam emitted from the source 1132 to pass from the collimating lens 1134 to the focusing lens 1138.
  • the aperture 1126 allows only rays emitted from the source 1122 in a ring-shaped zone between the edge and center obscurations to pass from the collimating lens 1124 to the focusing lens 1128, and the focal spot 1130 is created in between the focal spots 1120, 1140.
  • Sources 1142, 1152, 1162 are imaged in the same arrangement as sources 1112, 1122, 1132, but with collimating lenses 1144, 1154, 1164 and focusing lenses 1148, 1158, 1168 that have hyperbolic shapes instead of spherical.
  • These lenses 1144, 1154, 1164, 1148, 1158, 1168 have the same surface radius values as the top three lenses 1114, 1124, 1134, but as their conic constant is set to the value of -5, the order of focal spot locations 1150, 1160, 1170 with respect to the different apertures 1146, 1156, 1166 is reversed in comparison with the spherical lens case of FIG. 11.
  • the conic constant causes the lens shapes to deform towards a conical shape, and the overall aspheric surface begins to somewhat resemble the shape used in axicon designs.
  • FIG. 11 shows examples of aperture configurations within a spatial light modulator that may be used to generate a multiview display.
  • In some embodiments, a method of emitting light from a source, collimating the emitted light, operating a spatial light modulator that includes an aperture, and focusing the light passed through the spatial light modulator is performed in a time-synchronized manner to produce a plurality of focal planes at a respective plurality of distances from the spatial light modulator.
  • a plurality of focal spots, such as the examples shown in FIG. 11, may be combined to generate a focal plane.
  • FIG. 12 is a schematic plan view illustrating example extended sources imaged with spherical lens pairs and different apertures according to some embodiments.
  • all natural light sources have a finite surface area to be considered when designing an imaging system.
  • the use of extended sources simply means that the source surfaces are imaged to somewhat different focal planes depending on the aperture used.
  • FIG. 12 shows the effect when three identical plano-convex spherical lens pairs of collimator lenses 1204, 1224, 1244 and focus lenses 1208, 1228, 1248 are used for imaging extended sources 1202, 1222, 1242 with different apertures 1206, 1226, 1246 (as in FIG. 11) to different focal distances.
  • the optical magnification also changes.
  • the source image 1250, which is formed further away from the lens, is slightly bigger than the source image 1230, which is formed closer to the lens.
  • µLEDs are LED chips that are manufactured with the same basic techniques and from the same materials as standard LED chips in use today.
  • µLEDs are miniaturized versions of the commonly available LED components and they can be made as small as 1 µm - 10 µm in size.
  • a dense matrix of µLEDs can have 2 µm x 2 µm chips assembled with 3 µm pitch.
  • µLEDs are much more stable components and they can provide greater light intensities, which makes them useful for many applications from head mounted display systems to adaptive car headlamps (LED matrix) and TV backlights.
  • µLEDs may be used for 3D displays, which use very dense matrices of individually addressable light emitters that can be switched on and off at very fast speeds.
  • One bare µLED chip can emit a specific color with a spectral width of ~20-30 nm.
  • a white source can be created by coating the chip with a layer of phosphor, which converts the light emitted by blue or UV µLEDs into a wider white light emission spectrum.
  • a full-color source can also be created by placing separate red, green and blue µLED chips side-by-side, as a combination of these three primary colors generates a full color pixel when the separate color emissions are combined by the human visual system.
  • a very dense matrix designed in this style may comprise self-emitting full-color pixels that have a total width below 10 µm (3 x 3 µm pitch).
  • Light extraction efficiency at the semiconductor chip is one parameter that determines the electricity-to-light efficiency of µLED structures.
  • One such method uses a shaped plastic optical element that is integrated directly on top of a µLED chip. Due to a lower refractive index difference, integration of the plastic shape extracts more light from the chip material than in a case where the chip is surrounded by air. The plastic shape also directs the light in a way that enhances light extraction from the plastic piece and makes the emission pattern more directional.
  • Another method comprises shaping the chip itself into a form that favors light emission angles that are more perpendicular towards the front facet of the semiconductor chip.
  • Systems and methods set forth herein are based on the use of multifocal optical features that make it possible to create multiple focal planes with a multiview display structure based on projected beams.
  • In a display optical structure, light is emitted from a layer with separately controllable small emitters, such as µLEDs.
  • a lens structure (e.g., a polycarbonate lenticular sheet) placed on top of the emitters may collimate the light into a set of beams that are used to form the image at different viewing directions.
  • the emitters and collimator lenses form a series of projector cells that are separated from each other with opaque boundary structures that suppress crosstalk between neighboring cells.
  • a display structure may comprise a spatial light modulator (SLM) and a focusing lens array.
  • the SLM may be, for example, a custom LCD panel for adjusting apertures in front of the multiview display projector cells by selectively blocking parts of the projected beams.
  • the focusing lens may be, for example, a polycarbonate lenticular sheet, which focuses the emitted beams to different focal planes.
  • a single multifocal lens shape may be used for all projector cells and creation of multiple focal planes, as the SLM is used for adjusting beam aperture(s) and accordingly a focal length of each focusing lens.
  • a combination of one multiview projector cell, one section of the SLM covering the cell aperture, and one focusing lens is considered a light field pixel (LF pixel).
  • LF pixels in the herein discussed structures may be capable of projecting multiple beams in different view directions, and the beams may be focused at multiple focal surfaces.
  • a true LF display may be achieved without moving parts.
  • a simple multiview display based on projected beams may be transformed into a true 3D LF display with multiple focal planes to address the VAC problem.
  • Rendering of 3D images can be performed with the presented optical hardware structure.
  • the rendering functionality present in existing multiview displays may be extended by adding focal layers that are rendered sequentially.
  • FIG. 13 is a schematic plan view illustrating an example LF display structure according to some embodiments.
  • Light is emitted from a layer 1302 with separately controllable small emitters, such as µLEDs.
  • a lens structure 1304 (e.g., a polycarbonate lenticular sheet) placed on top of the emitters collimates the light into a set of beams that are used to form the image at different viewing directions.
  • the emitters 1302 and collimator lenses 1304 form a series of projector cells that are separated from each other with opaque boundary structures 1322 that suppress crosstalk between neighboring cells.
  • the display structure 1300 shown in FIG. 13 includes a spatial light modulator (SLM) 1306 and a focusing lens array 1308.
  • the SLM may be, for example, a custom LCD panel for adjusting the aperture in front of the multiview display projector cells by selectively blocking parts of the projected beams.
  • the focusing lens 1308 may be, for example, a polycarbonate lenticular sheet, to focus the emitted beams to different focal planes.
  • a single multifocal lens shape may be used for all projector cells and for the creation of multiple focal planes, as the SLM 1306 is used for adjusting the beam aperture and consequently the focal length of the focusing lens, as previously discussed.
  • The combination of one multiview projector cell, one section of the SLM 1306 covering the cell aperture, and one focusing lens may be termed an LF pixel 1324. All LF pixels in the structure 1300 are capable of projecting multiple beams to different view directions, and all of the beams may be focused to multiple focal surfaces.
  • a large aperture at the LF pixel rim 1310 may be used to generate a light source image at the furthest focal point 1312.
  • An intermediate aperture in the middle of an LF pixel 1314 may be used to generate a light source image at an intermediate focal point 1316.
  • a small aperture at the LF pixel center 1318 may be used to generate a light source image at the closest focal point 1320.
  • a projector lens 1326 may be used to further adjust light source images for displaying to a viewer, as discussed further below. In other embodiments, the projector lens is omitted.
  • the physical size of the light emitting elements and the total magnification of the LF pixel optics determine the achievable spatial resolution on each 3D image virtual focal surface.
  • When the focal surface is located further from the display, the geometric magnification will make the pixel images appear larger than when the focal surface is located closer to the display.
  • Diffraction may also impact the achievable resolution when the light emitter and LF pixel aperture sizes are very small, as previously discussed.
  • focusing (and collimating) optical shapes may be used which make central portions of the beams focus closer to the display and edge parts of the beams focus further away.
  • One such shape is a hyperbolic lens, as previously discussed.
  • the pixel images with further focal points and greater magnification exhibit less diffraction blur than the pixel images closer to the display.
  • the close-by pixel images may tolerate higher diffraction blur coming from the smaller central aperture, as the spatial resolution is higher due to the lower geometric magnification factor.
  • the diffraction may even be a desired effect, as it can be used for blurring out pixel boundaries when, for example, a cluster of light emitting elements is used to form one voxel on the close-by image plane, as spatial resolution is balanced over all focal surfaces with different magnification ratios.
  • a light field display device may include an array of light-emitting elements configured to emit light; an array of lenses configured to focus the light from the array of light-emitting elements, such that each lens of the array of lenses is divided into two or more portions and each portion of each lens is configured to focus the light to a different distance; and a spatial light modulator configured to control where the light falls on each lens.
  • the spatial light modulator includes a controllable aperture configured to select an optical path, such as the examples shown in FIG. 13.
  • the spatial light modulator is configured to select an optical path, an aperture size, and a shape of the light.
  • the light field display device includes an array of collimating optical elements configured to collimate the light from the array of light-emitting elements, such as the examples shown in FIG. 13.
  • the light field display device includes an array of light field (LF) pixels.
  • Each LF pixel includes one or more light-emitting elements, one or more lenses (which may include separate collimating, focusing, and/or projecting lenses or a single combined lens of any of these types of lenses), and a portion of the SLM.
  • Some embodiments have two or more LF pixels that are configured to focus the light to different distances, such as part of two or more focal planes.
  • the SLM acts as a controllable, separate aperture for each LF pixel.
  • focusing a light beam includes generating a target light beam within a threshold angle of a target angle and within a threshold distance of a target focal distance, such as the three scenarios described above regarding generating light source images at three different focal distances using three different focal angles.
  • the spatial light modulator is a liquid crystal display (LCD).
  • the LCD or SLM is operated to create a time-synchronized aperture by configuring a first set of pixels of the LCD to block a portion of a light beam and by configuring a second set of pixels of the LCD to pass a portion of the light beam.
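As one hypothetical illustration of such a binary aperture, the sketch below builds a boolean pass/block mask for the SLM section covering a single LF pixel, with three concentric zones whose radii loosely follow the example dimensions given later in this description (a 220 µm central disc, a ring out to 440 µm diameter, and the outer zone of a 750 µm wide pixel). The zone names, grid size and the mapping of zones to focal planes (central aperture focusing closest to the display, following the hyperbolic-lens behavior described above) are assumptions, not values or an API from the source.

```python
# Sketch of a binary annular aperture mask for the SLM section covering one LF pixel.
import numpy as np

def annular_mask(grid: int, pitch_um: float, r_inner_um: float, r_outer_um: float) -> np.ndarray:
    """Boolean SLM mask for one LF pixel: True = pixel passes light, False = pixel blocks it."""
    coords = (np.arange(grid) - (grid - 1) / 2.0) * pitch_um   # SLM pixel centre coordinates, um
    xx, yy = np.meshgrid(coords, coords)
    r = np.hypot(xx, yy)
    return (r >= r_inner_um) & (r < r_outer_um)

# Assumed zone radii (um) for a 750 um wide LF pixel; with a hyperbolic multifocal lens the
# central zone focuses closest to the display and the outer zone focuses furthest away.
ZONES = {
    "near_focus": (0.0, 110.0),     # central disc, ~220 um diameter
    "mid_focus":  (110.0, 220.0),   # intermediate ring, out to ~440 um diameter
    "far_focus":  (220.0, 375.0),   # outer ring of the 750 um aperture
}

mask = annular_mask(25, 30.0, *ZONES["mid_focus"])   # 25 x 25 SLM pixels at 30 um pitch
print(mask.astype(int))
```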
  • the term focal distance is used to refer to a distance from the LF pixel at which light is focused.
  • the focal distance may be the same as the focal length of the selected portion of the multifocal lens.
  • the focal distance may be different from the focal length of the selected portion of the multifocal lens. For example, if the light is diverging when it is incident on the selected portion of the multifocal lens, the focal distance may be greater than the focal length of the selected portion of the multifocal lens.
  • a display structure is used as a LF image generation module together with a separate projector lens array.
  • This scenario may produce different focal surfaces in between some layers of the display, and a final image may be formed by a front projector lens array 1326 as shown in FIG. 13.
  • Such a modular approach may permit use of a single image generation module for different viewing scenarios by changing a front projector lens array. For example, single user and multiple user embodiments may use the same LF image generation module in combination with different front projector lens arrays.
  • a display structure may be incorporated into a head mounted display (HMD) device.
  • light emitters may comprise, for example, a very dense matrix of µLEDs positioned behind microlens arrays with short focal lengths and a small-pixel LCD.
  • the LED images may be formed at distances a few millimeters from the focusing lens array, and a pair of magnifying lenses (or the like) may project the images separately to each eye.
  • Such lenses are commonly used in current virtual reality (VR) display devices.
  • Systems and methods as set forth herein may be utilized to add multiple focal planes to AR and VR devices, thus improving user experiences.
  • a display structure may be used as a multi-focal plane light field display.
  • a multifocal plane light field display may include emitting light from a plurality of light emitting elements; collimating the emitted light from the plurality of light emitting elements into at least one light beam with at least one collimating optical element; operating a spatial light modulator to create a time synchronized aperture for the at least one light beam, the aperture controlling which portion of the at least one light beam is passed through the spatial light modulator; and focusing the controlled portion of the at least one light beam passed through the spatial light modulator with at least one optical element used as a focusing lens.
  • illumination of the plurality of light emitting elements, the relative position of the plurality of light emitting elements to the at least one collimating optical element, and a shape and a location of the aperture controlled by the spatial light modulator are time synchronized in a manner to produce a plurality of focal planes at a plurality of distances from the multi-focal plane light field display.
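A hypothetical control-loop sketch of this time synchronization is shown below: each 3D frame is rendered as a sequence of focal-plane sub-frames, with the SLM aperture configuration and the emitter pattern switched together. The function names are placeholders for illustration only, not an API defined in this disclosure.

```python
# Hypothetical scheduling sketch: one 3D frame rendered as time-synchronized focal-plane sub-frames.
import time

FOCAL_PLANES = ("near", "mid", "far")                  # one SLM aperture configuration per plane
SUBFRAME_PERIOD_S = 1.0 / (60 * len(FOCAL_PLANES))     # keep the combined frame rate flicker-free

def set_slm_aperture(plane: str) -> None:
    """Placeholder: drive the SLM pixels to the annular aperture associated with this focal plane."""

def drive_emitters(pattern) -> None:
    """Placeholder: switch the light-emitting elements according to this focal plane's 2D slice."""

def render_frame(frame_content: dict) -> None:
    """frame_content maps each focal plane name to the emitter pattern shown on that plane."""
    for plane in FOCAL_PLANES:
        set_slm_aperture(plane)                # select aperture zone -> focal distance
        drive_emitters(frame_content[plane])   # light only the emitters forming this image slice
        time.sleep(SUBFRAME_PERIOD_S)          # hold until the next focal-plane sub-frame
    drive_emitters(None)                       # blank the emitters between frames
```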
  • multiple light beams from a subset of the plurality of light emitting elements are used to create a light field voxel.
  • the focused light beams may further be projected through at least one projection optical element for near-eye head mounted display device applications (e.g., HMD VR or AR devices).
  • Some embodiments of a display structure include a Fresnel lens associated with each of the plurality of light field pixels, wherein the fields of view of the plurality of light field pixels overlap within a threshold distance of a target viewing distance.
  • Some embodiments of a process performed by a display structure include projecting a focused light beam through at least one projection optical element configured for a near-eye display device.
  • FIGs. 14A-14B are schematic plan views illustrating example display optical function scenarios if more than one light emitter is activated and more than one LF display pixel is used simultaneously according to some embodiments.
  • one cell has three active light emitting components 1402.
  • the spatially separated emitters 1402 generate a series of collimated beams that propagate in different directions.
  • the emitted beams pass through a collimator lens 1404 and an SLM 1406. Beams passing through the SLM are focused by a focusing lens 1408 to a surface 1410 at a certain distance from the display.
  • the source images are magnified, making the voxels appear bigger than the emitter surfaces within the LF pixel field of view (FOV) 1412.
  • the virtual focal surface 1410 defined by the best focal spots of each directional beam is actually slightly curved. This curvature can be either negative or positive depending on the projection lens shapes. Single LF pixel focal surface curvature is also more noticeable on locations closer to the display than on the further focal distances. At larger distances from the display, the pixel image depth of focus becomes larger and the focal surfaces will approach planar surfaces.
  • each light emitter emits a beam in which a portion of the beam passes through a collimator lens 1454, an SLM 1456, and a focusing lens 1458.
  • a focal surface which can be used for presenting 3D image information, such as with successive 2D images that are cross-sections of the whole 3D image.
  • Because the projected beams are focused at these focal planes, a viewer’s eyes receive natural focus cues and may more naturally accommodate the 3D image.
  • a good quality visual experience may be achieved using only a few focal planes when the viewing distance is large enough that eye depth resolution is only ~0.6 diopters.
  • the distance between successive focal surfaces can be further increased as the distance between the image surface and viewer is increased.
  • all of the LF pixels in the display may project emitter images towards both eyes of the viewer.
  • one emitter inside the LF pixel should not be visible to both eyes simultaneously when a created voxel is located outside the display surface. This means the field-of-view (FOV) of one LF pixel covers both eyes, but the sub-pixels inside the LF pixel(s) should have FOVs that make the beams narrower than the distance between the viewer’s eye pupils (~64 mm on average) at the viewing distance.
  • the FOV of one LF pixel and the FOVs of the single emitters may be determined by the widths of the emitter and magnification of the collimator-focus lens pair.
  • One voxel created with a focusing beam may be visible to the eye when the beam continues its propagation after the focal point and enters the eye pupil at the designated viewing distance.
  • the FOV of a voxel may cover both eyes simultaneously. If the voxel is visible to a single eye only, a stereoscopic effect is not formed and a 3D image is not seen.
  • the voxel FOV is increased by directing multiple crossing beams from more than one LF pixel to the same voxel inside a human persistence-of-vision (POV) time frame.
  • the total voxel FOV may be the sum of individual emitter beam FOVs as illustrated in FIGs. 14A and 14B.
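The following sketch estimates these FOVs from the geometry described above. The emitter width, emitter-row width and collimator focal length are illustrative assumptions (a 0.3 mm row corresponds to roughly one hundred emitters at 3 µm pitch, in line with the example given later in this description), and the last lines check that a single emitter beam stays narrower than the ~64 mm pupil separation at the viewing distance.

```python
# Illustrative estimate of LF-pixel and single-emitter fields of view (assumed dimensions).
import math

def fov_deg(width_mm: float, focal_length_mm: float) -> float:
    """Full field of view of an emitter (or emitter row) of the given width behind a lens of focal length f."""
    return math.degrees(2 * math.atan(width_mm / (2 * focal_length_mm)))

focal_length = 2.0        # collimator focal length, mm
emitter_width = 0.002     # one 2 um emitter, mm
row_width = 0.3           # full emitter row inside one LF pixel, mm
viewing_distance = 1500.0 # mm

print(f"LF pixel FOV:       {fov_deg(row_width, focal_length):.1f} deg")      # ~8.6 deg
print(f"single emitter FOV: {fov_deg(emitter_width, focal_length):.3f} deg")  # ~0.06 deg

# Footprint of one emitter's beam at the viewing distance must stay below the ~64 mm
# distance between the viewer's eye pupils for the stereoscopic effect to work.
beam_width = emitter_width * viewing_distance / focal_length
print(f"single emitter beam width at viewer: {beam_width:.1f} mm (< 64 mm required)")
```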
  • a display may for example be curved at a particular radius, or may be such that projected beam directions are turned towards a specific point such as with a flat Fresnel lens sheet.
  • a viewing zone may be formed in front of and/or behind a display device where a 3D image is visible.
  • the spatial light modulator is configured to produce an increased pixel infill, for example, such as if more than one set of light emitting elements are used to generate a light field voxel.
  • two or more light beams generated by two or more light emitting elements form a light field voxel.
  • FIG. 15 is a schematic plan view illustrating an exemplary viewing geometry 1500 available with a 3D LF display structure according to some embodiments.
  • a viewing zone 1504 limited by the furthest focal distance from the display 1502 with reasonable spatial resolution and by the whole display FOV 1506.
  • a light field pixel FOV 1508 is shown for an example LF pixel.
  • the display surface is curved with a radius about the same as the designated viewing distance.
  • the overlapping LF pixel FOVs form a visibility zone around the facial area of the viewer 1512.
  • the size of a visibility zone 1510 may determine the amount of movement permissible for the viewer’s head while maintaining presentation of the 3D LF.
  • Both of the viewer’s eye pupils should be inside the visibility zone 1510 at the same time for stereoscopic image presentation.
  • FIGs. 16A-16B are schematic plan views illustrating exemplary viewing geometry scenarios according to some embodiments.
  • the size of the visibility zone may be configured or adapted for particular use cases by altering the LF pixel FOVs.
  • a single viewer is sitting in front of the display and both eye pupils are covered by a small visibility zone 1604 achieved with a narrow LF pixel FOV 1602.
  • the minimum functional width of the zone is determined by the eye pupil distance (on average ~64 mm).
  • a small width also means a small tolerance for viewing distance changes, as the narrow FOVs start to separate from each other both in front of and behind an optimal viewing location.
  • FIG. 16B depicts a viewing geometry where an LF pixel FOV 1652 is relatively wide, such as if there are multiple viewers inside the large visibility zone 1654 with each viewer at a different viewing distance.
  • the positional tolerances may be relatively large.
  • a visibility zone may be increased by increasing the FOV of each LF pixel in the display structure.
  • This may comprise, for example, increasing the width of a light emitter row or making the focal length of the collimating optics shorter.
  • a maximum width for an emitter row may be determined by the width of the projector cell (LF pixel aperture), as there may not be more components in a single projector cell than can be bonded to the surface area directly below a collimating lens. If the focal length of the collimating lens is decreased, the geometric magnification increases, making it more difficult to achieve a specific voxel spatial resolution.
  • If, for example, the focal length of the collimating lens is halved, the LF pixel FOV is doubled - but the source image magnification to all focal planes increases by a factor of two and accordingly the voxel size on a given focal plane is also doubled.
  • This resolution reduction may be compensated through decreasing the highest magnification ratio by bringing the edge of the viewing zone closer to the display surface. This may make the total volume where a 3D image is visible shallower, and thus a visual experience more restricted.
  • this connection between the different design parameters may result in a trade-off between 3D image spatial resolution and the size of the viewing and visibility zones. If the visibility zone is increased, a choice must be made between accepting lower resolution on the focal plane closest to the viewer and decreasing the size of the image viewing zone.
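Numerically, the trade-off can be illustrated as below (all values are assumptions for illustration): halving the collimator focal length doubles the LF pixel FOV but also doubles the magnification, and with it the voxel size on a given focal plane.

```python
# Numerical illustration of the FOV vs. voxel-size trade-off described above (assumed values).
import math

emitter_row_mm = 0.3     # width of the emitter row inside one LF pixel
emitter_mm = 0.002       # single emitter width
focal_plane_mm = 150.0   # distance of one focal plane from the display

for collimator_f_mm in (2.0, 1.0):
    fov = math.degrees(2 * math.atan(emitter_row_mm / (2 * collimator_f_mm)))
    magnification_ratio = focal_plane_mm / collimator_f_mm
    voxel_mm = emitter_mm * magnification_ratio
    print(f"f = {collimator_f_mm} mm: LF pixel FOV {fov:.1f} deg, voxel size {voxel_mm:.2f} mm")
```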
  • Structures as set forth herein may cause light loss due to some of the emitted light being blocked in the aperture control layer by the SLM.
  • the amount of lost light will be determined by the pixel structure of the SLM. If a large number of different focal surfaces is desired, the LF pixel apertures may be blocked with a greater number of different aperture configurations such that only smaller portions of the emitted light are allowed to pass the aperture at a time.
  • the beams may also be focused only in, for example, the horizontal direction (or other single direction, or the like), in which case the SLM may be a linear component allowing all of the emitted light to pass in the vertical direction.
  • the collimating and focusing elements may also be, for example, cylindrical rather than spherical as in the case of a two-dimensional multiview display.
  • SLM aperture patterns may also be two-dimensional - for example concentric rings filling the LF pixel apertures.
  • rectangular aperture shapes may also be used, such as when currently available LCD panels are used, where pixels are arranged in rectangular rows and columns.
  • Color images may be created by utilizing successive rows of red, green, and blue components on a light emitting layer.
  • the colors may be combined, for example, with a separate combiner structure on top of the components or by overlapping projected beams on the focal surface layers.
  • a diffractive structure connected to the multifocal element may also be used for color correction if refractive collimating and focusing elements are used for beam creation and focusing.
  • Particular color combination approaches may be selected based on, for example, the types of multifocal elements used in a display structure.
  • the SLM may comprise an LCD panel. SLM functionality allows the use of current LCD panels in the structure of some embodiments set forth herein.
  • the SLM pixels may be used with a binary on-off functionality when light emitting pixels (e.g., LEDs) are modulated separately.
  • An LCD panel can also be used for pixel intensity modulation in some embodiments, which may reduce complexity of light emitting layer controls. Switching speed requirements for the SLM may be relatively easily satisfied, as flicker-free images of ~60 Hz may be achieved with the SLM.
  • primary 3D image generation is performed by a multiview display module behind the aperture controlling structure, and the SLM is only used for passing or blocking parts of beams that are directed to a viewer’s eyes, making the human visual system the determining factor. Overall, SLM and light emitting layer controls may be fitted together and synchronized, but currently available LCD panel refresh frequencies are adequate to achieve this.
  • the SLM may comprise an electrophoretic display (e.g., e- ink) or the like, using a display cell structure capable of being made transmissive.
  • an SLM may comprise a black-and-white panel. If light emitting pixels are made to generate only white light, an SLM may be used for color filtering and the light emitting layer may be simplified. In such cases color filter arrangements may be configured such that longer wavelength light passes the SLM layer with somewhat larger apertures than shorter wavelength light, to compensate for the larger diffraction blur occurring with red light (i.e., longer wavelengths).
  • thickness of the SLM may be minimized to maintain homogeneous LF pixel beam intensities across the pixel FOV(s). If the structure containing the aperture modulator is very thick, the off-axis beams will hit the aperture shapes from diverse angles and with different parts of the beam. This causes unequal beam intensities to different view angles and it may be compensated with a calibration method that e.g. lowers the intensity of emitted light at the central part of the emitter matrix in relation to the emitters on the outer edges of the matrix.
  • Current LCD display manufacturing technologies can be used to make panels that are below 1 mm thick, which allow for LF pixels that have aperture sizes in the range of ~0.5 - 1.0 mm.
  • LCD layer thickness below 0.5 mm is feasible.
  • Thin LCDs can be used especially if the aperture modulator is integrated to the lens sheets by e.g. lamination and the microlens layers provide enough support and protection for the LCD stack.
  • the SLM may be synchronized to light emitting pixels and image content.
  • the SLM may be used to create a series of successive 2D images at different focal surfaces one after another, as “slices” of the 3D image. This may simplify rendering, such as where only a few focal surfaces are used.
  • the SLM may adjust the relative brightness levels of the voxels of each focal plane in a more continuous manner, so that the combination of focal planes may cause the 3D light field image to appear more continuous in the depth direction.
  • the SLM may be used for final selection of blocked and passed beam focus layers, and in these cases the SLM may be controlled in consideration of the view direction(s) determined, for example, with the additional use of an active eye tracking module.
  • rendering speed requirements may be eased by grouping the light emitting layer and SLM pixels such that an image is displayed as interlaced rather than successive single pixels.
  • the number of pixels included in one group may be based on light emitting pixel size and pitch, SLM pixel size, and size of the beam aperture to be controlled with the SLM.
  • FIG. 17 is a schematic perspective view illustrating an exemplary embodiment 1700 of a curved 3D LF display according to some embodiments.
  • a tabletop 3D LF display device 1706 with curved 75” screen may be placed at a 1500mm distance 1710 from a single viewer 1702.
  • a display may form a light field image to a volumetric virtual viewing zone 1704, which may cover the distance 1708 from the display surface to 1000mm from the viewer’s position.
  • the display may generate multiple views both in the horizontal and vertical directions using the LF pixel structure(s) as previously discussed.
  • FIG. 18 is a schematic plan view illustrating an example display structure 1800 using LF pixels according to some embodiments.
  • Light is emitted from µLED arrays 1802 (for example having component size 2 µm x 2 µm, pitch 3 µm, and 30 µm height for a sub-array).
  • Rotationally symmetric collimator lenses are placed at a distance 1820 (such as 2mm) from the µLEDs, and the array may comprise a microlens sheet.
  • the hyperbolic collimator lenses 1804 may be multifocal (such as with a central focal length of ~2mm).
  • a focusing lens array 1808 (for example also a 0.5mm thick 1814 microlens sheet formed of polycarbonate) may have rotationally symmetric spherical surfaces (for example having a relatively large radius of 80mm).
  • the aperture sizes 1810 of the collimating and focus lenses may be selected (such as at 750 µm).
  • An LCD panel stack 1806 with polarizers and a patterned liquid crystal layer (for example 0.5mm thick 1816) may be laminated in between the collimating lens array and the focusing lens array sheets.
  • the LCD may have concentric rings covering the lens apertures, permitting adjustment of the light beam limiting aperture size and shape of each LF pixel.
  • the central LCD aperture of each LF pixel may be a disc (for example with a diameter of 220 µm).
  • An outermost part of the LCD apertures may have a circular inner diameter (for example of 440 µm) and a rectangular outer shape (for example a size of 750 µm x 750 µm).
  • the third, intermediate aperture may be positioned between these aperture shapes and block or pass a ring-shaped section of each beam generated by the µLEDs and collimating lens.
  • the whole optical structure may be of minimal thickness (in the example, only 3.5mm thick) and the LF pixels are capable of projecting multiple beams which can be focused to three different focal surface layers in front of the display using the three different aperture shapes in each pixel.
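For reference, the example dimensions above can be collected into a single configuration record; the field names are mine, the values are those quoted in the example.

```python
# The example LF pixel dimensions described above, gathered into one configuration record.
from dataclasses import dataclass

@dataclass
class LFPixelConfig:
    emitter_size_um: float = 2.0          # uLED chip size
    emitter_pitch_um: float = 3.0         # uLED pitch in the sub-array
    collimator_focal_mm: float = 2.0      # central focal length of the hyperbolic collimator
    collimator_distance_mm: float = 2.0   # distance from uLEDs to collimator lens
    lens_aperture_um: float = 750.0       # aperture of collimating and focusing lenses
    lens_sheet_thickness_mm: float = 0.5  # polycarbonate microlens sheets
    lcd_thickness_mm: float = 0.5         # laminated LCD aperture stack
    central_aperture_um: float = 220.0    # innermost disc aperture (closest focal plane)
    outer_ring_inner_um: float = 440.0    # inner diameter of the outermost aperture ring
    total_thickness_mm: float = 3.5       # whole optical structure

config = LFPixelConfig()
print(config)
```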
  • FIG. 19 is a schematic plan view illustrating an example horizontal viewing geometry 1900 available with the exemplary display structure of FIG. 18 according to some embodiments.
  • the red, green, and blue components have the same size and are bonded as alternating rows in the vertical direction. Their colors combine in the projected beams at the different focal layers when crossing beams are combined into voxels.
  • Both the collimator and focusing lens arrays may have integrated diffractive structures that compensate color dispersion in the lens materials.
  • Each LF pixel may have, for example, 22 red, green, and blue rows of 100 LEDs, which exemplarily may be used for projecting 100 unique views in a horizontal direction with a total FOV 1910 of ~8.5 degrees.
  • the distance 1908 from the display surface to the viewer’s position is 1000mm for this embodiment.
  • the whole display 1902 of FIG. 19 is curved with a radius 1912 of 1500mm in the horizontal direction. This makes the single LF pixel viewing windows overlap and causes an ~225mm wide viewing window 1906 to form for a single user at the designated 1500mm viewing distance.
  • a cylindrical polycarbonate Fresnel lens sheet with 1500mm focal length may be used for overlapping the vertical views.
  • a created visibility zone around the viewing window 1906 may enable the viewer to move their head ~75mm left or right, as well as ~150 mm forward and ~190mm backwards from the nominal position. Both eye pupils of an average person will stay inside the visibility zone with these measurements and the 3D image can be seen in the whole display viewing zone 1904 shown in FIG. 19. Tolerance for the head position may be improved by display structures which permit adjustment of the display tilt angle in the vertical direction or display stand height, and is generally adequate for a single viewer sitting in front of a display in a stable setting (e.g., at a desk or table).
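The quoted viewing-window and head-movement figures are consistent with a simple overlap estimate. The sketch below assumes that, with the display curvature radius equal to the viewing distance, all LF pixel FOVs are centred on the viewer, so the common overlap is roughly the footprint of one LF pixel FOV; under that assumption it reproduces values close to the ~225 mm window and ~75 mm sideways tolerance quoted above.

```python
# Sketch of the viewing-window estimate for the curved-display geometry described above.
import math

viewing_distance_mm = 1500.0   # display curvature radius equals the designated viewing distance
lf_pixel_fov_deg = 8.5         # LF pixel FOV from the example
pupil_distance_mm = 64.0       # average distance between the viewer's eye pupils

window_mm = 2 * viewing_distance_mm * math.tan(math.radians(lf_pixel_fov_deg / 2))
head_tolerance_mm = (window_mm - pupil_distance_mm) / 2   # sideways movement keeping both pupils inside

print(f"viewing window width: ~{window_mm:.0f} mm")            # ~223 mm, close to the ~225 mm quoted
print(f"sideways head tolerance: ~{head_tolerance_mm:.0f} mm") # ~80 mm, close to the ~75 mm quoted
```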
  • an example light field display configuration has a display optical structure at a 1500mm distance from the viewing window.
  • the example light field display configuration has four intermediate focal planes located between the device and the viewer at distances of 15mm, 150mm, 325mm, and 500mm from the display surface.
  • a pair of rectangular light emitters created with µLEDs has a 2 µm x 2 µm surface area and 3 µm pitch. These light emitters were used in simulations that traced light distributions from the display optical structure to the four intermediate focal planes and a viewing window. Simulations were made using only green 550nm light, which represents an average wavelength in the visible light range. Illumination distribution simulation results show the geometric imaging effects. Diffraction effects are estimated separately for an aperture based on a simulated Airy disc radius. With blue light sources, the diffraction effects are somewhat smaller than with the green sources, and with red light sources, the diffraction effects are somewhat larger.
  • FIG. 20 is a schematic plan view illustrating exemplary simulation ray trace images according to some embodiments.
  • with the central apertures, the rays focus to a plane (which is 15mm away from the device’s display surface for this example configuration).
  • example outer edge apertures and example intermediate apertures produce beams that are more collimated and focus the pixel images at much further distances from the device.
  • the exemplary ray trace scenario 2000 shown in FIG. 20 illustrates how three neighboring LF pixels 2002, 2004, 2006 may be used together in creation of two focused voxels 2008, 2010 at a focal plane (which may be at a 15mm distance from the display surface, for example).
  • the upper voxel 2008 may be created by activating three LEDs that are located 39 µm above the optical axis in the topmost LF pixel 2002, 57 µm below the optical axis in the intermediate LF pixel 2004, and 150 µm below the optical axis in the bottom LF pixel 2006.
  • the lower voxel 2010 may be created in a similar way as the upper voxel 2008.
  • This scenario 2000 shows how a voxel may be created on a focal surface by crossing beams emitted from multiple LF pixels.
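A simplified way to see how the emitter offsets in FIG. 20 arise is to treat each LF pixel as a small projector with an effective focal length and to pick, for each pixel, the emitter whose chief ray passes through the desired voxel. The sketch below uses assumed values (a 2 mm effective focal length, 750 µm pixel pitch, and a voxel 0.4 mm off-axis at 15 mm depth); it ignores the focusing lens and the aperture selection, so the resulting offsets are only of the same order as those quoted for FIG. 20, not identical.

```python
# Simplified (pinhole-projector) sketch of choosing which emitter to light in each LF pixel
# so that the projected beams cross at a desired voxel position in front of the display.

def emitter_offset_um(voxel_y_mm: float, voxel_depth_mm: float,
                      pixel_axis_y_mm: float, focal_length_mm: float) -> float:
    """Lateral emitter offset (um) from the LF pixel's optical axis needed to aim the
    beam chief ray through a voxel at (voxel_y_mm, voxel_depth_mm) in front of the display."""
    chief_ray_slope = (voxel_y_mm - pixel_axis_y_mm) / voxel_depth_mm
    return -focal_length_mm * chief_ray_slope * 1000.0   # projection inverts the offset

focal_length = 2.0                  # assumed effective LF pixel focal length, mm
voxel_y, voxel_depth = 0.4, 15.0    # voxel 0.4 mm above the middle pixel axis, 15 mm from display

for pixel_axis in (0.75, 0.0, -0.75):   # three neighbouring LF pixels at 750 um pitch
    offset = emitter_offset_um(voxel_y, voxel_depth, pixel_axis, focal_length)
    print(f"pixel axis at {pixel_axis:+.2f} mm -> light emitter at {offset:+.0f} um from axis")
```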
  • FIG. 21 is a flowchart showing an example process for operating a spatial light modulator to generate a focal plane image according to some embodiments.
  • a light field projection process 2100 includes emitting 2102 light from an array of light emitting elements in some embodiments.
  • the light field projection process 2100 further includes collimating 2104 the emitted light from the array of light emitting elements into at least one light beam with at least one collimating optical element.
  • the light field projection process 2100 further includes operating 2106 a spatial light modulator to create a time-synchronized aperture for the light beam.
  • the time-synchronized aperture controls a portion of the light beam to be passed through the spatial light modulator.
  • the light field projection process 2100 further includes focusing 2108 the controlled portion of the light beam passed through the spatial light modulator with at least one optical element used as a focusing lens.
  • modules include hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation.
  • Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as commonly referred to as RAM, ROM, etc.
  • Examples of non-transitory computer-readable storage media include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A 3D light field display device may include an array of light-emitting elements configured to emit light, an array of multifocal lenses configured to focus the light from the array of light-emitting elements, each of the multifocal lenses having two or more portions, different portions being configured to focus the light to different distances, and a spatial light modulator (SLM) configured to control the portion of the lens upon which the light falls. The device may include an array of collimating optical elements for collimating the emitted light. Each multifocal lens may be annular in shape and configured to focus the light to different distances. The SLM may include a plurality of pixels that each correspond to a portion of a multifocal lens. The SLM may control which portion of a multifocal lens the light falls on, to select a focal distance for the light.

Description

MULTIFOCAL OPTICS FOR LIGHT FIELD DISPLAYS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a non-provisional filing of, and claims benefit under 35 U.S.C. §119(e) from, U.S. Provisional Patent Application Serial No. 62/633,047, entitled “Multifocal Optics for Light Field Displays,” filed February 20, 2018, which is hereby incorporated by reference in its entirety.
BACKGROUND
[0002] The human mind perceives and determines depths of observed objects in part by receiving signals from muscles used to orient each eye. The brain associates the relative angular orientations of the eyes with the determined depths of focus. Correct focus cues give rise to a natural blur on objects outside of an observed focal plane and a natural dynamic parallax effect.
[0003] One type of 3D display capable of providing correct focus cues uses volumetric display techniques that can produce 3D images in true 3D space. Each “voxel” of a 3D image is located physically at the spatial position where it is supposed to be and reflects or emits light from that position toward the observers to form a real image in the eyes of viewers. The main problems with 3D volumetric displays are their low resolution, large physical size and high manufacturing costs. These issues make them too cumbersome to use outside of special cases, e.g., product displays, museums, shows, etc.
[0004] Another type of 3D display device capable of providing correct retinal focus cues is the holographic display. Holographic displays aim to reconstruct whole light wavefronts scattered from objects in natural settings. The main problem with this technology is the lack of a suitable Spatial Light Modulator (SLM) component that could be used in the creation of the extremely detailed wavefronts.
[0005] A further type of 3D display technology capable of providing natural retinal focus cues is called the Light Field (LF) display. LF display systems are designed to create so-called light fields that represent light rays travelling in space in all directions. LF systems aim to control light emissions both in spatial and angular domains, unlike the conventional stereoscopic 3D displays that can basically only control the spatial domain with higher pixel densities. There are at least two fundamentally different ways to create light fields.
[0006] In a first approach, parallax is created across each individual eye of the viewer producing the correct retinal blur corresponding to the 3D location of the object being viewed. This can be done by presenting multiple views per single eye.
[0007] The second approach is a multi-focal-plane approach, in which an object’s image is projected to an appropriate focal plane corresponding to its 3D location. Many light field displays use one of these two approaches. The first approach is usually more suitable for a head mounted single-user device as the locations of eye pupils are much easier to determine and the eyes are closer to the display making it possible to generate the required dense field of light rays. The second approach is better suited for displays that are located at a distance from the viewer(s) and could be used without headgear.
[0008] In current relatively low density multiview imaging displays, the views change in a coarse stepwise fashion as the viewer moves in front of the device. This lowers the quality of 3D experience and can even cause a complete breakdown of 3D perception. In order to mitigate this problem (together with the VAC), some Super Multi View (SMV) techniques have been tested with as many as 512 views. The idea is to generate an extremely large number of views so as to make any transition between two viewpoints very smooth. If the light from at least two images from slightly different viewpoints enters the eye pupil simultaneously, a much more realistic visual experience follows. In this case, motion parallax effects resemble the natural conditions better as the brain unconsciously predicts the image change due to motion.
[0009] The SMV condition can be met by reducing the interval between two views at the correct viewing distance to a smaller value than the size of the eye pupil. At normal illumination conditions, the human pupil is generally estimated to be about 4 mm in diameter. If ambient light levels are high (e.g., in sunlight), the diameter can be as small as 1.5 mm and in dark conditions as large as 8 mm. The maximum angular density that can be achieved with SMV displays is limited by diffraction and there is an inverse relationship between spatial resolution (pixel size) and angular resolution. Diffraction increases the angular spread of a light beam passing through an aperture and this effect may be taken into account in the design of very high density SMV displays.
[0010] Different existing 3D displays can be classified on the basis of their form-factors into four different categories.
[0011] Head-mounted devices (HMD) occupy less space than goggleless solutions, which also means that they can be made with smaller components and less materials making them relatively low cost. However, as head mounted VR goggles and smart glasses are single user devices, they do not allow shared experiences as naturally as goggleless solutions.
[0012] Volumetric 3D displays take space from all three spatial directions and require a lot of physical material making these systems easily heavy, expensive to manufacture and difficult to transport. Due to the heavy use of materials, the volumetric displays also tend to have small “windows” and limited field-of-view (FOV).
[0013] Screen-based 3D displays typically have one large but flat component, which is the screen and a system that projects the image(s) over free space from a distance. These systems can be made more compact for transportation and they also cover much larger FOVs than, for example, volumetric displays. These systems are complex and expensive as they require projector sub-assemblies and, for example, accurate alignment between the different parts, making them best for professional use cases.
[0014] Flat form-factor 3D displays may require a lot of space in two spatial directions, but as the 3rd direction is only virtual, they are relatively easy to transport to and assemble in different environments. As the devices are flat, at least some optical components used in them are more likely to be manufactured in sheet or roll format making them relatively low cost in large volumes.
[0015] Systems and methods set forth herein address these issues, and others.
SUMMARY
[0016] Systems and methods set forth herein may control multifocal optical features of a multivew display structure to create multiple focal planes.
[0017] In one embodiment, there is a method of operating a multiview display as a 3D light field display, comprising controlling a spatial light modulator to adjust apertures in front of projector cells of the multiview display by selectively blocking parts of projected beams to create focal planes at different distances from the multiview display.
[0018] In one embodiment, there is a method of operating a multiview display as a 3D light field display, comprising operating a plurality of light field pixels of a multiview display to create a plurality of focal planes at a plurality of distances from the multiview display, wherein a first light field pixel comprises a multiview projector cell of the multiview display, a region of a spatial light modulator (SLM) of the multiview display, and a focusing lens of the multiview display.
[0019] In one embodiment, there is a display structure comprising a plurality of light field pixels, wherein each light field pixel comprises: a multiview projector cell of the display structure; a portion of a spatial light modulator (SLM) of the display structure; and a focusing lens of the display structure.
[0020] In one embodiment, systems and methods set forth herein may be used for a multi-focal plane light field display. Such a multi-focal plane light field display may comprise emitting light from a plurality of light emitting elements; collimating the emitted light from the plurality of light emitting elements into at least one light beam with at least one collimating optical element; operating a spatial light modulator to create a time synchronized aperture for the at least one light beam, the aperture controlling which portion of the at least one light beam is passed through the spatial light modulator; and focusing the controlled portion of the at least one light beam passed through the spatial light modulator with at least one optical element used as a focusing lens.
[0021] An example apparatus in accordance with some embodiments may include: an array of light- emitting elements configured to emit light; an array of multifocal lenses configured to focus the light from the array of light-emitting elements, each of the multifocal lenses having two or more portions, different portions being configured to focus the light to different distances; and a spatial light modulator configured to control the portion of each lens upon which the light falls.
[0022] Some embodiments of the example apparatus may further include an array of collimating optical elements between the array of light-emitting elements and the array of multifocal lenses, the array of collimating optical elements being configured to collimate the light from the array of light-emitting elements.
[0023] For some embodiments of the example apparatus, at least two of the portions of each multifocal lens are substantially annular in shape, each annular portion being configured to focus the light to a different respective focal distance.
[0024] For some embodiments of the example apparatus, the spatial light modulator includes, for each of the multifocal lenses, a plurality of concentric annular pixels, each pixel corresponding to a respective portion of the corresponding multifocal lens.
[0025] For some embodiments of the example apparatus, the apparatus includes a plurality of light field pixels, each light field pixel includes one of the multifocal lenses and a corresponding plurality of the light- emitting elements.
[0026] Some embodiments of the example apparatus may further include opaque boundary structures between adjacent light field pixels.
[0027] For some embodiments of the example apparatus, the spatial light modulator is a liquid crystal display (LCD).
[0028] For some embodiments of the example apparatus, the multifocal lenses are axicon lenses.
[0029] For some embodiments of the example apparatus, the multifocal lenses are hyperbolic lenses.
[0030] For some embodiments of the example apparatus, a first portion of the two or more portions of each lens is a diffractive lens, and a second portion of the two or more portions of each lens is a refractive lens.
[0031] An example method in accordance with some embodiments may include: emitting light from a light-emitting element of an array of light-emitting elements toward a corresponding multifocal lens in an array of multifocal lenses; operating a spatial light modulator between the array of light-emitting elements and the array of multifocal lenses to control which portion of the multifocal lens the light is incident on; and focusing the light by the multifocal lens to a focal distance associated with the portion of the multifocal lens the light is incident on.
[0032] Some embodiments of the example method may further include collimating the emitted light before it reaches the spatial light modulator.
[0033] Some embodiments of the example method may further include displaying at least one voxel by focusing light from a plurality of multifocal lenses onto a common focal spot.
[0034] For some embodiments of the example method, operating the spatial light modulator may include generating a substantially annular aperture such that the light is incident on a substantially annular portion of the multifocal lens.
[0035] For some embodiments of the example method, a size of the substantially annular aperture is selected to determine the focal distance.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] FIG. 1 A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented.
[0037] FIG. 1 B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to some embodiments.
[0038] FIGs. 2A-2B are schematic plan views illustrating example focal distances and eye convergence angles according to some embodiments.
[0039] FIGs. 3A-3C are schematic plan views illustrating example eye focus angles (FA) and convergence angles (CA) together with pixel clusters on a flat Light Field (LF) display according to some embodiments.
[0040] FIGs. 4A-4C depict schematic perspective views illustrating example levels of occlusion of Light Fields directed towards a pupil according to some embodiments.
[0041] FIG. 5 is a schematic plan view illustrating example light emission angles directed towards respective viewers according to some embodiments.
[0042] FIG. 6A depicts schematic plan views illustrating an example of increasing beam divergence caused by geometric factors according to some embodiments.
[0043] FIG. 6B depicts schematic plan views illustrating an example of increasing beam divergence caused by diffraction according to some embodiments.
[0044] FIG. 7 depicts schematic plan views illustrating three example lenses having various magnification ratios according to some embodiments.
[0045] FIGs. 8A-8D are schematic plan views illustrating example geometric and diffraction effects for one or two extended sources imaged to a fixed distance with a fixed magnification according to some embodiments.
[0046] FIG. 9A is a schematic plan view illustrating an example functioning of a standard lens according to some embodiments.
[0047] FIG. 9B is a schematic plan view illustrating an example functioning of a multifocal axicon lens according to some embodiments.
[0048] FIG. 10 is a schematic plan view illustrating an example ray trace picture of an imaging system with two spherical plano-convex lenses according to some embodiments.
[0049] FIG. 11 is a schematic plan view illustrating example focal points resulting from different apertures for imaging a point source according to some embodiments.
[0050] FIG. 12 is a schematic plan view illustrating example extended sources imaged with spherical lens pairs and apertures according to some embodiments.
[0051] FIG. 13 is a schematic plan view illustrating an example LF display structure according to some embodiments.
[0052] FIGs. 14A-14B are schematic plan views illustrating example display optical function scenarios if more than one light emitter is activated and more than one LF display pixel is used simultaneously according to some embodiments.
[0053] FIG. 15 is a schematic plan view illustrating an exemplary viewing geometry available with a 3D LF display structure according to some embodiments.
[0054] FIGs. 16A-16B are schematic plan views illustrating exemplary viewing geometry scenarios according to some embodiments.
[0055] FIG. 17 is a schematic perspective view illustrating an exemplary embodiment of a curved 3D LF display according to some embodiments.
[0056] FIG. 18 is a schematic plan view illustrating an example display structure using LF pixels according to some embodiments.
[0057] FIG. 19 is a schematic plan view illustrating an example horizontal viewing geometry available with the exemplary display structure of FIG. 18 according to some embodiments.
[0058] FIG. 20 is a schematic plan view illustrating exemplary simulation ray trace images according to some embodiments.
[0059] FIG. 21 is a flowchart showing an example process for operating a spatial light modulator to generate a focal plane image according to some embodiments.
[0060] The entities, connections, arrangements, and the like that are depicted in— and described in connection with— the various figures are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure "depicts," what a particular element or entity in a particular figure "is" or "has," and any and all similar statements— that may in isolation and out of context be read as absolute and therefore limiting— may only properly be read as being constructively preceded by a clause such as "In at least one embodiment, ...." For brevity and clarity of presentation, this implied leading clause is not repeated ad nauseam in the detailed description.
EXAMPLE NETWORKS FOR IMPLEMENTATION OF THE EMBODIMENTS
[0061] A wireless transmit/receive unit (WTRU) may be used as a multiview display, a light field display, or a display structure in embodiments described herein.
[0062] FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
[0063] As shown in FIG. 1A, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d, any of which may be referred to as a "station" and/or a "STA", may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c and 102d may be interchangeably referred to as a UE.
[0064] The communications systems 100 may also include a base station 114a and/or a base station 1 14b. Each of the base stations 1 14a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/1 15, the Internet 110, and/or the other networks 112. By way of example, the base stations 1 14a, 1 14b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 1 14a, 1 14b are each depicted as a single element, it will be appreciated that the base stations 1 14a, 1 14b may include any number of interconnected base stations and/or network elements.
[0065] The base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
[0066] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
[0067] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
[0068] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
[0069] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
[0070] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
[0071] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[0072] The base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR, etc.) to establish a picocell or femtocell. As shown in FIG. 1A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the CN 106/115.
[0073] The RAN 104/113 may be in communication with the CN 106/1 15, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like. The CN 106/1 15 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 1A, it will be appreciated that the RAN 104/1 13 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/1 13 or a different RAT. For example, in addition to being connected to the RAN 104/113, which may be utilizing a NR radio technology, the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
[0074] The CN 106/1 15 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 1 12. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 1 12 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/1 13 or a different RAT.
[0075] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
[0076] FIG. 1 B is a system diagram illustrating an example WTRU 102. As shown in FIG. 1 B, the WTRU 102 may include a processor 1 18, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0077] The processor 1 18 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 1 18 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 1 18 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1 B depicts the processor 1 18 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
[0078] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 1 16. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0079] Although the transmit/receive element 122 is depicted in FIG. 1B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0080] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11 , for example.
[0081] The processor 1 18 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 1 18 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0082] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
[0083] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 1 16 from a base station (e.g., base stations 1 14a, 1 14b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location- determination method while remaining consistent with an embodiment.
[0084] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
[0085] The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and the downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)) may not be concurrent and/or simultaneous.
[0086] In view of FIGs. 1 A-1 B, and the corresponding description of FIGs. 1 A-1 B, one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown). The emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
[0087] The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
[0088] The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
DETAILED DESCRIPTION
[0089] Systems and methods set forth herein use multifocal optical features to create multiple focal planes with a multiview display structure based on projected beams. Such systems and methods may transform a standard multiview display into a true 3D LF display that is able to provide the necessary focus cues for the eyes of a viewer and address the VAC problem.
General 3D Image Perception
[0090] FIGs. 2A-2B are schematic plan views illustrating example focal distances and eye convergence angles according to some embodiments. FIG. 2A depicts a retinal focus scenario 200 when viewing a natural scene. FIG. 2B depicts a retinal focus scenario 220 when viewing an autostereoscopic 3D display 222. Some parts of an image generated with the configuration of FIG. 2A are blurred, whereas in an image generated with the configuration of FIG. 2B, all parts of the image are in focus. Current stereoscopic displays, commonly used in home theatres and cinemas, employ suboptimal technology for making 3D images. There is a neural connection in the human brain between light sensitive cells on the eye retinas and the cells sensing eye muscle movement. The associated areas work together when a perception of depth is created. Autostereoscopic 3D displays lack correct retinal focus cues because the image information is limited to the plane of the display. When the eyes focus to a different point than where they converge, physiological signals in the brain get mixed up. A depth cue mismatch between convergence and accommodation leads to, e.g., eye strain, fatigue, nausea and slower eye accommodation to object distance. This phenomenon is called vergence-accommodation conflict (VAC) and is a result of non-proportional depth squeezing in artificial 3D images.
[0091] FIGs. 3A-3C are schematic plan views illustrating example eye focus angles (FA) and convergence angles (CA) together with pixel clusters on a flat Light Field (LF) display according to some embodiments. Depicted are eye focus angles (FA) 308, 338 and convergence angles (CA) 306, 336 together with pixel clusters on a flat LF display 304, 334, 366 in three cases: an image point on the display surface 302 (FIG. 3A), a virtual image point behind the display surface 340 (FIG. 3B), and a virtual image at infinite distance behind the display surface 362 (FIG. 3C). The various geometric differences in each scenario 300, 330, 360 may be understood visually from the provided cases in FIGs. 3A-3C.
[0092] At least three types of 3D displays are able to provide the correct focus cues for natural 3D image perception. The first category is volumetric display techniques that can produce 3D images in true 3D space. Each "voxel" of the 3D image is located physically at the spatial position where it is supposed to be and reflects or emits light from that position toward the observers to form a real image in the eyes of viewers. The main problems with 3D volumetric displays are low resolution, large physical size and a high complexity of the systems. They are expensive to manufacture and too cumbersome to use outside special use cases like product displays, museums, etc. The second 3D display device category capable of providing correct retinal focus cues is the holographic display. These displays operate by reconstructing the whole light wavefronts scattered from objects in natural settings. One problem in this field of technology is a lack of suitable Spatial Light Modulator (SLM) components that could be used in the creation of the extremely detailed wavefronts. The third 3D display technology category capable of providing natural retinal focus cues is called the Light Field (LF) display, which is the dominant technological domain of this disclosure.
[0093] Vergence-accommodation conflict is a primary driver for moving from the current stereoscopic 3D displays to the more advanced light field systems. A flat form-factor LF 3D display is able to produce both the correct eye convergence and the correct focus angles simultaneously. FIGs. 3A-3C show these correct angles in three different 3D image content cases. In the first case (FIG. 3A) 300, an image point lies on a surface of a display 302 and only one illuminated pixel visible to both eyes is sufficient to represent the point correctly. Both eyes are focused and converged to the same point. In the second case (FIG. 3B) 330, the virtual image point is behind the display 340 and two clusters of pixels 332 are illuminated to represent the single point correctly. In addition, the directions of the light rays from these two spatially separated pixel clusters are controlled in such a way that the emitted light is visible only to the correct eye, thus enabling the eyes to converge to the same single virtual point. In the third case (FIG. 3C) 360, the virtual image is at infinity 362 behind the screen and only parallel light rays 368, 370, 372, 374, 376, 378 are emitted from the display surface 366 from two spatially separated pixel clusters 364. In this case 360, the minimum size for the pixel cluster is at least the size of the eye pupil. This size of cluster is the maximum size of pixel cluster called for on the display surface 366. In the last two presented generalized cases 330, 360, spatial and angular control of emitted light from the LF display device creates both the convergence and focus angles for natural eye responses to the 3D image content.
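As an illustration of the convergence geometry of FIG. 3B, the following is a minimal sketch (not part of the original disclosure) that computes where the two pixel clusters would sit on the display plane for an assumed voxel position, viewing distance and interpupillary distance; all coordinate conventions, function names and numerical values are hypothetical.

```python
# Hedged sketch: intersection of each eye-to-voxel line with the display plane,
# giving the horizontal positions of the two pixel clusters of FIG. 3B.
# Coordinate system, parameter names and numbers are illustrative assumptions.

def cluster_position_mm(eye_x_mm, viewing_distance_mm, voxel_x_mm, voxel_depth_mm):
    """Display plane is z = 0, viewer at z = +viewing_distance_mm,
    virtual voxel at z = -voxel_depth_mm (behind the display)."""
    t = voxel_depth_mm / (viewing_distance_mm + voxel_depth_mm)
    return voxel_x_mm + t * (eye_x_mm - voxel_x_mm)

if __name__ == "__main__":
    ipd_mm = 64.0                  # assumed interpupillary distance
    distance_mm = 1000.0           # assumed viewing distance
    vx_mm, depth_mm = 0.0, 200.0   # assumed on-axis voxel 200 mm behind the display

    left = cluster_position_mm(-ipd_mm / 2, distance_mm, vx_mm, depth_mm)
    right = cluster_position_mm(+ipd_mm / 2, distance_mm, vx_mm, depth_mm)
    print(f"left-eye cluster at {left:+.1f} mm, right-eye cluster at {right:+.1f} mm")
    # Two spatially separated clusters represent the same virtual point, as in FIG. 3B.
```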
[0094] FIGs. 4A-4C depict schematic perspective views illustrating example levels of occlusion of Light Fields directed towards a pupil according to some embodiments. LF systems aim to control light emissions both in spatial and angular domains, unlike the conventional stereoscopic 3D displays that can only control the spatial domain. There are two different ways to create light fields. In a first approach, parallax is created across each eye of the viewer, producing correct retinal blur corresponding to a 3D location of the object being viewed. A second approach is a multi-focal-plane approach, in which the object's image is projected to a focal plane corresponding to its 3D location. The first approach is usually more suitable to a head mounted single-user device, as the locations of the eye pupils are much easier to determine and the eyes are closer to the display, making it easier to provide the desired dense field of light rays. The second approach is better suited for displays that are located at a distance from the viewer and may be used without headgear. In one embodiment of the present system, correct retinal blur is achieved by presenting multiple views per single eye. FIGs. 4A-4C show occlusions of a scene caused by parallax across the pupil. In FIG. 4A, only a portion of a person's body (their foot) is visible and the rest of the person is blocked by an occlusion 402. This view 400 corresponds with a left field view from a left side of the pupil. In FIG. 4B, a larger portion of the person's body is visible but a small portion of the person is still blocked by an occlusion 422. This view 420 corresponds with a central field view from a center of the pupil. In FIG. 4C, the entirety of the person's body is visible, and an occlusion 442 does not block the view of the person. This view 440 corresponds with a right field view from a right side of the pupil. The resulting varied images represent views that could be presented in order to produce correct retinal blur. If the light from at least two images from slightly different viewpoints enters the eye pupil simultaneously, a more realistic visual experience follows. In this case, motion parallax effects better resemble natural conditions as the brain unconsciously predicts the image change due to motion. A Super Multi View (SMV) effect can be achieved by ensuring the interval between two views at the correct viewing distance is smaller than the size of the eye pupil.
[0095] At normal illumination conditions the human pupil is generally estimated to be ~4 mm in diameter. If the ambient light levels are high (e.g., in sunlight), the diameter can be as small as 1.5 mm and in dark conditions as large as 8 mm. The maximum angular density that can be achieved with SMV displays is limited by diffraction and there is an inverse relationship between spatial resolution (pixel size) and angular resolution. Diffraction increases the angular spread of a light beam passing through an aperture and this effect should be taken into account in the design of very high density SMV displays.
[0096] FIG. 5 is a schematic plan view illustrating example light emission angles directed towards respective viewers according to some embodiments. In particular, FIG. 5 shows a schematic view 500 of the geometry involved in creation of the light emission angles from an LF display 518. The LF display 518 in FIG. 5 produces the desired retinal focus cues and multiple views of 3D content in a single flat form-factor panel. A single 3D display surface projects at least two different views to the two eyes of a single user in order to create a coarse 3D perception effect. The brain uses these two different eye images to determine 3D distance. Logically this is based on triangulation and interpupillary distance. To provide this effect, at least two views are projected into a Single user Viewing Angle (SVA) 510, as shown in FIG. 5. Furthermore, in at least one embodiment the LF display projects at least two different views inside a single eye pupil in order to provide the correct retinal focus cues. For optical design purposes, an "eye-box" is usually defined around the viewer eye pupil when determining the volume of space within which a viewable image is formed. The eye-box width 508 is the distance from the left-most part of the eye to the right-most part of the eye. In some embodiments of the LF display, at least two partially overlapping views are projected inside an Eye-Box Angle (EBA) 514 covered by the eye-box at a certain viewing distance 516. In some embodiments, the LF display is viewed by multiple viewers 502, 504, 506 looking at the display 518 from different viewing angles, creating images for each viewer corresponding to a virtual object point 520. In such embodiments, several different views of the same 3D content are projected to respective viewers covering a whole intended Multi user Viewing Angle (MVA) 512.
[0097] The following example calculations concern the above geometry. The values in the ensuing scenario are provided for the sake of clarity and are not meant to be limiting in any way. If the LF display is positioned at 1 m distance from a single viewer and an eye-box width is set to 10 mm, then the value for EBA would be ~0.6 degrees and at least one view of the 3D image content is generated for each ~0.3 degree angle. As the standard human interpupillary distance is ~65 mm, the SVA would be ~4.3 degrees and around 14 different views would be employed just for a single viewer positioned at the direction of the display normal (if the whole facial area of the viewer is covered). If the display is intended to be used with multiple users, all positioned inside a moderate MVA of 90 degrees, then a total of 300 different views is called for. Similar calculations for a display positioned at 30 cm distance (e.g., a mobile phone display) would result in only 90 different views being sufficient for a horizontal multiview angle of 90 degrees. And if the display is positioned 3 m away (e.g., a television screen) from the viewers, a total of 900 different views would be used to cover the same 90 degree multiview angle.
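The arithmetic in the preceding paragraph can be reproduced with a short calculation. The sketch below is illustrative only; the parameter names (eye_box_mm, views_per_eye_box, and so on) are assumptions rather than terms from the disclosure.

```python
import math

# Hedged sketch of the view-count estimate: eye-box angle at a viewing distance,
# at least two views per eye-box, spread over a 90 degree multiuser viewing angle.

def views_needed(viewing_distance_mm, eye_box_mm=10.0, views_per_eye_box=2,
                 multiview_angle_deg=90.0):
    """Approximate number of views for the multiuser viewing angle."""
    eba_deg = math.degrees(2 * math.atan((eye_box_mm / 2) / viewing_distance_mm))
    view_step_deg = eba_deg / views_per_eye_box
    return multiview_angle_deg / view_step_deg

for d_mm in (300.0, 1000.0, 3000.0):  # mobile phone, desktop, television distances
    print(f"{d_mm / 1000:.1f} m -> ~{views_needed(d_mm):.0f} views over 90 degrees")
# Prints roughly 90, 300 and 900 views, matching the text's estimates.
```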
[0098] The calculations indicate that an LF multiview system is easier to create for use cases wherein the display is closer to the viewers than for those in which the users are further away. Furthermore, FIG. 5 illustrates the fact that three different angular ranges are covered simultaneously by the LF display: one for covering the pupil of a single eye, one for covering the two eyes of a single user, and one for the multiuser case. Of these three angular ranges, the latter two may be resolved by using either several light emitting pixels under a lenticular or parallax barrier structure or by using several projectors with a common screen. These techniques are suitable for the creation of relatively large light emission angles utilized in the creation of multiple views. However, these systems lack the angular resolution required to address the eye pupil, which means that they are not capable of producing the correct retinal focus cues and are susceptible to the VAC.
[0099] A flat-panel-type multiview LF display may be based on spatial multiplexing alone. A row or matrix of light emitting pixels (LF sub-pixels) may be located behind a lenticular lens sheet or microlens array, and each pixel is projected to a unique view direction in front of the display structure. The more pixels there are on the light emitting layer behind each lenticular feature, the more views can be generated. This leads to a direct trade-off between the number of unique views generated and the spatial resolution. If a smaller LF pixel size is desired from the 3D display, the size of individual sub-pixels may be reduced; alternatively, a smaller number of viewing directions may be generated. A high quality LF display has both high spatial and angular resolutions.
Optical Features Limiting the Resolution of Flat Form Factor LF Displays
[0100] Generating a high-resolution LF image in some embodiments comprises projecting a plurality of high-resolution, depth-dependent, 2D images onto different focal planes using crossing beams. A distance between each focal plane should be kept inside the human visual system depth resolution. A position at which one or more beams intersect is called a voxel. Each beam of the voxel is tightly collimated and has a narrow diameter. Furthermore, each beam waist is collocated with the position at which the beams intersect (i.e., the voxel). This helps to avoid contradictory focus cues from being received by an observer. If a beam diameter is large, the voxel formed at the beam crossing is imaged to an eye's retina as a large spot. A large divergence value indicates at least two relationships: (i) beam diameter increases as the distance between a given voxel and an observer's eye becomes smaller, and (ii) virtual focal plane spatial resolution decreases as the distance between a given voxel and an observer's eye becomes smaller. A native resolution at the eye increases as the distance between a given voxel and an observer's eye becomes smaller.
[0101] FIG. 6A depicts schematic plan views illustrating an example 600 of increasing beam divergence caused by geometric factors according to some embodiments, whereas FIG. 6B depicts schematic plan views illustrating an example 650 of increasing beam divergence caused by diffraction according to some embodiments. In the case of an ideal lens, achievable light beam collimation is dependent on two geometric factors: (i) a size of the light source and (ii) a focal length of the lens. Perfect collimation 608, that is collimation without any beam divergence, can only be achieved in a theoretical scenario in which a single color point source (PS) 602 is positioned exactly at a focal length distance from an ideal positive lens. This case is pictured in a top-most example of FIG. 6A. Unfortunately, all real-world light sources have a non-zero surface area from which the light is emitted, making them extended sources (ES) 604, 606. As each point of the source is separately imaged by the lens, the total beam may be thought of as comprising a plurality of collimated sub-beams that propagate to somewhat different directions after the lens. The lower two cases presented in FIG. 6A show that as an ES grows larger, the total beam divergence 610, 612 increases. This geometric factor cannot be avoided with any optical means. With relatively large light sources, divergence coming from system optical geometry is a prohibitively dominating feature.
[0102] Another feature causing beam divergence is diffraction. The term refers to various phenomena that occur when a wave (e.g., of light) encounters an obstacle or a slit (e.g., in a grating). It can be thought of as the bending of light around the corners of an aperture into a region of geometric shadow. Diffraction effects can be found in all imaging systems and they cannot be removed even with a perfect lens design that is able to balance out all optical aberrations. In fact, a lens that is able to reach the highest optical quality is often called "diffraction limited" as most of the blurring remaining in the image comes from diffraction. The angular resolution achievable with a diffraction limited lens can be calculated from the formula of Eq. 1:

sin θ = 1.22 * λ / D    (Eq. 1)

where λ is the wavelength of the light and D is the diameter of the entrance pupil of the lens. It can be seen from the equation above that the color of light and lens aperture size are the only things that have an influence on the amount of diffraction. FIG. 6B shows that beam divergence 658, 660, 662 is increased as lens aperture size is reduced for a single color point source (PS) 652, 654, 656. This effect can be formulated into a general rule in imaging optics design: if a design is diffraction limited, the only way to improve resolution is to make the aperture larger. Diffraction is a primary cause of beam divergence with relatively small light sources.
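Eq. 1 can be evaluated numerically to see how aperture size drives the diffraction-limited angle. The following sketch assumes an illustrative visible wavelength of 0.55 μm and a few hypothetical aperture diameters; it is not a design calculation from the disclosure.

```python
import math

# Hedged sketch of Eq. 1: sin(theta) = 1.22 * lambda / D.

def diffraction_angle_deg(wavelength_um, aperture_um):
    """Diffraction-limited angle for a circular aperture (degrees)."""
    s = 1.22 * wavelength_um / aperture_um
    if s >= 1.0:
        raise ValueError("aperture too small for this wavelength")
    return math.degrees(math.asin(s))

for d_um in (5.0, 10.0, 50.0):  # assumed aperture diameters
    theta = diffraction_angle_deg(0.55, d_um)
    print(f"D = {d_um:4.0f} um -> theta = {theta:5.2f} deg")
# Larger apertures give smaller diffraction angles, i.e. less beam divergence.
```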
[0103] As presented in FIG. 6A, the size of an extended source has a substantial effect on achievable beam divergence. The source geometry or spatial distribution is mapped by the projector lens to an angular distribution of the beam, and this can be observed in a resulting "far field pattern" of the source-lens system. In practice this means that if a collimating lens is positioned at the focal distance from a source, the source is imaged to a relatively large distance from the lens. The size of the image can be ascertained from a system "magnification ratio".
[0104] FIG. 7 depicts schematic plan views illustrating three example lenses having various magnification ratios according to some embodiments. In the case of a simple imaging lens, the system magnification ratio is calculated by dividing the distance between the lens and image with the distance between the source and lens, as illustrated in FIG. 7. If the distance 702, 732, 762 between the source 710, 740, 770 and lens 712, 742, 772 is fixed, different image distances 704, 734, 764 between lens 712, 742, 772 and image 714, 744, 774 may be achieved by changing the optical power of the lens through adjustment of the lens curvature. For the three examples 700, 730, 760 of FIG. 7, the source height 706, 736, 766 is fixed, resulting in progressively larger image heights 708, 738, 768 as the image distances 704, 734, 764 are increased. As the image distance 704, 734, 764 becomes larger (relative to the lens focal length), the desired changes in lens optical power become smaller. Extreme cases approach a situation wherein the lens is effectively collimating the emitted light into a beam that has the spatial distribution of the source mapped into the angular distribution. In these cases the source image is formed without focusing.
[0105] In flat form factor goggle-less LF displays, LF pixel projection lenses have very small focal lengths in order to achieve the flat structure. Typically, the beams from a single LF pixel are projected to a relatively large viewing distance. This means that the sources are effectively imaged with high magnification as the beams of light propagate to a viewer. For example: if the source size is 50 μm x 50 μm, the projection lens focal length is 1 mm, and the viewing distance is 1 m, the resulting magnification ratio is 1000:1. Given these conditions, the source's image will be 50 mm x 50 mm in size. This indicates that the single light emitter can be seen only with one eye inside this 50 mm diameter eye box. If the source had a diameter of 100 μm, the resulting image would be 100 mm wide and the same pixel could be visible to both eyes simultaneously - the average distance between eye pupils is only 64 mm. In the latter case, a stereoscopic 3D image would not be formed as both eyes would see the same image. The example calculation above shows how geometric parameters comprising light source size, lens focal length and viewing distance are tied to each other and affect overall performance.
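The magnification example above can be checked with a few lines of arithmetic. The sketch below uses the same illustrative numbers (50 μm and 100 μm sources, 1 mm focal length, 1 m viewing distance, 64 mm pupil separation); the function and variable names are hypothetical.

```python
# Hedged sketch: source image size at the viewing distance from the system
# magnification ratio (image distance divided by source-to-lens distance).

def source_image_width_mm(source_um, focal_length_mm, viewing_distance_mm):
    magnification = viewing_distance_mm / focal_length_mm
    return source_um * 1e-3 * magnification

ipd_mm = 64.0  # average distance between eye pupils
for src_um in (50.0, 100.0):
    w = source_image_width_mm(src_um, focal_length_mm=1.0, viewing_distance_mm=1000.0)
    visibility = "both eyes (stereo effect lost)" if w > ipd_mm else "one eye only"
    print(f"{src_um:.0f} um source -> {w:.0f} mm eye box -> visible to {visibility}")
```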
[0106] As the beams of light are projected from the LF display pixels, divergence causes the beams to expand. This applies not only to the actual beam emitted from the display towards the viewer but also to any virtual beam that appears to be emitted behind the display, converging to a single virtual focal point close to the display surface. In the case of a multiview display, this is a good thing as the divergence expands the size of the eye box. In such embodiments, the LF system is designed such that the beam size at the viewing distance does not exceed the distance between the two eyes, as that would break the stereoscopic effect. However, when creating a voxel using two or more crossing beams, on a virtual focal plane anywhere outside the display surface, the spatial resolution achievable with the beams decreases as the divergence increases. It should also be noted that if the beam size at the viewing distance is larger than the size of the eye pupil, the pupil becomes the limiting aperture of the whole optical system.
[0107] Both geometric and diffraction effects work in tandem in all optical systems. The present structure provides a means to control both geometric and diffraction effects during LF display design so as to enable a variety of solutions for voxel resolution. With very small light sources, as the optical system measurements become closer to the wavelength of light, diffraction effects start to dominate the performance of the overall LF display.
[0108] FIGs. 8A-8D are schematic plan views illustrating example geometric and diffraction effects for one or two extended sources imaged to a fixed distance with a fixed magnification according to some embodiments. For FIGs. 8A-8D, the distance 810, 836, 860, 886 from the extended source(s) (ES) 802, 822, 824, 852, 872, 874 to the lens 804, 826, 854, 876 is fixed (e.g., 10 μm) for each example. FIG. 8A shows a scenario 800 wherein a lens aperture size 804 is relatively small (e.g., 5 μm) and a Geometric Image (GI) 806 is surrounded by blur from the Diffracted Image (DI) 808, which is much larger. FIG. 8B shows a scenario 820 wherein two extended sources 822, 824 are placed side-by-side and imaged with the same small aperture lens 826 as in FIG. 8A. Even though the GIs 828, 830 of both sources are clearly separated, the two source images cannot be resolved as the diffracted images 832, 834 overlap. In practice, this means that reducing the light source size would not improve the achievable voxel resolution. The resulting source image size would be the same with two separate light sources as it would be with one larger source that covers the area of both separate emitters. In order to resolve the two source images as separate pixels/voxels, the aperture size of the imaging lens should be increased. FIG. 8C shows a scenario 850 wherein a lens with the same focal length as the one used in FIG. 8A, but with a larger aperture 854 (e.g., 10 μm), is used for imaging the extended source. In FIG. 8C, the diffraction is reduced and the DI 858 is only slightly larger than the GI 856, which has remained the same as the magnification is fixed. In scenario 870 of FIG. 8D, a larger aperture 876 (e.g., 10 μm) is used and the two geometric images 878, 880 can be resolved as the DIs 882, 884 are no longer overlapping. FIGs. 8A-8D together show that increasing the aperture size makes it possible to use two different sources and therefore improve the spatial resolution of the voxel grid.
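In the spirit of FIGs. 8A-8D, a rough resolvability check can compare the geometric separation of two source images with the diffraction blur set by the lens aperture. The criterion, dimensions and function below are illustrative assumptions, not design values from the disclosure.

```python
# Hedged sketch: two source images are treated as resolvable when their geometric
# separation at the image plane exceeds an approximate Airy-disk diameter.

def resolvable(source_pitch_um, magnification, aperture_um, image_dist_um,
               wavelength_um=0.55):
    geometric_separation = source_pitch_um * magnification
    diffraction_blur = 2.44 * wavelength_um * image_dist_um / aperture_um  # Airy dia.
    return geometric_separation > diffraction_blur

# Same assumed sources and magnification, small versus larger aperture:
for ap_um in (5.0, 10.0):
    ok = resolvable(source_pitch_um=2.0, magnification=2.0,
                    aperture_um=ap_um, image_dist_um=20.0)
    print(f"aperture {ap_um:4.1f} um -> {'resolved' if ok else 'blurred together'}")
# The smaller aperture blurs the two images together; the larger one resolves them.
```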
Multifocal Lenses
[0109] FIG. 9A is a schematic plan view illustrating an example functioning of a standard lens according to some embodiments. FIG. 9B is a schematic plan view illustrating an example functioning of a multifocal axicon lens according to some embodiments. Unlike standard lenses generally used in optical systems that focus at a single focus point 904, multifocal lenses are designed to provide focus at more than one distance from the optical element, as illustrated in FIGs. 9A (functioning 900 of a standard lens 902) and 9B (functioning 950 of a multifocal axicon lens 952). Some example embodiments use multifocal lenses. Such multifocal lenses may be divided into two different types: diffractive and refractive. Embodiments with diffractive designs may have a micro pattern at some part of the lens aperture that diffracts light creating more than two focal points (as discussed in European patent application EP2045648B1). Embodiments with refractive designs may be based on, for example, a Fresnel lens design that uses different aperture zones for creation of the multiple focal spots (as discussed in US patent 4,210,391 ). Both refractive and diffractive lenses use fairly complex optical surface features and such lenses may be manufactured by, for example, injection molding from plastic optic materials like PMMA or polycarbonate, or refractive and diffractive lenses may be replicated with UV curable materials with the help of a mold.
[0110] One specific type of a multifocal lens is called an axicon 952. It is an optical element, which creates a narrow focal line 954 along the optical axis (FIG. 9B). Light rays entering the axicon are refracted towards the optical axis with the same angle independently from the radial distance from the axis. This creates a conical wavefront, which generates a focal line covering a certain distance. The focal line can be used, for example, in part alignment or in extending the depth of focus in applications such as optical coherence tomography, astronomy, or light sectioning. Axicons can be manufactured as refractive cones from glass or plastics by grinding or molding or as diffractive circular gratings made by UV curing or embossing. They can also be made as combinations of the different optical features and as a hybrid of refractive and reflective surfaces. For some embodiments, multifocal lenses may be axicon lenses. Some embodiments of multifocal lenses may be hyperbolic lenses.
[0111] FIG. 10 is a schematic plan view illustrating an example ray trace picture 1000 of an imaging system with two spherical plano-convex lenses according to some embodiments. Imaging optical systems may have aberrations that cause errors in a generated image. Spherical aberrations may occur when incoming light rays focus at different points after passing through a spherical optical surface. Light rays passing through a lens near its optical axis are refracted less than rays closer to the edge of the lens, and as a result they end up in different focal spots. This effect is illustrated in FIG. 10, where a point source 1002 is imaged through a collimator 1004, an aperture 1006, and a focus lens 1008 into an image 1010 with a series of points 1012, 1014, 1016, 1018 on the optical axis, using two plano-convex lenses that have spherical surfaces. Spherical aberration blurs the image and is usually an undesired feature that is corrected by using multiple lens elements or aspheric surfaces in the lens design.
[0112] The spherical aberrations make a simple lens behave like a multifocal lens, and this feature can be used advantageously if there is a desire for extended depth of focus. By using a series of apertures with different radii for selectively blocking different parts of an overall lens optical aperture, a series of different focal points can be created with a single lens element. Lenses that have a perfect spherical surface shape focus rays propagating closer to the lens center to a focal spot that is further away from the lens than the focal spot achieved with rays propagating closer to the lens edge. This effect is illustrated in FIG. 11, where three identical plano-convex spherical lens pairs are used for imaging point sources to different focal points with the help of three different apertures positioned in the widest part of the beam located between the collimating and focusing lenses.
[0113] FIG. 11 is a schematic plan view illustrating example focal points resulting from different apertures for a point source according to some embodiments. The aperture 1116 blocks the central part of the beam and allows an outer ring of the beam emitted from the source 1112 to pass from the collimating lens 1114 to the focusing lens 1118. The aperture 1136 allows only the central part of the beam emitted from the source 1132 to pass from the collimating lens 1134 to the focusing lens 1138. The aperture 1126 allows only rays emitted from the source 1122 in a ring-shaped zone between the edge and center obscurations to pass from the collimating lens 1124 to the focusing lens 1128, and the focal spot 1130 is created in between the focal spots 1120, 1140.
[0114] Sources 1112, 1122, 1132 cause the same effect as sources 1142, 1152, 1162 with collimating lenses 1144, 1154, 1164 and focusing lenses 1148, 1158, 1168 that have hyperbolic shapes instead of spherical. These lenses 1144, 1154, 1164, 1148, 1158, 1168 have the same surface radius values as the top three lenses 1114, 1124, 1134, but as their conic constant is set to the value of -5, the order of focal spot locations 1150, 1160, 1170 with respect to the different apertures 1146, 1156, 1166 is reversed in comparison with FIG. 11. The conic constant causes the lens shapes to deform towards a conical shape, and the overall aspheric surface begins to somewhat resemble the shape used in axicon designs.
[0115] For some embodiments, FIG. 11 shows examples of aperture configurations within a spatial light modulator that may be used to generate a multiview display. In some embodiments, a method of emitting light from a source, collimating the emitted light, operating a spatial light modulator that includes an aperture, and focusing the light passed through the spatial light modulator is performed in a time-synchronized manner to produce a plurality of focal planes at a respective plurality of distances from the spatial light modulator. A plurality of focal spots, such as the examples shown in FIG. 11, may be combined to generate a focal plane.
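One way to picture the time-synchronized operation described above is a per-sub-frame loop that opens one annular aperture zone of the spatial light modulator at a time while the matching emitter pattern is shown. The data structures, zone radii and focal distances below are purely hypothetical and are offered only as a sketch of the idea.

```python
# Hedged sketch: each sub-frame opens one annular SLM zone so that light falls on
# the lens portion whose focal distance matches the focal plane being rendered.

from dataclasses import dataclass
from typing import List

@dataclass
class ApertureZone:
    inner_radius_um: float
    outer_radius_um: float
    focal_distance_mm: float  # focal plane this zone projects to (assumed values)

ZONES: List[ApertureZone] = [
    ApertureZone(0.0, 20.0, 500.0),   # central zone -> far focal plane
    ApertureZone(20.0, 35.0, 350.0),  # middle ring  -> mid focal plane
    ApertureZone(35.0, 50.0, 250.0),  # outer ring   -> near focal plane
]

def render_frame(focal_plane_images: List[str]) -> None:
    """One emitter pattern per focal plane, in the same order as ZONES."""
    for zone, emitter_pattern in zip(ZONES, focal_plane_images):
        open_ring = (zone.inner_radius_um, zone.outer_radius_um)
        # In hardware the SLM state and emitter pattern would be synchronized
        # within one sub-frame; here we only print the schedule.
        print(f"SLM ring {open_ring} um -> plane at {zone.focal_distance_mm} mm, "
              f"emitters: {emitter_pattern}")

render_frame(["far-plane pixels", "mid-plane pixels", "near-plane pixels"])
```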
[0116] FIG. 12 is a schematic plan view illustrating example extended sources imaged with spherical lens pairs and different apertures according to some embodiments. As previously discussed, all natural light sources have a finite surface area to be considered when designing an imaging system. In the case of multifocal lenses, the use of extended sources simply means that the source surfaces are imaged to somewhat different focal planes depending on the aperture used. FIG. 12 shows the effect when three identical plano-convex spherical lens pairs of collimator lenses 1204, 1224, 1244 and focus lenses 1208, 1228, 1248 are used for imaging extended sources 1202, 1222, 1242 with different apertures 1206, 1226, 1246 (as in FIG. 11) to different focal distances. It should be noted that when the source images 1210, 1230, 1250 are formed at different focal distances, the optical magnification also changes. In FIG. 12, the source image 1250, which is formed further away from the lens, is slightly bigger than the source image 1230, which is formed closer to the lens.
[0117] Different lens shapes and combinations of several lens elements can be effectively used for adjusting the amount of and balancing out the aberrations in an imaging system. In general, an objective is usually to make all light rays falling to the aperture of the optics from one single direction focus to one exact spot on a focal surface. Unfortunately, all refractive optical materials have somewhat different refractive indices for different colored light. This dispersion property makes the different wavelength beams focus at different distances from the lens, and an optical system may become multifocal even though the shapes are corrected for one single color. Color correction or reduction of color aberrations is often achieved with a combination of positive and negative power lenses that are made from materials with different color dispersion properties. This same effect can also be created with a hybrid diffractive-refractive structure, where the color dispersion from refraction is compensated with dispersion from diffraction.
pLED Sources for Use in Exemplary Embodiments
[0118] µLEDs are LED chips that are manufactured with the same basic techniques and from the same materials as standard LED chips in use today. However, µLEDs are miniaturized versions of the commonly available LED components, and they can be made as small as 1 µm to 10 µm in size. Currently, a dense matrix of µLEDs can have 2 µm x 2 µm chips assembled with 3 µm pitch. When compared to OLEDs, µLEDs are much more stable components and they can provide greater light intensities, which makes them useful for many applications from head mounted display systems to adaptive car headlamps (LED matrix) and TV backlights. µLEDs may be used for 3D displays, which use very dense matrices of individually addressable light emitters that can be switched on and off at very fast speeds.
[0119] One bare µLED chip can emit a specific color with a spectral width of ~20-30 nm. A white source can be created by coating the chip with a layer of phosphor, which converts the light emitted by blue or UV µLEDs into a wider white light emission spectrum. A full-color source can also be created by placing separate red, green, and blue µLED chips side by side, as a combination of these three primary colors generates a full-color pixel when the separate color emissions are combined by the human visual system. A very dense matrix designed in this style may comprise self-emitting full-color pixels that have a total width below 10 µm (3 x 3 µm pitch).
[0120] Light extraction efficiency at the semiconductor chip is one parameter that determines the electricity-to-light efficiency of µLED structures. There are several methods that aim to enhance the extraction efficiency, which is especially important with mobile devices that have a limited power supply. One such method uses a shaped plastic optical element that is integrated directly on top of a µLED chip. Due to a lower refractive index difference, integration of the plastic shape extracts more light from the chip material than in a case where the chip is surrounded by air. The plastic shape also directs the light in a way that enhances light extraction from the plastic piece and makes the emission pattern more directional. Another method comprises shaping the chip itself into a form that favors light emission angles that are more perpendicular to the front facet of the semiconductor chip. This makes it easier for the light to escape the high refractive index material of the chip. These structures also direct the light emitted from the chip. In the latter case, the extraction efficiency may be approximately twice that of regular µLEDs. Considerably more light is emitted into an emission cone of 30° in comparison with a standard chip Lambertian distribution, wherein light is distributed more evenly to the surrounding hemisphere.

Multifocal Optics for Light Field Displays
[0121] Systems and methods set forth herein are based on the use of multifocal optical features that make it possible to create multiple focal planes with a multiview display structure based on projected beams. In one embodiment of a display optical structure, light is emitted from a layer with separately controllable small emitters, such as µLEDs. A lens structure (e.g., a polycarbonate lenticular sheet) placed on top of the emitters may collimate the light into a set of beams that are used to form the image at different viewing directions. The emitters and collimator lenses form a series of projector cells that are separated from each other with opaque boundary structures that suppress crosstalk between neighboring cells.
[0122] In one embodiment, a display structure may comprise a spatial light modulator (SLM) and a focusing lens array. The SLM may be, for example, a custom LCD panel for adjusting apertures in front of the multiview display projector cells by selectively blocking parts of the projected beams. The focusing lens may be, for example, a polycarbonate lenticular sheet, which focuses the emitted beams to different focal planes. A single multifocal lens shape may be used for all projector cells and for the creation of multiple focal planes, as the SLM is used for adjusting the beam aperture(s) and, accordingly, the focal length of each focusing lens. A combination of one multiview projector cell, one section of the SLM covering the cell aperture, and one focusing lens is considered a light field pixel (LF pixel). LF pixels in the structures discussed herein may be capable of projecting multiple beams in different view directions, and the beams may be focused at multiple focal surfaces. In accordance with some embodiments set forth herein, a true LF display may be achieved without moving parts. In some embodiments, a simple multiview display based on projected beams may be transformed into a true 3D LF display with multiple focal planes to address the VAC problem.
[0123] Rendering of 3D images can be performed with the presented optical hardware structure. As the hardware can be used for selectively creating successive focal planes, the rendering functionality present in existing multiview displays may be extended by adding focal layers that are rendered sequentially.
[0124] FIG. 13 is a schematic plan view illustrating an example LF display structure according to some embodiments. Light is emitted from a layer 1302 with separately controllable small emitters, such as µLEDs. A lens structure 1304 (e.g., a polycarbonate lenticular sheet) placed on top of the emitters collimates the light into a set of beams that are used to form the image at different viewing directions. The emitters 1302 and collimator lenses 1304 form a series of projector cells that are separated from each other with opaque boundary structures 1322 that suppress crosstalk between neighboring cells.
[0125] The display structure 1300 shown in FIG. 13 includes a spatial light modulator (SLM) 1306 and a focusing lens array 1308. The SLM may be, for example, a custom LCD panel for adjusting the aperture in front of the multiview display projector cells by selectively blocking parts of the projected beams. The focusing lens 1308 may be, for example, a polycarbonate lenticular sheet, to focus the emitted beams to different focal planes. A single multifocal lens shape may be used for all projector cells and for the creation of multiple focal planes, as the SLM 1306 is used for adjusting the beam aperture and consequently the focal length of the focusing lens, as previously discussed. The combination of one multiview projector cell, one section of the SLM 1306 covering the cell aperture, and one focusing lens may be termed a LF pixel 1324. All LF pixels in the structure 1300 are capable of projecting multiple beams to different view directions, and all of the beams may be focused to multiple focal surfaces.
[0126] For a set of three scenarios, a large aperture at the LF pixel rim 1310 may be used to generate a light source image at the furthest focal point 1312. An intermediate aperture in the middle of an LF pixel 1314 may be used to generate a light source image at an intermediate focal point 1316. A small aperture at the LF pixel center 1318 may be used to generate a light source image at the closest focal point 1320. For some embodiments, a projector lens 1326 may be used to further adjust light source images for displaying to a viewer, as discussed further below. In other embodiments, the projector lens is omitted.
[0127] The physical size of the light emitting elements and the total magnification of the LF pixel optics determine the achievable spatial resolution on each 3D image virtual focal surface. When the light emitting pixels are focused on a surface that is located further away from the display device, the geometric magnification will make the pixel images appear larger than when the focal surface is located closer to the display. Diffraction may also impact the achievable resolution when the light emitter and LF pixel aperture sizes are very small, as previously discussed. In order to reduce the diffraction effects, focusing (and collimating) optical shapes may be used which make central portions of the beams focus closer to the display and edge parts of the beams focus further away. One such shape is a hyperbolic lens, as previously discussed. As the aperture size is larger when the beam edge is used, the pixel images with further focal points and greater magnification exhibit less diffraction blur than the pixel images closer to the display. The close-by pixel images may tolerate higher diffraction blur coming from the smaller central aperture, as the spatial resolution is higher due to the lower geometric magnification factor. In some cases the diffraction may even be a desired effect: it can be used for blurring out pixel boundaries when, for example, a cluster of light emitting elements is used to form one voxel on the close-by image plane, so that spatial resolution is balanced over all focal surfaces with different magnification ratios.
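As a rough illustration of this balance, the geometric source-image size can be compared with the diffraction blur for different aperture diameters. The sketch below uses assumed values loosely based on the example dimensions given later in this description (2 µm emitters, ~2 mm collimator focal length, aperture zones of 220 µm, 440 µm, and 750 µm); it is not a simulation of any specific embodiment:

```python
# Sketch (assumed values): geometric pixel-image size versus diffraction blur
# for three aperture zones. Larger apertures focus further away with greater
# magnification but proportionally less diffraction blur.

wavelength_um = 0.55        # green light
emitter_width_um = 2.0      # assumed emitter size
collimator_f_mm = 2.0       # assumed collimator focal length

cases = [
    # (label, aperture diameter in um, focal distance from the lens in mm)
    ("small central aperture, close plane", 220.0, 15.0),
    ("annular mid aperture, mid plane", 440.0, 150.0),
    ("large edge aperture, far plane", 750.0, 500.0),
]

for label, aperture_um, focal_dist_mm in cases:
    magnification = focal_dist_mm / collimator_f_mm
    geometric_um = magnification * emitter_width_um
    # Airy disc radius ~ 1.22 * lambda * focal distance / aperture diameter
    airy_um = 1.22 * wavelength_um * (focal_dist_mm * 1000.0) / aperture_um
    print(f"{label}: geometric spot ~{geometric_um:.0f} um, "
          f"diffraction radius ~{airy_um:.0f} um, ratio {airy_um / geometric_um:.1f}")
```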
[0128] For some embodiments, a light field display device may include an array of light-emitting elements configured to emit light; an array of lenses configured to focus the light from the array of light-emitting elements, such that each lens of the array of lenses is divided into two or more portions and each portion of each lens is configured to focus the light to different distances; and a spatial light modulator configured to control where the light falls on each lens. With some embodiments of the light field display device, the spatial light modulator includes a controllable aperture configured to select an optical path, such as the examples shown in FIG. 13. In some embodiments of the light field display device, the spatial light modulator is configured to select an optical path, an aperture size, and a shape of the light. For some embodiments, the light field display device includes an array of collimating optical elements configured to collimate the light from the array of light-emitting elements, such as the examples shown in FIG. 13. With some embodiments, the light field display device includes an array of light field (LF) pixels. Each LF pixel includes one or more light-emitting elements, one or more lenses (which may include separate collimating, focusing, and/or projecting lenses or a single lens combining any of these functions), and a portion of the SLM. Some embodiments have two or more LF pixels that are configured to focus the light to different distances, such as part of two or more focal planes. In some embodiments, the SLM acts as a controllable, separate aperture for each LF pixel. For some embodiments of a process performed by the light field display device, focusing a light beam includes generating a target light beam within a threshold angle of a target angle and within a threshold distance of a target focal distance, such as the three scenarios described above regarding generating light source images at three different focal distances using three different apertures. In some embodiments, the spatial light modulator is a liquid crystal display (LCD). The LCD or SLM is operated to create a time-synchronized aperture by configuring a first set of pixels of the LCD to block a portion of a light beam and by configuring a second set of pixels of the LCD to pass a portion of the light beam.
[0129] It may be noted that the term focal distance is used to refer to a distance from the LF pixel at which light is focused. In embodiments in which light is collimated before being focused by a selected portion of a multifocal lens, the focal distance may be the same as the focal length of the selected portion of the multifocal lens. In other embodiments, the focal distance may be different from the focal length of the selected portion of the multifocal lens. For example, if the light is diverging when it is incident on the selected portion of the multifocal lens, the focal distance may be greater than the focal length of the selected portion of the multifocal lens.
[0130] In some embodiments, a display structure is used as a LF image generation module together with a separate projector lens array. This scenario may produce different focal surfaces in between some layers of the display, and a final image may be formed by a front projector lens array 1326 as shown in FIG. 13. Such a modular approach may permit use of a single image generation module for different viewing scenarios by changing a front projector lens array. For example, single user and multiple user embodiments may use the same LF image generation module in combination with different front projector lens arrays.
[0131] In some embodiments, a display structure may be incorporated into a head mounted display (HMD) device. In such embodiments, the light emitters may comprise, for example, a very dense matrix of µLEDs positioned behind a microlens array with short focal lengths and a small-pixel LCD. The µLED images may be formed at distances of a few millimeters from the focusing lens array, and a pair of magnifying lenses (or the like) may project the images separately to each eye. Such lenses are commonly used in current virtual reality (VR) display devices. As the images are projected to each eye separately, two different sections of the display may also be used for producing the stereoscopic image pairs separately. This spatial division may reduce spatial multiplexing on the LF pixel level. Systems and methods as set forth herein may be utilized to add multiple focal planes to AR and VR devices, thus improving user experiences.
In one embodiment, a display structure may be used as a multi-focal plane light field display. Operating such a multi-focal plane light field display may include emitting light from a plurality of light emitting elements; collimating the emitted light from the plurality of light emitting elements into at least one light beam with at least one collimating optical element; operating a spatial light modulator to create a time-synchronized aperture for the at least one light beam, the aperture controlling which portion of the at least one light beam is passed through the spatial light modulator; and focusing the controlled portion of the at least one light beam passed through the spatial light modulator with at least one optical element used as a focusing lens. In some embodiments, illumination of the plurality of light emitting elements, the relative position of the plurality of light emitting elements to the at least one collimating optical element, and a shape and a location of the aperture controlled by the spatial light modulator are time synchronized in a manner to produce a plurality of focal planes at a plurality of distances from the multi-focal plane light field display. In some embodiments, multiple light beams from a subset of the plurality of light emitting elements are used to create a light field voxel. In some embodiments, the focused light beams may further be projected through at least one projection optical element for near-eye head mounted display device applications (e.g., HMD VR or AR devices). Some embodiments of a display structure include a Fresnel lens associated with each of a plurality of light field pixels, wherein the fields of view of the plurality of light field pixels overlap within a threshold distance of a target viewing distance. Some embodiments of a process performed by a display structure include projecting a focused light beam through at least one projection optical element configured for a near-eye display device.
[0132] FIGs. 14A-14B are schematic plan views illustrating example display optical function scenarios in which more than one light emitter is activated and more than one LF display pixel is used simultaneously, according to some embodiments. In FIG. 14A, one cell has three active light emitting components 1402. In this case the spatially separated emitters 1402 generate a series of collimated beams that propagate in different directions. The emitted beams pass through a collimator lens 1404 and an SLM 1406. Beams passing through the SLM are focused by a focusing lens 1408 to a surface 1410 at a certain distance from the display. The source images are magnified, making the voxels appear larger than the emitter surfaces within the LF pixel field of view (FOV) 1412. Due to the slightly different distances between the central and edge components in the emitter row with respect to the lens position, the virtual focal surface 1410 defined by the best focal spots of each directional beam is actually slightly curved. This curvature can be either negative or positive depending on the projection lens shapes. Single LF pixel focal surface curvature is also more noticeable at locations closer to the display than at further focal distances. At larger distances from the display, the pixel image depth of focus becomes larger and the focal surfaces approach planar surfaces.
[0133] As shown in a portion of a light field display device 1400 of FIG. 14A, the width of the whole emitter row inside a single projector cell has been magnified and the row image is wider than the original LF pixel width. This means that the row images of neighboring LF pixels will start to overlap when the focal surface is at a greater distance from the focusing lens than the source is from the collimating lens, i.e., where the magnification ratio is larger than 1:1. Accordingly, more than one LF pixel may be used for the creation of one voxel. This combination scenario 1450 is shown in FIG. 14B, where two neighboring LF pixels each have one of the active light emitters 1452, which together form one joined voxel 1460 with a voxel FOV 1462. Each light emitter emits a beam in which a portion of the beam passes through a collimator lens 1454, an SLM 1456, and a focusing lens 1458. As the pixel row images overlap, they form a focal surface which can be used for presenting 3D image information, such as successive 2D images that are cross-sections of the whole 3D image. As the projected beams are focused at these focal planes, a viewer's eyes have natural focus cues and may accommodate to the 3D image more naturally. A good quality visual experience may be achieved using only a few focal planes when the viewing distance is large enough that eye depth resolution is only ~0.6 diopters. The distance between successive focal surfaces can be further increased as the distance between the image surface and the viewer is increased.
[0134] Without any light scattering media between the display and the viewer, all of the LF pixels in the display may project emitter images towards both eyes of the viewer. However, in order to create a stereoscopic image, one emitter inside the LF pixel should not be visible to both eyes simultaneously when a created voxel is located outside the display surface. This means the field-of-view (FOV) of one LF pixel covers both eyes, but the sub-pixels inside the LF pixel(s) should have FOVs that make the beams narrower than the distance between the viewer's eye pupils (~64 mm on average) at the viewing distance. The FOV of one LF pixel and the FOVs of the single emitters may be determined by the widths of the emitter and magnification of the collimator-focus lens pair. One voxel created with a focusing beam may be visible to the eye when the beam continues its propagation after the focal point and enters the eye pupil at the designated viewing distance. The FOV of a voxel may cover both eyes simultaneously. If the voxel is visible to a single eye only, a stereoscopic effect is not formed and a 3D image is not seen. As one LF pixel emitter can be visible to only one eye at a time, the voxel FOV is increased by directing multiple crossing beams from more than one LF pixel to the same voxel inside a human persistence-of-vision (POV) time frame. In this case, the total voxel FOV may be the sum of individual emitter beam FOVs as illustrated in FIGs. 14A and 14B.

[0135] In order to make LF pixel FOVs overlap at specified viewing distances, a display may for example be curved at a particular radius, or may be such that projected beam directions are turned towards a specific point, such as with a flat Fresnel lens sheet. If the FOVs don't overlap, the LF pixels may not be seen and some parts of the 3D image may not be formed. In general, due to the limited size of displays and practical limits for possible focal distances, a viewing zone may be formed in front of and/or behind a display device where a 3D image is visible.
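The stereoscopic condition described in paragraph [0134] can be checked with simple geometry: a single emitter beam must stay narrower than the inter-pupillary distance at the viewing distance, while several crossing beams widen the voxel FOV enough to reach both eyes. The following sketch uses assumed emitter and collimator values and an assumed number of crossing beams, purely for illustration:

```python
# Sketch (assumed values): single-emitter beam width at the viewer versus the
# ~64 mm inter-pupillary distance, and the combined voxel footprint obtained by
# crossing beams from several LF pixels within the persistence-of-vision time.

viewing_distance_mm = 1500.0
pupil_distance_mm = 64.0

emitter_width_um = 2.0      # assumed emitter width
collimator_f_mm = 2.0       # assumed collimator focal length

# Angular width of one emitter beam ~ emitter width / collimator focal length
beam_angle_rad = (emitter_width_um * 1e-3) / collimator_f_mm
beam_width_mm = beam_angle_rad * viewing_distance_mm
stereo_ok = beam_width_mm < pupil_distance_mm
print(f"single-emitter beam width at viewer: ~{beam_width_mm:.1f} mm "
      f"({'narrower' if stereo_ok else 'wider'} than the pupil distance)")

# Crossing beams from neighboring LF pixels widen the voxel FOV towards both eyes
crossing_beams = 50          # assumed number of beams forming one voxel
voxel_footprint_mm = crossing_beams * beam_width_mm
print(f"voxel footprint from {crossing_beams} crossing beams: ~{voxel_footprint_mm:.0f} mm")
```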
[0136] For some embodiments, the spatial light modulator is configured to produce an increased pixel infill, for example, such as if more than one set of light emitting elements are used to generate a light field voxel. In some embodiments, two or more light beams generated by two or more light emitting elements form a light field voxel.
[0137] FIG. 15 is a schematic plan view illustrating an exemplary viewing geometry 1500 available with a 3D LF display structure according to some embodiments. In front of the display 1502, there may be a viewing zone 1504 limited by the furthest focal distance from the display 1502 with reasonable spatial resolution and by the whole display FOV 1506. A light field pixel FOV 1508 is shown for an example LF pixel. In FIG. 15, the display surface is curved with a radius about the same as the designated viewing distance. As such, the overlapping LF pixel FOVs form a visibility zone around the facial area of the viewer 1512. The size of a visibility zone 1510 may determine the amount of movement permissible for the viewer’s head while maintaining presentation of the 3D LF. Both of the viewer’s eye pupils (with a pupil distance 1514 separating the eye pupils) should be inside the visibility zone 1510 at the same time for stereoscopic image presentation.
[0138] FIGs. 16A-16B are schematic plan views illustrating exemplary viewing geometry scenarios according to some embodiments. The size of the visibility zone may be configured or adapted for particular use cases by altering the LF pixel FOVs. In the scenario 1600 of FIG. 16A, a single viewer is sitting in front of the display and both eye pupils are covered by a small visibility zone 1604 achieved with a narrow LF pixel FOV 1602. The minimum functional width of the zone is determined by the eye pupil distance (on average ~64 mm). A small width also means a small tolerance for viewing distance changes, as the narrow FOVs start to separate from each other both in front of and behind an optimal viewing location. The scenario 1650 of FIG. 16B depicts a viewing geometry where a LF pixel FOV 1652 is relatively wide, such as if there are multiple viewers inside the large visibility zone 1654 with each viewer at a different viewing distance. In the case of scenarios such as that depicted in FIG. 16B, the positional tolerances may be relatively large.
[0139] A visibility zone may be increased by increasing the FOV of each LF pixel in the display structure.
This may comprise, for example, increasing the width of a light emitter row or making the focal length of the collimating optics shorter. A maximum width for an emitter row may be determined by the width of the projector cell (LF pixel aperture), as there may not be more components in a single projector cell than can be bonded to the surface area directly below a collimating lens. If the focal length of the collimating lens is decreased, the geometric magnification increases, making it more difficult to achieve a specific voxel spatial resolution. For example, if the collimator lens focal length is halved, the LF pixel FOV is doubled, but the source image magnification to all focal planes increases by a factor of two and, accordingly, the voxel size on a given focal plane is also doubled. This resolution reduction may be compensated by decreasing the highest magnification ratio, i.e., by bringing the edge of the viewing zone closer to the display surface. This makes the total volume where a 3D image is visible shallower, and thus the visual experience more restricted. Overall, this connection between the different design parameters results in a trade-off between 3D image spatial resolution and the size of the viewing and visibility zones. If the visibility zone is increased, a choice must be made between lower resolution on the focal plane closest to the viewer and a smaller image viewing zone.
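The trade-off described above can be made concrete with a small numeric sketch. The values below (emitter row width, emitter size, focal plane distance) are assumptions chosen only to illustrate how halving the collimator focal length roughly doubles both the LF pixel FOV and the voxel size on a given focal plane:

```python
# Sketch (assumed values): LF pixel FOV and voxel size as a function of the
# collimator focal length, illustrating the resolution / field-of-view trade-off.

import math

emitter_row_width_um = 300.0   # assumed width of the emitter row in one projector cell
emitter_width_um = 2.0         # assumed single-emitter width
focal_plane_dist_mm = 500.0    # assumed focal plane distance from the display

for collimator_f_mm in (2.0, 1.0):   # halving the collimator focal length
    half_row_mm = emitter_row_width_um * 1e-3 / 2
    fov_deg = math.degrees(2 * math.atan(half_row_mm / collimator_f_mm))
    magnification = focal_plane_dist_mm / collimator_f_mm
    voxel_mm = magnification * emitter_width_um * 1e-3
    print(f"f = {collimator_f_mm:.1f} mm: LF pixel FOV ~{fov_deg:.1f} deg, "
          f"voxel ~{voxel_mm:.1f} mm on the {focal_plane_dist_mm:.0f} mm focal plane")
```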
[0140] Structures as set forth herein may cause light loss due to some of the emitted light being blocked in the aperture control layer by the SLM. The amount of lost light is determined by the pixel structure of the SLM. If a large number of different focal surfaces is desired, the LF pixel apertures may be blocked with a greater number of different aperture configurations such that only smaller portions of the emitted light are allowed to pass the aperture at a time. The beams may also be focused only in, for example, the horizontal direction (or other single direction, or the like), in which case the SLM may be a linear component allowing all of the emitted light to pass in the vertical direction. Additionally, in such a case the collimating and focusing elements may also be, for example, cylindrical rather than spherical as in the case of a two-dimensional multiview display. If unique views are desired both in the horizontal and vertical directions, SLM aperture patterns may also be two-dimensional, for example concentric rings filling the LF pixel apertures. In some embodiments, rectangular aperture shapes may also be used, such as when currently available LCD panels are used, where pixels are arranged in rectangular rows and columns. Some energy savings may be attained by utilizing reflective polarizers in the LCD stack instead of absorbing components. This may allow recycling of the light that is not in the right polarization orientation for the LCD. In this case, light leakage to the view directions should be limited such that stray light is adequately controlled.
[0141] Color images may be created by utilizing successive rows of red, green, and blue components on a light emitting layer. The colors may be combined, for example, with a separate combiner structure on top of the components or by overlapping projected beams on the focal surface layers. A diffractive structure connected to the multifocal element may also be used for color correction if refractive collimating and focusing elements are used for beam creation and focusing. Particular color combination approaches may be selected based on, for example, the types of multifocal elements used in a display structure.

[0142] In some embodiments set forth herein, the SLM may comprise an LCD panel. SLM functionality allows the use of current LCD panels in the structure of some embodiments set forth herein. The SLM pixels may be used with a binary on-off functionality when light emitting pixels (e.g., µLEDs) are modulated separately. An LCD panel can also be used for pixel intensity modulation in some embodiments, which may reduce the complexity of light emitting layer controls. Switching speed requirements for the SLM may be relatively easily satisfied, as flicker-free images at ~60 Hz may be achieved with the SLM. In some embodiments, primary 3D image generation is performed by a multiview display module behind the aperture controlling structure, and the SLM is only used for passing or blocking parts of beams that are directed to a viewer's eyes, making the human visual system the determining factor. Overall, SLM and light emitting layer controls may be fitted together and synchronized, and currently available LCD panel refresh frequencies are adequate to achieve this.
[0143] In some embodiments set forth herein, the SLM may comprise an electrophoretic display (e.g., e-ink) or the like, using a display cell structure capable of being made transmissive.
[0144] In some embodiments, an SLM may comprise a black-and-white panel. If light emitting pixels are made to generate only white light, an SLM may be used for color filtering and the light emitting layer may be simplified. In such cases, color filter arrangements may be configured such that longer wavelength light passes the SLM layer through somewhat larger apertures than shorter wavelength light, to compensate for the larger diffraction blur occurring with red light (i.e., longer wavelengths).
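One way to read the aperture scaling described in paragraph [0144] is that, since the Airy blur radius grows roughly in proportion to wavelength divided by aperture diameter, keeping the blur equal across colors calls for apertures that scale with wavelength. The sketch below assumes a 220 µm green-channel aperture (the central disc size from the example structure described later) and illustrative red and blue wavelengths:

```python
# Sketch (assumed values): scaling the color-filter aperture diameter with
# wavelength so that the diffraction (Airy) blur is roughly equal for all colors.

reference_aperture_um = 220.0     # assumed green-channel aperture diameter
reference_wavelength_nm = 550.0   # green reference wavelength

for color, wavelength_nm in (("blue", 450.0), ("green", 550.0), ("red", 630.0)):
    # Airy radius ~ 1.22 * lambda / D, so equal blur requires D proportional to lambda
    aperture_um = reference_aperture_um * wavelength_nm / reference_wavelength_nm
    print(f"{color}: aperture ~{aperture_um:.0f} um")
```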
[0145] In some embodiments, the thickness of the SLM may be minimized to maintain homogeneous LF pixel beam intensities across the pixel FOV(s). If the structure containing the aperture modulator is very thick, the off-axis beams will hit the aperture shapes from diverse angles and with different parts of the beam. This causes unequal beam intensities in different view directions, and it may be compensated with a calibration method that, for example, lowers the intensity of emitted light at the central part of the emitter matrix in relation to the emitters on the outer edges of the matrix. Current LCD display manufacturing technologies can be used to make panels that are below 1 mm thick, which allows for LF pixels that have aperture sizes in the range of ~0.5-1.0 mm. Without a backlight layer, which is not necessary for the SLM layers, an LCD layer thickness below 0.5 mm is feasible. Thin LCDs can be used especially if the aperture modulator is integrated into the lens sheets by, for example, lamination, and the microlens layers provide enough support and protection for the LCD stack.
[0146] In some embodiments, the SLM may be synchronized to light emitting pixels and image content.
In some other embodiments, the SLM may instead be used to create a series of successive 2D images at different focal surfaces, one after another, as "slices" of the 3D image. This may simplify rendering, such as where only a few focal surfaces are used. In some other embodiments, the SLM may adjust the relative brightness levels of the voxels of each focal plane in a more continuous manner, so that the combination of focal planes may cause the 3D light field image to appear more continuous in the depth direction.
[0147] The SLM may be used for final selection of blocked and passed beam focus layers, and in these cases the SLM may be controlled in consideration of the view direction(s) determined, for example, with the additional use of an active eye tracking module.
[0148] In some embodiments, rendering speed requirements may be eased by grouping the light emitting layer and SLM pixels such that an image is displayed as interlaced rather than successive single pixels. The number of pixels included in one group may be based on light emitting pixel size and pitch, SLM pixel size, and size of the beam aperture to be controlled with the SLM.
[0149] FIG. 17 is a schematic perspective view illustrating an exemplary embodiment 1700 of a curved 3D LF display according to some embodiments. As shown in FIG. 17, a tabletop 3D LF display device 1706 with a curved 75" screen may be placed at a 1500 mm distance 1710 from a single viewer 1702. The display may form a light field image to a volumetric virtual viewing zone 1704, which may cover the distance 1708 from the display surface to 1000 mm from the viewer's position. The display may generate multiple views both in the horizontal and vertical directions using the LF pixel structure(s) as previously discussed.
[0150] FIG. 18 is a schematic plan view illustrating an example display structure 1800 using LF pixels according to some embodiments. Light is emitted from µLED arrays 1802 (for example having component size 2 µm x 2 µm, pitch 3 µm, and 30 µm height for a sub-array). Rotationally symmetric collimator lenses are placed at a distance 1820 (such as 2 mm) from the µLEDs, and the array may comprise a microlens sheet (such as a polycarbonate hot-embossed microlens sheet of 0.5 mm thickness 1818). The hyperbolic collimator lenses 1804 (for example, having a 1.05 mm radius of curvature and a conic constant of -9) may be multifocal (such as with a central focal length of ~2 mm). A focusing lens array 1808 (for example also a 0.5 mm thick 1814 microlens sheet formed of polycarbonate) may have rotationally symmetric spherical surfaces (for example having a relatively large radius of 80 mm). The aperture sizes 1810 of the collimating and focus lenses may be selected (such as at 750 µm). An LCD panel stack 1806 with polarizers and a patterned liquid crystal layer (for example 0.5 mm thick 1816) may be laminated in between the collimating lens array and the focusing lens array sheets. As individually addressable pixels, the LCD may have concentric rings covering the lens apertures, permitting adjustment of the light-beam-limiting aperture size and shape of each LF pixel individually. A central part of each LCD aperture may be a disc (for example with a diameter of 220 µm). An outermost part of the LCD apertures may have a circular inner diameter (for example of 440 µm) and a rectangular outer shape (for example a size of 750 µm x 750 µm). The third, intermediate aperture may be positioned between these aperture shapes and block or pass a ring-shaped section of each beam generated by the µLEDs and collimating lens. The whole optical structure may be of minimal thickness (in the example, only 3.5 mm thick), and the LF pixels are capable of projecting multiple beams which can be focused to three different focal surface layers in front of the display using the three different aperture shapes in each pixel.
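The concentric-ring aperture zones of the example structure lend themselves to a simple mask description. The following is a minimal sketch, using the ring dimensions quoted above (220 µm central disc, 440 µm inner diameter of the outermost zone, 750 µm x 750 µm LF pixel aperture); the function and sampling grid are hypothetical and not part of any actual panel driver:

```python
# Sketch: binary transmission masks for the three concentric aperture zones of one
# LF pixel, using the example dimensions from the structure described above.

import numpy as np

PIXEL_PITCH_UM = 750.0            # LF pixel aperture width
CENTRAL_DISC_DIAM_UM = 220.0      # central disc -> closest focal plane
OUTER_ZONE_INNER_DIAM_UM = 440.0  # inner diameter of the outermost zone -> furthest plane

def lf_pixel_aperture_mask(zone: str, samples: int = 151) -> np.ndarray:
    """Return a boolean mask (True = light passes) for one LF pixel aperture zone."""
    half = PIXEL_PITCH_UM / 2
    coords = np.linspace(-half, half, samples)
    xx, yy = np.meshgrid(coords, coords)
    r = np.hypot(xx, yy)
    if zone == "central":
        return r <= CENTRAL_DISC_DIAM_UM / 2
    if zone == "intermediate":
        return (r > CENTRAL_DISC_DIAM_UM / 2) & (r <= OUTER_ZONE_INNER_DIAM_UM / 2)
    if zone == "outer":   # circular inner edge, rectangular outer edge of the pixel
        return r > OUTER_ZONE_INNER_DIAM_UM / 2
    raise ValueError(f"unknown aperture zone: {zone}")

for zone in ("central", "intermediate", "outer"):
    mask = lf_pixel_aperture_mask(zone)
    print(f"{zone}: open fraction of the LF pixel aperture ~{mask.mean():.2f}")
```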
[0151] FIG. 19 is a schematic plan view illustrating an example horizontal viewing geometry 1900 available with the exemplary display structure of FIG. 18 according to some embodiments. In the µLED matrix, the red, green, and blue components have the same size and are bonded as alternating rows in the vertical direction. Their colors combine in the projected beams at the different focal layers when crossing beams are combined into voxels. Both the collimator and focusing lens arrays may have integrated diffractive structures that compensate for color dispersion in the lens materials. Each LF pixel may have, for example, 22 red, green, and blue rows of 100 µLEDs, which may be used for projecting 100 unique views in a horizontal direction with a total FOV 1910 of ~8.5 degrees. This covers an ~225 mm wide viewing window 1906 at a 1500 mm viewing distance 1912, where the different horizontal view directions are spatially separated from each other by a distance of ~2.3 mm. As such, at least two views can be simultaneously projected into a viewer's ~4 mm diameter eye pupils, fulfilling the SMV condition. For this embodiment, the viewing zone covers the distance 1908 from the display surface to a point 1000 mm from the viewer's position.
[0152] As in FIG. 17, the whole display 1902 of FIG. 19 is curved with a radius 1912 of 1500 mm in the horizontal direction. This makes the single LF pixel viewing windows overlap and causes an ~225 mm wide viewing window 1906 to form for a single user at the designated 1500 mm viewing distance. In the vertical direction, the viewing window height is ~150 mm, as determined by the total height (66 x 3 µm = 0.2 mm) of the µLED rows and the LF pixel optics magnification to the viewing distance (750:1). A cylindrical polycarbonate Fresnel lens sheet with a 1500 mm focal length may be used for overlapping the vertical views. A created visibility zone around the viewing window 1906 may enable the viewer to move their head ~75 mm left or right, as well as ~150 mm forward and ~190 mm backwards from the nominal position. Both eye pupils of an average person will stay inside the visibility zone with these measurements, and the 3D image can be seen in the whole display viewing zone 1904 shown in FIG. 19. Tolerance for the head position may be improved by display structures which permit adjustment of the display tilt angle in the vertical direction or the display stand height, and is generally adequate for a single viewer sitting in front of a display in a stable setting (e.g., at a desk or table).
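The viewing-window figures quoted above can be reproduced with straightforward geometry. The sketch below takes the values stated in the text (8.5 degree FOV, 100 horizontal views, 1500 mm viewing distance, ~4 mm pupil) and checks the resulting window width, view spacing, and the super-multi-view (SMV) condition; the small-angle treatment is an approximation:

```python
# Sketch: checking the example viewing geometry (values taken from the text).

import math

viewing_distance_mm = 1500.0
total_fov_deg = 8.5
horizontal_views = 100
pupil_diameter_mm = 4.0

window_width_mm = 2 * viewing_distance_mm * math.tan(math.radians(total_fov_deg / 2))
view_spacing_mm = window_width_mm / horizontal_views
views_in_pupil = math.floor(pupil_diameter_mm / view_spacing_mm) + 1

print(f"viewing window width: ~{window_width_mm:.0f} mm")   # ~223 mm (text: ~225 mm)
print(f"view-to-view spacing: ~{view_spacing_mm:.1f} mm")   # ~2.2 mm (text: ~2.3 mm)
print(f"views inside one pupil: {views_in_pupil}")          # >= 2 satisfies the SMV condition
```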
[0153] Some embodiments of an example light field display configuration have a display optical structure at a 1500 mm distance from the viewing window. The example light field display configuration has four intermediate focal planes located between the device and the viewer at distances of 15 mm, 150 mm, 325 mm, and 500 mm from the display surface. For this example light field display configuration, a pair of rectangular light emitters created with µLEDs have a 2 µm x 2 µm surface area and 3 µm pitch. These light emitters were used in simulations that traced light distributions from the display optical structure to the four intermediate focal planes and a viewing window. Simulations were made using only green 550 nm light, which represents an average wavelength in the visible light range. Illumination distribution simulation results show the geometric imaging effects. Diffraction effects are estimated separately for an aperture based on a simulated Airy disc radius. With blue light sources, the diffraction effects are somewhat smaller than with the green sources, and with red light sources, the diffraction effects are somewhat larger.
[0154] FIG. 20 is a schematic plan view illustrating exemplary simulation ray trace images according to some embodiments. In some embodiments, with an example central aperture the rays focus to a plane (which is 15 mm away from the device's display surface for this example configuration). In some embodiments, example outer edge apertures and example intermediate apertures produce beams that are more collimated and focus the pixel images at much further distances from the device. The exemplary ray trace scenario 2000 shown in FIG. 20 illustrates how three neighboring LF pixels 2002, 2004, 2006 may be used together in the creation of two focused voxels 2008, 2010 at a focal plane (which may be at a 15 mm distance from the display surface, for example). In the scenario 2000 of FIG. 20, the upper voxel 2008 may be created by activating three µLEDs that are located 39 µm above the optical axis in the topmost LF pixel 2002, 57 µm below the optical axis in the intermediate LF pixel 2004, and 150 µm below the optical axis in the bottom LF pixel 2006. The lower voxel 2010 may be created in a similar way to the upper voxel 2008. This scenario 2000 shows how a voxel may be created on a focal surface by crossing beams emitted from multiple LF pixels.
[0155] FIG. 21 is a flowchart showing an example process for operating a spatial light modulator to generate a focal plane image according to some embodiments. A light field projection process 2100 includes emitting 2102 light from an array of light emitting elements in some embodiments. The light field projection process 2100 further includes collimating 2104 the emitted light from the array of light emitting elements into at least one light beam with at least one collimating optical element. The light field projection process 2100 further includes operating 2106 a spatial light modulator to create a time-synchronized aperture for the light beam. The time-synchronized aperture controls a portion of the light beam to be passed through the spatial light modulator. The light field projection process 2100 further includes focusing 2108 the controlled portion of the light beam passed through the spatial light modulator with at least one optical element used as a focusing lens.
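The process of FIG. 21 implies a control loop in which the SLM aperture pattern and the light-emitting layer are driven together, one focal-plane "slice" at a time, within the persistence-of-vision period. The sketch below outlines such a loop; the data structures and driver callbacks are hypothetical abstractions, not an actual device API:

```python
# Sketch (hypothetical interfaces): time-synchronized rendering of focal-plane
# slices. For each slice, the SLM opens the aperture zone tied to the desired
# focal distance while the emitters forming that slice are lit.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class FocalPlaneSlice:
    aperture_zone: str                  # "central", "intermediate" or "outer"
    emitter_pattern: Dict[int, float]   # emitter index -> relative intensity

def render_light_field_frame(slices: List[FocalPlaneSlice],
                             set_slm_zone: Callable[[str], None],
                             drive_emitters: Callable[[Dict[int, float]], None]) -> None:
    """Show one light field frame by cycling through its focal-plane slices."""
    for s in slices:
        set_slm_zone(s.aperture_zone)      # open the aperture for this focal distance
        drive_emitters(s.emitter_pattern)  # light the emitters forming this slice
        # a timing primitive would hold each slice for its share of the frame period

# Example: three slices shown within one ~16.7 ms (60 Hz) frame
frame = [FocalPlaneSlice("central", {12: 1.0}),
         FocalPlaneSlice("intermediate", {12: 0.6, 13: 0.6}),
         FocalPlaneSlice("outer", {14: 1.0})]
render_light_field_frame(frame, lambda zone: None, lambda pattern: None)
```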
[0156] Note that various hardware elements of one or more of the described embodiments are referred to as "modules" that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation. Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as commonly referred to as RAM, ROM, etc.
[0157] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Claims

CLAIMS

What is Claimed:
1. An apparatus comprising:
an array of light-emitting elements configured to emit light;
an array of multifocal lenses configured to focus the light from the array of light-emitting elements, each of the multifocal lenses having two or more portions, different portions being configured to focus the light to different distances; and
a spatial light modulator configured to control the portion of each lens upon which the light falls.
2. The apparatus of claim 1, further comprising an array of collimating optical elements between the array of light-emitting elements and the array of multifocal lenses, the array of collimating optical elements being configured to collimate the light from the array of light-emitting elements.
3. The apparatus of any of claims 1-2, wherein at least two of the portions of each multifocal lens are substantially annular in shape, each annular portion being configured to focus the light to a different respective focal distance.
4. The apparatus of claim 3, wherein the spatial light modulator comprises, for each of the multifocal lenses, a plurality of concentric annular pixels, each pixel corresponding to a respective portion of the corresponding multifocal lens.
5. The apparatus of any of claims 1-4, wherein the apparatus comprises a plurality of light field pixels, each light field pixel comprising one of the multifocal lenses and a corresponding plurality of the light-emitting elements.
6. The apparatus of claim 5, further comprising opaque boundary structures between adjacent light field pixels.
7. The apparatus of any of claims 1-6, wherein the spatial light modulator is a liquid crystal display (LCD).
8. The apparatus of any of claims 1-7, wherein the multifocal lenses are axicon lenses.
9. The apparatus of any of claims 1-7, wherein the multifocal lenses are hyperbolic lenses.
10. The apparatus of any of claims 1-9,
wherein a first portion of the two or more portions of each lens is a diffractive lens, and wherein a second portion of the two or more portions of each lens is a refractive lens.
11. A method comprising:
emitting light from a light-emitting element of an array of light-emitting elements toward a corresponding multifocal lens in an array of multifocal lenses;
operating a spatial light modulator between the array of light-emitting elements and the array of multifocal lenses to control which portion of the multifocal lens the light is incident on; and
focusing the light by the multifocal lens to a focal distance associated with the portion of the multifocal lens the light is incident on.
12. The method of claim 11, further comprising collimating the emitted light before it reaches the spatial light modulator.
13. The method of any of claims 11-12, further comprising displaying at least one voxel by focusing light from a plurality of multifocal lenses onto a common focal spot.
14. The method of any of claims 11-13, wherein operating the spatial light modulator comprises generating a substantially annular aperture such that the light is incident on a substantially annular portion of the multifocal lens.
15. The method of any of claims 11-14, wherein a size of the substantially annular aperture is selected to determine the focal distance.
PCT/US2019/018018 2018-02-20 2019-02-14 Multifocal optics for light field displays WO2019164745A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862633047P 2018-02-20 2018-02-20
US62/633,047 2018-02-20

Publications (1)

Publication Number Publication Date
WO2019164745A1 true WO2019164745A1 (en) 2019-08-29

Family

ID=65529893

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/018018 WO2019164745A1 (en) 2018-02-20 2019-02-14 Multifocal optics for light field displays

Country Status (1)

Country Link
WO (1) WO2019164745A1 (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4210391A (en) 1977-09-14 1980-07-01 Cohen Allen L Multifocal zone plate
US7446733B1 (en) * 1998-03-27 2008-11-04 Hideyoshi Horimai Three-dimensional image display
EP2045648B1 (en) 2007-10-02 2012-04-11 Novartis AG Zonal diffractive multifocal intraocular lenses
US20170102545A1 (en) * 2014-03-05 2017-04-13 The Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3d augmented reality display with variable focus and/or object recognition
US20170371076A1 (en) * 2016-06-28 2017-12-28 Arizona Board Of Regents On Behalf Of The University Of Arizona Multifocal optical system, methods, and applications

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220103804A1 (en) * 2019-01-03 2022-03-31 Psholix Ag Autostereoscopic display
US11729368B2 (en) * 2019-01-03 2023-08-15 Psholix Ag Autostereoscopic display
US11927776B2 (en) 2019-03-08 2024-03-12 Interdigital Madison Patent Holdings, Sas Optical method and system for light field displays based on beams with extended depth of focus
WO2021076424A1 (en) 2019-10-15 2021-04-22 Pcms Holdings, Inc. Method for projecting an expanded virtual image with a small light field display
US20220146853A1 (en) * 2020-02-19 2022-05-12 Boe Technology Group Co., Ltd. Light field display system
US20210356746A1 (en) * 2020-05-14 2021-11-18 Korea Institute Of Science And Technology Image display apparatus with extended depth of focus and method of controlling the same
CN111736362A (en) * 2020-07-29 2020-10-02 中国人民解放军陆军装甲兵学院 Integrated imaging three-dimensional display system
WO2022179312A1 (en) * 2021-02-26 2022-09-01 University Of Central Florida Research Foundation, Inc. Optical display system and electronics apparatus
WO2023283348A1 (en) * 2021-07-07 2023-01-12 The Regents Of The University Of California Metalens and metalens array with extended depth-of-view and bounded angular field-of-view

Similar Documents

Publication Publication Date Title
CN112219154B (en) 3D display directional backlight based on diffraction element
JP7227224B2 (en) Lightfield image engine method and apparatus for generating a projected 3D lightfield
US11991343B2 (en) Optical method and system for light field displays based on distributed apertures
WO2019164745A1 (en) Multifocal optics for light field displays
US11624934B2 (en) Method and system for aperture expansion in light field displays
CN112868227B (en) Optical method and system for light field display based on mosaic periodic layer
WO2018200417A1 (en) Systems and methods for 3d displays with flexible optical layers
US11846790B2 (en) Optical method and system for light field displays having light-steering layers and periodic optical layer
US11927776B2 (en) Optical method and system for light field displays based on beams with extended depth of focus
EP3987346A1 (en) Method for enhancing the image of autostereoscopic 3d displays based on angular filtering
WO2021076424A1 (en) Method for projecting an expanded virtual image with a small light field display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19707642

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19707642

Country of ref document: EP

Kind code of ref document: A1