WO2021076424A1 - Method for projecting an expanded virtual image with a small light field display - Google Patents

Info

Publication number
WO2021076424A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
layer
mode
display device
image
Prior art date
Application number
PCT/US2020/055084
Other languages
French (fr)
Inventor
Jukka-Tapani Makinen
Kai Ojala
Original Assignee
Pcms Holdings, Inc.
Priority date
Filing date
Publication date
Application filed by Pcms Holdings, Inc. filed Critical Pcms Holdings, Inc.
Publication of WO2021076424A1 publication Critical patent/WO2021076424A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/10Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images using integral imaging methods
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/33Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving directional light or back-light sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
    • H04N13/312Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers the parallax barriers being placed behind the display panel, e.g. between backlight and spatial light modulator [SLM]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/32Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using arrays of controllable light sources; using moving apertures or moving light sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/356Image reproducers having separate monoscopic and stereoscopic modes
    • H04N13/359Switching between monoscopic and stereoscopic modes
    • GPHYSICS
    • G02OPTICS
    • G02FOPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
    • G02F1/00Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
    • G02F1/01Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour 
    • G02F1/13Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour  based on liquid crystals, e.g. single liquid crystal display cells
    • G02F1/133Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
    • G02F1/1333Constructional arrangements; Manufacturing methods
    • G02F1/1334Constructional arrangements; Manufacturing methods based on polymer dispersed liquid crystals, e.g. microencapsulated liquid crystals
    • GPHYSICS
    • G02OPTICS
    • G02FOPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
    • G02F1/00Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
    • G02F1/01Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour 
    • G02F1/13Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour  based on liquid crystals, e.g. single liquid crystal display cells
    • G02F1/133Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
    • G02F1/1333Constructional arrangements; Manufacturing methods
    • G02F1/1347Arrangement of liquid crystal layers or cells in which the final condition of one light beam is achieved by the addition of the effects of two or more layers or cells
    • G02F1/13476Arrangement of liquid crystal layers or cells in which the final condition of one light beam is achieved by the addition of the effects of two or more layers or cells in which at least one liquid crystal cell or layer assumes a scattering state

Definitions

  • An example method, in accordance with some embodiments, of operating a multiview display that includes a switchable diffuser may include: in a first display mode in which the switchable diffuser is in a transparent state, operating the multiview display to display a first image; and in a second display mode in which the switchable diffuser is in a translucent state, operating the multiview display to display a second image.
  • Some embodiments of the example method may further include: determining a distance between a viewer and the multiview display; and switching between the first display mode and the second display mode based on the distance.
  • switching between the first display mode and the second display mode may switch the switchable diffuser between the transparent state and the translucent state.
  • the first image may be a virtual image displayed at a distance from the multiview display.
  • the distance from the multiview display may include a viewing distance between a viewer of the multiview display and a physical display of the multiview display.
  • the first image may be a three-dimensional (3D) image.
  • the first image may be a two-dimensional (2D) image.
  • the second image may be a two-dimensional (2D) image displayed on the multiview display.
  • the switchable diffuser may be a liquid crystal diffuser.
  • the multiview display may be a directed backlight display.
  • the multiview display may be a light field display.
  • the multiview display may be a stereoscopic display.
  • the multiview display may include: a light-emitting layer comprising an addressable array of light-emitting elements; and an optical layer overlaying the light-emitting layer, wherein the switchable diffuser may overlay the optical layer, and wherein the switchable diffuser layer may be switchable between a transparent state and a translucent state.
  • the switchable diffuser layer may be a liquid crystal diffuser layer.
  • the optical layer may include a two-dimensional array of substantially collimating lenses.
  • the optical layer may include a two-dimensional array of collimating lenses.
  • the optical layer may include a two-dimensional array of converging lenses.
  • the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
  • Some embodiments of the multiview display used in the example method may further include: a spatial light modulator layer, wherein the spatial light modulator layer may be external to the optical layer.
  • the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
  • the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
  • An example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
  • the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
  • An example display device in accordance with some embodiments may include: a multiview display including a switchable diffuser layer, wherein the display device may be configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a virtual image, and wherein the display device may be configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on a physical display of the multiview display, a two-dimensional image.
  • the virtual image may be a three-dimensional (3D) image.
  • the virtual image may be a two-dimensional (2D) image.
  • An additional example display device in accordance with some embodiments may include: a multiview display including a switchable diffuser layer and comprising a physical display, wherein the display device is configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display, in a manner configured to be seen by a viewer of the display device at a distance from the physical display, at least one of a three-dimensional virtual image or a two-dimensional virtual image, and wherein the display device is configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on the physical display, a two-dimensional image.
  • A further example display device in accordance with some embodiments may include: a light-emitting layer comprising an addressable array of light-emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer may be switchable between a transparent state and a translucent state.
  • the switchable diffuser layer may be a liquid crystal diffuser layer.
  • the optical layer may include a two-dimensional array of substantially collimating lenses.
  • the optical layer may include a two-dimensional array of collimating lenses.
  • the optical layer may include a two-dimensional array of converging lenses.
  • the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
  • Some embodiments of the further example display device may further include a spatial light modulator layer, wherein the spatial light modulator may be external to the optical layer.
  • the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
  • the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
  • A further additional example display device in accordance with some embodiments may include: optics configured to generate a virtual display at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
  • the switchable diffuser layer may be a liquid crystal diffuser layer.
  • An example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform any one of the methods described above.
  • An additional example method in accordance with some embodiments may include: operating in a first mode to generate a virtual image in-focus behind the physical location of a multiview display device; and operating in a second mode to generate an image in-focus at the physical location of the multiview display device.
  • Some embodiments of the additional example method may further include transitioning between the first mode and the second mode responsively to transitioning a state of a switchable diffuser between a transparent state and a translucent state.
  • Some embodiments of the additional example method may further include transitioning between the first mode and the second mode in accordance with transitioning a state of a switchable diffuser between a transparent state and a translucent state.
  • operating in the first mode and operating in the second mode each comprise controlling a liquid crystal (LC) diffuser positioned between a light emitting layer and a spatial light modulator of the multiview display device.
  • operating in the first mode may include controlling the LC diffuser to prevent light diffusion.
  • operating in the second mode may include controlling the LC diffuser to cause light diffusion.
  • operating in the first mode comprises operating a camera of the multiview display device to measure a viewing distance between a viewer and a physical display of the multiview display device.
  • Some embodiments of the additional example method may further include transitioning between operating in the first mode and operating in the second mode based on a viewing distance between a viewer and a physical display of the multiview display device.
  • Some embodiments of the additional example method may further include determining the viewing distance using an image of a front-facing camera of the multiview display device to determine a distance from the multiview display device to an eye of a user.
  • transitioning between operating in the first mode and operating in the second mode may cause the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold.
  • Some embodiments of the additional example method may further include displaying touchable user interface elements in at least one monocular display region of a display of the multiview display device if operating in the first mode.
  • An additional example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform any one of the methods described above.
  • A further additional example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
  • the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
  • A further additional example method of displaying images with a multiview display in accordance with some embodiments may include switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.
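The mode-switching behavior described above — a transparent-diffuser mode for a virtual image focused behind the display, a translucent-diffuser mode for a conventional 2D image at the display surface, and a transition driven by the measured viewer distance — can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the threshold value, function names, and the Enum types are all hypothetical, and the source specifies no particular distance threshold.

```python
from enum import Enum

class DiffuserState(Enum):
    TRANSPARENT = "transparent"   # no diffusion: light beams pass through to form a virtual image
    TRANSLUCENT = "translucent"   # diffusion on: image forms at the physical display surface

class DisplayMode(Enum):
    VIRTUAL_IMAGE = 1   # first mode: image in-focus behind the physical display
    DIRECT_2D = 2       # second mode: image in-focus at the physical display

# Hypothetical threshold (the source does not give a numeric value).
MODE_SWITCH_THRESHOLD_MM = 350.0

def select_mode(viewing_distance_mm: float,
                threshold_mm: float = MODE_SWITCH_THRESHOLD_MM) -> DisplayMode:
    """Choose the display mode from a measured viewer distance
    (e.g., estimated from a front-facing camera image): above the
    threshold, use virtual-image mode; below it, use 2D mode."""
    if viewing_distance_mm > threshold_mm:
        return DisplayMode.VIRTUAL_IMAGE
    return DisplayMode.DIRECT_2D

def diffuser_state_for(mode: DisplayMode) -> DiffuserState:
    """Map a display mode to the LC diffuser state: transparent in the
    first (virtual-image) mode, translucent in the second (2D) mode."""
    if mode is DisplayMode.VIRTUAL_IMAGE:
        return DiffuserState.TRANSPARENT
    return DiffuserState.TRANSLUCENT
```

A controller loop would call `select_mode` each time the distance estimate updates, then drive the LC diffuser layer to the state returned by `diffuser_state_for`; both helper names are placeholders for whatever driver interface the actual device exposes.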
  • FIG. 1A is a system diagram illustrating an example communications system according to some embodiments.
  • FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to some embodiments.
  • FIG. 2A is a schematic illustration showing a first example projector device.
  • FIG. 2B is a schematic illustration showing a second example projector device.
  • FIG. 2C is a schematic illustration showing an example foldable display device.
  • FIG. 3 is a schematic illustration showing example VR goggles generating a large virtual image.
  • FIG. 4 is a schematic plan view illustrating example light emission angles of a light field display according to some embodiments.
  • FIG. 5A is a schematic illustration showing an example beam divergence caused by a first geometric factor according to some embodiments.
  • FIG. 5B is a schematic illustration showing an example beam divergence caused by a second geometric factor according to some embodiments.
  • FIG. 5C is a schematic illustration showing an example beam divergence caused by a third geometric factor according to some embodiments.
  • FIG. 5D is a schematic illustration showing an example beam divergence caused by diffraction and a first aperture size according to some embodiments.
  • FIG. 5E is a schematic illustration showing an example beam divergence caused by diffraction and a second aperture size according to some embodiments.
  • FIG. 5F is a schematic illustration showing an example beam divergence caused by diffraction and a third aperture size according to some embodiments.
  • FIG. 6A is a schematic illustration showing an example image magnification lens with a first optical power according to some embodiments.
  • FIG. 6B is a schematic illustration showing an example image magnification lens with a second optical power according to some embodiments.
  • FIG. 6C is a schematic illustration showing an example image magnification lens with a third optical power according to some embodiments.
  • FIG. 7A is a schematic illustration showing an example first light source and lens configuration according to some embodiments.
  • FIG. 7B is a schematic illustration showing an example second light source and lens configuration according to some embodiments.
  • FIG. 7C is a schematic illustration showing an example third light source and lens configuration according to some embodiments.
  • FIG. 7D is a schematic illustration showing an example fourth light source and lens configuration according to some embodiments.
  • FIG. 8A is a schematic side view showing an example mobile display operating in a transparent diffuser display mode according to some embodiments.
  • FIG. 8B is a schematic side view showing an example mobile display operating in a translucent diffuser display mode according to some embodiments.
  • FIG. 9 is a schematic plan view illustrating an example optical display apparatus according to some embodiments.
  • FIG. 10A is a schematic plan view illustrating an example mobile display in virtual display mode according to some embodiments.
  • FIG. 10B is a schematic side view illustrating an example mobile display in virtual display mode according to some embodiments.
  • FIG. 11 is a schematic front view illustrating an example display surface divided into monocular and binocular regions according to some embodiments.
  • FIG. 12A is a schematic plan view illustrating an example viewing geometry for a display with parallel emission direction angles according to some embodiments.
  • FIG. 12B is a schematic plan view illustrating an example viewing geometry for a display with converging emission direction angles according to some embodiments.
  • FIG. 13A is a schematic plan view illustrating an example flat mobile display in a translucent diffuser display mode according to some embodiments.
  • FIG. 13B is a schematic plan view illustrating an example curved mobile display in a transparent diffuser display mode according to some embodiments.
  • FIG. 13C is a schematic side view illustrating an example flat mobile display in a translucent diffuser display mode according to some embodiments.
  • FIG. 13D is a schematic side view illustrating an example curved mobile display in a transparent diffuser display mode according to some embodiments.
  • FIG. 14 is a schematic plan view illustrating an example light field display geometry according to some embodiments.
  • FIG. 15A is a schematic plan view illustrating an example curved display in virtual display mode according to some embodiments.
  • FIG. 15B is a schematic side view illustrating an example curved display in virtual display mode according to some embodiments.
  • FIG. 16A is a schematic front view illustrating an example first projection simulation according to some embodiments.
  • FIG. 16B is a schematic front view illustrating an example second projection simulation according to some embodiments.
  • FIG. 16C is a schematic front view illustrating an example third projection simulation according to some embodiments.
  • FIG. 16D is a schematic front view illustrating an example fourth projection simulation according to some embodiments.
  • FIG. 16E is a schematic front view illustrating an example fifth projection simulation according to some embodiments.
  • FIG. 17A is a schematic plan view illustrating an example multiview display device according to some embodiments.
  • FIG. 17B is a schematic plan view illustrating an example multiview display device according to some embodiments.
  • FIG. 18 is a flowchart illustrating a first example process for operating a multiview display according to some embodiments.
  • FIG. 19 is a flowchart illustrating a second example process for operating a multiview display according to some embodiments.
  • A wireless transmit/receive unit (WTRU) may be used, e.g., as a multiview display in some embodiments described herein.
  • FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented.
  • the communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
  • the communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth.
  • the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
  • the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
  • WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • the communications systems 100 may also include a base station 114a and/or a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106, the Internet 110, and/or the other networks 112.
  • the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum.
  • a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell.
  • beamforming may be used to transmit and/or receive signals in desired spatial directions.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
  • the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • the base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like.
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
  • WLAN wireless local area network
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
• the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the CN 106.
  • the RAN 104/113 may be in communication with the CN 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like.
  • QoS quality of service
  • the CN 106 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the RAN 104/113 and/or the CN 106 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT.
  • the CN 106 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
  • the CN 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • POTS plain old telephone service
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
  • Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
  • the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • FIG. 1 B is a system diagram illustrating an example WTRU 102.
  • the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.
  • GPS global positioning system
• the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1 B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122.
  • the WTRU 102 may have multi-mode capabilities.
  • the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • SIM subscriber identity module
  • SD secure digital
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
• the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
• the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like.
  • FM frequency modulated
• the peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
• the WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous.
• the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118).
  • the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)).
  • one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, and/or any other device(s) described herein may be performed by one or more emulation devices (not shown).
  • the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein.
  • the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
  • the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
  • the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
  • the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network.
• the emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
  • the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
  • the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
  • FIG. 2A is a schematic illustration showing a first example projector device 202.
  • FIG. 2B is a schematic illustration showing a second example projector device 232.
  • One of the main design challenges to be solved with portable displays is how to make a device that is small enough to be carried around and is still able to produce a large image for an individual person easily and conveniently.
  • Pico Projectors are understood to include a projector within a mobile device, but the device relies on the availability of a suitable screen or surface for the projected image.
• FIG. 2C is a schematic illustration showing an example foldable display device 262. Many foldable displays that need a mechanical operation to change the display size are prone to mechanical failure. Many foldable displays also need physical space for operation and may be costly.
  • FIG. 3 is a schematic illustration showing example VR goggles generating a large virtual image. Visual information enters the human perception system through the eye pupils. One method of covering a large FOV involves bringing the display surface as close to the eyes as possible.
• This method has led to the development of a large number of different head mounted virtual reality (VR), augmented reality (AR) and mixed reality (MR) display devices. These devices usually have a very high resolution display that is brought close (~10 cm) to the face. A pair of magnifying lenses 306 or a combination of projector optics and a lightguide in front of the eyes may be used to see the enlarged image at a further distance (see FIG. 3).
  • the use of a close-range physical display 304 permits having a display or display section for each eye separately and permits, e.g. creation of stereoscopic 3D images.
• HMDs head mounted devices
• HMDs may cover a large FOV with more compact optical constructions than goggleless displays. HMDs also may be more efficient in producing the amount of light used because the eye pupil pair “target area” is well defined in a relatively fixed position.
• Goggleless displays may offer a more natural and convenient viewing experience than HMDs. Goggleless displays also may allow image sharing, which may not be possible if the display is fixed in front of a single user’s eyes. However, goggleless displays usually are physically large to cover a significant portion of the viewer’s FOV, and goggleless displays are more expensive to manufacture. Because user position is not fixed to the device, the light emitted from the display is spread over a large angular range to make the picture visible from multiple positions, and most of the light may be wasted. Contrary to most HMDs, a viewer without goggles is able to change position around the display, and the viewer without goggles will be provided several different “views” of the same 3D scenery.
  • eye tracking systems may be used with goggleless 3D displays to determine the position and line of sight of the user(s), making it possible to direct the image or the “light field” straight to the eye pupils.
  • Such eye tracking and projection systems have their own hardware and processing power costs.
• One potential issue with portable displays is how to make the device small enough to be carried by a user but able to produce images large enough for convenient viewing. Bringing a display closer to the eyes may increase the FOV, but the eyes are unable to focus properly at very short distances, even with larger, high-resolution displays. This potential issue is especially prominent with people 40 years of age or older that have hyperopia or “far sighted” eyes and only a narrow range of eye focus adjustment due to the stiffening of the eye lens. With an aging population, future displays will need to be viewable from longer rather than shorter distances, and this fact will diminish the immersive effect of the displayed content if display FOVs are too small.
• One possibility for larger images is to use foldable displays. Unfortunately, many such devices are prone to mechanical failure, are relatively difficult to use because such devices may require some manual operation, and need a lot of space when in use. Another possibility is to use a miniature image projector and a reflective screen, but this possibility has the difficulty of finding a large, flat (even), white, and diffuse reflecting surface for a good quality picture. Many such foldable displays are unable to limit visibility of the large image to the individual using the device. Good privacy and large virtual screen size may be achieved with many current VR systems, but many such systems are typically not fully portable and may require optical pieces in front of the physical display to function properly. Many such systems tend to isolate the viewer from surroundings, and devices attached in front of the eyes may make social interactions more difficult. The use of fixed optical pieces in front of the display also effectively prohibits using the device in any mode other than the expanded virtual image mode.
• Displaying visual information is currently achieved mostly by using physical displays that control the color and luminance of multiple small pixels that emit light in all directions.
• While multiple display paradigms exist that improve the visual experience, the best visual experience may be produced by a display that is able to produce any arbitrary distribution of luminance and color as a function of position and viewing direction.
  • This luminance distribution is often called a light field (LF), or the plenoptic function. If a light field may be produced with sufficient accuracy, a human viewer (or eye 308) may not be able to notice the difference between a synthetic light field and a real one.
  • a real LF display device 304 should in many cases have full control over both the spatial and angular domains of light distribution. With such properties, a real LF display 304 may be used in creating a virtual image 302 at any position in space and with any size within the FOV covered by the display device 304 itself.
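What "full control over both the spatial and angular domains" means in data terms can be sketched as a discretized plenoptic function, sampled over surface position and emission direction. The class name, resolutions, and sample values below are illustrative assumptions, not part of the disclosure:

```python
class LightField:
    """Minimal discretized light field: luminance as a function of
    surface position (x, y) and emission direction (theta, phi)."""

    def __init__(self, nx, ny, n_theta, n_phi):
        # 4D table: spatial domain (nx x ny) times angular domain (n_theta x n_phi)
        self.samples = [[[[0.0] * n_phi for _ in range(n_theta)]
                         for _ in range(ny)] for _ in range(nx)]

    def set_sample(self, x, y, t, p, luminance):
        self.samples[x][y][t][p] = luminance

    def luminance(self, x, y, t, p):
        # Each (position, direction) pair carries an independent value,
        # i.e., spatial and angular domains are controlled separately.
        return self.samples[x][y][t][p]

# Tiny example grid: 4 x 4 positions, 8 x 8 directions
lf = LightField(4, 4, 8, 8)
lf.set_sample(1, 2, 3, 4, 0.75)
```

A real LF display would sample far more densely; the point of the sketch is only that the data model is four-dimensional rather than the two-dimensional pixel grid of a conventional display.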
  • the human mind perceives and determines depths of observed objects in part by receiving signals from muscles used to orient each eye.
  • This eye convergence uses a triangulation method to estimate the object distance.
  • the brain associates the relative angular orientations of the eyes with the determined depths of focus.
  • Eye muscles connected to the single eye lens automatically adjust the lens shape in such a way that the eyes are focused to the same distance where the two eyes converge.
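The triangulation described above follows from basic trigonometry on the interpupillary distance and the convergence angle between the two lines of sight. The function names and the 64 mm interpupillary distance below are illustrative assumptions:

```python
import math

def vergence_angle(ipd_m, distance_m):
    # Full convergence angle between the two eyes' lines of sight
    # for an object centered at distance_m (triangulation geometry).
    return 2.0 * math.atan((ipd_m / 2.0) / distance_m)

def convergence_distance(ipd_m, vergence_angle_rad):
    # Inverse relation: estimated object distance from the convergence
    # angle, as the perception system is described as doing.
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

# Example: an object 1 m away, 64 mm interpupillary distance
angle = vergence_angle(0.064, 1.0)
```

Nearer objects produce larger convergence angles, which is the signal the brain associates with smaller depths.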
  • Correct retinal focus cues give rise to a natural image blur on objects outside of an observed focal plane and a natural dynamic parallax effect. In a natural setting, eye convergence and retinal focus cues are both coherent.
• Correct retinal focus cues may require very high angular density light fields, potentially making it a big challenge to build a sufficiently accurate display that is capable of emitting the necessary light rays. Also, the rendering of the artificial image needs to be performed with a high enough fidelity.
  • PCT Patent Application W02008138986 is understood to describe using electroholography to create a light field display.
• US Patent 6,118,584 is understood to describe using integral imaging to create a light field display.
  • US Patent Application 2016/0116752 and US Patent Application US2014/0035959 are understood to describe using parallax barriers to create a light field display.
• PCT Patent Application WO2011149641, US Patent 9,298,168, and EP Patent Application 3273302 are understood to describe using directional backlighting to create a light field display.
  • dense spatial light modulators SLMs
• a microlens array is placed in front of a 2D display. This divides the resolution of the underlying display into spatial and angular domains.
  • an array of static pinholes or slits is used to selectively block light.
  • US Patent 8,848,006 is understood to use dynamic barriers implemented with an SLM, or multiple stacked SLMs.
  • Parallax barrier displays also may include time multiplexing by displaying multiple different patterns (usually called frames) on the SLMs, so that the frames are integrated together due to persistence of vision.
• in beam redirection methods, beams of light are scanned sequentially in time while their intensity is modulated. This method may be implemented, for example, with a directional backlight whose intensity is modulated by an SLM. As another example, this method may be implemented by having an array of intensity-controlled beam generators combined with a beam redirection method.
  • FIG. 4 is a schematic plan view illustrating example light emission angles of a light field display according to some embodiments.
  • FIG. 4 shows a schematic view of the geometry involved in creation of the light emission angles from a LF display.
  • the LF display 404 in FIG. 4 produces the desired retinal focus cues and multiple views of 3D content in a single panel display.
  • one virtual image object point 402 is located behind the LF display 404.
  • a single display surface is able to generate at least two different views to the two eyes of a single user to create a coarse 3D perception effect.
  • the brain uses these two different eye images to determine 3D distance. Logically this is based on triangulation and interpupillary distance.
  • the LF display projects at least two different views inside a single eye pupil to provide the correct retinal focus cues.
• an “eye box” (and thereby an eye box width 406) may be defined around the viewer eye pupil when determining the volume of space within which a viewable image is formed.
  • at least two partially overlapping views are projected inside an Eye-Box Angle (EBA) 414 covered by the eye-box at a certain viewing distance 410.
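The relation between the eye-box width 406, the viewing distance 410, and the Eye-Box Angle 414 is plain triangulation; a small sketch (the function name and example values are illustrative):

```python
import math

def eye_box_angle(eye_box_width_m, viewing_distance_m):
    # Eye-Box Angle (EBA): angular extent covered by an eye box of the
    # given width when viewed from the given distance.
    return 2.0 * math.atan((eye_box_width_m / 2.0) / viewing_distance_m)

# Example: a 10 mm eye box at a 0.5 m viewing distance
eba_rad = eye_box_angle(0.010, 0.5)
```

At larger viewing distances the same eye box subtends a smaller EBA, so the two partially overlapping views must be projected into a correspondingly narrower angular range.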
  • the LF display is viewed by multiple viewers 416, 418, 420 looking at the display from different viewing angles.
  • a multiview display may be a light field display, such as the light field display shown in FIG. 4.
  • a high-quality LF display may be created by using multiple projected beams that form voxels to different focal distances from the display.
  • each beam is very well collimated with a narrow diameter.
• the beam waist may be positioned at the same spot where the beams cross to avoid contradicting focus cues for the eye.
• if the beam diameter is large, the voxel formed in the beam crossing is imaged to the eye retina as a large spot.
• Due to natural beam divergence, in front of the display the beam becomes wider as the distance between voxel and eye gets smaller, and the virtual focal plane spatial resolution decreases while the eye resolution increases due to the close distance. Behind the display, beam widening and resolution loss are compensated by the fact that eye spatial resolution drops with distance, and it may become somewhat easier to create images with adequate resolution.
  • FIG. 5A is a schematic illustration showing an example beam divergence caused by a first geometric factor according to some embodiments.
  • FIG. 5B is a schematic illustration showing an example beam divergence caused by a second geometric factor according to some embodiments.
  • FIG. 5C is a schematic illustration showing an example beam divergence caused by a third geometric factor according to some embodiments.
  • the achievable light beam collimation is dependent on two geometrical factors: size of the light source and focal length of the lens. Perfect collimation 504 without any beam divergence may only be achieved in the theoretical case in which a single color point source (PS) 502 is located exactly at focal length distance from an ideal positive lens. This case is pictured in FIG. 5A.
  • PS single color point source
  • diffraction Another, non-geometrical, feature causing beam divergence is diffraction.
  • the term refers to various phenomena that occur when a wave (of light) encounters an obstacle or a slit. Diffraction is the bending of light around the corners of an aperture into the region of a geometrical shadow. Diffraction effects may occur in all imaging systems and cannot be removed, even with a perfect lens design that is able to balance out all optical aberrations. A lens that is able to reach the highest optical quality is often called “diffraction limited” because most of the blurring remaining in the image comes from diffraction.
• the angular resolution achievable with a diffraction limited lens may be calculated from the formula of Eq. 1: sin θ = 1.22 * λ / D, where λ is the wavelength of light and D is the diameter of the lens aperture.
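Assuming Eq. 1 is the standard diffraction-limited (Rayleigh) relation sin θ = 1.22 λ / D, the divergence half-angle can be evaluated numerically; the wavelength and aperture values below are illustrative, not taken from the disclosure:

```python
import math

def diffraction_half_angle(wavelength_m, aperture_diameter_m):
    # Diffraction-limited angular resolution: sin(theta) = 1.22 * lambda / D
    return math.asin(1.22 * wavelength_m / aperture_diameter_m)

# Example: green light (550 nm) through a 0.5 mm display-cell aperture
theta = diffraction_half_angle(550e-9, 0.5e-3)
```

The relation makes the design rule quantitative: halving the aperture roughly doubles the diffraction-driven divergence, which is why enlarging the aperture is the only way to improve a diffraction-limited design.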
  • FIG. 5D is a schematic illustration showing an example beam divergence caused by diffraction and a first aperture size according to some embodiments.
  • FIG. 5E is a schematic illustration showing an example beam divergence caused by diffraction and a second aperture size according to some embodiments.
  • FIG. 5F is a schematic illustration showing an example beam divergence caused by diffraction and a third aperture size according to some embodiments.
  • FIGs. 5D to 5F show a schematic presentation of light emitting from a point source (PS) 532, 542, 552 and how the beam divergence 534, 544, 554 is increased when the lens aperture size is reduced. This effect may actually be formulated into a general rule in imaging optics design: if the design is diffraction limited, the only way to improve resolution is to make the aperture larger. Diffraction is typically the dominating feature causing beam divergence with relatively small light sources.
  • PS point source
  • the size of an extended source has a big effect on the achievable beam divergence.
  • the source geometry or spatial distribution is actually mapped to the angular distribution of the beam, and this property may be seen in the resulting “far field pattern” of the source-lens system.
  • FIG. 6A is a schematic illustration showing an example image magnification lens with a first optical power according to some embodiments.
  • FIG. 6B is a schematic illustration showing an example image magnification lens with a second optical power according to some embodiments.
  • FIG. 6C is a schematic illustration showing an example image magnification lens with a third optical power according to some embodiments.
• FIGs. 6A to 6C illustrate Eq. 2 (the geometric magnification relation, in which magnification equals the ratio of image distance to source distance) for three different distances 602, 632, 672 between the lens and the image 604, 634, 674, resulting in larger images 604, 634, 674 as the distance 602, 632, 672 is increased. If the distance between source and lens is fixed, different image distances may be achieved by changing the optical power of the lens with the lens curvature.
  • the display projection lenses typically have very small focal lengths to achieve the flat structure, and the beams from a single display optics cell are projected to a relatively large viewing distance. This means, e.g., that the sources are effectively imaged with high magnification when the beams of light propagate to the viewer.
• the source size is 50 µm x 50 µm
  • projection lens focal length is 1 mm
  • viewing distance is 1 m
  • magnification ratio is 1000:1
  • the source geometric image is 50 mm x 50 mm.
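The worked example above can be checked with the geometric magnification relation (image distance divided by source distance, here approximated by the 1 mm focal length); the helper name is an illustrative assumption:

```python
def geometric_magnification(image_distance_m, source_distance_m):
    # Ratio by which the source is imaged toward the viewer
    return image_distance_m / source_distance_m

# 1 m viewing distance, source effectively at the 1 mm focal length
M = geometric_magnification(1.0, 0.001)   # 1000:1, as in the example

# A 50 um source side then maps to a 50 mm geometric image side
image_side_mm = 0.050 * M
```

The large magnification explains why even very small sources produce wide beams at the viewing distance.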
  • the divergence causes the beams to expand. This applies not only to the actual beam emitted from the display towards the viewer but also to the virtual beam that appears to be emitted behind the display, converging to the single virtual focal point close to the display surface.
• the divergence expands the size of the eye box. If the beam size at the viewing distance exceeds the distance between the two eyes, the stereoscopic effect breaks down. However, if a voxel to a virtual focal plane is created with two or more crossing beams outside the display surface, the spatial resolution achievable with the beams will decrease as the divergence increases. If the beam size at the viewing distance is larger than the size of the eye pupil, the pupil will become the limiting aperture of the whole optical system.
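The breakdown conditions described above can be sketched with a simple beam-width model (geometric image plus a divergence term). All names and numbers, including the 64 mm eye separation and 4 mm pupil diameter, are illustrative assumptions rather than values from the disclosure:

```python
import math

def beam_width_at_viewer(source_width_m, magnification, divergence_rad, distance_m):
    # Geometric image of the source plus spread from full-angle divergence
    return source_width_m * magnification + 2.0 * distance_m * math.tan(divergence_rad / 2.0)

def stereoscopic_effect_holds(beam_width_m, eye_separation_m=0.064):
    # If the beam at the viewing distance exceeds the eye separation,
    # both eyes receive the same beam and the stereoscopic effect breaks down.
    return beam_width_m < eye_separation_m

def pupil_is_limiting_aperture(beam_width_m, pupil_diameter_m=0.004):
    # A beam wider than the eye pupil makes the pupil the limiting aperture.
    return beam_width_m > pupil_diameter_m

# 50 um source, 1000:1 magnification, 1 mrad divergence, 1 m distance
w = beam_width_at_viewer(50e-6, 1000.0, 1e-3, 1.0)
```

With these example numbers the beam is about 51 mm wide at the viewer: narrow enough to preserve stereoscopy, but already far wider than the pupil.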
  • FIGs. 7A to 7D illustrate how the geometric and diffraction effects work together in cases where one and two extended sources are imaged to a fixed distance with a fixed magnification.
  • FIGs. 7A to 7D show light source spot sizes for different geometric magnification and diffraction effects.
  • FIG. 7A is a schematic illustration showing an example first light source and lens configuration according to some embodiments.
  • an extended source (ES) 702 is located 10 cm from the magnification lens.
  • Light beams passing through an example lens aperture 704 are separated by 5 cm.
• the light beams have a geometric image indicated as GI 706.
• the light source has a diffracted image height indicated by DI 708.
• FIG. 7A shows a lens aperture size that is relatively small, and the geometric image (GI) 706 is surrounded by a blur that comes from diffraction, making the diffracted image (DI) 708 much larger.
  • FIG. 7B is a schematic illustration showing an example second light source and lens configuration according to some embodiments.
  • two extended sources (ES1 (724) and ES2 (722)) are located 10 cm from the magnification lens.
  • Light beams passing through an example lens aperture 726 are separated by 5 cm.
• the light beams generate respective geometric images indicated with heights of GI1 (728) and GI2 (732), respectively.
  • Each light source has a respective diffracted image height indicated by DI1 (730) and DI2 (734), respectively.
  • FIG. 7B shows a case where two extended sources 724, 722 are placed side-by-side and imaged with the same small aperture lens.
• the two source images cannot be resolved because the diffracted images 730, 734 overlap.
  • the aperture size of the imaging lens may be increased.
  • FIG. 7C is a schematic illustration showing an example third light source and lens configuration according to some embodiments.
  • an extended source (ES) 742 is located 10 cm from the magnification lens.
  • Light beams passing through an example lens aperture 744 are separated by 10 cm.
• the light beams generate an image indicated with a height of GI 746.
• the light source has a diffracted image height indicated by DI 748.
• the distance GI 706, 746 is the same in both figures, but the diffracted image height 748 in FIG. 7C is smaller than the diffracted image height 708 in FIG. 7A.
• FIG. 7C shows the same focal length lens as FIGs. 7A and 7B, but a larger aperture 744 is used in imaging the extended source 742. Diffraction is reduced, and the diffracted image 748 may be only slightly larger than the geometric image 746, which has remained the same size because magnification is fixed.
  • FIG. 7D is a schematic illustration showing an example fourth light source and lens configuration according to some embodiments.
  • FIG. 7D is a schematic illustration showing an example fourth light source and lens configuration according to some embodiments.
  • two extended sources ES1 (764) and ES2 (762)
  • ES1 764
  • ES2 762
  • the diffracted image heights (770, 774) in FIG. 7D are smaller than the diffracted image heights (730, 734) in FIG. 7B.
  • the two spots may be resolved because the diffracted images (770, 774) are not overlapping, thereby permitting, e.g., the use of two different sources (764, 762) and improvement of spatial resolution of the voxel grid.
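The aperture-versus-diffraction trade-off illustrated in FIGs. 7A-7D can be sketched numerically. This is a minimal illustration only: the 50 µm source size, 550 nm wavelength, and the 10 cm / 1 m conjugate distances are assumed example values, and the diffraction blur is approximated by the Airy-disk diameter.

```python
import math

def image_sizes(src_size_m, src_dist_m, img_dist_m, aperture_m, wavelength_m=550e-9):
    """Return (geometric, diffracted) image sizes for a simple magnifying lens.

    The geometric image scales with magnification; the diffracted image adds
    an Airy-disk blur whose diameter shrinks as the aperture grows.
    """
    magnification = img_dist_m / src_dist_m
    geometric = src_size_m * magnification
    # Diameter of the Airy disk (first minimum) at the image plane.
    airy = 2.44 * wavelength_m * img_dist_m / aperture_m
    return geometric, geometric + airy

# Same 50 um source imaged from 10 cm to 1 m through two aperture sizes.
g_small, d_small = image_sizes(50e-6, 0.10, 1.0, 0.5e-3)   # small aperture
g_large, d_large = image_sizes(50e-6, 0.10, 1.0, 2.0e-3)   # larger aperture

# Magnification is fixed, so the geometric images match, but the diffracted
# image is much smaller with the larger aperture, as in FIG. 7C vs. FIG. 7A.
assert g_small == g_large
assert d_large < d_small
```

With the larger aperture the diffraction halo around each source image shrinks, which is why two side-by-side sources that blur together in FIG. 7B become resolvable in FIG. 7D.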
  • the journal article Vincent W. Lee, Nancy Twu, & Ioannis Kymissis, Micro-LED Technologies and Applications, 6/16 INFORMATION DISPLAY 16-23 (2016) discusses an emerging display technology based on the use of so-called µLEDs. Micro LEDs are LED chips that are typically manufactured with the same techniques and from the same materials as other LED chips in use today.
  • µLEDs are miniaturized versions of the commonly available components, and µLEDs may be made as small as 1 µm - 10 µm.
  • One of the challenges with µLEDs is how to handle the very small components in display manufacturing.
  • the journal article François Templier, et al., A Novel Process for Fabricating High-Resolution and Very Small Pixel-pitch GaN LED Microdisplays, SID 2017 DIGEST 268-271 (2017) discusses one of the densest matrices that has been manufactured so far: 2 µm x 2 µm chips assembled with 3 µm pitch.
  • the µLEDs have been used as backlight components in TVs, but µLEDs are expected to challenge OLEDs in the µ-display markets in the near future.
  • µLEDs are more stable components and are able to produce high light intensities, which generally enables µLEDs to be used in many applications, such as head-mounted display systems, adaptive car headlamps (as an LED matrix), and TV backlights.
  • the µLEDs also may be used in 3D displays, which use very dense matrices of individually addressable light emitters that may be switched on and off very quickly.
  • a bare µLED chip may emit a specific color with a spectral width of ~20-30 nm.
  • a white source may be created by coating the chip with a layer of phosphor, which converts the light emitted by blue or UV LEDs into a wider white light emission spectra.
  • a full-color source may also be created by placing separate red, green, and blue LED chips side-by-side. The combination of these three primary colors may create the sensation of a full color pixel when the separate color emissions are combined by the human visual system.
  • a very dense matrix may allow the manufacturing of self-emitting full-color pixels that have a total width below 10 µm (3 x 3 µm pitch).
  • Light extraction efficiency from the semiconductor chip is a parameter that indicates electricity-to-light efficiency of LED structures.
  • One method presented in US Patent No. 7,994,527 is understood to be based on the use of a shaped plastic optical element that is integrated directly on top of a LED chip. Due to a lower refractive index difference, integration of a plastic shape extracts more light from chip material in comparison to a chip surrounded by air. The plastic shape also directs the light in a way that enhances light extraction from the plastic piece and makes the emission pattern more directional.
  • Another method, presented in US Patent No. 7,518,149, is understood to enhance light extraction from a µLED chip. This is done by shaping the chip into a form that favors light emission angles more perpendicular to the front facet of the semiconductor chip, enabling light to escape from the high refractive index material. These structures also direct the light emitted from the chip. In the latter case, the extraction efficiency is calculated to be twice as good in comparison with typical µLEDs, and more light is emitted into an emission cone of 30° in comparison with a typical chip whose Lambertian distribution of emitted light is distributed evenly to the surrounding hemisphere.
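The 30° emission cone figure can be put in context with a quick calculation. For an ideal Lambertian emitter, the fraction of total power inside a cone of half-angle θ is sin²θ; the sketch below just evaluates that formula (the function name is ours).

```python
import math

def lambertian_fraction(half_angle_deg):
    """Fraction of a Lambertian emitter's power inside a cone of given half-angle.

    Integrating I(theta) = I0 * cos(theta) over the cone gives sin^2(half_angle).
    """
    return math.sin(math.radians(half_angle_deg)) ** 2

# Only a quarter of an unshaped Lambertian chip's light falls inside a
# 30-degree cone, so a shaped chip emitting "twice as much" into that cone
# roughly doubles the flux usable by a collimating lens of matching
# acceptance angle.
assert abs(lambertian_fraction(30) - 0.25) < 1e-9
assert abs(lambertian_fraction(90) - 1.0) < 1e-9
```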
  • LC-based tunable components may be used in 3D displays. Such LC components may be able to cause light beam adjustments without moving mechanical parts. LC-based components typically use linearly polarized light, which may lower optical efficiency and increase power consumption. Because LCDs are typically polarization-dependent devices, light propagation controlling components may be used in 3D displays without a high cost in efficiency.
  • This parameter may be affected by the LC grating electrode design, which may be fine enough to, e.g., induce a high-density varying refractive index pattern in the LC material.
  • Tunable diffusers are used for scattering light and typically tunable diffusers may be electrically switched between transparent and translucent states. These components are based on electrical tuning of LC material that has been modified to perform some particular change under applied electric field.
  • the refractive indices of the aligned droplets are changed, and the interfaces between the droplets and surrounding polymer start to refract light.
  • the light diffusing effect tends to be large, but transmission through the component is lowered because a large portion of the light is scattered back, and there tends to be no control over the scattered light angular distribution.
  • US Patent No. 9,462,261 is understood to describe a hybrid structure in which a combination of lenticular microlenses and an LC diffuser layer is used to switch an autostereoscopic display between 2D and 3D display modes. This switching is done by diffusing the angular distribution of the multiview display when the LC diffuser is switched on.
  • PCT Patent Application No. WO2005011292 is also understood to describe switching between 2D and 3D display modes.
  • Electrically tunable LC diffusers that may be switched between transparent and translucent states have been widely utilized, e.g., in volumetric displays based on sequential image projection to multiple switchable screens as understood to be described in PCT Application Nos. WO02059691 and WO2017055894. Such components scatter light uniformly to all directions. This feature may be useful when the components are used in volumetric displays such that the voxels are visible from all directions. However, such volumetric 3D images are not able to create occlusion properly, making them look translucent and somewhat unnatural.
  • an expanded virtual display image may be created with a small mobile display that is held close to the viewer eyes.
  • the example optical method described in accordance with some embodiments may be used in a portable device that has a light field display, which functions with a directed backlight principle. The device may be switched between two modes: a high resolution mobile display that is viewed at intermediate distances and a virtual expanded display that is viewed at longer distances.
  • an example light field display includes, e.g., a dense matrix of light emitting elements, a light collimating layer, an electrically switchable diffuser, and a spatial light modulator (SLM).
  • the light emitters and collimating layer create directional backlight that is used together with the SLM in projecting the image from the display surface to an enlarged virtual image lying on a focus plane further behind the device.
  • Design of an example optical structure in accordance with some embodiments may, e.g., enable projecting virtual pixel images to the viewer eyes with high resolution and correct retinal focus cues for the more distant display.
  • an expanded virtual display image may be created with a small mobile physical display that is held close to the viewer eyes.
  • the device may be switched between two modes: (1) a high resolution mobile display that is viewed at intermediate distance and (2) a virtual expanded display that is viewed at longer distance.
  • the first viewing mode is suitable, e.g., for normal mobile phone display use like messaging or browsing social media, whereas the second mode may be activated for applications that may benefit from larger display size like watching movies or gaming.
  • a large immersive display is generated with a small, portable device.
  • a light field display functioning with directed backlight principle may be the optical structure used for virtual image projection.
  • Light is emitted from a layer with separately controllable small emitters, e.g., µLEDs.
  • a lens structure may be placed in front of the emitters to collimate light into a set of beams that illuminate the back of a spatial light modulator (SLM), which may be, e.g., a high density LCD panel.
  • the emitters and collimator lenses form a series of projector cells that are able to create highly directional and controllable back illumination to the SLM. Beams generated with the backlight module are modulated with the SLM, and virtual image pixels are projected to the viewer’s eyes.
  • An electrically switchable liquid crystal (LC) diffuser placed between the collimating lenses and SLM may be used to switch between the two operational modes.
  • the mobile LF display may not use as much physical material for creation of the enlarged image. Because the image is projected as a virtual display, the physical area used by the device may be relatively small, with a potentially lower cost to manufacture. Many foldable or rollable displays are prone to mechanical failure, unlike a virtual display that may be constructed without any moving mechanical parts.
  • the virtual display size and position may be selected based on the intended use case.
  • the device size may be small even if larger virtual images are generated.
  • the virtual image size may be changed during use without changing the hardware.
  • the portable device may use a light field display functioning as a directed backlight.
  • This example optical projection functionality permits a larger display FOV, enlarging the image size and creating a viewing distance that is comfortable to use over extended periods of time, even with eyes that have lost their natural adaptation to close distances due to stiffening of the eye lenses with age.
  • the display is not attached to the head, and a viewer may use standard prescription glasses for close range viewing distances and for larger viewing distances in the virtual expanded view mode.
  • because the optical structure used for light field creation is embedded into the handheld device behind the display surface, some embodiments do not use additional carry-on devices (like smart glasses) or mechanical holders (like VR headsets that use mobile phones as the display). This allows easy and convenient use of the device in each viewing mode. In virtual extended view mode, the user is not isolated from surroundings because, for some embodiments, there are no obstructing structures around the eyes and the display may be moved away from the line of sight.
  • the switchable LC diffuser may have looser tolerances for positional accuracy because the switchable LC diffuser is transparent in the virtual image mode and diffusing in the normal display mode. Use of large virtual image projection distances enables relatively loose tolerances for the light collimating layer focal length and distance from the emitters.
  • the optical structure may be manufactured with lower cost compared to many other possible LF display structures.
  • Display rendering of the LF image may use two views, one for each eye, of the same flat surface.
  • some embodiments may use an LF display structure because the two separate eye views overlap at the display surface and there is a need to provide adequate eye boxes that allow movement of the user eyes with respect to the unattached display device.
  • because the display functionality may be restricted to two switchable viewing modes, some embodiments of the LF display optical structure may be optimized for showing the virtual expanded flat display view, whereas many other LF displays seek to create 3D images with depth. This difference may enable more robust optical structures that are easier to calibrate.
  • FIG. 8A is a schematic side view showing an example mobile display operating in a translucent diffuser display mode according to some embodiments.
  • FIG. 8B is a schematic side view showing an example mobile display operating in a transparent diffuser display mode according to some embodiments.
  • An example high resolution display with a modest field of view (FOV A) 808 may be obtained if the display 804 is located at arm’s length (VD A) 806 from the viewer 802.
  • a second mode with a virtual display 862 and a large field of view (FOV B) 858 may be created at a virtual viewing distance 860 when the device 854 is located a short physical distance (VD B) 856 from the viewer 852.
  • Adaptive optics and rendering enable a large in focus virtual image to be generated if the display is held close to the viewer (VD B) 856.
  • the resulting virtual display 862 generated for a single focal distance may not support image depth, but the virtual display may be flat or curved.
  • an expanded virtual display image is created with a small mobile display that is held close to the viewer’s eyes.
  • the device (which may be an LF display device) may be switched between two modes: (1) a high resolution mobile display that is viewed at intermediate distance and (2) a virtual expanded (or extended) display that is viewed at longer distance.
  • FIGs. 8A and 8B illustrate these two modes that may be generated with a single device by switching the device between the two operational modes.
  • the first mode may be used, e.g., for normal mobile phone display such as messaging or browsing social media, whereas the second mode may be activated for applications that may benefit from a larger display size, like watching movies or gaming.
  • a small, portable device may be used to generate a large display.
  • FIG. 8A presents viewing geometry during normal use of the display when the device is held at an intermediate distance, around 25 - 50 cm from the viewer eyes.
  • the handheld device may be used as a standard small-scale high-resolution display.
  • a viewer’s eyes may accommodate to the first viewing distance VD A because the image is located on the display surface, and the distance is adequate for comfortable viewing with hyperopic eyes.
  • FIG. 8B presents the viewing geometry in the second operational mode such that the device is brought closer to the eyes at a second viewing distance VD B, which may be around, e.g., 10 - 20 cm.
  • An optical structure embedded to the mobile display may be used to project a virtual image to the viewer eyes. This image appears to be behind the display at a virtual viewing distance, which may be, e.g., between 50 - 150 cm. Because the optical structure is able to create an image that is clearly visible only when the eyes are focused at the designated virtual viewing distance and not the nearby real display, a comfortable and immersive viewing experience may be created with the longer virtual viewing distance and expanded FOV.
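The FOV expansion between the two modes follows from simple viewing geometry. The display and virtual-image widths and distances below are assumed example values consistent with the ranges given above (intermediate viewing at tens of centimeters, virtual image around 1 m from the eyes).

```python
import math

def fov_deg(width_m, distance_m):
    """Full horizontal field of view of a flat display seen from a distance."""
    return 2 * math.degrees(math.atan((width_m / 2) / distance_m))

# Mode 1: a 12 cm wide phone screen held at 40 cm (assumed example numbers).
fov_a = fov_deg(0.12, 0.40)
# Mode 2: a 60 cm wide virtual display appearing 1 m from the eyes (assumed).
fov_b = fov_deg(0.60, 1.00)

# The virtual expanded mode covers a considerably larger field of view
# even though the physical device is the same size in both modes.
assert fov_b > fov_a
```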
  • a method of operating the multiview display may include determining a distance between the viewer and the multiview display and switching between a translucent diffuser display mode (such as the display mode of FIG. 8A) and a transparent diffuser display mode (such as the display mode of FIG. 8B) based on the distance between the viewer and the multiview display. Such switching to the translucent or transparent diffuser display mode may occur if the distance is above or below, respectively, a threshold.
  • operating in the translucent diffuser display mode may include controlling an LC diffuser to cause light diffusion and operating in the transparent diffuser display mode may include controlling the LC diffuser to prevent light diffusion.
  • the LC diffuser may be a polymer dispersed liquid crystal (PDLC) diffuser.
  • operating in the translucent diffuser display mode may include turning off the voltage to the PDLC diffuser to cause light diffusion and operating in the transparent diffuser display mode may include turning on the voltage to the PDLC diffuser to make the PDLC diffuser clear and prevent light diffusion.
  • the LC diffuser may be active or inactive, respectively, so as to cause light diffusion in the translucent diffuser display mode and to prevent light diffusion in the transparent diffuser display mode.
  • the method of operating the multiview display may include transitioning between operating in the first display mode and operating in the second display mode based on viewing distance, such as operating in the first display mode for viewing distances greater than a threshold and operating in the second display mode for viewing distances less than a threshold.
  • transitioning between operating in the first mode and operating in the second mode causes the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold
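A distance-based mode switch of this kind might be sketched as below. The threshold and hysteresis values are hypothetical, and the pairing of far distances with the translucent (normal display) mode and near distances with the transparent (virtual display) mode follows the FIG. 8A/8B geometry described above.

```python
# Hypothetical controller sketch for distance-based diffuser mode switching.
TRANSLUCENT, TRANSPARENT = "translucent", "transparent"

class DiffuserModeController:
    def __init__(self, threshold_cm=22.0, hysteresis_cm=3.0):
        # Both values are illustrative assumptions, not from the source.
        self.threshold_cm = threshold_cm
        self.hysteresis_cm = hysteresis_cm
        self.mode = TRANSLUCENT  # start in the normal display mode

    def update(self, viewer_distance_cm):
        """Switch modes only when the distance clears the hysteresis band,
        so the diffuser does not flicker around the threshold."""
        if (self.mode == TRANSLUCENT
                and viewer_distance_cm < self.threshold_cm - self.hysteresis_cm):
            self.mode = TRANSPARENT   # brought close: project the virtual display
        elif (self.mode == TRANSPARENT
                and viewer_distance_cm > self.threshold_cm + self.hysteresis_cm):
            self.mode = TRANSLUCENT   # arm's length: ordinary diffuse backlight
        return self.mode

ctrl = DiffuserModeController()
assert ctrl.update(40.0) == TRANSLUCENT   # arm's length: normal mode
assert ctrl.update(15.0) == TRANSPARENT   # brought close: virtual mode
assert ctrl.update(23.0) == TRANSPARENT   # inside hysteresis band: no change
assert ctrl.update(30.0) == TRANSLUCENT   # clearly far again: back to normal
```

The hysteresis band is a common control-design choice for this kind of threshold switch; the source only specifies a single threshold.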
  • a method of displaying images with a multiview display may include switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.
  • three-dimensional, virtual images are displayed in the first display mode, in which the switchable diffuser is in a transparent state, and two-dimensional images are displayed in the second display mode, in which the switchable diffuser is in a translucent state.
  • two- dimensional, virtual images are displayed in the first display mode in which the switchable diffuser is in a transparent state.
  • translucency refers to light that is transmitted but also diffused. If a display diffuser component is switched from a transparent (optically clear) state to a translucent (diffused) state, the angular distribution of light is mixed (scattered/diffused) and directionality is lost. A 3D stereoscopic image may not be formed when light lacks a clear direction that limits visibility of a display pixel to a single eye at a time.
  • if the diffuser is in a transparent state, a 2D or 3D virtual image is created, whereas, if the diffuser is in a translucent/diffusing state, a 2D image without stereoscopic effects is created on the physical surface of the display.
  • FIG. 9 is a schematic plan view illustrating an example optical display apparatus according to some embodiments.
  • the LC diffuser 918 spreads the light to create an omni-directional backlight
  • the SLM 920 operates as it would in, e.g., a traditional display to give a high resolution.
  • the LC diffuser 918 is transparent, allowing a directional lighting source to be created from the light emitting layer and the micro lens array (MLA) 914.
  • a light field display functioning with a directed backlight may be used for virtual image projection for some embodiments.
  • FIG. 9 shows a schematic image of an example LF display optical structure and functionality utilizing the presented method.
  • Light is emitted from a layer 912 with separately controllable small emitters, e.g., µLEDs (such as source 1 (908) and source 2 (910)).
  • a lens structure that may be, e.g., an embossed polycarbonate microlens sheet 914, is placed in front of the emitters, and the lens structure collimates light into a set of beams that illuminate the back of a spatial light modulator (SLM) 920 that may be e.g., a high density LCD panel.
  • the emitters and collimator lenses form a series of projector cells that are able to create highly directional and controllable back illumination to the SLM 920.
  • the structure may contain a polarizer sheet 916 between the microlenses 914 and LC diffuser 918 as well as in front of the SLM 920 (e.g., closer to the viewer 930) if, e.g., linearly polarized light is used for the operation of the switching component and/or SLM 920.
  • the light emitting layer 912 and MLA 914 may be identified as a backlight module 904, and the polarizer(s) 916, 922, the LC diffuser 918, and the SLM 920 may be identified as an image modulator 906.
  • the switchable diffuser is a liquid crystal (LC) diffuser.
  • the multiview display is a directed backlight display.
  • an optical layer of the multiview display may include converging lenses (such as the MLA 914 of FIG. 9).
  • the multiview display may include a spatial light modulator (SLM) layer 920 that is external to the optical layer (such as the MLA 914 and/or the polarizer(s) 916, 922 of FIG. 9).
  • the switchable diffuser layer (such as the LC diffuser 918 of FIG. 9) may be positioned between the one or more optical layers and the SLM layer 920.
  • a display device may include optics (such as the MLA 914 and/or the polarizer(s) 916, 922 of FIG. 9) for generating a virtual display (which may include a plurality of virtual display pixels 902) at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer (such as the LC diffuser 918 of FIG. 9) may be switched between transparent and translucent states.
  • a multiview display apparatus may include a light-emitting layer 912 comprising one or more light emitting elements 908, 910; a liquid crystal (LC) diffuser layer 918; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM) 920.
  • the one or more optical layers of a multiview display may include a micro lens array (MLA) 914; and one or more polarizers 916, 922.
  • a method of operating a multiview display may include operating in a first display mode and operating in a second display mode such that each display mode includes controlling a liquid crystal (LC) diffuser 918 positioned between a light emitting layer 912 and a spatial light modulator 920 of the multiview display device.
  • each source on the light emitting layer creates a separate and well collimated illumination beam with the help of the MLA 914.
  • Beam direction is dependent on the spatial location of the source in respect to the projector cell optical axis determined by the collimating lens rotational axis.
  • the collimator lens is designed in such a way that the beam is virtually focused to the predetermined virtual display distance behind the display structure. Such a beam focus enables correct retinal focus cues to be created for each eye when the beams are projected into the viewer eye pupils.
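The collimator design described above can be sketched with thin-lens relations: an emitter placed slightly inside the focal length produces a beam that appears to focus at the virtual display plane behind the device, and an off-axis emitter produces a tilted beam. The 2 mm focal length and 1 m virtual distance are assumed example values, not figures from the source.

```python
import math

def source_distance_for_virtual_focus(focal_len_m, virtual_dist_m):
    """Thin-lens sketch: place the emitter slightly inside the focal length
    so the beam appears to focus at a virtual plane a distance D behind the
    lens (image distance is negative, i.e. virtual)."""
    # From 1/si + 1/so = 1/f with si = -D:  1/so = 1/f + 1/D
    return 1.0 / (1.0 / focal_len_m + 1.0 / virtual_dist_m)

def beam_angle_deg(source_offset_m, focal_len_m):
    """Chief-ray direction of the collimated beam for an emitter displaced
    from the collimator lens optical axis."""
    return math.degrees(math.atan(source_offset_m / focal_len_m))

f = 2.0e-3   # 2 mm collimator focal length (assumed)
D = 1.0      # virtual display 1 m behind the device (assumed)
so = source_distance_for_virtual_focus(f, D)

assert so < f                          # emitter sits just inside the focus
assert beam_angle_deg(0.0, f) == 0.0   # on-axis emitter -> on-axis beam
assert beam_angle_deg(0.1e-3, f) > 0   # offset emitter -> tilted beam
```

This matches the text: beam direction depends on the emitter's lateral position under the lens, while the emitter-to-lens distance sets the virtual focus plane.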
  • an SLM 920 is used to modulate the beam luminosity and color to the right value based on the image content.
  • a virtual display pixel 902 is formed when two such beams with correct propagation angles enter the two eyes and the angular difference between the beams initiates the correct eye convergence 928 for the image content. If both retinal focus 924, 926 and eye convergence visible cues are coherent, the virtual display pixel 902 may be seen at the correct distance without any vergence accommodation conflict (VAC).
  • if the display directional backlight structure is designed in such a way that the individual illumination beams are wider on the display surface than at the virtual display distance, the real display itself may be visible as a blurred illuminated surface that does not show misleading focus cues.
  • a design may be created by making the collimating lens aperture sizes larger than the virtual projected images of the individual sources. Large aperture designs may be used because the beams are created in a backlight module 904, and the SLM 920 may have much smaller pixels than the collimating lenses. Diffraction effects may be mitigated by using larger apertures.
  • mosaic lenses may be used to increase spatial resolution of images via interlaced apertures or diffraction blur removal, some examples of which are discussed in PCT Patent Application No. PCT/US19/47761, entitled "Optical Method and System for Light Field Displays Based on Mosaic Periodic Layer," filed on Aug. 22, 2019.
  • the fact that eyes often cannot easily adapt to very short viewing distances may be exploited by bringing the display very close to the viewer's face when the virtual display mode is used. This means that, e.g., aging people with hyperopic eyes and limited eye focus range may adapt more naturally to the distant virtual image than younger people who have more flexible eye lenses.
  • when an LC diffuser is made to scatter light, the highly directional illumination created with the sources and collimating lenses is diffused into an even backlight that is used in the first operational mode. In this mode, e.g., a Lambertian illumination distribution may be created with fewer active emitters behind the SLM because the directionality of the beams is lost.
  • the LC diffuser used for display mode switching may be, e.g., a currently available component based on volume scattering.
  • an LC diffuser may use surface scattering. If a surface scattering effect is used with directional backlighting, light diffusion may be controlled to enable a private mode with limited angular visibility of the image or to save power with active adjustment of the FOV with a combination of illumination component selection and control over light diffusion level.
  • Color filters commonly utilized in LCDs may be used for generating a full-color projected virtual image from white backlight. Such lighting may be created with the help of e.g., blue or UV LEDs coated with thin phosphor layer that transforms the narrow single color emission spectra into a wider white light emission spectra.
  • Some example potential benefits of using white backlight are that single-color components may be used for the light emitting layer and the manufacturing of the backplane becomes simpler and lower cost.
  • Other example potential benefits may include the ability to use readily available three-color high- resolution LCD panels for the SLM component as well as the ability to use commonly used structural designs and manufacturing processes of current mobile phones.
  • Single-color LED and phosphor light sources tend to have relatively low illumination efficiency because a large part of the generated white light is absorbed by the display panel color filters and polarizers.
  • µLED sizes and bonding accuracy enable three-color pixels to be under 10 µm in size.
  • full-color virtual images may be generated with separate red, green, and blue emitter components in which case color filters in the SLM structure are not used.
  • single color emitters may be coated, e.g., with quantum dot materials that transform the single color emissions to the three primary colors. Both when using separate red, green, and blue emitter components and when using single color emitters that are coated, color separation in directional illumination beams may occur due to different spatial positions of the light emitters. This color separation affects only the virtual display mode.
  • the switchable diffuser When the display is used in the standard small display operational mode, the switchable diffuser is turned on and angular distributions of different color beams are naturally mixed in the backlight unit to form a uniform back illumination.
  • the display colors are produced with temporal color rendering by showing the red, green, and blue images successively and synchronously with the SLM panel.
  • OLEDs may be miniaturized and used in the light emitting layer in addition to µLEDs.
  • µLEDs may be very small with high brightness.
  • Larger light sources may require microlenses and larger focal lengths to achieve the same level of beam collimation.
  • Larger focal lengths may mean thicker display structures.
  • a relatively large source size may mean large spatial pitch values for the illumination beams and large angular spacing when the beams are collimated. Such effects may decrease the virtual display resolution because high spatial resolution on the virtual display may correspond to high angular resolution of the real LF display.
  • Laser diodes or RC LEDs may be used for the light sources, for some embodiments, because many laser diodes and RC LEDs have similar (or better) optoelectronic characteristics to µLEDs.
  • a backlight module may provide controllable directional illumination for the display SLM, which provides image modulation. Because the two modules have separate optical functions, the two modules may be optimized separately for different use cases and better manufacturability. For example, the size of the backlight projector cells may be much larger than the size of the SLM pixels, making them easier to manufacture. Spatial and angular distributions of light for the SLM surface may be very homogeneous in the small display operational mode. The switchable diffuser may be used for this purpose because the diffuser mixes naturally the angular distributions into a diffuse backlight.
  • the SLM may function with full native resolution and, e.g., a 500 ppi (pixels-per-inch) panel may be used to create images that do not have visible pixel structure at a designated intermediate viewing distance.
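The 500 ppi figure can be checked against visual acuity: at the intermediate viewing distance, one pixel subtends less than the roughly 1 arcmin commonly cited as the eye's resolution limit. The 40 cm distance and the 1 arcmin acuity figure are assumptions for illustration.

```python
import math

def pixel_arcmin(ppi, viewing_distance_m):
    """Angular size of one pixel in arc minutes at a given viewing distance."""
    pitch_m = 25.4e-3 / ppi                 # 1 inch = 25.4 mm
    return math.degrees(pitch_m / viewing_distance_m) * 60

# A 500 ppi panel viewed at 40 cm (an intermediate mobile viewing distance):
size = pixel_arcmin(500, 0.40)

# Below ~1 arcmin (a common visual acuity figure), so the pixel grid
# should not be resolvable at the designated viewing distance.
assert size < 1.0
```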
  • Resolution of the projected virtual image may be dependent on the source size, optical layer properties, and viewing geometry. Smaller retinal images of sources may be projected with displays that are closer to the eyes because geometric magnification is reduced with the viewing distance. This allows reasonable resolution to the projected virtual display because retinal spot sizes determine the perceptible virtual pixel sizes.
  • color filters transmit light differently if the light is coming from different angles. Filters based on material absorbance do this due to the different attenuation lengths connected to layer thickness and geometry, whereas filters based on dielectric layers generate different transmission properties due to light interference differences connected to incidence angles. Both of these filter types may be optimized for specific angular ranges and wavelengths, but this optimization should generally be done when designing the whole display system.
  • the different image beam directions are created by shining the LCD color filters from different directions, and the absorption lengths in the color filter material layers become different. This effect may cause somewhat different colors to appear in beams propagating to different directions and may require special color calibration with LCD pixel transmissions.
  • the phosphor material applied on top of the LEDs may be fine-tuned to compensate for this effect. Because the LEDs emitting light in different directions are located at different spatial positions under the collimating lenses, phosphor materials with slightly different color characteristics may be applied selectively.
  • FIG. 10A is a schematic plan view illustrating an example mobile display in virtual display mode according to some embodiments.
  • FIG. 10B is a schematic side view illustrating an example mobile display in virtual display mode according to some embodiments.
  • FIGs. 10A and 10B show the viewing geometry of a virtual display image beam projection. If a virtual display image 1008, 1058 is created, beams exiting the optical structure of the mobile display 1004, 1054 may be collimated to limit the visibility of a single beam to one eye at a time.
  • a minimum of two beams are required, one for each eye, to create correct eye convergence.
  • in the vertical direction shown in FIG. 10B, the beams should, e.g., also have a focus at the virtual display location to create the correct retinal focus cues at the eye box 1006, 1056 of the viewer 1002, 1052.
  • the average interpupillary distance of adults is ~64 mm, which is the upper limit for beam size at the designated viewing distance. Because the real viewing distance is small in this operational mode, this size limit may be achieved if small enough light sources are available. If visibility of a single backlight illumination source is limited to a single eye at a time, the light field effect may be created because unique 2D images are projected to different eyes, and the natural parallax effect is created.
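The single-eye beam constraint above can be checked numerically. The following Python sketch is illustrative only; the simple source-size divergence model (full divergence angle ≈ source size / lens distance) and the parameter values (1 mm projector cell aperture, 2 µm µLED, 4.3 mm source-to-lens distance, 150 mm viewing distance) are assumptions drawn from examples elsewhere in this description:

```python
def beam_diameter_mm(aperture_mm, source_size_mm, lens_dist_mm, prop_dist_mm):
    # Residual divergence of a "collimated" beam is set by the finite source
    # size: full angle ~ source_size / lens_dist (small-angle approximation).
    divergence_rad = source_size_mm / lens_dist_mm
    return aperture_mm + prop_dist_mm * divergence_rad

# Illustrative values: 1 mm cell aperture, 2 um uLED, 4.3 mm collimator
# distance, 150 mm viewing distance.
diameter = beam_diameter_mm(1.0, 0.002, 4.3, 150.0)
print(f"beam at the eye: {diameter:.2f} mm")
assert diameter < 64.0  # well under the ~64 mm interpupillary distance
```

With these numbers the beam grows only from 1.0 mm to roughly 1.07 mm over the viewing distance, so a single beam remains far below the interpupillary limit.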
  • FIG. 11 is a schematic front view illustrating an example display surface divided into monocular and binocular regions according to some embodiments.
  • the display surface regions used for virtual image projection to left and right eyes become somewhat separated in the horizontal direction.
  • Horizontal distance between the two eyes means, e.g., that there are two regions (one region on each side of the display) which are visible only to one eye at a time.
  • These monocular regions surround a central binocular region which is able to project images to both eyes at the same time as shown in FIG. 11.
  • the display areas used for left and right eye projection may not be totally separated from each other, like in the case of VR headsets, because the display is located at a relatively close viewing distance, e.g., 10 - 20 cm. If the areas are separated artificially, e.g., by forming a baffle between the two display halves, the total FOV of the display would become limited. If the display areas for the left and right eye overlap at the center, the FOV may be increased considerably. This increase may create a need for optical image multiplexing in the binocular region, but the LF display optical structure may perform such multiplexing by shifting the virtual image focus to a further distance.
  • Central binocular region 1104 offers the most natural viewing condition and may be used for the main image content because the central binocular region is visible to both eyes at the same time at the correct viewing distance.
  • the left monocular region 1102 is the display area visible only to the left eye 1108, and the right monocular region 1106 is the display area visible only to the right eye 1110.
  • the monocular regions 1102, 1106 on both sides may be left dark for the other (or opposite) eye to reduce potentially distracting visual content, but the monocular regions 1102, 1106 may be used for extending the horizontal FOV further and for showing secondary visual elements. Examples of such cases are, e.g., virtual control buttons for video playback or directional controls for a video game. These elements may be switched between hidden and visible states with, e.g., voice or gesture control without disturbing visibility of the central main image content.
  • the small backlight projector cells may be copied over a finite display area. Because a single cell generates a limited set of beams that have very limited angular extent, the cell has a limited Total Divergence Angle (TDA). This parameter measures the total FOV of one projector cell.
  • TDAs of the projector cells located at the edges of the display may be designed to have a larger overlap region. Without overlapping areas, the edges of the projected image may not be visible simultaneously to both eyes, making the virtual display image incomplete.
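The split of the display surface into monocular and binocular regions can be sketched with simple geometry. The Python model below is an illustrative approximation, not part of the disclosure: it assumes an untilted, symmetric emission cone at every display position, and uses parameter values drawn from examples elsewhere in this description (197 mm display width, 150 mm viewing distance, 64 mm interpupillary distance, 28° total divergence angle):

```python
import math

def display_regions(width_mm, view_dist_mm, ipd_mm, tda_deg, step=0.5):
    """Classify horizontal display positions as binocular, left-only,
    right-only, or unreachable, assuming untilted emission cones."""
    half_reach = view_dist_mm * math.tan(math.radians(tda_deg / 2.0))
    left_eye, right_eye = -ipd_mm / 2.0, ipd_mm / 2.0
    regions = {"binocular": 0.0, "left": 0.0, "right": 0.0, "none": 0.0}
    x = -width_mm / 2.0
    while x <= width_mm / 2.0:
        sees_left = abs(left_eye - x) <= half_reach
        sees_right = abs(right_eye - x) <= half_reach
        if sees_left and sees_right:
            regions["binocular"] += step
        elif sees_left:
            regions["left"] += step
        elif sees_right:
            regions["right"] += step
        else:
            regions["none"] += step
        x += step
    return regions

print(display_regions(197, 150, 64, 28))
```

With untilted cones the model also produces "unreachable" strips at the display edges, which illustrates why the edge cells benefit from tilted emission direction angles or larger TDA overlap as described in the surrounding passages.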
  • the multiview display may be a stereoscopic display, such as the example of FIG. 11.
  • a multiview display apparatus may be operated in a virtual display mode to project display elements behind the multiview display apparatus.
  • touchable user interface elements may be displayed in at least one monocular display region of the display.
  • touchable user interface elements may be displayed in the left and/or right monocular display regions or the central binocular display region of FIG. 11.
  • FIG. 12A is a schematic plan view illustrating an example viewing geometry for a display with parallel emission direction angles according to some embodiments.
  • FIG. 12A shows the horizontal viewing geometry of a display 1202 in a case such that wide overlapping TDAs 1204 form a viewing window 1210 around the facial area of the viewer 1208. If a continuous array of small sources is used behind the projector lens array, the TDA 1204 may be very wide, and source location selection alone may be enough for overlapping the image beams from different parts of the display.
  • the Emission Direction Angles (EDA) 1206 are shown without a tilt.
  • FIG. 12B is a schematic plan view illustrating an example viewing geometry for a display with converging emission direction angles according to some embodiments.
  • the Emission Direction Angles (EDAs) 1256 of the MDP 1252 located at the display edges may be tilted towards the display center line.
  • angular resolution may be increased if the TDAs 1254 that form a viewing window 1260 around the facial area of the viewer 1258 are narrower.
  • this increase may be achieved by shifting the nominal positions of light sources slightly inside the MDP 1252 and by increasing the amount of shift at the display edges.
  • an extra optical element, e.g., a combination of a Fresnel lens and a protective window, may alternatively be used to converge the emission direction angles towards the viewer.
  • the overlap issue may be addressed by making the whole display surface with a specific curvature for a predetermined viewing distance.
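The source-shift approach to tilting the emission direction angles can be quantified with a paraxial sketch. In the snippet below, the lens distance of 4.3 mm and viewing distance of 150 mm are taken from examples elsewhere in this description; treating the source-to-lens distance as an effective focal length is an assumption for illustration:

```python
import math

def source_shift_mm(cell_x_mm, view_dist_mm, lens_dist_mm):
    """Lateral light-source shift that tilts a projector cell's emission
    direction angle (EDA) towards the display center line (paraxial)."""
    tilt_rad = math.atan2(cell_x_mm, view_dist_mm)  # desired EDA tilt
    return lens_dist_mm * math.tan(tilt_rad)        # shift under the lens

# Cells further from the display center need a larger shift.
for x in (0.0, 25.0, 50.0, 98.5):  # mm from display center
    print(f"cell at {x:5.1f} mm -> shift {source_shift_mm(x, 150, 4.3):.2f} mm")
```

The edge-cell shift exceeds one lens aperture width in this example, which is consistent with using a continuous source array rather than sources confined strictly under their own lenses.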
  • Light intensity of each SLM pixel and illumination beam may be controllable through a certain dynamic range to make a virtual display with good image quality.
  • the light intensity may be adjusted by controlling the amount of light passing through each pixel using two polarizers and electrically controllable liquid crystal material that twists the polarization state of the passing light.
  • a combination of backlight intensity adjustment and LCD pixel absorbance may be used, e.g., to achieve a picture with a higher contrast ratio.
  • this method is called “local dimming.”
  • the electric current flowing through each light emitting component may be adjusted continuously.
  • the component brightness may be adjusted by pulse width modulation (PWM).
  • LEDs in general are components that may be switched extremely fast with adequate dynamic range to generate a flicker-free image.
  • the size of a single backlight module beam may be fitted to the size of the LCD pixel.
  • a pixel-level intensity adjustment may be made with the combination of the backlight module and the LCD. This method may be used for larger dynamic range image pixels and may enable faster display panel switching speeds because the intensity adjustments may be partially handled by the backlight module.
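The combined dynamic range of backlight dimming plus LCD pixel absorbance is multiplicative. The sketch below illustrates this "local dimming" arithmetic; the 1000:1 LCD contrast and 100:1 backlight dimming range are illustrative assumptions, not values from this description:

```python
def pixel_luminance(backlight_level, lcd_transmission):
    """Relative pixel luminance from combined backlight dimming and
    LCD pixel absorbance ('local dimming')."""
    assert 0.0 <= backlight_level <= 1.0
    assert 0.0 <= lcd_transmission <= 1.0
    return backlight_level * lcd_transmission

# Assumed: LCD alone 1000:1 contrast; backlight dimmable over 100:1.
lcd_min = 1.0 / 1000.0
bl_min = 1.0 / 100.0
combined_contrast = pixel_luminance(1.0, 1.0) / pixel_luminance(bl_min, lcd_min)
print(f"combined contrast ~ {combined_contrast:.0f}:1")  # ~100000:1
```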
  • Quality of the virtual display image may be improved by increasing the number of directional illumination beams projected to each eye pupil. Short projection distance from the mobile display to the eye pupils may ease achievement of the SMV condition and generate better retinal focus cues for a natural viewing experience. Even if individual backlight projection cell size is increased to, e.g., 1 mm diameter, a total of four parallel beams may be created with successive projector cells on the display because the average eye pupil size is estimated to be ~4 mm in diameter in normal ambient lighting conditions. The close range of the display and relatively large pupil size also loosen the tolerance for beam spatial location on the display surface and increase the angular resolution. For some embodiments, this trade-off may be made by increasing the projector cell aperture size. Also, relatively large aperture sizes decrease diffraction blur.
  • width of the eye box also may be increased by using more than one beam emitted from neighboring projector cells for the creation of a virtual display pixel image directed to an eye.
  • Size of the eye box is determined by the projection geometry, optical design, and achievable angular resolution of the successive beams.
  • the eye box size may depend on the use case and the viewing geometry. For example, a 10 mm x 10 mm area may be adequate if the display is fixed to a relatively stable location in relation to the viewer eyes, or if eye tracking and embedded motion sensors are used for guiding the image projection.
  • a larger eye box provides more room for user eye movements, like saccades (rapid eye movement between fixation points), and sway of the head by creating a tolerance area around the eye pupils.
  • the virtual display may be fully visible if the pupils are inside this area, and the virtual image appears to be fixed to the device.
  • the virtual display image may appear to “float” with respect to the physical device by shifting pixel emission points with the tracked eye movement on the actual display surface, which may create an image stabilization effect.
  • all of the virtual pixels may be located at the same surface determined by the image forming beam focus distance. This eases production of the almost parallel beams that are used for overlapping virtual pixel images projected from neighboring projector cells and from different sources.
  • the virtual display surface may be either flat or curved, depending on the design of the optical layers embedded to the mobile display structure and shape of the display itself.
  • the switching speed of the SLM may be a factor limiting the number of beams that may be projected to the eye box in such a LF display.
  • Many LCD panels are relatively slow components for this purpose because the displays have refresh rates of ~240 Hz. Generating a flicker-free image allows creation of 4 unique beams from each projector cell at a time because the commonly-accepted threshold value for the human eye is 60 Hz. If, e.g., eye tracking is used, this refresh rate may be adequate because the minimum is two beams for one virtual image pixel, and eye tracking may be used to determine the exact location of the viewer eye pupils.
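The temporal multiplexing budget above (a ~240 Hz panel divided by the 60 Hz flicker threshold yields 4 unique beam sets per flicker-free frame) is simple integer arithmetic; a minimal sketch:

```python
def unique_beam_slots(panel_refresh_hz, flicker_threshold_hz=60):
    # Temporal multiplexing budget: how many sequential beam sets fit
    # into one flicker-free 60 Hz frame.
    return panel_refresh_hz // flicker_threshold_hz

slots = unique_beam_slots(240)
print(slots)            # 4 time slots per 60 Hz frame
assert slots >= 2       # at least one unique beam per eye for stereoscopy
```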
  • the LCD pixels may be grouped to cover, as adaptive masks, only a limited set of the successive projector lens apertures.
  • the LCD adaptive masks produce only those images that are in the focus region for the two eye directions, and the masks may be interlaced and swept over the display surface with the maximum refresh rate. If the real display viewing distance in the virtual image mode becomes shorter, the viewing geometry may be similar to VR glasses. With very short viewing distances, the two halves of the display may be dedicated to each eye separately.
  • each projector cell has a group of light emitting components below the projector cell that is focused on a specific view direction.
  • the matrix of emitters array may be activated and synchronized to SLM masks that selectively pass or block the specific set of beams for the formation of directional virtual 2D display views.
  • the images projected to different eyes may be created sequentially, and the masks applied in an interlaced manner may handle backlight projector cell grouping.
  • This multiplexing scheme may be based on balancing temporal and spatial multiplexing, which makes the rendering somewhat more complex compared to a 2D display with a single image.
  • Many displays use SLM progressive scanning for rendering images. In this method, each display line is drawn by sequentially activating the pixels in a row one after another. For some embodiments, light emitters on the backplane are selectively activated corresponding to the virtual display pixel location and beam direction and by “scanning” these emitters along with the SLM pixels.
  • the separate beams are modulated.
  • the emission of a single light emitter is modulated with current or with pulse width modulation (PWM).
  • the control signals on the light emitter backplane may have individual values for each emitter component, and the SLM pixel may be used as an on-off switch. For some embodiments, exact timing between each emitter activation may be used.
  • because the different source components are activated at slightly different times, and because the SLM goes through one refresh cycle that allows light to pass the pixels at different relative intensities from 0 to 1, the intensity modulation of the different beams depends on the exact time of light emission.
  • LCD panel pixels have specific response time curves such that the timing of LED emissions may be fitted according to the image content.
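The timing-based intensity modulation described above can be illustrated with a toy model. The linear LC response ramp and the 4 ms rise time below are assumptions for illustration only; real LC panels have nonlinear, temperature- and content-dependent response curves:

```python
def lcd_transmission(t_ms, rise_ms):
    """Idealized linear LC response: transmission ramps 0 -> 1 over rise_ms."""
    return min(max(t_ms / rise_ms, 0.0), 1.0)

def emission_time_for_intensity(target, rise_ms):
    """Fire the LED at the moment the LC pixel passes the target
    relative transmission (inverse of the linear ramp)."""
    assert 0.0 <= target <= 1.0
    return target * rise_ms

RISE_MS = 4.0  # assumed LC rise time (illustrative)
for target in (0.25, 0.5, 1.0):
    t = emission_time_for_intensity(target, RISE_MS)
    # A short LED pulse fired at time t yields ~target relative intensity.
    assert abs(lcd_transmission(t, RISE_MS) - target) < 1e-9
    print(f"target {target:.2f} -> fire LED at {t:.1f} ms")
```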
  • an optical method of creating an expanded virtual display image to a further distance from the viewer may be used with many different light field display configurations.
  • a backlight module may be constructed with, e.g., micromirrors, some examples of which are described in PCT Patent Application No. PCT/US19/47313, entitled, “3D Display Directional Backlight Based on Micromirrors,” filed on Aug. 21, 2018, published as International Publication No. WO2019/040484.
  • the number and angular density of directional illumination beams may be increased with a diffractive backlight optical structure, some examples of which are described in PCT Patent Application No. PCT/US19/31332, entitled “3D Display Directional Backlight Based on Diffractive Elements,” filed on May 8, 2019.
  • a mosaic lens may be used in a backlight module, some examples of which are described in PCT Patent Application No. PCT/US19/47761, entitled, “Optical Method and System for Light Field Displays Based on Mosaic Periodic Layer,” filed on Aug. 22, 2019.
  • Such techniques may be used in accordance with some embodiments, e.g., if the virtual image is projected relatively close to the actual display and aperture interlacing is used for adequately small beam separation on the display surface.
  • a backlight module also may have adaptive collimating lenses or lenses with multiple focus depths to create different sized displays at different depths. This property may also be used in making the display adapt to different users who may require prescription lenses.
  • an adaptive mosaic lens optical structure may be used.
  • a multifocal optical structure may be used, some examples of which are described in PCT Patent Application No. PCT/US19/18018, entitled “Multifocal Optics for Light Field Displays,” filed on Feb. 14, 2019, published as International Publication No. WO2019/164745.
  • the display device may have a camera at the back that is able to track, e.g., user hands behind the physical device inside the virtual display FOV. This tracking may enable interaction with the virtual display image content, and the hands may be shown as augmented reality pointers or controllable virtual extensions that, e.g., may control the display image properties, pause and start a video feed, or control a game.
  • a camera may be used to mix in other real image content to digital content, enabling an augmented or mixed reality (AR/MR) experience.
  • the surroundings may be tracked with a camera or sensor.
  • the display device may cover a large FOV around the viewer’s eyes (which may be somewhat similar to VR goggles), and the camera and associated software may keep track of the surroundings and notify the user if there are objects in the surrounding environment worth noting, like, e.g., an approaching vehicle.
  • FIG. 13A is a schematic plan view illustrating an example flat mobile display in a translucent diffuser display mode according to some embodiments.
  • FIG. 13B is a schematic plan view illustrating an example curved mobile display in a transparent diffuser display mode according to some embodiments.
  • FIG. 13C is a schematic side view illustrating an example flat mobile display in a translucent diffuser display mode according to some embodiments.
  • FIG. 13D is a schematic side view illustrating an example curved mobile display in a transparent diffuser display mode according to some embodiments.
  • a flat 8.5” (197 mm x 88 mm) display 1304, 1344 is used in the translucent operational mode with, e.g., a mobile phone display located at a ~350 mm viewing distance 1306, 1346 from a single viewer 1302, 1342.
  • the display 1304, 1344 covers a 31° horizontal FOV 1308 by 14° vertical FOV 1348.
  • in the transparent operational mode (FIGs. 13B and 13D), the display 1324, 1364 is brought closer to the eyes of the viewer 1322, 1362 to a distance 1328, 1368 of 150 mm, and the device 1324, 1364 is switched to virtual display projection mode by bending the device 1324, 1364 into a concave shape with a predetermined 150 mm radius 1328, 1368 of curvature.
  • this shaping is enabled by the device structure having a flexible LCD panel and the device mechanics being designed with limited movement joints that divide the body into sections. If the second mode is activated, the device optical structure projects images to the viewer’s eyes, creating a 32” virtual display image 1326, 1366 to 750 mm distance 1330, 1370 from the viewer as shown in FIG. 13D.
  • This virtual image 1326, 1366 covers an enlarged 54° horizontal FOV 1332 by 30° vertical FOV 1372, making the viewing experience more immersive.
  • An eye tracking system embedded to the display device may detect the viewer’s eye locations and be used in controlling the projected image beams to two 10 mm x 10 mm eye box regions created around each eye pupil.
  • the translucent diffuser display mode may display images on a display 1304, 1344 that is viewed by a user 1302, 1342 at a distance 1306, 1346 of 350 mm, for example, between the user 1302, 1342 and the images on the display 1304, 1344.
  • the transparent diffuser display mode may project images behind the display device 1324, 1364 at a distance 1330, 1370 of 750 mm, for example, between the user 1322, 1362 and the images 1326, 1366.
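The FOV figures quoted for the two modes can be checked with the flat-extent formula FOV = 2·atan(size / 2 / distance). In the sketch below, the 197 mm x 88 mm panel at 350 mm reproduces the stated 31° x 14°; for the virtual mode, a flat 32" 16:9 image (~708 mm x 398 mm) at 750 mm yields roughly 51° x 30°, slightly below the stated 54° horizontal FOV, plausibly because the virtual display surface is curved (an inference, not stated in the description):

```python
import math

def fov_deg(size_mm, view_dist_mm):
    """Full field of view of a flat extent centered at view_dist."""
    return 2.0 * math.degrees(math.atan2(size_mm / 2.0, view_dist_mm))

# Translucent 2D mode: flat 8.5" (197 mm x 88 mm) panel at 350 mm.
print(f"{fov_deg(197, 350):.1f} x {fov_deg(88, 350):.1f} deg")   # ~31 x 14

# Virtual mode (flat approximation): 32" 16:9 image at 750 mm.
print(f"{fov_deg(708, 750):.1f} x {fov_deg(398, 750):.1f} deg")  # ~51 x 30
```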
  • FIG. 14 is a schematic plan view illustrating an example light field display geometry according to some embodiments.
  • FIG. 14 shows schematically the structure and measurements of example LF display optics.
  • Light is emitted from an example continuous array of µLEDs 1404 with component sizes of 2 µm x 2 µm and a 3 µm pitch.
  • the array is mounted to a backplane 1402 and has, e.g., blue components that are overcoated with phosphor for white light emissions.
  • a collimator lens array 1406 may be located, e.g., 4.3 mm from the µLEDs 1404, and the microlens array (MLA) component 1406 may be made from elastic and optically clear silicone as a hot-embossed 1.0 mm thick microlens sheet.
  • the first surfaces of the collimator lenses may have a 15.2 mm radius of curvature, and the second surfaces may have a 3.1 mm radius of curvature and a conic constant of -0.8.
  • Aperture sizes of the collimating lenses may be 1.0 mm.
  • An LCD panel 1412 may be laminated together with two polarizers 1408, 1414 and an electrically switchable LC diffuser 1410 forming a 1.15 mm thick stack that is placed in front of the directional back illumination structure.
  • the LCD panel 1412 may have 50 µm sized pixels that contain red, green, and blue color filters.
  • each beam emitted from the structure in the virtual display mode is modulated with an array of ~20 x 20 LCD pixels.
  • the LC diffuser mixes the backlight distribution, and the panel is used with its native resolution, e.g., 3940 x 1760 pixels.
  • This 4K display panel has a density of ~500 ppi, which means, e.g., that individual pixels are not visible to the naked eye at the nominal 350 mm viewing distance.
  • the example structure of FIG. 14 shows a beam pitch 1418 of ~1 mm and an angular range 1416 of ~28°.
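The example geometry of FIG. 14 implies an angular step per source and a number of addressable beam directions per projector cell. The Python sketch below derives these as paraxial estimates from the 3 µm source pitch, 4.3 mm source-to-lens distance, and ~28° angular range; the derived figures are illustrative, not stated in the description:

```python
import math

SOURCE_PITCH_MM = 0.003   # 3 um uLED pitch
LENS_DIST_MM = 4.3        # uLED array to collimator lens
ANGULAR_RANGE_DEG = 28.0  # per-cell angular range

# Angular step between beams from neighboring sources under one lens.
step_deg = math.degrees(math.atan2(SOURCE_PITCH_MM, LENS_DIST_MM))

# Source-zone width under one lens needed to span the angular range,
# and the resulting number of addressable beam directions per row.
zone_mm = 2.0 * LENS_DIST_MM * math.tan(math.radians(ANGULAR_RANGE_DEG / 2.0))
directions = int(zone_mm / SOURCE_PITCH_MM)

print(f"{step_deg:.3f} deg/step, {directions} directions per cell row")
```

This suggests a beam-direction step of roughly 0.04° and on the order of 700 addressable horizontal directions per cell, consistent with a quasi-continuous source array.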
  • a display device may include a multiview display including a switchable diffuser layer; the display device having a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a two- or three-dimensional image; and the display device having a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display a two-dimensional image.
  • the display device may include a light-emitting layer comprising an addressable array of light- emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
  • the switchable diffuser layer of the display device may include a liquid crystal diffuser layer.
  • the optical layer of the display device may include a two-dimensional array of substantially collimating lenses.
  • the optical layer of the display device may include a two-dimensional array of collimating lenses.
  • the optical layer of the display device may include a two-dimensional array of converging lenses.
  • FIG. 15A is a schematic plan view illustrating an example curved display in virtual display mode according to some embodiments.
  • FIG. 15B is a schematic side view illustrating an example curved display in virtual display mode according to some embodiments.
  • FIGs. 15A and 15B show the beam projection geometry of an example curved display 1504, 1554 for generating a curved 32” virtual display 1508, 1558.
  • the device 1504, 1554 switches the LC diffuser into transparent mode.
  • Two groups of beams per virtual display pixel may be emitted from the display surface towards the two 10 mm high 1556 eye boxes around each eye pupil of the viewer 1552.
  • each projector cell may produce beams within the ~28° angular range 1416 shown in FIG. 14.
  • a continuous emitter matrix may be generated by activating the emitter corresponding to the next neighboring collimator lens, such as the example shown in FIG. 14.
  • the minimum pitch between two beams that have the same angular direction and generate overlapping retinal images to the eye is ~1 mm, which means, e.g., that more than one beam may be projected to the eye pupils at a given time, even if the pupil size is at the minimum 2 mm diameter.
  • FIG. 16A is a schematic front view illustrating an example first projection simulation according to some embodiments.
  • FIG. 16B is a schematic front view illustrating an example second projection simulation according to some embodiments.
  • FIG. 16C is a schematic front view illustrating an example third projection simulation according to some embodiments.
  • FIG. 16D is a schematic front view illustrating an example fourth projection simulation according to some embodiments.
  • FIG. 16E is a schematic front view illustrating an example fifth projection simulation according to some embodiments.
  • FIGs. 16A to 16D show images 1600, 1620, 1640, 1660 of simulated retinal spots at four eye focus distances.
  • FIG. 16E shows an image 1680 of a virtual pixel pair that is used as a size reference for the simulated retinal spots obtained with the example display structure.
  • optical simulations were performed with the optical simulation software OpticsStudio 19.
  • the display optical structure was placed at a 150 mm distance from a simplified eye model that was constructed from a 4 mm aperture (pupil) and two ideal paraxial lenses that were used for adjusting the eye focal length (~17 mm) to the appropriate focus distance.
  • Four different eye focus distances were used: 350 mm, 500 mm, 750 mm, and 1,250 mm from the eye model to test the beam focusing effect.
  • one simulation (FIG. 16E) was performed with a source pair that was placed at the virtual image distance from a bare eye model that was focused to the same distance.
  • This simulation was used as a size reference to the other retinal images obtained with the combination of display optics and eye model.
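The simplified eye model can be reproduced with the thin-lens equation: to focus an object at distance d onto a retina ~17 mm behind the lens, the required focal length is f = 1 / (1/d + 1/17 mm). The computed values below are illustrative paraxial estimates, not figures from the description:

```python
def eye_focal_length_mm(focus_dist_mm, retina_dist_mm=17.0):
    """Thin-lens focal length for the simplified eye model: an aperture
    plus an ideal paraxial lens imaging onto the retina at ~17 mm."""
    return 1.0 / (1.0 / focus_dist_mm + 1.0 / retina_dist_mm)

# The four eye focus distances used in the simulations.
for d in (350, 500, 750, 1250):
    print(f"focus at {d:5d} mm -> f = {eye_focal_length_mm(d):.2f} mm")
```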
  • One pair of light sources was used in the simulations, with µLED measurements of a 2 µm square emitting surface and a 3 µm pitch between the two emitters. Only green wavelengths (530 nm - 560 nm) were used to demonstrate creation of single eye retinal focus cues.
  • the simulations were made only with geometric raytracing, and the simulations did not include diffraction effects. This approximation is considered adequate because the projection lens aperture size is relatively large (1 mm), and the diffraction effects are small.
  • FIGs. 16A to 16E show a table of retinal images on a detector surface, the size of which is 40 µm x 40 µm in all cases.
  • the first four simulated irradiance distributions from the left show the retinal images obtained with the display optical structure when the eye is focused to the four different depths. These images show that the pair of rectangular sources is visible only when the eye is focused to the designated 750 mm distance of the virtual display. When the eye is focused to a closer distance of 500 mm, the two spots are blurred but still distinguishable from each other. At the two other focus distances of 350 mm and 1,250 mm, the retinal images are single spots that are bigger than the image of the sources at the 750 mm distance.
  • the simulation results prove that single eyes may have proper focus cues with the designed display optical structure because the sources have clear virtual images only at the proper virtual viewing distance.
  • the simulated image of FIG. 16E shows the retinal image obtained with a pair of square emitters placed at the designed 750 mm virtual image distance.
  • the emitter squares were 250 µm wide, and the pitch between the two sources was 370 µm.
  • the resulting retinal image has the same pitch between the two source images as obtained with the display optical structure and eye model combination when the eye is focused to the 750 mm distance.
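The size of the reference retinal image follows from paraxial magnification: an in-focus object is demagnified by roughly retina distance / object distance. The sketch below applies this to the 250 µm emitters and 370 µm pitch at 750 mm; the ~17 mm retina distance is the eye-model value used above:

```python
def retinal_size_um(object_size_um, object_dist_mm, retina_dist_mm=17.0):
    """Paraxial retinal image size of an in-focus object."""
    return object_size_um * retina_dist_mm / object_dist_mm

pitch = retinal_size_um(370, 750)  # pitch of the reference source pair
spot = retinal_size_um(250, 750)   # single 250 um emitter square
print(f"retinal pitch {pitch:.1f} um, spot {spot:.1f} um")
assert pitch < 40 and spot < 40    # both fit the 40 um x 40 um detector
```

As an aside, the 370 µm reference pitch is close to the virtual pixel pitch implied by a flat 32" 16:9 Full HD image (~708 mm / 1920 ≈ 369 µm); this correspondence is inferred here, not stated in the description.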
  • given the 54° x 30° FOV of the curved virtual display, the virtual image has a 1920 x 1080 (Full HD) matrix of virtual pixels on the 32” diagonal surface for some embodiments.
  • the mobile display optical structure has a large-scale optical functionality.
  • the LF display optical structure projects an enlarged view to a viewer’s eyes and creates a virtual display that has a large FOV.
  • the optical structure may be used in a VR/AR device or in a device that may be transformed from mobile phone use to a head mounted display device. The latter approach has been used in products like Google Cardboard and Samsung Gear VR that have head mounted optics and a separate mobile phone that is used as the VR display.
  • Some embodiments of the multiview display device do not use separate optical pieces to create virtual images, making the device able to be used with and without a head attachment apparatus.
  • goggleless real LF displays with good image quality may use very small light sources to generate very small beam sizes and higher levels of collimation that form virtual image pixels at a particular focal plane. Micro LEDs may be used for such light sources.
  • two active optoelectronic layers may be controlled and synchronized with the image rendering system. These layers may add cost to the system. Increased energy may be consumed due to a large portion of the emitted light being blocked by the LCD aperture masks. Because some of the light is blocked, the image may not be as bright, and the total efficiency of the device may be lower. The increased energy use and lower efficiency may be more pronounced if the goggleless display is placed further away from the viewer, and these effects may be reduced if the display is brought closer to the viewer’s eyes.
  • FIG. 17A is a schematic plan view illustrating an example multiview display device according to some embodiments.
  • an example multiview display device may be a smartphone 1702, 1752 as shown in FIGs. 17A and 17B.
  • the multiview display device 1702 may include a front-facing camera 1704 as shown in FIG. 17A.
  • an image taken by the front-facing camera 1704 may be used in determining the viewing distance between the multiview display device 1702 and an eye of a user.
  • FIG. 17B is a schematic plan view illustrating an example multiview display device according to some embodiments.
  • the multiview display device 1752 may include a rear-facing camera 1754 as shown in FIG. 17B.
  • operating in the virtual display mode comprises operating a rear-facing camera 1754 of the multiview display device 1752 to measure a viewing distance between a viewer and a physical display of the multiview display device.
  • FIG. 18 is a flowchart illustrating a first example process for operating a multiview display according to some embodiments.
  • an example process of operating a multiview display may include, in a first display mode in which the switchable diffuser is in a transparent state, operating 1802 the multiview display to display a first three-dimensional image.
  • the example process may further include, in a second display mode in which the switchable diffuser is in a translucent state, operating 1804 the multiview display to display a second two-dimensional image.
  • the multiview display may include a switchable diffuser.
  • the first image may be a three-dimensional image.
  • FIG. 19 is a flowchart illustrating a second example process for operating a multiview display according to some embodiments.
  • an example process may include operating 1902 in a first mode to generate a virtual image in-focus behind the physical location of the multiview display device.
  • the example process may further include operating 1904 in a second mode to generate an image in-focus at a physical location of a multiview display device.
  • an apparatus may include a processor and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor to perform the example process.
  • the apparatus may include a front-facing and/or rear-facing camera.
  • the apparatus may include one or more positioning sensors, such as, for example, a gyroscope sensor, GPS hardware and software functionality, and imaging sensors, which may be capable of making measurements used in determining distance between the apparatus and other objects.
  • An example method of operating a multiview display, where the multiview display includes a switchable diffuser may include: in a first display mode in which the switchable diffuser is in a transparent state, operating the multiview display to display a three-dimensional image; and in a second display mode in which the switchable diffuser is in a translucent state, operating the multiview display to display a two-dimensional image.
  • the switchable diffuser may be a liquid crystal diffuser.
  • the multiview display may be a directed backlight display.
  • the multiview display may be a light field display.
  • the multiview display may be a stereoscopic display.
  • the example method in accordance with some embodiments may further include determining a distance of a viewer from the multiview display, the method further including switching between the first display mode and the second display mode based on the distance.
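Switching between the two display modes based on measured viewing distance can be sketched as a simple threshold rule. The 250 mm threshold below is an illustrative assumption (the description's examples use ~350 mm for the translucent 2D mode and 150 mm for the virtual display mode); the mode names are hypothetical labels:

```python
def select_display_mode(viewing_distance_mm, threshold_mm=250):
    """Pick a display mode from the measured viewing distance
    (threshold is illustrative): close-up -> virtual display
    projection (diffuser transparent), otherwise regular 2D
    mode (diffuser translucent)."""
    if viewing_distance_mm < threshold_mm:
        return "virtual_3d"   # first mode: LC diffuser transparent
    return "diffuse_2d"       # second mode: LC diffuser translucent

print(select_display_mode(150))  # close-up viewing -> virtual display mode
print(select_display_mode(350))  # normal viewing -> translucent 2D mode
```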
  • An example apparatus in accordance with some embodiments may include: a multiview display including a switchable diffuser layer; the display device having a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a three- dimensional image; and the display device having a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display a two-dimensional image.
  • An additional example apparatus in accordance with some embodiments may include: a light- emitting layer comprising an addressable array of light-emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
  • the switchable diffuser layer may be a liquid crystal diffuser layer.
  • the optical layer may include a two-dimensional array of substantially collimating lenses.
  • the optical layer may include a two-dimensional array of collimating lenses.
  • the optical layer may include a two-dimensional array of converging lenses.
  • the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
  • the additional example apparatus in accordance with some embodiments may further include a spatial light modulator layer, wherein the spatial light modulator layer is external to the optical layer.
  • the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
  • the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
  • Another example apparatus in accordance with some embodiments may include: optics for generating a virtual display at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
  • the switchable diffuser layer may be a liquid crystal diffuser layer.
  • Another example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to perform any method described herein.
  • a further example method in accordance with some embodiments may include: operating in a first mode to generate a virtual image in focus behind a physical location of a multiview display device; and operating in a second mode to generate an image in focus at the physical location of the multiview display device.
  • operating in the first mode and operating in the second mode each may include controlling a liquid crystal (LC) diffuser positioned between a light emitting layer and a spatial light modulator of the multiview display device.
  • operating in the first mode may include controlling the LC diffuser to prevent light diffusion; and operating in the second mode may include controlling the LC diffuser to cause light diffusion.
  • operating in the first mode may include operating a front-facing camera of the multiview display device to measure a viewing distance between a viewer and a physical display of the multiview display device.
  • the further example method in accordance with some embodiments may further include transitioning between operating in the first mode and operating in the second mode based on viewing distance.
  • the further example method in accordance with some embodiments may further include determining the viewing distance using an image of a front-facing camera of the multiview display device to determine a distance from the multiview display device to an eye of a user.
  • transitioning between operating in the first mode and operating in the second mode may cause the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold.
  • the further example method in accordance with some embodiments may further include displaying touchable user interface elements in at least one monocular display region of a display of the multiview display device if operating in the first mode.
  • a further example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to perform any method described herein.
  • a further additional example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
  • the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
  • a further additional example method of displaying images with a multiview display in accordance with some embodiments may include switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.
  • An example method of operating a multiview display, where the multiview display may include a switchable diffuser, in accordance with some embodiments may include: in a first display mode in which the switchable diffuser is in a transparent state, operating the multiview display to display a first image; and in a second display mode in which the switchable diffuser is in a translucent state, operating the multiview display to display a second image.
  • Some embodiments of the example method may further include: determining a distance between a viewer and the multiview display; and switching between the first display mode and the second display mode based on the distance.
  • switching between the first display mode and the second display mode may switch the switchable diffuser between the transparent state and the translucent state.
  • the first image may be a virtual image displayed at a distance from the multiview display.
  • the distance from the multiview display may include a viewing distance between a viewer of the multiview display and a physical display of the multiview display.
  • the first image may be a three-dimensional (3D) image.
  • the first image may be a two-dimensional (2D) image.
  • the second image may be a two-dimensional (2D) image displayed on the multiview display.
  • the switchable diffuser may be a liquid crystal diffuser.
  • the multiview display may be a directed backlight display.
  • the multiview display may be a light field display.
  • the multiview display may be a stereoscopic display.
  • the multiview display may include: a light-emitting layer comprising an addressable array of light-emitting elements; and an optical layer overlaying the light-emitting layer, wherein the switchable diffuser may be overlaying the optical layer, and wherein the switchable diffuser layer may be switchable between a transparent state and a translucent state.
  • the switchable diffuser layer may be a liquid crystal diffuser layer.
  • the optical layer may include a two-dimensional array of substantially collimating lenses.
  • the optical layer may include a two-dimensional array of collimating lenses.
  • the optical layer may include a two-dimensional array of converging lenses.
  • the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
  • in some embodiments of the example method, the multiview display may further include a spatial light modulator layer, wherein the spatial light modulator layer may be external to the optical layer.
  • the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
  • the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
  • An example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
  • the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
  • An example display device in accordance with some embodiments may include: a multiview display including a switchable diffuser layer, wherein the display device may be configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a virtual image, and wherein the display device may be configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on a physical display of the multiview display, a two-dimensional image.
  • the virtual image may be a three-dimensional (3D) image.
  • the virtual image may be a two-dimensional (2D) image.
  • An additional example display device in accordance with some embodiments may include: a multiview display including a switchable diffuser layer and comprising a physical display, wherein the display device is configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display, in a manner configured to be seen by a viewer of the display device at a distance from the physical display, at least one of a three-dimensional virtual image or a two-dimensional virtual image, and wherein the display device is configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on the physical display, a two-dimensional image.
  • a further example display device in accordance with some embodiments may include: a light-emitting layer comprising an addressable array of light-emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer may be switchable between a transparent state and a translucent state.
  • the switchable diffuser layer may be a liquid crystal diffuser layer.
  • the optical layer may include a two-dimensional array of substantially collimating lenses.
  • the optical layer may include a two-dimensional array of collimating lenses.
  • the optical layer may include a two-dimensional array of converging lenses.
  • the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
  • Some embodiments of the further example display device may further include a spatial light modulator layer, wherein the spatial light modulator may be external to the optical layer.
  • the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
  • the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
  • a further additional example display device in accordance with some embodiments may include: optics configured to generate a virtual display at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
  • the switchable diffuser layer may be a liquid crystal diffuser layer.
  • An example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform any one of the methods described above.
  • An additional example method in accordance with some embodiments may include: operating in a first mode to generate a virtual image in focus behind a physical location of a multiview display device; and operating in a second mode to generate an image in focus at the physical location of the multiview display device.
  • Some embodiments of the additional example method may further include transitioning between the first mode and the second mode in response to transitioning a state of a switchable diffuser between a transparent state and a translucent state.
  • Some embodiments of the additional example method may further include transitioning between the first mode and the second mode in accordance with transitioning a state of a switchable diffuser between a transparent state and a translucent state.
  • operating in the first mode and operating in the second mode each comprise controlling a liquid crystal (LC) diffuser positioned between a light emitting layer and a spatial light modulator of the multiview display device.
  • operating in the first mode may include controlling the LC diffuser to prevent light diffusion
  • operating in the second mode may include controlling the LC diffuser to cause light diffusion
  • operating in the first mode comprises operating a camera of the multiview display device to measure a viewing distance between a viewer and a physical display of the multiview display device.
  • Some embodiments of the additional example method may further include transitioning between operating in the first mode and operating in the second mode based on a viewing distance between a viewer and a physical display of the multiview display device.
  • Some embodiments of the additional example method may further include determining the viewing distance using an image of a front-facing camera of the multiview display device to determine a distance from the multiview display device to an eye of a user.
  • transitioning between operating in the first mode and operating in the second mode may cause the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold.
  • Some embodiments of the additional example method may further include displaying touchable user interface elements in at least one monocular display region of a display of the multiview display device if operating in the first mode.
  • An additional example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform any one of the methods described above.
  • a further additional example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
  • the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
  • a further additional example method of displaying images with a multiview display in accordance with some embodiments may include switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.
  • various hardware elements of one or more of the described embodiments may be referred to as modules that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules.
  • a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation.
  • Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as those commonly referred to as RAM, ROM, etc.
  • examples of non-transitory computer-readable storage media include read only memory (ROM), random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
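The two display modes enumerated above map directly onto two diffuser states. A minimal controller sketch follows; the `drive_diffuser`, `render_3d`, and `render_2d` callables are hypothetical stand-ins for whatever driver interface a real switchable-diffuser display would expose, not names from the application.

```python
from enum import Enum

class DiffuserState(Enum):
    TRANSPARENT = "transparent"   # first display mode: multiview / virtual image
    TRANSLUCENT = "translucent"   # second display mode: 2D image on the physical display

class MultiviewDisplayController:
    """Minimal sketch of the two-mode operation described above.

    The three callables passed to the constructor are hypothetical
    stand-ins for a real device's hardware interface.
    """

    def __init__(self, drive_diffuser, render_3d, render_2d):
        self.drive_diffuser = drive_diffuser
        self.render_3d = render_3d
        self.render_2d = render_2d
        self.state = None

    def set_mode(self, mode):
        if mode == "first":
            # Diffuser transparent: emitted beams pass undiffused, so the
            # optics can form a multiview / virtual image.
            self.state = DiffuserState.TRANSPARENT
            self.drive_diffuser(self.state)
            self.render_3d()
        elif mode == "second":
            # Diffuser translucent: light scatters at the diffuser surface,
            # forming an ordinary 2D image at the physical display.
            self.state = DiffuserState.TRANSLUCENT
            self.drive_diffuser(self.state)
            self.render_2d()
        else:
            raise ValueError(f"unknown mode: {mode!r}")
```

Switching modes is then a single call, e.g. `controller.set_mode("second")` when the viewer moves close to the panel.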

Abstract

Some embodiments of an example method may comprise operating a multiview display, wherein the multiview display comprises a switchable diffuser. The example method may comprise: in a first display mode in which the switchable diffuser is in a transparent state, operating the multiview display to display a three-dimensional image; and in a second display mode in which the switchable diffuser is in a translucent state, operating the multiview display to display a two-dimensional image. Some embodiments of an example method may comprise switching a switchable diffuser of a multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.

Description

METHOD FOR PROJECTING AN EXPANDED VIRTUAL IMAGE WITH A SMALL LIGHT FIELD DISPLAY
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a non-provisional filing of, and claims benefit under 35 U.S.C. § 119(e) from, U.S. Provisional Patent Application Serial No. 62/915,348, entitled “METHOD FOR PROJECTING AN EXPANDED VIRTUAL IMAGE WITH A SMALL LIGHT FIELD DISPLAY” and filed October 15, 2019, which is hereby incorporated by reference in its entirety.
BACKGROUND
[0002] Creating an immersive visual experience generally requires high-resolution displays that cover a large portion of the human visual system's field-of-view (FOV). The total human FOV, representing the horizontal angular range visible at any given time with the two eyes combined, is generally estimated to be around 200° to 220°. Binocular FOV, the angular extent of the stereoscopic vision range shared by both eyes simultaneously, is around 115° wide with normal eyesight. Covering such a large angular range with any display device is a technically demanding task that also drives the display industry's R&D work forward, as is evident in the increasing sizes of, e.g., TVs and mobile phone displays offered to consumers.
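The angular figures above follow from simple trigonometry: a flat display of width w viewed on-axis from distance d subtends a horizontal FOV of 2·atan(w / 2d). The numbers in the usage note are illustrative, not taken from the application.

```python
import math

def horizontal_fov_deg(width_m: float, distance_m: float) -> float:
    """Horizontal FOV (degrees) subtended by a flat display of the given
    width, viewed on-axis from the given distance (both in metres)."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

def width_for_fov_m(fov_deg: float, distance_m: float) -> float:
    """Display width (metres) needed to cover a target horizontal FOV
    at a given on-axis viewing distance."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
```

For example, a 7 cm-wide phone display viewed from 35 cm subtends only about 11.4°, while covering the roughly 115° binocular range at the same distance would require a display about 1.1 m wide, which illustrates why physical screen size alone struggles to deliver immersion on a portable device.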
[0003] The average size of household TVs has increased steadily for decades. Larger TV sets offer more options for sharing the viewing experience because the display device may be positioned further away from the viewers without sacrificing the immersion effect. Movie theaters offer shared visual experiences for hundreds of people at the same time thanks to their huge screen sizes. The same trend that is evident in home TV sets may also be seen in display devices meant to be used by a single person. For example, mobile phone and personal computer displays have grown considerably in size over the last ten years, and there is heavy competition among manufacturers over who offers the biggest displays at the lowest prices. Gaming is one example of image content directed to a single user that may benefit from the immersion effect provided by large screen sizes.
[0004] With mobile phones, screen sizes are limited by the need for such devices to be portable. This fact has not stopped the development of phones with screens of 7” or more, even though such displays may be impractical to carry around all the time. As the digital content consumed by individuals shifts increasingly toward picture- and video-based communication, the trend of increasing display size may be expected to continue as long as devices meet the screen-size and portability requirements of mobile use. Examples of this development trend are flexible and bendable displays that may be folded or rolled up for better portability when not in use. These emerging devices underscore the desire for a large display that may be carried and used by a single person for immersive viewing experiences.
SUMMARY
[0005] An example method of operating a multiview display, where the multiview display may include a switchable diffuser, the example method in accordance with some embodiments may include: in a first display mode in which the switchable diffuser is in a transparent state, operating the multiview display to display a first image; and in a second display mode in which the switchable diffuser is in a translucent state, operating the multiview display to display a second image.
[0006] Some embodiments of the example method may further include: determining a distance between a viewer and the multiview display; and switching between the first display mode and the second display mode based on the distance.
[0007] For some embodiments of the example method, switching between the first display mode and the second display mode may switch the switchable diffuser between the transparent state and the translucent state.
[0008] For some embodiments of the example method, the first image may be a virtual image displayed at a distance from the multiview display.
[0009] For some embodiments of the example method, the distance from the multiview display may include a viewing distance between a viewer of the multiview display and a physical display of the multiview display.
[0010] For some embodiments of the example method, the first image may be a three-dimensional (3D) image.
[0011] For some embodiments of the example method, the first image may be a two-dimensional (2D) image.
[0012] For some embodiments of the example method, the second image may be a two-dimensional (2D) image displayed on the multiview display.
[0013] For some embodiments of the example method, the switchable diffuser may be a liquid crystal diffuser.
[0014] For some embodiments of the example method, the multiview display may be a directed backlight display.
[0015] For some embodiments of the example method, the multiview display may be a light field display.
[0016] For some embodiments of the example method, the multiview display may be a stereoscopic display.
[0017] For some embodiments of the example method, the multiview display may include: a light-emitting layer comprising an addressable array of light-emitting elements; and an optical layer overlaying the light-emitting layer, wherein the switchable diffuser may be overlaying the optical layer, and wherein the switchable diffuser layer may be switchable between a transparent state and a translucent state.
[0018] For some embodiments of the example method, the switchable diffuser layer may be a liquid crystal diffuser layer.
[0019] For some embodiments of the example method, the optical layer may include a two-dimensional array of substantially collimating lenses.
[0020] For some embodiments of the example method, the optical layer may include a two-dimensional array of collimating lenses.
[0021] For some embodiments of the example method, the optical layer may include a two-dimensional array of converging lenses.
[0022] For some embodiments of the example method, the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
[0023] In some embodiments of the example method, the multiview display may further include a spatial light modulator layer, wherein the spatial light modulator layer may be external to the optical layer.
[0024] For some embodiments of the example method, the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
[0025] For some embodiments of the example method, the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
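The two layer orderings permitted in paragraphs [0024] and [0025] (switchable diffuser between the optical layer and the SLM, or the SLM between the diffuser and the optical layer) can be captured in a small validity check. The layer names are informal labels chosen for this sketch, not terms from the application.

```python
# The two stack orderings described above, listed from the
# light-emitting layer outward toward the viewer.
VALID_STACKS = {
    ("light_emitting_layer", "optical_layer", "switchable_diffuser", "slm"),
    ("light_emitting_layer", "optical_layer", "slm", "switchable_diffuser"),
}

def stack_is_valid(layers) -> bool:
    """Check whether a proposed layer ordering matches one of the
    orderings described for the example method's display."""
    return tuple(layers) in VALID_STACKS
```

A design-rule check like this is useful when the same rendering pipeline must drive several hardware variants, since the diffuser's position relative to the SLM changes which layer scatters the light in 2D mode.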
[0026] An example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
[0027] For some embodiments of the example apparatus, the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
[0028] An example display device in accordance with some embodiments may include: a multiview display including a switchable diffuser layer, wherein the display device may be configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a virtual image, and wherein the display device may be configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on a physical display of the multiview display, a two-dimensional image.
[0029] For some embodiments of the example display device, the virtual image may be a three-dimensional (3D) image.
[0030] For some embodiments of the example display device, the virtual image may be a two-dimensional (2D) image.
[0031] An additional example display device in accordance with some embodiments may include: a multiview display including a switchable diffuser layer and comprising a physical display, wherein the display device is configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display, in a manner configured to be seen by a viewer of the display device at a distance from the physical display, at least one of a three-dimensional virtual image or a two-dimensional virtual image, and wherein the display device is configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on the physical display, a two-dimensional image.
[0032] A further example display device in accordance with some embodiments may include: a light-emitting layer comprising an addressable array of light-emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer may be switchable between a transparent state and a translucent state.
[0033] For some embodiments of the further example display device, the switchable diffuser layer may be a liquid crystal diffuser layer.
[0034] For some embodiments of the further example display device, the optical layer may include a two-dimensional array of substantially collimating lenses.
[0035] For some embodiments of the further example display device, the optical layer may include a two-dimensional array of collimating lenses.
[0036] For some embodiments of the further example display device, the optical layer may include a two-dimensional array of converging lenses.
[0037] For some embodiments of the further example display device, the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
[0038] Some embodiments of the further example display device may further include a spatial light modulator layer, wherein the spatial light modulator may be external to the optical layer.
[0039] For some embodiments of the further example display device, the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
[0040] For some embodiments of the further example display device, the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
[0041] A further additional example display device in accordance with some embodiments may include: optics configured to generate a virtual display at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
[0042] For some embodiments of the further additional example display device, the switchable diffuser layer may be a liquid crystal diffuser layer.
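The idea of optics placing a virtual display at a predetermined depth behind the device can be illustrated with the thin-lens equation: an emitter placed inside the focal length of a converging lens yields a virtual image on the emitter's side of the lens. The focal length and emitter distance in the usage note are illustrative values, not taken from the application.

```python
def virtual_image_depth_mm(focal_mm: float, object_mm: float) -> float:
    """Thin-lens image distance for an emitter at object_mm from a
    converging lens of focal length focal_mm (both in millimetres).

    Uses 1/f = 1/d_o + 1/d_i. A negative return value means the image
    is virtual and lies that far behind the lens, on the emitter side.
    """
    if object_mm == focal_mm:
        raise ValueError("emitter at the focal plane: beam is collimated, image at infinity")
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)
```

With f = 1.0 mm and the emitter 0.9 mm from the lens, the image distance comes out to -9.0 mm: a virtual image nine millimetres behind the microlens, illustrating how a thin optical stack can present content at an apparent depth well behind the physical device.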
[0043] An example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform any one of the methods described above.
[0044] An additional example method in accordance with some embodiments may include: operating in a first mode to generate a virtual image in focus behind a physical location of a multiview display device; and operating in a second mode to generate an image in focus at the physical location of the multiview display device.
[0045] Some embodiments of the additional example method may further include transitioning between the first mode and the second mode in response to transitioning a state of a switchable diffuser between a transparent state and a translucent state.
[0046] Some embodiments of the additional example method may further include transitioning between the first mode and the second mode in accordance with transitioning a state of a switchable diffuser between a transparent state and a translucent state.
[0047] For some embodiments of the additional example method, operating in the first mode and operating in the second mode each comprise controlling a liquid crystal (LC) diffuser positioned between a light emitting layer and a spatial light modulator of the multiview display device.
[0048] For some embodiments of the additional example method, operating in the first mode may include controlling the LC diffuser to prevent light diffusion, and operating in the second mode may include controlling the LC diffuser to cause light diffusion.
[0049] For some embodiments of the additional example method, operating in the first mode comprises operating a camera of the multiview display device to measure a viewing distance between a viewer and a physical display of the multiview display device.
[0050] Some embodiments of the additional example method may further include transitioning between operating in the first mode and operating in the second mode based on a viewing distance between a viewer and a physical display of the multiview display device.
[0051] Some embodiments of the additional example method may further include determining the viewing distance using an image of a front-facing camera of the multiview display device to determine a distance from the multiview display device to an eye of a user.
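The front-camera distance estimate of paragraph [0051] can be sketched with a pinhole-camera model: the viewer's pupil separation in the image shrinks in proportion to distance. The average interpupillary distance and the pixel focal length below are assumed values chosen for illustration, not figures from the application.

```python
AVERAGE_IPD_MM = 63.0  # assumed typical adult interpupillary distance

def viewing_distance_mm(focal_length_px: float, ipd_px: float,
                        ipd_mm: float = AVERAGE_IPD_MM) -> float:
    """Estimate viewer distance from a front-facing camera frame.

    focal_length_px: camera focal length expressed in pixels
    ipd_px: measured pixel distance between the viewer's pupils in the image
    """
    if ipd_px <= 0:
        raise ValueError("pupil separation must be positive")
    # Pinhole model: real size / distance == image size / focal length.
    return focal_length_px * ipd_mm / ipd_px
```

With a focal length of 1000 px and pupils 180 px apart in the image, the estimate is 350 mm, a plausible handheld viewing distance.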
[0052] For some embodiments of the additional example method, transitioning between operating in the first mode and operating in the second mode may cause the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold.
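The threshold rule of paragraph [0052] (first mode above a threshold distance, second mode below it) is a one-line comparison. The hysteresis band in the sketch below is an implementation choice of this example, not part of the application, added so the display does not flicker between modes when the viewer hovers near the threshold; the 300 mm threshold is likewise illustrative.

```python
def select_mode(distance_mm: float, threshold_mm: float = 300.0) -> str:
    """Plain threshold rule: first (virtual-image) mode beyond the
    threshold, second (direct 2D) mode when closer."""
    return "first" if distance_mm > threshold_mm else "second"

class ModeSwitcher:
    """Threshold rule with a small hysteresis band (an added design
    choice) to suppress mode flicker near the threshold distance."""

    def __init__(self, threshold_mm: float = 300.0, band_mm: float = 20.0):
        self.threshold_mm = threshold_mm
        self.band_mm = band_mm
        self.mode = "second"

    def update(self, distance_mm: float) -> str:
        # Only switch once the viewer is clearly past the band edges.
        if self.mode == "second" and distance_mm > self.threshold_mm + self.band_mm:
            self.mode = "first"
        elif self.mode == "first" and distance_mm < self.threshold_mm - self.band_mm:
            self.mode = "second"
        return self.mode
```

Fed with successive distance estimates, the switcher holds its current mode inside the band, so small head movements around 300 mm do not toggle the diffuser.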
[0053] Some embodiments of the additional example method may further include displaying touchable user interface elements in at least one monocular display region of a display of the multiview display device if operating in the first mode.
[0054] An additional example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform any one of the methods described above.
[0055] A further additional example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
[0056] For some embodiments of the further additional example apparatus, the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
[0057] A further additional example method of displaying images with a multiview display in accordance with some embodiments may include switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.
BRIEF DESCRIPTION OF THE DRAWINGS
[0058] FIG. 1A is a system diagram illustrating an example communications system according to some embodiments.
[0059] FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to some embodiments.
[0060] FIG. 2A is a schematic illustration showing a first example projector device.
[0061] FIG. 2B is a schematic illustration showing a second example projector device.
[0062] FIG. 2C is a schematic illustration showing an example foldable display device.
[0063] FIG. 3 is a schematic illustration showing example VR goggles generating a large virtual image.
[0064] FIG. 4 is a schematic plan view illustrating example light emission angles of a light field display according to some embodiments.
[0065] FIG. 5A is a schematic illustration showing an example beam divergence caused by a first geometric factor according to some embodiments.
[0066] FIG. 5B is a schematic illustration showing an example beam divergence caused by a second geometric factor according to some embodiments.
[0067] FIG. 5C is a schematic illustration showing an example beam divergence caused by a third geometric factor according to some embodiments.
[0068] FIG. 5D is a schematic illustration showing an example beam divergence caused by diffraction and a first aperture size according to some embodiments.
[0069] FIG. 5E is a schematic illustration showing an example beam divergence caused by diffraction and a second aperture size according to some embodiments.
[0070] FIG. 5F is a schematic illustration showing an example beam divergence caused by diffraction and a third aperture size according to some embodiments.
[0071] FIG. 6A is a schematic illustration showing an example image magnification lens with a first optical power according to some embodiments.
[0072] FIG. 6B is a schematic illustration showing an example image magnification lens with a second optical power according to some embodiments.
[0073] FIG. 6C is a schematic illustration showing an example image magnification lens with a third optical power according to some embodiments.
[0074] FIG. 7A is a schematic illustration showing an example first light source and lens configuration according to some embodiments.
[0075] FIG. 7B is a schematic illustration showing an example second light source and lens configuration according to some embodiments.
[0076] FIG. 7C is a schematic illustration showing an example third light source and lens configuration according to some embodiments.
[0077] FIG. 7D is a schematic illustration showing an example fourth light source and lens configuration according to some embodiments.
[0078] FIG. 8A is a schematic side view showing an example mobile display operating in a transparent diffuser display mode according to some embodiments.
[0079] FIG. 8B is a schematic side view showing an example mobile display operating in a translucent diffuser display mode according to some embodiments.
[0080] FIG. 9 is a schematic plan view illustrating an example optical display apparatus according to some embodiments.
[0081] FIG. 10A is a schematic plan view illustrating an example mobile display in virtual display mode according to some embodiments.
[0082] FIG. 10B is a schematic side view illustrating an example mobile display in virtual display mode according to some embodiments.
[0083] FIG. 11 is a schematic front view illustrating an example display surface divided into monocular and binocular regions according to some embodiments.
[0084] FIG. 12A is a schematic plan view illustrating an example viewing geometry for a display with parallel emission direction angles according to some embodiments.
[0085] FIG. 12B is a schematic plan view illustrating an example viewing geometry for a display with converging emission direction angles according to some embodiments.
[0086] FIG. 13A is a schematic plan view illustrating an example flat mobile display in a translucent diffuser display mode according to some embodiments.
[0087] FIG. 13B is a schematic plan view illustrating an example curved mobile display in a transparent diffuser display mode according to some embodiments.
[0088] FIG. 13C is a schematic side view illustrating an example flat mobile display in a translucent diffuser display mode according to some embodiments.
[0089] FIG. 13D is a schematic side view illustrating an example curved mobile display in a transparent diffuser display mode according to some embodiments.
[0090] FIG. 14 is a schematic plan view illustrating an example light field display geometry according to some embodiments.
[0091] FIG. 15A is a schematic plan view illustrating an example curved display in virtual display mode according to some embodiments.
[0092] FIG. 15B is a schematic side view illustrating an example curved display in virtual display mode according to some embodiments.
[0093] FIG. 16A is a schematic front view illustrating an example first projection simulation according to some embodiments.
[0094] FIG. 16B is a schematic front view illustrating an example second projection simulation according to some embodiments.
[0095] FIG. 16C is a schematic front view illustrating an example third projection simulation according to some embodiments.
[0096] FIG. 16D is a schematic front view illustrating an example fourth projection simulation according to some embodiments.
[0097] FIG. 16E is a schematic front view illustrating an example fifth projection simulation according to some embodiments.
[0098] FIG. 17A is a schematic plan view illustrating an example multiview display device according to some embodiments.
[0099] FIG. 17B is a schematic plan view illustrating an example multiview display device according to some embodiments.
[0100] FIG. 18 is a flowchart illustrating a first example process for operating a multiview display according to some embodiments.
[0101] FIG. 19 is a flowchart illustrating a second example process for operating a multiview display according to some embodiments.
[0102] The entities, connections, arrangements, and the like that are depicted in, and described in connection with, the various figures are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure "depicts," what a particular element or entity in a particular figure "is" or "has," and any and all similar statements, which may in isolation and out of context be read as absolute and therefore limiting, may only properly be read as being constructively preceded by a clause such as "In at least one embodiment, ...". For brevity and clarity of presentation, this implied leading clause is not repeated ad nauseam in the detailed description.
DETAILED DESCRIPTION
[0103] A wireless transmit/receive unit (WTRU) may be used, e.g., as a multiview display in some embodiments described herein.
[0104] FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
[0105] As shown in FIG. 1A, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d, any of which may be referred to as a "station" and/or a "STA", may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c and 102d may be interchangeably referred to as a UE.
[0106] The communications systems 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106, the Internet 110, and/or the other networks 112.
By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
[0107] The base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
[0108] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
[0109] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
[0110] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
[0111] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
[0112] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
[0113] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[0114] The base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR, etc.) to establish a picocell or femtocell. As shown in FIG. 1A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the CN 106.
[0115] The RAN 104/113 may be in communication with the CN 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like. The CN 106 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 1A, it will be appreciated that the RAN 104/113 and/or the CN 106 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT. For example, in addition to being connected to the RAN 104/113, which may be utilizing a NR radio technology, the CN 106 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
[0116] The CN 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
[0117] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
[0118] FIG. 1B is a system diagram illustrating an example WTRU 102. As shown in FIG. 1B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0119] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
[0120] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0121] Although the transmit/receive element 122 is depicted in FIG. 1B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0122] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
[0123] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0124] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium- ion (Li-ion), etc.), solar cells, fuel cells, and the like.
[0125] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location- determination method while remaining consistent with an embodiment.
[0126] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors, which may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
[0127] The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and the downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes) is limited to either the UL (e.g., for transmission) or the downlink (e.g., for reception).
[0128] In view of Figures 1A-1 B, and the corresponding description of Figures 1A-1B, one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown). The emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
[0129] The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
[0130] The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
[0131] FIG. 2A is a schematic illustration showing a first example projector device 202. FIG. 2B is a schematic illustration showing a second example projector device 232. One of the main design challenges to be solved with portable displays is how to make a device that is small enough to be carried around and is still able to produce a large image for an individual person easily and conveniently. Pico projectors integrate a projector within a mobile device, but such a device relies on the availability of a suitable screen or surface for the projected image. FIG. 2C is a schematic illustration showing an example foldable display device 262. Many foldable displays needing a mechanical operation to change the display size are prone to mechanical failure. Many foldable displays also need physical space for operation and may be costly.
[0132] FIG. 3 is a schematic illustration showing example VR goggles generating a large virtual image. Visual information enters the human perception system through the eye pupils. One method of covering a large FOV involves bringing the display surface as close to the eyes as possible.
[0133] This method has led to the development of a large number of different head mounted virtual reality (VR), augmented reality (AR) and mixed reality (MR) display devices. These devices usually have a very high resolution display that is brought close (<10 cm) to the face. A pair of magnifying lenses 306 or a combination of projector optics and a lightguide in front of the eyes may be used to see the enlarged image at a further distance (see FIG. 3). The use of a close-range physical display 304 permits having a display or display section for each eye separately and permits, e.g., creation of stereoscopic 3D images. Because head mounted displays (HMDs) are very close to the eyes, HMDs may cover a large FOV with more compact optical constructions than goggleless displays. HMDs also may be more efficient in producing the amount of light used because the eye pupil pair "target area" is well defined in a relatively fixed position.
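The enlarged virtual image produced by a magnifying lens, as described above, follows from the standard thin-lens relation 1/d_o + 1/d_i = 1/f: a display placed inside the focal length of the lens yields a magnified virtual image at a larger apparent distance. The sketch below illustrates this geometry with hypothetical numbers (a 6 cm focal length and a display 5 cm from the lens); these values are not taken from this disclosure.

```python
def virtual_image(f_cm: float, d_o_cm: float):
    """Thin-lens equation 1/d_o + 1/d_i = 1/f.

    Returns (image distance, lateral magnification). A negative image
    distance indicates a virtual image on the same side as the object,
    which is the HMD magnifier case when d_o < f.
    """
    d_i = 1.0 / (1.0 / f_cm - 1.0 / d_o_cm)
    magnification = -d_i / d_o_cm
    return d_i, magnification


# Hypothetical example: display 5 cm behind a 6 cm focal length lens.
d_i, m = virtual_image(6.0, 5.0)
# The image forms 30 cm away on the display side (d_i = -30.0) and
# appears 6 times larger (m = 6.0).
```

This is why a small physical display close to the eye can subtend a large FOV while the eye focuses comfortably at the farther virtual image distance.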
[0134] Goggleless displays may offer a more natural and convenient viewing experience than HMDs. Goggleless displays also may allow image sharing, which may not be possible if the display is fixed in front of a single user's eyes. However, goggleless displays usually are physically large to cover a significant portion of the viewer's FOV, and goggleless displays are more expensive to manufacture. Because user position is not fixed to the device, the light emitted from the display is spread over a large angular range to make the picture visible from multiple positions, and most of the light may be wasted. Contrary to most HMDs, a viewer without goggles is able to change position around the display, and the viewer without goggles will be provided several different "views" of the same 3D scenery. To save energy and processing power, eye tracking systems may be used with goggleless 3D displays to determine the position and line of sight of the user(s), making it possible to direct the image or the "light field" straight to the eye pupils. Such eye tracking and projection systems have their own hardware and processing power costs.
[0135] One potential issue with portable displays is how to make the device small enough to be carried by a user yet able to produce images large enough to be viewed conveniently. Bringing a display closer to the eyes may increase the FOV, but the eyes are unable to focus properly at very short distances, even with larger, high-resolution displays. This potential issue is especially prominent for people 40 years of age or older who have hyperopia or "far sighted" eyes and only a narrow range of eye focus adjustment due to the stiffening of the eye lens. With an aging population, future displays will need to be viewable from longer rather than shorter distances, which will diminish the immersive effect of the displayed content if display FOVs are too small.
[0136] One possibility for larger images is to use foldable displays. Unfortunately, many such devices are prone to mechanical failure, are relatively difficult to use because they may require some manual operation, and need a lot of space when in use. Another possibility is to use a miniature image projector and a reflective screen, but this approach has the difficulty of finding a large, flat (even), white, and diffusely reflecting surface for a good quality picture. Many such devices are also unable to limit visibility of the large image to the individual using the device. Good privacy and large virtual screen size may be achieved with many current VR systems, but many such systems are typically not fully portable and may require optical pieces in front of the physical display to function properly. Many such systems tend to isolate the viewer from their surroundings, and devices attached in front of the eyes may make social interactions more difficult. The use of fixed optical pieces in front of the display also effectively prohibits using the device in any mode other than the expanded virtual image mode.
[0137] Displaying visual information is currently achieved mostly by using physical displays that control the color and luminance of multiple small pixels that emit light in all directions. Although multiple display paradigms exist that improve the visual experience, the best visual experience may be produced by a display that is able to produce any arbitrary distribution of luminance and color as a function of position and viewing direction. This luminance distribution is often called a light field (LF), or the plenoptic function. If a light field may be produced with sufficient accuracy, a human viewer (or eye 308) may not be able to notice the difference between a synthetic light field and a real one. A real LF display device 304 should in many cases have full control over both the spatial and angular domains of light distribution. With such properties, a real LF display 304 may be used in creating a virtual image 302 at any position in space and with any size within the FOV covered by the display device 304 itself.
[0138] The human mind perceives and determines depths of observed objects in part by receiving signals from muscles used to orient each eye. This eye convergence uses a triangulation method to estimate the object distance. The brain associates the relative angular orientations of the eyes with the determined depths of focus. Eye muscles connected to the single eye lens automatically adjust the lens shape in such a way that the eyes are focused to the same distance where the two eyes converge. Correct retinal focus cues give rise to a natural image blur on objects outside of an observed focal plane and a natural dynamic parallax effect. In a natural setting, eye convergence and retinal focus cues are both coherent. Correct retinal focus cues may require very high angular density light fields, potentially making it a big challenge to build a sufficiently accurate display that is capable of emitting the necessary light rays. Also, the rendering of the artificial image needs to be performed with a high enough fidelity.
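The triangulation step described above can be made concrete with a short numeric sketch (illustrative only, not part of the claimed method): the two lines of sight and the interpupillary baseline form an isosceles triangle, so the converged distance follows from the vergence angle. The 64 mm interpupillary distance matches the average cited later in this document; the example vergence angle is an assumed value.

```python
import math

def vergence_depth_mm(ipd_mm: float, vergence_deg: float) -> float:
    """Estimate the distance to the convergence point of the two eyes.

    Half of the vergence angle subtends half of the interpupillary
    distance (IPD), so distance = (IPD / 2) / tan(vergence / 2).
    """
    half_angle = math.radians(vergence_deg / 2.0)
    return (ipd_mm / 2.0) / math.tan(half_angle)

# With a 64 mm IPD, about 3.7 degrees of vergence corresponds to ~1 m.
print(vergence_depth_mm(64.0, 3.67))
```

Smaller vergence angles map to larger distances, which is one reason triangulation becomes an increasingly weak depth cue for far objects.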
[0139] Multiple different optical methods may be used to create a light field display. PCT Patent Application WO2008138986 is understood to describe using electroholography to create a light field display. US Patent 6,118,584 is understood to describe using integral imaging to create a light field display. US Patent Application 2016/0116752 and US Patent Application US2014/0035959 are understood to describe using parallax barriers to create a light field display. PCT Patent Application WO2011149641, US Patent 9,298,168, and EP Patent Application 3273302 are understood to describe using directional backlighting to create a light field display. In electroholography, dense spatial light modulators (SLMs) are used to modulate coherent monochromatic light, which creates the light field as a wavefront. In integral imaging, a microlens array is placed in front of a 2D display. This divides the resolution of the underlying display to spatial and angular domains. In parallax barrier methods, an array of static pinholes or slits is used to selectively block light.
[0140] US Patent 8,848,006 is understood to use dynamic barriers implemented with an SLM, or multiple stacked SLMs. Parallax barrier displays also may include time multiplexing by displaying multiple different patterns (usually called frames) on the SLMs, so that the frames are integrated together due to persistence of vision. In example beam redirection methods, beams of light are scanned sequentially in time while their intensity is modulated. This method may be implemented, for example, with a directional backlight whose intensity is modulated by a SLM. As another example, this method may be implemented by having an array of intensity controlled beam generators combined with a beam redirection method.
[0141] FIG. 4 is a schematic plan view illustrating example light emission angles of a light field display according to some embodiments. In particular, FIG. 4 shows a schematic view of the geometry involved in creation of the light emission angles from a LF display. The LF display 404 in FIG. 4 produces the desired retinal focus cues and multiple views of 3D content in a single panel display. In the example picture of FIG. 4, one virtual image object point 402 is located behind the LF display 404. A single display surface is able to generate at least two different views to the two eyes of a single user to create a coarse 3D perception effect. The brain uses these two different eye images to determine 3D distance. Logically this is based on triangulation and interpupillary distance. To provide this effect, at least two views are projected into a single-user viewing angle (SVA) 408, as shown in FIG. 4. Furthermore, in at least one embodiment the LF display projects at least two different views inside a single eye pupil to provide the correct retinal focus cues. For optical design purposes, an “eye box” (and thereby an eye box width 406) may be defined around the viewer eye pupil when determining the volume of space within which a viewable image is formed. In some embodiments of the LF display, at least two partially overlapping views are projected inside an Eye-Box Angle (EBA) 414 covered by the eye-box at a certain viewing distance 410. In some embodiments, the LF display is viewed by multiple viewers 416, 418, 420 looking at the display from different viewing angles. In such embodiments, several different views of the same 3D content are projected to respective viewers covering a whole intended Multi-user Viewing Angle (MVA) 412. For some embodiments, a multiview display may be a light field display, such as the light field display shown in FIG. 4.
[0142] For some embodiments, a high-quality LF display may be created by using multiple projected beams that form voxels at different focal distances from the display. To create good resolution images, each beam is very well collimated with a narrow diameter. Furthermore, ideally the beam waist may be positioned at the same spot where the beams cross to avoid contradicting focus cues for the eye. If the beam diameter is large, the voxel formed in the beam crossing is imaged to the eye retina as a large spot. Due to natural beam divergence, in front of the display the beam becomes wider as the distance between voxel and eye gets smaller, and the virtual focal plane spatial resolution decreases while the eye resolution increases due to the close distance. Behind the display, beam widening and resolution loss are compensated by the fact that eye spatial resolution drops with distance, and it may become somewhat easier to create images with adequate resolution.
[0143] FIG. 5A is a schematic illustration showing an example beam divergence caused by a first geometric factor according to some embodiments. FIG. 5B is a schematic illustration showing an example beam divergence caused by a second geometric factor according to some embodiments. FIG. 5C is a schematic illustration showing an example beam divergence caused by a third geometric factor according to some embodiments. For the ideal lens of FIG. 5A, the achievable light beam collimation is dependent on two geometrical factors: the size of the light source and the focal length of the lens. Perfect collimation 504 without any beam divergence may only be achieved in the theoretical case in which a single color point source (PS) 502 is located exactly at focal length distance from an ideal positive lens. This case is pictured in FIG. 5A. Unfortunately, all real-life light sources have some surface area from which the light is emitted, making them extended sources (ES) 512, 522. As each point of the source is separately imaged by the lens, the total beam ends up consisting of a group of collimated sub-beams that propagate in somewhat different directions after the lens. As shown in FIGs. 5A to 5C, as the source 502, 512, 522 grows larger, the total beam divergence 504, 514, 524 increases. This geometrical factor generally cannot be avoided using optical devices or techniques, and this geometric factor is the dominating feature causing beam divergence with relatively large light sources.
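As a rough numeric sketch of this geometric factor (an illustration under simplifying thin-lens assumptions, not a statement of the patented design), the angular spread of the collimated beam can be estimated by mapping the source extent through the lens focal length; the 50 µm source and 1 mm focal length below are example values echoing the magnification example later in this document.

```python
import math

def geometric_divergence_deg(source_width_um: float,
                             focal_length_mm: float) -> float:
    """Full divergence angle of a beam collimated from an extended source.

    Each source point yields a collimated sub-beam whose direction is set
    by the point's offset from the optical axis, so the source half-width
    divided by the focal length gives the beam half-angle.
    """
    half = math.atan((source_width_um * 1e-3 / 2.0) / focal_length_mm)
    return math.degrees(2.0 * half)

# A true point source is perfectly collimated; an extended source is not.
print(geometric_divergence_deg(0.0, 1.0))    # 0.0 degrees
print(geometric_divergence_deg(50.0, 1.0))   # ~2.9 degrees
```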
[0144] Another, non-geometrical, feature causing beam divergence is diffraction. The term refers to various phenomena that occur when a wave (of light) encounters an obstacle or a slit. Diffraction is the bending of light around the corners of an aperture into the region of a geometrical shadow. Diffraction effects may occur in all imaging systems and cannot be removed, even with a perfect lens design that is able to balance out all optical aberrations. A lens that is able to reach the highest optical quality is often called “diffraction limited” because most of the blurring remaining in the image comes from diffraction. The angular resolution achievable with a diffraction limited lens may be calculated from the formula of Eq. 1:

θ = arcsin(1.22 λ / D)    (Eq. 1)

where λ is the wavelength of light, and D is the diameter of the lens aperture. It may be seen from the equation that the color (wavelength) of light and lens aperture size (diameter of light entering a viewer’s pupil) are the only things that have an influence on the amount of diffraction.
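Eq. 1 can be evaluated directly; the short sketch below (with illustrative values for green light and millimeter-scale apertures, not figures from the patent) shows that halving the aperture doubles the diffraction-limited blur angle.

```python
import math

def diffraction_limit_deg(wavelength_nm: float, aperture_mm: float) -> float:
    """Angular resolution of a diffraction-limited lens (Eq. 1):
    theta = arcsin(1.22 * wavelength / aperture_diameter)."""
    ratio = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(math.asin(ratio))

# Green light (550 nm): smaller apertures give larger diffraction blur.
print(diffraction_limit_deg(550.0, 1.0))   # ~0.0384 degrees
print(diffraction_limit_deg(550.0, 0.5))   # ~0.0769 degrees
```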
[0145] FIG. 5D is a schematic illustration showing an example beam divergence caused by diffraction and a first aperture size according to some embodiments. FIG. 5E is a schematic illustration showing an example beam divergence caused by diffraction and a second aperture size according to some embodiments. FIG. 5F is a schematic illustration showing an example beam divergence caused by diffraction and a third aperture size according to some embodiments. FIGs. 5D to 5F show a schematic presentation of light emitting from a point source (PS) 532, 542, 552 and how the beam divergence 534, 544, 554 is increased when the lens aperture size is reduced. This effect may actually be formulated into a general rule in imaging optics design: if the design is diffraction limited, the only way to improve resolution is to make the aperture larger. Diffraction is typically the dominating feature causing beam divergence with relatively small light sources.
[0146] As shown in FIGs. 5A to 5C, the size of an extended source has a big effect on the achievable beam divergence. The source geometry or spatial distribution is actually mapped to the angular distribution of the beam, and this property may be seen in the resulting “far field pattern” of the source-lens system. In practice, this property means, e.g., that if the collimating lens is positioned at the focal distance from the source, the source is actually imaged to a relatively large distance from the lens and the size of the image may be determined from the system “magnification ratio”. In the case of a single imaging lens, this magnification ratio may be calculated by dividing the distance between lens and image with the distance between source and lens, as shown in Eq. 2:

magnification ratio = (distance between lens and image) / (distance between source and lens)    (Eq. 2)
[0147] FIG. 6A is a schematic illustration showing an example image magnification lens with a first optical power according to some embodiments. FIG. 6B is a schematic illustration showing an example image magnification lens with a second optical power according to some embodiments. FIG. 6C is a schematic illustration showing an example image magnification lens with a third optical power according to some embodiments. FIGs. 6A to 6C illustrate Eq. 2 for three different distances 602, 632, 672 between the lens and the image 604, 634, 674, resulting in larger images 604, 634, 674 as the distance 602, 632, 672 is increased. If the distance between source and lens is fixed, different image distances may be achieved by changing the optical power of the lens with the lens curvature. But when the image distance becomes larger and larger in comparison to the lens focal length, the changes in lens optical power become smaller and smaller, approaching the situation where the lens is effectively collimating the emitted light into a beam that has the spatial distribution of the source mapped into the angular distribution, and the source image is formed without focusing.

[0148] In flat form factor goggleless 3D displays, the display projection lenses typically have very small focal lengths to achieve the flat structure, and the beams from a single display optics cell are projected to a relatively large viewing distance. This means, e.g., that the sources are effectively imaged with high magnification when the beams of light propagate to the viewer. For example, if the source size is 50 µm x 50 µm, the projection lens focal length is 1 mm, and the viewing distance is 1 m, then the magnification ratio is 1000:1, and the source geometric image is 50 mm x 50 mm. This means, e.g., that the single light emitter may be seen only with one eye inside this 50 mm diameter eye box.
[0149] For a lens with a magnification ratio of 1000:1, if the source has a diameter of 100 µm, the resulting image is 100 mm wide, and the same pixel may be visible to both eyes simultaneously because the average distance between eye pupils is only 64 mm. In this latter case, a stereoscopic 3D image is not formed because both eyes see the same images. This example calculation shows how geometrical parameters, like light source size, lens focal length, and viewing distance, are tied to each other.
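The worked example above (Eq. 2 with a 50 µm or 100 µm source, 1 mm focal length, and 1 m viewing distance) can be checked with a few lines of arithmetic; the 64 mm average pupil distance is taken from the text.

```python
def source_image_mm(source_um: float, focal_length_mm: float,
                    viewing_distance_m: float) -> float:
    """Geometric image of the source at the viewing distance (Eq. 2):
    magnification = (distance lens-to-image) / (distance source-to-lens)."""
    magnification = (viewing_distance_m * 1000.0) / focal_length_mm
    return source_um * 1e-3 * magnification

IPD_MM = 64.0  # average distance between eye pupils (from the text)

# 50 um source -> ~50 mm eye box: narrower than the IPD, so each eye can
# be shown a different image and a stereoscopic pair can be formed.
print(source_image_mm(50.0, 1.0, 1.0))

# 100 um source -> ~100 mm image: wider than the IPD, so both eyes see
# the same pixel and no stereoscopic 3D image is formed.
print(source_image_mm(100.0, 1.0, 1.0))
```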
[0150] As the beams of light are projected from the LF display pixels, divergence causes the beams to expand. This applies not only to the actual beam emitted from the display towards the viewer but also to the virtual beam that appears to be emitted behind the display, converging to the single virtual focal point close to the display surface. In the case of a multiview display, the divergence expands the size of the eye box. If the beam size at the viewing distance exceeds the distance between the two eyes, the stereoscopic effect breaks down. However, if a voxel on a virtual focal plane is created with two or more crossing beams outside the display surface, the spatial resolution achievable with the beams will decrease as the divergence increases. If the beam size at the viewing distance is larger than the size of the eye pupil, the pupil will become the limiting aperture of the whole optical system.
[0151] Both geometric and diffraction effects work in unison, and an LF display pixel design may balance geometric and diffraction effects to achieve a particular voxel resolution. This is emphasized with very small light sources as the optical system measurements become closer to the wavelength of light and diffraction effects start to dominate the performance. FIGs. 7A to 7D illustrate how the geometric and diffraction effects work together in cases where one and two extended sources are imaged to a fixed distance with a fixed magnification. FIGs. 7A to 7D show light source spot sizes for different geometric magnification and diffraction effects.
[0152] FIG. 7A is a schematic illustration showing an example first light source and lens configuration according to some embodiments. For the example structure of FIG. 7A, an extended source (ES) 702 is located 10 cm from the magnification lens. Light beams passing through an example lens aperture 704 are separated by 5 cm. The light beams have a geometric image indicated as GI 706. The light source has a diffracted image height indicated by DI 708. FIG. 7A shows a lens aperture size that is relatively small, and the geometric image (GI) 706 is surrounded by a blur that comes from diffraction, making the diffracted image (DI) 708 much larger.
[0153] FIG. 7B is a schematic illustration showing an example second light source and lens configuration according to some embodiments. For the example structure of FIG. 7B, two extended sources (ES1 (724) and ES2 (722)) are located 10 cm from the magnification lens. Light beams passing through an example lens aperture 726 are separated by 5 cm. The light beams generate respective images indicated with heights of GI1 (728) and GI2 (732), respectively. Each light source has a respective diffracted image height indicated by DI1 (730) and DI2 (734), respectively. FIG. 7B shows a case where two extended sources 724, 722 are placed side-by-side and imaged with the same small aperture lens. Even though the GIs 728, 732 of both sources 724, 722 are separated, the two source images cannot be resolved because the diffracted images 730, 734 overlap. In practice this would mean that reduction of light source size may not improve the achievable voxel resolution because the resulting source image size may be the same with two separate light sources as with one larger source that covers the area of both separate emitters. To resolve the two source images as separate pixels/voxels, the aperture size of the imaging lens may be increased.
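The resolvability argument of FIGs. 7A to 7D can be sketched with a Rayleigh-type criterion built from Eq. 1: two source images are separable only if their angular separation at the lens exceeds the diffraction-limited angle. The emitter separation and aperture values below are illustrative assumptions, not figures from the patent.

```python
import math

def resolvable(separation_um: float, distance_cm: float,
               aperture_mm: float, wavelength_nm: float = 550.0) -> bool:
    """True if two emitters can be imaged as separate spots.

    Compares the emitters' angular separation seen from the lens with
    the diffraction limit theta = arcsin(1.22 * lambda / D) of Eq. 1.
    """
    angular_sep = math.atan((separation_um * 1e-6) / (distance_cm * 1e-2))
    diff_limit = math.asin(1.22 * (wavelength_nm * 1e-9) /
                           (aperture_mm * 1e-3))
    return angular_sep > diff_limit

# Two emitters 20 um apart, 10 cm from the lens (cf. FIGs. 7B and 7D):
print(resolvable(20.0, 10.0, aperture_mm=0.5))  # small aperture: False
print(resolvable(20.0, 10.0, aperture_mm=5.0))  # larger aperture: True
```

This mirrors the conclusion of the figures: shrinking the emitters alone does not help; the aperture must grow before the diffracted images stop overlapping.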
[0154] FIG. 7C is a schematic illustration showing an example third light source and lens configuration according to some embodiments. For the example structure of FIG. 7C, an extended source (ES) 742 is located 10 cm from the magnification lens. Light beams passing through an example lens aperture 744 are separated by 10 cm. The light beams generate an image indicated with a height of GI 746. The light source has a diffracted image height indicated by DI 748. Compared with FIG. 7A, the distance GI 706, 746 is the same in both figures, but the diffracted image height 748 in FIG. 7C is smaller than the diffracted image height 708 in FIG. 7A. FIG. 7C shows the same focal length lens as FIGs. 7A and 7B, but a larger aperture 744 is used in imaging the extended source 742. Diffraction is reduced, and the diffracted image 748 may be only slightly larger than the geometric image 746, which has remained the same size because magnification is fixed.
[0155] FIG. 7D is a schematic illustration showing an example fourth light source and lens configuration according to some embodiments. For the example structure of FIG. 7D, two extended sources (ES1 (764) and ES2 (762)) are located 10 cm from the magnification lens. Light beams passing through an example lens aperture 766 are separated by 10 cm. The light beams generate respective images indicated with heights of GI1 (768) and GI2 (772), respectively. Each light source has a respective diffracted image height indicated by DI1 (770) and DI2 (774), respectively. Compared with FIG. 7B, the distances GI1 (728, 768) and GI2 (732, 772) are the same in both figures, but the diffracted image heights (770, 774) in FIG. 7D are smaller than the diffracted image heights (730, 734) in FIG. 7B. In FIG. 7D, the two spots may be resolved because the diffracted images (770, 774) are not overlapping, thereby permitting, e.g., the use of two different sources (764, 762) and improvement of spatial resolution of the voxel grid.

[0156] The journal article Vincent W. Lee, Nancy Twu, & Ioannis Kymissis, Micro-LED Technologies and Applications, 6/16 INFORMATION DISPLAY 16-23 (2016) discusses an emerging display technology based on the use of so-called µLEDs. Micro LEDs are LED chips that are manufactured typically with the same techniques and from the same materials as other LED chips in use today. However, µLEDs are miniaturized versions of the commonly available components and may be made as small as 1 µm - 10 µm. One of the challenges with µLEDs is how to handle the very small components in display manufacturing. The journal article François Templier, et al., A Novel Process for Fabricating High-Resolution and Very Small Pixel-pitch GaN LED Microdisplays, SID 2017 DIGEST 268-271 (2017) discusses one of the densest matrices that has been manufactured so far, 2 µm x 2 µm chips assembled with 3 µm pitch.
The µLEDs have been used as backlight components in TVs, but µLEDs are expected to challenge OLEDs in the µ-display markets in the near future. Compared to OLEDs, many µLEDs are more stable components and are able to produce high light intensities, which generally enables µLEDs to be used in many applications, such as head mounted display systems, adaptive car headlamps (as an LED matrix), and TV backlights. The µLEDs also may be used in 3D displays, which use very dense matrices of individually addressable light emitters that may be switched on and off very fast.
[0157] A bare µLED chip may emit a specific color with a spectral width of ~20-30 nm. A white source may be created by coating the chip with a layer of phosphor, which converts the light emitted by blue or UV LEDs into a wider white light emission spectrum. A full-color source may also be created by placing separate red, green, and blue LED chips side-by-side. The combination of these three primary colors may create the sensation of a full color pixel when the separate color emissions are combined by the human visual system. A very dense matrix may allow the manufacturing of self-emitting full-color pixels that have a total width below 10 µm (3 x 3 µm pitch).
[0158] Light extraction efficiency from the semiconductor chip is a parameter that indicates electricity-to-light efficiency of LED structures. There are several methods that aim to enhance the extraction efficiency and that aim to permit building of LED-based light sources that efficiently use electric energy, such as in mobile devices that have a limited power supply. One method presented in US Patent No. 7,994,527 is understood to be based on the use of a shaped plastic optical element that is integrated directly on top of a LED chip. Due to a lower refractive index difference, integration of a plastic shape extracts more light from the chip material in comparison to a chip surrounded by air. The plastic shape also directs the light in a way that enhances light extraction from the plastic piece and makes the emission pattern more directional. Another method presented in US Patent No. 7,518,149 is understood to enhance light extraction from a µLED chip. This is done by shaping the chip to a form that favors light emission angles that are more perpendicular towards the front facet of the semiconductor chip and enables light to escape from the high refractive index material. These structures also direct the light emitted from the chip. In the latter case, the extraction efficiency is calculated to be twice as good in comparison with typical µLEDs, and more light is emitted to an emission cone of 30° in comparison with a typical chip with a Lambertian distribution of emitted light that is distributed evenly to the surrounding hemisphere.
[0159] Multiple components and systems based on liquid crystal (LC) materials have been developed for electrical control of light propagation, and such components are available at lower costs in large quantities. For some embodiments, LC-based tunable components may be used in 3D displays. Such LC components may be able to cause light beam adjustments without moving mechanical parts. LC-based components typically use linearly polarized light, which may lower optical efficiency and increase power consumption. Because LCDs are already polarization-dependent devices, polarization-based light propagation controlling components may be used in LCD-based 3D displays without a high cost in efficiency. The journal article Shang, Xiaobing, et al., Fast Switching Cholesteric Liquid Crystal Optical Beam Deflector with Polarization Independence, SCIENTIFIC REPORTS Jul 26;7(1):6492 (2017) (“Shang”) describes using cholesteric LCs (instead of the more common nematic phase crystals) that may be used, e.g., for beam steering without polarization dependence, enabling increasing of the component transmittance for display panels based on OLEDs or LEDs.
[0160] Journal articles H. Chen, et al., A Low Voltage Liquid Crystal Phase Grating with Switchable Diffraction Angles, 7 NATURE SCIENTIFIC REPORTS, Article No. 39923 (2017) (“Chen”) and Y. Ma, et al., Fast Switchable Ferroelectric Liquid Crystal Gratings with Two Electro-Optical Modes, 6:3 AIP ADVANCES, Article No. 035207 (2016) describe using liquid crystal materials in switchable or adjustable diffraction gratings. Liquid crystal materials also may be used in diffusers. The switchable gratings are used in splitting light beams into multiple child beams that propagate to different directions according to the grating equation, where the main optical parameter is the grating period. This parameter may be affected by the LC grating electrode design, which may be fine enough to, e.g., induce a high-density varying refractive index pattern in the LC material. Tunable diffusers are used for scattering light, and typically tunable diffusers may be electrically switched between transparent and translucent states. These components are based on electrical tuning of LC material that has been modified to perform some particular change under an applied electric field.
[0161] Journal articles A. Moheghi, et al., PSCT for Switchable Transparent Liquid Crystal Displays, 46:1 SID 2015 DIGEST (2015) and J. Ma, L. Shi, & D-K. Yang, Bistable Polymer Stabilized Cholesteric Texture Light Shutter, 3:2 APPLIED PHYSICS EXPRESS (2010) describe LC diffusers that are based on the polymer stabilized cholesteric texture (PSCT) approach. In these components, the transparent / scattering material has been prepared in such a way that the texture includes liquid crystal material. According to journal articles R. Yamaguchi, et al., Normal and Reverse Mode Light Scattering Properties in Nematic Liquid Crystal Cell Using Polymer Stabilized Effect, 28:3 J. PHOTOPOLYMER SCI. AND TECH. 319-23 (2015) and G. Nabil Hassanein, Optical Tuning of Polymer Stabilized Liquid Crystals Refractive Index, 5(3):3 J. LASERS, OPTICS & PHOTONICS (2018), another tunable diffuser type uses polymer stabilized liquid crystals (PSLC) that have micrometer-sized LC droplets embedded in an optically transparent polymer matrix with matched refractive index. If an electric field is applied to the PSLC material, the refractive indices of the aligned droplets are changed, and the interfaces between the droplets and surrounding polymer start to refract light. This means, e.g., that when the tunable diffuser is activated, light is scattered in all directions from the small LC particles embedded inside the material. For such material-based scattering, the light diffusing effect tends to be large, but transmission through the component is lowered because a large portion of the light is scattered back, and there tends to be no control over the scattered light angular distribution.
[0162] Some hybrid LC diffuser optical structures have been developed. PCT Patent Application No. WO2016140851 is understood to describe combining a switchable LC diffuser layer with a light diffusing surface structure. The diffusing surface is either a separately manufactured foil possibly laminated to the tunable part, or the diffusing surface may be an integrated structure directly patterned to the outer surface of the LC diffuser. The diffusion property of the static diffusion surface is increased or decreased by switching on and off the tunable diffuser. U.S. Patent Application No. 2010/0079584, now U.S. Patent No. 9,462,261, is understood to describe a hybrid structure where a combination of lenticular microlenses and an LC diffuser layer is used in switching an autostereoscopic display between 2D and 3D display modes. This switching is done by diffusing the angular distribution of the multiview display by switching on the LC diffuser. PCT Patent Application No. WO2005011292 is also understood to describe switching between 2D and 3D display modes. Electrically tunable LC diffusers that may be switched between transparent and translucent states have been widely utilized, e.g., in volumetric displays based on sequential image projection to multiple switchable screens as understood to be described in PCT Application Nos. WO02059691 and WO2017055894. Such components scatter light uniformly to all directions. This feature may be useful when the components are used in volumetric displays such that the voxels are visible from all directions. However, such volumetric 3D images are not able to create occlusion properly, making them look translucent and somewhat unnatural.
[0163] For some embodiments, an expanded virtual display image may be created with a small mobile display that is held close to the viewer eyes. The example optical method described in accordance with some embodiments may be used in a portable device that has a light field display, which functions on the directed backlight principle. The device may be switched between two modes: a high resolution mobile display that is viewed at intermediate distances and a virtual expanded display that is viewed at longer distances.
[0164] In some embodiments, an example light field display includes, e.g., a dense matrix of light emitting elements, a light collimating layer, an electrically switchable diffuser, and a spatial light modulator (SLM). The light emitters and collimating layer create directional backlight that is used together with the SLM in projecting the image from the display surface to an enlarged virtual image lying on a focus plane further behind the device. Design of an example optical structure in accordance with some embodiments may, e.g., enable projecting virtual pixel images to the viewer eyes with high resolution and correct retinal focus cues for the more distant display.
[0165] For some embodiments, an expanded virtual display image may be created with a small mobile physical display that is held close to the viewer eyes. The device may be switched between two modes: (1) a high resolution mobile display that is viewed at intermediate distance and (2) a virtual expanded display that is viewed at longer distance. The first viewing mode is suitable, e.g., for normal mobile phone display use like messaging or browsing social media, whereas the second mode may be activated for applications that may benefit from larger display size like watching movies or gaming. For some embodiments, a large immersive display is generated with a small, portable device.
[0166] For some embodiments, a light field display functioning with directed backlight principle may be the optical structure used for virtual image projection. Light is emitted from a layer with separately controllable small emitters, e.g., pLEDs. A lens structure may be placed in front of the emitters to collimate light into a set of beams that illuminate the back of a spatial light modulator (SLM), which may be, e.g., a high density LCD panel. The emitters and collimator lenses form a series of projector cells that are able to create highly directional and controllable back illumination to the SLM. Beams generated with the backlight module are modulated with the SLM, and virtual image pixels are projected to the viewer’s eyes. An electrically switchable liquid crystal (LC) diffuser placed between the collimating lenses and SLM may be used to switch between the two operational modes.
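The two operational modes map onto the state of the switchable LC diffuser described above: diffusing in the normal display mode, transparent in the virtual image mode (as also noted in paragraph [0171]). A minimal control-flow sketch, with hypothetical names not taken from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DisplayMode:
    """Hypothetical mode descriptor for the two-mode device sketch."""
    name: str
    lc_diffuser_transparent: bool  # diffuser state between lenses and SLM

# Normal mode: the diffuser scatters the directional backlight so the
# SLM image is viewable like an ordinary high resolution mobile display.
NORMAL_MODE = DisplayMode("high resolution mobile display", False)

# Virtual mode: the diffuser is transparent, so the collimated beams pass
# through the SLM and project an enlarged virtual image to the eyes.
VIRTUAL_MODE = DisplayMode("expanded virtual display", True)

def diffuser_state(mode: DisplayMode) -> str:
    """Return the LC diffuser drive state for the requested mode."""
    return "transparent" if mode.lc_diffuser_transparent else "translucent"

print(diffuser_state(NORMAL_MODE))   # translucent
print(diffuser_state(VIRTUAL_MODE))  # transparent
```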
[0167] When compared to, e.g., a foldable display, the mobile LF display may not use as much physical material for creation of the enlarged image. Because the image is projected as a virtual display, the physical area used by the device may be relatively small with a potentially lower cost to manufacture. Many foldable or rollable displays are prone to mechanical failure, unlike a virtual display that may be constructed without any moving mechanical parts.
[0168] The virtual display size and position may be selected based on the intended use case. The device size may be small even if larger virtual images are generated. For some embodiments, if the virtual image is projected to the eyes with adjustable optical focus layers and adjustable image rendering, the virtual image size may be changed during use without changing the hardware.
[0169] For some embodiments, the portable device may use a light field display functioning as a directed backlight. This example optical projection functionality permits a larger display FOV, enlarging the image size and creating a viewing distance that is comfortable to use over extended periods of time, even with eyes that have lost their natural adaptation to close distances due to stiffening of the eye lenses with age. For some embodiments, the display is not attached to the head, and a viewer may use standard prescription glasses for close range viewing distances and for larger viewing distances in the virtual expanded view mode.
[0170] Because the optical structure used for light field creation is embedded into the handheld device behind the display surface, some embodiments do not use additional carry-on devices (like smart glasses) or mechanical holders (like VR headsets that use mobile phones as the display). This allows easy and convenient use of the device in each viewing mode. In virtual expanded view mode, the user is not isolated from the surroundings because, for some embodiments, there are no obstructing structures around the eyes and the display may be moved away from the line of sight.
[0171] Because the light source matrix is continuous, some embodiments do not have strict tolerances between the optical layers. The switchable LC diffuser may have looser tolerances for positional accuracy because the switchable LC diffuser is transparent in the virtual image mode and diffusing in the normal display mode. Use of large virtual image projection distances enables relatively loose tolerances for the light collimating layer focal length and distance from the emitters.
[0172] The optical structure may be manufactured at lower cost compared to many other possible LF display structures. Display rendering of the LF image may use two views, one for each eye, of the same flat surface. Unlike many VR headsets that use two separate displays, some embodiments may use an LF display structure because the two separate eye views overlap at the display surface and there is a need to provide adequate eye boxes that allow movement of the user's eyes with respect to the unattached display device. Because the display functionality may be restricted to two switchable viewing modes, some embodiments of the LF display optical structure may be optimized for showing the virtual expanded flat display view, whereas many other LF displays seek to create 3D images with depth. This difference may enable more robust optical structures that are easier to calibrate.
[0173] FIG. 8A is a schematic side view showing an example mobile display operating in a translucent diffuser display mode according to some embodiments. FIG. 8B is a schematic side view showing an example mobile display operating in a transparent diffuser display mode according to some embodiments. An example high resolution display with a modest field of view (FOV A) 808 may be obtained if the display 804 is located at arm's length (VD A) 806 from the viewer 802. A second mode with a virtual display 862 and a large field of view (FOV B) 858 may be created at a virtual viewing distance 860 when the device 854 is located a short physical distance (VD B) 856 from the viewer 852. Adaptive optics and rendering enable a large, in-focus virtual image to be generated if the display is held close to the viewer (VD B) 856. The resulting virtual display 862 generated for a single focal distance may not support image depth, but the virtual display may be flat or curved.

[0174] For some embodiments, an expanded virtual display image is created with a small mobile display that is held close to the viewer's eyes. The device (which may be an LF display device) may be switched between two modes: (1) a high resolution mobile display that is viewed at an intermediate distance and (2) a virtual expanded (or extended) display that is viewed at a longer distance. FIGs. 8A and 8B illustrate these two modes that may be generated with a single device by switching the device between the two operational modes. The first mode may be used, e.g., for normal mobile phone display tasks such as messaging or browsing social media, whereas the second mode may be activated for applications that may benefit from a larger display size, like watching movies or gaming. For some embodiments, a small, portable device may be used to generate a large display.
[0175] FIG. 8A presents the viewing geometry during normal use of the display when the device is held at an intermediate distance, around 25 - 50 cm from the viewer's eyes. In this first operational mode, the handheld device may be used as a standard small-scale high-resolution display. A viewer's eyes may accommodate to the first viewing distance VD A because the image is located on the display surface, and the distance is adequate for comfortable viewing with hyperopic eyes.
[0176] FIG. 8B presents the viewing geometry in the second operational mode, in which the device is brought closer to the eyes at a second viewing distance VD B, which may be around, e.g., 10 - 20 cm. In this mode, the field of view is larger (FOV B > FOV A) because the display is closer to the eyes than in the first operational mode. An optical structure embedded in the mobile display may be used to project a virtual image to the viewer's eyes. This image appears to be behind the display at a virtual viewing distance, which may be, e.g., between 50 - 150 cm. Because the optical structure is able to create an image that is clearly visible only when the eyes are focused at the designated virtual viewing distance and not at the nearby real display, a comfortable and immersive viewing experience may be created with the longer virtual viewing distance and expanded FOV.
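The FOV relationship described above follows from simple viewing geometry. A minimal sketch (the display width and distances below are illustrative, not values from this disclosure):

```python
import math

def fov_deg(display_width_m: float, viewing_distance_m: float) -> float:
    """Full horizontal field of view of a flat display seen from a distance."""
    return math.degrees(2 * math.atan(display_width_m / (2 * viewing_distance_m)))

# Hypothetical 0.07 m wide display held at the two example distances
fov_a = fov_deg(0.07, 0.40)  # first mode, intermediate distance (~40 cm)
fov_b = fov_deg(0.07, 0.15)  # second mode, close distance (~15 cm)
```

Bringing the same panel from 40 cm to 15 cm roughly increases the FOV from about 10° to about 26°, which is why the second mode can feel like a much larger display.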
[0177] For some embodiments, a method of operating the multiview display may include determining a distance between the viewer and the multiview display and switching between a translucent diffuser display mode (such as the display mode of FIG. 8A) and a transparent diffuser display mode (such as the display mode of FIG. 8B) based on the distance between the viewer and the multiview display. For example, the display may switch to the translucent diffuser display mode if the distance is above a threshold and to the transparent diffuser display mode if the distance is below the threshold.
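The distance-based switching logic may be sketched as follows (the 0.25 m threshold is an assumed value for illustration only):

```python
def select_diffuser_state(distance_m: float, threshold_m: float = 0.25) -> str:
    """Choose the LC diffuser state from the measured viewer distance.

    Above the threshold: translucent (diffusing) state for the normal 2D
    display mode; below it: transparent state for the virtual display mode.
    """
    return "translucent" if distance_m >= threshold_m else "transparent"
```

A practical implementation would likely add hysteresis around the threshold so the display does not flicker between modes when the viewer hovers near the switching distance.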
[0178] For some embodiments, operating in the translucent diffuser display mode may include controlling an LC diffuser to cause light diffusion and operating in the transparent diffuser display mode may include controlling the LC diffuser to prevent light diffusion. For example, the LC diffuser may be a polymer dispersed liquid crystal (PDLC) diffuser. For some embodiments, operating in the translucent diffuser display mode may include turning off the voltage to the PDLC diffuser to cause light diffusion and operating in the transparent diffuser display mode may include turning on the voltage to the PDLC diffuser to make the PDLC diffuser clear and prevent light diffusion. For some embodiments, the LC diffuser may be active or inactive, respectively, so as to cause light diffusion in the translucent diffuser display mode and to prevent light diffusion in the transparent diffuser display mode.
[0179] For some embodiments, the method of operating the multiview display may include transitioning between operating in the first display mode and operating in the second display mode based on viewing distance, such as operating in the first display mode for viewing distances greater than a threshold and operating in the second display mode for viewing distances less than a threshold.
[0180] For some embodiments, transitioning between operating in the first mode and operating in the second mode causes the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold.
[0181] For some embodiments, a method of displaying images with a multiview display may include switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state. For some embodiments, three-dimensional, virtual images are displayed in the first display mode in which the switchable diffuser is in a transparent state, and two-dimensional images are displayed in the second display mode in which the switchable diffuser is in a translucent state. For some embodiments, two-dimensional, virtual images are displayed in the first display mode in which the switchable diffuser is in a transparent state. As discussed in a Wikipedia webpage (Transparency and Translucency, WIKIPEDIA (Sept. 25, 2019), en.wikipedia[dot]org/wiki/Transparency_and_translucency (last accessed Oct. 11, 2019)), the term “translucent” refers to light that is transmitted but also diffused. If a display diffuser component is switched from a transparent (optically clear) state to a translucent (diffused) state, the angular distribution of light is mixed (scattered/diffused) and directionality is lost. A 3D stereoscopic image may not be formed when light lacks a clear direction that limits visibility of a display pixel to a single eye at a time. For some embodiments, if the diffuser is in a transparent state, a 2D or 3D virtual image is created, whereas, if the diffuser is in a translucent/diffusing state, a 2D image without stereoscopic effects is created on the physical surface of the display.
[0182] FIG. 9 is a schematic plan view illustrating an example optical display apparatus according to some embodiments. In one mode, the LC diffuser 918 spreads the light to create an omni-directional backlight, and the SLM 920 operates as, e.g., a traditional display to give high resolution. In virtual viewing mode, the LC diffuser 918 is transparent, allowing a directional lighting source to be created from the light emitting layer and the micro lens array (MLA) 914.

[0183] A light field display functioning with a directed backlight may be used for virtual image projection for some embodiments. FIG. 9 shows a schematic image of an example LF display optical structure and functionality utilizing the presented method. Light is emitted from a layer 912 with separately controllable small emitters, e.g., µLEDs (such as source 1 (908) and source 2 (910)). A lens structure that may be, e.g., an embossed polycarbonate microlens sheet 914, is placed in front of the emitters, and the lens structure collimates light into a set of beams that illuminate the back of a spatial light modulator (SLM) 920 that may be, e.g., a high-density LCD panel. The emitters and collimator lenses form a series of projector cells that are able to create highly directional and controllable back illumination for the SLM 920. An electrically switchable liquid crystal (LC) diffuser placed between the collimating lenses and the SLM is used in switching between the two operational modes. The structure may contain a polarizer sheet 916 between the microlenses 914 and LC diffuser 918 as well as in front of the SLM 920 (e.g., closer to the viewer 930) if, e.g., linearly polarized light is used for the operation of the switching component and/or SLM 920. For some embodiments, the light emitting layer 912 and MLA 914 may be identified as a backlight module 904, and the polarizer(s) 916, 922, the LC diffuser 918, and the SLM 920 may be identified as an image modulator 906.
[0184] For some embodiments, the switchable diffuser is a liquid crystal (LC) diffuser. For some embodiments, the multiview display is a directed backlight display. For some embodiments, converging lenses (such as the MLA 914 of FIG. 9) may be configured to generate virtual images of the light-emitting elements at a predetermined distance behind the display device, such as the example virtual display pixel shown in FIG. 9. For some embodiments, the multiview display may include a spatial light modulator (SLM) layer 920 that is external to the optical layer (such as the MLA 914 and/or the polarizer(s) 916, 922 of FIG. 9). For some embodiments, the switchable diffuser layer (such as the LC diffuser 918 of FIG. 9) is between the optical layer (such as the MLA 914 and/or the polarizer 916 of FIG. 9) and the spatial light modulator layer 920. For some embodiments, the spatial light modulator layer 920 is between the switchable diffuser layer (such as the LC diffuser 918 of FIG. 9) and the optical layer (such as a polarizer 916). For some embodiments, a display device may include optics (such as the MLA 914 and/or the polarizer(s) 916, 922 of FIG. 9) for generating a virtual display (which may include a plurality of virtual display pixels 902) at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer (such as the LC diffuser of FIG. 9) is switchable between a transparent state and a translucent state. For some embodiments, the switchable diffuser layer is a liquid crystal diffuser layer. For some embodiments, a multiview display apparatus may include a light-emitting layer 912 comprising one or more light emitting elements 908, 910; a liquid crystal (LC) diffuser layer 918; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM) 920.
For some embodiments, the one or more optical layers of a multiview display may include a micro lens array (MLA) 914; and one or more polarizers 916, 922. For some embodiments, a method of operating a multiview display may include operating in a first display mode and operating in a second display mode such that each display mode includes controlling a liquid crystal (LC) diffuser 918 positioned between a light emitting layer 912 and a spatial light modulator 920 of the multiview display device.
[0185] When the LC diffuser 918 is driven to the transparent state, each source on the light emitting layer creates a separate and well collimated illumination beam with the help of the MLA 914. Beam direction is dependent on the spatial location of the source with respect to the projector cell optical axis determined by the collimating lens rotational axis. For some embodiments, the collimator lens is designed in such a way that the beam is virtually focused to the predetermined virtual display distance behind the display structure. Such a beam focus enables correct retinal focus cues to be created for each eye when the beams are projected into the viewer's eye pupils. For some embodiments, an SLM 920 is used to modulate the beam luminosity and color to the right value based on the image content. A virtual display pixel 902 is formed when two such beams with correct propagation angles enter the two eyes and the angular difference between the beams initiates the correct eye convergence 928 for the image content. If both retinal focus 924, 926 and eye convergence cues are coherent, the virtual display pixel 902 may be seen at the correct distance without any vergence-accommodation conflict (VAC).
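The dependence of beam direction on source position can be sketched with a paraxial thin-lens model; the focal length and offset values below are hypothetical:

```python
import math

def beam_tilt_deg(source_offset_m: float, focal_length_m: float) -> float:
    """Chief-ray tilt of the collimated beam for a source displaced
    laterally from the collimator lens axis (paraxial approximation)."""
    return math.degrees(math.atan2(source_offset_m, focal_length_m))

# A 50 µm offset under a 1 mm focal-length lenslet steers the beam by ~2.9 degrees
tilt = beam_tilt_deg(50e-6, 1e-3)
```

In this model, selecting which emitter under a given lenslet is lit selects the beam direction, which is how a single projector cell addresses the two eyes separately.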
[0186] If the display directional backlight structure is designed in such a way that the individual illumination beams are wider on the display surface than at the virtual display distance, the real display itself may be visible as a blurred illuminated surface that does not show misleading focus cues. For some embodiments, such a design may be created by making the collimating lens aperture sizes larger than the virtual projected images of the individual sources. Large aperture designs may be used because the beams are created in a backlight module 904, and the SLM 920 may have much smaller pixels than the collimating lenses. Diffraction effects may be mitigated by using larger apertures. For some embodiments, mosaic lenses may be used to increase spatial resolution of images via interlaced apertures or diffraction blur removal, some examples of which are discussed in PCT Patent Application No. PCT/US19/47761, entitled "Optical Method and System for Light Field Displays Based on Mosaic Periodic Layer," filed on Aug. 22, 2019. The fact that eyes often cannot easily adapt to very short viewing distances may be used by bringing the display very close to the viewer's face when the virtual display mode is used. This means that, e.g., aging people with hyperopic eyes and limited eye focus range may adapt more naturally to the distant virtual image than younger people who have more flexible eye lenses.
[0187] When an LC diffuser is made to scatter light, the highly directional illumination created with the sources and collimating lenses is diffused into an even backlight that is used in the first operational mode. In this mode, e.g., a Lambertian illumination distribution may be created with fewer active emitters behind the SLM because the directionality from the beams is lost. The LC diffuser used for display mode switching may be, e.g., a currently available component based on volume scattering. For some embodiments, an LC diffuser may use surface scattering. If a surface scattering effect is used with directional backlighting, light diffusion may be controlled to enable a private mode with limited angular visibility of the image or to save power with active adjustment of the FOV through a combination of illumination component selection and control over the light diffusion level.
[0188] Color filters commonly utilized in LCDs may be used for generating a full-color projected virtual image from white backlight. Such lighting may be created with the help of, e.g., blue or UV LEDs coated with a thin phosphor layer that transforms the narrow single color emission spectrum into a wider white light emission spectrum. Some example potential benefits of using white backlight are that single color components may be used for the light emitting layer and the manufacturing of the backplane becomes simpler and lower cost. Other example potential benefits may include the ability to use readily available three-color high-resolution LCD panels for the SLM component as well as the ability to use commonly used structural designs and manufacturing processes of current mobile phones. Single-color LED and phosphor light sources tend to have relatively low illumination efficiency because a large part of the generated white light is absorbed by the display panel color filters and polarizers.
[0189] For some embodiments, LED sizes and bonding accuracy enable three-color pixels to be under 10 µm in size. For some embodiments, full-color virtual images may be generated with separate red, green, and blue emitter components, in which case color filters in the SLM structure are not used. For some embodiments, single color emitters may be coated, e.g., with quantum dot materials that transform the single color emissions to the three primary colors. Both when using separate red, green, and blue emitter components and when using coated single color emitters, color separation in directional illumination beams may occur due to the different spatial positions of the light emitters. This color separation affects only the virtual display mode. When the display is used in the standard small display operational mode, the switchable diffuser is turned on and angular distributions of different color beams are naturally mixed in the backlight unit to form a uniform back illumination. The display colors are produced with temporal color rendering by showing the red, green, and blue images successively and synchronously with the SLM panel.
[0190] For some embodiments, other light sources (e.g., OLEDs) may be miniaturized and used in the light emitting layer in addition to LEDs. However, LEDs may be very small with high brightness. Larger light sources may require larger microlenses and longer focal lengths to achieve the same level of beam collimation. Longer focal lengths may mean thicker display structures. A relatively large source size may mean large spatial pitch values for the illumination beams and large angular spacing when the beams are collimated. Such effects may decrease the virtual display resolution because high spatial resolution on the virtual display corresponds to high angular resolution of the real LF display. Laser diodes or RC LEDs may be used for the light sources, for some embodiments, because many laser diodes and RC LEDs have similar (or better) optoelectronic characteristics as µLEDs.
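The collimation penalty of a larger source can be estimated from the geometric divergence, roughly the source size divided by the collimator focal length; the values below are illustrative, not from this disclosure:

```python
import math

def divergence_deg(source_size_um: float, focal_length_mm: float) -> float:
    """Geometric full divergence of a collimated beam from an extended source."""
    return math.degrees((source_size_um * 1e-6) / (focal_length_mm * 1e-3))

small = divergence_deg(5, 1.0)    # e.g., a 5 µm µLED behind a 1 mm lens
large = divergence_deg(50, 1.0)   # a 50 µm source: 10x the angular spread
```

To recover the smaller divergence with the larger source, the focal length would have to grow by the same factor of ten, which is the thickness penalty noted above.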
[0191] For some embodiments, a backlight module may provide controllable directional illumination for the display SLM, which provides image modulation. Because the two modules have separate optical functions, the two modules may be optimized separately for different use cases and better manufacturability. For example, the size of the backlight projector cells may be much larger than the size of the SLM pixels, making them easier to manufacture. Spatial and angular distributions of light for the SLM surface may be very homogeneous in the small display operational mode. The switchable diffuser may be used for this purpose because the diffuser naturally mixes the angular distributions into a diffuse backlight. In the normal operational mode without virtual image projection, the SLM may function with full native resolution and, e.g., a 500 ppi (pixels-per-inch) panel may be used to create images that do not have visible pixel structure at a designated intermediate viewing distance. Resolution of the projected virtual image may be dependent on the source size, optical layer properties, and viewing geometry. Smaller retinal images of sources may be projected with displays that are closer to the eyes because geometric magnification is reduced with the viewing distance. This allows reasonable resolution for the projected virtual display because retinal spot sizes determine the perceptible virtual pixel sizes.
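The claim that a 500 ppi panel shows no visible pixel structure at an intermediate distance can be checked against the commonly cited ~1 arcmin acuity limit of the eye (a rough sketch; the acuity figure is a general assumption, not from this disclosure):

```python
import math

def pixel_subtense_arcmin(ppi: float, viewing_distance_m: float) -> float:
    """Angular size (arcmin) of one pixel of a ppi-rated panel, small-angle approx."""
    pitch_m = 25.4e-3 / ppi  # pixel pitch in meters (25.4 mm per inch)
    return math.degrees(pitch_m / viewing_distance_m) * 60

# Across the 25 - 50 cm intermediate range, 500 ppi pixels stay below ~1 arcmin
sub_25cm = pixel_subtense_arcmin(500, 0.25)
sub_40cm = pixel_subtense_arcmin(500, 0.40)
```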
[0192] In many cases, appropriate care should be taken in matching the backlight and SLM modules for producing color. As discussed above, different methods may be used to produce color images with a three-color backlight or with a module that provides a wider white light spectrum.
[0193] For example, color filters transmit light differently if the light is coming from different angles. Filters based on material absorbance do this due to the different attenuation lengths connected to layer thickness and geometry, whereas filters based on dielectric layers generate different transmission properties due to light interference differences connected to incidence angles. Both of these filter types may be optimized for specific angular ranges and wavelengths, but this optimization should generally be done when designing the whole display system. In the case of white light illumination, the different image beam directions are created by shining the LCD color filters from different directions, and the absorption lengths in the color filter material layers become different. This effect may cause somewhat different colors to appear in beams propagating to different directions and may require special color calibration with LCD pixel transmissions. The phosphor material applied on top of the LEDs may be fine-tuned to compensate for this effect. Because the LEDs emitting light in different directions are located at different spatial positions under the collimating lenses, phosphor materials with slightly different color characteristics may be applied selectively.
[0194] FIG. 10A is a schematic plan view illustrating an example mobile display in virtual display mode according to some embodiments. FIG. 10B is a schematic side view illustrating an example mobile display in virtual display mode according to some embodiments. FIGs. 10A and 10B show the viewing geometry of a virtual display image beam projection. If a virtual display image 1008, 1058 is created, beams exiting the optical structure of the mobile display 1004, 1054 may be collimated to limit the visibility of a single beam to one eye at a time. In accordance with the example, in the horizontal direction, as shown in FIG. 10A, a minimum of two beams are required, one for each eye, to create correct eye convergence. In accordance with the example, in the vertical direction, shown in FIG. 10B, only one beam is required. All of the beams should also have a focus at the virtual display location to create the correct retinal focus cues at the eye box 1006, 1056 of the viewer 1002, 1052. The average interpupillary distance of adults is ~64 mm, which is the upper limit for beam size at the designated viewing distance. Because the real viewing distance is small in this operational mode, this size limit may be achieved if small enough light sources are available. If visibility of a single backlight illumination source is limited to a single eye at a time, the light field effect may be created because unique 2D images are projected to different eyes, and the natural parallax effect is created.
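The ~64 mm interpupillary bound can be checked by estimating the beam footprint at the viewer from the exit aperture plus divergence growth over the viewing distance (all numbers below are illustrative assumptions):

```python
import math

IPD_MM = 64.0  # average adult interpupillary distance, upper bound for beam size

def beam_width_mm(aperture_mm: float, divergence_deg: float, distance_mm: float) -> float:
    """Beam footprint at the viewer: exit aperture plus divergence growth."""
    return aperture_mm + 2 * distance_mm * math.tan(math.radians(divergence_deg) / 2)

# A 1 mm aperture with 1 degree full divergence, viewed at 150 mm
width = beam_width_mm(1.0, 1.0, 150.0)  # well below the 64 mm IPD bound
```

The margin between the computed footprint and the IPD is what allows a single beam to stay confined to one eye at the short viewing distances of this mode.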
[0195] FIG. 11 is a schematic front view illustrating an example display surface divided into monocular and binocular regions according to some embodiments. As the device is brought close to the eyes in the LF image mode, the display surface regions used for virtual image projection to the left and right eyes become somewhat separated in the horizontal direction. The horizontal distance between the two eyes means, e.g., that there are two regions (one region on each side of the display) which are visible only to one eye at a time. These monocular regions surround a central binocular region which is able to project images to both eyes at the same time, as shown in FIG. 11. Unlike in the case of VR headsets, the display areas used for left and right eye projection may not be totally separated from each other, due to the fact that the display is located at a relatively close viewing distance, e.g., 10 - 20 cm. If the areas are separated artificially, e.g., by forming a baffle between the two display halves, the total FOV of the display would become limited. If the display areas for the left and right eye overlap at the center, the FOV may be increased considerably. This increase may cause a need to perform optical image multiplexing for the binocular region, but the LF display optical structure may perform such multiplexing by shifting the virtual image focus to a further distance.
[0196] Dimensions of the different display regions visualized in FIG. 11 may depend on geometrical factors connected to the individual user interpupillary distance, physical display size, projected virtual display size, and virtual image mode viewing distance. The central binocular region 1104 offers the most natural viewing condition and may be used for the main image content because the central binocular region is visible to both eyes at the same time at the correct viewing distance. The left monocular region 1102 is the display area visible only to the left eye 1108, and the right monocular region 1106 is the display area visible only to the right eye 1110. The monocular regions 1102, 1106 on both sides may be left dark for the opposite eye to reduce potentially distracting visual content, but the monocular regions 1102, 1106 may be used for extending the horizontal FOV further and for showing secondary visual elements. Examples of such cases are, e.g., virtual control buttons for video playback or directional controls for a video game. These elements may be switched between hidden and visible states with, e.g., voice or gesture control without disturbing visibility of the central main image content.
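The monocular/binocular split can be sketched with similar triangles by projecting the virtual display through each eye onto the display plane; all dimensions below are hypothetical:

```python
def eye_region_mm(eye_x: float, display_dist: float, virt_depth: float, virt_width: float):
    """Interval on the display plane through which one eye sees the virtual display.

    eye_x: lateral eye position (mm); display_dist: eye-to-display distance (mm);
    virt_depth: further distance from display to virtual image plane (mm);
    virt_width: virtual display width (mm).
    """
    t = display_dist / (display_dist + virt_depth)
    lo = eye_x + (-virt_width / 2 - eye_x) * t
    hi = eye_x + (virt_width / 2 - eye_x) * t
    return lo, hi

# Eyes at +/-32 mm, display at 150 mm, 600 mm wide virtual image 850 mm behind it
left = eye_region_mm(-32, 150, 850, 600)
right = eye_region_mm(32, 150, 850, 600)
binocular = (max(left[0], right[0]), min(left[1], right[1]))  # seen by both eyes
```

The display strip covered only by the left eye's interval is the left monocular region, and symmetrically for the right, matching the layout of FIG. 11.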
[0197] To make a full display, the small backlight projector cells may be copied over a finite display area. Because a single cell generates a limited set of beams that have very limited angular extent, the cell has a limited Total Divergence Angle (TDA). This parameter measures the total FOV of one projector cell. The TDAs of the projector cells located at the edges of the display may be designed to have a larger overlap region. Without overlapping areas, the edges of the projected image may not be visible simultaneously to both eyes, making the virtual display image incomplete.
[0198] For some embodiments, the multiview display may be a stereoscopic display, such as the example of FIG. 11. For some embodiments, a multiview display apparatus may be operated in a virtual display mode to project display elements behind the multiview display apparatus. For such a display mode, touchable user interface elements may be displayed in at least one monocular display region of the display. For example, touchable user interface elements may be displayed in the left and/or right monocular display regions or the central binocular display region of FIG. 11.
[0199] FIG. 12A is a schematic plan view illustrating an example viewing geometry for a display with parallel emission direction angles according to some embodiments. FIG. 12A shows the horizontal viewing geometry of a display 1202 in a case such that wide overlapping TDAs 1204 form a viewing window 1210 around the facial area of the viewer 1208. If a continuous array of small sources is used behind the projector lens array, the TDA 1204 may be very wide, and source location selection alone may be enough for overlapping the image beams from different parts of the display. For the example shown in FIG. 12A, the Emission Direction Angles (EDA) 1206 are shown without a tilt.
[0200] FIG. 12B is a schematic plan view illustrating an example viewing geometry for a display with converging emission direction angles according to some embodiments. For some embodiments, the Emission Direction Angles (EDAs) 1256 of the MDP 1252 located at the display edges may be tilted towards the display center line. As illustrated in FIG. 12B, angular resolution may be increased if the TDAs 1254 that form a viewing window 1260 around the facial area of the viewer 1258 are narrower. For some embodiments, this increase may be achieved by shifting the nominal positions of light sources slightly inside the MDP 1252 and by increasing the amount of shift at the display edges. For some embodiments, an extra optical element (e.g., a combination of Fresnel lens and protective window) may be placed on top of the SLM to implement such optical tilting. For some embodiments, the overlap issue may be addressed by making the whole display surface with a specific curvature for a predetermined viewing distance.
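The source-shift method for tilting edge-cell emission toward a centered viewer can be sketched paraxially (shift ≈ f · tan(tilt)); the focal length and geometry below are assumed values for illustration:

```python
import math

def source_shift_um(cell_x_mm: float, viewer_dist_mm: float, focal_mm: float) -> float:
    """Lateral source shift (µm) that tilts a projector cell's emission toward
    a viewer on the display center line: shift = f * tan(atan(x / d))."""
    tilt = math.atan2(cell_x_mm, viewer_dist_mm)
    return focal_mm * math.tan(tilt) * 1e3

# Cells 30 mm off-axis, viewer at 150 mm, 1 mm focal-length lenslets
edge_shift = source_shift_um(30, 150, 1.0)  # shift grows toward the display edges
```

A central cell needs no shift, and the required shift grows linearly toward the display edges, consistent with increasing the amount of shift at the edges as described above.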
[0201] Light intensity of each SLM pixel and illumination beam may be controllable through a certain dynamic range to make a virtual display with good image quality. With LCDs, the light intensity may be adjusted by controlling the amount of light passing through each pixel using two polarizers and electrically controllable liquid crystal material that twists the polarization state of the passing light. For some embodiments, a combination of backlight intensity adjustment and LCD pixel absorbance may be used, e.g., to achieve a picture with a higher contrast ratio. With some LED TV products, this method is called “local dimming.” For some embodiments of the backlight structure, the electric current flowing through each light emitting component may be adjusted continuously. For some embodiments, the component brightness may be adjusted by pulse width modulation (PWM). LEDs in general are components that may be switched extremely fast with adequate dynamic range to generate a flicker-free image. For some embodiments, the backlight module single beam size (microlens aperture size) may be fitted to the size of the LCD pixel. In this case, a pixel-level intensity adjustment may be made with the combination of the backlight module and the LCD. This method may be used for larger dynamic range image pixels and may enable faster display panel switching speeds because the intensity adjustments may be partially handled by the backlight module.
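The combined backlight/LCD intensity control can be sketched as the product of two normalized stages (a simplified model of the “local dimming” idea; the 8-bit step values are illustrative):

```python
def pixel_output(backlight: float, lcd_transmission: float) -> float:
    """Relative pixel luminance: backlight level (e.g., PWM duty cycle)
    multiplied by LCD pixel transmission, both normalized to [0, 1]."""
    return backlight * lcd_transmission

# Two 8-bit stages in series reach far deeper black levels than either alone
darkest_combined = pixel_output(1 / 255, 1 / 255)
darkest_lcd_only = pixel_output(1.0, 1 / 255)
```

Because the two stages multiply, part of the intensity ramp can be moved into the fast-switching backlight, which is why panel switching speed requirements may be relaxed.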
[0202] Quality of the virtual display image may be improved by increasing the number of directional illumination beams projected to each eye pupil. The short projection distance from the mobile display to the eye pupils may ease achievement of the SMV (super multiview) condition and generate better retinal focus cues for a natural viewing experience. Even if the individual backlight projection cell size is increased to, e.g., 1 mm diameter, a total of four parallel beams may be created with successive projector cells on the display because the average eye pupil size is estimated to be ~4 mm in diameter in normal ambient lighting conditions. The close range of the display and relatively large pupil size also loosen the tolerance for beam spatial location on the display surface and increase the angular resolution. For some embodiments, this trade-off may be made by increasing the projector cell aperture size. Also, relatively large aperture sizes decrease diffraction blur.
[0203] For some embodiments, width of the eye box also may be increased by using more than one beam emitted from neighboring projector cells for the creation of a virtual display pixel image directed to an eye. Size of the eye box is determined by the projection geometry, optical design, and achievable angular resolution of the successive beams. The eye box size may depend on the use case and the viewing geometry. For example, a 10 mm x 10 mm area may be adequate if the display is fixed to a relatively stable location in relation to the viewer eyes, or if eye tracking and embedded motion sensors are used for guiding the image projection. A larger eye box provides more room for user eye movements, like saccades (rapid eye movement between fixation points), and sway of the head by creating a tolerance area around the eye pupils. The virtual display may be fully visible if the pupils are inside this area, and the virtual image appears to be fixed to the device. For some embodiments, the virtual display image may appear to “float” with respect to the physical device by shifting pixel emission points with the tracked eye movement on the actual display surface, which may create an image stabilization effect. Because the virtual display shows only 2D images for some embodiments, all of the virtual pixels may be located at the same surface determined by the image forming beam focus distance. This eases production of the almost parallel beams that are used for overlapping virtual pixel images projected from neighboring projector cells and from different sources. The virtual display surface may be either flat or curved, depending on the design of the optical layers embedded to the mobile display structure and shape of the display itself.
[0204] The switching speed of the SLM may be a factor limiting the number of beams that may be projected to the eye box in such an LF display. Many LCD panels are relatively slow components for this purpose because the displays have refresh rates of ~240 Hz. Generating a flicker-free image allows creation of 4 unique beams from each projector cell at a time because the commonly-accepted threshold value for the human eye is 60 Hz. If, e.g., eye tracking is used, this refresh rate may be adequate because the minimum is two beams for one virtual image pixel, and eye tracking may be used to determine the exact location of the viewer eye pupils. The LCD pixels may be grouped to cover, as adaptive masks, only a limited set of the successive projector lens apertures. In this case, the LCD adaptive masks produce only those images that are in the focus region for the two eye directions, and the masks may be interlaced and swept over the display surface with the maximum refresh rate. If the real display viewing distance in the virtual image mode becomes shorter, the viewing geometry may be similar to VR glasses. With very short viewing distances, the two halves of the display may be dedicated to each eye separately.
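The relationship between panel refresh rate and sequential beam count per flicker-free frame follows by integer division (the function name is illustrative):

```python
def beams_per_cell(panel_refresh_hz, flicker_free_hz=60):
    """Number of unique sequential beams each projector cell can show
    within one flicker-free image period (integer division)."""
    return panel_refresh_hz // flicker_free_hz

# A ~240 Hz LCD panel and a 60 Hz flicker-free threshold give 4 unique
# beams per projector cell per image frame, as stated above.
```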
[0205] Only two 2D images (one for each eye) may be created for virtual projected images, and the image rendering calculations may be faster and less complex to perform. Because retinal focus cues may be created with a single beam focus and the effect improves with multiple retinal beams that have the same focus points, calculation of multiple views of the same virtual display may be avoided. For some embodiments, each projector cell has a group of light emitting components below the projector cell that is focused on a specific view direction. The emitter matrix may be activated and synchronized to SLM masks that selectively pass or block the specific set of beams for the formation of directional virtual 2D display views. The images projected to different eyes may be created sequentially, and the masks applied in an interlaced manner may handle backlight projector cell grouping. This multiplexing scheme may be based on balancing temporal and spatial multiplexing, which makes the rendering somewhat more complex compared to a 2D display with a single image.

[0206] Many displays use SLM progressive scanning for rendering images. In this method, each display line is drawn by sequentially activating the pixels in a row one after another. For some embodiments, light emitters on the backplane are selectively activated corresponding to the virtual display pixel location and beam direction and by “scanning” these emitters along with the SLM pixels. This means, e.g., that all light beams emitted by a single projector cell go through the same SLM pixels at almost the same time within the refresh frequency of the SLM. Because the individual beams going to different directions have different light intensities, the separate beams are modulated. For some embodiments, the emission of a single light emitter is modulated with current or with pulse width deviation (PWD).
The control signals on the light emitter backplane may have individual values for each emitter component, and the SLM pixel may be used as an on-off switch. For some embodiments, exact timing between each emitter activation may be used. The different source components are activated at slightly different times and because the SLM goes through one refresh cycle allowing light to pass the pixels at different intensities with relative values from 0 to 1, the intensity modulation of the different beams depends on the exact time of light emission. For example, LCD panel pixels have specific response time curves such that the timing of LED emissions may be fitted according to the image content.
[0207] For some embodiments, an optical method of creating an expanded virtual display image to a further distance from the viewer may be used with many different light field display configurations. In addition to the optical structure with sources and collimating microlenses, in some embodiments, a backlight module may be constructed with, e.g., micromirrors, some examples of which are described in PCT Patent Application No. PCT/US19/47313, entitled, “3D Display Directional Backlight Based on Micromirrors,” filed on Aug. 21, 2018, published as International Publication No. WO2019/040484. In some embodiments, the number and angular density of directional illumination beams may be increased with a diffractive backlight optical structure, some examples of which are described in PCT Patent Application No. PCT/US19/31332, entitled “3D Display Directional Backlight Based on Diffractive Elements,” filed on May 8, 2019.
[0208] To increase spatial resolution of the projected display, in some embodiments, a mosaic lens may be used in a backlight module, some examples of which are described in PCT Patent Application No. PCT/US19/47761, entitled, “Optical Method and System for Light Field Displays Based on Mosaic Periodic Layer,” filed on Aug. 22, 2019. Such techniques may be used in accordance with some embodiments, e.g., if the virtual image is projected relatively close to the actual display and aperture interlacing is used for adequately small beam separation on the display surface.
[0209] In some embodiments, a backlight module also may have adaptive collimating lenses or lenses with multiple focus depths to create different sized displays at different depths. This property may also be used in making the display adapt to different users who may require prescription lenses. For some embodiments, an adaptive mosaic lens optical structure may be used. For some embodiments, a multifocal optical structure may be used, some examples of which are described in PCT Patent Application No. PCT/US19/18018, entitled “Multifocal Optics for Light Field Displays,” filed on Feb. 14, 2019, published as International Publication No. WO2019/164745.
[0210] The display device may have a camera at the back that is able to track, e.g., user hands behind the physical device inside the virtual display FOV. This tracking may enable interaction with the virtual display image content, and the hands may be shown as augmented reality pointers or controllable virtual extensions that, e.g., may control the display image properties, pause and start a video feed, or control a game. A camera may be used to mix in other real image content to digital content, enabling an augmented or mixed reality (AR/MR) experience. For some embodiments, the surroundings may be tracked with a camera or sensor. Because the display device may cover a large FOV around the viewer’s eyes (which may be somewhat similar to VR goggles), the camera and associated software may keep track and notify the user if there are objects in the surrounding environment worth noting, like, e.g., an approaching vehicle.
[0211] FIG. 13A is a schematic plan view illustrating an example flat mobile display in a translucent diffuser display mode according to some embodiments. FIG. 13B is a schematic plan view illustrating an example curved mobile display in a transparent diffuser display mode according to some embodiments. FIG. 13C is a schematic side view illustrating an example flat mobile display in a translucent diffuser display mode according to some embodiments. FIG. 13D is a schematic side view illustrating an example curved mobile display in a transparent diffuser display mode according to some embodiments.
[0212] A flat 8.5” (197 mm x 88 mm) display 1304, 1344 is used in the translucent operational mode with, e.g., a mobile phone display located at ~350 mm viewing distance 1306, 1346 from a single viewer 1302, 1342. In this translucent operational mode (FIGs. 13A and 13C), the display 1304, 1344 covers a 31° horizontal FOV 1308 by 14° vertical FOV 1348. In the transparent operational mode (FIGs. 13B and 13D), the display 1324, 1364 is brought closer to the eyes of the viewer 1322, 1362 to a distance 1328, 1368 of 150 mm, and the device 1324, 1364 is switched to virtual display projection mode by switching the device 1324, 1364 to a concave shape with a predetermined 150 mm radius of curvature 1328, 1368. For some embodiments, this shaping is enabled by the device structure having a flexible LCD panel and the device mechanics being designed with limited movement joints that divide the body into sections. If the second mode is activated, the device optical structure projects images to the viewer’s eyes, creating a 32” virtual display image 1326, 1366 at a 750 mm distance 1330, 1370 from the viewer as shown in FIG. 13D. This virtual image 1326, 1366 covers an enlarged 54° horizontal FOV 1332 by 30° vertical FOV 1372, making the viewing experience more immersive. An eye tracking system embedded in the display device may detect the viewer’s eye locations and be used in controlling the projected image beams to two 10 mm x 10 mm eye box regions created around each eye pupil.
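The stated FOV figures for the translucent 2D mode can be reproduced from the panel dimensions and viewing distance with a small geometric sketch (flat-panel approximation; variable names are illustrative):

```python
import math

def fov_deg(extent_mm, distance_mm):
    """Full field of view (degrees) subtended by a flat extent
    centered at the given viewing distance."""
    return math.degrees(2 * math.atan(extent_mm / (2 * distance_mm)))

# 8.5" panel (197 mm x 88 mm) viewed at 350 mm in the translucent 2D mode
h_2d = fov_deg(197, 350)   # ~31 degrees horizontal
v_2d = fov_deg(88, 350)    # ~14 degrees vertical
```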
[0213] For some embodiments, the translucent diffuser display mode may display images on a display 1304, 1344 that is viewed by a user 1302, 1342 at a distance 1306, 1346 of 350 mm, for example, between the user 1302, 1342 and the images on the display 1304, 1344. For some embodiments, the transparent diffuser display mode may project images behind the display device 1324, 1364 at a distance 1330, 1370 of 750 mm, for example, between the user 1322, 1362 and the images 1326, 1366.
[0214] FIG. 14 is a schematic plan view illustrating an example light field display geometry according to some embodiments. FIG. 14 shows schematically the structure and measurements of example LF display optics. Light is emitted from an example continuous array of µLEDs 1404 with component sizes of 2 µm x 2 µm and a 3 µm pitch. The array is mounted to a backplane 1402 and has, e.g., blue components that are overcoated with phosphor for white light emissions. A collimator lens array 1406 may be located, e.g., 4.3 mm from the µLEDs 1404, and the MLA component 1406 may be made from elastic and optically clear silicone as a hot-embossed 1.0 mm thick microlens sheet. For some embodiments, the first surfaces of the collimator lens (left side of the MLA 1406 as shown in FIG. 14) may have a 15.2 mm radius of curvature, and the second surfaces (right side of the MLA 1406 as shown in FIG. 14) may have a 3.1 mm radius of curvature and a conic constant of -0.8. Aperture sizes of the collimating lenses may be 1.0 mm. An LCD panel 1412 may be laminated together with two polarizers 1408, 1414 and an electrically switchable LC diffuser 1410, forming a 1.15 mm thick stack that is placed in front of the directional back illumination structure. The LCD panel 1412 may have 50 µm sized pixels that contain red, green, and blue color filters. This means, e.g., that the color and luminosity of each beam emitted from the structure in the virtual display mode is modulated with an array of ~20 x 20 LCD pixels. If the device is used in normal operational mode, the LC diffuser mixes the backlight distribution, and the panel is used with its native resolution, e.g., 3940 x 1760 pixels. This 4K display panel has a density of ~500 ppi, which means, e.g., that individual pixels are not visible to the naked eye at the nominal 350 mm viewing distance. The example structure of FIG. 14 shows a beam pitch 1418 of ~1 mm and an angular range 1416 of ~28°.
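The ~20 x 20 LCD-pixel block per beam and the ~500 ppi density follow directly from the stated dimensions; a minimal arithmetic check (variable names are illustrative):

```python
# One 1.0 mm collimating lens aperture spans 1000 µm / 50 µm = 20 LCD pixels,
# so each beam is modulated by a ~20 x 20 pixel block.
aperture_um = 1000           # collimating lens aperture, in microns
lcd_pixel_um = 50            # LCD pixel size, in microns
pixels_per_aperture = aperture_um // lcd_pixel_um

# Native horizontal resolution across the 197 mm panel width gives the
# pixel density in pixels per inch (1 inch = 25.4 mm).
panel_px_h = 3940
panel_width_mm = 197
ppi = panel_px_h / (panel_width_mm / 25.4)   # ~508 ppi, i.e. "~500 ppi"
```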
[0215] For some embodiments, a display device may include a multiview display including a switchable diffuser layer; the display device having a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a two- or three-dimensional image; and the display device having a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display a two-dimensional image. For some embodiments, the display device may include a light-emitting layer comprising an addressable array of light- emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state. For some embodiments, the switchable diffuser layer of the display device may include a liquid crystal diffuser layer. For some embodiments, the optical layer of the display device may include a two-dimensional array of substantially collimating lenses. For some embodiments, the optical layer of the display device may include a two-dimensional array of collimating lenses. For some embodiments, the optical layer of the display device may include a two-dimensional array of converging lenses.
[0216] FIG. 15A is a schematic plan view illustrating an example curved display in virtual display mode according to some embodiments. FIG. 15B is a schematic side view illustrating an example curved display in virtual display mode according to some embodiments. FIGs. 15A and 15B show the beam projection geometry of an example curved display 1504, 1554 for generating a curved 32” virtual display 1508, 1558. For some embodiments, if virtual image projection mode is activated by shaping the display 1504, 1554 into a curved surface, the device 1504, 1554 switches the LC diffuser into transparent mode. Two groups of beams per virtual display pixel may be emitted from the display surface towards the two 10 mm high 1556 eye boxes around each eye pupil of the viewer 1552. Together, the two eye boxes form a 74 mm wide area 1506 for the viewer 1502 in the horizontal direction that may be covered by each projection lens. This means, e.g., that each projector cell may produce beams within the ~28° angular range 1416 shown in FIG. 14. A continuous emitter matrix may be generated by activating the emitter corresponding to the next neighboring collimator lens, such as the example shown in FIG. 14. For some embodiments, the minimum pitch between two beams that have the same angular direction and generate overlapping retinal images to the eye is ~1 mm, which means, e.g., that more than one beam may be projected to the eye pupils at a given time, even if the pupil size is at the minimum 2 mm diameter.
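The 74 mm span and the ~28° angular range are mutually consistent at the 150 mm viewing distance; assuming a ~64 mm interpupillary distance plus the two 10 mm eye boxes (the breakdown is an assumption, only the totals appear in the text), a quick check:

```python
import math

eye_box_span_mm = 74.0       # ~64 mm interpupillary distance + 2 x 5 mm margin
viewing_distance_mm = 150.0  # curved-display viewing distance

# Full angle each projection lens must cover to reach both eye boxes
half_angle = math.atan((eye_box_span_mm / 2) / viewing_distance_mm)
angular_range_deg = math.degrees(2 * half_angle)   # ~28 degrees
```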
[0217] FIG. 16A is a schematic front view illustrating an example first projection simulation according to some embodiments. FIG. 16B is a schematic front view illustrating an example second projection simulation according to some embodiments. FIG. 16C is a schematic front view illustrating an example third projection simulation according to some embodiments. FIG. 16D is a schematic front view illustrating an example fourth projection simulation according to some embodiments. FIG. 16E is a schematic front view illustrating an example fifth projection simulation according to some embodiments. FIGs. 16A to 16D show images 1600, 1620, 1640, 1660 of simulated retinal spots at four eye focus distances. FIG. 16E shows an image 1680 of a virtual pixel pair that is used as a size reference for the simulated retinal spots obtained with the example display structure.
[0218] To test the functionality of the optical structure in virtual image projection mode, a set of simulations in accordance with some examples was performed with the optical simulation software OpticsStudio 19. The display optical structure was placed at 150 mm distance from a simplified eye model that was constructed from a 4 mm aperture (pupil) and two ideal paraxial lenses that were used for adjusting the eye focal length (~17 mm) to the appropriate focus distance. Four different eye focus distances were used: 350 mm, 500 mm, 750 mm and 1,250 mm from the eye model to test the beam focusing effect. In addition to these simulations, one simulation (FIG. 16E) was performed with a source pair that was placed at the virtual image distance from a bare eye model that was focused to the same distance. This simulation was used as a size reference for the other retinal images obtained with the combination of display optics and eye model. One pair of light sources was used in the simulations with µLED measurements of a 2 µm square surface and a 3 µm pitch between the two emitters. Only green wavelengths (530 nm - 560 nm) were used to demonstrate creation of single eye retinal focus cues. The simulations were made only with geometric raytracing, and the simulations did not include diffraction effects. This approximation is considered adequate because the projection lens aperture size is relatively large (1 mm), and the diffraction effects are small.
[0219] Simulation results are presented in FIGs. 16A to 16E, which show retinal images on a detector surface whose size is 40 µm x 40 µm in all cases. The first four simulated irradiance distributions from the left show the retinal images obtained with the display optical structure when the eye is focused to the four different depths. These images show that the pair of rectangular sources is visible only when the eye is focused to the designated 750 mm distance of the virtual display. When the eye is focused to a closer distance of 500 mm, the two spots are blurred but still distinguishable from each other. At the two other focus distances of 350 mm and 1,250 mm, the retinal images are single spots that are bigger than the image of the sources at the 750 mm distance. The simulation results show that single eyes may have proper focus cues with the designed display optical structure because the sources have clear virtual images only at the proper virtual viewing distance.
[0220] The simulated image of FIG. 16E shows the retinal image obtained with a pair of square emitters placed at the designed 750 mm virtual image distance. In this case, the emitter squares were 250 µm wide, and the pitch between the two sources was 370 µm. The resulting retinal image has the same pitch between the two source images as obtained with the display optical structure and eye model combination when the eye is focused to the 750 mm distance. This means, e.g., that the display optical structure is able to project images to the eye retina that appear to have a 0.37 mm pitch between pixels at the 750 mm viewing distance. Given the 54° x 30° FOV of the curved virtual display, the virtual image has a 1920 x 1080 (Full HD) matrix of virtual pixels on the 32” diagonal surface for some embodiments.
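The 0.37 mm virtual pixel pitch follows from treating the 54° horizontal FOV as an arc on the curved virtual display surface at the 750 mm radius; a minimal check (variable names are illustrative):

```python
import math

radius_mm = 750.0            # virtual display distance / radius of curvature
h_fov_deg = 54.0             # horizontal FOV of the curved virtual display
h_pixels = 1920              # Full HD horizontal pixel count

# Arc length subtended by the horizontal FOV on the curved surface,
# divided by the pixel count, gives the virtual pixel pitch.
arc_width_mm = radius_mm * math.radians(h_fov_deg)   # ~707 mm
pitch_mm = arc_width_mm / h_pixels                   # ~0.37 mm per pixel
```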
[0221] For some embodiments, the mobile display optical structure has a large-scale optical functionality. The LF display optical structure projects an enlarged view to a viewer’s eyes and creates a virtual display that has a large FOV. For some embodiments, the optical structure may be used in a VR/AR device or in a device that may be transformed from mobile phone use to a head mounted display device. The latter approach has been used in products like Google Cardboard and Samsung Gear VR that have head mounted optics and a separate mobile phone that is used as the VR display. Some embodiments of the multiview display device do not use separate optical pieces to create virtual images, making the device able to be used with and without a head attachment apparatus.
[0222] Goggleless real LF displays with good image quality may use very small light sources to generate very small beam sizes and higher levels of collimation that form virtual image pixels at a particular focal plane. Micro LEDs may be used for such light sources. For some embodiments, two active optoelectronic layers (light emitters and SLM) may be controlled and synchronized with the image rendering system. These layers may add cost to the system. Increased energy may be consumed due to a large portion of the emitted light being blocked by the LCD aperture masks. Because some of the light is blocked, the image may not be as bright, and the total efficiency of the device may be lower. The increased energy consumption and lower efficiency may be more pronounced if the goggleless display is placed further away from the viewer, and these effects may be reduced if the display is brought closer to the viewer’s eyes.
[0223] FIG. 17A is a schematic plan view illustrating an example multiview display device according to some embodiments. For some embodiments, an example multiview display device may be a smartphone 1702, 1752 as shown in FIGs. 17A and 17B. For some embodiments, the multiview display device 1702 may include a front-facing camera 1704 as shown in FIG. 17A. For some embodiments, an image taken by the front-facing camera 1704 may be used in determining the viewing distance between the multiview display device 1702 and an eye of a user.
[0224] FIG. 17B is a schematic plan view illustrating an example multiview display device according to some embodiments. For some embodiments, the multiview display device 1752 may include a rear-facing camera 1754 as shown in FIG. 17B. For some embodiments, operating in the virtual display mode comprises operating a rear-facing camera 1754 of the multiview display device 1752 to measure a viewing distance between a viewer and a physical display of the multiview display device.
[0225] FIG. 18 is a flowchart illustrating a first example process for operating a multiview display according to some embodiments. For some embodiments, an example process of operating a multiview display may include, in a first display mode in which the switchable diffuser is in a transparent state, operating 1802 the multiview display to display a first image. For some embodiments, the example process may further include, in a second display mode in which the switchable diffuser is in a translucent state, operating 1804 the multiview display to display a second image. For some embodiments of the example process, the multiview display may include a switchable diffuser. For some embodiments, the first image may be a three-dimensional image. For some embodiments, the second image may be a two-dimensional image.

[0226] FIG. 19 is a flowchart illustrating a second example process for operating a multiview display according to some embodiments. For some embodiments, an example process may include operating 1902 in a first mode to generate a virtual image in-focus behind the physical location of the multiview display device. For some embodiments, the example process may further include operating 1904 in a second mode to generate an image in-focus at a physical location of a multiview display device. For some embodiments, an apparatus may include a processor and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to perform the example process. The apparatus may include a front-facing and/or rear-facing camera. The apparatus may include one or more positioning sensors, such as, for example, a gyroscope sensor, GPS hardware and software functionality, and imaging sensors, which may be capable of making measurements used in determining distance between the apparatus and other objects.
[0227] The example relative proportions, scale, dimensions (e.g., distances, angles), and values (e.g., distance values and angle values) used in this description, and illustrated in, e.g., FIGs. 6A, 6B, 6C, 7A, 7B, 7C, 7D, 13A, 13B, 13C, 13D, 14, 15A, 15B, 16A, 16B, 16C, 16D, and 16E, are merely for explanatory purposes in accordance with some example embodiments, and a variety of relative proportions, scales, dimensions, and values are possible.
[0228] While the methods and systems in accordance with some embodiments are discussed in the context of virtual reality (VR), some embodiments may be applied to mixed reality (MR) / augmented reality (AR) contexts as well. Also, although the term “head mounted display (HMD)” is used herein in accordance with some embodiments, some embodiments may be applied to a wearable device (which may or may not be attached to the head) capable of, e.g., VR, AR, and/or MR for some embodiments.
[0229] An example method of operating a multiview display, where the multiview display includes a switchable diffuser, in accordance with some embodiments may include: in a first display mode in which the switchable diffuser is in a transparent state, operating the multiview display to display a three-dimensional image; and in a second display mode in which the switchable diffuser is in a translucent state, operating the multiview display to display a two-dimensional image.
[0230] For some embodiments of the example method, the switchable diffuser may be a liquid crystal diffuser.
[0231] For some embodiments of the example method, the multiview display may be a directed backlight display.
[0232] For some embodiments of the example method, the multiview display may be a light field display.

[0233] For some embodiments of the example method, the multiview display may be a stereoscopic display.
[0234] The example method in accordance with some embodiments may further include determining a distance of a viewer from the multiview display, the method further including switching between the first display mode and the second display mode based on the distance.
[0235] An example apparatus in accordance with some embodiments may include: a multiview display including a switchable diffuser layer; the display device having a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a three- dimensional image; and the display device having a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display a two-dimensional image.
[0236] An additional example apparatus in accordance with some embodiments may include: a light- emitting layer comprising an addressable array of light-emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
[0237] For some embodiments of the additional example apparatus, the switchable diffuser layer may be a liquid crystal diffuser layer.
[0238] For some embodiments of the additional example apparatus, the optical layer may include a two- dimensional array of substantially collimating lenses.
[0239] For some embodiments of the additional example apparatus, the optical layer may include a two- dimensional array of collimating lenses.
[0240] For some embodiments of the additional example apparatus, the optical layer may include a two- dimensional array of converging lenses.
[0241] For some embodiments of the additional example apparatus, the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
[0242] The additional example apparatus in accordance with some embodiments may further include a spatial light modulator layer, wherein the spatial light modulator is external to the optical layer.
[0243] For some embodiments of the additional example apparatus, the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.

[0244] For some embodiments of the additional example apparatus, the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
[0245] Another example apparatus in accordance with some embodiments may include: optics for generating a virtual display at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
[0246] For some embodiments of another example apparatus, the switchable diffuser layer may be a liquid crystal diffuser layer.
[0247] Another example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to perform any method described herein.
[0248] A further example method in accordance with some embodiments may include: operating in a first mode to generate a virtual image in-focus behind the physical location of the multiview display device; and operating in a second mode to generate an image in-focus at a physical location of a multiview display device.
[0249] For some embodiments of the further example method, operating in the first mode and operating in the second mode each may include controlling a liquid crystal (LC) diffuser positioned between a light emitting layer and a spatial light modulator of the multiview display device.
[0250] For some embodiments of the further example method, operating in the first mode may include controlling the LC diffuser to prevent light diffusion; and operating in the second mode may include controlling the LC diffuser to cause light diffusion.
[0251] For some embodiments of the further example method, operating in the first mode may include operating a rear-facing camera of the multiview display device to measure a viewing distance between a viewer and a physical display of the multiview display device.
[0252] The further example method in accordance with some embodiments may further include transitioning between operating in the first mode and operating in the second mode based on viewing distance.
[0253] The further example method in accordance with some embodiments may further include determining the viewing distance using an image of a front-facing camera of the multiview display device to determine a distance from the multiview display device to an eye of a user.
[0254] For some embodiments of the further example method, transitioning between operating in the first mode and operating in the second mode may cause the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold.
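The threshold behavior described in this paragraph can be sketched as follows; the 250 mm threshold value and the function name are assumed for illustration and are not specified in the description:

```python
def select_mode(viewing_distance_mm, threshold_mm=250.0):
    """Choose a display mode from a measured viewing distance.

    Per the described behavior: above the threshold the device operates in
    the first mode, below it in the second mode. The 250 mm default
    threshold is an illustrative assumption, not a value from the text.
    """
    return "first" if viewing_distance_mm > threshold_mm else "second"

# The measured distance could come from, e.g., a front-facing camera image,
# as described for some embodiments.
mode_far = select_mode(350.0)    # -> "first"
mode_near = select_mode(150.0)   # -> "second"
```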
[0255] The further example method in accordance with some embodiments may further include displaying touchable user interface elements in at least one monocular display region of a display of the multiview display device if operating in the first mode.
[0256] A further example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to perform any method described herein.
[0257] A further additional example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
[0258] For some embodiments of the further additional example apparatus, the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
[0259] A further additional example method of displaying images with a multiview display in accordance with some embodiments may include switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.
[0260] An example method of operating a multiview display, where the multiview display may include a switchable diffuser, the example method in accordance with some embodiments may include: in a first display mode in which the switchable diffuser is in a transparent state, operating the multiview display to display a first image; and in a second display mode in which the switchable diffuser is in a translucent state, operating the multiview display to display a second image.
[0261] Some embodiments of the example method may further include: determining a distance between a viewer and the multiview display; and switching between the first display mode and the second display mode based on the distance.
[0262] For some embodiments of the example method, switching between the first display mode and the second display mode may switch the switchable diffuser between the transparent state and the translucent state.
[0263] For some embodiments of the example method, the first image may be a virtual image displayed at a distance from the multiview display.
[0264] For some embodiments of the example method, the distance from the multiview display may include a viewing distance between a viewer of the multiview display and a physical display of the multiview display.
[0265] For some embodiments of the example method, the first image may be a three-dimensional (3D) image.
[0266] For some embodiments of the example method, the first image may be a two-dimensional (2D) image.
[0267] For some embodiments of the example method, the second image may be a two-dimensional (2D) image displayed on the multiview display.
[0268] For some embodiments of the example method, the switchable diffuser may be a liquid crystal diffuser.
[0269] For some embodiments of the example method, the multiview display may be a directed backlight display.
[0270] For some embodiments of the example method, the multiview display may be a light field display.
[0271] For some embodiments of the example method, the multiview display may be a stereoscopic display.
[0272] For some embodiments of the example method, the multiview display may include: a light-emitting layer comprising an addressable array of light-emitting elements; and an optical layer overlaying the light-emitting layer, wherein the switchable diffuser may be overlaying the optical layer, and wherein the switchable diffuser layer may be switchable between a transparent state and a translucent state.
[0273] For some embodiments of the example method, the switchable diffuser layer may be a liquid crystal diffuser layer.
[0274] For some embodiments of the example method, the optical layer may include a two-dimensional array of substantially collimating lenses.
[0275] For some embodiments of the example method, the optical layer may include a two-dimensional array of collimating lenses.
[0276] For some embodiments of the example method, the optical layer may include a two-dimensional array of converging lenses.
[0277] For some embodiments of the example method, the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
[0278] For some embodiments of the example method, the multiview display may further include a spatial light modulator layer, wherein the spatial light modulator layer may be external to the optical layer.
[0279] For some embodiments of the example method, the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
[0280] For some embodiments of the example method, the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
[0281] An example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
[0282] For some embodiments of the example apparatus, the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
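The layer stacks recited in the example apparatus paragraphs can be represented as ordered lists, which makes the two recited orderings (diffuser between optical layer and SLM, or SLM between diffuser and optical layer) easy to express and check. A minimal sketch under the assumption that light propagates from the first list element to the last; the layer names and the helper `diffuser_between` are illustrative, not from the application:

```python
# Each string names one layer of the example apparatus, ordered from the
# light source outward (an assumption for illustration).
STACK_DIFFUSER_BEFORE_SLM = [
    "light_emitting_layer",    # addressable array of emitters
    "micro_lens_array",        # optical layer focusing the emitted beams
    "polarizer",
    "lc_diffuser",             # switchable diffuser layer
    "spatial_light_modulator", # SLM, external to the optical layer
]

STACK_SLM_BEFORE_DIFFUSER = [
    "light_emitting_layer",
    "micro_lens_array",
    "polarizer",
    "spatial_light_modulator",
    "lc_diffuser",
]

def diffuser_between(stack, a="light_emitting_layer",
                     b="spatial_light_modulator") -> bool:
    """Return True if the LC diffuser lies strictly between layers a and b."""
    i, j, k = stack.index(a), stack.index("lc_diffuser"), stack.index(b)
    return min(i, k) < j < max(i, k)
```

The first stack matches the paragraphs that place the diffuser between the light-emitting layer and the SLM; the second matches the variant with the SLM between the diffuser and the optical layer.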
[0283] An example display device in accordance with some embodiments may include: a multiview display including a switchable diffuser layer, wherein the display device may be configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a virtual image, and wherein the display device may be configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on a physical display of the multiview display, a two-dimensional image.
[0284] For some embodiments of the example display device, the virtual image may be a three-dimensional (3D) image.
[0285] For some embodiments of the example display device, the virtual image may be a two-dimensional (2D) image.
[0286] An additional example display device in accordance with some embodiments may include: a multiview display including a switchable diffuser layer and comprising a physical display, wherein the display device is configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display, in a manner configured to be seen by a viewer of the display device at a distance from the physical display, at least one of a three-dimensional virtual image or a two-dimensional virtual image, and wherein the display device is configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on the physical display, a two-dimensional image.
[0287] A further example display device in accordance with some embodiments may include: a light-emitting layer comprising an addressable array of light-emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer may be switchable between a transparent state and a translucent state.
[0288] For some embodiments of the further example display device, the switchable diffuser layer may be a liquid crystal diffuser layer.
[0289] For some embodiments of the further example display device, the optical layer may include a two-dimensional array of substantially collimating lenses.
[0290] For some embodiments of the further example display device, the optical layer may include a two-dimensional array of collimating lenses.
[0291] For some embodiments of the further example display device, the optical layer may include a two-dimensional array of converging lenses.
[0292] For some embodiments of the further example display device, the converging lenses may be operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
[0293] Some embodiments of the further example display device may further include a spatial light modulator layer, wherein the spatial light modulator may be external to the optical layer.
[0294] For some embodiments of the further example display device, the switchable diffuser layer may be between the optical layer and the spatial light modulator layer.
[0295] For some embodiments of the further example display device, the spatial light modulator layer may be between the switchable diffuser layer and the optical layer.
[0296] A further additional example display device in accordance with some embodiments may include: optics configured to generate a virtual display at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
[0297] For some embodiments of the further additional example display device, the switchable diffuser layer may be a liquid crystal diffuser layer.
[0298] An example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform any one of the methods described above.
[0299] An additional example method in accordance with some embodiments may include: operating in a first mode to generate a virtual image in-focus behind a physical location of a multiview display device; and operating in a second mode to generate an image in-focus at the physical location of the multiview display device.
[0300] Some embodiments of the additional example method may further include transitioning between the first mode and the second mode responsively to transitioning a state of a switchable diffuser between a transparent state and a translucent state.
[0301] Some embodiments of the additional example method may further include transitioning between the first mode and the second mode in accordance with transitioning a state of a switchable diffuser between a transparent state and a translucent state.
[0302] For some embodiments of the additional example method, operating in the first mode and operating in the second mode each comprise controlling a liquid crystal (LC) diffuser positioned between a light emitting layer and a spatial light modulator of the multiview display device.
[0303] For some embodiments of the additional example method, operating in the first mode may include controlling the LC diffuser to prevent light diffusion, and operating in the second mode may include controlling the LC diffuser to cause light diffusion.
[0304] For some embodiments of the additional example method, operating in the first mode comprises operating a camera of the multiview display device to measure a viewing distance between a viewer and a physical display of the multiview display device.
[0305] Some embodiments of the additional example method may further include transitioning between operating in the first mode and operating in the second mode based on a viewing distance between a viewer and a physical display of the multiview display device.
[0306] Some embodiments of the additional example method may further include determining the viewing distance using an image of a front-facing camera of the multiview display device to determine a distance from the multiview display device to an eye of a user.
[0307] For some embodiments of the additional example method, transitioning between operating in the first mode and operating in the second mode may cause the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold.
[0308] Some embodiments of the additional example method may further include displaying touchable user interface elements in at least one monocular display region of a display of the multiview display device if operating in the first mode.
[0309] An additional example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform any one of the methods described above.
[0310] A further additional example apparatus in accordance with some embodiments may include: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
[0311] For some embodiments of the further additional example apparatus, the one or more optical layers may include: a micro lens array (MLA); and one or more polarizers.
[0312] A further additional example method of displaying images with a multiview display in accordance with some embodiments may include switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.
[0313] Note that various hardware elements of one or more of the described embodiments are referred to as “modules” that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation. Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as those commonly referred to as RAM, ROM, etc.
[0314] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Claims

1. A method of operating a multiview display, where the multiview display comprises a switchable diffuser, the method comprising: in a first display mode in which the switchable diffuser is in a transparent state, operating the multiview display to display a first image; and in a second display mode in which the switchable diffuser is in a translucent state, operating the multiview display to display a second image.
2. The method of claim 1, further comprising: determining a distance between a viewer and the multiview display; and switching between the first display mode and the second display mode based on the distance.
3. The method of claim 2, wherein switching between the first display mode and the second display mode switches the switchable diffuser between the transparent state and the translucent state.
4. The method of any one of claims 1-3, wherein the first image is a virtual image displayed at a distance from the multiview display.
5. The method of claim 4, wherein the distance from the multiview display comprises a viewing distance between a viewer of the multiview display and a physical display of the multiview display.
6. The method of any one of claims 1-5, wherein the first image is a three-dimensional (3D) image.
7. The method of any one of claims 1-5, wherein the first image is a two-dimensional (2D) image.
8. The method of any one of claims 1-7, wherein the second image is a two-dimensional (2D) image displayed on the multiview display.
9. The method of any one of claims 1-8, wherein the switchable diffuser is a liquid crystal diffuser.
10. The method of any one of claims 1-8, wherein the multiview display is a directed backlight display.
11. The method of any one of claims 1-8, wherein the multiview display is a light field display.
12. The method of any one of claims 1-8, wherein the multiview display is a stereoscopic display.
13. The method of any one of claims 1-8, wherein the multiview display comprises: a light-emitting layer comprising an addressable array of light-emitting elements; and an optical layer overlaying the light-emitting layer, wherein the switchable diffuser is overlaying the optical layer, and wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
14. The method of claim 13, wherein the switchable diffuser layer is a liquid crystal diffuser layer.
15. The method of any one of claims 13-14, wherein the optical layer comprises a two-dimensional array of substantially collimating lenses.
16. The method of any one of claims 13-14, wherein the optical layer comprises a two-dimensional array of collimating lenses.
17. The method of any one of claims 13-14, wherein the optical layer comprises a two-dimensional array of converging lenses.
18. The method of claim 17, wherein the converging lenses are operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
19. The method of any one of claims 13-18, wherein the multiview display further comprises a spatial light modulator layer, and wherein the spatial light modulator layer is external to the optical layer.
20. The method of claim 19, wherein the switchable diffuser layer is between the optical layer and the spatial light modulator layer.
21. The method of claim 19, wherein the spatial light modulator layer is between the switchable diffuser layer and the optical layer.
22. An apparatus comprising: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
23. The apparatus of claim 22, wherein the one or more optical layers comprises: a micro lens array (MLA); and one or more polarizers.
24. A display device comprising: a multiview display comprising a switchable diffuser layer, wherein the display device is configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display a virtual image, and wherein the display device is configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on a physical display of the multiview display, a two-dimensional image.
25. The display device of claim 24, wherein the virtual image is a three-dimensional (3D) image.
26. The display device of claim 24, wherein the virtual image is a two-dimensional (2D) image.
27. A display device comprising: a multiview display comprising a switchable diffuser layer and comprising a physical display, wherein the display device is configured to operate in a first display mode in which the switchable diffuser layer is in a transparent state and in which the multiview display is operable to display, in a manner configured to be seen by a viewer of the display device at a distance from the physical display, at least one of a three-dimensional virtual image or a two-dimensional virtual image, and wherein the display device is configured to operate in a second display mode in which the switchable diffuser layer is in a translucent state and in which the multiview display is operable to display, on the physical display, a two-dimensional image.
28. A display device comprising: a light-emitting layer comprising an addressable array of light-emitting elements; an optical layer overlaying the light-emitting layer; and a switchable diffuser layer overlaying the optical layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
29. The display device of claim 28, wherein the switchable diffuser layer is a liquid crystal diffuser layer.
30. The display device of any one of claims 28-29, wherein the optical layer comprises a two-dimensional array of substantially collimating lenses.
31. The display device of any one of claims 28-29, wherein the optical layer comprises a two-dimensional array of collimating lenses.
32. The display device of any one of claims 28-29, wherein the optical layer comprises a two-dimensional array of converging lenses.
33. The display device of claim 32, wherein the converging lenses are operative to generate respective virtual images of the light-emitting elements at a predetermined depth behind the display device.
34. The display device of any one of claims 28-33, further comprising a spatial light modulator layer, wherein the spatial light modulator is external to the optical layer.
35. The display device of claim 34, wherein the switchable diffuser layer is between the optical layer and the spatial light modulator layer.
36. The display device of claim 34, wherein the spatial light modulator layer is between the switchable diffuser layer and the optical layer.
37. A display device comprising: optics configured to generate a virtual display at a predetermined depth behind the display device; and a switchable diffuser layer, wherein the switchable diffuser layer is switchable between a transparent state and a translucent state.
38. The display device of claim 37, wherein the switchable diffuser layer is a liquid crystal diffuser layer.
39. An apparatus comprising: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform the method of any of claims 1 through 21.
40. A method comprising: operating in a first mode to generate a virtual image in-focus behind a physical location of a multiview display device; and operating in a second mode to generate an image in-focus at the physical location of the multiview display device.
41. The method of claim 40, further comprising transitioning between the first mode and the second mode responsively to transitioning a state of a switchable diffuser between a transparent state and a translucent state.
42. The method of claim 40, further comprising transitioning between the first mode and the second mode in accordance with transitioning a state of a switchable diffuser between a transparent state and a translucent state.
43. The method of any one of claims 40-42, wherein operating in the first mode and operating in the second mode each comprise controlling a liquid crystal (LC) diffuser positioned between a light emitting layer and a spatial light modulator of the multiview display device.
44. The method of claim 43, wherein operating in the first mode comprises controlling the LC diffuser to prevent light diffusion, and wherein operating in the second mode comprises controlling the LC diffuser to cause light diffusion.
45. The method of any one of claims 40-44, wherein operating in the first mode comprises operating a camera of the multiview display device to measure a viewing distance between a viewer and a physical display of the multiview display device.
46. The method of any one of claims 40-45, further comprising transitioning between operating in the first mode and operating in the second mode based on a viewing distance between a viewer and a physical display of the multiview display device.
47. The method of claim 46, further comprising determining the viewing distance using an image of a front-facing camera of the multiview display device to determine a distance from the multiview display device to an eye of a user.
48. The method of any one of claims 40-47, wherein transitioning between operating in the first mode and operating in the second mode causes the multiview display to operate in the first mode if the viewing distance is above a threshold and to operate in the second mode if the viewing distance is below the threshold.
49. The method of any one of claims 40-48, further comprising displaying touchable user interface elements in at least one monocular display region of a display of the multiview display device if operating in the first mode.
50. An apparatus comprising: a processor; and a non-transitory computer-readable medium storing instructions that are operative, when executed by the processor, to cause the apparatus to perform the method of any one of claims 40-49.
51. An apparatus comprising: a light-emitting layer comprising one or more light emitting elements; a liquid crystal (LC) diffuser layer; one or more optical layers configured to focus light beams emitted from the one or more light emitting elements; and a spatial light modulator (SLM).
52. The apparatus of claim 51, wherein the one or more optical layers comprises: a micro lens array (MLA); and one or more polarizers.
53. A method of displaying images with a multiview display comprising switching a switchable diffuser of the multiview display between a first display mode in which the switchable diffuser is in a transparent state and a second display mode in which the switchable diffuser is in a translucent state.
PCT/US2020/055084 2019-10-15 2020-10-09 Method for projecting an expanded virtual image with a small light field display WO2021076424A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962915348P 2019-10-15 2019-10-15
US62/915,348 2019-10-15

Publications (1)

Publication Number Publication Date
WO2021076424A1 true WO2021076424A1 (en) 2021-04-22

Family

ID=73038458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/055084 WO2021076424A1 (en) 2019-10-15 2020-10-09 Method for projecting an expanded virtual image with a small light field display

Country Status (1)

Country Link
WO (1) WO2021076424A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210356746A1 (en) * 2020-05-14 2021-11-18 Korea Institute Of Science And Technology Image display apparatus with extended depth of focus and method of controlling the same

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6118584A (en) 1995-07-05 2000-09-12 U.S. Philips Corporation Autostereoscopic display apparatus
WO2002059691A2 (en) 2000-12-18 2002-08-01 Alan Sullivan 3d display devices with transient light scattering shutters
US7518149B2 (en) 2003-05-02 2009-04-14 University College Cork - National University Of Ireland, Cork Light emitting mesa structures with high aspect ratio and near-parabolic sidewalls
WO2005011292A1 (en) 2003-07-31 2005-02-03 Koninklijke Philips Electronics N.V. Switchable 2d/3d display
US20060176557A1 (en) * 2005-02-09 2006-08-10 Adrian Travis 2D/3D compatible display system
GB2424717A (en) * 2005-03-31 2006-10-04 Arisawa Seisakusho Kk Image displaying apparatus
US7994527B2 (en) 2005-11-04 2011-08-09 The Regents Of The University Of California High light extraction efficiency light emitting diode (LED)
WO2008138986A2 (en) 2007-05-16 2008-11-20 Seereal Technologies S.A. Holographic display with microprism array
US20080316597A1 (en) * 2007-06-25 2008-12-25 Industrial Technology Research Institute Three-dimensional (3d) display
US20090225244A1 (en) * 2008-03-07 2009-09-10 Wen-Chun Wang Image display device and light source control device therefor
US20100079584A1 (en) 2008-09-30 2010-04-01 Samsung Electronics Co., Ltd. 2D/3D switchable autostereoscopic display apparatus and method
US9462261B2 (en) 2008-09-30 2016-10-04 Samsung Electronics Co., Ltd. 2D/3D switchable autostereoscopic display apparatus and method
WO2011149641A2 (en) 2010-05-24 2011-12-01 3M Innovative Properties Company Directional backlight with reduced crosstalk
US8848006B2 (en) 2012-01-25 2014-09-30 Massachusetts Institute Of Technology Tensor displays
US20140035959A1 (en) 2012-08-04 2014-02-06 Paul Lapstun Light Field Display Device and Method
US9298168B2 (en) 2013-01-31 2016-03-29 Leia Inc. Multiview 3D wrist watch
US20160116752A1 (en) 2013-04-28 2016-04-28 Boe Technology Group Co, Ltd. 3d display device
WO2016140851A1 (en) 2015-03-05 2016-09-09 3M Innovative Properties Company Optical system with switchable diffuser
KR20170006318A (en) * 2015-07-07 2017-01-18 엘지디스플레이 주식회사 Stereopsis image display device
WO2017055894A1 (en) 2015-09-30 2017-04-06 Lightspace Technologies Sia Multi-planar volumetric real time three-dimensional display and method of operation
EP3273302A1 (en) 2016-07-19 2018-01-24 Samsung Electronics Co., Ltd Beam steering backlight unit and holographic display apparatus including the same
WO2019001004A1 (en) * 2017-06-27 2019-01-03 京东方科技集团股份有限公司 Display system and display method therefor, and vehicle
WO2019040484A1 (en) 2017-08-23 2019-02-28 Pcms Holdings, Inc. Light field image engine method and apparatus for generating projected 3d light fields
WO2019164745A1 (en) 2018-02-20 2019-08-29 Pcms Holdings, Inc. Multifocal optics for light field displays

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
"Transparency and Translucency", Wikipedia, 25 September 2019 (2019-09-25), Retrieved from the Internet <URL:en.wikipedia[dot]org/wiki/Transparency_and_translucency>
A. MOHEGHI ET AL.: "PSCT for Switchable Transparent Liquid Crystal Displays", SID 2015 DIGEST, vol. 46, 2015, pages 1
FRANÇOIS TEMPLIER ET AL.: "A Novel Process for Fabricating High-Resolution and Very Small Pixel-pitch GaN LED Microdisplays", SID 2017 DIGEST, 2017, pages 268 - 271, XP055608172, DOI: 10.1002/sdtp.11684
G. NABIL HASSANEIN: "Optical Tuning of Polymer Stabilized Liquid Crystals Refractive Index", J. LASERS, OPTICS & PHOTONICS, vol. 5, no. 3, 2018, pages 3
H. CHEN ET AL.: "A Low Voltage Liquid Crystal Phase Grating with Switchable Diffraction Angles", NATURE SCIENTIFIC REPORTS, vol. 7, 2017
J. MA, L. SHI, D.-K. YANG: "Bistable Polymer Stabilized Cholesteric Texture Light Shutter", APPLIED PHYSICS EXPRESS, vol. 3, 2010, pages 2, XP055314711, DOI: 10.1143/APEX.3.021702
J. PHOTOPOLYMER SCI. AND TECH., 2015, pages 319 - 23
R. YAMAGUCHI ET AL.: "Normal and Reverse Mode Light Scattering Properties in Nematic Liquid Crystal Cell Using Polymer Stabilized Effect", vol. 28, pages 3
SHANG, XIAOBING ET AL.: "Fast Switching Cholesteric Liquid Crystal Optical Beam Deflector with Polarization Independence", SCIENTIFIC REPORTS, vol. 7, no. 1, 2017, pages 6492
VINCENT W. LEE, NANCY TWU, IOANNIS KYMISSIS: "Micro-LED Technologies and Applications", INFORMATION DISPLAY, 2016, pages 16 - 23, XP055608166, DOI: 10.1002/j.2637-496X.2016.tb00949.x
Y. MA ET AL.: "Fast Switchable Ferroelectric Liquid Crystal Gratings with Two Electro-Optical Modes", AIP ADVANCES, vol. 6, 2016, pages 3

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210356746A1 (en) * 2020-05-14 2021-11-18 Korea Institute Of Science And Technology Image display apparatus with extended depth of focus and method of controlling the same

Similar Documents

Publication Publication Date Title
TWI813681B (en) Apparatus and method for displaying a three-dimensional content
KR102625625B1 (en) Light field imaging method and apparatus for generating a projected 3D light field
US20220311990A1 (en) Optical method and system for light field displays based on distributed apertures
US11846790B2 (en) Optical method and system for light field displays having light-steering layers and periodic optical layer
WO2018200417A1 (en) Systems and methods for 3d displays with flexible optical layers
US11624934B2 (en) Method and system for aperture expansion in light field displays
WO2019164745A1 (en) Multifocal optics for light field displays
US11917121B2 (en) Optical method and system for light field (LF) displays based on tunable liquid crystal (LC) diffusers
EP3987346A1 (en) Method for enhancing the image of autostereoscopic 3d displays based on angular filtering
WO2021076424A1 (en) Method for projecting an expanded virtual image with a small light field display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20799948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20799948

Country of ref document: EP

Kind code of ref document: A1