WO2022020861A1 - Light field display for rendering perception-adjusted content, and dynamic light field shaping system and layer therefor - Google Patents

Light field display for rendering perception-adjusted content, and dynamic light field shaping system and layer therefor

Info

Publication number
WO2022020861A1
Authority
WO
WIPO (PCT)
Prior art keywords
light field
perception
display
lfsl
field shaping
Prior art date
Application number
PCT/US2021/070944
Other languages
French (fr)
Inventor
Raul Mihali
Faleh Mohammad Faleh ALTAL
Thanh Quang TAT
Mostafa DARVISHI
Joseph Ivar ETIGSON
Original Assignee
Evolution Optiks Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Evolution Optiks Limited filed Critical Evolution Optiks Limited
Priority to CA3186253A priority Critical patent/CA3186253A1/en
Priority to EP21845718.2A priority patent/EP4185183A4/en
Publication of WO2022020861A1 publication Critical patent/WO2022020861A1/en

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/36Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source using liquid crystals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0025Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for optical correction, e.g. distorsion, aberration
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/0006Arrays
    • G02B3/0037Arrays characterized by the distribution or form of lenses
    • G02B3/0056Arrays characterized by the distribution or form of lenses arranged along two different directions in a plane, e.g. honeycomb arrangement of lenses
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/002Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to project the image of a two-dimensional display, such as an array of light emitting or modulating elements or a CRT
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/307Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using fly-eye lenses, e.g. arrangements of circular lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/322Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using varifocal lenses or mirrors
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/028Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B3/032Devices for presenting test symbols or characters, e.g. test chart projectors
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2380/00Specific applications
    • G09G2380/08Biomedical applications

Definitions

  • the present disclosure relates to light field displays, and, in particular, to a light field display for rendering perception-adjusted content, and dynamic light field shaping system and method, and layer therefor.
  • the operating systems of current electronic devices having graphical displays offer certain “Accessibility” features built into the software of the device to attempt to provide users with reduced vision the ability to read and view content on the electronic device.
  • current accessibility options include the ability to invert images, increase the image size, adjust brightness and contrast settings, bold text, view the device display only in grey, and for those with legal blindness, the use of speech technology. These techniques focus on the limited ability of software to manipulate display images through conventional image manipulation, with limited success.
  • 4D light field displays with lenslet arrays or parallax barriers to correct visual aberrations have since been proposed by Pamplona et al.
  • Optical devices, such as refractors and phoropters, are commonly used to test or evaluate the visual acuity of their users, for example, in the prescription of corrective eyewear, contact lenses or intraocular implants.
  • a light field shaping system for interfacing with light emanated from pixels of a digital display to govern display of perception-adjusted content
  • the system comprising: a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the digital display so to align said array of light field shaping elements with the pixels in accordance with a current light field shaping geometry to thereby define a perception adjustment of displayed content in accordance with said current geometry; an actuator operable to adjust an optical path length between said LFSL and the digital display to adjust alignment of said light field shaping elements with the pixels in accordance with an adjusted geometry thereby defining an adjusted perception adjustment of displayed content in accordance with said adjusted geometry; and a digital data processor operable to activate said actuator to translate said LFSL to adjust the perception-adjusted content.
  • the LFSL comprises a microlens array.
  • the adjusted perception adjustment corresponds to a reduced visual acuity of a user.
  • the actuator is operable to translate said LFSL in a direction perpendicular to the digital display.
  • the light field shaping geometry relates to a physical distance between the digital display and said LFSL.
  • the adjusted geometry corresponds to a selectable range of perception adjustments of displayed content, wherein distinctly selectable geometries correspond with distinct selectable ranges of perception adjustments.
  • the distinct selectable ranges comprise distinct dioptric correction ranges.
  • the digital data processor is further operable to: receive as input a requested perception adjustment as said adjusted perception adjustment; based at least in part on said requested perception adjustment, calculate an optimal optical path length to thereby define an optimal geometry as said adjusted geometry; and activate said actuator to adjust said optical path length to said optimal optical path length and thereby optimally achieve said requested perception adjustment.
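  • For illustration only, and not as part of the claimed subject matter, the following minimal sketch shows one way such processor logic could be organized; the calibration table and the actuator.move_to() call are hypothetical placeholders rather than elements of the disclosure.

```python
# Hypothetical calibration: optical path length (mm) between the LFSL and the
# display, mapped to the range of perception adjustments (diopters) that
# geometry can produce. Values are placeholders for illustration.
CALIBRATION = [
    (0.50, (-1.0, 1.5)),
    (0.65, (0.5, 3.0)),
    (0.80, (2.0, 5.0)),
]

def optimal_path_length(requested_diopters: float) -> float:
    """Return the calibrated optical path length whose adjustment range
    contains, and best centres, the requested perception adjustment."""
    candidates = [(gap, lo, hi) for gap, (lo, hi) in CALIBRATION
                  if lo <= requested_diopters <= hi]
    if not candidates:
        raise ValueError("requested adjustment outside calibrated ranges")
    return min(candidates,
               key=lambda c: abs(requested_diopters - (c[1] + c[2]) / 2))[0]

def apply_requested_adjustment(requested_diopters: float, actuator) -> None:
    """Receive a requested perception adjustment, compute the optimal optical
    path length, and activate the actuator (hypothetical API) accordingly."""
    actuator.move_to(optimal_path_length(requested_diopters))
```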
  • the digital data processor is further operable to: receive as input feedback data related to a quality of said adjusted perception adjustment; and dynamically adjust said optical path length via said actuator in response to said feedback data.
  • the light field shaping system comprises a system for administering a vision-based test.
  • the vision-based test comprises a visual acuity examination
  • the perception-adjusted content comprises an optotype
  • the vision-based test comprises a cognitive impairment test.
  • the actuator selectively introduces an optical path length increasing medium within said optical path length to selectively adjust said optical path length.
  • the digital data processor is operable to translate said LFSL to adjust the perception-adjusted content while satisfying a visual content quality parameter.
  • the visual content quality parameter comprises one or more of a perception-adjusted content resolution, a corneal beam size, a view zone size, or a distance between a pupil and a view zone edge.
  • a method for dynamically adjusting a perception adjustment of displayed content in a light field display system comprising a digital processor and a digital display defined by an array of pixels and a light field shaping layer (LFSL) disposed relative thereto, the method comprising: accessing display geometry data related to one or more of the light field display system and a user thereof, said display geometry data at least in part defining the perception adjustment of displayed content; digitally identifying a preferred display geometry based, at least in part, on said display geometry data, said preferred display geometry comprising a desirable optical path length between said LFSL and the pixels to optimally produce a requested perception adjustment of displayed content; automatically adjusting said optical path length, via the digital processor and an actuator operable to adjust said optical path length and thereby adjust the perception adjustment of displayed content to said requested perception adjustment.
  • a light field shaping system for interfacing with light emanated from underlying pixels of a digital screen in a light field display to display content in accordance with a designated perception adjustment
  • the system comprising: a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the digital screen so to define a system configuration, said system configuration at least in part defining a subset of perception adjustments displayable by the light field display; an actuator operable to adjust a relative distance between said LFSL and the digital screen to adjust said system configuration; and a digital data processor operable to activate said actuator to selectively adjust said relative distance and thereby provide a preferred system configuration defining a preferred subset of perception adjustments; wherein said preferred subset of perception adjustments comprises the designated perception adjustment.
  • the digital data processor is further operable to: receive as input data related to said designated perception adjustment; based at least in part on said data related to said designated perception adjustment, calculate said preferred system configuration.
  • the digital data processor is further operable to dynamically adjust said system configuration during use of the light field display.
  • a light field display system for displaying content in accordance with a designated perception adjustment, the system comprising: a digital display screen comprising an array of pixels; a light field shaping layer (LFSL) comprising an array of light field shaping elements shaping a light field emanating from said array of pixels and disposable relative thereto in accordance with a system configuration at least in part defining a subset of displayable perception adjustments; an actuator operable to translate said LFSL relative to said array of pixels to adjust said system configuration; and a digital data processor operable to activate said actuator to translate said LFSL and thereby provide a preferred system configuration defining a preferred subset of perception adjustments; wherein said preferred subset of perception adjustments comprises the designated perception adjustment.
  • a light field shaping layer to be used in conjunction with a digital display comprising an array of digital pixels, wherein an optimal rendering of a perceptively adjusted image is provided by minimizing a spread of light from the display pixels through the LFSL in accordance with the following expression:
  • a light field shaping system for performing a vision-based assessment using perception-adjusted content, the system comprising: a pixelated digital display; a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the pixelated digital display so to align said array of light field shaping elements with pixels of the pixelated digital display in accordance with a current light field shaping geometry to thereby define a perception adjustment of displayed content in accordance with said current geometry; an actuator operable to adjust an optical path length between said LFSL and the pixelated digital display to adjust alignment of said light field shaping elements with the pixels in accordance with an adjusted geometry thereby defining an adjusted perception adjustment of displayed content in accordance with said adjusted geometry; and a digital data processor operable to activate said actuator to translate said LFSL to adjust the perception-adjusted content for the vision-based assessment.
  • the vision-based assessment comprises a cognitive impairment assessment.
  • the vision-based assessment comprises a visual acuity assessment.
  • the digital data processor is operable to translate said LFSL to adjust the perception-adjusted content while maintaining a visual content quality parameter associated with the vision-based assessment.
  • the visual content quality parameter comprises one or more of a perception-adjusted content resolution, a corneal beam size of said perception-adjusted content, a view zone size, or a distance between a pupil and a view zone edge.
  • the adjusted perception adjustment comprises a range of perception adjustments corresponding to said adjusted geometry.
  • the vision-based assessment comprises the display of content in accordance with an assessment range of perception adjustments.
  • the range of perception adjustments corresponds at least in part to said assessment range of perception adjustments.
  • the system further comprises an optical component intersecting an optical path of the perception-adjusted content and configured to adjust an optical power of the perception-adjusted content for the vision-based assessment.
  • the optical component comprises a lens or a tunable lens.
  • Figures 1A and 1B are schematic diagrams of an exemplary light field vision testing or previewing system, in accordance with one embodiment;
  • Figures 2A to 2C schematically illustrate normal vision, blurred vision, and corrected vision in accordance with one embodiment, respectively;
  • Figures 3A and 3B are schematic diagrams of a light field display in which respective pixel subsets are aligned to emit light through a corresponding microlens or lenslet, in accordance with one embodiment;
  • Figures 4A to 4C are schematic diagrams of exemplary light field vision testing or previewing systems (e.g. refractors/phoropters), in accordance with different embodiments;
  • Figure 5 is a plot of the angular resolution of an exemplary light field display as a function of the dioptric power generated, in accordance with one embodiment;
  • Figures 6A to 6D are schematic plots of the image quality generated by a light field refractor/phoropter as a function of the dioptric power generated by using in combination with the light field display (A) no refractive component, (B) one refractive component, (C) and (D) a multiplicity of refractive components;
  • Figures 7A, 7B and 7C are perspective views of exemplary light field refractors/phoropters, showing a casing thereof in cross-section (A and B) and a unit combining side-by-side two of the units (C) shown in 7A and 7B, in accordance with one embodiment;
  • Figure 8 is a process flow diagram of an exemplary dynamic subjective vision testing method, in accordance with one embodiment
  • Figure 9 is a schematic diagram of an exemplary light field image showing two columns of optotypes at different dioptric power for the method of Figure 8, in accordance with one embodiment
  • Figure 10 is a schematic diagram of an exemplary vision testing system configuration employing a microlens array, in accordance with at least one embodiment
  • Figure 11 is a schematic diagram of an exemplary pixel and lenslet system, in accordance with various embodiments.
  • Figure 12 is a schematic diagram illustrating a focusing of a rectangular beam, in accordance with some embodiments;
  • Figure 13 is an exemplary plot illustrating a spacing of retinal beam spots as a function of various parameters, in accordance with at least one embodiment
  • Figure 14 is an exemplary plot illustrating an interplay of various parameters in an exemplary light field system, in accordance with some embodiments;
  • Figures 15A and 15B are exemplary plots of a minimum beam size in an exemplary light field display system as a function of light field shaping layer focal length, with the condition that interference from different lenslets is avoided, and
  • Figures 15C and 15D are exemplary plots of a minimum beam size in the exemplary light field display system of Figures 15A and 15B, without the condition that interference from different lenslets is avoided, in accordance with various embodiments;
  • Figures 16A and 16B are exemplary plots illustrating the effect of lenslet size on retinal spot size, in accordance with various embodiments
  • Figures 17A and 17C, and Figures 17B and 17D are exemplary plots illustrating the field of view and retina spread, respectively, for an exemplary light field display system, in accordance with some embodiments;
  • Figures 18A and 18B are schematic diagrams illustrating an exemplary light field display system comprising a dynamic light field shaping layer, in accordance with various embodiments;
  • Figure 19 is a schematic diagram illustrating an exemplary process for displaying adjusted content using a dynamic light field shaping layer, in accordance with some embodiments;
  • Figures 20A to 20I are exemplary plots illustrating various parameters for an exemplary light field system, in accordance with one embodiment;
  • Figures 21A to 21J are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment;
  • Figures 22A to 22J are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment
  • Figures 23A to 23J are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment;
  • Figures 24A and 24B are perspective views of an exemplary portable cognitive impairment assessment system, in accordance with one embodiment;
  • Figures 25A and 25B are schematic diagrams illustrating an exemplary cognitive impairment assessment device comprising a user-interfacing portion and a load-bearing portion and comprising a displaceable light field shaping layer, in accordance with one embodiment;
  • Figures 26A to 26J are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment.
  • Figures 27A to 27K are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment.
  • elements may be described as “configured to” perform one or more functions or “configured for” such functions.
  • an element that is configured to perform or configured for performing a function is enabled to perform the function, or is suitable for performing the function, or is adapted to perform the function, or is operable to perform the function, or is otherwise capable of performing the function.
  • the systems and methods described herein provide, in accordance with different embodiments, different examples of a light field display, adjusted pixel rendering method therefor, and adjusted vision perception system and method using same.
  • some of the herein-described embodiments provide improvements or alternatives to current light field display technologies, for instance, in providing a range of dioptric corrections that may be displayed or a given light field display system having a particular light field shaping layer geometry and display resolution.
  • Various embodiments relate to the provision of an increased range of perception adjustments accessible to, for instance, a vision testing system (e.g. a refractor or phoropter), or other display system operable to provide a perception adjustment through the provision of a light field, such as a smart phone, television, pixelated display, car dashboard interface, or the like.
  • devices, displays and methods described herein may allow a user’s perception of one or more input images (or input image portions), where each image or image portion is virtually located or perceived to be at a distinct image plane/depth location, to be adjusted or altered using the described light field display technology, again allowing for, for instance, corrective assessment, or the display of media content (e.g. images or videos) in accordance with a dioptric shift or perception adjustment that may not be enabled by a light field display technology having a fixed geometry or configuration.
  • a geometry, disposition and/or relative positioning of an integrated or cooperative light field shaping element array e.g. light field shaping layer (LFSL)
  • Some of the herein described embodiments provide for digital display devices, or devices encompassing such displays, for use by users having reduced visual acuity, whereby images ultimately rendered by such devices can be dynamically processed to accommodate the user’s reduced visual acuity so that they may comfortably consume rendered images without the use of corrective eyewear, contact lenses, or surgical intervention, as would otherwise be required.
  • embodiments are not to be limited as such as the notions and solutions described herein may also be applied to other technologies in which a user’s perception of an input image to be displayed can be altered or adjusted via the light field display.
  • digital displays as considered herein will comprise a set of image rendering pixels and a corresponding set of light field shaping elements that at least partially govern a light field emanated thereby to produce a perceptively adjusted version of the input image, notably distinct perceptively adjusted portions of an input image or input scene, which may include distinct portions of a same image, a same 2.5D/3D scene, or distinct images (portions) associated with different image depths, effects and/or locations and assembled into a combined visual input.
  • light field shaping elements may take the form of a light field shaping layer or like array of optical elements to be disposed relative to the display pixels in at least partially governing the emanated light field.
  • such light field shaping layer elements may take the form of a microlens and/or pinhole array, or other like arrays of optical elements, or again take the form of an underlying light field shaping layer, such as an underlying array of optical gratings or like optical elements operable to produce a directional pixelated output.
  • the light field shaping layer may be disposed at a preset or adjustable distance from the pixelated display so to controllably shape or influence a light field emanating therefrom.
  • each light field shaping layer can be defined by an array of optical elements centered over a corresponding subset of the display’s pixel array to optically influence a light field emanating therefrom and thereby govern a projection thereof from the display medium toward the user, for instance, providing some control over how each pixel or pixel group will be viewed by the viewer’s eye(s).
  • arrayed optical elements may include, but are not limited to, lenslets, microlenses or other such diffractive optical elements that together form, for example, a lenslet array; pinholes or like apertures or windows that together form, for example, a parallax or like barrier; concentrically patterned barriers, e.g.
  • the display device may also generally invoke a hardware processor operable on image pixel (or subpixel) data for an image to be displayed to output corrected or adjusted image pixel data to be rendered as a function of a stored characteristic of the light field shaping elements and/or layer (e.g.
  • Image processing may, in some embodiments, be dynamically adjusted as a function of the user’s visual acuity or intended application so to actively adjust a distance of a virtual image plane, or perceived image on the user’s retinal plane given a quantified user eye focus or like optical aberration(s), induced upon rendering the corrected/adjusted image pixel data via the optical layer and/or elements, for example, or otherwise actively adjust image processing parameters as may be considered, for example, when implementing a viewer-adaptive pre-filtering algorithm or like approach (e.g. compressive light field optimization), so to at least in part govern an image perceived by the user’s eye(s) given pixel or subpixel-specific light visible thereby through the layer.
  • various ray tracing processes may be employed, in accordance with various embodiments, for rendering, for instance, adjusted content to accommodate a user’s reduced visual acuity.
  • various embodiments relate to computationally implementing one or more of the ray tracing processes described in Applicant’s U.S. Patent No. 10,394,322 issued August 27, 2019, U.S. Patent No. 10,636,116 issued April 28, 2020, and/or U.S. Patent No. 10,761,604 issued September 1, 2020, the entire contents of which are hereby incorporated herein by reference.
  • a subjective vision testing device/system (interchangeably referred to as a corrective vision previewing device/system), generally referred to using the numeral 100, will now be described.
  • a light field vision testing device such as a light field refractor or phoropter device 102.
  • light field refractor 102 is a device comprising, as mentioned above, a light field display 104, and is operable to display or generate one or more images, including optotypes, to a user or patient having his/her visual acuity (e.g. refractive error) tested.
  • light field display 104 comprises a light field shaping layer (LFSL) 108 overlaid or placed in front of a digital pixel display 110 (i.e. LCD, LED, OLED, etc.).
  • a lenslet array comprising an array of microlenses (also interchangeably referred to herein as lenslets) that are each disposed at a distance from a corresponding subset of image rendering pixels in an underlying digital display.
  • a light field shaping layer may be manufactured and disposed as a digital screen overlay
  • other integrated concepts may also be considered, for example, where light field shaping elements such as a textured or masked glass plate, beam-shaping light sources (e.g. directional light sources and/or backlit integrated optical grating array), or like component, may be coupled with integrally formed or manufactured integral components of a digital display system.
  • each lenslet will predictively shape light emanating from these pixel subsets to at least partially govern light rays being projected toward the user by the display device.
  • other light field shaping layers may also be considered herein without departing from the general scope and nature of the present disclosure, whereby light field shaping will be understood by the person of ordinary skill in the art to reference measures by which light, that would otherwise emanate indiscriminately (i.e. isotropically) from each pixel group, is deliberately controlled to define predictable light rays that can be traced between the user and the device’s pixels through the shaping layer.
  • a light field is generally defined as a vector function that describes the amount of light flowing in every direction through every point in space.
  • anything that produces or reflects light has an associated light field.
  • the embodiments described herein produce light fields from an object that are not “natural” vector functions one would expect to observe from that object. This gives it the ability to emulate the “natural” light fields of objects that do not physically exist, such as a virtual display located far behind the light field display.
  • a light field display 104 projects the correct sharp image (H) on the retina for an eye with a crystalline lens which otherwise could not accommodate sufficiently to produce a sharp image.
  • the other two light field pixels (I) and (J) are drawn lightly, but would otherwise fill out the rest of the image.
  • a light field as seen in Figure 2C cannot be produced with a ‘normal’ two-dimensional display because the pixels’ light field emits light isotropically. Instead it is necessary to exercise tight control on the angle and origin of the light emitted, for example, using a microlens array or other light field shaping layer such as a parallax barrier, or combination thereof.
  • Figure 3A schematically illustrates a single light field pixel defined by a convex microlens 302 disposed at its focus from a corresponding subset of pixels in a digital pixel display 108 to produce a substantially collimated beam of light emitted by these pixels, whereby the direction of the beam is controlled by the location of the pixel(s) relative to the microlens.
  • the single light field pixel produces a beam similar to that shown in Figure 2C where the outside rays are lighter and the majority inside rays are darker.
  • the digital pixel display 108 emits light which hits the microlens 302 and it results in a beam of substantially collimated light (A).
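  • As a purely illustrative aside (not taken from the disclosure), the direction of such a substantially collimated beam can be estimated from the lateral offset of the emitting pixel(s) relative to the lenslet's optical axis and the lenslet focal length:

```python
import math

# Geometry sketch: a lenslet of focal length f placed at its focal distance
# above a pixel collimates that pixel's light; the beam direction follows from
# the pixel's lateral offset relative to the lenslet's optical axis.
def beam_angle_deg(pixel_offset_um: float, focal_length_um: float) -> float:
    """Angle of the collimated beam relative to the display normal."""
    return math.degrees(math.atan2(pixel_offset_um, focal_length_um))

# Example: a pixel 200 um off-axis under a 65 mm focal-length lenslet
print(beam_angle_deg(200.0, 65_000.0))   # ~0.18 degrees
```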
  • FIG. 3B schematically illustrates an example of a light field display assembly in which a LFSL 106 sits above a pixel display 108 to have pixels 304 emit light through the microlens array.
  • a ray-tracing algorithm can thus be used to produce a pattern to be displayed on the pixel array below the microlens in order to create the desired virtual image that will effectively correct for the viewer’s reduced visual acuity.
  • the separation between the LFSL 106 and the pixel array 108, as well as the pitch of the lenses can be selected as a function of various operating characteristics, such as the normal or average operating distance of the display, and/or normal or average operating ambient light levels.
  • LFSL 106 may be a microlens array (MLA) defined by a hexagonal array of microlenses or lenslets disposed so to overlay a corresponding square pixel array of digital pixel display 108.
  • each microlens can be aligned with a designated subset of pixels to produce light field pixels as described above
  • the hexagonal-to-square array mismatch can alleviate certain periodic optical artifacts that may otherwise be manifested given the periodic nature of the optical elements and principles being relied upon to produce the desired optical image corrections.
  • other geometries such as a square microlens array, or an array comprising elongated hexagonal lenslets, may be favoured when operating a digital display comprising a hexagonal pixel array.
  • the MLA may further or alternatively be overlaid or disposed at an angle (rotation) relative to the underlying pixel array, which can further or alternatively alleviate periodic optical artifacts.
  • a pitch ratio between the microlens array and pixel array may be deliberately selected to further or alternatively alleviate periodic optical artifacts.
  • a perfectly matched pitch ratio (i.e. an exact integer number of display pixels per microlens) is more likely to induce periodic optical artifacts, whereas a pitch ratio mismatch can help reduce such occurrences.
  • the pitch ratio will be selected to define an irrational number, or at least, an irregular ratio, so to minimise periodic optical artifacts.
  • a structural periodicity can be defined so to reduce the number of periodic occurrences within the dimensions of the display screen at hand, e.g. ideally selected so to define a structural period that is greater than the size of the display screen being used.
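  • For illustration only, the following sketch (using a rational approximation of the pitch ratio as an assumed stand-in for whatever selection procedure an implementation may use) estimates the structural repeat period implied by a given lenslet/pixel pitch ratio, which can then be compared against the screen dimensions:

```python
from fractions import Fraction

def structural_period_mm(lens_pitch_um: float, pixel_pitch_um: float,
                         max_denominator: int = 1000) -> float:
    """Approximate the pitch ratio by a rational a/b in lowest terms; the
    combined lenslet/pixel pattern then repeats every b lenslets (= a pixels)."""
    ratio = Fraction(lens_pitch_um / pixel_pitch_um).limit_denominator(max_denominator)
    return ratio.denominator * lens_pitch_um / 1000.0

# Example: 1000 um lenslets over 25.4 um pixels (a 1000 dpi display)
print(structural_period_mm(1000.0, 25.4))   # 127.0 mm repeat period
```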
  • light field display 104 can render dynamic images at over 30 frames per second on the hardware in a smartphone.
  • a display device as described above and further exemplified below, can be configured to render a corrected or adjusted image via the light field shaping layer that accommodates, tests or simulates for the user’s visual acuity.
  • by adjusting the image correction in accordance with the user’s actual predefined, set or selected visual acuity level, different users and visual acuities may be accommodated using a same device configuration, whereas adjusting such parameters for a given user may allow for testing for or simulation of different corrective or visual adjustment solutions.
  • as corrective image pixel data may dynamically adjust a virtual image distance below/above the display as rendered via the light field shaping layer, different visual acuity levels may be accommodated, whether for an image input as a whole, for distinctly various portions thereof, or again progressively across a particular input.
  • light field rendering may be adjusted to effectively generate a virtual image on a virtual image plane that is set at a designated distance from an input user pupil location, for example, so to effectively push back, or move forward, a perceived image, or portion thereof, relative to the light field refractor device 102.
  • light field rendering may rather or alternatively seek to map the input image on a retinal plane of the user, taking into account visual aberrations, so to adaptively adjust rendering of the input image on the display device to produce the mapped effect.
  • the unadjusted input image would otherwise typically come into focus in front of or behind the retinal plane (and/or be subject to other optical aberrations)
  • this approach allows mapping the intended image onto the retinal plane and working therefrom to address designated optical aberrations accordingly.
  • the device may further computationally interpret and compute virtual image distances tending toward infinity, for example, for extreme cases of presbyopia.
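  • For illustration only, a simplified vergence model (an assumption for exposition, not the disclosed ray-tracing process) of how a requested dioptric correction maps to a virtual image plane distance, including the limiting case where that distance tends toward infinity:

```python
import math

def virtual_image_distance_m(display_distance_m: float,
                             correction_diopters: float) -> float:
    """Distance (m) from the eye at which the virtual image plane is placed so
    that its vergence equals the display's vergence plus the requested
    spherical correction; returns math.inf when the corrected vergence reaches
    zero (the extreme-presbyopia limiting case)."""
    vergence = 1.0 / display_distance_m + correction_diopters  # in diopters
    return math.inf if abs(vergence) < 1e-9 else 1.0 / vergence

# Example: display at 26 cm, a -2.0 D shift pushes the image back to ~0.54 m
print(virtual_image_distance_m(0.26, -2.0))
```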
  • This approach may also more readily allow, as will be appreciated by the below description, for adaptability to other visual aberrations that may not be as readily modeled using a virtual image and image plane implementation.
  • the input image is digitally mapped to an adjusted image plane (e.g.
  • As an example of the effectiveness of the light field display in generating a diopter displacement (e.g. simulating the effect of looking through an optical component (i.e. a lens) of a given diopter strength or power), Figure 5 shows a plot of the angular resolution (in arcminutes) of an exemplary light field display comprising a 1500 ppi digital pixel display, as a function of the dioptric power of the light field image (in diopters).
  • the light field display is able to generate displacements (line 502) in diopters that have higher resolution corresponding to 20/20 vision (line 504) or better (e.g. 20/15 - line 506), and close to 20/10 (line 508), here within a dioptric power range of 2 to 2.5 diopters.
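  • By way of context, the acuity reference lines of Figure 5 follow the standard optometric convention whereby a Snellen fraction 20/X corresponds to resolving detail of X/20 arcminutes; a short conversion sketch:

```python
def snellen_denominator(angular_resolution_arcmin: float) -> float:
    """Snellen 20/X acuity corresponding to a given angular resolution."""
    return 20.0 * angular_resolution_arcmin

for arcmin in (1.0, 0.75, 0.5):
    print(f"{arcmin} arcmin -> 20/{snellen_denominator(arcmin):g}")
# 1.0 arcmin -> 20/20, 0.75 arcmin -> 20/15, 0.5 arcmin -> 20/10
```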
  • using light field display 104 in conjunction with the light field rendering or ray-tracing methods referenced above, the light field display can display a virtual image at optical infinity, meaning that any level of accommodation-based presbyopia (e.g. first order) can be corrected for.
  • the light field display can both push the image back or forward, thus allowing for selective image corrections for both hyperopia (far-sightedness) and myopia (nearsightedness).
  • variable displacements and/or accommodations may be applied as a function of non-uniform visual aberrations, or again to provide perceptive previewing or simulation of non-uniform or otherwise variable corrective powers/measures across a particular input or field of view.
  • the light field rendering system introduced above, in conjunction with various ray-tracing methods as referenced above, may also be used with other devices which may similarly comprise a light field display.
  • this may include smartphones, tablets, e-readers, watches, GPS devices, laptops, desktop computer monitors, televisions, smart televisions, handheld video game consoles and controllers, vehicular dashboard and/or entertainment displays, and the like, without limitation.
  • any light field processing or ray-tracing methods as referenced herein, and related light field display solutions can be equally applied to image perception adjustment solutions for visual media consumption, as they can for subjective vision testing solutions, or other technologically related fields of endeavour.
  • the light field display and rendering/ray-tracing methods discussed above may all be used to implement, according to various embodiments, a subjective vision testing device or system such as a phoropter or refractor.
  • a light field display may replace, at least in part, the various refractive optical components usually present in such a device.
  • vision correction light field ray tracing methods may equally be applied to render optotypes at different dioptric power or refractive correction by generating vision correction for hyperopia (far-sightedness) and myopia (nearsightedness), as was described above in the general case of a vision correction display.
  • Light field systems and methods described herein may be applied to create the same capabilities as a traditional instrument and to open a spectrum of new features, all while improving upon many other operating aspects of the device.
  • the digital nature of the light field display enables continuous changes in dioptric power compared to the discrete change caused by switching or changing a lens or similar; displaying two or more different dioptric corrections seamlessly at the same time; and, in some embodiments, the possibility of measuring higher-order aberrations and/or of simulating them for different purposes such as deciding on free-form lenses, cataract surgery operation protocols, IOL choice, etc.
  • a correct light field can be produced, in some embodiments, only at or around the location of the user’s pupil(s).
  • the light field display can be paired with pupil tracking technology, as will be discussed below, to track a location of the user’s eyes/pupils relative to the display. The display can then compensate for the user’s eye location and produce the correct virtual image, for example, in real-time.
  • light field refractor 102 may include, integrated therein or interfacing therewith, a pupil/eye tracking system 110 to improve or enhance corrective image rendering by tracking a location of the user’s eye(s)/pupil(s) (e.g. both, or one dominant, eye(s)) and adjusting light field corrections accordingly; such a system may further include one or more eye/pupil tracking light sources, such as one or more infrared (IR) or near-IR (NIR) light source(s), to accommodate operation in limited ambient light conditions, leverage retinal retro-reflections, invoke corneal reflection, and/or other such considerations.
  • IR/NIR pupil tracking techniques may employ one or more (e.g.
  • Other techniques may employ ambient or IR/NIR light-based machine vision and facial recognition techniques to otherwise locate and track the user’s eye(s)/pupil(s).
  • one or more corresponding (e.g. visible, IR/NIR) cameras may be deployed to capture eye/pupil tracking signals that can be processed, using various image/sensor data processing techniques, to map a 3D location of the user’s eye(s)/pupil(s).
  • eye/pupil tracking hardware/software may be integral to device 102, for instance, operating in concert with integrated components such as one or more front facing camera(s), onboard IR/NIR light source(s) (not shown) and the like.
  • eye/pupil tracking hardware may be further distributed within the environment, such as dash, console, ceiling, windshield, mirror or similarly-mounted camera(s), light sources, etc.
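  • For illustration only, a minimal sketch of a tracking-and-rendering loop consistent with the above description; eye_tracker.pupil_xyz() and render_light_field() are hypothetical stand-ins for the tracking and ray-tracing stages rather than disclosed interfaces:

```python
import math
import time

TOLERANCE_MM = 2.0   # assumed pupil-motion threshold before re-rendering

def track_and_render(eye_tracker, render_light_field):
    """Re-compute the light field pattern whenever the tracked pupil moves by
    more than a small tolerance; both arguments are hypothetical callables."""
    last = None
    while True:
        pupil = eye_tracker.pupil_xyz()        # 3D pupil location (mm)
        if last is None or math.dist(pupil, last) > TOLERANCE_MM:
            render_light_field(pupil)          # re-trace rays for the new pupil
            last = pupil
        time.sleep(1 / 60)                     # roughly one display refresh
```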
  • light field refractor 102 may be configured with light field display 104 located relatively far away (e.g. one or more meters) from the user’s eye currently being diagnosed.
  • the pointed line in Figures 4A to 4C is used to schematically illustrate the direction of the light rays emitted by light field display 104.
  • eye-tracker 110 which may be provided as a physically separate element, for example, installed at a given location in a room or similar.
  • the noted eye/pupil tracker 110 may include the projection of IR markers/patterns to help align the patient’s eye with the light field display.
  • a tolerance window e.g.
  • light field refractor 102 may also comprise, according to different embodiments and as will be further discussed below, one or more refractive optical components 112, a processing unit 114, a data storage unit or internal memory 116, one or more cameras 118, a power source 120, and a network interface 122 for communicating via a network with a remote database or server 124.
  • power source 120 may comprise, for example, a rechargeable Li-ion battery or similar. In some embodiments, it may comprise an additional external power source, such as, for example, a USB-C external power supply. It may also comprise a visual indicator (screen or display) for communicating the device’s power status, for example whether the device is on/off or recharging.
  • internal memory 116 may be any form of electronic storage, including a disk drive, optical drive, read-only memory, random-access memory, or flash memory, to name a few examples.
  • a library of chart patterns may be located in internal memory 116 and/or retrievable from remote server 124 via network interface 122.
  • one or more optical components 112 may be used in combination with the light field display 104, for example to shorten the size of refractor 102 and still offer an acceptable range in dioptric power.
  • the general principle is schematically illustrated in the plots of Figures 6A to 6D. In these schematic plots, the image quality (e.g. inverse of the angular resolution; higher is better) at which optotypes are small enough to be useful for vision testing lies above horizontal line 602, which represents typical 20/20 vision.
  • Figure 6A shows the plot for the light field display having a static LFSL only, where we see the characteristic two peaks corresponding to the smallest resolvable point, one of which was plotted in Figure 5 (here inverted and shown as a peak instead of a basin), and where each region above the line may cover a few diopters of dioptric power, according to some embodiments. While the dioptric range may, in some embodiments, be more limited than needed when relying only on the light field display with a static LFSL, it is possible to shift this interval by adding one or more refractive optical components. This is shown in Figure 6B where the region above line 602 is shifted to the left (negative diopters) by adding a single lens in the optical path.
  • FIG. 4B One example, according to one embodiment, of such a light field refractor 102 is schematically illustrated in Figure 4B, wherein the light field display 104 (herein shown comprising LFSL 106 and digital pixel display 108) is combined with a multiplicity of refractive components 112 (herein illustrated as a reel of lenses as an example only).
  • with the refractive component used in combination with light field display 104 having a static LFSL, a larger dioptric range may be covered. This may also provide means to reduce the dimensions of device 102 as mentioned above, making it more portable, so that all its internal components may be encompassed into a shell, housing or casing 402.
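  • For illustration only, a minimal sketch of how a requested total correction might be split between a refractive component and the light field display's own usable range; the reel powers and the ±2.5 D light field range are placeholder values, not figures from the disclosure:

```python
LENS_REEL_DIOPTERS = [-10.0, -5.0, 0.0, +5.0, +10.0]   # assumed reel of lenses
LIGHT_FIELD_RANGE = (-2.5, +2.5)                        # assumed usable LF range

def split_correction(total_diopters: float):
    """Return (lens_power, residual_light_field_power) for a requested total
    correction, or raise if the combination cannot cover it."""
    for lens in LENS_REEL_DIOPTERS:
        residual = total_diopters - lens
        if LIGHT_FIELD_RANGE[0] <= residual <= LIGHT_FIELD_RANGE[1]:
            return lens, residual
    raise ValueError("requested correction outside the combined range")

print(split_correction(-6.75))   # -> (-5.0, -1.75)
```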
  • light field refractor 102 may thus comprise a durable ABS housing that may be shock and harsh-environment resistant.
  • light field refractor 102 may also comprise a telescopic feel for fixed or portable usage; optional mounting brackets, and/or a carrying case (not shown).
  • all components may be internally protected and sealed from the elements.
  • casing 402 may further comprise a head-rest or similar (not shown) to keep the user’s head still and substantially in the same location, thus, in such examples, foregoing the general utility of a pupil tracker or similar techniques by substantially fixing a pupil location relative to this headrest.
  • a mirror, or any device which may increase the optical path, may also be used; this is shown in Figure 4C, where the length of the device was reduced by adding a mirror 404.
  • refractive components 112 may include, without limitation, one or more lenses, sometimes arranged in order of increasing dioptric power in one or more reels of lenses similar to what is typically found in traditional refractors/phoropters; an electrically controlled fluid lens; an active Fresnel lens; and/or Spatial Light Modulators (SLM).
  • additional motors and/or actuators may be used to operate refractive components 112.
  • the motors/actuators may be communicatively linked to processing unit 114 and power source 120, and operate seamlessly with light field display 102 to provide the required dioptric power.
  • Figures 7A and 7B show a perspective view of an exemplary light field phoropter 102 similar to the one schematically shown in Figure 3B, but wherein the refractive component 112 is an electrically tunable liquid lens.
  • the electrically tunable lens may have a range of ±13 diopters.
  • one or more refractive components 112 may be removed from the system 102, as the dioptric range enabled by the refractive components 112 may be provided through mechanical displacement of a LFSL, as discussed below.
  • the dynamic range of the light field display solution can be significantly increased by dynamically adjusting a distance of the LFSL, such as a microlens array, relative to the pixelated display such that, in combination with a correspondingly adaptive ray tracing process, a greater perceptive adjustment range can be achieved (e.g. a greater perceived dioptric correction range can be implemented).
  • a 1000 dpi display is used with a MLA having a 65 mm focal distance and 1000 μm pitch with the user’s eye located at a distance of about 26 cm.
  • a similar embodiment uses the same MLA and user distance with a 3000 dpi display.
  • Other displays having resolutions including 750 dpi, 1000 dpi, 1500 dpi and 3000 dpi may also be used, as may be MLAs with a focal distance and pitch of 65 mm and 1000 μm, 43 mm and 525 μm, 65 mm and 590 μm, 60 mm and 425 μm, 30 mm and 220 μm, and 60 mm and 425 μm, respectively, and user distances of 26 cm, 45 cm or 65 cm.
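  • For illustration only, a simplified thin-lens sketch (an assumption, not the disclosed ray-tracing model) of how translating the LFSL relative to the pixel plane changes where that plane is imaged, and hence the perceived depth of rendered content:

```python
import math

def pixel_plane_image_mm(focal_mm: float, gap_mm: float) -> float:
    """Thin-lens image distance of the pixel plane through a lenslet of focal
    length f when the LFSL-to-display gap is d; negative values are virtual
    images appearing behind the display, math.inf when d equals f."""
    if math.isclose(gap_mm, focal_mm):
        return math.inf
    return 1.0 / (1.0 / focal_mm - 1.0 / gap_mm)

# Example: 65 mm focal-length lenslets, gap swept up toward the focal length;
# the virtual image recedes further behind the display as the gap approaches f.
for gap in (50.0, 60.0, 64.0):
    print(f"gap {gap} mm -> image at {pixel_plane_image_mm(65.0, gap):.0f} mm")
```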
  • eye-tracker 110 may further comprise a digital camera, in which case it may be used to further acquire images of the user’s eye to provide further diagnostics, such as pupillary reflexes and responses during testing for example.
  • one or more additional cameras 118 may be used to acquire these images instead.
  • light field refractor 102 may comprise built-in stereoscopic tracking cameras.
  • control interface 126 may comprise a dedicated handheld controller-like device 128.
  • This controller 128 may be connected via a cable or wirelessly, and may be used by the patient directly and/or by an operator like an eye professional.
  • both the patient and operator may have their own dedicated controller 128.
  • the controller may comprise digital buttons, analog thumbstick, dials, touch screens, and/or triggers.
  • control interface 126 may comprise a digital screen or touch screen, either on refractor 102 itself or part of an external module (not shown).
  • control interface 126 may let one or more external remote devices (i.e. computer, laptop, tablet, smartphone, remote, etc.) control light field refractor 102 via network interface 122.
  • remote digital device 130 may be connected to light field refractor 102 via a cable (e.g. USB cable, etc.) or wirelessly (e.g. via Wi-Fi, Bluetooth or similar) and interface with light field refractor 102 via a dedicated application, software or website (not shown).
  • a dedicated application may comprise a graphical user interface (GUI), and may also be communicatively linked to remote database 124.
  • the user or patient may give feedback verbally and the operator may control the vision test as a function of that verbal feedback.
  • refractor 102 may comprise a microphone (not shown) to record the patient’s verbal communications, either to communicate them to a remote operator via network interface 122 or to directly interact with the device (e.g. via speech recognition or similar).
  • processing unit 114 may be communicatively connected to data storage 116, eye tracker 110, light field display 104 and refractive components 112. Processing unit 114 may be responsible for rendering one or more images or optotypes via light field display 104 and, in some embodiments, jointly control refractive components 112 or a LFSL 106 position to achieve a required total change in dioptric power. It may also be operable to send and receive data to internal memory 116 or to/from remote database 124 via network interface 122.
  • diagnostic data may be automatically transmitted/communicated to remote database 124 or remote digital device 130 via network interface 122 through the use of a wired or wireless network connection.
  • a wired or wireless network connection may be considered herein, such as, but not limited to, Wi-Fi, Bluetooth, NFC, Cellular, 2G, 3G, 4G, 5G or similar.
  • the connection may be made via a connector cable (e.g. USB including microUSB, USB-C, Lightning connector, etc.).
  • remote digital device 130 may be located in a different room, building or city.
  • two light field refractors 102 may be combined side-by-side to independently measure the visual acuity of both left and right eye at the same time.
  • An example is shown in Figure 7C, where two units corresponding to the embodiment of Figures 7A or 7B (used as an example only) are placed side-by-side or fused into a single device.
  • a dedicated application, software or website may provide integration with third party patient data software.
  • software required to operate and installed on refractor 102 may be updated on-the-fly via a network connection and/or be integrated with the patient’s smartphone app for updates and reminders.
  • the dedicated application, software or website may further provide a remote, real-time collaboration platform between an eye professional and user/patient, and/or between different eye professionals. This may include interaction between different participants via video chat, audio chat, text messages, etc.
  • light field refractor 102 may be self-operated or operated by an optometrist, ophthalmologist or other certified eye-care professional.
  • a user/patient may use refractor 102 in the comfort of his/her own home, in a store or a remote location.
  • the light field system 102 may comprise various forms of systems for performing vision-based tests. For instance, while some embodiments described above relate to the light field system 102 comprising a refractor or phoropter for assessing a user's visual acuity, various other embodiments relate to a light field system 102 operable to perform, for instance, a cognitive impairment assessment. In some embodiments, the light field system 102 may display, for instance, moving content to assess a user's ability to track motion, or perform any number of other cognitive impairment evaluations known in the art, such as those related to saccadic movement and/or fixation. Naturally, such embodiments may further relate to light field display systems that are further operable to track a user's gaze and/or eye movement and store data related thereto for further processing and assessment.
  • a dynamic subjective vision testing method using vision testing system 100, generally referred to using the numeral 800, will now be described.
  • method 800 seeks to diagnose a patient’s reduced visual acuity and produce therefrom, in some embodiments, an eye prescription or similar.
  • eye prescription information may include, for each eye, one or more of: distant spherical, cylindrical and/or axis values, and/or a near (spherical) addition value.
  • the eye prescription information may also include the date of the eye exam and the name of the eye professional that performed the eye exam.
  • the eye prescription information may also comprise a set of vision correction parameter(s) for operating any vision correction light field displays using the systems and methods described below.
  • the eye prescription may be tied to a patient profile or similar, which may contain additional patient information such as a name, address or similar. The patient profile may also contain additional medical information about the user. All information or data (i.e. set of vision correction parameter(s), user profile data, etc.) may be kept on external database 124.
  • the user’s current vision correction parameter(s) may be actively stored and accessed from external database 124 operated within the context of a server- based vision correction subscription system or the like, and/or unlocked for local access via the client application post user authentication with the server-based system.
  • Refractor 102 being, in some embodiments, portable, a large range of environments may be chosen to deliver the vision test (home, eye practitioner’s office, etc.).
  • the patient’s eye may be placed at the required location. This may be done by placing his/her head on a headrest or by placing the objective (i.e. eyepiece) on the eye to be diagnosed.
  • the vision test may be self-administered or partially self-administered by the patient.
  • the operator (e.g. an eye professional or other) may have control over the type of test being delivered, and/or be the person who generates or helps generate therefrom an eye prescription, while the patient may enter inputs dynamically during the test (e.g. by choosing or selecting an optotype, etc.).
  • the patient's pupil location is acquired.
  • such a pupil location may be acquired via eye tracker 110, either once, at intervals, or continuously.
  • the location may be derived from the device or system's dimensions.
  • the use of a head-rest and/or an eye-piece or similar provides an indirect means of deriving the pupil location.
  • refractor 102 may be self-calibrating and not require any additional external configuration or manipulation from the patient or the practitioner before being operable to start a vision test.
  • one or more optotypes is/are displayed to the patient, at one or more dioptric powers (e.g. in sequence, side-by-side, or in a grid pattern/layout).
  • the use of light field display 104 offers multiple possibilities regarding how the images/optotypes are presented, and at which dioptric power each may be rendered.
  • the optotypes may be presented sequentially at different dioptric powers, via one or more dioptric power increments.
  • the patient and/or operator may control the speed and size of the dioptric power increments.
  • optotypes may also be presented, at least in part, simultaneously on the same image but rendered at a different dioptric power.
  • Figure 9 shows an example of how different optotypes may be displayed to the patient but rendered with different dioptric powers simultaneously. These may be arranged in columns or in a table or similar. In Figure 9, we see two columns of three optotypes (K, S, V), varying in size, as they are perceived by a patient, each column being rendered at a different degree of refractive correction (e.g. dioptric power). In this specific example, the optotypes on the right are being perceived as blurrier than the optotypes on the left.
  • method 800 may be configured to implement dynamic testing functions that dynamically adjust one or more displayed optotype’s dioptric power in real-time in response to a designated input, herein shown by the arrow going back from step 808 to step 804 in the case where at step 808, the user or patient communicates that the perceived optotypes are still blurry or similar.
  • the patient may indicate when the optotypes shown are clearer.
  • the patient may control the sequence of optotypes shown (going back and forth as needed in dioptric power), and the speed and increment at which these are presented, until he/she identifies the clearest optotype.
  • the patient may indicate which optotype or which group of optotypes is the clearest by moving an indicator icon or similar within the displayed image.
  • the optotypes may be presented via a video feed or similar.
  • step 804 may in this case comprise first displaying larger increments of dioptric power by changing lenses as needed, and when the clearest or least blurry optotypes are identified, fine-tuning with continuous or smaller increments in dioptric power using the light field display.
  • a LFSL position may be dynamically adjusted to provide larger increments in dioptric power, or again to migrate between different operative dioptric ranges, resulting in a smoother transition than would otherwise be observed when changing lenses in a reel.
  • the LFSL may be displaced in small increments or continuously for fine-tuning a displayed optotype.
  • the refractive components 112 may act on all optotypes at the same time, and the change in dioptric power between them may be controlled by the light field display 104, for example in a static position to accommodate variations within a given dioptric range (e.g. applicable to all distinctly rendered optotypes), or across different LFSL positions dynamically selected to accommodate different operative ranges.
  • the change in dioptric power may be continuous.
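  • As a rough, non-limiting illustration of the dynamic testing flow described above (steps 804 to 808 of method 800), the following Python sketch performs a coarse sweep of dioptric powers followed by a fine sweep around the best response. The callables render and rate are hypothetical placeholders standing in for the light field rendering and patient/operator feedback steps; they are not an API of the disclosed system.

```python
def sweep(powers, render, rate):
    """Render optotypes at each candidate dioptric power and return the power rated clearest."""
    best_power, best_rating = powers[0], float("-inf")
    for power in powers:
        render(power)          # step 804: render optotypes at this dioptric power
        rating = rate()        # steps 806/808: clarity feedback from the patient or operator
        if rating > best_rating:
            best_power, best_rating = power, rating
    return best_power

def run_subjective_test(render, rate, low=-10.0, high=10.0, coarse=1.0, fine=0.25):
    """Two-stage coarse/fine search for the clearest dioptric power (sketch of method 800)."""
    n_coarse = int(round((high - low) / coarse)) + 1
    best_coarse = sweep([low + i * coarse for i in range(n_coarse)], render, rate)
    n_fine = int(round(2 * coarse / fine)) + 1
    return sweep([best_coarse - coarse + i * fine for i in range(n_fine)], render, rate)
```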
  • eye images may be recorded during steps 802 to 806 and analyzed to provide further diagnostics.
  • eye images may be compared to a bank or database of proprietary eye exam images and analyzed, for example via an artificial intelligence (AI) or machine learning (ML) system, or similar. This analysis may be done by refractor 102 locally or via a remote server or database 124.
  • an eye prescription or vision correction parameter(s) may be derived from the total dioptric power used to display the best perceived optotypes.
  • the patient, an optometrist or other eye-care professional may be able to transfer the patient's eye prescription directly and securely to his/her user profile stored on said server or database 124. This may be done via a secure website, for example, so that the new prescription information is automatically uploaded to the secure user profile on remote database 124.
  • the eye prescription may be sent remotely to a lens specialist or similar to have prescription glasses prepared.
  • vision testing system 100 may also or alternatively be used to simulate compensation for higher-order aberrations.
  • the light field rendering methods described above may be used to compensate for higher order aberrations (HOA), and thus be used to validate externally measured or tested HOA, in that a measured, estimated or predicted HOA can be dynamically compensated for using the system described herein and thus subjectively visually validated by the viewer in confirming whether the applied HOA correction satisfactorily addresses otherwise experienced vision deficiencies.
  • different light field image processing techniques may be considered, such as those introduced above and taught by Pamplona and/or Huang, for example, which may also influence other light field parameters to achieve appropriate image correction, virtual image resolution, brightness and the like.
  • various systems and methods described herein may provide an improvement over conventional light field displays through the provision or actuation of a translatable LFSL to, for instance, expand the dioptric range of perception adjustments accessible to a light field display system.
  • various display devices or systems (e.g. a smartphone, tablet, television screen, embedded entertainment system, or the like) may benefit from a translatable LFSL to, for instance, increase a range of visual acuity corrections that may be readily accommodated with a similar degree of accommodation and rendering quality, or again, to expand a vision-based testing or accommodation range in a device so to effectively render optotypes or testing images across a greater perceptive adjustment, with, for instance, a given LFSL geometry and/or display resolution.
  • a vision testing system may comprise a translatable LFSL so to enable removal of one or more optical components (e.g. an electrically tunable lens, or one or more reels of lenses, or the like) from the light field system, while providing sufficient resolution to perform a vision-based assessment, non-limiting examples of which may include an eye exam or a cognitive impairment assessment.
  • a translatable LFSL may increase the widths of the peaks in Figures 6A to 6D, increasing the range of diopters that may be displayed with sufficient quality to perform an eye exam or cognitive assessment without requiring changing of optical components.
  • a translatable LFSL may further, in accordance with various embodiments, relate to reducing the size, weight, form factor, complexity, or the like, of a vision-based testing system. This may, for instance, reduce device cost while increasing the portability of a device for in-field assessments.
  • Various embodiments described hereafter relate to vision testing systems and methods that may employ a translatable LFSL.
  • a dynamic or translatable LFSL may be applied in various other light field display systems and methods. Further, description will be provided with reference to mathematical equations and relationships that may be evaluated or considered in the provision of a light field system used to generate corrected content.
  • the performance of a conventional eye exam may comprise placing a corrective lens in front of the eye being tested to correct the focal length of the eye having a depth D_E, wherein the object may have a minimum feature separation of approximately 1 arcminute.
  • the required retinal spot spacing is given by Equation 1.
  • FIG. 10 schematically illustrates such an exemplary vision testing system 1000 comprising a pixelated display screen 1010 and MLA 1012, in accordance with at least one embodiment.
  • the nodal rays 1014 cross the centre of a pupil 1016, yielding a retinal nodal spot spacing of:
  • the spread around the nodal ray spot points is given in consideration of marginal rays 1018 reaching the edges of the pupil 1016. This corresponds to a width and number of pixels on the display 1010 equal to:
  • the beam spot spacing on the retina for one nodal band is given by:
  • the spread of each nodal band, in accordance with some embodiments, may be equal to or greater than the nodal spacing (i.e. assuming uniform retinal spot spacing in a single nodal band of a given width).
  • the central spot of each band may be shifted, in accordance with some embodiments, according to:
  • the position of the MLA 1012 may be set, in accordance with some embodiments, to prevent overlap between different nodal bands on the display 1010.
  • the uniform retinal spacing of the overlapped nodal beams 1014 may therefore be given by the following (Equation 3), where the approximation denotes the possibility of a non-integer overlap factor O, in accordance with some embodiments:
  • the spread of light from the display pixels 1010 may, in accordance with some embodiments, be decreased as much as possible.
  • the nodal ray 1014 from an extremum pixel to an extremum of the pupil 1016 on the other side of the optical axis may give one portion of the angular spread. Another portion of the angular spread may arise from the spread to fill the MLA 1012. Accordingly,
  • Figure 11 schematically illustrates one lenslet 1110 where the pixel 1112 position is given by yp.
  • y_L is the lenslet central coordinate
  • W_L is the width of the lenslet 1110
  • the eye 1114 is centered at the optical axis 1116 with the eye lens O centered at the origin.
  • the ray 1118 propagation distance in this example is denoted as positive from left to right, while angles are denoted as positive in counter-clockwise direction, and y is positive above the optical axis 1116.
  • a ray 1118 that is emitted towards the lenslet 1110 is characterised by the following angle, where y_RL is the coordinate at which the ray 1118 hits the lenslet 1110:
  • the ray angle exiting the lens may be given by, in accordance with some embodiments:
  • the position at the eye lens plane y_re and the angle θ_re after the eye lens may be given by, respectively (Equation 4):
  • the spot size from one lenslet 1110 may be found by tracing marginal rays M_r (e.g. marginal rays 1018 in Figure 10):
  • the pupil size W_ppl may be accounted for, in accordance with some embodiments, as Equation 5, where the following identity was used:
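  • For illustration only, the paraxial propagation described with reference to Figure 11 (a ray emitted from a pixel at y_p, refracted by a lenslet of focal length f_L centred at y_L, refracted again by the eye lens of focal length F_E, and propagated to the retina at depth D_E) may be sketched in Python as below, using the sign conventions stated above. This is a generic thin-lens, small-angle ray transfer written from those definitions; it does not reproduce the exact forms of Equations 4 and 5, and the example geometry values are purely illustrative.

```python
def trace_ray(y_p, y_RL, y_L, D_PL, f_L, D_LE, F_E, D_E):
    """Paraxial trace of one ray from a display pixel to the retina (illustrative sketch).

    y_p  : pixel position on the display        y_RL : coordinate where the ray hits the lenslet
    y_L  : lenslet central coordinate           f_L  : lenslet focal length
    D_PL : display-to-lenslet distance          D_LE : lenslet-to-eye-lens distance
    F_E  : eye lens focal length                D_E  : eye depth (lens to retina)
    Angles positive counter-clockwise, y positive above the optical axis.
    """
    theta = (y_RL - y_p) / D_PL              # angle of the ray from the pixel to the lenslet
    theta -= (y_RL - y_L) / f_L              # thin-lens refraction at the lenslet
    y_re = y_RL + theta * D_LE               # propagate to the eye lens plane
    theta_re = theta - y_re / F_E            # thin-lens refraction at the eye lens (centred on the axis)
    return y_re + theta_re * D_E             # propagate to the retina

def retinal_spot_width(y_p, y_L, W_L, **geom):
    """Spot width from one lenslet, by tracing the two marginal rays to the lenslet edges."""
    top = trace_ray(y_p, y_L + W_L / 2.0, y_L, **geom)
    bot = trace_ray(y_p, y_L - W_L / 2.0, y_L, **geom)
    return abs(top - bot)

# e.g. geom = dict(D_PL=0.065, f_L=0.053, D_LE=0.295, F_E=0.017, D_E=0.022)  # metres, illustrative only
```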
  • various light field systems and methods may comprise the treatment of rays as a rectangular beam.
  • a beam size may approach a minimum theoretical size of zero width.
  • a diffraction model may be employed.
  • the divergence of the beam may be used to obtain the actual size of the beam. Accordingly, assuming a beam with a rectangular width of W_Rect, the diffracted light is therefore given by:
  • the first zero crossing width of the diffracted beam Sinc(W_Rect · y / (λz)) may be considered, which occurs at:
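  • For a rough sense of scale only, and assuming the normalised sinc convention (sinc(x) = sin(πx)/(πx), first zero at x = 1), the first-zero half-width referenced above would be λz/W_Rect; the exact expression intended by the elided equation is not reproduced here. A trivial sketch:

```python
def diffraction_first_zero(wavelength, z, w_rect):
    """First zero of Sinc(w_rect * y / (wavelength * z)), normalised sinc convention (sketch only)."""
    return wavelength * z / w_rect

# e.g. 550 nm light, 320 mm propagation, 0.5 mm rectangular beam width -> ~0.35 mm
print(diffraction_first_zero(550e-9, 0.320, 0.5e-3))
```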
  • a beam may be converging or diverging and/or collimated. For instance, considering adjacent lenslets (i.e. D_PL being less than or equal to f_L in diverging or collimated beams), marginal rays at adjacent edges of the lenslets may be prevented from overlapping at the pupil.
  • While the above-described conditions, in accordance with some embodiments, may not be strong (if, for instance, part of the interference beam hits the pupil, it may be difficult to notice, particularly at reduced intensities, i.e. D_PL not equal to f_L, due to the inverse square law), the conditions arising from Equation 2 may be of more significant importance.
  • the angular pitch on a pupil for a corrected eye lens may be obtained by the following, which may be more similar to a conventional eye exam:
  • the separation between the rays from adjacent pixels on the eye may be given by:
  • the angular pitch of the rays in a single nodal band (e.g. that in Figure 10) may be found as:
  • a light field angular view may be given by the distance from the display to the eye, and the MLA position. Assuming there are sufficient lenslets to support the display size of a light field system, the FoV and spread on the retina may therefore, in accordance with some embodiments, be given by, respectively:
  • the uniform spacing of retinal beam spots for overlapping nodal bands is plotted in Figure 13 as a function of D_LE and F_E, normalised to D_PL/D_LE using Equation 3 above.
  • the spacing is dependent on the eye focal length; choosing D_LE > 150 mm allows for a sufficiently small spacing, and accordingly D_LE/D_PL may be greater than 2.6.
  • Figure 14 shows an exemplary plot of the different regions of D LE /D PL , where arrows indicate the direction that satisfies the condition of Equation 7 above.
  • the (in some embodiments, stronger) condition of Equation 2 may require that D_LE/D_PL > 1.5.
  • the regions 1410 and 1412 of Figure 14, and/or those in which D_PL is less than or equal to f_L, may be preferred for embodiments of a light field system or method.
  • region 1414 of Figure 14 may comprise one that may not be satisfied, as beyond this range, a system form factor may rapidly escalate to one that is unreasonable or undesirable.
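  • The two design ratios just stated can be screened numerically; the following sketch simply encodes the thresholds quoted above (D_LE/D_PL > 2.6 from the spacing analysis, and D_LE/D_PL > 1.5 attributed to Equation 2). The function name and example values are illustrative only.

```python
def check_geometry(D_LE, D_PL, spacing_ratio=2.6, eq2_ratio=1.5):
    """Screen a candidate lenslet-to-eye / display-to-lenslet geometry against the quoted ratios."""
    ratio = D_LE / D_PL
    return {"D_LE/D_PL": ratio,
            "meets_spacing_condition": ratio > spacing_ratio,   # condition from the Figure 13/14 discussion
            "meets_equation_2": ratio > eq2_ratio}

# e.g. D_LE = 295 mm and D_PL = 65 mm give a ratio of ~4.5, satisfying both conditions
print(check_geometry(295.0, 65.0))
```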
  • Figures 15A to 15D are illustrative plots for a system having a display-to-eye distance D_PE of 320 mm.
  • the calculated minimum beam size is a function of the MLA focal length and display-to-MLA distance.
  • Figures 15A and 15B are illustrative plots showing calculations performed with the condition of avoiding single pixel beam interference from different lenslets.
  • Figures 15C and 15D show the calculation without this condition applied.
  • Such calculations may be used to evaluate, for instance, potential device specifications, rather than evaluating actual retinal beam size values.
  • the actual spot size in an eye that is complex in nature may differ from that determined when using a simple lens model.
  • the actual retinal spot size applied relative to the retinal spot spacing may be dependent on an eye functionality and/or empirical data, and may be calibrated experimentally.
  • increasing the lenslet size may contribute to minimising the retinal spot. This is shown, by way of example only, in the non-limiting illustrative plots of Figures 16A and 16B.
  • Figures 17A to 17D show exemplary plots of the FoV and retina spread for a display-to-eye distance of 320 mm.
  • larger lenslets result in smaller FoV and retinal spread.
  • retinal spread may be reduced for positive refractive errors, an effect that may be considered when calculating a light field to be displayed, in accordance with various embodiments.
  • Table 1 below shows exemplary dioptric error corrections obtained in practice for various D_PL distances in millimetres.
  • the light field display system comprised an MLA with a lenslet focal length of 106 mm, with a 2 mm pitch and lenslet width, a 31.7 μm pixel size display, and an eye-to-display distance of 320 mm.
  • a dioptric error range may be enhanced over that of conventional systems through, for instance, a dynamic displacement of a LFSL relative to a pixelated display.
  • While conventional systems may maintain a fixed MLA at a distance from the display corresponding to the focal length of the lenslets, placing the LFSL at a distance other than that corresponding to the lenslet focal length from the display, or dynamically displacing it relative thereto, may increase the dioptric range of perception adjustments accessible to the light field display system.
  • both the pixelated display and the LFSL may remain in place while still achieving a similar effect by otherwise adjusting an optical path length between them. Indeed, the selective introduction of one or more liquid cells or transparent (e.g. glass) blocks within this optical path may provide a similar effect.
  • Other means of dynamically adjusting an optical path length between the pixelated display and LFSL may also be considered without limitation, and without departing from the general scope and nature of the present disclosure.
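  • By way of a generic optics illustration (not a specific equation of this disclosure), inserting a plane-parallel transparent block of thickness t and refractive index n between the display and the LFSL shifts the apparent position of the display by the standard amount t(1 - 1/n), so the effective, air-equivalent separation can be adjusted without physically moving either component:

```python
def effective_separation(physical_separation_mm, block_thickness_mm=0.0, block_index=1.0):
    """Air-equivalent display-to-LFSL separation with a transparent block in the optical path.

    Uses the standard plane-parallel-plate shift t * (1 - 1/n); illustrative sketch only.
    """
    shift = block_thickness_mm * (1.0 - 1.0 / block_index)  # apparent displacement of the display
    return physical_separation_mm - shift

# e.g. a 10 mm glass block (n ~ 1.5) makes a 65 mm gap behave like ~61.7 mm of air:
print(effective_separation(65.0, block_thickness_mm=10.0, block_index=1.5))
```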
  • such a LFSL, i.e. one that may be disposed at a selectively variable optical path length from a display screen, may be referred to herein as a dynamic LFSL.
  • a dynamic LFSL may comprise various light field shaping layers known in the art, such as an MLA, a parallax barrier, an array of apertures, such as a pinhole array, or the like, and may be fabricated by various means known in the art.
  • a dynamic LFSL may be coupled with a display screen via, for instance, one or more actuators that may move the LFSL towards or away from (i.e. perpendicularly to) a digital display, and thus control, for instance, a range of dioptric corrections that may be generated by the digital display.
  • an actuator may otherwise dynamically displace a pixelated display relative to a fixed LFSL to achieve a similar effect in dynamically adjusting an optical path length between them.
  • Figure 18A shows a schematic of a display system 1800 (not to scale) comprising a digital display 1810 having an array of pixels 1812.
  • a LFSL 1830 represented in Figure 18A as a parallax barrier 1830 having a barrier width (pitch) 1860, is coupled to the display 1810 via actuators 1820 and 1822, and is disposed between the display 1810 and two viewing locations 1840 and 1842, represented by white and grey eyes, respectively.
  • view zones 1840 and 1842 may correspond to, for instance, two different eyes of a user, or eyes of two or more different users. While various light field shaping systems may, in accordance with various embodiments, address more than one eye, for instance to provide two different dioptric corrections to different eyes of the user, this embodiment will henceforth be described with reference to the first eye 1840, for simplicity. Further, it will be appreciated that while the dynamic LFSL 1830 in Figure 18A is shown in one dimension, various other embodiments relate to the LFSL 1830 comprising a two-dimensional configuration, such as a lenslet array or a static or dynamic 2D parallax barrier (e.g. an LCD screen that may be activated to provide an arbitrary configuration of opaque and transparent pixels).
  • a MLA 1830 may be dynamically adjusted or placed at various distances from the display 1810 to provide a designated light field to, for instance, enable a designated range of dioptric corrections via activation of appropriate pixels 1812.
  • Figure 18A shows a first configuration in which pixels of the display 1810 are activated to provide, for instance, a perception adjustment corresponding to a designated dioptric error correction (e.g. a dioptric error of -4.25), for a viewer at the first viewing location 1840.
  • the user is at a distance 1850 from the dynamic LFSL 1830, while the LFSL 1830 is at a distance 1852 from the screen 1810, where the distance 1852 corresponds to, for instance, the focal length of the microlens LFSL 1830, as may be employed in conventional light field systems.
  • the user at a distance corresponding to the sum of distances 1850 and 1852 (e.g. the distance between a display screen and the user's eye in a refractor system used for displaying various optotypes) is unable to perceive, at a sufficiently high rendering quality, the dioptric adjustment (e.g. -4.25 diopters) provided by the activated pixels, for instance due to geometrical constraints and/or limitations in screen resolution.
  • the configurational parameters may lead to an undesirable viewing experience, and/or a lack of perception of the visual correction requiring a particular configuration of activated pixels, as a result of the considerations noted above.
  • while a perceptive adjustment may be rendered, it may not exhibit the level of resolution and optical discernment required to conduct an effective vision test.
  • actuators 1820 and 1822 may translate the dynamic LFSL 1830 towards or away from the display 1810, i.e. at a distance within or beyond the MLA focal length for a lenslet LFSL, to dynamically improve a perception adjustment rendering quality consistent with testing requirements and designated or calibrated in accordance with the corrective range of interest.
  • actuators 1820 and 1822 have reconfigured the display system 1800 such that the LFSL 1830 has been dynamically shifted towards the display 1810 by a distance 1855, resulting in a new distance 1851 between the LFSL 1830 and viewing location 1840, and a new separation 1853 between the display 1810 and LFSL 1830.
  • perception adjustment quality can be improved to a desired, designated or optimized level thereby allowing for a given optical effect or test to be adequately or optimally conducted.
  • various dynamic LFSLs (such as a dynamic MLA) may function within the scope of this disclosure in accordance with various mechanisms to provide, for instance, a larger or designated range of image perception adjustments.
  • actuators may be employed to dynamically adjust a LFSL with, for instance, high precision, while having a robustness to reliably adjust a LFSL or system thereof (e.g. a plurality of LFSLs, a LFSL comprising a plurality of PBs, MLAs, or the like).
  • embodiments comprising heavier LFSL substrates (e.g. Gorilla glass or like tempered glass) may employ, in accordance with some embodiments, particularly durable and/or robust actuators, examples of which may include, but are not limited to, electronically controlled linear actuators, servo and/or stepper motors, rod actuators such as the PQ12, L12, L16, or P16 Series from Actuonix® Motion Devices Inc., or the like.
  • an actuator or actuator step size may be selected based on a screen or lenslet size, whereby larger elements may, in accordance with various embodiments, require only larger steps to introduce distinguishable changes in user perception of various pixel configurations.
  • various embodiments relate to actuators that may communicate with a processor/controller via a driver board, or be directly integrated into a processing unit for plug-and-play operation.
  • Figures 18A and 18B show a dynamic adjustment of a LFSL layer in a direction perpendicular to the screen, i.e. perpendicular adjustments changing the separation 1853 between the display 1810 and LFSL 1830.
  • the separation 1853 may be adjusted to configure a system 1800 for a wide range of preferred viewing positions in addition to, or alternatively to, providing a range of or designated perception adjustments.
  • as actuators may finely adjust and/or displace the LFSL 1830 with a high degree of precision (e.g. micron-precision), various embodiments of a dynamic LFSL as herein described relate to one that may be translated perpendicularly to a digital display to enhance user experience.
  • a translatable LFSL such as that of Figures 18A and 18B, may be employed in a vision-based testing platform.
  • a refractor or phoropter for assessing a user’s visual acuity may comprise a translatable LFSL operable to be placed at, for instance, a designated distance from a display screen, so to provide a system geometry that enables display of a preferred perception adjustment (e.g. dioptric correction).
  • a conventional light field refractor having a LFSL (e.g. MLA) disposed at a fixed distance (e.g. a focal length) away from a pixelated display screen may have inherent limitations arising from, for instance, screen resolution or size. Such systems may therefore only enable display of a particular range of optotypes without the use of additional optical components.
  • a light field refractor having a translatable LFSL may allow for adjustment of the system geometry, thereby enabling an adjusted range of optotypes that may be displayed, without the use of one or more of the optical elements required by the conventional system.
  • a light field display system having a translatable LFSL may be employed to perform cognitive impairment tests, as described above.
  • the use of a light field display in performing such assessments may allow for accommodation of a reduced visual acuity of a user undergoing a test.
  • a dioptric correction may be applied to displayed content to provide the user with an improved perception thereof, and therefore improve the quality of evaluation.
  • such a system may employ a translatable or dynamic LFSL to quickly and easily apply the appropriate dioptric correction for the user.
  • such a light field system may be portable (e.g.
  • a LFSL as herein disclosed may further or alternatively be dynamically adjusted in more than one direction.
  • the LFSL may further be dynamically adjustable in up to three dimensions.
  • actuators such as those described above, may be coupled to displace any one LFSL, or system comprising a plurality of light field shaping components, in one or more directions.
  • Yet further embodiments may comprise one or more LFSLs that dynamically rotate in a plane of the display to, for instance, change an orientation of light field shaping elements relative to a pixel or subpixel configuration.
  • a LFSL as herein described may further allow for dynamic control of a LFSL layer pitch, or barrier width, in embodiments comprising a parallax barrier.
  • a light field shaping system or device may comprise a plurality of independently addressable parallax barriers.
  • Figure 19 is a diagram illustrating an exemplary process, generally referred to using the numeral 1900, for providing a designated user perception adjustment using, at least in part, a dynamic LFSL.
  • the process 1900 comprises input of parameters 1902 required by a light field display system, such as the type of LFSL employed (e.g. MLA, parallax barrier, or the like), and any required parameters related thereto or related to the display system.
  • the scope and nature of parameters required as input for displaying perception- adjusted content will be understood by the skilled artisan.
  • the pupil location may be determined, for instance using eye tracking processes, in process step 1904.
  • any designated perception adjustments 1904 (e.g. a dioptric correction) or range thereof may further be received by the display system or a processor associated therewith.
  • the process 1900 may comprise any one or more ray tracing processes 1906 to compute display content according to any or all of the input parameters 1902, 1904, and 1906. Further, any ray tracing processes may further consider a LFSL position, as described above, which may, in accordance with various embodiments, comprise a range of potential positions that may be calculated 1908 to be optimal or preferred in view of, for instance, the range of dioptric corrections 1904 required. Accordingly, ray tracing 1906 and LFSL position 1908 calculations may be solved according to the Equations described above, or iteratively calculated based on, for instance, system constraints and/or input parameters. Upon determining a preferred LFSL position (e.g. distance from the display screen), the system may then position 1910 the LFSL, for instance via one or more automated actuators, at the preferred position for display 1912 of the perception adjusted content.
  • any or all process steps may be performed at the initiation of a viewing session, or may be periodically or continuously updated throughout viewing.
  • viewing parameters 1904 may be updated as eye tracking processes determine a new viewing location, which may initiate calculation 1908 of and displacement 1910 to a new corresponding LFSL position.
  • a LFSL may be displaced 1910 during display of content 1912.
  • adjustment 1910 of the LFSL may be performed in an iterative or feedback- based process, whereby the LFSL may be adjusted, for instance in real-time, to improve or optimise a display-to-LFSL distance during operation.
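  • A minimal sketch of process 1900 as described above follows: parameters and the viewing location are gathered (1902/1904), a preferred LFSL position is computed (1908), an actuator translates the LFSL (1910), and content is ray traced and displayed (1906/1912), optionally within a feedback loop. All callables are hypothetical placeholders for the steps described, not an actual interface of the system.

```python
def run_process_1900(system_params, desired_adjustments, track_pupil,
                     compute_lfsl_position, move_actuator_to, ray_trace_frame, show,
                     frames=1000, reposition_tolerance_mm=0.05):
    """Illustrative loop over steps 1902-1912, with periodic LFSL repositioning."""
    pupil = track_pupil()                                           # 1904: pupil / viewing location
    lfsl_pos = compute_lfsl_position(system_params, desired_adjustments, pupil)  # 1908
    move_actuator_to(lfsl_pos)                                      # 1910: translate the LFSL
    for _ in range(frames):
        pupil = track_pupil()                                       # updated periodically or continuously
        new_pos = compute_lfsl_position(system_params, desired_adjustments, pupil)
        if abs(new_pos - lfsl_pos) > reposition_tolerance_mm:
            move_actuator_to(new_pos)                               # 1910: re-adjust during viewing
            lfsl_pos = new_pos
        frame = ray_trace_frame(system_params, desired_adjustments, pupil, lfsl_pos)  # 1906
        show(frame)                                                 # 1912: display adjusted content
```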
  • a LFSL position may be selected (e.g. calculated 1908) or adjusted based on the application at hand and/or on system specifications (e.g. the resolution of a pixelated display screen, the pitch and/or focal length of lenslets of a microlens array, or the like).
  • a given eye examination may require a relatively large range of dioptric power adjustments (e.g. 20+ diopter range), while also requiring a visual content quality parameter (e.g. visual content provided with resolution of 1 arcminute angular pitch or smaller, a view zone width, or the like).
  • a system operable to this end may therefore comprise different components or configurations than a system that, for instance, is designed to be portable and is subject to various weight and/or size restrictions.
  • a portable system for performing saccade and/or vergence tests to assess for a cognitive impairment may require lower resolution and/or range of accessible perception adjustments than is required for a stationary eye examination system.
  • a portable system may be further limited in the amount of space over which a LFSL may be translated to affect a depth-based perception adjustment. Accordingly, various embodiments relate to systems and methods for providing perception adjustments with components and/or ranges of LFSL translation that are selected for a particular application.
  • Figures 20A to 23J are exemplary plots illustrating non-limiting parameter spaces for exemplary light field shaping system configurations.
  • system configurations and/or component specifications may enable, for instance, different perception adjustments or ranges thereof in view of one or more system or application parameters.
  • an MLA having a given lenslet pitch and focal length may enable a distinct range of perception adjustments for visual content (e.g. shifts in dioptric powers) at or below a required resolution threshold for a distinct system geometry (e.g. a particular LFSL position).
  • the LFSL position may then be adjusted to select a new distinct geometry corresponding to a new distinct range of perception adjustments.
  • a first distinct range of perception adjustments corresponding to a first geometry may enable a dioptric power range of -5 to +5 diopters
  • a second distinct range of perception adjustments corresponding to a second geometry may enable a dioptric power range of -1 to +8 diopters.
  • a system geometry may be selected to, for instance, select the entire range of perception adjustments required for a particular application (e.g. a vision-based cognitive assessment).
  • a plurality of system geometries may be selected over the course of an examination in order to select different perception adjustment ranges which, together, enable the entire range of perception adjustments required for an assessment.
  • Figures 20A to 20I illustrate various exemplary parameters simulated as a function of MLA focal length for various dioptric powers accessible to a light field system having a particular configuration and component specifications.
  • This exemplary embodiment relates to a LFSL and display that may be arranged to maximise the range of dioptric corrections that may be obtained by a translating LFSL (in this case an MLA), as described above.
  • the display screen comprises a Sharp™ display screen (51.84 mm x 51.84 mm active area) with a 24 μm pixel pitch.
  • the maximal range of continuous dioptric corrections that is achievable with this system while maintaining a cut-off resolution of 1 arcminute angular pitch or less is 23 diopters.
  • This maximal range was obtained with an MLA characterised by a ~53.5 mm focal length and a 4.6 mm pitch, while a 360 mm eye-to-display distance was maintained. While this 23-diopter range corresponds to a dioptric correction of an eye power error between 7 to 30 diopters, this range may be shifted to, for instance, correct an eye error within a different dioptric range using an additional optical component.
  • a 15-diopter lens placed in front of the eye of a user may enable the presentation of visual content with a perception adjustment corresponding to an eye dioptric error of -8 to 15 diopters, thereby enabling, for instance, an eye examination in this range.
  • a tunable lens may be employed to selectively increase or decrease a perception adjustment based on the needs of a particular assessment.
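  • The range shift described above amounts to subtracting the offset lens power from the range rendered by the light field; for example, the 7 to 30 diopter range quoted above combined with a 15-diopter lens covers eye errors of -8 to 15 diopters. A trivial check:

```python
def shifted_correction_range(light_field_range, offset_lens_power):
    """Eye-error range addressable once an offset lens is placed in front of the eye."""
    low, high = light_field_range
    return low - offset_lens_power, high - offset_lens_power

print(shifted_correction_range((7, 30), 15))    # (-8, 15), as in the example above
print(shifted_correction_range((11, 30), 15))   # (-4, 15), as in the configuration described further below
```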
  • Figure 20A is a plot of the display-to-MLA distance as a function of MLA focal length for different dioptric powers, highlighting the ability of a translatable LFSL to enable selection of a preferred perception adjustment by selecting or adjusting a system geometry.
  • Figure 20B is a plot of the corresponding focus spot size at the retina.
  • Figure 20C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length.
  • Figure 20D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 20E is a plot of the corresponding cut- off resolution.
  • Figure 20F is a plot of view zone separation from a reference pupil of 5 mm diameter
  • Figure 20G is a plot of spread on the retina
  • Figure 20H is a plot of the corresponding field of view as a function of MLA focal length
  • Figure 20I is a plot of the maximum number of continuous dioptric powers (with 1 diopter step) that can be corrected for by changing the MLA-to-display distance.
  • Figures 21A to 21J illustrate various exemplary parameters simulated as a function of MLA focal length for various dioptric powers accessible to a different light field system having a different configuration and different component specifications.
  • the LFSL layer comprises an MLA having a pitch of 1.98 mm and focal length of ~24.5 mm, while the eye-to-display distance was selected as 190 mm.
  • the system again comprised a display with 24 μm pixel pitch.
  • the maximum continuous span of diopters achievable at 1 arcminute or less angular pitch/resolution was 19 diopters, corresponding to perception adjustments between 11 and 30 diopters.
  • an additional optical component may be introduced to the system to shift this range of dioptric corrections.
  • a 15-diopter lens placed in front of the eye of a user may enable correction for a range of eye power errors between -4 and 15 diopters.
  • Figure 21A is a plot of the MLA-to-display distance as a function of MLA focal length for different eye power errors.
  • Figure 21B is a plot of the corresponding focus spot size at the retina.
  • Figure 21C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length.
  • Figure 21D is a plot of the uniform spatial pitch between focused beams on the retina
  • Figure 21E is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina.
  • Figure 21F is a plot of view zone separation from a reference 5 mm diameter pupil
  • Figure 21G is a plot of spread on the retina.
  • Figure 21H is a plot of the corresponding field of view as a function of MLA focal length.
  • Figure 21I is a plot of the beam size on the cornea.
  • Figure 21J is a plot of the maximum number of continuous dioptric power error points (spaced by 1 diopter steps) that can be corrected for by changing the MLA-to-display distance.
  • various other visual content quality parameters may be considered when selecting, for instance, light field system components (e.g. LFSL specifications) and/or geometries (e.g. LFSL position).
  • various embodiments relate to selecting system components and configurations in view of a corneal beam size constraint.
  • one embodiment relates to the selection of an MLA with specifications and a position within the system such that the beam size on the cornea of a user is maintained between 2.7 mm and 3.7 mm. This may, in accordance with one embodiment, enable a maximisation of focus range of retinal beam spots.
  • Figures 22A to 22J illustrate various exemplary parameters simulated as a function of MLA focal length for various dioptric powers accessible to an exemplary light field system.
  • the display is characterised by a 24 μm pixel pitch
  • the MLA selected comprised a lenslet pitch of 1.98 mm and focal length of ~20 mm
  • the system was characterised by an eye-to-display distance of 145 mm.
  • a continuous range of 10 diopters was achieved, corresponding to a perception adjustment range of 16 to 26 diopters, which, again, may be shifted via the employ of an additional optical component.
  • Figure 22A is a plot of the MLA-to-display distance as a function of MLA focal length for different dioptric powers.
  • Figure 22B is a plot of the corresponding focus spot size at the retina.
  • Figure 22C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length.
  • Figure 22D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 22E is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina.
  • Figure 22F is a plot of view zone separation from a reference 5 mm pupil, while Figure 22G is a plot of spread on the retina.
  • Figure 22H is a plot of the corresponding field of view as a function of MLA focal length.
  • Figure 22I is a plot of the beam size on the cornea.
  • Figure 22J is a plot of the maximum number of continuous dioptric powers that can be corrected for by changing the MLA-to-display distance.
  • Figures 23A to 23J illustrate various exemplary parameters simulated as a function of MLA focal length for various dioptric powers accessible to another exemplary light field system.
  • the display again comprises a 24 μm pixel pitch
  • the LFSL comprises an MLA characterised by a lenslet pitch of 3.4 mm and a focal length of ~34.5 mm.
  • the eye-to-display distance is 200 mm.
  • a continuous range of 11 diopters is achievable, corresponding to perception adjustments between 15 and 26 diopters.
  • Figure 23A is a plot of the MLA-to-display distance as a function of MLA focal length for different dioptric powers.
  • Figure 23B is a plot of the corresponding focus spot size at the retina.
  • Figure 23C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length.
  • Figure 23D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 23E is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina.
  • Figure 23F is a plot of view zone separation from a 5 mm diameter reference pupil, while Figure 23G is a plot of spread on the retina.
  • Figure 23H is a plot of the corresponding field of view as a function of MLA focal length.
  • Figure 23I is a plot of the beam size on the cornea.
  • Figure 23J is a plot of the maximum number of continuous dioptric power points (separated by 1 diopter) that can be corrected for by changing the MLA-to-display distance.
  • various system components and/or configurations may be employed depending on, for instance, the application at hand (e.g. a vision assessment, a cognitive impairment assessment, or the like), or a visual content quality parameter associated therewith.
  • a visual acuity examination may require a large range of dioptric corrections (e.g. -8 to +8 diopters).
  • a system designed to this end may comprise components and configurations similar to those described with respect to Figures 20A to 20I, relating to a continuous range of 23 diopters with at least 1 arcminute resolution.
  • a cognitive impairment assessment system operable to perform saccadic and vergence tests may have less strict content quality parameters associated therewith.
  • a cognitive impairment assessment device may employ simpler and/or more cost-effective components than a visual acuity refractor/phoropter.
  • more compact or lightweight components may be employed in order to, for instance, make a cognitive impairment assessment system more portable for in-field assessments.
  • Figures 24A and 24B show perspective views of a portable cognitive impairment assessment device. Due in part to its compact nature, the range of dioptric adjustments required for a vergence test may, in accordance with some embodiments, require that a LFSL be translated relative to a display screen.
  • a different cognitive impairment assessment device 2500 comprises a lightweight, user-interfacing portion 2504 and a load-bearing portion 2506.
  • Figure 25A shows the device 2500 while in use by a subject 2508, while Figure 25B schematically shows an exploded view of various exemplary components of the device 2500.
  • the load-bearing portion 2506 comprises a pixelated display screen 2508 and a displaceable light field shaping layer (LFSL) 2510 disposed between the user and the display screen 2508 to shape a light field emanating therefrom, thereby enabling adjustable perception adjustments.
  • various device configurations and components may be employed to perform a vision-based test.
  • Figures 26A to 26J are exemplary plots of different parameters of one such embodiment.
  • a display screen comprising a 24 μm pixel pitch is disposed 110 mm from the eye of a user.
  • a translatable LFSL layer comprising an MLA with a 1 mm pitch and ~22.5 mm focal length is also employed.
  • a content quality parameter comprising a 1.5 arcminute angular resolution threshold may be applied, which results in a continuous perceptive correction range of 4 diopters, corresponding to perception adjustments of 2 to 6 diopters eye accommodation power.
  • This range may, in accordance with one embodiment, be shifted by placing, for instance, a 2-diopter lens in front of the eye(s) of a user, enabling perception adjustments for eye accommodation power between 0 and 4 diopters, corresponding to an accommodation range of 25 cm to infinity. Furthermore, this may allow for a minimum view zone size of ~10.4 mm, and a minimum separation of 2.07 mm between a centred pupil of 5 mm diameter and the neighbouring view zone edge. This may allow for the user's eye pupil(s) to move at least 2.07 mm (or 9.5 degrees) to either side of centre without resulting in an overlap with neighbouring view zones, or a dark area in which the user cannot perceive visual content.
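  • The correspondence between accommodation power and viewing distance used above follows from distance = 1/power (in metres), so 4 diopters corresponds to 25 cm and 0 diopters to the far point at infinity. A trivial sketch:

```python
import math

def accommodation_distance_m(power_diopters):
    """Viewing distance (metres) corresponding to an accommodation power in diopters."""
    return math.inf if power_diopters == 0 else 1.0 / power_diopters

print(accommodation_distance_m(4))   # 0.25 m (25 cm)
print(accommodation_distance_m(0))   # inf (far point)
```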
  • a pupil position tracking system can be used to update a rendered light field and shift a view zone position along with view zone size control via software to further increase the possible range of motion of the pupil without adversely impacting performance of the device. While such aspects may be less important for acuity assessments, they may be useful in, for instance, saccade or pursuit tests of a cognitive assessment, where a user’s eyes are not fixed. Accordingly, employing a translatable LFSL enables various system components, specifications, and/or geometries that may be more optimally suited to a particular application.
  • Figure 26A is a plot of the MLA-to-display distance as a function of MLA focal length for different dioptric powers.
  • Figure 26B is a plot of the corresponding focus spot size at the retina.
  • Figure 26C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length.
  • Figure 26D is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina.
  • Figure 26E is a plot of view zone separation from a 5 mm diameter reference pupil, while Figure 26F is a plot of spread on the retina.
  • Figure 26G is a plot of the corresponding field of view as a function of MLA focal length.
  • Figure 26H is a plot of the beam size on the cornea.
  • Figure 26I is a plot of the view zone width as a function of MLA focal length.
  • Figure 26J is a plot of the maximum number of continuous dioptric powers (separated by steps of 1 diopter) that can be corrected for by changing the MLA-to-display distance.
  • Figures 27A to 27K are exemplary plots of various parameters corresponding to another light field system. In this example, a 24 μm pixel pitch display was disposed 150 mm in front of the eye, while the LFSL comprised a pitch of 1.98 mm and a focal length of ~18.5 mm.
  • This configuration allowed for a continuous perception adjustment range of 5 diopters, between 14 and 19 diopters, with a 2-arcminute cut-off angular pitch/resolution. Again, this range may be shifted as needed, for instance by placing a 14-diopter lens in front of the eye, resulting in a light field power correction range of 0 to 5 diopters of eye accommodation power, which corresponds to an accommodation range of 20 cm to infinity. Further, this may allow for a minimum view zone size of ~10 mm, and a minimum of 4 mm separation between a centred pupil of 5 mm diameter and the next view zone edge.
  • a pupil position tracking system can be used to update a rendered light field and shift a view zone position along with view zone size control via software to further increase the possible range of motion of the pupil without adversely impacting performance of the device.
  • Figure 27 A is a plot of the MLA-to-display distance as a function of MLA focal length for different dioptric powers.
  • Figure 27B is a plot of the corresponding focus spot size at the retina.
  • Figure 27C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length.
  • Figure 27D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 27E is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina.
  • Figure 27F is a plot of view zone separation from a reference pupil of 5 mm diameter, while Figure 27G is a plot of spread on the retina.
  • Figure 27H is a plot of the corresponding field of view as a function of MLA focal length.
  • Figure 27I is a plot of the beam size on the cornea, and Figure 27J is a plot of the view zone width.
  • Figure 27K is a plot of the maximum number of continuous dioptric powers that can be corrected for by changing the MLA-to-display distance.
  • various embodiments relate to a dynamic LFSL system in which a system of one or more LFSLs may be incorporated onto an existing display operable to display perception-adjusted content.
  • Such embodiments may, for instance, relate to a clip-on solution that may interface and/or communicate with a display or digital application stored thereon, either directly or via a remote application (e.g. a smart phone application) and in wired or wireless fashion.
  • a LFSL may be further operable to rotate in the plane of a display via, for instance, actuators as described above, to improve user experience by, for instance, introducing a pitch mismatch offset between light field shaping elements and an underlying pixel array.
  • Such embodiments therefore relate to a LFSL that is dynamically adjustable/reconfigurable for a wide range of existing display systems (e.g. televisions, dashboard displays in an automobile, a display board in an airport or train terminal, a refractor, a smartphone, or the like).
  • Some embodiments relate to a standalone light field shaping system in which a display unit comprises a LFSL and smart display (e.g. a smart TV display having a LFSL disposed thereon).
  • Such systems may comprise inherently well calibrated components (e.g. LFSL and display aspect ratios, LFSL elements and orientations appropriate for a particular display pixel or subpixel configuration, etc.).
  • various systems herein described may be further operable to receive as input data related to one or more view zone and/or user locations, or required number thereof (e.g. two or three view zones in which to display perception-adjusted content).
  • data related to a user location may be entered manually or semi-automatically via, for example, a TV remote or user application (e.g. smart phone application).
  • a television or LFSL may have a digital application stored thereon operable to dynamically adjust one or more LFSLs in one or more dimensions, pitch angles, and/or pitch widths upon receipt of a user instruction, for instance via manual clicking by a user of an appropriate button on a TV remote or smartphone application.
  • a number of view zones may be similarly selected.
  • a user may adjust the system (e.g. the distance between the display and a LFSL, etc.) with a remote or smartphone application until they are satisfied with the display of one or more view zones.
  • a remote or smartphone application may alternatively relate to, for instance, remote eye exams, wherein a doctor remotely adjusts the configuration of a display and LFSL.
  • Such systems may, for instance, provide a high-performance, self-contained, simple system that minimises complications arising from the sensitivity of view zone quality to, for instance, minute deviations from the relative component configurations predicted by, for instance, Equations 1 to 7 above, component alignment, user perception, and the like.
  • While a smartphone application or other like system may be used to communicate user preferences or location-related data (e.g. a quality of perceived content from a particular viewing zone), such an application, process, or function may also reside in a system or application executable by a processing system associated with the display system.
  • data related to a user or viewing location may comprise a user instruction to, for instance, adjust a LFSL, based on, for instance, a user perception of an image quality, or the like.
  • a receiver such as a smartphone camera and digital application associated therewith, may be used to calibrate a display, in accordance with various embodiments.
  • a smartphone camera directed towards a display may be operable to receive and/or store signals/content emanating from the LFSL or display system.
  • a digital application associated therewith may be operated to characterise a quality of a particular view zone through analysis of received content and adjust the LFSL to improve the quality of content at the camera’s location (e.g. to improve on a calculated LFSL position relative to display that was determined theoretically, for instance using one or more of Equations 1 to 7 above).
  • a calibration may be initially performed wherein a user positions themselves in a desired viewing location and points a receiver at a display generating red and blue content for respective first and second view zones.
  • a digital application associated with the smartphone or remote receiver in the first viewing location may estimate a distance from the display by any means known in the art (e.g. a subroutine of a smartphone application associated with a light field display and operable to measure distances using a smartphone sensor).
  • the application may further record, store, and/or analyse the light emanating from the display to determine whether or not, and/or in which dimensions, angle, etc., to adjust a dynamic LFSL to maximise the amount of red light received in the first view zone while minimising that of blue (i.e. to reduce crosstalk between view zones).
  • a semi-automatic LFSL may self-adjust until a digital application associated with a particular view zone receives less than a threshold value of content from a neighbouring view zone (e.g. receives at least 95% red light and less than 5% blue light, in the abovementioned example).
  • a digital application subroutine may calculate an extent of crosstalk occurring between view zones, or a degree of image sharpness corresponding to an intended perception adjustment (e.g. a display having a LFSL disposed thereon may generate distinct content, corresponding to a perception adjustment or dioptric shift, that may comprise one or more of, but is not limited to, distinct colours, IR signals, patterns, or the like, to determine a displayed content quality and initiate compensatory adjustments in a LFSL).
  • a semi-automatic LFSL calibration process may comprise a user moving a receiver within a designated range or region (e.g. a user may move a smartphone from left to right, or forwards/backwards) to acquire display content data.
  • Such data acquisition may, for instance, aid in LFSL layer adjustment, or in determining a LFSL configuration that is acceptable for one or more users of the system within an acceptable tolerance (e.g. all users receive 95% of their intended display content, or a resolution of at least 1 arcsecond is achieved in an eye examination device) within the geometrical limitations of the LFSL and/or display.
  • a pupil location may be determined via the use of one or more cameras or other like sensors and/or means known in the art for determining user, head, and/or eye locations, and dynamically adjusting a LFSL in one or more dimensions to render content so to be displayed at one or more appropriate locations.
  • Yet other embodiments relate to a self-localisation method and system that maintains user privacy with minimal user input or action required to determine one or more view zone locations, and dynamically adjust a LFSL to display appropriate content thereto.
  • a dynamic light field shaping layer may be subjected to oscillations or vibrations in one or more dimensions in order to, for instance, improve perception of an image generated by a pixelated display.
  • a system may be employed to increase an effective view zone size so as to accommodate user movement during viewing.
  • a LFSL may be vibrated in a direction perpendicular to a screen so to increase a depth of a view zone in that dimension to improve user experience by allowing movement of a user’s head towards/away from a screen without introducing a high degree of perceived crosstalk, or to improve a perceived image brightness.
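The following is a minimal sketch, in Python and using purely hypothetical receiver and actuator interfaces (camera.capture, frame.mean_red and lfsl.translate_normal are illustrative names, not part of any described system), of the semi-automatic calibration loop referenced in the list above: red content is assigned to a first view zone, blue to a second, and the dynamic LFSL is nudged until the measured crosstalk at the receiver falls below a threshold (e.g. at least 95% red, less than 5% blue).

```python
def calibrate_view_zone(camera, lfsl, red_fraction_target=0.95, max_steps=200, step_mm=0.05):
    """Adjust the LFSL until the receiver in the first view zone sees mostly red content."""
    for _ in range(max_steps):
        frame = camera.capture()                    # hypothetical receiver API
        red, blue = frame.mean_red(), frame.mean_blue()
        red_fraction = red / max(red + blue, 1e-9)  # guard against an all-dark frame
        if red_fraction >= red_fraction_target:
            return True                             # crosstalk below threshold; stop adjusting
        lfsl.translate_normal(step_mm)              # hypothetical actuator call; the direction or
                                                    # axis could be chosen from successive frames
    return False                                    # tolerance not reached within the step budget
```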

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Eye Examination Apparatus (AREA)
  • Devices For Indicating Variable Information By Combining Individual Elements (AREA)

Abstract

Described are various embodiments of light field shaping systems for interfacing with light emanated from pixels of a digital display to govern display of perception-adjusted content. Various embodiments comprise a light field shaping layer (LFSL) disposable relative to the digital display to align an array of light field shaping elements with the pixels of the digital display, thereby defining a perception adjustment of displayed content. Various embodiments further comprise an actuator operable by a digital processor for translating the LFSL to adjust an optical path length between the LFSL and the digital display, thereby defining an adjusted perception adjustment of displayed content in accordance with an adjusted geometry.

Description

LIGHT FIELD DISPLAY FOR RENDERING PERCEPTION-ADJUSTED CONTENT,
AND DYNAMIC LIGHT FIELD SHAPING SYSTEM AND LAYER THEREFOR
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Application No. 63/056,188 filed July 24, 2020, and to U.S. Provisional Application No. 63/104,468 filed October 22, 2020, the entire disclosure of each of which is hereby incorporated herein by reference.
FIELD OF THE DISCLOSURE
[0002] The present disclosure relates to light field displays, and, in particular, to a light field display for rendering perception-adjusted content, and dynamic light field shaping system and method, and layer therefor.
BACKGROUND
[0003] Individuals routinely wear corrective lenses to accommodate for reduced vision acuity in consuming images and/or information rendered, for example, on digital displays provided, for example, in day-to-day electronic devices such as smartphones, smart watches, electronic readers, tablets, laptop computers and the like, but also provided as part of vehicular dashboard displays and entertainment systems, to name a few examples. The use of bifocals or progressive corrective lenses is also commonplace for individuals suffering from near and farsightedness.
[0004] The operating systems of current electronic devices having graphical displays offer certain “Accessibility” features built into the software of the device to attempt to provide users with reduced vision the ability to read and view content on the electronic device. Specifically, current accessibility options include the ability to invert images, increase the image size, adjust brightness and contrast settings, bold text, view the device display only in grey, and for those with legal blindness, the use of speech technology. These techniques focus on the limited ability of software to manipulate display images through conventional image manipulation, with limited success.

[0005] The use of 4D light field displays with lenslet arrays or parallax barriers to correct visual aberrations has since been proposed by Pamplona et al. (PAMPLONA, V., OLIVEIRA, M., ALIAGA, D., AND RASKAR, R. 2012. “Tailored displays to compensate for visual aberrations.” ACM Trans. Graph. (SIGGRAPH) 31.). Unfortunately, conventional light field displays as used by Pamplona et al. are subject to a spatio-angular resolution trade-off; that is, an increased angular resolution decreases the spatial resolution. Hence, the viewer sees a sharp image but at the expense of a significantly lower resolution than that of the screen. To mitigate this effect, Huang et al. (see, HUANG, F.-C., AND BARSKY, B. 2011. A framework for aberration compensated displays. Tech. Rep. UCB/EECS-2011-162, University of California, Berkeley, December; and HUANG, F.-C., LANMAN, D., BARSKY, B. A., AND RASKAR, R. 2012. Correcting for optical aberrations using multilayer displays. ACM Trans. Graph. (SIGGRAPH Asia) 31, 6, 185:1-185:12) proposed the use of multilayer display designs together with prefiltering. The combination of prefiltering and these particular optical setups, however, significantly reduces the contrast of the resulting image.
[0006] Finally, in U.S. Patent Application Publication No. 2016/0042501 and Fu-Chung Huang, Gordon Wetzstein, Brian A. Barsky, and Ramesh Raskar. "Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays". ACM Transactions on Graphics, 33(4), Aug. 2014, the entire contents of each of which are hereby incorporated herein by reference, the combination of viewer-adaptive pre-filtering with off-the-shelf parallax barriers has been proposed to increase contrast and resolution, at the expense, however, of computation time and power.
[0007] Optical devices, such as refractors and phoropters, are commonly used to test or evaluate the visual acuity of their users, for example, in the prescription of corrective eyewear, contact lenses or intraocular implants.
[0008] This background information is provided to reveal information believed by the applicant to be of possible relevance. No admission is necessarily intended, nor should be construed, that any of the preceding information constitutes prior art or forms part of the general common knowledge in the relevant art.

SUMMARY
[0009] The following presents a simplified summary of the general inventive concept(s) described herein to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is not intended to restrict key or critical elements of embodiments of the disclosure or to delineate their scope beyond that which is explicitly or implicitly described by the following description and claims.
[0010] A need exists for a light field display for rendering perception-adjusted content, and dynamic light field shaping system and method, and layer therefor that overcome some of the drawbacks of known techniques, or at least, provides a useful alternative thereto. Some aspects of this disclosure provide examples of such systems and methods.
[0011] In accordance with one aspect, there is provided a light field shaping system for interfacing with light emanated from pixels of a digital display to govern display of perception-adjusted content, the system comprising: a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the digital display so to align said array of light field shaping elements with the pixels in accordance with a current light field shaping geometry to thereby define a perception adjustment of displayed content in accordance with said current geometry; an actuator operable to adjust an optical path length between said LFSL and the digital display to adjust alignment of said light field shaping elements with the pixels in accordance with an adjusted geometry thereby defining an adjusted perception adjustment of displayed content in accordance with said adjusted geometry; and a digital data processor operable to activate said actuator to translate said LFSL to adjust the perception-adjusted content.
[0012] In one embodiment, the LFSL comprises a microlens array.

[0013] In one embodiment, the adjusted perception adjustment corresponds to a reduced visual acuity of a user.
[0014] In one embodiment, the actuator is operable to translate said LFSL in a direction perpendicular to the digital display.

[0015] In one embodiment, the light field shaping geometry relates to a physical distance between the digital display and said LFSL.
[0016] In one embodiment, the adjusted geometry corresponds to a selectable range of perception adjustments of displayed content, wherein distinctly selectable geometries correspond with distinct selectable ranges of perception adjustments.
[0017] In one embodiment, the distinct selectable ranges comprise distinct dioptric correction ranges.
[0018] In one embodiment, the digital data processor is further operable to: receive as input a requested perception adjustment as said adjusted perception adjustment; based at least in part on said requested perception adjustment, calculate an optimal optical path length to thereby define an optimal geometry as said adjusted geometry; and activate said actuator to adjust said optical path length to said optimal optical path length and thereby optimally achieve said requested perception adjustment.
[0019] In one embodiment, the digital data processor is further operable to: receive as input feedback data related to a quality of said adjusted perception adjustment; and dynamically adjust said optical path length via said actuator in response to said feedback data.
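By way of a non-limiting illustration only, the two processor behaviours recited above (computing a preferred optical path length from a requested perception adjustment, and refining it from feedback data) might be sketched as follows in Python; geometry_model, actuator and quality_metric are hypothetical placeholders rather than elements of any particular embodiment.

```python
def set_perception_adjustment(requested_diopters, actuator, geometry_model):
    """Map a requested perception adjustment to a preferred LFSL-to-display gap and move there."""
    target_gap_mm = geometry_model(requested_diopters)  # e.g. a model in the spirit of Equations 1 to 7
    actuator.move_to(target_gap_mm)                     # translate the LFSL to the computed gap

def refine_with_feedback(actuator, quality_metric, step_mm=0.02, max_steps=50):
    """Dynamically nudge the optical path length while a feedback score keeps improving."""
    best = quality_metric()                             # e.g. user- or sensor-reported quality
    for _ in range(max_steps):
        actuator.move_by(step_mm)
        score = quality_metric()
        if score > best:
            best = score                                # improvement: keep stepping this way
        else:
            actuator.move_by(-step_mm)                  # no improvement: undo and reverse direction
            step_mm = -step_mm
```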
[0020] In one embodiment, the light field shaping system comprises a system for administering a vision-based test.
[0021] In one embodiment, the vision-based test comprises a visual acuity examination, and the perception-adjusted content comprises an optotype.
[0022] In one embodiment, the vision-based test comprises a cognitive impairment test.

[0023] In one embodiment, the actuator selectively introduces an optical path length increasing medium within said optical path length to selectively adjust said optical path length.
[0024] In one embodiment, the digital data processor is operable to translate said LFSL to adjust the perception-adjusted content while satisfying a visual content quality parameter.
[0025] In one embodiment, the visual content quality parameter comprises one or more of a perception-adjusted content resolution, a corneal beam size, a view zone size, or a distance between a pupil and a view zone edge.
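Purely as a hypothetical sketch (parameter names and comparison directions are illustrative, and no thresholds are taken from this disclosure), such visual content quality parameters could be checked before committing an LFSL translation:

```python
def satisfies_quality(metrics, limits):
    """Return True if a candidate geometry meets every visual content quality parameter.

    Both arguments are dicts keyed by parameter name, e.g. 'resolution_arcmin',
    'corneal_beam_mm', 'view_zone_mm' and 'pupil_to_edge_mm'.
    """
    return (metrics["resolution_arcmin"] <= limits["resolution_arcmin"]    # finer is better
            and metrics["corneal_beam_mm"] <= limits["corneal_beam_mm"]    # smaller beam on the cornea
            and metrics["view_zone_mm"] >= limits["view_zone_mm"]          # large enough view zone
            and metrics["pupil_to_edge_mm"] >= limits["pupil_to_edge_mm"]) # pupil away from the zone edge
```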
[0026] In accordance with another aspect, there is provided a method for dynamically adjusting a perception adjustment of displayed content in a light field display system comprising a digital processor and a digital display defined by an array of pixels and a light field shaping layer (LFSL) disposed relative thereto, the method comprising: accessing display geometry data related to one or more of the light field display system and a user thereof, said display geometry data at least in part defining the perception adjustment of displayed content; digitally identifying a preferred display geometry based, at least in part, on said display geometry data, said preferred display geometry comprising a desirable optical path length between said LFSL and the pixels to optimally produce a requested perception adjustment of displayed content; automatically adjusting said optical path length, via the digital processor and an actuator operable to adjust said optical path length and thereby adjust the perception adjustment of displayed content to said requested perception adjustment.
[0027] In accordance with another aspect, there is provided a light field shaping system for interfacing with light emanated from underlying pixels of a digital screen in a light field display to display content in accordance with a designated perception adjustment, the system comprising: a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the digital screen so to define a system configuration, said system configuration at least in part defining a subset of perception adjustments displayable by the light field display; an actuator operable to adjust a relative distance between said LFSL and the digital screen to adjust said system configuration; and a digital data processor operable to activate said actuator to selectively adjust said relative distance and thereby provide a preferred system configuration defining a preferred subset of perception adjustments; wherein said preferred subset of perception adjustments comprises the designated perception adjustment.
[0028] In one embodiment, the digital data processor is further operable to: receive as input data related to said designated perception adjustment; based at least in part on said data related to said designated perception adjustment, calculate said preferred system configuration.
[0029] In one embodiment, the digital data processor is further operable to dynamically adjust said system configuration during use of the light field display.
[0030] In accordance with another aspect, there is provided a light field display system for displaying content in accordance with a designated perception adjustment, the system comprising: a digital display screen comprising an array of pixels; a light field shaping layer (LFSL) comprising an array of light field shaping elements shaping a light field emanating from said array of pixels and disposable relative thereto in accordance with a system configuration at least in part defining a subset of displayable perception adjustments; an actuator operable to translate said LFSL relative to said array of pixels to adjust said system configuration; and a digital data processor operable to activate said actuator to translate said LFSL and thereby provide a preferred system configuration defining a preferred subset of perception adjustments; wherein said preferred subset of perception adjustments comprises the designated perception adjustment.
[0031] In accordance with another aspect, there is provided a light field shaping layer (LFSL) to be used in conjunction with a digital display comprising an array of digital pixels, wherein an optimal rendering of a perceptively adjusted image is provided by minimizing a spread of light from the display pixels through the LFSL in accordance with the following expression:
[Expression not reproduced in this text; see figure imgf000009_0001 of the original publication.]
[0032] In accordance with another aspect, there is provided a light field shaping system for performing a vision-based assessment using perception-adjusted content, the system comprising: a pixelated digital display; a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the pixelated digital display so to align said array of light field shaping elements with pixels of the pixelated digital display in accordance with a current light field shaping geometry to thereby define a perception adjustment of displayed content in accordance with said current geometry; an actuator operable to adjust an optical path length between said LFSL and the pixelated digital display to adjust alignment of said light field shaping elements with the pixels in accordance with an adjusted geometry thereby defining an adjusted perception adjustment of displayed content in accordance with said adjusted geometry; and a digital data processor operable to activate said actuator to translate said LFSL to adjust the perception-adjusted content for the vision-based assessment.

[0033] In one embodiment, the vision-based assessment comprises a cognitive impairment assessment.
[0034] In one embodiment, the vision-based assessment comprises a visual acuity assessment.
[0035] In one embodiment, the digital data processor is operable to translate said LFSL to adjust the perception-adjusted content while maintaining a visual content quality parameter associated with the vision-based assessment.
[0036] In one embodiment, the visual content quality parameter comprises one or more of a perception-adjusted content resolution, a corneal beam size of said perception-adjusted content, a view zone size, or a distance between a pupil and a view zone edge.

[0037] In one embodiment, the adjusted perception adjustment comprises a range of perception adjustments corresponding to said adjusted geometry.

[0038] In one embodiment, the vision-based assessment comprises the display of content in accordance with an assessment range of perception adjustments.
[0039] In one embodiment, the range of perception adjustments corresponds at least in part to said assessment range of perception adjustments.

[0040] In one embodiment, the system further comprises an optical component intersecting an optical path of the perception-adjusted content and configured to adjust an optical power of the perception-adjusted content for the vision-based assessment.
[0041] In one embodiment, the optical component comprises a lens or a tunable lens.
[0042] Other aspects, features and/or advantages will become more apparent upon reading of the following non-restrictive description of specific embodiments thereof, given by way of example only with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE FIGURES
[0043] Several embodiments of the present disclosure will be provided, by way of examples only, with reference to the appended drawings, wherein:
[0044] Figures 1A and 1B are schematic diagrams of an exemplary light field vision testing or previewing system, in accordance with one embodiment;
[0045] Figures 2A to 2C schematically illustrate normal vision, blurred vision, and corrected vision in accordance with one embodiment, respectively;

[0046] Figures 3A and 3B are schematic diagrams of a light field display in which respective pixel subsets are aligned to emit light through a corresponding microlens or lenslet, in accordance with one embodiment;
[0047] Figures 4A to 4C are schematic diagrams of exemplary light field vision testing or previewing systems (e.g. refractors/phoropters), in accordance with different embodiments;

[0048] Figure 5 is a plot of the angular resolution of an exemplary light field display as a function of the dioptric power generated, in accordance with one embodiment;
[0049] Figures 6A to 6D are schematic plots of the image quality generated by a light field refractor/phoropter as a function of the dioptric power generated by using in combination with the light field display (A) no refractive component, (B) one refractive component, (C) and (D) a multiplicity of refractive components;
[0050] Figures 7A, 7B and 7C are perspective views of exemplary light field refractors/phoropters, showing a casing thereof in cross-section (A and B) and a unit combining side-by-side two of the units (C) shown in 7A and 7B, in accordance with one embodiment;
[0051] Figure 8 is a process flow diagram of an exemplary dynamic subjective vision testing method, in accordance with one embodiment;
[0052] Figure 9 is a schematic diagram of an exemplary light field image showing two columns of optotypes at different dioptric power for the method of Figure 8, in accordance with one embodiment;
[0053] Figure 10 is a schematic diagram of an exemplary vision testing system configuration employing a microlens array, in accordance with at least one embodiment;
[0054] Figure 11 is a schematic diagram of an exemplary pixel and lenslet system, in accordance with various embodiments;

[0055] Figure 12 is a schematic diagram illustrating a focusing of a rectangular beam, in accordance with some embodiments;
[0056] Figure 13 is an exemplary plot illustrating a spacing of retinal beam spots as a function of various parameters, in accordance with at least one embodiment;
[0057] Figure 14 is an exemplary plot illustrating an interplay of various parameters in an exemplary light field system, in accordance with some embodiments;

[0058] Figures 15A and 15B are exemplary plots of a minimum beam size in an exemplary light field display system as a function of light field shaping layer focal length, with the condition that interference from different lenslets is avoided, and Figures 15C and 15D are exemplary plots of a minimum beam size in the exemplary light field display system of Figures 15A and 15B, without the condition that interference from different lenslets is avoided, in accordance with various embodiments;
[0059] Figures 16A and 16B are exemplary plots illustrating the effect of lenslet size on retinal spot size, in accordance with various embodiments;
[0060] Figures 17A and 17C, and Figures 17B and 17D, are exemplary plots illustrating the field of view and retina spread, respectively, for an exemplary light field display system, in accordance with some embodiments;
[0061] Figures 18A and 18B are schematic diagrams illustrating an exemplary light field display system comprising a dynamic light field shaping layer, in accordance with various embodiments;

[0062] Figure 19 is a schematic diagram illustrating an exemplary process for displaying adjusted content using a dynamic light field shaping layer, in accordance with some embodiments;
[0063] Figures 20A to 20I are exemplary plots illustrating various parameters for an exemplary light field system, in accordance with one embodiment;

[0064] Figures 21A to 21J are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment;
[0065] Figures 22A to 22J are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment;
[0066] Figures 23A to 23J are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment;

[0067] Figures 24A and 24B are perspective views of an exemplary portable cognitive impairment assessment system, in accordance with one embodiment;
[0068] Figures 25A and 25B are schematic diagrams illustrating an exemplary cognitive impairment assessment device comprising a user-interfacing portion and a load-bearing portion and comprising a displaceable light field shaping layer, in accordance with one embodiment;
[0069] Figures 26A to 26J are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment; and
[0070] Figures 27A to 27K are exemplary plots illustrating various parameters for another exemplary light field system, in accordance with one embodiment.
[0071] Elements in the several figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be emphasised relative to other elements for facilitating understanding of the various presently disclosed embodiments. Also, common, but well-understood elements that are useful or necessary in commercially feasible embodiments are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present disclosure.
DETAILED DESCRIPTION
[0072] Various implementations and aspects of the specification will be described with reference to details discussed below. The following description and drawings are illustrative of the specification and are not to be construed as limiting the specification. Numerous specific details are described to provide a thorough understanding of various implementations of the present specification. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of implementations of the present specification.
[0073] Various apparatuses and processes will be described below to provide examples of implementations of the system disclosed herein. No implementation described below limits any claimed implementation and any claimed implementations may cover processes or apparatuses that differ from those described below. The claimed implementations are not limited to apparatuses or processes having all of the features of any one apparatus or process described below or to features common to multiple or all of the apparatuses or processes described below. It is possible that an apparatus or process described below is not an implementation of any claimed subject matter.
[0074] Furthermore, numerous specific details are set forth in order to provide a thorough understanding of the implementations described herein. However, it will be understood by those skilled in the relevant arts that the implementations described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the implementations described herein.
[0075] In this specification, elements may be described as “configured to” perform one or more functions or “configured for” such functions. In general, an element that is configured to perform or configured for performing a function is enabled to perform the function, or is suitable for performing the function, or is adapted to perform the function, or is operable to perform the function, or is otherwise capable of performing the function.
[0076] It is understood that for the purpose of this specification, language of “at least one of X, Y, and Z” and “one or more of X, Y and Z” may be construed as X only, Y only, Z only, or any combination of two or more items X, Y, and Z (e.g., XYZ, XY, YZ, ZZ, and the like). Similar logic may be applied for two or more items in any occurrence of “at least one ...” and “one or more...” language.
[0077] Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
[0078] Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase “in one of the embodiments” or “in at least one of the various embodiments” as used herein does not necessarily refer to the same embodiment, though it may. Furthermore, the phrase “in another embodiment” or “in some embodiments” as used herein does not necessarily refer to a different embodiment, although it may. Thus, as described below, various embodiments may be readily combined, without departing from the scope or spirit of the innovations disclosed herein.
[0079] In addition, as used herein, the term “or” is an inclusive “or” operator, and is equivalent to the term “and/or,” unless the context clearly dictates otherwise. The term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."
[0080] The term “comprising” as used herein will be understood to mean that the list following is non-exhaustive and may or may not include any other additional suitable items, for example one or more further feature(s), component(s) and/or element(s) as appropriate.
[0081] The systems and methods described herein provide, in accordance with different embodiments, different examples of a light field display, adjusted pixel rendering method therefor, and adjusted vision perception system and method using same. For example, some of the herein-described embodiments provide improvements or alternatives to current light field display technologies, for instance, in providing a range of dioptric corrections that may be displayed for a given light field display system having a particular light field shaping layer geometry and display resolution. Various embodiments relate to the provision of an increased range of perception adjustments accessible to, for instance, a vision testing system (e.g. a refractor or phoropter), or other display system operable to provide a perception adjustment through the provision of a light field, such as a smartphone, television, pixelated display, car dashboard interface, or the like.
[0082] These and other such applications will be described in further detail below. For example, devices, displays and methods described herein may allow a user’s perception of one or more input images (or input image portions), where each image or image portion is virtually located or perceived to be at a distinct image plane/depth location, to be adjusted or altered using the described light field display technology, again allowing for, for instance, corrective assessment, or the display of media content (e.g. images or videos) in accordance with a dioptric shift or perception adjustment that may not be enabled by a light field display technology having a fixed geometry or configuration. Accordingly, some of the herein-described embodiments provide a light field display for rendering perception-adjusted content in which a geometry, disposition and/or relative positioning of an integrated or cooperative light field shaping element array (e.g. light field shaping layer (LFSL)) can be dynamically adjusted to improve or optimize image rendering and perception adjustment capabilities (e.g. adjustment range, resolution, brightness, and/or overall quality).
[0083] Some of the herein described embodiments provide for digital display devices, or devices encompassing such displays, for use by users having reduced visual acuity, whereby images ultimately rendered by such devices can be dynamically processed to accommodate the user’s reduced visual acuity so that they may comfortably consume rendered images without the use of corrective eyewear, contact lenses, or surgical intervention, as would otherwise be required. As noted above, embodiments are not to be limited as such as the notions and solutions described herein may also be applied to other technologies in which a user’s perception of an input image to be displayed can be altered or adjusted via the light field display. Again, similar implementation of the herein described embodiments can allow for implementation of digitally adaptive vision tests or corrective or adaptive vision previews or simulations such that individuals with such reduced visual acuity can be exposed to distinct perceptively adjusted versions of an input image(s) to subjectively ascertain a potentially required or preferred vision correction.
[0084] Generally, digital displays as considered herein will comprise a set of image rendering pixels and a corresponding set of light field shaping elements that at least partially govern a light field emanated thereby to produce a perceptively adjusted version of the input image, notably distinct perceptively adjusted portions of an input image or input scene, which may include distinct portions of a same image, a same 2.5D/3D scene, or distinct images (portions) associated with different image depths, effects and/or locations and assembled into a combined visual input. For simplicity, the following will generally consider distinctly addressed portions or segments as distinct portions of an input image, whether that input image comprises a singular image having distinctly characterised portions, a digital assembly of distinctly characterised images, overlays, backgrounds, foregrounds or the like, or any other such digital image combinations.
[0085] In some examples, light field shaping elements may take the form of a light field shaping layer or like array of optical elements to be disposed relative to the display pixels in at least partially governing the emanated light field. As described in further detail below, such light field shaping layer elements may take the form of a microlens and/or pinhole array, or other like arrays of optical elements, or again take the form of an underlying light field shaping layer, such as an underlying array of optical gratings or like optical elements operable to produce a directional pixelated output.
[0086] Within the context of a light field shaping layer, as described in further detail below in accordance with some embodiments, the light field shaping layer may be disposed at a preset or adjustable distance from the pixelated display so to controllably shape or influence a light field emanating therefrom. For instance, each light field shaping layer can be defined by an array of optical elements centered over a corresponding subset of the display’s pixel array to optically influence a light field emanating therefrom and thereby govern a projection thereof from the display medium toward the user, for instance, providing some control over how each pixel or pixel group will be viewed by the viewer’s eye(s). As will be further detailed below, arrayed optical elements may include, but are not limited to, lenslets, microlenses or other such diffractive optical elements that together form, for example, a lenslet array; pinholes or like apertures or windows that together form, for example, a parallax or like barrier; concentrically patterned barriers, e.g. cut outs and/or windows, such as to define a Fresnel zone plate or optical sieve, for example, and that together form a diffractive optical barrier; and/or a combination thereof, such as for example, a lenslet array whose respective lenses or lenslets are partially shadowed or barriered around a periphery thereof so to combine the refractive properties of the lenslet with some of the advantages provided by a pinhole barrier.

[0087] In operation, the display device may also generally invoke a hardware processor operable on image pixel (or subpixel) data for an image to be displayed to output corrected or adjusted image pixel data to be rendered as a function of a stored characteristic of the light field shaping elements and/or layer (e.g. layer distance from display screen, distance between optical elements (pitch), absolute relative location of each pixel or subpixel to a corresponding optical element, properties of the optical elements (size, diffractive and/or refractive properties, etc.), or other such properties), and a selected vision correction or adjustment parameter related to the user’s reduced visual acuity or intended viewing experience. Image processing may, in some embodiments, be dynamically adjusted as a function of the user’s visual acuity or intended application so to actively adjust a distance of a virtual image plane, or perceived image on the user’s retinal plane given a quantified user eye focus or like optical aberration(s), induced upon rendering the corrected/adjusted image pixel data via the optical layer and/or elements, for example, or otherwise actively adjust image processing parameters as may be considered, for example, when implementing a viewer-adaptive pre-filtering algorithm or like approach (e.g. compressive light field optimization), so to at least in part govern an image perceived by the user’s eye(s) given pixel or subpixel-specific light visible thereby through the layer.
[0088] The skilled artisan will appreciate that various ray tracing processes may be employed, in accordance with various embodiments, for rendering, for instance, adjusted content to accommodate a user’s reduced visual acuity. For example, various embodiments relate to computationally implementing one or more of the ray tracing processes described in Applicant’s U.S. Patent No. 10,394,322 issued August 27, 2019, U.S. Patent No. 10,636,116 issued April 28, 2020, and/or U.S. Patent No. 10,761,604 issued September 1, 2020, the entire contents of which are hereby incorporated herein by reference. Similarly, while various embodiments herein described employ a LFSL layer disposed parallel to a display screen, the skilled artisan will appreciate that ray tracing with non-parallel planes, as described in, for instance, some of the above noted patents, the entire contents of which are hereby incorporated herein by reference, is also herein considered, in accordance with various embodiments. Various embodiments may, additionally or alternatively, relate to self-identifying light field display systems and processes, whereby a user may self-identify themselves for display of preferred content (e.g. content displayed with a perception adjustment corresponding to a particular dioptric adjustment) while, for instance, maintaining user privacy. Such embodiments are further described in co-pending United States Patent Application No. 63/056,188, the entire contents of which are also hereby incorporated herein by reference.

[0089] While various embodiments may apply to various configurations of light field display systems known in the art, exemplary light field display systems in which a dynamic light field shaping layer as described herein may apply will be described with reference to exemplary vision testing systems (Figures 1A, 1B, 4A to 4C, and 7 to 10). Such examples are not intended to limit the scope of the systems and methods herein described, and are included to provide context, only, for non-limiting exemplary light field display systems and methods that may employ a dynamic light field shaping layer.
[0090] With reference to Figures 1A and 1B, and in accordance with different embodiments, an exemplary subjective vision testing device/system (interchangeably referred to as a corrective vision previewing device/system), generally referred to using the numeral 100, will now be described. At the heart of this exemplary system is a light field vision testing device such as a light field refractor or phoropter device 102. Generally, light field refractor 102 is a device comprising, as mentioned above, a light field display 104 and which is operable to display or generate one or more images, including optotypes, to a user or patient having his/her vision acuity (e.g. refractive error) tested.

[0091] In some embodiments, as illustrated in Figure 1B, light field display 104 comprises a light field shaping layer (LFSL) 106 overlaid or placed in front of a digital pixel display 108 (i.e. LCD, LED, OLED, etc.). For the sake of illustration, the following embodiments will be described within the context of a LFSL 106 defined, at least in part, by a lenslet array comprising an array of microlenses (also interchangeably referred to herein as lenslets) that are each disposed at a distance from a corresponding subset of image rendering pixels in an underlying digital display. It will be appreciated that while a light field shaping layer may be manufactured and disposed as a digital screen overlay, other integrated concepts may also be considered, for example, where light field shaping elements such as a textured or masked glass plate, beam-shaping light sources (e.g. directional light sources and/or backlit integrated optical grating array), or like component, may be coupled with integrally formed or manufactured integral components of a digital display system.
[0092] Accordingly, each lenslet will predictively shape light emanating from these pixel subsets to at least partially govern light rays being projected toward the user by the display device. As noted above, other light field shaping layers may also be considered herein without departing from the general scope and nature of the present disclosure, whereby light field shaping will be understood by the person of ordinary skill in the art to reference measures by which light, that would otherwise emanate indiscriminately (i.e. isotropically) from each pixel group, is deliberately controlled to define predictable light rays that can be traced between the user and the device’s pixels through the shaping layer.
[0093] For greater clarity, a light field is generally defined as a vector function that describes the amount of light flowing in every direction through every point in space. In other words, anything that produces or reflects light has an associated light field. The embodiments described herein produce light fields from an object that are not “natural” vector functions one would expect to observe from that object. This gives it the ability to emulate the “natural” light fields of objects that do not physically exist, such as a virtual display located far behind the light field display.
[0094] In one example, to apply this technology to vision correction, consider first the normal ability of the lens in an eye, as schematically illustrated in Figure 2A, where, for normal vision, the image is to the right of the eye (C) and is projected through the lens (B) to the retina at the back of the eye (A). As comparatively shown in Figure 2B, the poor lens shape and inability to accommodate (F) in presbyopia causes the image to be focused past the retina (D) forming a blurry image on the retina (E). The dotted lines outline the path of a beam of light (G). Naturally, other optical aberrations present in the eye will have different impacts on image formation on the retina. To address these aberrations, a light field display 104, in accordance with some embodiments, projects the correct sharp image (H) on the retina for an eye with a crystalline lens which otherwise could not accommodate sufficiently to produce a sharp image. The other two light field pixels (I) and (J) are drawn lightly, but would otherwise fill out the rest of the image.
[0095] As will be appreciated by the skilled artisan, a light field as seen in Figure 2C cannot be produced with a ‘normal’ two-dimensional display because the pixels’ light field emits light isotropically. Instead it is necessary to exercise tight control on the angle and origin of the light emitted, for example, using a microlens array or other light field shaping layer such as a parallax barrier, or combination thereof. Following with the example of a microlens array for LFSL 106, Figure 3A schematically illustrates a single light field pixel defined by a convex microlens 302 disposed at its focus from a corresponding subset of pixels in a digital pixel display 108 to produce a substantially collimated beam of light emitted by these pixels, whereby the direction of the beam is controlled by the location of the pixel(s) relative to the microlens. The single light field pixel produces a beam similar to that shown in Figure 2C where the outside rays are lighter and the majority inside rays are darker. The digital pixel display 108 emits light which hits the microlens 302 and it results in a beam of substantially collimated light (A).
[0096] Accordingly, upon predictably aligning a particular microlens array with a pixel array, a designated “circle” of pixels will correspond with each microlens and be responsible for delivering light to the pupil through that lens. Figure 3B schematically illustrates an example of a light field display assembly in which a LFSL 106 sits above a pixel display 108 to have pixels 304 emit light through the microlens array. A ray-tracing algorithm can thus be used to produce a pattern to be displayed on the pixel array below the microlens in order to create the desired virtual image that will effectively correct for the viewer’s reduced visual acuity.
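As a purely illustrative sketch of the geometric principle at play (and not of any particular ray-tracing algorithm referenced in this disclosure), the following Python snippet computes the direction of the substantially collimated beam produced by a pixel offset from the centre of its lenslet when the pixel plane sits at the lenslet's focal distance; a rendering routine can invert this relation to choose which pixel to light under each lenslet so that the resulting beam is directed toward the pupil.

```python
import numpy as np

def beam_direction(pixel_xy, lenslet_xy, focal_mm):
    """Unit direction of the collimated beam from a pixel at the lenslet's focal plane.

    The beam leaves along the line joining the pixel to the lenslet's optical centre;
    the z axis points from the display toward the viewer. All coordinates are in mm.
    """
    dx = lenslet_xy[0] - pixel_xy[0]
    dy = lenslet_xy[1] - pixel_xy[1]
    v = np.array([dx, dy, focal_mm], dtype=float)
    return v / np.linalg.norm(v)

# Example: a pixel 0.1 mm to the left of its lenslet centre, with a 2 mm focal length,
# yields a beam tilted slightly off the display normal toward the right.
print(beam_direction((-0.1, 0.0), (0.0, 0.0), 2.0))
```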
[0097] As will be detailed further below, the separation between the LFSL 106 and the pixel array 108, as well as the pitch of the lenses, can be selected as a function of various operating characteristics, such as the normal or average operating distance of the display, and/or normal or average operating ambient light levels.
[0098] In some embodiments, LFSL 106 may be a microlens array (MLA) defined by a hexagonal array of microlenses or lenslets disposed so to overlay a corresponding square pixel array of digital pixel display 108. In doing so, while each microlens can be aligned with a designated subset of pixels to produce light field pixels as described above, the hexagonal-to-square array mismatch can alleviate certain periodic optical artifacts that may otherwise be manifested given the periodic nature of the optical elements and principles being relied upon to produce the desired optical image corrections. Conversely, other geometries, such as a square microlens array, or an array comprising elongated hexagonal lenslets, may be favoured when operating a digital display comprising a hexagonal pixel array.
[0099] In some embodiments, the MLA may further or alternatively be overlaid or disposed at an angle (rotation) relative to the underlying pixel array, which can further or alternatively alleviate periodic optical artifacts.
[00100] In yet some further or alternative embodiments, a pitch ratio between the microlens array and pixel array may be deliberately selected to further or alternatively alleviate periodic optical artifacts. For example, a perfectly matched pitch ratio (i.e. an exact integer number of display pixels per microlens) is most likely to induce periodic optical artifacts, whereas a pitch ratio mismatch can help reduce such occurrences.
[00101] Accordingly, in some embodiments, the pitch ratio will be selected to define an irrational number, or at least, an irregular ratio, so to minimise periodic optical artifacts. For instance, a structural periodicity can be defined so to reduce the number of periodic occurrences within the dimensions of the display screen at hand, e.g. ideally selected so to define a structural period that is greater than the size of the display screen being used.
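The following short Python sketch illustrates the structural-period reasoning above under simplified assumptions (a one-dimensional pitch comparison, with illustrative pitch values only): approximating the lenslet-to-pixel pitch ratio as a fraction gives the distance over which the lenslet/pixel alignment pattern repeats, and a candidate design can be screened by checking that this repeat length exceeds the relevant display dimension.

```python
from fractions import Fraction

def structural_period_mm(lenslet_pitch_mm, pixel_pitch_mm, max_denominator=10000):
    """Distance after which the lenslet/pixel alignment pattern repeats (1D approximation)."""
    ratio = Fraction(lenslet_pitch_mm / pixel_pitch_mm).limit_denominator(max_denominator)
    # A ratio of a/b in lowest terms means b lenslets span exactly a pixels,
    # so the alignment pattern repeats every a pixels.
    return ratio.numerator * pixel_pitch_mm

# Example with illustrative values: a 1.0 mm lenslet pitch over 0.25 mm pixels (an exact 4:1 ratio)
# repeats every 1 mm, the worst case for periodic artifacts, whereas the same pitch over 0.26 mm
# pixels only repeats every ~13 mm.
print(structural_period_mm(1.0, 0.25))   # 1.0
print(structural_period_mm(1.0, 0.26))   # ~13.0
```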
[00102] While this example is provided within the context of a microlens array, similar structural design considerations may be applied within the context of a parallax barrier, diffractive barrier or combination thereof. In some embodiments, light field display 104 can render dynamic images at over 30 frames per second on the hardware in a smartphone.
[00103] Accordingly, a display device as described above and further exemplified below, can be configured to render a corrected or adjusted image via the light field shaping layer that accommodates, tests or simulates for the user’s visual acuity. By adjusting the image correction in accordance with the user’s actual predefined, set or selected visual acuity level, different users and visual acuities may be accommodated using a same device configuration, whereas adjusting such parameters for a given user may allow for testing for or simulation of different corrective or visual adjustment solutions. For example, by adjusting corrective image pixel data to dynamically adjust a virtual image distance below/above the display as rendered via the light field shaping layer, different visual acuity levels may be accommodated, and that, for an image input as a whole, for distinctly various portions thereof, or again progressively across a particular input.
[00104] As noted in the examples below, in some embodiments, light field rendering may be adjusted to effectively generate a virtual image on a virtual image plane that is set at a designated distance from an input user pupil location, for example, so to effectively push back, or move forward, a perceived image, or portion thereof, relative to the light field refractor device 102. In yet other embodiments, light field rendering may rather or alternatively seek to map the input image on a retinal plane of the user, taking into account visual aberrations, so to adaptively adjust rendering of the input image on the display device to produce the mapped effect. Namely, where the unadjusted input image would otherwise typically come into focus in front of or behind the retinal plane (and/or be subject to other optical aberrations), this approach allows mapping the intended image on the retinal plane and working therefrom to address designated optical aberrations accordingly. Using this approach, the device may further computationally interpret and compute virtual image distances tending toward infinity, for example, for extreme cases of presbyopia. This approach may also more readily allow, as will be appreciated by the below description, for adaptability to other visual aberrations that may not be as readily modeled using a virtual image and image plane implementation. In both of these examples, and like embodiments, the input image is digitally mapped to an adjusted image plane (e.g. virtual image plane or retinal plane) designated to provide the user with a designated image perception adjustment that at least partially addresses designated visual aberrations. Naturally, while visual aberrations may be addressed using these approaches, other visual effects may also be implemented using similar techniques.

[00105] An example of the effectiveness of the light field display in generating a diopter displacement (e.g. simulating the effect of looking through an optical component (i.e. a lens) of a given diopter strength or power) is shown in Figure 5, where a plot is shown of the angular resolution (in arcminutes) of an exemplary light field display comprising a 1500 ppi digital pixel display, as a function of the dioptric power of the light field image (in diopters). From this plot, it is clear that, in this particular example employing a static LFSL, the light field display is able to generate displacements (line 502) in diopters that have higher resolution corresponding to 20/20 vision (line 504) or better (e.g. 20/15, line 506), and close to 20/10 (line 508), here within a dioptric power range of 2 to 2.5 diopters.

[00106] Thus, in the context of a refractor 102, light field display 104 (in conjunction with light field rendering or ray-tracing methods referenced above) may, according to different embodiments, be used to replace, at least in part, traditional optical components.
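For a concrete sense of the virtual-image-plane approach above, the basic optics relation between a dioptric displacement and a virtual image distance can be sketched as follows; this is a minimal illustration of standard vergence arithmetic, not of any specific rendering method described herein.

```python
def virtual_image_distance_m(dioptric_displacement):
    """Virtual image distance (in metres) corresponding to a dioptric displacement.

    Vergence (in diopters) is the reciprocal of distance (in metres), so a displacement
    of magnitude |D| corresponds to a virtual image placed 1/|D| metres away, tending
    toward optical infinity as the displacement approaches zero.
    """
    if abs(dioptric_displacement) < 1e-9:
        return float("inf")   # optical infinity
    return 1.0 / abs(dioptric_displacement)

print(virtual_image_distance_m(2.0))   # 0.5: a 2 D displacement corresponds to a 50 cm virtual image
```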
[00107] In some embodiments, the light field display can display a virtual image at optical infinity, meaning that any level of accommodation-based presbyopia (e.g. first order) can be corrected for. In some further embodiments, the light field display can both push the image back or forward, thus allowing for selective image corrections for both hyperopia (far-sightedness) and myopia (nearsightedness). In yet further embodiments, variable displacements and/or accommodations may be applied as a function of non-uniform visual aberrations, or again to provide perceptive previewing or simulation of non-uniform or otherwise variable corrective powers/measures across a particular input or field of view.
[00108] However, the light field rendering system introduced above, in conjunction with various ray-tracing methods as referenced above, may also be used with other devices which may similarly comprise a light field display. For example, this may include smartphones, tablets, e-readers, watches, televisions, GPS devices, laptops, desktop computer monitors, smart televisions, handheld video game consoles and controllers, vehicular dashboard and/or entertainment displays, and the like, without limitation.

[00109] Accordingly, any light field processing or ray-tracing methods as referenced herein, and related light field display solutions, can be equally applied to image perception adjustment solutions for visual media consumption, as they can for subjective vision testing solutions, or other technologically related fields of endeavour. As alluded to above, the light field display and rendering/ray-tracing methods discussed above may all be used to implement, according to various embodiments, a subjective vision testing device or system such as a phoropter or refractor. Indeed, a light field display may replace, at least in part, the various refractive optical components usually present in such a device. Thus, vision correction light field ray tracing methods may equally be applied to render optotypes at different dioptric power or refractive correction by generating vision correction for hyperopia (far-sightedness) and myopia (nearsightedness), as was described above in the general case of a vision correction display. Light field systems and methods described herein, according to some embodiments, may be applied to create the same capabilities as a traditional instrument and to open a spectrum of new features, all while improving upon many other operating aspects of the device. For example, the digital nature of the light field display enables continuous changes in dioptric power compared to the discrete change caused by switching or changing a lens or similar; displaying two or more different dioptric corrections seamlessly at the same time; and, in some embodiments, the possibility of measuring higher-order aberrations and/or to simulate them for different purposes such as deciding for free-form lenses, cataract surgery operation protocols, IOL choice, etc.
[00110] Going back to Figure 1A, as producing a light field with angular resolution sufficient for accommodation correction over the full viewing ‘zone’ of a display would generally require an astronomically high pixel density, instead, a correct light field can be produced, in some embodiments, only at or around the location of the user’s pupil(s). To do so, where a location or position of the user’s eye is not otherwise rigidly constrained (e.g. within the context of a subject eye testing device or the like), the light field display can be paired with pupil tracking technology, as will be discussed below, to track a location of the user’s eyes/pupils relative to the display. The display can then compensate for the user’s eye location and produce the correct virtual image, for example, in real-time. Thus, in some embodiments, light field refractor 102 may include, integrated therein or interfacing therewith, a pupil/eye tracking system 110 to improve or enhance corrective image rendering by tracking a location of the user’s eye(s)/pupil(s) (e.g. both or one, e.g. dominant, eye(s)) and adjusting light field corrections accordingly. Such a system may comprise one or more eye/pupil tracking light sources, such as one or more infrared (IR) or near-IR (NIR) light source(s) to accommodate operation in limited ambient light conditions, leverage retinal retro-reflections, invoke corneal reflection, and/or other such considerations. For instance, different IR/NIR pupil tracking techniques may employ one or more (e.g. arrayed) directed or broad illumination light sources to stimulate retinal retro-reflection and/or corneal reflection in identifying and tracking a pupil location. Other techniques may employ ambient or IR/NIR light-based machine vision and facial recognition techniques to otherwise locate and track the user’s eye(s)/pupil(s). To do so, one or more corresponding (e.g. visible, IR/NIR) cameras may be deployed to capture eye/pupil tracking signals that can be processed, using various image/sensor data processing techniques, to map a 3D location of the user’s eye(s)/pupil(s). As mentioned above, in some embodiments, such eye/pupil tracking hardware/software may be integral to device 102, for instance, operating in concert with integrated components such as one or more front facing camera(s), onboard IR/NIR light source(s) (not shown) and the like. In other user environments, such as in a vehicular environment, eye/pupil tracking hardware may be further distributed within the environment, such as dash, console, ceiling, windshield, mirror or similarly-mounted camera(s), light sources, etc.
[00111] In one embodiment and as illustrated in Figure 4A, light field refractor 102 may be configured with light field display 104 located relatively far away (e.g. one or more meters) from the user’s eye currently being diagnosed. Note that the pointed line in Figures 4A to 4C is used to schematically illustrate the direction of the light rays emitted by light field display 104. Also illustrated is eye-tracker 110, which may be provided as a physically separate element, for example, installed at a given location in a room or similar. In some embodiments, the noted eye/pupil tracker 110 may include the projection of IR markers/patterns to help align the patient’s eye with the light field display. In some embodiments, a tolerance window (e.g. “eye box”) may be considered to limit the need to refresh the ray-tracing iteration. An exemplary value of the size of the eye box, in some embodiments, is around 6 mm, though smaller (e.g. 4mm) or larger eye boxes may alternatively be set to impact image quality, stability or like operational parameters.

[00112] Going back to Figure 1A, light field refractor 102 may also comprise, according to different embodiments and as will be further discussed below, one or more refractive optical components 112, a processing unit 114, a data storage unit or internal memory 116, one or more cameras 118, a power source 120, a network interface 122 for communicating via network to a remote database or server 124.
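As a minimal, purely hypothetical sketch of how such an eye-box tolerance window might gate the ray-tracing refresh (the names are illustrative, and the 6 mm default simply mirrors the example value mentioned above):

```python
import numpy as np

def needs_refresh(tracked_pupil_mm, last_render_pupil_mm, eye_box_mm=6.0):
    """Refresh the ray-tracing iteration only when the tracked pupil centre leaves the eye box."""
    displacement = np.linalg.norm(np.asarray(tracked_pupil_mm, dtype=float)
                                  - np.asarray(last_render_pupil_mm, dtype=float))
    return displacement > eye_box_mm / 2.0   # pupil has drifted out of the tolerance window

# Example: a 1 mm lateral drift stays within a 6 mm eye box, so no refresh is needed.
print(needs_refresh((1.0, 0.0, 0.0), (0.0, 0.0, 0.0)))   # False
```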
[00113] In some embodiments, power source 120 may comprise, for example, a rechargeable Li-ion battery or similar. In some embodiments, it may comprise an additional external power source, such as, for example, a USB-C external power supply. It may also comprise a visual indicator (screen or display) for communicating the device’s power status, for example whether the device is on/off or recharging.
[00114] In some embodiments, internal memory 116 may be any form of electronic storage, including a disk drive, optical drive, read-only memory, random-access memory, or flash memory, to name a few examples. In some embodiments, a library of chart patterns (Snellen charts, prescribed optotypes, forms, patterns, or other) may be located in internal memory 116 and/or retrievable from remote server 124 via network interface 122.
[00115] In some embodiments, one or more optical components 112 may be used in combination with the light field display 104, for example to shorten the size of refractor 102 while still offering an acceptable range in dioptric power. The general principle is schematically illustrated in the plots of Figures 6A to 6D. In these schematic plots, the image quality (e.g. the inverse of the angular resolution; higher is better) at which optotypes are small enough to be useful for vision testing lies above horizontal line 602, which represents typical 20/20 vision. Figure 6A shows the plot for the light field display having a static LFSL only, where we see the characteristic two peaks corresponding to the smallest resolvable point, one of which was plotted in Figure 5 (here inverted and shown as a peak instead of a basin), and where each region above the line may cover a few diopters of dioptric power, according to some embodiments. While the dioptric range may, in some embodiments, be more limited than needed when relying only on the light field display with a static LFSL, it is possible to shift this interval by adding one or more refractive optical components. This is shown in Figure 6B, where the regions above line 602 are shifted to the left (negative diopters) by adding a single lens in the optical path.
[00116] Thus, by using a multiplicity of refractive optical components 112, or by alternating sequentially between different refractive components 112 of increasing or decreasing dioptric power, it is possible to shift the center of the light field diopter range to any required value, as shown in Figure 6C, and thus the image quality may be kept above line 602 for any required dioptric power, as shown in Figure 6D. In some embodiments, a range of 30 diopters, from +10 to -20, may be covered, for example. In the case of one or more reels of lenses, the lens may be switched to provide a given larger dioptric power increment, and the light field display would be used to provide a finer continuous change to accurately pinpoint the required total dioptric power needed to compensate for the patient’s reduced visual acuity. This would still result in light field refractor 102 having a reduced number of refractive optical components compared to the number of components needed in a traditional refractor, while drastically enhancing the overall fine-tuning ability of the device.
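By way of a non-limiting illustration of the coarse/fine decomposition described above, the following Python sketch splits a target total dioptric power between a reel lens selected in coarse increments and a fine light field correction. It is not part of the original disclosure; the reel increment, light field fine-tuning range and function name are assumptions chosen for illustration only.

```python
def split_dioptric_power(target_power, reel_increment=3.0, lf_range=1.5):
    """Split a target total dioptric power (diopters) between a coarse reel
    lens (fixed increments) and a fine light field correction.

    Assumed example values: one reel lens every 3.0 D, and a light field
    display able to fine-tune within roughly +/-1.5 D of its centre.
    """
    # Pick the reel lens whose power is closest to the target.
    coarse = round(target_power / reel_increment) * reel_increment
    # The light field display supplies the remainder.
    fine = target_power - coarse
    if abs(fine) > lf_range:
        raise ValueError("target power not reachable with assumed increments")
    return coarse, fine

# Example: a -4.25 D correction -> -3.0 D reel lens plus -1.25 D light field.
print(split_dioptric_power(-4.25))  # (-3.0, -1.25)
```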
[00117] One example, according to one embodiment, of such a light field refractor 102 is schematically illustrated in Figure 4B, wherein the light field display 104 (herein shown comprising LFSL 106 and digital pixel display 108) is combined with a multiplicity of refractive components 112 (herein illustrated as a reel of lenses as an example only). By changing the refractive component used in combination with light field display 104 with a static LFSL, a larger dioptric range may be covered. This may also provide means to reduce the dimensions of device 102 as mentioned above, making it more portable, so that all its internal components may be encompassed into a shell, housing or casing 402. In some embodiments, light field refractor 102 may thus comprise a durable ABS housing that may be shock and harsh-environment resistant. In some embodiments, light field refractor 102 may also comprise telescopic feet for fixed or portable usage, optional mounting brackets, and/or a carrying case (not shown). In some embodiments, all components may be internally protected and sealed from the elements. [00118] In some embodiments, casing 402 may further comprise a head-rest or similar (not shown) to keep the user’s head still and substantially in the same location, thus, in such examples, foregoing the general utility of a pupil tracker or similar techniques by substantially fixing a pupil location relative to this headrest. [00119] In some embodiments, it may also be possible to further reduce the size of device 102 by adding, for example, a mirror or any device which may increase the optical path. This is illustrated in Figure 4C, where the length of the device was reduced by adding a mirror 404. This is shown schematically by the pointed arrow, which illustrates the light being emitted from pixel display 108 travelling through LFSL 106 before being reflected back by mirror 404 through refractive components 112 and ultimately hitting the eye.
[00120] The skilled technician will understand that different examples of refractive components 112 may include, without limitation, one or more lenses, sometimes arranged in order of increasing dioptric power in one or more reels of lenses similar to what is typically found in traditional refractors/phoropters; an electrically controlled fluid lens; an active Fresnel lens; and/or spatial light modulators (SLMs). In some embodiments, additional motors and/or actuators (not shown) may be used to operate refractive components 112. The motors/actuators may be communicatively linked to processing unit 114 and power source 120, and operate seamlessly with light field display 104 to provide the required dioptric power. [00121] For example, Figures 7A and 7B show a perspective view of an exemplary light field phoropter 102 similar to the one schematically shown in Figure 3B, but wherein the refractive component 112 is an electrically tunable liquid lens. In some embodiments, the electrically tunable lens may have a range of ±13 diopters. In accordance with other embodiments, one or more refractive components 112 may be removed from the system 102, as the dioptric range enabled by the refractive components 112 may be provided through mechanical displacement of a LFSL, as discussed below. That is, as further detailed below, the dynamic range of the light field display solution can be significantly increased by dynamically adjusting a relative distance of the LFSL, such as a microlens array, relative to the pixelated display such that, in combination with a correspondingly adaptive ray tracing process, a greater perceptive adjustment range can be achieved (e.g. a greater perceived dioptric correction range can be implemented).
[00122] In one illustrative embodiment, a 1000 dpi display is used with an MLA having a 65 mm focal distance and 1000 μm pitch, with the user’s eye located at a distance of about 26 cm. A similar embodiment uses the same MLA and user distance with a 3000 dpi display.
[00123] Other displays having resolutions including 750 dpi, 1000 dpi, 1500 dpi and 3000 dpi may also be used, as may MLAs with a focal distance and pitch of 65 mm and 1000 μm, 43 mm and 525 μm, 65 mm and 590 μm, 60 mm and 425 μm, 30 mm and 220 μm, and 60 mm and 425 μm, respectively, and user distances of 26 cm, 45 cm or 65 cm.
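Since the example configurations above are specified by display resolution in dpi, the corresponding pixel pitch follows directly from the 25.4 mm-per-inch conversion. The short Python sketch below (not part of the original disclosure; the function name is an assumption) performs this conversion for the resolutions listed above:

```python
MM_PER_INCH = 25.4

def pixel_pitch_um(dpi: int) -> float:
    """Pixel pitch in micrometres for a given display resolution in dpi."""
    return MM_PER_INCH / dpi * 1000.0

# Display resolutions mentioned above and their corresponding pixel pitches.
for dpi in (750, 1000, 1500, 3000):
    print(dpi, "dpi ->", round(pixel_pitch_um(dpi), 1), "um")
# 750 dpi -> 33.9 um, 1000 dpi -> 25.4 um, 1500 dpi -> 16.9 um, 3000 dpi -> 8.5 um
```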
[00124] Going back to Figure 1A, in some embodiments, eye-tracker 110 may further comprise a digital camera, in which case it may be used to further acquire images of the user’s eye to provide further diagnostics, such as pupillary reflexes and responses during testing for example. In other embodiments, one or more additional cameras 118 may be used to acquire these images instead. In some embodiments, light field refractor 102 may comprise built-in stereoscopic tracking cameras.
[00125] In some embodiments, feedback and/or control of the vision test being administered by system 100 may be given via a control interface 126. In some embodiments, the control interface 126 may comprise a dedicated handheld controller-like device 128. This controller 128 may be connected via a cable or wirelessly, and may be used by the patient directly and/or by an operator such as an eye professional. In some embodiments, both the patient and operator may have their own dedicated controller 128. In some embodiments, the controller may comprise digital buttons, an analog thumbstick, dials, touch screens, and/or triggers. [00126] In some embodiments, control interface 126 may comprise a digital screen or touch screen, either on refractor 102 itself or as part of an external module (not shown). In other embodiments, control interface 126 may let one or more external remote devices (e.g. a computer, laptop, tablet, smartphone, remote, etc.) control light field refractor 102 via network interface 122. For example, remote digital device 130 may be connected to light field refractor 102 via a cable (e.g. USB cable, etc.) or wirelessly (e.g. via Wi-Fi, Bluetooth or similar) and interface with light field refractor 102 via a dedicated application, software or website (not shown). Such a dedicated application may comprise a graphical user interface (GUI), and may also be communicatively linked to remote database 124.
[00127] In some embodiments, the user or patient may give feedback verbally and the operator may control the vision test as a function of that verbal feedback. In some embodiments, refractor 102 may comprise a microphone (not shown) to record the patient’s verbal communications, either to communicate them to a remote operator via network interface 122 or to directly interact with the device (e.g. via speech recognition or similar).
[00128] Going back to Figure 1A, processing unit 114 may be communicatively connected to data storage 116, eye tracker 110, light field display 104 and refractive components 112. Processing unit 114 may be responsible for rendering one or more images or optotypes via light field display 104 and, in some embodiments, jointly control refractive components 112 or a LFSL 106 position to achieve a required total change in dioptric power. It may also be operable to send and receive data to internal memory 116 or to/from remote database 124 via network interface 122.
[00129] In some embodiments, diagnostic data may be automatically transmitted/communicated to remote database 124 or remote digital device 130 via network interface 122 through the use of a wired or wireless network connection. The skilled artisan will understand that different means of connecting electronic devices may be considered herein, such as, but not limited to, Wi-Fi, Bluetooth, NFC, Cellular, 2G, 3G, 4G, 5G or similar. In some embodiments, the connection may be made via a connector cable (e.g. USB including microUSB, USB-C, Lightning connector, etc.). In some embodiments, remote digital device 130 may be located in a different room, building or city.
[00130] In some embodiments, two light field refractors 102 may be combined side-by-side to independently measure the visual acuity of both the left and right eyes at the same time. An example is shown in Figure 7C, where two units corresponding to the embodiment of Figures 7A or 7B (used as an example only) are placed side-by-side or fused into a single device.
[00131] In some embodiments, a dedicated application, software or website may provide integration with third-party patient data software. In some embodiments, the software required to operate refractor 102, and installed thereon, may be updated on-the-fly via a network connection and/or be integrated with the patient’s smartphone app for updates and reminders.
[00132] In some embodiments, the dedicated application, software or website may further provide a remote, real-time collaboration platform between an eye professional and user/patient, and/or between different eye professionals. This may include interaction between different participants via video chat, audio chat, text messages, etc.
[00133] In some embodiments, light field refractor 102 may be self-operated or operated by an optometrist, ophthalmologist or other certified eye-care professional. For example, in some embodiments, a user/patient may use refractor 102 in the comfort of his/her own home, in a store or a remote location.
[00134] In accordance with various embodiments, the light field display 102 may comprise various forms of systems for performing vision-based tests. For instance, while some embodiments described above relate to the light field display 102 comprising a refractor or phoropter for assessing a user’ s visual acuity, various other embodiments relate to a light field system 102 operable to perform, for instance, a cognitive impairment assessment. In some embodiments, the light field system 102 may display, for instance, moving content to assess a user’s ability to track motion, or perform any number of other cognitive impairment evaluations known in the art, such as those related to saccadic movement and/or fixation. Naturally, such embodiments may further relate to light field display systems that are further operable to track a user’s gaze and/or eye movement and store data related thereto for further processing and assessment.
[00135] With reference to Figure 8 and in accordance with one exemplary embodiment, a dynamic subjective vision testing method using vision testing system 100, generally referred to using the numeral 800, will now be described. As mentioned above, the use of a light field display enables refractor 102 to provide more dynamic and/or more modular vision tests than what is generally possible with traditional refractors/phoropters. Generally, method 800 seeks to diagnose a patient’s reduced visual acuity and produce therefrom, in some embodiments, an eye prescription or similar.
[00136] In some embodiments, eye prescription information may include, for each eye, one or more of: distant spherical, cylindrical and/or axis values, and/or a near (spherical) addition value.
[00137] In some embodiments, the eye prescription information may also include the date of the eye exam and the name of the eye professional that performed the eye exam. In some embodiments, the eye prescription information may also comprise a set of vision correction parameter(s) for operating any vision correction light field displays using the systems and methods described below. In some embodiments, the eye prescription may be tied to a patient profile or similar, which may contain additional patient information such as a name, address or similar. The patient profile may also contain additional medical information about the user. All information or data (i.e. set of vision correction parameter(s), user profile data, etc.) may be kept on external database 124. Similarly, in some embodiments, the user’s current vision correction parameter(s) may be actively stored and accessed from external database 124 operated within the context of a server-based vision correction subscription system or the like, and/or unlocked for local access via the client application post user authentication with the server-based system.
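As a non-limiting illustration of how the prescription and profile fields described in the two preceding paragraphs might be organized in software, consider the Python sketch below. The field names and structure are assumptions chosen for illustration and are not part of the original disclosure.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class EyePrescription:
    """Per-eye prescription values as described above (all powers in diopters)."""
    sphere: float                 # distant spherical value
    cylinder: float = 0.0         # distant cylindrical value
    axis: float = 0.0             # cylinder axis, in degrees
    near_add: float = 0.0         # near (spherical) addition value

@dataclass
class PatientRecord:
    """Prescription plus exam metadata and optional profile information."""
    right_eye: EyePrescription
    left_eye: EyePrescription
    exam_date: Optional[date] = None
    eye_professional: Optional[str] = None
    # Vision correction parameter(s) for driving a vision-correcting
    # light field display, keyed by parameter name.
    vision_correction_parameters: dict = field(default_factory=dict)
```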
[00138] Refractor 102 being, in some embodiments, portable, a large range of environments may be chosen to deliver the vision test (home, eye practitioner’s office, etc.). At the start, the patient’s eye may be placed at the required location. This may be done by placing his/her head on a headrest or by placing the objective (i.e. eyepiece) on the eye to be diagnosed. As mentioned above, the vision test may be self-administered or partially self-administered by the patient. For example, the operator (e.g. eye professional or other) may have control over the type of test being delivered, and/or be the person who generates or helps generate therefrom an eye prescription, while the patient may enter inputs dynamically during the test (e.g. by choosing or selecting an optotype, etc.).
[00139] As will be discussed below, the light field rendering methods described herein generally require an accurate location of the patient’s pupil center. Thus, at step 802, such a location is acquired. In some embodiments, such a pupil location may be acquired via eye tracker 110, either once, at intervals, or continuously. In other embodiments, the location may be derived from the device or system’s dimensions. For example, in some embodiments, the use of a head-rest and/or an eye-piece or similar provides an indirect means of deriving the pupil location. In some embodiments, refractor 102 may be self-calibrating and not require any additional external configuration or manipulation from the patient or the practitioner before being operable to start a vision test.
[00140] At step 804, one or more optotypes is/are displayed to the patient, at one or more dioptric powers (e.g. in sequence, side-by-side, or in a grid pattern/layout). The use of light field display 104 offers multiple possibilities regarding how the images/optotypes are presented, and at which dioptric power each may be rendered. The optotypes may be presented sequentially at different dioptric powers, via one or more dioptric power increments. In some embodiments, the patient and/or operator may control the speed and size of the dioptric power increments.
[00141] In some embodiments, optotypes may also be presented, at least in part, simultaneously on the same image but rendered at different dioptric powers. For example, Figure 9 shows an example of how different optotypes may be displayed to the patient but rendered with different dioptric powers simultaneously. These may be arranged in columns or in a table or similar. In Figure 9, we see two columns of three optotypes (K, S, V), varying in size, as they are perceived by a patient, each column being rendered at a different degree of refractive correction (e.g. dioptric power). In this specific example, the optotypes on the right are perceived as blurrier than the optotypes on the left. Ray tracing and adjusted image rendering techniques applicable to the generation of respective dioptric corrections for distinct portions of a rendered image (or for distinct optotypes or distinct renditions of a same optotype) are detailed in Applicant’s above-referenced U.S. Patent No. 10,761,604.
[00142] Thus, at step 806, the patient would communicate/verbalise this information to the operator or input/select via, for example, control interface 126 the left column as the one being clearer. Thus, in some embodiments, method 800 may be configured to implement dynamic testing functions that dynamically adjust one or more displayed optotype’s dioptric power in real-time in response to a designated input, herein shown by the arrow going back from step 808 to step 804 in the case where at step 808, the user or patient communicates that the perceived optotypes are still blurry or similar. In the case of sequentially presented optotypes, the patient may indicate when the optotypes shown are clearer. In some embodiments, the patient may control the sequence of optotypes shown (going back and forth as needed in dioptric power), and the speed and increment at which these are presented, until he/she identifies the clearest optotype. In some embodiments, the patient may indicate which optotype or which group of optotypes is the clearest by moving an indicator icon or similar within the displayed image.
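The dynamic adjustment loop between steps 804 and 808 described above can be illustrated with a simple bracketing sketch in Python. This is illustrative only; the response values, step sizes and function names are assumptions rather than part of the original disclosure.

```python
def run_dioptric_search(render_optotype, get_patient_response,
                        start_power=0.0, step=1.0, min_step=0.25,
                        max_trials=100):
    """Iteratively adjust the rendered dioptric power until the patient
    reports the optotype as clear (a sketch of steps 804 to 808 above).

    render_optotype(power): renders an optotype at the given dioptric power.
    get_patient_response(): returns 'clear', 'clearer' or 'blurrier', e.g.
    entered via control interface 126 or relayed verbally by the operator.
    """
    power, direction = start_power, -1.0
    for _ in range(max_trials):
        render_optotype(power)                 # step 804: display optotype(s)
        response = get_patient_response()      # step 806: patient feedback
        if response == "clear":
            return power                       # step 810: total power found
        if response == "blurrier":
            direction *= -1.0                  # overshot: reverse direction
            step = max(step / 2.0, min_step)   # and refine the increment
        power += direction * step              # loop back to step 804
    return power
```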
[00143] In some embodiments, the optotypes may be presented via a video feed or similar.
[00144] In some embodiments, when using a reel of lenses or similar (for refractive components 112), discontinuous changes in dioptric power may be unavoidable. For example, the reel of lenses may be used to provide a larger increment in dioptric power, as discussed above. Thus, step 804 may in this case comprise first displaying larger increments of dioptric power by changing lenses as needed, and, when the clearest or least blurry optotypes are identified, fine-tuning with continuous or smaller increments in dioptric power using the light field display. In accordance with some embodiments, a LFSL position may be dynamically adjusted to provide larger increments in dioptric power, or again to migrate between different operative dioptric ranges, resulting in a smoother transition than would otherwise be observed when changing lenses in a reel. Similarly, the LFSL may be displaced in small increments or continuously for fine-tuning a displayed optotype. In the case of optotypes presented simultaneously, the refractive components 112 may act on all optotypes at the same time, and the change in dioptric power between them may be controlled by the light field display 104, for example in a static position to accommodate variations within a given dioptric range (e.g. applicable to all distinctly rendered optotypes), or across different LFSL positions dynamically selected to accommodate different operative ranges. In such embodiments, or, for example, when using an electrically tunable fluid lens or similar, the change in dioptric power may be continuous.
[00145] In some embodiments, eye images may be recorded during steps 802 to 806 and analyzed to provide further diagnostics. For example, eye images may be compared to a bank or database of proprietary eye exam images and analyzed, for example via an artificial intelligence (AI) or machine learning (ML) system, or similar. This analysis may be done by refractor 102 locally or via a remote server or database 124.
[00146] Once the correct dioptric power needed to correct for the patient’s reduced visual acuity is defined at step 810, an eye prescription or vision correction parameter(s) may be derived from the total dioptric power used to display the best perceived optotypes.
[00147] In some embodiments, the patient, an optometrist or other eye-care professional may be able to transfer the patient’s eye prescription directly and securely to his/her user profile stored on said server or database 124. This may be done via a secure website, for example, so that the new prescription information is automatically uploaded to the secure user profile on remote database 124. In some embodiments, the eye prescription may be sent remotely to a lens specialist or similar to have prescription glasses prepared.
[00148] In some embodiments, vision testing system 100 may also or alternatively be used to simulate compensation for higher-order aberrations. Indeed, the light field rendering methods described above may be used to compensate for higher-order aberrations (HOA), and thus be used to validate externally measured or tested HOA, in that a measured, estimated or predicted HOA can be dynamically compensated for using the system described herein and thus subjectively visually validated by the viewer in confirming whether the applied HOA correction satisfactorily addresses otherwise experienced vision deficiencies. [00149] As will be appreciated by the skilled artisan, different light field image processing techniques may be considered, such as those introduced above and taught by Pamplona and/or Huang, for example, which may also influence other light field parameters to achieve appropriate image correction, virtual image resolution, brightness and the like.
[00150] In accordance with various embodiments, various systems and methods described herein may provide an improvement over conventional light field displays through the provision or actuation of a translatable LFSL to, for instance, expand the dioptric range of perception adjustments accessible to a light field display system. For instance, and in accordance with some embodiments, a smartphone, tablet, television screen, embedded entertainment system (e.g. in a car, train, airplane, etc.) or the like may benefit from a translatable LFSL to, for instance, increase a range of visual acuity corrections that may be readily accommodated with a similar degree of accommodation and rendering quality, or again, to expand a vision-based testing or accommodation range in a device so as to effectively render optotypes or testing images across a greater perceptive adjustment range with, for instance, a given LFSL geometry and/or display resolution.
[00151] In accordance with at least one embodiment, a vision testing system, such as that described above, may comprise a translatable LFSL so as to enable removal of one or more optical components (e.g. an electrically tunable lens, or one or more reels of lenses, or the like) from the light field system, while providing sufficient resolution to perform a vision-based assessment, non-limiting examples of which may include an eye exam or a cognitive impairment assessment. For instance, and in accordance with at least one embodiment, such a translatable LFSL may increase the widths of the peaks in Figures 6A to 6D, increasing the range of diopters that may be displayed with sufficient quality to perform an eye exam or cognitive assessment without requiring changing of optical components. Similarly, a translatable LFSL may further, in accordance with various embodiments, relate to reducing the size, weight, form factor, complexity, or the like, of a vision-based testing system. This may, for instance, reduce device cost while increasing the portability of a device for in-field assessments. [00152] Various embodiments described hereafter may relate to systems and methods related to vision testing systems that may employ a translatable LFSL. However, the skilled artisan will appreciate the scope and nature of the disclosure is not so limited, and that a dynamic or translatable LFSL may be applied in various other light field display systems and methods. Further, description will be provided with reference to mathematical equations and relationships that may be evaluated or considered in the provision of a light field system used to generate corrected content. Table 2 at the end of this section of the disclosure provides a summary of the terms and variables henceforth referred to for the reader’s convenience. [00153] The performance of a conventional eye exam may comprise placing a corrective lens in front of the eye being tested to correct the focal length of the eye having a depth DE, wherein the object may have a minimum feature separation of approximately 1 arcminute. To achieve this condition with, for instance, a light field-based visual testing system, the required retinal spot spacing is given by Equation 1.
spot spacing ≈ DE · tan(1 arcminute)    (Equation 1)
As the depth of an unaberrated eye is typically assumed to be 25 mm, this results in a required spot spacing of approximately 7.27 μm; in accordance with some embodiments, the assumed eye depth increases by approximately 1 mm per refractive error of -2.5 diopters.
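A quick numeric check of this relation is sketched below in Python. It assumes Equation 1 is simply the eye depth multiplied by the tangent of the 1 arcminute feature separation, as the 25 mm and 7.27 μm figures above suggest; the function name and example values are illustrative only and not part of the original disclosure.

```python
import math

def required_spot_spacing_um(eye_depth_mm: float,
                             feature_arcmin: float = 1.0) -> float:
    """Retinal spot spacing needed so features separated by the given
    angle (in arcminutes) land on distinct retinal spots, in micrometres."""
    angle_rad = math.radians(feature_arcmin / 60.0)
    return eye_depth_mm * math.tan(angle_rad) * 1000.0

print(round(required_spot_spacing_um(25.0), 2))   # ~7.27 um for a 25 mm eye
# Per the text, the assumed eye depth grows ~1 mm per -2.5 D of refractive
# error; e.g. a -5 D eye modelled with a 27 mm depth:
print(round(required_spot_spacing_um(27.0), 2))   # ~7.85 um
```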
[00154] Figure 10 schematically illustrates such an exemplary vision testing system 1000 comprising a pixelated display screen 1010 and MLA 1012, in accordance with at least one embodiment. In this non-limiting example, the nodal rays 1014 cross the centre of a pupil 1016, yielding a retinal nodal spot spacing of:
[equation not reproduced]
[00155] In this example, the spread around the nodal ray spot points is given in consideration of marginal rays 1018 reaching the edges of the pupil 1016:
[equation not reproduced]
This corresponds to a width and number of pixels on the display 1010 equal to:
[equations not reproduced]
[00156] Accordingly, the beam spot spacing on the retina for one nodal band is given by:
[equation not reproduced]
[00157] In order to achieve a continuous spread of beam spots on a retina, the spread of each nodal band, in accordance with some embodiments, may be equal to or greater than the nodal spacing, assuming uniform retinal spot spacing in a single nodal band of a given width (supporting expressions not reproduced). For a number O of overlapping bands, to obtain uniform retinal sub-spacing, the central spot of each band may be shifted, in accordance with some embodiments, according to:
[equation not reproduced]
[00158] To avoid light from the one or more pixels hitting two points on the retina, the position of the MLA 1012 may be set, in accordance with some embodiments, to prevent overlap between different nodal bands on the display 1010, for a given nodal spacing on the display 1010 and a given display spread:
[equations not reproduced]
[00159] The uniform retinal spacing of the overlapped nodal beams 1014 may therefore be given by the following (Equation 3), where the approximation denotes the possibility of a non-integer overlap factor O, in accordance with some embodiments:
[Equation 3 not reproduced]
[00160] To increase intensity delivered to the retina and increase contrast, the spread of light from the pixels of display 1010 may, in accordance with some embodiments, be decreased as much as possible. This may be calculated, in some embodiments, in consideration of the distance between the display 1010 and the pupil 1016, as well as the pitch and lenslet width of MLA 1012. The nodal ray 1014 from an extremum pixel to an extremum of the pupil 1016 on the other side of the optical axis may give one portion of the angular spread. Another portion of the angular spread may arise from the spread to fill the MLA 1012. Accordingly:
[equation not reproduced]
[00161] Assuming the resolving threshold of the eye for two retinal spots is p (i.e. the ratio, relative to the maximum, at which two beams cross), the required beam size may be calculated, in accordance with various embodiments. For instance, Figure 11 schematically illustrates one lenslet 1110 where the pixel 1112 position is given by yp. In Figure 11, yL is the lenslet central coordinate, WL is the width of the lenslet 1110, and the eye 1114 is centered on the optical axis 1116 with the eye lens O centered at the origin. The ray 1118 propagation distance in this example is denoted as positive from left to right, while angles are denoted as positive in the counter-clockwise direction, and y is positive above the optical axis 1116. Using a transfer matrix formulation of a thin lens, and in accordance with some embodiments, a ray 1118 that is emitted towards the lenslet 1110 is characterised by the following angle, where yRL is the coordinate at which the ray 1118 hits the lenslet 1110:
[equation not reproduced]
[00162] Similarly, the ray angle exiting the lens may be given by, in accordance with some embodiments:
[equation not reproduced]
[00163] In this exemplary embodiment, the position at the eye lens plane yre and the angle θre, after the eye lens O, may be given by, respectively:
[equations not reproduced]
[00164] Further, ray separation on the eye lens O may be described, in accordance with some embodiments, by Equation 4:
[Equation 4 not reproduced]
[00165] In accordance with some embodiments, the spot size from one lenslet 1110 on the eye lens O may be found by tracing marginal rays Mr (e.g. marginal rays 1018 in Figure 10):
[equations not reproduced]
[00166] Further, pupil size Wppl may be accounted for, in accordance with some embodiments, as Equation 5:
[Equation 5 not reproduced]
where the following identity was used:
[identity not reproduced]
[00167] For ray position and angles at the retina:
[equations not reproduced]
the spot size and divergence of beams are therefore given, in accordance with various embodiments, by:
[equations not reproduced]
[00168] In accordance with various embodiments, various light field systems and methods may comprise the treatment of rays as a rectangular beam. One such exemplary embodiment is shown in Figure 12, where focusing of a rectangular beam is schematically illustrated. With a ray tracing model, a beam size may approach a minimum theoretical size of zero width. In a practical system, however, a diffraction model may be employed. At the plane where the beam size is determined to be zero, the divergence of the beam may be used to obtain the actual size of the beam. Accordingly, assuming a beam with a rectangular width of WRect, the diffracted light is therefore given by:
[equation not reproduced]
[00169] To calculate beam divergence, in accordance with some embodiments, the first zero crossing width of the diffracted beam Sinc(WRect y / λz) may be considered, which occurs at:
[equation not reproduced]
The divergence of the diffracted beam is therefore, in accordance with some embodiments:
[equation not reproduced]
[00170] Hence, the beam spot size on the retina WMrr may be given by Equation 6:
[Equation 6 not reproduced]
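While Equation 6 itself is not reproduced here, the underlying idea, namely that the retinal spot cannot shrink below a diffraction-limited size set by the rectangular beam width, can be sketched with the standard far-field single-aperture estimate below. This is a Python illustration under that textbook assumption; the λ/W half-angle and the simple sum of geometric and diffractive contributions are assumptions and not the patent's exact formulation.

```python
def retinal_spot_estimate_um(geometric_spot_um: float,
                             beam_width_mm: float,
                             propagation_mm: float,
                             wavelength_nm: float = 550.0) -> float:
    """Rough retinal spot size: geometric (ray-traced) spot plus a
    diffraction term, using an assumed far-field half-angle of ~lambda/W."""
    wavelength_mm = wavelength_nm * 1e-6
    half_angle_rad = wavelength_mm / beam_width_mm      # assumed ~lambda/W
    diffraction_um = 2.0 * half_angle_rad * propagation_mm * 1000.0
    return geometric_spot_um + diffraction_um

# Example: a 2 mm wide beam focused to a (near) zero geometric spot still
# spreads by roughly 14 um over a 25 mm eye depth due to diffraction.
print(round(retinal_spot_estimate_um(0.0, 2.0, 25.0), 1))
```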
[00171] In accordance with various embodiments, a beam may be converging, diverging and/or collimated. For instance, considering adjacent lenslets:
[equations not reproduced]
[00172] In the exemplary embodiment of DPL being less than or equal to fL (diverging or collimated beams), as beams are expanding, marginal rays at adjacent edges of the lenslets may be prevented from overlapping at the pupil:
[equations not reproduced]
[00173] For a converging beam, and in accordance with some embodiments, a limit may be set on the viewer distance in consideration of rays from further adjacent lenslet edges:
[equation not reproduced]
[00175] Accordingly, for PL = WL:
[equation not reproduced]
[00176] Further, from Equation 2 above,
[equation not reproduced]
[00177] While the above-described conditions, in accordance with some embodiments, may not be strong conditions (if, for instance, part of an interfering beam hits the pupil, it may be difficult to notice, particularly at reduced intensities, i.e. DPL not equal to fL, due to the inverse square law), the conditions arising from Equation 2 may have more significant importance.
[00178] Further, and in accordance with other embodiments, angular pitch on a pupil for a corrected eye lens may be obtained by the following, which may be more similar to a conventional eye exam:
[equation not reproduced]
[00179] In this example, the separation between the rays from adjacent pixels on the eye is given by:
[equation not reproduced]
[00180] After the eye lens, the angular pitch of the rays in a single nodal band (e.g. that in Figure 10) may be found as:
[equation not reproduced]
[00181] In accordance with some embodiments, a light field angular view (FoV) may be given by the distance from the display to the eye, and the MLA position. Assuming there are sufficient lenslets to support the display size of a light field system, the FoV and spread on the retina may therefore, in accordance with some embodiments, be given by, respectively:
[equations not reproduced]
[00182] Having established various relationships between parameters accessible to various light field systems and methods, various exemplary embodiments will now be described with reference to Equations 1 to 7 above and to Figures 13 to 17D.
[00183] In accordance with at least one embodiment, the uniform spacing of retinal beam spots for overlapping nodal bands is plotted in Figure 13 as a function of DLE and FE, normalised to DPL/DLE, using Equation 3 above. In this example, the spacing is dependent on the eye focal length. Choosing DLE > 150 mm allows for a sufficiently small spacing (the specific values and expressions are not reproduced here); accordingly, DLE/DPL may be greater than 2.6.
[00184] Figure 14 shows an exemplary plot of the different regions of DLE/DPL, where arrows indicate the direction that satisfies the condition of Equation 7 above. The (in some embodiments, stronger) condition of Equation 2 may require that DLE/DPL > 1.5. To have a reasonable form factor, and in accordance with some embodiments, the regions 1410 and 1412 of Figure 14, and/or those in which DPL is less than or equal to FL, may be preferred for embodiments of a light field system or method. Conversely, region 1414 of Figure 14 may comprise one that may not be satisfied, as beyond this range, a system form factor may rapidly escalate to one that is unreasonable or undesirable.
[00185] In accordance with at least one embodiment, a light field system may comprise values of DPL = 100 mm and DLE = 250 mm, with fL = 100 mm.
[00186] Figures 15A to 15D are illustrative plots for a system having a display-to-eye distance DPE of 320 mm. In these exemplary embodiments, the calculated minimum beam size is a function of the MLA focal length and display-to-MLA distance. Figures 15A and 15B are illustrative plots showing calculations performed with the condition of avoiding single pixel beam interference from different lenslets. On the other hand, Figures 15C and 15D show the calculation without this condition applied.
[00187] Such calculations, in accordance with some embodiments, may be used to evaluate, for instance, potential device specifications, rather than evaluating actual retinal beam size values. For example, the actual spot size in an eye that is complex in nature may differ from that determined when using a simple lens model. Accordingly, and in accordance with different embodiments, the actual retinal spot size applied relative to the retinal spot spacing may be dependent on an eye functionality and/or empirical data, and may be calibrated experimentally.
[00188] Further, and in accordance with various embodiments, increasing the lenslet size may contribute to minimising the retinal spot. This is shown, by way of example only, in the non-limiting illustrative plots of Figures 16A and 16B.
[00189] In accordance with various embodiments, Figures 17A to 17D show exemplary plots of the FoV and retina spread for a display-to-eye distance of 320 mm. In these examples, and in accordance with various embodiments, larger lenslets result in a smaller FoV and retinal spread. Further, retinal spread may be reduced for positive refractive errors, an effect that may be considered when calculating a light field to be displayed, in accordance with various embodiments. [00190] In accordance with at least one embodiment, Table 1 below shows exemplary dioptric error corrections obtained in practice for various DPL distances in millimetres. In this non-limiting example, the light field display system comprised an MLA with a lenslet focal length of 106 mm, with 2 mm pitch and lenslet width, a 31.7 μm pixel size display, and an eye-to-display distance of 320 mm.
Table 1: Dioptric Error vs DPL
[table values not reproduced]
[00191] In the example of Table 1, while measured values did not exactly match calculated values, the general trend expected based on Equations 1 to 7 was observed, in accordance with various embodiments. That is, while the actual eye is more complex than the simple lens assumed in calculations, such deviation is expected, and calibration may be performed, for instance, based on eye functionality statistical data or the like, in accordance with some embodiments.
[00192] Nevertheless, the results of Table 1 highlight, in accordance with some embodiments, how a dioptric error range may be enhanced over that of conventional systems through, for instance, a dynamic displacement of a LFSL relative to a pixelated display. For instance, while conventional systems may maintain a fixed MLA at a distance from the display corresponding to the focal length of the lenslets, placing the LFSL at a distance other than that corresponding to the lenslet focal length from the display, or dynamically displacing it relative thereto, may increase the dioptric range of perception adjustments accessible to the light field display system.
[00193] In yet other embodiments, both the pixelated display and the LFSL may remain in place while still achieving a similar effect by otherwise adjusting an optical path length between them. Indeed, the selective introduction of one or more liquid cells or transparent (e.g. glass) blocks within this optical path may provide a similar effect. Other means of dynamically adjusting an optical path length between the pixelated display and LFSL may also be considered without limitation, and without departing from the general scope and nature of the present disclosure.
[00194] In any such embodiment, such a LFSL (i.e. one that may be disposed at a selectively variable optical path length from a display screen), herein also referred to as a dynamic LFSL, may comprise various light field shaping layers known in the art, such as an MLA, a parallax barrier, an array of apertures, such as a pinhole array, or the like, and may be fabricated by various means known in the art.
[00195] In accordance with various embodiments, a dynamic LFSL may be coupled with a display screen via, for instance, one or more actuators that may move the LFSL towards or away from (i.e. perpendicularly to) a digital display, and thus control, for instance, a range of dioptric corrections that may be generated by the digital display. Naturally, an actuator may otherwise dynamically displace a pixelated display relative to a fixed LFSL to achieve a similar effect in dynamically adjusting an optical path length between them.
[00196] For instance, Figure 18A shows a schematic of a display system 1800 (not to scale) comprising a digital display 1810 having an array of pixels 1812. In this example, conventional red, green, and blue subpixels are shown as grey, black, and white pixels, respectively. In order to generate a light field, a LFSL 1830, represented in Figure 18A as a parallax barrier 1830 having a barrier width (pitch) 1860, is coupled to the display 1810 via actuators 1820 and 1822, and is disposed between the display 1810 and two viewing locations 1840 and 1842, represented by white and grey eyes, respectively.
[00197] In accordance with various embodiments, view zones 1840 and 1842 may correspond to, for instance, two different eyes of a user, or eyes of two or more different users. While various light field shaping systems may, in accordance with various embodiments, address more than one eye, for instance to provide two different dioptric corrections to different eyes of the user, this embodiment will henceforth be described with reference to the first eye 1840, for simplicity. Further, it will be appreciated that while the dynamic LFSL 1830 in Figure 18A is shown in one dimension, various other embodiments relate to the LFSL 1830 comprising a two dimensional configuration, such as a lenslet array or a static or dynamic 2D parallax barrier (e.g. an LCD screen that may be activated to provide an arbitrary configuration of opaque and transparent pixels), and/or may comprise a MLA 1830 that may be dynamically adjusted or placed at various distances from the display 1810 to provide a designated light field to, for instance, enable a designated range of dioptric corrections via activation of appropriate pixels 1812.
[00198] Figure 18A shows a first configuration in which pixels of the display 1810 are activated to provide, for instance, a perception adjustment corresponding to a designated dioptric error correction (e.g. a dioptric error of -4.25), for a viewer at the first viewing location 1840. In this example, the user is at a distance 1850 from the dynamic LFSL 1830, while the LFSL 1830 is at a distance 1852 from the screen 1810, where the distance 1852 corresponds to, for instance, the focal length of the microlens LFSL 1830, as may be employed in conventional light field systems. Without adjustment, the user, at a distance corresponding to the sum of distances 1850 and 1852 (e.g. the distance between a display screen and the user’s eye in a refractor system used for displaying various optotypes), is unable to perceive, at a sufficiently high rendering quality, the dioptric adjustment (e.g. -4.25 diopters) provided by the activated pixels, for instance due to geometrical constraints and/or limitations in screen resolution. For instance, the configurational parameters may lead to an undesirable viewing experience, and/or a lack of perception of the visual correction requiring a particular configuration of activated pixels as a result of the considerations noted above. For example, while a perceptive adjustment may be rendered, it may not exhibit the level of resolution and optical discernment required to conduct an effective vision test.
[00199] In accordance with various embodiments, actuators 1820 and 1822 may translate the dynamic LFSL 1830 towards or away from the display 1810, i.e. at a distance within or beyond the MLA focal length for a lenslet LFSL, to dynamically improve a perception adjustment rendering quality consistent with testing requirements and designated or calibrated in accordance with the corrective range of interest. In Figure 18B, actuators 1820 and 1822 have reconfigured the display system 1800 such that the LFSL 1830 has been dynamically shifted towards the display 1810 by a distance 1855, resulting in a new distance 1851 between the LFSL 1830 and viewing location 1840, and a new separation 1853 between the display 1810 and LFSL 1830. In this more optimised configuration, perception adjustment quality can be improved to a desired, designated or optimized level thereby allowing for a given optical effect or test to be adequately or optimally conducted. The skilled artisan will appreciate that this description is for illustrative purposes, only, and that various dynamic LFSLs, such as a dynamic MLA, may function within this scope of the disclosure in accordance with various mechanisms to provide, for instance, a larger or designated range of image perception adjustments.
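The geometric bookkeeping implied by Figures 18A and 18B, namely shifting the LFSL by a given travel while the display and viewer stay put, can be sketched as follows. This Python illustration is not part of the original disclosure; distances are labelled by the reference numerals above, and the helper name and example values are assumptions.

```python
def shift_lfsl(display_to_lfsl_mm: float, lfsl_to_eye_mm: float,
               shift_toward_display_mm: float):
    """Update the separations 1853 and 1851 after moving the LFSL 1830
    toward the display 1810 by the travel 1855; the total display-to-eye
    distance is unchanged since only the LFSL moves."""
    new_display_to_lfsl = display_to_lfsl_mm - shift_toward_display_mm  # 1853
    new_lfsl_to_eye = lfsl_to_eye_mm + shift_toward_display_mm          # 1851
    assert abs((new_display_to_lfsl + new_lfsl_to_eye)
               - (display_to_lfsl_mm + lfsl_to_eye_mm)) < 1e-9
    return new_display_to_lfsl, new_lfsl_to_eye

# Example with assumed values: LFSL initially 53.5 mm from the display and
# 306.5 mm from the eye, moved 10 mm toward the display.
print(shift_lfsl(53.5, 306.5, 10.0))  # (43.5, 316.5)
```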
[00200] The skilled artisan will appreciate that various actuators may be employed to dynamically adjust a LFSL with, for instance, high precision, while having a robustness to reliably adjust a LFSL or system thereof (e.g. a plurality of LFSLs, a LFSL comprising a plurality of PBs, MLAs, or the like). Furthermore, embodiments comprising heavier LFSL substrates (e.g. Gorilla glass or like tempered glass) may employ, in accordance with some embodiments, particularly durable and/or robust actuators, examples of which may include, but are not limited to, electronically controlled linear actuators, servo and/or stepper motors, rod actuators such as the PQ12, L12, L16, or P16 Series from Actuonix® Motion Devices Inc., or the like. The skilled artisan will further appreciate that an actuator or actuator step size may be selected based on a screen or lenslet size, whereby larger elements may, in accordance with various embodiments, require only larger steps to introduce distinguishable changes in user perception of various pixel configurations. Further, various embodiments relate to actuators that may communicate with a processor/controller via a driver board, or be directly integrated into a processing unit for plug-and-play functionality.
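By way of illustration only, a controller might convert a target display-to-LFSL separation into actuator travel as in the sketch below. The `ActuatorDriver` interface, step size and travel limits are hypothetical placeholders and do not reflect the API of any of the actuators named above.

```python
class ActuatorDriver:
    """Hypothetical linear-actuator interface; positions in micrometres."""
    def __init__(self, step_um: float = 1.0, travel_um: float = 20000.0):
        self.step_um, self.travel_um, self.position_um = step_um, travel_um, 0.0

    def move_to(self, target_um: float) -> float:
        # Clamp to the available travel and quantise to whole steps.
        clamped = max(0.0, min(target_um, self.travel_um))
        self.position_um = round(clamped / self.step_um) * self.step_um
        return self.position_um

def set_lfsl_separation(driver: ActuatorDriver,
                        reference_separation_mm: float,
                        target_separation_mm: float) -> float:
    """Move the LFSL so the display-to-LFSL separation matches the target,
    measured from a known reference (e.g. fully retracted) position."""
    offset_um = (target_separation_mm - reference_separation_mm) * 1000.0
    return driver.move_to(offset_um)

# Example: retracted reference at a 40.0 mm separation, target 43.5 mm.
print(set_lfsl_separation(ActuatorDriver(), 40.0, 43.5))  # 3500.0 um
```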
[00201] While Figures 18A and 18B show a dynamic adjustment of a LFSL layer in a direction perpendicular to the screen, the skilled artisan will appreciate that such perpendicular adjustments (i.e. changing the separation 1853 between the display 1810 and LFSL 1830) may result in a modification of an optimal viewing distance 1851 from the LFSL 1830. As such, the separation 1853 may be adjusted to configure a system 1800 for a wide range of preferred viewing positions in addition to, or alternatively to, providing a range of or designated perception adjustments. [00202] Furthermore, as readily available actuators may finely adjust and/or displace the LFSL 1830 with a high degree of precision (e.g. micron-precision), various embodiments of a dynamic LFSL as herein described relate to one that may be translated perpendicularly to a digital display to enhance user experience.
[00203] In accordance with various embodiments, a translatable LFSL, such as that of Figures 18A and 18B, may be employed in a vision-based testing platform. For instance, and as discussed above, a refractor or phoropter for assessing a user’s visual acuity may comprise a translatable LFSL operable to be placed at, for instance, a designated distance from a display screen, so as to provide a system geometry that enables display of a preferred perception adjustment (e.g. dioptric correction). For instance, a conventional light field refractor having a LFSL (e.g. MLA) disposed at a fixed distance (e.g. a focal length away) from a pixelated display screen may have inherent limitations arising from, for instance, screen resolution or size. Such systems may therefore only enable display of a particular range of optotypes without the use of additional optical components. However, and in accordance with various embodiments, a light field refractor having a translatable LFSL may allow for adjustment of the system geometry, thereby enabling an adjusted range of optotypes that may be displayed, without the use of one or more of the optical elements required by the conventional system.
[00204] Similarly, and in accordance with other embodiments, a light field display system having a translatable LFSL may be employed to perform cognitive impairment tests, as described above. In some embodiments, the use of a light field display in performing such assessments may allow for accommodation of a reduced visual acuity of a user undergoing a test. For instance, a dioptric correction may be applied to displayed content to provide the user with an improved perception thereof, and therefore improve the quality of evaluation. Further, such a system may employ a translatable or dynamic LFSL to quickly and easily apply the appropriate dioptric correction for the user. In some embodiments, such a light field system may be portable (e.g. to quickly assess an athlete’s cognitive ability following a collision during performance of a sport), and may therefore benefit from the removal of one or more optical components required to generate an adequate range of optotypes for testing. [00205] A LFSL as herein disclosed, in accordance with various embodiments, may further or alternatively be dynamically adjusted in more than one direction. For instance, in addition to providing control of the distance between a display and a single LFSL (e.g. a single parallax barrier) oriented substantially parallel thereto, the LFSL may further be dynamically adjustable in up to three dimensions. The skilled artisan will appreciate that actuators, such as those described above, may be coupled to displace any one LFSL, or system comprising a plurality of light field shaping components, in one or more directions. Yet further embodiments may comprise one or more LFSLs that dynamically rotate in a plane of the display to, for instance, change an orientation of light field shaping elements relative to a pixel or subpixel configuration. Furthermore, in addition to providing control over the distance between a LFSL and a screen, a LFSL as herein described may further allow for dynamic control of a LFSL layer pitch, or barrier width, in embodiments comprising a parallax barrier. In accordance with various further embodiments, a light field shaping system or device may comprise a plurality of independently addressable parallax barriers. These and similar embodiments are further described in the above-referenced United States Patent Application No. 63/056,188.
[00206] In accordance with at least one embodiment, Figure 19 is a diagram illustrating an exemplary process, generally referred to using the numeral 1900, for providing a designated user perception adjustment using, at least in part, a dynamic LFSL. In this example, the process 1900 comprises input of parameters 1902 required by a light field display system, such as the type of LFSL employed (e.g. MLA, parallax barrier, or the like), and any required parameters related thereto or related to the display system. The scope and nature of parameters required as input for displaying perception-adjusted content will be understood by the skilled artisan. In some embodiments, where pupil location(s) or view zones are not input or received with parameters 1902, the pupil location may be determined, for instance using eye tracking processes, in process step 1904. Further, any designated perception adjustments 1904 (e.g. dioptric correction), or range thereof 1904, may further be received by the display system or a processor associated therewith.
[00207] In accordance with various embodiments, the process 1900 may comprise any one or more ray tracing processes 1906 to compute display content according to any or all of the input parameters 1902 and 1904. Further, any ray tracing processes may further consider a LFSL position, as described above, which may, in accordance with various embodiments, comprise a range of potential positions that may be calculated 1908 to be optimal or preferred in view of, for instance, the range of dioptric corrections 1904 required. Accordingly, ray tracing 1906 and LFSL position 1908 calculations may be solved according to the Equations described above, or iteratively calculated based on, for instance, system constraints and/or input parameters. Upon determining a preferred LFSL position (e.g. distance from the display screen), the system may then position 1910 the LFSL, for instance via one or more automated actuators, at the preferred position for display 1912 of the perception-adjusted content.
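The overall flow of process 1900 can be summarised in the Python-style sketch below. It is illustrative only; the function and object names are placeholders, and the LFSL-position calculation is shown as an abstract call since the governing relations are system-specific.

```python
def run_perception_adjusted_display(lfsl_params, display_params,
                                    eye_tracker, renderer, actuator,
                                    requested_corrections):
    """Sketch of process 1900: gather inputs, locate the pupil, choose an
    LFSL position for the requested correction range, move the LFSL, and
    render the perception-adjusted content."""
    # Step 1902: system parameters (LFSL type, pitch, display resolution, ...).
    params = {**lfsl_params, **display_params}

    # Step 1904: pupil location (tracked here if not otherwise supplied) and
    # the designated perception adjustment(s) or range thereof.
    pupil_xyz = eye_tracker.locate_pupil()

    # Step 1908: choose a preferred display-to-LFSL separation for the
    # requested dioptric range (abstracted; could be a calibration lookup).
    separation_mm = renderer.preferred_lfsl_separation(
        params, pupil_xyz, requested_corrections)

    # Step 1910: translate the LFSL via the actuator(s).
    actuator.move_to_separation(separation_mm)

    # Steps 1906/1912: ray trace and display the adjusted content.
    frame = renderer.ray_trace(params, pupil_xyz, separation_mm,
                               requested_corrections)
    renderer.display(frame)
```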
[00208] The skilled artisan will appreciate that the process steps depicted in the exemplary embodiment of Figure 19 may, in accordance with various embodiments, be performed in various orders without departing from the scope or nature of the disclosure. Further, any or all process steps may be performed at the initiation of a viewing session, or may be periodically or continuously updated throughout viewing. For instance, viewing parameters 1904 may be updated as eye tracking processes determine a new viewing location, which may initiate calculation 1908 of, and displacement 1910 to, a new corresponding LFSL position. Similarly, should a different dioptric adjustment be preferred, for instance during the course of an eye examination, a LFSL may be displaced 1910 during display of content 1912. Further, and in accordance with various embodiments, adjustment 1910 of the LFSL may be performed in an iterative or feedback-based process, whereby the LFSL may be adjusted, for instance in real-time, to improve or optimise a display-to-LFSL distance during operation.
[00209] Generally, a LFSL position may be selected (e.g. calculated 1908) or adjusted based on the application at hand. Similarly, system specifications (e.g. the resolution of a pixelated display screen, the pitch and/or focal length of lenslets of a microlens array, or the like) may be selected based on the particular needs of an assessment or display system. For example, and in accordance with one embodiment, a given eye examination may require a relatively large range of dioptric power adjustments (e.g. 20+ diopter range), while also requiring a visual content quality parameter (e.g. visual content provided with resolution of 1 arcminute angular pitch or smaller, a view zone width, or the like). A system operable to this end may therefore comprise different components or configurations than a system that, for instance, is designed to be portable and is subject to various weight and/or size restrictions. For example, and in accordance with one embodiment, a portable system for performing saccade and/or vergence tests to assess for a cognitive impairment may require lower resolution and/or range of accessible perception adjustments than is required for a stationary eye examination system. Further, a portable system may be further limited in the amount of space over which a LFSL may be translated to affect a depth-based perception adjustment. Accordingly, various embodiments relate to systems and methods for providing perception adjustments with components and/or ranges of LFSL translation that are selected for a particular application.
[00210] To this end, Figures 20A to 23J are exemplary plots illustrating non-limiting parameter spaces for exemplary light field shaping system configurations. It will be appreciated that, in accordance with various embodiments, system configurations and/or component specifications may enable, for instance, different perception adjustments or ranges thereof in view of one or more system or application parameters. For example, an MLA having a given lenslet pitch and focal length may enable a distinct range of perception adjustments for visual content (e.g. shifts in dioptric powers) at or below a required resolution threshold for a distinct system geometry (e.g. a particular LFSL position). If a perception adjustment is then required during, for instance, an eye examination or cognitive impairment assessment that is outside of that distinct range in view of, for instance, the resolution requirement for the presented content, the LFSL position may then be adjusted to select a new distinct geometry corresponding to a new distinct range of perception adjustments.
[00211] It will be appreciated that such distinct ranges of perception adjustments may or may not overlap, in accordance with different embodiments. For example, a first distinct range of perception adjustments corresponding to a first geometry may enable a dioptric power range of -5 to +5 diopters, while a second distinct range of perception adjustments corresponding to a second geometry may enable a dioptric power range of -1 to +8 diopters. Accordingly, a system geometry may be selected to, for instance, select the entire range of perception adjustments required for a particular application (e.g. a vision-based cognitive assessment). In accordance with another embodiment, a plurality of system geometries may be selected over the course of an examination in order to select different perception adjustment ranges which, together, enable the entire range of perception adjustments required for an assessment.
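A minimal sketch of such a geometry selection is given below; the candidate geometries and their dioptric ranges mirror the illustrative values of this paragraph, while the data layout and function name are assumptions made for this example only.

```python
# Hedged sketch: choose system geometries whose dioptric ranges cover a required interval.
# The candidate values below mirror the illustrative ranges discussed above.

def geometries_covering(required, geometries):
    """Return geometries needed to cover the required dioptric interval, or None."""
    lo, hi = required
    for g in geometries:                                   # prefer a single covering geometry
        if g["range"][0] <= lo and g["range"][1] >= hi:
            return [g]
    chosen, covered_to = [], lo                            # otherwise chain overlapping ranges
    for g in sorted(geometries, key=lambda item: item["range"][0]):
        if g["range"][0] <= covered_to < g["range"][1]:
            chosen.append(g)
            covered_to = g["range"][1]
            if covered_to >= hi:
                return chosen
    return None                                            # not achievable with these geometries

candidates = [
    {"lfsl_position_mm": 40.0, "range": (-5.0, +5.0)},     # first geometry (illustrative)
    {"lfsl_position_mm": 55.0, "range": (-1.0, +8.0)},     # second geometry (illustrative)
]
print(geometries_covering((-5.0, +8.0), candidates))       # both geometries are selected together
```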
[00212] In accordance with one exemplary embodiment, Figures 20A to 20I illustrate various exemplary parameters simulated as a function of MLA focal length for various dioptric powers accessible to a light field system having a particular configuration and component specifications. This exemplary embodiment relates to a LFSL and display that may be arranged to maximise the range of dioptric corrections that may be obtained by a translating LFSL (in this case an MLA), as described above. In this example, the display screen comprises a Sharp™ display screen (51.84 mm x 51.84 mm active area) with a 24 µm pixel pitch. As illustrated in Figure 20I, the maximal range of continuous dioptric corrections that is achievable with this system while maintaining a cut-off resolution of 1 arcminute angular pitch or less is 23 diopters. This maximal range was obtained with an MLA characterised by a ~53.5 mm focal length and a 4.6 mm pitch, while a 360 mm eye-to-display distance was maintained. While this 23-diopter range corresponds to a dioptric correction of an eye power error between 7 and 30 diopters, this range may be shifted to, for instance, correct an eye error within a different dioptric range using an additional optical component. For example, and in accordance with one embodiment, a 15-diopter lens placed in front of the eye of a user may enable the presentation of visual content with a perception adjustment corresponding to an eye dioptric error of -8 to 15 diopters, thereby enabling, for instance, an eye examination in this range. In accordance with another embodiment, a tunable lens may be employed to selectively increase or decrease a perception adjustment based on the needs of a particular assessment.
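The range shift effected by such an offset lens can be illustrated with the simple arithmetic below; the function name is introduced for this example only and the values reproduce those given above.

```python
# Minimal arithmetic sketch: an offset lens of power P (in diopters) placed in front of
# the eye shifts the correctable eye-error interval by -P.

def shifted_error_range(native_range_diopters, offset_lens_power):
    lo, hi = native_range_diopters
    return (lo - offset_lens_power, hi - offset_lens_power)

print(shifted_error_range((7.0, 30.0), 15.0))   # -> (-8.0, 15.0), as in the example above
```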
[00213] In this example, Figure 20A is a plot of the display-to-MLA distance as a function of MLA focal length for different dioptric powers, highlighting the ability of a translatable LFSL to enable selection of a preferred perception adjustment by selecting or adjusting a system geometry. Figure 20B is a plot of the corresponding focus spot size at the retina. Figure 20C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length. Figure 20D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 20E is a plot of the corresponding cut-off resolution. Figure 20F is a plot of view zone separation from a reference pupil of 5 mm diameter, while Figure 20G is a plot of spread on the retina. Figure 20H is a plot of the corresponding field of view as a function of MLA focal length. Finally, Figure 20I is a plot of the maximum number of continuous dioptric powers (with 1 diopter step) that can be corrected for by changing the MLA-to-display distance.
[00214] In accordance with another embodiment, Figures 21A to 21J illustrate various exemplary parameters simulated as a function of MLA focal length for various dioptric powers accessible to a different light field system having a different configuration and different component specifications. In this example, the LFSL layer comprises an MLA having a pitch of 1.98 mm and focal length of ~24.5 mm, while the eye-to-display distance was selected as 190 mm. The system again comprised a display with 24 µm pixel pitch. In this example, the maximum continuous span of diopters achievable at 1 arcminute or less angular pitch/resolution was 19 diopters, corresponding to perception adjustments between 11 and 30 diopters. As described above, depending on the application at hand, an additional optical component may be introduced to the system to shift this range of dioptric corrections. For example, a 15-diopter lens placed in front of the eye of a user may enable correction for a range of eye power errors between -4 and 15 diopters.
[00215] In this example, Figure 21A is a plot of the MLA-to-display distance as a function of MLA focal length for different eye power errors. Figure 21B is a plot of the corresponding focus spot size at the retina. Similarly, Figure 21C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length. Figure 21D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 21E is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina. Figure 21F is a plot of view zone separation from a reference 5 mm diameter pupil, while Figure 21G is a plot of spread on the retina. Figure 21H is a plot of the corresponding field of view as a function of MLA focal length. Figure 21I is a plot of the beam size on the cornea. Finally, Figure 21J is a plot of the maximum number of continuous dioptric power error points (spaced by 1 diopter steps) that can be corrected for by changing the MLA-to-display distance.
[00216] In addition to, or as an alternative to, maintaining a threshold resolution of presented content, various other visual content quality parameters may be considered when selecting, for instance, light field system components (e.g. LFSL specifications) and/or geometries (e.g. LFSL position). For example, and without limitation, various embodiments relate to selecting system components and configurations in view of a corneal beam size constraint. For example, one embodiment relates to the selection of an MLA with specifications and a position within the system such that the beam size on the cornea of a user is maintained between 2.7 mm and 3.7 mm. This may, in accordance with one embodiment, enable a maximisation of the focus range of retinal beam spots.
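By way of non-limiting illustration, the sketch below screens simulated configurations against both such visual content quality parameters (a cut-off angular pitch and a corneal beam-size window); the field names and the candidate values are assumptions introduced for the purpose of this example.

```python
# Hedged sketch: screen simulated configurations (e.g. from parameter sweeps such as those
# of Figures 20A to 23J) against a resolution threshold and a corneal beam-size window.

def meets_quality_constraints(config,
                              max_angular_pitch_arcmin=1.0,
                              corneal_beam_mm=(2.7, 3.7)):
    """Return True if a simulated configuration satisfies both quality parameters."""
    beam_lo, beam_hi = corneal_beam_mm
    return (config["angular_pitch_arcmin"] <= max_angular_pitch_arcmin
            and beam_lo <= config["corneal_beam_mm"] <= beam_hi)

candidates = [
    {"mla_focal_mm": 20.0, "angular_pitch_arcmin": 0.9, "corneal_beam_mm": 3.1},  # illustrative
    {"mla_focal_mm": 34.5, "angular_pitch_arcmin": 1.4, "corneal_beam_mm": 2.5},  # illustrative
]
acceptable = [c for c in candidates if meets_quality_constraints(c)]  # keeps only the first
```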
[00217] Applying this constraint, Figures 22A to 22J illustrate various exemplary parameters simulated as a function of MLA focal length for various dioptric powers accessible to an exemplary light field system. In this example, the display is characterised by a 24 µm pixel pitch, the MLA selected comprised a lenslet pitch of 1.98 mm and focal length of ~20 mm, and the system was characterised by an eye-to-display distance of 145 mm. In this example, a continuous range of 10 diopters was achieved, corresponding to a perception adjustment range of 16 to 26 diopters, which, again, may be shifted via an additional optical component.
[00218] In this exemplary embodiment, Figure 22A is a plot of the MLA-to-display distance as a function of MLA focal length for different dioptric powers. Figure 22B is a plot of the corresponding focus spot size at the retina. Similarly, Figure 22C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length. Figure 22D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 22E is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina. Figure 22F is a plot of view zone separation from a reference 5 mm pupil, while Figure 22G is a plot of spread on the retina. Figure 22H is a plot of the corresponding field of view as a function of MLA focal length. Figure 22I is a plot of the beam size on the cornea. Finally, Figure 22J is a plot of the maximum number of continuous dioptric powers that can be corrected for by changing the MLA-to-display distance.
[00219] Again applying the visual content quality parameter constraints of maintaining a corneal beam size between 2.7 and 3.7 mm and a perception-adjusted content resolution of 1 arcminute angular pitch at most, Figures 23A to 23J illustrate various exemplary parameters simulated as a function of MLA focal length for various dioptric powers accessible to another exemplary light field system. In this example, the display again comprises a 24 µm pixel pitch, while the LFSL comprises an MLA characterised by a lenslet pitch of 3.4 mm and a focal length of ~34.5 mm. The eye-to-display distance is 200 mm. In this exemplary embodiment, a continuous range of 11 diopters is achievable, corresponding to perception adjustments between 15 and 26 diopters.
[00220] In this exemplary embodiment, Figure 23A is a plot of the MLA-to-display distance as a function of MLA focal length for different dioptric powers. Figure 23B is a plot of the corresponding focus spot size at the retina. Similarly, Figure 23C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length. Figure 23D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 23E is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina. Figure 23F is a plot of view zone separation from a 5 mm diameter reference pupil, while Figure 23G is a plot of spread on the retina. Figure 23H is a plot of the corresponding field of view as a function of MLA focal length. Figure 23I is a plot of the beam size on the cornea. Finally, Figure 23J is a plot of the maximum number of continuous dioptric power points (separated by 1 diopter) that can be corrected for by changing the MLA-to-display distance.
[00221] As described above, various system components and/or configurations may be employed depending on, for instance, the application at hand (e.g. a vision assessment, a cognitive impairment assessment, or the like), or a visual content quality parameter associated therewith. For example, a visual acuity examination may require a large range of dioptric corrections (e.g. -8 to +8 diopters). Accordingly, a system designed to this end may comprise components and configurations similar to those described with respect to Figures 20A to 20I, relating to a continuous range of 23 diopters with at least 1 arcminute resolution. Conversely, a cognitive impairment assessment system operable to perform saccadic and vergence tests may have less strict content quality parameters associated therewith. For example, less resolution (larger angular pitch) may be required to generally follow a moving stimulus in a pursuit assessment, and a smaller range of dioptric corrections may be needed to perform a vergence test. Accordingly, a cognitive impairment assessment device may employ simpler and/or more cost-effective components than a visual acuity refractor/phoropter. Similarly, more compact or lightweight components may be employed in order to, for instance, make a cognitive impairment assessment system more portable for in-field assessments.
[00222] For example, Figures 24A and 24B show perspective views of a portable cognitive impairment assessment device. Due in part to its compact nature, the range of dioptric adjustments required for a vergence test may, in accordance with some embodiments, require that a LFSL be translated relative to a display screen. [00223] This is further schematically illustrated in Figures 25A and 25B. In this exemplary embodiment, a different cognitive impairment assessment device 2500 comprises a lightweight, user-interfacing portion 2504 and a load-bearing portion 2506. Figure 25A shows the device 2500 while in use by a subject 2508, while Figure 25B schematically shows an exploded view of various exemplary components of the device 2500. In this exemplary embodiment, the load-bearing portion 2506 comprises a pixelated display screen 2508 and a displaceable light field shaping layer (LFSL) 2510 disposed between the user and the display screen 2508 to shape a light field emanating therefrom, thereby enabling adjustable perception adjustments.
[00224] In accordance with various embodiments, various device configurations and components may be employed to perform a vision-based test.
[00225] For instance, Figures 26A to 26J are exemplary plots of different parameters of one such embodiment. In this example, a display screen comprising a 24 µm pixel pitch is disposed 110 mm from the eye of a user. A translatable LFSL layer comprising an MLA with a 1 mm pitch and ~22.5 mm focal length is also employed. In comparison to the exemplary embodiments described above, a content quality parameter comprising a 1.5 arcminute angular resolution threshold may be applied, which results in a continuous perceptive correction range of 4 diopters, corresponding to perception adjustments of 2 to 6 diopters eye accommodation power. This range may, in accordance with one embodiment, be shifted by placing, for instance, a 2-diopter lens in front of the eye(s) of a user, enabling perception adjustments for eye accommodation power between 0 and 4 diopters, corresponding to an accommodation range of 25 cm to infinity. Furthermore, this may allow for a minimum view zone size of ~10.4 mm, and a minimum separation of 2.07 mm between a centred pupil of 5 mm diameter and the neighbouring view zone edge. This may allow for the user's eye pupil(s) to move at least 2.07 mm (or 9.5 degrees) to either side of centre without resulting in an overlap with neighbouring view zones, or a dark area in which the user cannot perceive visual content. In some embodiments, a pupil position tracking system can be used to update a rendered light field and shift a view zone position along with view zone size control via software to further increase the possible range of motion of the pupil without adversely impacting performance of the device. While such aspects may be less important for acuity assessments, they may be useful in, for instance, saccade or pursuit tests of a cognitive assessment, where a user's eyes are not fixed. Accordingly, employing a translatable LFSL enables various system components, specifications, and/or geometries that may be more optimally suited to a particular application.
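A minimal sketch of the pupil-tracking behaviour referred to above is given below; the renderer interface, the offset convention, and the margin value used in the usage comment are assumptions made solely for illustration.

```python
# Illustrative sketch: re-centre a rendered view zone when a tracked pupil drifts beyond
# the available clearance to the neighbouring view zone edge. `renderer` is hypothetical.

def maybe_recentre_view_zone(pupil_offset_mm, margin_mm, renderer):
    """Shift the rendered view zone when the pupil nears a neighbouring view zone."""
    if abs(pupil_offset_mm) > margin_mm:
        renderer.shift_view_zone(pupil_offset_mm)   # re-centre the view zone on the pupil
        return True
    return False

# Usage (illustrative): with roughly 2.07 mm of clearance for a centred 5 mm pupil,
# a tracked lateral drift of 2.5 mm would trigger a re-render:
#   maybe_recentre_view_zone(2.5, 2.07, renderer)
```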
[00226] In this exemplary embodiment, Figure 26A is a plot of the MLA-to-display distance as a function of MLA focal length for different dioptric powers. Figure 26B is a plot of the corresponding focus spot size at the retina. Similarly, Figure 26C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length. Figure 26D is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina. Figure 26E is a plot of view zone separation from a 5 mm diameter reference pupil, while Figure 26F is a plot of spread on the retina. Figure 26G is a plot of the corresponding field of view as a function of MLA focal length. Figure 26H is a plot of the beam size on the cornea. Figure 26I is a plot of the view zone width as a function of MLA focal length. Finally, Figure 26J is a plot of the maximum number of continuous dioptric powers (separated by steps of 1 diopter) that can be corrected for by changing the MLA-to-display distance. [00227] Similarly, Figures 27A to 27K are exemplary plots of various parameters corresponding to another light field system. In this example, a 24 µm pixel pitch display was disposed 150 mm in front of the eye, while the LFSL comprised a pitch of 1.98 mm and a focal length of ~18.5 mm. This configuration allowed for a continuous perception adjustment range of 5 diopters, between 14 and 19 diopters, with a 2-arcminute cut-off angular pitch/resolution. Again, this range may be shifted as needed, for instance by placing a 14-diopter lens in front of the eye, resulting in a light field power correction range of 0 to 5 diopters of accommodation eye power, which corresponds to an accommodation range of 20 cm to infinity. Further, this may allow for a minimum view zone size of ~10 mm, and a minimum of 4 mm separation between a centred pupil of 5 mm diameter and the next view zone edge. This may allow pupil movement of over 2.5 mm to each side without experiencing overlap with neighbouring view zones or a dark region, again benefiting assessments related to, for instance, saccades, pursuit, vergence, or the like. Again, in some embodiments, a pupil position tracking system can be used to update a rendered light field and shift a view zone position along with view zone size control via software to further increase the possible range of motion of the pupil without adversely impacting performance of the device.
[00228] In this exemplary embodiment, Figure 27A is a plot of the MLA-to-display distance as a function of MLA focal length for different dioptric powers. Figure 27B is a plot of the corresponding focus spot size at the retina. Similarly, Figure 27C is a plot of the corresponding cut-off angular pitch/resolution values as a function of MLA focal length. Figure 27D is a plot of the uniform spatial pitch between focused beams on the retina, while Figure 27E is a plot of the corresponding angular pitch/resolution for a uniform distribution of focused beams on the retina. Figure 27F is a plot of view zone separation from a reference pupil of 5 mm diameter, while Figure 27G is a plot of spread on the retina. Figure 27H is a plot of the corresponding field of view as a function of MLA focal length. Figure 27I is a plot of the beam size on the cornea, and Figure 27J is a plot of the view zone width. Finally, Figure 27K is a plot of the maximum number of continuous dioptric powers that can be corrected for by changing the MLA-to-display distance.
[00229] More generally, various embodiments relate to a dynamic LFSL system in which a system of one or more LFSLs may be incorporated on an existing display so as to render perception-adjusted content. Such embodiments may, for instance, relate to a clip-on solution that may interface and/or communicate with a display or digital application stored thereon, either directly or via a remote application (e.g. a smart phone application) and in wired or wireless fashion. Such a LFSL may be further operable to rotate in the plane of a display via, for instance, actuators as described above, to improve user experience by, for instance, introducing a pitch mismatch offset between light field shaping elements and an underlying pixel array. Such embodiments therefore relate to a LFSL that is dynamically adjustable/reconfigurable for a wide range of existing display systems (e.g. televisions, dashboard displays in an automobile, a display board in an airport or train terminal, a refractor, a smartphone, or the like).
[00230] Some embodiments relate to a standalone light field shaping system in which a display unit comprises a LFSL and smart display (e.g. a smart TV display having a LFSL disposed thereon). Such systems may comprise inherently well calibrated components (e.g. LFSL and display aspect ratios, LFSL elements and orientations appropriate for a particular display pixel or subpixel configuration, etc.).
[00231] In either a detachable LFSL device or standalone dynamically adjustable display system, various systems herein described may be further operable to receive as input data related to one or more view zone and/or user locations, or a required number thereof (e.g. two or three view zones in which to display perception-adjusted content). For instance, data related to a user location may be entered manually or semi-automatically via, for example, a TV remote or user application (e.g. smart phone application). For example, a television or LFSL may have a digital application stored thereon operable to dynamically adjust one or more LFSLs in one or more dimensions, pitch angles, and/or pitch widths upon receipt of user instruction, for instance via a user clicking an appropriate button on a TV remote or in a smartphone application. In accordance with various embodiments, a number of view zones may be similarly selected.
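A minimal sketch of how such user instructions might be mapped to LFSL adjustments follows; the command vocabulary, the actuator interface, and the adjustment increments are assumptions introduced purely for illustration and are not part of the present disclosure.

```python
# Illustrative sketch: dispatch remote-control or smartphone-application commands to a
# dynamically adjustable LFSL. The command names and increments are hypothetical.

def handle_user_command(command, lfsl, increment_mm=0.1, increment_deg=0.5):
    """Apply a single user instruction to the LFSL."""
    if command == "lfsl_closer":
        lfsl.translate_mm(-increment_mm)        # reduce the display-to-LFSL distance
    elif command == "lfsl_farther":
        lfsl.translate_mm(+increment_mm)        # increase the display-to-LFSL distance
    elif command == "rotate_cw":
        lfsl.rotate_deg(+increment_deg)         # adjust the in-plane pitch mismatch offset
    elif command == "rotate_ccw":
        lfsl.rotate_deg(-increment_deg)
    elif command.startswith("view_zones:"):
        lfsl.set_view_zone_count(int(command.split(":", 1)[1]))  # e.g. "view_zones:3"
    else:
        raise ValueError(f"unknown command: {command}")
```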
[00232] In applications where there is one-way communication (e.g. the system only receives user input, such as in solutions where user privacy is a concern), a user may adjust the system (e.g. the distance between the display and a LFSL, etc.) with a remote or smartphone application until they are satisfied with the display of one or more view zones. Such embodiments may alternatively relate to, for instance, remote eye exams, wherein a doctor remotely adjusts the configuration of a display and LFSL. Such systems may, for instance, provide a high-performance, self-contained, simple system that minimises complications arising from the sensitivity of view zone quality to, for instance, minute deviations from the relative component configurations predicted by, for instance, Equations 1 to 7 above, component alignment, user perception, and the like.
[00233] The skilled artisan will appreciate that while a smartphone application or other like system may be used to communicate user preferences or location-related data (e.g. a quality of perceived content from a particular viewing zone), such an application, process, or function may reside in a system or application and be executable by a processing system associated with the display system. Furthermore, data related to a user or viewing location may comprise a user instruction to, for instance, adjust a LFSL, based on, for instance, a user perception of an image quality, or the like. [00234] Alternatively, or additionally, a receiver, such as a smartphone camera and digital application associated therewith, may be used to calibrate a display, in accordance with various embodiments. For instance, a smartphone camera directed towards a display may be operable to receive and/or store signals/content emanating from the LFSL or display system. A digital application associated therewith may be operated to characterise a quality of a particular view zone through analysis of received content and adjust the LFSL to improve the quality of content at the camera's location (e.g. to improve on a calculated LFSL position relative to the display that was determined theoretically, for instance using one or more of Equations 1 to 7 above).
[00235] For instance, a calibration may be initially performed wherein a user positions themselves in a desired viewing location and points a receiver at a display generating red and blue content for respective first and second view zones. A digital application associated with the smartphone or remote receiver in the first viewing location may estimate a distance from the display by any means known in the art (e.g. a subroutine of a smartphone application associated with a light field display and operable to measure distances using a smartphone sensor). The application may further record, store, and/or analyse (e.g. in mobile RAM) the light emanating from the display to determine whether or not, and/or in which dimensions, angle, etc., to adjust a dynamic LFSL to maximise the amount of red light received in the first view zone while minimising that of blue (i.e. reduce crosstalk between view zones).
[00236] For example, and in accordance with some embodiments, a semi-automatic LFSL may self-adjust until a digital application associated with a particular view zone receives less than a threshold value of content from a neighbouring view zone (e.g. receives at least 95% red light and less than 5% blue light, in the abovementioned example). The skilled artisan will appreciate that various algorithms and/or subroutines may be employed to this end. For instance, a digital application subroutine may calculate an extent of crosstalk occurring between view zones, or a degree of image sharpness corresponding to an intended perception adjustment (e.g. -6.0 dioptric error correction), to determine in which ways displayed content is blended or improperly adjusted based on the content received, and thereby which LFSL parameters may be optimised to actuate an appropriate system response. Furthermore, the skilled artisan will appreciate that various means known in the art for encoding, displaying, and/or identifying distinct content may be applied in such embodiments. For example, a display having a LFSL disposed thereon may generate distinct content corresponding to a perception adjustment or dioptric shift that may comprise one or more of, but is not limited to, distinct colours, IR signals, patterns, or the like, to determine a displayed content quality, and initiate compensatory adjustments in a LFSL.
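By way of non-limiting illustration, the following sketch shows one way such a threshold-driven, semi-automatic adjustment loop might be structured; the measurement callback, actuator interface, step size, and iteration limit are all assumptions introduced for this example.

```python
# Hedged sketch: step the LFSL until a camera placed in the first ("red") view zone reports
# acceptable crosstalk from the neighbouring ("blue") view zone. `measure_fractions`
# returns the fractions of red and blue light currently received at the camera.

def calibrate_lfsl(actuator, measure_fractions,
                   min_target=0.95, max_leak=0.05, step_mm=0.02, max_iters=100):
    """Adjust the LFSL until the first view zone receives mostly its intended content."""
    for _ in range(max_iters):
        red, blue = measure_fractions()
        if red >= min_target and blue <= max_leak:
            return True                               # crosstalk within tolerance
        best_blue, best_dir = blue, 0
        for direction in (+1, -1):                    # probe both translation directions
            actuator.translate(direction * step_mm)
            _, probe_blue = measure_fractions()
            actuator.translate(-direction * step_mm)  # undo the probe
            if probe_blue < best_blue:
                best_blue, best_dir = probe_blue, direction
        if best_dir == 0:
            step_mm *= 0.5                            # shrink the step if neither probe helps
        else:
            actuator.translate(best_dir * step_mm)    # commit the better direction
    return False
```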
[00237] Furthermore, and in accordance with yet further embodiments, a semi-automatic LFSL calibration process may comprise a user moving a receiver within a designated range or region (e.g. a user may move a smartphone from left to right, or forwards/backwards) to acquire display content data. Such data acquisition may, for instance, aid in LFSL layer adjustment, or in determining a LFSL configuration that is acceptable for one or more users of the system within an acceptable tolerance (e.g. all users receive 95% of their intended display content, or a resolution of at least 1 arcsecond is achieved in an eye examination device) within the geometrical limitations of the LFSL and/or display.
[00238] The skilled artisan will appreciate that user instructions to any or all of these ends may be presented to a user on the display or smartphone/remote used in calibration for ease of use (i.e. to guide the user during calibration and/or use). Similarly, if, for instance, physical constraints (e.g. LFSL or display geometries) preclude an acceptable adjusted image resolution, an application associated with the display, having performed the appropriate calculations, may guide a user to move to a different location (or to move the display) to provide for a better experience. [00239] In yet other embodiments, one or more user locations may be determined automatically by a display system. For instance, a pupil location may be determined via the use of one or more cameras or other like sensors and/or means known in the art for determining user, head, and/or eye locations, and dynamically adjusting a LFSL in one or more dimensions to render content so to be displayed at one or more appropriate locations. Yet other embodiments relate to a self-localisation method and system that maintains user privacy with minimal user input or action required to determine one or more view zone locations, and dynamically adjust a LFSL to display appropriate content thereto.
[00240] Yet further applications may utilise a dynamic light field shaping layer subjected to oscillations or vibrations in one or more dimensions in order to, for instance, improve perception of an image generated by a pixelated display. Or, such a system may be employed to increase an effective view zone size so as to accommodate user movement during viewing. For example, a LFSL may be vibrated in a direction perpendicular to a screen so to increase a depth of a view zone in that dimension to improve user experience by allowing movement of a user's head towards/away from a screen without introducing a high degree of perceived crosstalk, or to improve a perceived image brightness.
[00241] While the present disclosure describes various embodiments for illustrative purposes, such description is not intended to be limited to such embodiments. On the contrary, the applicant's teachings described and illustrated herein encompass various alternatives, modifications, and equivalents, without departing from the embodiments, the general scope of which is defined in the appended claims. Except to the extent necessary or inherent in the processes themselves, no particular order to steps or stages of methods or processes described in this disclosure is intended or implied. In many cases the order of process steps may be varied without changing the purpose, effect, or import of the methods described.
[00242] Information as herein shown and described in detail is fully capable of attaining the above-described object of the present disclosure, the presently preferred embodiment of the present disclosure, and is, thus, representative of the subject matter which is broadly contemplated by the present disclosure. The scope of the present disclosure fully encompasses other embodiments which may become apparent to those skilled in the art, and is to be limited, accordingly, by nothing other than the appended claims, wherein any reference to an element being made in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." All structural and functional equivalents to the elements of the above-described preferred embodiment and additional embodiments as regarded by those of ordinary skill in the art are hereby expressly incorporated by reference and are intended to be encompassed by the present claims. Moreover, no requirement exists for a system or method to address each and every problem sought to be resolved by the present disclosure, for such to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. However, various changes and modifications in form, material, work-piece, and fabrication material detail that may be made without departing from the spirit and scope of the present disclosure, as set forth in the appended claims, and as may be apparent to those of ordinary skill in the art, are also encompassed by the disclosure.

Claims

What is claimed is:
1. A light field shaping system for interfacing with light emanated from pixels of a digital display to govern display of perception-adjusted content, the system comprising: a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the digital display so to align said array of light field shaping elements with the pixels in accordance with a current light field shaping geometry to thereby define a perception adjustment of displayed content in accordance with said current geometry; an actuator operable to adjust an optical path length between said LFSL and the digital display to adjust alignment of said light field shaping elements with the pixels in accordance with an adjusted geometry thereby defining an adjusted perception adjustment of displayed content in accordance with said adjusted geometry; and a digital data processor operable to activate said actuator to translate said LFSL to adjust the perception-adjusted content.
2. The light field shaping system of Claim 1, wherein said LFSL comprises a microlens array.
3. The light field shaping system of Claim 1, wherein said adjusted perception adjustment corresponds to a reduced visual acuity of a user.
4. The light field shaping system of Claim 1, wherein said actuator is operable to translate said LFSL in a direction perpendicular to the digital display.
5. The light field shaping system of Claim 1, wherein said light field shaping geometry relates to a physical distance between the digital display and said LFSL.
6. The light field shaping system of Claim 1, wherein said adjusted geometry corresponds to a selectable range of perception adjustments of displayed content, wherein distinctly selectable geometries correspond with distinct selectable ranges of perception adjustments.
7. The light field shaping system of Claim 6, wherein said distinct selectable ranges comprise distinct dioptric correction ranges.
8. The light field shaping system of Claim 1, wherein said digital data processor is further operable to: receive as input a requested perception adjustment as said adjusted perception adjustment; based at least in part on said requested perception adjustment, calculate an optimal optical path length to thereby define an optimal geometry as said adjusted geometry; and activate said actuator to adjust said optical path length to said optimal optical path length and thereby optimally achieve said requested perception adjustment.
9. The light field shaping system of Claim 1, wherein said digital data processor is further operable to: receive as input feedback data related to a quality of said adjusted perception adjustment; and dynamically adjust said optical path length via said actuator in response to said feedback data.
10. The light field shaping system of Claim 1, wherein the light field shaping system comprises a system for administering a vision-based test.
11. The light field shaping system of Claim 10, wherein said vision-based test comprises a visual acuity examination, and the perception-adjusted content comprises an optotype.
12. The light field shaping system of Claim 10, wherein said vision-based test comprises a cognitive impairment test.
13. The light field shaping system of Claim 1, wherein said actuator selectively introduces an optical path length-increasing medium within said optical path length to selectively adjust said optical path length.
14. The light field shaping system of Claim 1, wherein said digital data processor is operable to translate said LFSL to adjust the perception-adjusted content while satisfying a visual content quality parameter.
15. The light field shaping system of Claim 14, wherein said visual content quality parameter comprises one or more of a perception-adjusted content resolution, a corneal beam size, a view zone size, or a distance between a pupil and a view zone edge.
16. A method for dynamically adjusting a perception adjustment of displayed content in a light field display system comprising a digital processor and a digital display defined by an array of pixels and a light field shaping layer (LFSL) disposed relative thereto, the method comprising: accessing display geometry data related to one or more of the light field display system and a user thereof, said display geometry data at least in part defining the perception adjustment of displayed content; digitally identifying a preferred display geometry based, at least in part, on said display geometry data, said preferred display geometry comprising a desirable optical path length between said LFSL and the pixels to optimally produce a requested perception adjustment of displayed content; automatically adjusting said optical path length, via the digital processor and an actuator operable to adjust said optical path length and thereby adjust the perception adjustment of displayed content to said requested perception adjustment.
17. A light field shaping system for interfacing with light emanated from underlying pixels of a digital screen in a light field display to display content in accordance with a designated perception adjustment, the system comprising: a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the digital screen so to define a system configuration, said system configuration at least in part defining a subset of perception adjustments displayable by the light field display; an actuator operable to adjust a relative distance between said LFSL and the digital screen to adjust said system configuration; and a digital data processor operable to activate said actuator to selectively adjust said relative distance and thereby provide a preferred system configuration defining a preferred subset of perception adjustments; wherein said preferred subset of perception adjustments comprises the designated perception adjustment.
18. The light field shaping system of Claim 17, wherein said digital data processor is further operable to: receive as input data related to said designated perception adjustment; based at least in part on said data related to said designated perception adjustment, calculate said preferred system configuration.
19. The light field shaping system of Claim 17, wherein said digital data processor is further operable to dynamically adjust said system configuration during use of the light field display.
20. A light field display system for displaying content in accordance with a designated perception adjustment, the system comprising: a digital display screen comprising an array of pixels; a light field shaping layer (LFSL) comprising an array of light field shaping elements shaping a light field emanating from said array of pixels and disposable relative thereto in accordance with a system configuration at least in part defining a subset of displayable perception adjustments; an actuator operable to translate said LFSL relative to said array of pixels to adjust said system configuration; and a digital data processor operable to activate said actuator to translate said LFSL and thereby provide a preferred system configuration defining a preferred subset of perception adjustments; wherein said preferred subset of perception adjustments comprises the designated perception adjustment.
21. A light field shaping layer (LFSL) to be used in conjunction with a digital display comprising an array of digital pixels, wherein an optimal rendering of a perceptively adjusted image is provided by minimizing a spread of light from the display pixels through the LFSL in accordance with the following expression
Figure imgf000074_0001
22. A light field shaping system for performing a vision-based assessment using perception-adjusted content, the system comprising: a pixelated digital display; a light field shaping layer (LFSL) comprising an array of light field shaping elements and disposable relative to the pixelated digital display so to align said array of light field shaping elements with pixels of the pixelated digital display in accordance with a current light field shaping geometry to thereby define a perception adjustment of displayed content in accordance with said current geometry; an actuator operable to adjust an optical path length between said LFSL and the pixelated digital display to adjust alignment of said light field shaping elements with the pixels in accordance with an adjusted geometry thereby defining an adjusted perception adjustment of displayed content in accordance with said adjusted geometry; and a digital data processor operable to activate said actuator to translate said LFSL to adjust the perception-adjusted content for the vision-based assessment.
23. The system of Claim 22, wherein the vision-based assessment comprises a cognitive impairment assessment.
24. The system of Claim 22, wherein the vision-based assessment comprises a visual acuity assessment.
25. The system of Claim 22, wherein said digital data processor is operable to translate said LFSL to adjust the perception-adjusted content while maintaining a visual content quality parameter associated with the vision-based assessment.
26. The system of Claim 25, wherein said visual content quality parameter comprises one or more of a perception-adjusted content resolution, a corneal beam size of said perception-adjusted content, a view zone size, or a distance between a pupil and a view zone edge.
27. The system of Claim 22, wherein said adjusted perception adjustment comprises a range of perception adjustments corresponding to said adjusted geometry.
28. The system of Claim 27, wherein the vision-based assessment comprises the display of content in accordance with an assessment range of perception adjustments.
29. The system of Claim 28, wherein said range of perception adjustments corresponds at least in part to said assessment range of perception adjustments.
30. The system of Claim 22, further comprising an optical component intersecting an optical path of the perception-adjusted content and configured to adjust an optical power of the perception-adjusted content for the vision-based assessment.
31. The system of Claim 30, wherein said optical component comprises a lens or a tunable lens.
PCT/US2021/070944 2020-07-24 2021-07-23 Light field display for rendering perception-adjusted content, and dynamic light field shaping system and layer therefor WO2022020861A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CA3186253A CA3186253A1 (en) 2020-07-24 2021-07-23 Light field display for rendering perception-adjusted content, and dynamic light field shaping system and layer therefor
EP21845718.2A EP4185183A4 (en) 2020-07-24 2021-07-23 Light field display for rendering perception-adjusted content, and dynamic light field shaping system and layer therefor

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202063056188P 2020-07-24 2020-07-24
US63/056,188 2020-07-24
US202063104468P 2020-10-22 2020-10-22
US63/104,468 2020-10-22

Publications (1)

Publication Number Publication Date
WO2022020861A1 true WO2022020861A1 (en) 2022-01-27

Family

ID=79729025

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/070944 WO2022020861A1 (en) 2020-07-24 2021-07-23 Light field display for rendering perception-adjusted content, and dynamic light field shaping system and layer therefor

Country Status (3)

Country Link
EP (1) EP4185183A4 (en)
CA (1) CA3186253A1 (en)
WO (1) WO2022020861A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115022616A (en) * 2022-08-08 2022-09-06 太原理工大学 Image focusing enhancement display device and display method based on human eye tracking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130027668A1 (en) * 2010-04-22 2013-01-31 Vitor Pamplona Near Eye Tool for Refractive Assessment
US10394322B1 (en) * 2018-10-22 2019-08-27 Evolution Optiks Limited Light field display, adjusted pixel rendering method therefor, and vision correction system and method using same
US20200069174A1 (en) * 2016-12-07 2020-03-05 Essilor International Apparatus and method for measuring subjective ocular refraction with high-resolution spherical and/or cylindrical optical power
US20200233492A1 (en) * 2018-10-22 2020-07-23 Evolution Optiks Limited Light field vision testing device, adjusted pixel rendering method therefor, and vision testing system and method using same

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105717640B (en) * 2014-12-05 2018-03-30 北京蚁视科技有限公司 Near-to-eye based on microlens array
CN117770757A (en) * 2016-07-25 2024-03-29 奇跃公司 Light field processor system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130027668A1 (en) * 2010-04-22 2013-01-31 Vitor Pamplona Near Eye Tool for Refractive Assessment
US20200069174A1 (en) * 2016-12-07 2020-03-05 Essilor International Apparatus and method for measuring subjective ocular refraction with high-resolution spherical and/or cylindrical optical power
US10394322B1 (en) * 2018-10-22 2019-08-27 Evolution Optiks Limited Light field display, adjusted pixel rendering method therefor, and vision correction system and method using same
US20200233492A1 (en) * 2018-10-22 2020-07-23 Evolution Optiks Limited Light field vision testing device, adjusted pixel rendering method therefor, and vision testing system and method using same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4185183A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115022616A (en) * 2022-08-08 2022-09-06 太原理工大学 Image focusing enhancement display device and display method based on human eye tracking

Also Published As

Publication number Publication date
CA3186253A1 (en) 2022-01-27
EP4185183A4 (en) 2024-06-26
EP4185183A1 (en) 2023-05-31

Similar Documents

Publication Publication Date Title
US10860099B2 (en) Light field display, adjusted pixel rendering method therefor, and adjusted vision perception system and method using same addressing astigmatism or similar conditions
CA3148706C (en) Light field vision testing device, adjusted pixel rendering method therefor, and vision testing system and method using same
CA3148710A1 (en) Light field vision-based testing device, adjusted pixel rendering method therefor, and online vision-based testing management system and method using same
US20220198766A1 (en) Light field display and vibrating light field shaping layer and vision testing and/or correction device
US11841988B2 (en) Light field vision-based testing device, adjusted pixel rendering method therefor, and online vision-based testing management system and method using same
US10936064B2 (en) Light field display, adjusted pixel rendering method therefor, and adjusted vision perception system and method using same addressing astigmatism or similar conditions
US11966507B2 (en) Light field vision testing device, adjusted pixel rendering method therefor, and vision testing system and method using same
WO2022020861A1 (en) Light field display for rendering perception-adjusted content, and dynamic light field shaping system and layer therefor
US11487361B1 (en) Light field device and vision testing system using same
US20230021236A1 (en) Light field device, optical aberration compensation or simulation rendering method and vision testing system using same
US11823598B2 (en) Light field device, variable perception pixel rendering method therefor, and variable perception system and method using same
WO2022186894A1 (en) Light field device and vision-based testing system using same
WO2021087375A1 (en) Light field device, variable perception pixel rendering method therefor, and variable perception system and method using same
US20230104168A1 (en) Light field device and vision testing system using same
EP4022898B1 (en) Light field display, adjusted pixel rendering method therefor, and adjusted vision perception system and method using same addressing astigmatism or similar conditions
US20240013690A1 (en) Light field device, variable perception pixel rendering method therefor, and variable perception system and method using same
US11789531B2 (en) Light field vision-based testing device, system and method
WO2023161776A1 (en) Light field vision-based testing device, system and method
CA3167642A1 (en) Light field device, optical aberration compensation or simulation rendering method and vision testing system using same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21845718

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3186253

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2021845718

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021845718

Country of ref document: EP

Effective date: 20230224