WO2023049086A2 - Holographic real space refractive system - Google Patents


Info

Publication number
WO2023049086A2
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
virtual
user
diagnostic module
display device
Application number
PCT/US2022/044056
Other languages
French (fr)
Other versions
WO2023049086A3 (en)
Inventor
William V. PADULA
Teddi R. DINSMORE
Original Assignee
Veyezer, Llc
Priority claimed from US17/481,819 (external priority: US20220007929A1)
Application filed by Veyezer, Llc
Publication of WO2023049086A2
Publication of WO2023049086A3

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028 Subjective types for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113 Objective types for determining or recording eye movement

Definitions

  • the clinician would then turn the dial to move different lenses in front of the person’s eyes to find the best subjective refraction to improve distance vision.
  • the instrument was then advanced to include prisms that could be used to disassociate images or change the position of the image, enabling the clinician to evaluate muscle ranges and the ability to maintain eye alignment and binocularity. It also permitted assessment of the person’s ability to accommodate or focus at a near range. This was all for the purpose of designing glasses to improve eyesight and visual acuity for both distance and near ranges, as well as to prescribe prisms to correct for imbalance in eye alignment affecting binocularity.
  • While the phoropter is an effective instrument and still used today, it limits the peripheral field and cannot assess binocularity in any meridian other than primary gaze, or looking straight ahead. Binocular imbalances can sometimes only be revealed with gaze outside of the primary gaze position. Therefore, the instrument has limited value for these purposes and/or leads the clinician to prescribe lenses and prisms for only one position of the eyes. In addition, the large phoropter blocks the peripheral vision, producing an abnormal environment and a restriction of side vision, which frequently affects the intensity of the attentional visual process and causes the refractive correction to be too strong or imbalanced.
  • Described herein is a system to evaluate the refractive state of the eye and visual process as well as binocularity in the nine cardinal positions of gaze while in real space by using holographic projection for each eye.
  • the refractive state assessment has been designed to enable the eye of the patient to focus on virtual objects in the manner that the refractive imbalance will focus to maintain clear vision. For example, an object is presented in three dimensions. The myopic eye will focus on the near side of the object and see it with clarity. The dimensions and position of the object are then adjusted to refocus the far or distance side of the object, and a calibration is made as to the power of the eye and the power of the lens required to re-focus the eye to best visual acuity at infinity. The same would occur for the hyperopic eye, except that the far portion of the three-dimensional object will be in initial focus.
  • the patient uses hand movements and/or voice command to communicate the subjective measurement of the dioptric power to correct the vision to best visual acuity and, advantageously, these objectives are accomplished through manipulation of the object in real space. More particularly, in an exemplary embodiment of the present disclosure, an eye with astigmatism would be presented a three dimensional object where perpendicular lines would enable the patient to observe that one of the lines is clear and the other blurred. The object will be rotated to determine the axis of the astigmatism and then the opposite or blurred side of the object would be shifted in space virtually to bring it into focus. This sequence of operation will provide the amount of astigmatism measured in this eye and therefore the predicted amount of cylindrical correction needed to bring clarity.
  • FIG. 1 is a block diagram illustrating a system for the holographic eye testing device according to an exemplary embodiment.
  • FIG. 2 is a block diagram illustrating a test for horizontal phoria with a holographic eye testing device according to an exemplary embodiment.
  • FIG. 3 is a block diagram illustrating a test for vertical phoria utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIG. 4A is a block diagram illustrating a test for astigmatism utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIG. 4B is a block diagram illustrating a user’s perspective of the virtual 3D objects depicted in FIG. 4A according to an exemplary embodiment.
  • FIG. 5A is a block diagram illustrating a test for astigmatism utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIG. 5B is a block diagram illustrating a user’s perspective of the virtual 3D objects depicted in FIG. 5A according to an exemplary embodiment.
  • FIG. 5C is a block diagram illustrating another perspective of the virtual 3D objects depicted in FIG. 5A according to an exemplary embodiment.
  • FIGs. 6A-6B are diagrams illustrating a test for visual acuity utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIGs. 7A-7G are diagrams illustrating tests for horizontal convergence and horizontal divergence, utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIGs. 8A-8C are diagrams illustrating refraction testing utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIGs. 9A-9B are diagrams illustrating convergence testing utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIGs. 10A-10D are block diagrams illustrating methods for utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIG. 11 depicts a block diagram of an exemplary computing device in accordance with an exemplary embodiment.
  • Example embodiments provide a device for utilizing holographic virtual projection to perform eye testing, diagnosis, and prescriptive remedy.
  • the disclosed holographic eye testing device renders on a head mounted device, one or more three dimensional objects within the holographic display device, wherein the rendering corresponds to a virtual level of depth viewable by a user.
  • the holographic display device updates the rendering of the one or more three dimensional objects, wherein the updates include a virtual movement of the one or more three dimensional objects within the virtual level of depth.
  • the holographic display device receives input from a user, wherein the input includes an indication of alignment of the one or more three dimensional objects based on the virtual movement.
  • the indication of alignment includes a relative position between the one or more three dimensional objects.
  • FIG. 1 is a block diagram illustrating a system 100 for the holographic eye testing device according to an exemplary embodiment.
  • the holographic eye testing device can include a head mounted display (HMD) 102.
  • the HMD 102 can include a pair of combiner lenses 104A, 104B for rendering three dimensional (3D) images within a user’s field of view (FOV).
  • the combiner lenses 104A, 104B can be calibrated to the interpupillary distance from the user’s eyes 106A, 106B.
  • a computing system 108 can be connected to the combiner lenses 104A, 104B.
  • the holographic eye testing device can be repositioned in any of the nine primary gaze positions as needed. These tests are built to run on technical platforms that can project 3D holographic images within a field of view provided by a wired or wireless headset.
  • the HMD 102 can be connected to an adjustable, cushioned inner headband, which can tilt the combiner lenses 104A, 104B up and down, as well as forward and backward.
  • the user fits the HMD 102 on their head, using an adjustment wheel at the back of the headband to secure it around the crown, supporting and distributing the weight of the unit equally for comfort, before tilting the visor and combiner lenses 104A, 104B towards the front of the eyes.
  • the computing system 108 can be inclusive to the HMD 102, where the holographic eye testing device is a self-contained apparatus.
  • the computing system 108 in the self-contained apparatus can include additional power circuitry to provide electrical current to the parts of the computing system 108.
  • the computing system 108 can be external to the HMD 102 and communicatively coupled either through wired or wireless communication channels to the HMD 102.
  • Wired communication channels can include digital video transmission formats including High Definition Multimedia Interface (HDMI), DisplayPort™ (DisplayPort is a trademark of VESA of San Jose, CA, U.S.A.), or any other transmission format capable of propagating a video signal from the computing system 108 to the combiner lenses 104A, 104B.
  • the HMD 102 can include speakers or headphones for the presentation of instructional audio to the user during the holographic eye tests.
  • the HMD 102 can include a wireless adapter capable of low latency high bandwidth applications, including but not limited to IEEE 802.11ad.
  • the wireless adapter can interface with the computing system 108 for the transmission of low latency video to be displayed upon the combiner lenses 104A, 104B.
  • the computing system 108 can include software for the manipulation and rendering of 3D objects within a virtual space.
  • the software can include both platform software to support any fundamental functionality of the HMD 102, such as motion tracking, input functionality, and eye tracking.
  • Platform software can be implemented in a virtual reality (VR) framework, augmented reality (AR) framework, or mixed reality (MR) framework.
  • Platform software to support the fundamental functionality can include, but is not limited to, the SteamVR® SDK (SteamVR is a registered trademark of the Valve Corporation, Seattle, WA, U.S.A.), the Oculus® VR SDK (Oculus is a registered trademark of Oculus VR LLC, Irvine, CA, U.S.A.), the OSVR (Open Source VR) SDK (OSVR is a registered trademark of Razer Asia Pacific Pte. Ltd., Singapore), and the Microsoft Windows Mixed Reality computing platform.
  • Application software executing on the computing system 108 with the underlying platform software can be a customized rendering engine, or an off-the-shelf 3D rendering framework, such as Unity® Software (Unity Software is a registered trademark of Unity Technologies of San Francisco CA, U.S.A).
  • the rendering framework can provide the basic building blocks of the virtualized environment for the holographic refractive eye test, including 3D objects and manipulation techniques to change the appearance of the 3D objects.
  • the rendering framework can provide application programming interfaces (APIs) for the instantiation of 3D objects and well-defined interfaces for the manipulation of the 3D objects within the framework.
  • Common software programming language bindings for rendering frameworks include but are not limited to C++, Java, and C#.
  • the application software can provide settings to allow a test administrator to adjust actions within the test, such as holographic object speed and object color.
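As an illustration of the kind of application code the preceding paragraphs describe, the following is a minimal sketch in Unity C# (the framework and language named above); the class, field, and method names are hypothetical, not taken from the patent.

```csharp
using UnityEngine;

// Hypothetical sketch of an application-level script for a holographic test
// object: the test administrator's adjustable settings (object speed and
// color) are exposed as serialized fields, and the rendering framework's
// transform API performs the virtual movement.
public class HolographicTestObject : MonoBehaviour
{
    [SerializeField] private float speedMetersPerSecond = 0.05f; // administrator-adjustable
    [SerializeField] private Color objectColor = Color.white;    // administrator-adjustable

    private bool moving = true;

    private void Start()
    {
        // Apply the configured color through the framework's material API.
        GetComponent<Renderer>().material.color = objectColor;
    }

    private void Update()
    {
        if (!moving) return;
        // Translate the object along the depth axis of the virtual space.
        transform.Translate(Vector3.back * speedMetersPerSecond * Time.deltaTime, Space.World);
    }

    // Wired to user input ("stop" voice command, gesture, or clicker).
    public void StopMotion() => moving = false;
}
```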
  • the system 100 can be configured to perform a variety of eye tests, including, but not limited to, acuity testing (near and far), phorias, horizontal divergence, horizontal convergence, and refraction.
  • FIG. 2 is a block diagram illustrating a test for horizontal phoria with a holographic eye testing device according to an exemplary embodiment.
  • two virtual 3D objects 202A, 202B can be manipulated in a user’s field of view (FOV) 204A, 204B.
  • the virtual 3D objects 202A, 202B can be translated within the same horizontal plane until convergence.
  • the virtual 3D objects 202A, 202B can have a starting point within the user’s FOV 204A, 204B equidistant within the same horizontal plane from a mid-point of the FOV.
  • the virtual 3D objects 202A, 202B are translated and projected on the combiner lenses 104A, 104B to give the appearance that the virtual 3D objects are a set distance from the view of the user’s eyes 106A, 106B.
  • the application software can present the virtual 3D objects 202A, 202B via the combiner lenses 104A, 104B so that the virtual 3D objects can appear to be at different distances from the user’s eyes 106A, 106B.
  • the presentation of the virtual 3D objects 202A, 202B can correspond to projection of the virtual 3D objects at distances of 16 inches to 20 feet in front of the user’s eyes 106A, 106B.
  • the range of distances allows phoria to be measured at different intervals of depth for better confidence in convergence results.
  • the user can provide input to the application software or platform software.
  • the input can take the form of voice commands, gestures, or input from a “clicker.”
  • as the virtual 3D objects 202A, 202B approach each other, they will begin to overlap and converge into a single virtual 3D object.
  • the user can provide input to stop any motion or translation of the virtual 3D objects 202A, 202B.
  • the application software evaluates a delta between the midpoint of the user’s FOV 204A, 204B and the point at which the virtual 3D objects 202A, 202B were located when the user provided input to stop the motion or translation.
  • the delta can be represented as a deviation relative to the virtual distance of the virtual 3D objects 202A, 202B from the patient.
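Since the delta is expressed as a deviation relative to the virtual distance of the objects from the patient, it maps naturally onto the standard prism-diopter convention, in which one prism diopter equals a 1 cm deviation at a distance of 1 m. The helper below is an assumed sketch of that conversion, not code from the patent.

```csharp
// Assumed sketch: convert the stopping offset of the phoria targets into
// prism diopters, using the standard definition that one prism diopter
// equals a 1 cm deviation at a distance of 1 m.
public static class PhoriaMath
{
    /// <param name="stopOffsetMeters">Lateral offset of the objects from the FOV midpoint when the user stopped the motion.</param>
    /// <param name="virtualDistanceMeters">Simulated distance of the objects from the patient.</param>
    public static double DeltaInPrismDiopters(double stopOffsetMeters, double virtualDistanceMeters)
    {
        double offsetCentimeters = stopOffsetMeters * 100.0;
        return offsetCentimeters / virtualDistanceMeters; // e.g., 1 cm at 2 m -> 0.5 prism diopters
    }
}
```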
  • FIG. 3 is a block diagram illustrating a test for vertical phoria utilizing the holographic eye testing device according to an exemplary embodiment.
  • two virtual 3D objects 304A, 304B can be manipulated in a user’s FOV 302.
  • the virtual 3D objects 304A, 304B can be translated within the same vertical plane until convergence.
  • the virtual 3D objects 304A, 304B can have a starting point within the user’s FOV 302 equidistant within the same vertical plane from a mid-point of the FOV.
  • the virtual 3D objects 304A, 304B are translated and projected on the combiner lenses 104A, 104B to give the appearance that the virtual 3D objects are a set distance from the view of the user’s eyes 106A, 106B.
  • the application software can present the virtual 3D objects 304A, 304B via the combiner lenses 104A, 104B so that the virtual 3D objects can appear to be at different distances from the user’s eyes 106A, 106B.
  • the presentation of the virtual 3D objects 304A, 304B can correspond to projection of the virtual 3D objects at distances of 16 inches to 20 feet in front of the user’s eyes 106A, 106B.
  • the range of distances allows phoria to be measured at different intervals of depth for better confidence in convergence results.
  • within the FOV 302, the user can provide input to the application software or platform software.
  • the input can take the form of voice commands, gestures, or input from a “clicker.”
  • as the virtual 3D objects 304A, 304B approach each other, they will begin to overlap and converge into a single visible virtual 3D object.
  • the user can provide input to stop any motion or translation of the virtual 3D objects 304A, 304B.
  • the application software evaluates a delta between the midpoint of the user’s FOV 302 and the point at which the virtual 3D objects 304A, 304B were located when the user provided input to stop the motion or translation.
  • the delta can be represented as a deviation relative to the virtual distance of the virtual 3D objects 304A, 304B from the patient.
  • FIG. 4A is a block diagram illustrating a test for astigmatism utilizing the holographic eye testing device according to an exemplary embodiment.
  • multiple virtual 3D objects 404A, 406A, 406B can be manipulated in a user’s FOV 402.
  • the virtual 3D objects 404A, 406A, 406B start in different planes 408A, 408B parallel to the plane 408C of the combiner lenses.
  • the virtual 3D objects 404A can be a set of vertical lines (relative to the user’s eyes 106A, 106B) residing in a plane 408A.
  • Additional virtual 3D objects 406A, 406B can be a set of horizontal lines (relative to the user’s eyes 106A, 106B) residing in a plane 408B.
  • the virtual 3D objects 404A, 406A, 406B can appear as a hash mark (#) where the virtual 3D objects appear to intersect; however, since they are in different planes 408A, 408B, they do not actually intersect.
  • the user can start the test by providing input to the computing system 108.
  • the input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.”
  • the user states the word “start” to begin the test.
  • Control of the test can take the form of voice commands, including “forward” and “backward.”
  • a voice command of “forward” translates the plane 408A, and associated virtual 3D objects 404A toward the combiner lenses 104A, 104B.
  • a voice command of “backward” translates the plane 408A, and associated virtual 3D objects 404A away from the combiner lenses 104A, 104B.
  • a user can manipulate the virtual 3D objects 404A to where the user believes the respective planes 408A, 408B and associated virtual 3D objects 404A, 406A, 406B are coincident.
  • the user can provide a voice command to the computing system 108, such as stating the word “stop” to complete the manipulation portion of the test.
  • upon the receipt of the “stop” command, the computing system 108 disallows subsequent input commands, such as “forward” and “backward,” and determines a delta distance between the final locations of the planes 408A, 408B, as sketched below. In the event the user manipulated the planes 408A, 408B to coincide, the delta would be zero.
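A minimal sketch of that control loop, assuming Unity C# and a platform speech recognizer that delivers recognized keywords; all identifiers are illustrative rather than from the patent.

```csharp
using UnityEngine;

// Hypothetical sketch of the "forward"/"backward"/"stop" control loop for
// the astigmatism test of FIG. 4A: voice commands translate plane 408A
// along the depth axis, and "stop" freezes input and reports the delta
// between the two planes (zero if the user made them coincide).
public class AstigmatismPlaneTest : MonoBehaviour
{
    public Transform movablePlane;   // plane 408A with vertical lines 404A
    public Transform referencePlane; // plane 408B with horizontal lines 406A, 406B
    public float stepMeters = 0.01f;

    private bool inputAllowed = true;

    // Invoked by the platform's speech recognizer with the recognized keyword.
    public void OnVoiceCommand(string keyword)
    {
        if (!inputAllowed) return;
        switch (keyword)
        {
            case "forward":  // toward the combiner lenses
                movablePlane.Translate(Vector3.back * stepMeters, Space.World);
                break;
            case "backward": // away from the combiner lenses
                movablePlane.Translate(Vector3.forward * stepMeters, Space.World);
                break;
            case "stop":     // freeze input and report the remaining delta
                inputAllowed = false;
                float delta = Mathf.Abs(movablePlane.position.z - referencePlane.position.z);
                Debug.Log($"Plane delta at stop: {delta} m");
                break;
        }
    }
}
```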
  • FIG. 4B is a block diagram illustrating a user’s perspective of the virtual 3D objects depicted in FIG. 4A.
  • Virtual 3D objects 406A, 406B can be implemented as parallel lines residing in the same plane 408B.
  • Virtual 3D objects 404A, 404B can be implemented as parallel lines residing in the same plane 408A.
  • FIG. 5A is a block diagram illustrating a test for astigmatism utilizing the holographic eye testing device according to an exemplary embodiment.
  • FIG. 5B is a block diagram illustrating a user’s perspective of the virtual 3D objects depicted in FIG. 5A.
  • multiple virtual 3D objects 504A, 504B can be manipulated in a user’s FOV 402.
  • the virtual 3D objects 504A, 504B correspond to concentric opaque rings across the surface of an invisible sphere 510, where the concentric opaque rings traverse the surface of the sphere perpendicular to the plane of the combiner lenses 104A, 104B.
  • the virtual 3D objects 504A, 504B can be oriented along coaxial planes 506, 508. In other embodiments, the virtual 3D objects 504A, 504B can be distal or proximal portions of concentric opaque rings across the surface of an invisible sphere 510.
  • the user can start the test by providing input to the computing system 108.
  • the input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.”
  • the user states the word “start” to begin the test.
  • the invisible sphere 510 and accompanying virtual 3D objects are translated toward the combiner lenses 104A, 104B to give the user the appearance that the virtual 3D objects are coming directly at the user’s eyes 106A, 106B.
  • the user can provide input to stop the test in the form of a voice command of “stop.”
  • the computing system 108 ceases translation of the invisible sphere 510 and calculates a delta distance from the starting point of the invisible sphere to the point where the invisible sphere resides at the end of the test.
  • a constant point of reference on the invisible sphere 510 can be utilized to determine a consistent location to determine the delta distance.
  • the user can start the test by providing input to the computing system 108.
  • the input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.”
  • the user states the word “start” to begin the test.
  • the virtual 3D objects 504A, 504B begin the test in a plane parallel to or coincident with a starting plane 506. As the test begins, the invisible sphere 510 and accompanying virtual 3D objects are rotated in a clockwise motion 512 from the user’s perspective.
  • the user can provide input to stop the test in the form of a voice command of “stop.”
  • the computing system 108 ceases rotation of the invisible sphere 510 and calculates a delta in degrees based on the rotation from the starting point of the invisible sphere to the orientation of the invisible sphere at the end of the test.
  • the delta in degrees can be used to determine the axis of the astigmatism. This provides the amount of astigmatism measured in this eye and therefore the predicted amount of cylindrical correction needed to bring clarity.
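The folding of the measured rotation into the conventional 0 to 180 degree axis notation used on prescriptions could look like the following assumed sketch (the patent does not give a formula).

```csharp
// Assumed sketch of deriving the astigmatism axis from the rotation test of
// FIG. 5A: the sphere rotates clockwise from a known starting orientation,
// and the delta in degrees at "stop" is folded into the conventional
// 0-180 degree ophthalmic axis notation.
public static class AstigmatismAxis
{
    public static double AxisFromRotationDelta(double startDegrees, double stopDegrees)
    {
        double delta = stopDegrees - startDegrees;
        // Ophthalmic axis notation wraps every 180 degrees.
        double axis = delta % 180.0;
        if (axis < 0) axis += 180.0;
        return axis;
    }
}
```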
  • FIG. 5C is a block diagram illustrating another perspective of the virtual 3D objects depicted in FIG. 5A.
  • the virtual 3D objects 504A, 504B can be distal portions of concentric opaque rings across the surface of an invisible sphere 510.
  • the virtual 3D objects 514A, 514B can be proximal portions of concentric opaque rings across the surface of an invisible sphere 510.
  • the distal portions of the concentric opaque rings form a group and the proximal portions form a group. Groups are rotated and translated in the FOV 402 in unison.
  • the distal portions can be offset from the proximal portions by a rotation of 45 degrees.
  • the user can start the test by providing input to the computing system 108.
  • the input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.”
  • the user states the word “start” to begin the test.
  • the computing system 108 translates the virtual 3D objects 514A, 514B corresponding to the proximal portions toward the combiner lenses 104A, 104B, where the virtual 3D objects 514A, 514B appear to be coming toward the user’s eyes 106A, 106B.
  • the user can provide input to stop the test in the form of a voice command of “stop.”
  • the computing system 108 ceases translation of the virtual 3D objects 514A, 514B corresponding to the proximal portions and calculates a delta in distance based on the translation from the starting point to the position at the end of the test.
  • FIG. 6A is a block diagram illustrating a test for visual acuity utilizing the holographic eye testing device, according to an exemplary embodiment.
  • virtual 2D or 3D letters or images 604 can be displayed and/or manipulated in a user’s FOV 402.
  • the virtual letters or images 604 are translated and projected on the combiner lenses 104A, 104B to give the appearance that the virtual letters or images 604 are a set distance from the view of the user’s eyes 106A, 106B.
  • the application software can present the virtual letters or images 604 via the combiner lenses 104A, 104B so that the virtual letters or images 604 can appear to be at different distances from the user’s eyes 106A, 106B.
  • the presentation of the virtual letters or images 604 can correspond to projection of the virtual letters or images 604 at distances of 16 inches (near visual acuity) to 20 feet (distance visual acuity) in front of the user’s eyes 106A, 106B.
  • the visual acuity test is used to determine the smallest letters or images a user can identify from the virtual letters or images 604 at a specified distance away (e.g., 6 meters). The range of distances allows visual acuity to be measured at different intervals of depth.
  • the user can start the test by providing input to the computing system 108.
  • the input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.”
  • the user states the word “start” to begin the test.
  • the virtual letters or images 604 can be moved forward or backward.
  • Control of the test can take the form of voice commands, including “forward” and “backward.”
  • a voice command of “forward” translates the plane 608, and associated virtual letters or images 604 toward the combiner lenses 104A, 104B.
  • a voice command of “backward” translates the plane 608, and associated virtual letters or images 604 away from the combiner lenses 104A, 104B.
  • a user can manipulate the virtual letters or images 604 until the user can, or can no longer, identify the virtual letters or images 604.
  • the user can provide a voice command to the computing system 108, such as stating the word “stop” to complete the manipulation portion of the test.
  • upon the receipt of the “stop” command, the computing system 108 disallows subsequent input commands, such as “forward” and “backward,” and determines a final distance of the virtual letters or images 604.
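The patent does not state how letter size relates to the simulated distance; the sketch below assumes the standard optotype convention, in which a letter read at the 20/20 level subtends five minutes of arc at the test distance.

```csharp
using System;

// Assumed sketch, not from the patent: size a 20/20 optotype so that it
// subtends 5 arcminutes at the given simulated distance.
public static class AcuityMath
{
    public static double LetterHeightMeters(double simulatedDistanceMeters)
    {
        double fiveArcMinutes = 5.0 / 60.0 * Math.PI / 180.0; // radians
        return 2.0 * simulatedDistanceMeters * Math.Tan(fiveArcMinutes / 2.0);
    }
}
```

At a simulated 6 m, this works out to a letter height of roughly 8.7 mm.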
  • FIG. 6B is a block diagram illustrating a user’s perspective of the virtual letters depicted in FIG. 6A.
  • the virtual letters 604 can be implemented as letters residing on the same plane 608. In other embodiments, one or more of the virtual letters 604 can be implemented as letters residing on different planes.
  • FIG. 7A is a block diagram illustrating a test for horizontal convergence and horizontal divergence, utilizing the holographic eye testing device according to an exemplary embodiment.
  • virtual 2D or 3D shapes 704 can be displayed and/or manipulated in a user’s FOV 402.
  • the horizontal convergence and horizontal divergence tests are used to examine eye movement responses to symmetric stimuli.
  • the virtual shapes 704 can be manipulated in a user’s field of view (FOV) 402.
  • the virtual shapes 704 can have a starting point within the user’s FOV 402 equidistant within the same horizontal plane from a mid-point of the FOV 402.
  • the virtual shapes 704 are translated and projected on the combiner lenses 104A, 104B to give the appearance that the virtual shapes 704 are a set distance from the view of the user’s eyes 106A, 106B.
  • the application software can present the virtual shapes 704 via the combiner lenses 104A, 104B so that the virtual shapes 704 can appear to be at different distances from the user’s eyes 106A, 106B.
  • the presentation of the virtual shapes 704 can correspond to projection of the virtual shapes 704 at distances of 16 inches to 20 feet in front of the user’s eyes 106A, 106B.
  • the range of distances allows horizontal convergence and horizontal divergence to be measured at different intervals of depth for better confidence in convergence and divergence results.
  • the user can start the test by providing input to the computing system 108.
  • the input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.”
  • the user states the word “start” to begin the test.
  • FIGs. 7B-7C are block diagrams illustrating a horizontal convergence test, utilizing the holographic eye testing device according to an exemplary embodiment.
  • the horizontal convergence test is performed in two stages: a break stage and a recovery stage.
  • Two objects (for example, 3D or 2D shapes, such as 3D cubes) are presented at a set distance within the user’s FOV 402.
  • the distance can be changed depending on needs of the user, such as whether the test is being performed for near-sightedness or far-sightedness.
  • a first object 710 of the two objects is projected to a right eye and a second object 712 of the two objects is projected to the left eye.
  • the first object 710 and the second object 712 begin overlaid on each other and appear as one object, as shown in FIG. 7B.
  • the first object 710 and the second object 712 slowly move apart.
  • the first object 710 shown to the right eye moves to the left from a center start point, and the second object 712 shown to the left eye moves to the right from the center start point, as shown in FIG. 7C.
  • the user reports when the user notices that, instead of a single object, there are now two objects (the first object 710 and the second object 712).
  • the user can provide input to the application software or platform software.
  • the input can take the form of voice commands, gestures, or input from a “clicker.”
  • as the first object 710 moves to the left from the center start point and the second object 712 moves to the right from the center start point, the objects will begin to diverge and appear as separate objects.
  • the user can provide input to stop any motion or translation of the first object 710 and the second object 712.
  • the application software evaluates a delta between the midpoint of the user’s FOV 402 and the point at which the first object 710 and the second object 712 were located when the user provided input to stop the motion or translation.
  • the delta can be represented as a deviation relative to the virtual distance of the first object 710 and the second object 712 from the user.
  • the first object 710 and the second object 712 start out offset from each other and slowly move together.
  • the first object 710 shown to the right eye starts on the left side of the user's view and moves to the center start point, and the second object 712 shown to the left eye starts on the right side of the user's view and moves to the center start point, as shown in FIG. 7D.
  • the user reports when the user no longer views two distinct objects, but instead only a single object.
  • the user can provide input to the application software or platform software.
  • the input can take the form of voice commands, gestures, or input from a “clicker.”
  • the user can provide input to stop any motion or translation of the first object 710 and the second object 712.
  • the application software evaluates a delta between the midpoint of the user’s FOV 402 and the point at which the first object 710 and the second object 712 were located when the user provided input to stop the motion or translation.
  • the delta can be represented as a deviation relative to the virtual distance of the first object 710 and the second object 712 from the user.
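A sketch of the two-stage flow, assuming Unity C#; all names are illustrative, and for brevity the recovery stage resumes from the break position rather than from the fresh offset of FIG. 7D.

```csharp
using UnityEngine;

// Illustrative sketch (not the patent's code) of the two-stage horizontal
// convergence test of FIGs. 7B-7D: in the break stage the dichoptic objects
// drift apart until the user reports diplopia; in the recovery stage they
// drift back together until the user reports fusion. Each report freezes
// the motion and records the offset from the FOV midpoint.
public class ConvergenceTest : MonoBehaviour
{
    public Transform rightEyeObject; // object 710, visible only to the right eye
    public Transform leftEyeObject;  // object 712, visible only to the left eye
    public float speedMetersPerSecond = 0.01f;

    private enum Stage { Break, Recovery, Done }
    private Stage stage = Stage.Break;

    private void Update()
    {
        float step = speedMetersPerSecond * Time.deltaTime;
        if (stage == Stage.Break)
        {
            rightEyeObject.Translate(Vector3.left * step, Space.World);  // 710 moves left
            leftEyeObject.Translate(Vector3.right * step, Space.World);  // 712 moves right
        }
        else if (stage == Stage.Recovery)
        {
            rightEyeObject.Translate(Vector3.right * step, Space.World);
            leftEyeObject.Translate(Vector3.left * step, Space.World);
        }
    }

    // Called when the user reports seeing two objects (break) or one (recovery).
    public void OnUserReport()
    {
        // Assumes the FOV midpoint sits at x = 0 in world coordinates.
        float delta = Mathf.Abs(rightEyeObject.position.x);
        Debug.Log($"{stage} point offset: {delta} m");
        stage = (stage == Stage.Break) ? Stage.Recovery : Stage.Done;
    }
}
```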
  • FIGs. 7E-7G are block diagrams illustrating a horizontal divergence test, utilizing the holographic eye testing device according to an exemplary embodiment.
  • the horizontal divergence test is performed in two stages: a break stage and a recovery stage.
  • Two objects (for example, 3D or 2D shapes, such as 3D cubes) are presented at a set distance within the user’s FOV 402.
  • the distance can be changed depending on needs of the user, such as whether the test is being performed for near-sightedness or far-sightedness.
  • a first object of the two objects is projected to a right eye and a second object of the two objects is projected to the left eye.
  • the first object 710 and the second object 712 begin overlaid on each other and appear as one object, as shown in FIG. 7E.
  • the first object 710 and the second object 712 slowly move apart.
  • the first object 710 shown to the right eye moves to the right from the center start point, and the second object 712 shown to the left eye moves to the left from the center start point, as shown in FIG. 7F.
  • the user reports when the user notices that, instead of a single object, there are now two objects (the first object 710 and the second object 712).
  • the user can provide input to the application software or platform software.
  • the input can take the form of voice commands, gestures, or input from a “clicker.”
  • as the first object 710 moves to the right from the center start point and the second object 712 moves to the left from the center start point, the objects will begin to diverge and appear as separate objects.
  • the user can provide input to stop any motion or translation of the first object 710 and the second object 712.
  • the application software evaluates a delta between the midpoint of the user’s FOV 402 and the point at which the first object 710 and the second object 712 were located when the user provided input to stop the motion or translation.
  • the delta can be represented as a deviation relative to the virtual distance of the first object 710 and the second object 712 from the user.
  • the first object 710 and the second object 712 start out offset from each other and slowly move together, as shown in FIG. 7G.
  • the first object 710 shown to the right eye starts on the right side of the user's view and moves to the center start point, and the second object 712 shown to the left eye starts on the left side of the user's view and moves to the center start point.
  • the user reports when the user no longer views two distinct objects, but instead only a single object.
  • the user can provide input to the application software or platform software.
  • the input can take the form of voice commands, gestures, or input from a “clicker.”
  • the objects will begin to overlap and converge into a single object.
  • the user can provide input to stop any motion or translation of the first object 710 and the second object 712.
  • the application software evaluates a delta between the midpoint of the user’s FOV 402 and the point at which the first object 710 and the second object 712 were located when the user provided input to stop the motion or translation.
  • the delta can be represented as a deviation relative to the virtual distance of the first object 710 and the second object 712 from the user.
  • the first object 710 is only shown to the right eye and the second object 712 is only shown to the left eye.
  • eye tracking data is important because it allows the system to determine where the user is looking.
  • the program may request that the user look at a certain point in order to start the test, or to confirm that they are looking in the right location and see the shapes before the test is started.
  • the system may request that a user attempt to gaze at a certain point while the shapes are in motion in order to ensure that the tests return accurate results. If they look away from the point, or directly at the shapes, the test may pause and wait for them to return their gaze to the requested object before continuing.
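The gaze-gating behavior just described might be sketched as follows, with the eye tracker's output abstracted to a gaze ray, since the patent names no specific eye tracking API; the class and tolerance value are illustrative.

```csharp
using UnityEngine;

// Hedged sketch of gaze gating: if the user's tracked gaze wanders from the
// requested fixation point (or lands directly on the moving shapes), the
// test pauses until fixation returns.
public class GazeGate : MonoBehaviour
{
    public Transform fixationPoint;
    public float toleranceDegrees = 3.0f; // illustrative tolerance
    public bool TestPaused { get; private set; }

    // Called each frame with the current gaze ray from the eye tracker.
    public void UpdateGaze(Vector3 eyePosition, Vector3 gazeDirection)
    {
        Vector3 toFixation = (fixationPoint.position - eyePosition).normalized;
        float angle = Vector3.Angle(gazeDirection.normalized, toFixation);
        TestPaused = angle > toleranceDegrees; // pause motion while fixation is lost
    }
}
```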
  • FIGs. 8A-8C are diagrams illustrating refraction testing utilizing the holographic eye testing device according to an exemplary embodiment.
  • at least one virtual object 804 and/or a series of virtual lines 810 can be displayed and/or manipulated in a user’s FOV 402.
  • the object 804 or the series of lines 810 are translated on a vertical plane 808 and projected on the combiner lenses 104A, 104B to give the appearance that the object 804 or the series of lines 810 is a set distance from the view of the user’s eyes 106A, 106B.
  • the application software can present the object 804 or the series of lines 810 via the combiner lenses 104A, 104B so that the object 804 or the series of lines 810 can appear to be at different distances from the user’s eyes 106A, 106B.
  • the presentation of the object 804 or the series of lines 810 can correspond to projection of virtual letters, images, or lines at distances of 16 inches (near visual acuity) to 20 feet (distance visual acuity) in front of the user’s eyes 106A, 106B.
  • the user can start the test by providing input to the computing system 108.
  • the input can take the form of voice commands, including saying key words indicative of beginning the test, gestures, or providing input from a “clicker.”
  • the user states the word “start” to begin the test.
  • the described systems and methods use depth of field for testing refraction.
  • the refraction testing determines the user’s level of hyperopia (farsightedness), myopia (nearsightedness), and astigmatism, and three associated numerical values for sphere, cylinder, and axis, typically needed for an eyeglass prescription.
  • the computing system 108 measures the refractive state of the eye by having the user move the object 804 from an initial distance towards or away from the user until a resolution of the object 804 appears clear to the user. The distance at which the object 804 appears clear to the user is labeled as a final distance.
  • the computing system 108 determines an initial measurement between the final position of the virtual object and an optimal virtual position. This initial measurement is at the focal length of the refractive spherical equivalent of the eye and is the sphere power.
  • the sphere power indicates the amount of lens power, measured in diopters (D), prescribed to correct nearsightedness or farsightedness.
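Under the standard relation that dioptric power is the reciprocal of the focal length in meters, the sphere power follows directly from the final clear distance. The sign convention below (minus for a myopic far point) is an assumption, since the patent does not spell it out.

```csharp
// Sketch of the depth-of-field refraction principle described above: for a
// myopic eye whose far point is the final distance at which the object 804
// appears clear, the correcting sphere is the negative reciprocal of that
// distance in meters (an assumed sign convention).
public static class RefractionMath
{
    /// <param name="clearDistanceMeters">Final distance at which the object 804 appears clear.</param>
    public static double SpherePowerDiopters(double clearDistanceMeters)
    {
        return -1.0 / clearDistanceMeters; // e.g., clear at 0.5 m -> -2.00 D
    }
}
```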
  • a series of virtual lines 810 are presented at the final distance in a parallel or coincidental plane with the plane 808.
  • the series of lines 810 correspond to concentric opaque rings across the surface of an invisible sphere 812 (shown in FIG. 8C), where the concentric opaque rings traverse the surface of the sphere 812 perpendicular to the plane of the combiner lenses 104A, 104B.
  • the series of lines 810 includes a predefined number of lines (for example, three lines) in the axis plane.
  • the invisible sphere 812 and accompanying series of lines 810 are rotated in a clockwise motion from the user’s perspective.
  • the invisible sphere 812 and accompanying series of lines 810 appear to have rotated ninety (90) degrees about an axis from the final position (parallel or coincident with the axis 814 shown in FIG. 8C).
  • the user can provide input to stop the test, for example, in the form of a voice command of “stop.”
  • the computing system 108 ceases rotation of the invisible sphere 812 and calculates a delta in degrees based on the rotation from the starting point of the invisible sphere 812 to the orientation of the invisible sphere 812 at the end of the test.
  • the delta in degrees can be used to determine the axis value.
  • the second step provides the axis. This provides the amount of astigmatism measured in this eye and therefore the predicted amount of cylindrical correction needed to bring clarity.
  • the cylinder value refers to the amount of astigmatism in the eyes.
  • the lines 810 are shifted 90 degrees from the axis 814 and moved further away from the user to display a blurred side of the lines 810.
  • the user moves the lines 810 closer to or farther from (plus cylinder or minus cylinder) the user until the lines appear clear to the user. This is the focal length of the cylinder power and corresponds to the difference in power from the sphere.
  • the sphere, cylinder, and axis values are determined.
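An assumed sketch tying the three values together into the familiar prescription form, reusing the reciprocal-distance convention from the sphere sketch above; the cylinder is the difference in dioptric power between the two principal meridians, as the preceding steps describe.

```csharp
// Assumed sketch, not the patent's code: assemble sphere, cylinder, and
// axis into eyeglass-prescription form from the two measured focal
// distances and the measured axis.
public readonly struct Prescription
{
    public readonly double SphereDiopters;
    public readonly double CylinderDiopters;
    public readonly double AxisDegrees;

    public Prescription(double sphereFocalMeters, double cylinderFocalMeters, double axisDegrees)
    {
        SphereDiopters = -1.0 / sphereFocalMeters;
        // Cylinder power is the difference between the two meridian powers.
        CylinderDiopters = (-1.0 / cylinderFocalMeters) - SphereDiopters;
        AxisDegrees = axisDegrees;
    }

    public override string ToString() =>
        $"{SphereDiopters:+0.00;-0.00} {CylinderDiopters:+0.00;-0.00} x {AxisDegrees:000}";
}
```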
  • the above described sequence provides the amount of hyperopia (farsightedness), myopia (nearsightedness), and astigmatism measured in this eye and therefore the predicted amount of correction needed to bring clarity. If the user has both myopia or hyperopia and astigmatism, the object 804 would be simultaneously manipulated to determine the myopia or hyperopia while also evaluating the dioptric power of the astigmatism.
  • when measuring hyperopia, plus (convex) lenses are included in the HMD 102 underneath the lenses 104A, 104B and in front of the user’s eyes 106A, 106B.
  • the reason for this is that the hyperopic eye focuses beyond the fixation object.
  • the hyperopic eye is thus mathematically made to become myopic. An algorithm subtracts the values to determine the dioptric value of the hyperopia.
  • a fourth step is included to refine the exact sphere power and then to binocularly balance the prescription for the two eyes.
  • the object 804 is shifted further away from the user to create +0.50 diopters in each of the user’s eyes 106A, 106B.
  • the object 804 initially appears as one object to the user and then is disassociated or separated until the user can identify two separate objects.
  • the user reports which object is clearer.
  • the clearer object is then moved away from the user until both objects appear equally blurred.
  • the objects are then merged and the user moves them closer until the resolution appears best to the user.
  • the binocular balance has then been completed.
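One plausible reading of the +0.50 diopter shift (an assumption; the patent gives no formula) is that moving the object away changes its vergence demand, the reciprocal of its distance in meters, by 0.50 D, so the new simulated distance follows directly.

```csharp
// Assumed sketch of the +0.50 D step in the binocular balance: reduce the
// vergence demand (reciprocal of distance in meters) by the given amount.
// Valid only while the current demand exceeds the shift being applied.
public static class BinocularBalance
{
    public static double ShiftedDistanceMeters(double currentDistanceMeters, double shiftDiopters = 0.5)
    {
        double currentVergence = 1.0 / currentDistanceMeters; // demand in diopters
        double shiftedVergence = currentVergence - shiftDiopters;
        return 1.0 / shiftedVergence; // e.g., 1 m -> 2 m for a 0.50 D shift
    }
}
```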
  • the object 804 and lines 810 can be moved forward or backward, or rotated.
  • Control of the test can take the form of voice commands including “forward,” “backward,” and “rotate.”
  • a voice command of “forward” translates the plane 808, and associated object 804 or lines 810 toward the combiner lenses 104A, 104B.
  • a voice command of “backward” translates the plane 808, and associated object 804 or lines 810 away from the combiner lenses 104A, 104B.
  • a voice command of “rotate” rotates the plane 808 and associated object 804 or lines 810.
  • a user can manipulate the object 804 until the user can, or can no longer, identify the object 804 or lines 810.
  • the user can provide a voice command to the computing system 108, such as stating the word “stop” to complete the manipulation portion of the test.
  • FIG. 8B is a block diagram illustrating a user’s perspective of the virtual object 804 described in FIG. 8A.
  • the virtual object 804 can be implemented as one or more virtual 2D or 3D letters or images, such as shapes, pictures, etc., residing on the plane 808.
  • FIG. 8C is a block diagram illustrating a user’s perspective of the series of lines 810 described in FIG. 8A.
  • the series of lines 810 can be implemented as parallel lines (running vertical and/or horizontal) residing on the plane 808.
  • the series of lines 810 correspond to concentric opaque rings across the surface of the invisible sphere 812, where the concentric opaque rings traverse the surface of the sphere 812 perpendicular to the plane of the combiner lenses 104A, 104B.
  • FIG. 9A is a block diagram illustrating convergence testing utilizing the holographic eye testing device according to an exemplary embodiment.
  • the head mounted display (HMD) 102 incorporates eye tracking. Eye tracking enables the HMD 102 to track where the user’s eyes 106A, 106B are looking in real time.
  • at least one virtual object 902 can be displayed and/or manipulated in a user’s FOV 402.
  • the object 902 is translated on a vertical plane 908 and projected on the combiner lenses 104A, 104B to give the appearance that the object 902 is a set distance from the view of the user’s eyes 106A, 106B.
  • the application software can present the object 902 via the combiner lenses 104A, 104B so that the object 902 can appear to be at different distances from the user’s eyes 106A, 106B.
  • the presentation of the object 902 can correspond to projection of virtual letters, images, or lines at distances of 16 inches (near visual acuity) to 20 feet (distance visual acuity) in front of the user’s eyes 106A, 106B.
  • the user can start the test by providing input to the computing system 108.
  • the input can take the form of voice commands, including saying key words indicative of beginning the test, gestures, or providing input from a “clicker.”
  • the user states the word “start” to begin the test.
  • the object 902 is presented to each eye 106A, 106B and is moved across the user’s FOV 402.
  • the user follows the object 902 from left to right and right to left.
  • the object 902 is then moved in a circle clockwise and counter-clockwise.
  • the HMD 102 via eye tracking, monitors fixation loss and quality of movement of the user’s eyes 106A, 106B. Points of fixation loss and the quality of movement are recorded.
  • Convergence near point will be assessed by rendering a first virtual object 910 displayed to a right eye and a second virtual object 912 displayed to a left eye within the holographic display device, wherein the rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to the user.
  • the first virtual object 910 and the second virtual object 912 appear at a distance of 40 centimeters in front of the user’s eyes 106A, 106B.
  • the user moves the first virtual object 910 and the second virtual object 912 presented to both eyes 106A, 106B toward the user’s nose.
  • the HMD 102 monitors eye alignment and records the distance (in centimeters or inches) when the eyes 106A, 106B lose alignment on the first virtual object 910 and the second virtual object 912 and the first virtual object 910 and the second virtual object 912 appear as two separate objects. This is recorded as the break point.
  • the first virtual object 910 and the second virtual object 912 are then moved away from the user and the HMD 102 records the distance when the eyes 106A, 106B realign and the user fuses the first virtual object 910 and the second virtual object 912 back to appear as one virtual object to the user. This is recorded as the realignment point.
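A sketch of that near point of convergence loop, assuming Unity C# and abstracting the eye tracker's alignment verdict to a boolean; all names are illustrative.

```csharp
using UnityEngine;

// Illustrative sketch of the near point of convergence measurement of
// FIG. 9A: the paired objects move toward the nose while eye tracking
// watches alignment; the distance at which alignment is lost is the break
// point, and the distance at which the eyes re-fuse on the retreating
// objects is the realignment point.
public class ConvergenceNearPoint : MonoBehaviour
{
    public Transform pairedObjects;          // objects 910/912, starting 40 cm from the eyes
    public float speedMetersPerSecond = 0.02f;

    private bool approaching = true;
    public float? BreakPointMeters { get; private set; }
    public float? RealignmentPointMeters { get; private set; }

    // Called each frame with the eye tracker's verdict on binocular alignment.
    // Assumes the user sits at the origin looking along the +z axis.
    public void Tick(bool eyesAligned)
    {
        float step = speedMetersPerSecond * Time.deltaTime;
        float distance = pairedObjects.position.z;

        if (approaching)
        {
            pairedObjects.Translate(Vector3.back * step, Space.World); // toward the nose
            if (!eyesAligned && BreakPointMeters == null)
            {
                BreakPointMeters = distance; // objects now appear as two
                approaching = false;         // reverse: move away from the user
            }
        }
        else if (RealignmentPointMeters == null)
        {
            pairedObjects.Translate(Vector3.forward * step, Space.World);
            if (eyesAligned) RealignmentPointMeters = distance; // fused back to one
        }
    }
}
```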
  • FIG. 9B is a block diagram illustrating a user’s perspective of the virtual object 902 and/or the first virtual object 910 and the second virtual object 912 as viewed aligned, as described in FIG. 9A.
  • the virtual object 902 can be implemented as one or more virtual 2D or 3D letters or images, such as shapes, pictures, etc., residing on the plane 908.
  • FIGs. 10A-10D illustrate methods for diagnosis and/or prescription of remedies for visual impairment in accordance with exemplary embodiments.
  • FIG. 10A illustrates a method for diagnosis and prescription of remedies for visual impairment in accordance with an exemplary embodiment.
  • the holographic display device renders one or more three dimensional objects within the holographic display device.
  • the rendering corresponds to a virtual level of depth viewable by a user.
  • the holographic display device updates the rendering of the one or more three dimensional objects within the holographic display device.
  • the updated rendering includes a virtual movement of the one or more three dimensional objects within the virtual level of depth.
  • the virtual movement includes moving the one or more three dimensional objects laterally in the field of view of the user.
  • the virtual movement includes moving the one or more three dimensional objects vertically in the field of view of the user.
  • the virtual movement includes moving the one or more three dimensional objects from a distal position to proximal position within the field of view of the user.
  • the virtual level of depth corresponds to a simulated distance away from the user.
  • the simulated distance can range from sixteen (16) inches to twenty (20) feet from the user.
  • the holographic display device receives input from a user.
  • the input can include an indication of alignment of the one or more three dimensional objects based on the virtual movement.
  • the indication of alignment can include a relative virtual position between the one or more three dimensional objects.
  • the input from the user can include hand gestures and voice commands.
  • the holographic display device determines a delta between the relative virtual position of the one or more three dimensional objects and an optimal virtual position.
  • the holographic display device generates a prescriptive remedy based on the delta between the relative virtual position of the one or more three dimensional objects and the optimal virtual position.
  • FIG. 10B illustrates a method for diagnosis and prescription of remedies for visual impairment in accordance with an exemplary embodiment.
  • a diagnostic module configured to execute on a computing device communicatively coupled to the head mounted holographic display device renders a first virtual object displayed to the right eye and a second virtual object displayed to the left eye within a head mounted holographic display device.
  • the rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to a user.
  • the diagnostic module updates the rendering of the first virtual object and the second virtual object within the holographic display device.
  • the update includes a virtual movement of the first virtual object in a first direction and the second virtual object in a second direction opposite the first direction.
  • the diagnostic module receives input from a user.
  • the input includes an indication of separation of the first virtual object and the second virtual object based on the virtual movement.
  • the indication of separation comprises a relative virtual position between the first virtual object and the second virtual object where the user views the first virtual object and the second virtual object as separate objects.
  • the diagnostic module determines a first delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
  • the diagnostic module updates the rendering of the first virtual object and the second virtual object within the holographic display device.
  • the update includes a virtual movement of the first virtual object in the second direction and the second virtual object in the first direction.
  • the diagnostic module receives a second input from the user.
  • the input includes an indication of alignment of the first virtual object and the second virtual object based on the virtual movement.
  • the indication of alignment comprises a relative virtual position between the first virtual object and the second virtual object where the user views the first virtual object and the second virtual object as aligned to appear as one virtual object.
  • the diagnostic module determines a second delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
  • FIG. 10C illustrates a method for diagnosis and prescription of remedies for visual impairment in accordance with an exemplary embodiment.
  • the diagnostic module renders at least one virtual object within the holographic display device at an initial position.
  • the rendering corresponds to a virtual level of depth corresponding to an initial simulated distance away from a user.
  • the diagnostic module receives at least one first input from the user.
  • the at least one first input comprises an indication to move the at least one virtual object virtually towards or away from the user.
  • the diagnostic module updates the rendering of the at least one virtual object within the holographic display device. The update includes a virtual movement of the at least one virtual object in a direction towards or away from the user.
  • the diagnostic module receives a second input from the user.
  • the second input includes an indication that the at least one virtual object appears clear to the user at a final position.
  • the rendering corresponds to a virtual level of depth corresponding to a final simulated distance away from the user.
  • the diagnostic module determines a measurement between the final position of the virtual object and an optimal virtual position.
  • the diagnostic module renders at least one line within the holographic display device at the final position.
  • the diagnostic module rotates the at least one line about an axis from the final position.
  • the diagnostic module receives a third input from the user.
  • the third input comprises an indication that the at least one line appears to the user to have rotated ninety degrees about the axis from the final position.
  • the diagnostic module determines a delta in degrees based on the rotation of the at least one line from the final point to an orientation of the at least one line at a time of receiving the third input.
  • FIG. 10D illustrates a method for diagnosis and prescription of remedies for visual impairment in accordance with an exemplary embodiment.
  • the diagnostic module renders at least one virtual object within the holographic display device.
  • the rendering corresponds to a virtual level of depth viewable by the user.
  • the diagnostic module updates the rendering of the at least one virtual object within the holographic display device.
  • the update includes a virtual movement of the at least one virtual object within the virtual level of depth.
  • the diagnostic module monitors, via eye tracking, at least one of fixation loss or quality of movement of the eyes of the user.
  • the diagnostic module renders a first virtual object displayed to a right eye and a second virtual object displayed to a left eye within the holographic display device.
  • the rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to a user.
  • the diagnostic module updates the rendering of the first virtual object and the second virtual object within the holographic display device.
  • the update includes a virtual movement of the first virtual object and the second virtual object towards the user.
  • the diagnostic module monitors, via the eye tracking, eye alignment as the eyes of the user track the first virtual object and the second virtual object moving towards the user.
  • the diagnostic module identifies a distance when the user views the first virtual object and the second virtual object as separate objects.
  • the diagnostic module determines a delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
  • the diagnostic module generates a prescriptive remedy based on the delta between the relative virtual position of the first virtual object and the second virtual object and the optimal virtual position.
  • FIG. 11 depicts a block diagram of an exemplary computing device in accordance with an exemplary embodiment.
  • Embodiments of the computing device 1100 can implement embodiments of the system including the holographic eye testing device.
  • the computing device can be embodied as a portion of the holographic eye testing device, and supporting computing devices.
  • the computing device 1100 includes one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing exemplary embodiments.
  • the non-transitory computer-readable media may include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives, one or more solid state disks), and the like.
  • memory 1106 included in the computing system 108 may store computer-readable and computer-executable instructions or software (e.g., applications 1130 such as a rendering application) for implementing exemplary operations of the computing device 1100.
  • the computing system 108 also includes configurable and/or programmable processor 1102 and associated core(s) 1104, and optionally, one or more additional configurable and/or programmable processor(s) 1102’ and associated core(s) 1104’ (for example, in the case of computer systems having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 1106 and other programs for implementing exemplary embodiments of the present disclosure.
  • Processor 1102 and processor(s) 1102’ may each be a single core processor or multiple core (1104 and 1104’) processor. Either or both of processor 1102 and processor(s) 1102’ may be configured to execute one or more of the instructions described in connection with computing system 108.
  • Virtualization may be employed in the computing system 108 so that infrastructure and resources in the computing system 108 may be shared dynamically.
  • a virtual machine 1112 may be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines may also be used with one processor.
  • Memory 1106 may include a computer system memory or random access memory, such as DRAM, SRAM, EDO RAM, and the like. Memory 1106 may include other types of memory as well, or combinations thereof.
  • the computing system 108 can receive data from input/output devices. A user may interact with the computing system 108 through a visual display device 1114, such as combiner lenses 1116, which may display one or more virtual graphical user interfaces, as well as through a microphone 1120 and one or more cameras 1118.
  • the computing system 108 may also include one or more storage devices 1126, such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software that implement exemplary embodiments of the present disclosure.
  • exemplary storage device 1126 can store information associated with the platform software and the application software.
  • the computing system 108 can include a network interface 1108 configured to interface via one or more network devices 1124 with one or more networks, for example, Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above.
  • the computing system can include one or more antennas 1122 to facilitate wireless communication (e.g., via the network interface) between the computing system 108 and a network and/or between the computing system 108 and other computing devices.
  • the network interface 1108 may include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing system 108 to any type of network capable of communication and performing the operations described herein.
  • the computing system 108 may run any operating system 1110, such as any of the versions of the Microsoft® Windows® operating systems, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of running on the computing system 108 and performing the operations described herein.
  • the operating system 1110 may be run in native mode or emulated mode.
  • the operating system 1110 may be run on one or more cloud machine instances.
  • Exemplary flowcharts are provided herein for illustrative purposes and are non-limiting examples of methods.
  • One of ordinary skill in the art will recognize that exemplary methods can include more or fewer steps than those illustrated in the exemplary flowcharts and that the steps in the exemplary flowcharts can be performed in a different order than the order shown in the illustrative flowcharts.

Abstract

A system and a method for a holographic eye testing device are disclosed. The system renders one or more three dimensional objects within the holographic display device. The system updates the rendering of the one or more three dimensional objects within the holographic display device by virtual movement of the one or more three dimensional objects within the virtual level of depth. The system receives input from a user indicating alignment of the one or more three dimensional objects after the virtual movement. The system determines a delta between a relative virtual position of the one or more three dimensional objects at the moment of receiving input and an optimal virtual position, and generates a prescriptive remedy based on the delta.

Description

HOLOGRAPHIC REAL SPACE REFRACTIVE SYSTEM
BACKGROUND
1. Technical Field
[0001] Systems and methods for providing a visual examination using holographic projection in real space and time are provided.
2. Background Art
[0002] For over one hundred years, doctors have provided eye examinations including refraction by using lenses and prisms to determine the refractive state and binocularity of the patient. Refraction means to bend light. For persons with myopia (nearsightedness), hyperopia (farsightedness) and astigmatism (two different power curves), a refraction was performed to correct the refractive state and blurred vision of the patient by using physical lenses and prisms. While in the 19th century the refraction was mostly conducted with a trial frame by holding up individual lenses before each eye to make the image more clear, in the 20th century the phoropter (meaning “many lenses”) was developed. This instrument was extended on the arm of a physical stand and the instrument was positioned before the patient’s face. The clinician would then turn the dial to move different lenses in front of the person’s eyes to find the best subjective refraction to improve distance vision. The instrument was then advanced to include prisms that could be used to disassociate images or change the position of the image, enabling the clinician to evaluate muscle ranges and the ability to maintain eye alignment and binocularity. It also permitted assessment of the person’s ability to accommodate or focus at a near range. This was all for the purpose of designing glasses to improve eyesight and visual acuity for both distance and near ranges as well as to prescribe prisms to correct for imbalance in eye alignment affecting binocularity.
[0003] While the phoropter is an effective instrument and still used today, it limits the peripheral field and cannot assess binocularity in any meridian other than primary gaze, or looking straight ahead. Binocular imbalances can sometimes only be represented with gaze outside of the primary gaze position. Therefore, the instrument has limited value for these purposes and/or leads the clinician to only be able to prescribe lenses and prisms for one position of the eyes. In addition, the large phoropter blocks the peripheral vision, producing an abnormal environment and restriction of side vision, which frequently affects the intensity of the attentional visual process and causes the refractive correction to be too strong or imbalanced.
[0004] These and other issues and limitations of existing instruments and technologies are addressed and overcome by the systems and methods of the present disclosure.
SUMMARY OF THE INVENTION
[0005] Described herein is a system to evaluate the refractive state of the eye and visual process as well as binocularity in the nine cardinal positions of gaze while in real space by using holographic projection for each eye. The refractive state assessment has been designed to enable the eye of the patient to focus on virtual objects in the manner that the refractive imbalance will focus to maintain clear vision. For example, an object is presented with three dimensions. The myopic eye will focus on the near side of the object and see it with clarity. The dimensions and position of the object are then adjusted to refocus the far or distance side of the object, and calibration is determined as to the power of the eye and the power of the lens required to re-focus the eye to best visual acuity at infinity. The same would occur for the hyperopic eye, only the far portion of the three-dimensional object will be in initial focus.
[0006] The patient uses hand movements and/or voice commands to communicate the subjective measurement of the dioptric power to correct the vision to best visual acuity and, advantageously, these objectives are accomplished through manipulation of the object in real space. More particularly, in an exemplary embodiment of the present disclosure, an eye with astigmatism would be presented a three dimensional object where perpendicular lines would enable the patient to observe that one of the lines is clear and the other blurred. The object will be rotated to determine the axis of the astigmatism and then the opposite or blurred side of the object would be shifted in space virtually to bring it into focus. This sequence of operation will provide the amount of astigmatism measured in this eye and therefore the predicted amount of cylindrical correction needed to bring clarity. If the patient has both myopia or hyperopia and astigmatism, the object would simultaneously be manipulated to determine the myopia or hyperopia while also evaluating the dioptric power of the astigmatism.
[0007] Additional features, functions and benefits of the disclosed systems and methods will be apparent from the detailed description which follows, particularly when read in conjunction with the accompanying figures.
BRIEF DESCRIPTION OF DRAWINGS
[0008] Illustrative embodiments are shown by way of example in the accompanying drawings and should not be considered as a limitation of the present disclosure:
[0009] FIG. 1 is a block diagram illustrating a system for the holographic eye testing device according to an exemplary embodiment.
[0010] FIG. 2 is a block diagram illustrating a test for horizontal phoria with a holographic eye testing device according to an exemplary embodiment.
[0011] FIG. 3 is a block diagram illustrating a test for vertical phoria utilizing the holographic eye testing device according to an exemplary embodiment.
[0012] FIG. 4A is a block diagram illustrating a test for astigmatism utilizing the holographic eye testing device according to an exemplary embodiment.
[0013] FIG. 4B is a block diagram illustrating a user’s perspective of the virtual 3D objects depicted in FIG. 4A according to an exemplary embodiment.
[0014] FIG. 5A is a block diagram illustrating a test for astigmatism utilizing the holographic eye testing device according to an exemplary embodiment.
[0015] FIG. 5B is a block diagram illustrating a user’s perspective of the virtual 3D objects depicted in FIG. 5A according to an exemplary embodiment.
[0016] FIG. 5C is a block diagram illustrating another perspective of the virtual 3D objects depicted in FIG. 5A according to an exemplary embodiment.
[0017] FIGs. 6A-6B are diagrams illustrating a test for visual acuity utilizing the holographic eye testing device according to an exemplary embodiment.
[0018] FIGs. 7A-7G are diagrams illustrating tests for horizontal convergent and horizontal divergent, utilizing the holographic eye testing device according to an exemplary embodiment.
[0019] FIGs. 8A-8C are diagrams illustrating refraction testing utilizing the holographic eye testing device according to an exemplary embodiment.
[0020] FIGs. 9A-9B are diagrams illustrating convergence testing utilizing the holographic eye testing device according to an exemplary embodiment.
[0021] FIGs. 10A-10D are block diagrams illustrating methods for utilizing the holographic eye testing device according to an exemplary embodiment.
[0022] FIG. 11 depicts a block diagram of an exemplary computing device in accordance with an exemplary embodiment.
DETAILED DESCRIPTION
[0023] Apparatus, methods, and non-transitory computer readable medium are described for a holographic eye testing device. Example embodiments provide a device for utilizing holographic virtual projection to perform eye testing, diagnosis, and prescriptive remedy.
[0024] In some embodiments, the disclosed holographic eye testing device renders, on a head mounted device, one or more three dimensional objects within the holographic display device, wherein the rendering corresponds to a virtual level of depth viewable by a user. The holographic display device updates the rendering of the one or more three dimensional objects, wherein the updates include a virtual movement of the one or more three dimensional objects within the virtual level of depth. The holographic display device receives input from a user, wherein the input includes an indication of alignment of the one or more three dimensional objects based on the virtual movement. The indication of alignment includes a relative position between the one or more three dimensional objects. The holographic display device determines a delta between the relative virtual position between the one or more three dimensional objects and an optimal virtual position. The holographic display device generates a prescriptive remedy based on the delta.
[0025] FIG. 1 is a block diagram illustrating a system 100 for the holographic eye testing device according to an exemplary embodiment. In one embodiment, the holographic eye testing device can include a head mounted display (HMD) 102. The HMD 102 can include a pair of combiner lenses 104A, 104B for rendering three dimensional (3D) images within a user’s field of view (FOV). The combiner lenses 104A, 104B can be calibrated to the interpupillary distance of the user’s eyes 106A, 106B. A computing system 108 can be connected to the combiner lenses 104A, 104B. The holographic eye testing device can be repositioned in any of the nine primary gaze positions as needed. These tests are built to run on technical platforms that can project 3D holographic images within a field of view provided by a wired or wireless headset. The HMD 102 can be connected to an adjustable, cushioned inner headband, which can tilt the combiner lenses 104A, 104B up and down, as well as forward and backward. To wear the unit, the user fits the HMD 102 on their head, using an adjustment wheel at the back of the headband to secure it around the crown, supporting and distributing the weight of the unit equally for comfort, before tilting the visor and combiner lenses 104A, 104B towards the front of the eyes.
[0026] The computing system 108 can be inclusive to the HMD 102, where the holographic eye testing device is a self contained apparatus. The computing system 108 in the self contained apparatus can include additional power circuitry to provide electrical current to the parts of the computing system 108. Alternatively, the computing system 108 can be external to the HMD 102 and communicatively coupled either through wired or wireless communication channels to the HMD 102. Wired communication channels can include digital video transmission formats including High Definition Multimedia Interface (HDMI), DisplayPort™ (DisplayPort is a trademark of VESA of San Jose CA, U.S.A.), or any other transmission format capable of propagating a video signal from the computing system 108 to the combiner lenses 104A, 104B. Additionally, the HMD 102 can include speakers or headphones for the presentation of instructional audio to the user during the holographic eye tests. In a wireless communication embodiment, the HMD 102 can include a wireless adapter capable of low latency high bandwidth applications, including but not limited to IEEE 802.11ad. The wireless adapter can interface with the computing system 108 for the transmission of low latency video to be displayed upon the combiner lenses 104A, 104B.
[0027] Additionally, the computing system 108 can include software for the manipulation and rendering of 3D objects within a virtual space. The software can include both platform software to support any fundamental functionality of the HMD 102, such as motion tracking, input functionality, and eye tracking. Platform software can be implemented in a virtual reality (VR) framework, augmented reality (AR) framework, or mixed reality (MR) framework. Platform software to support the fundamental functionality can include but is not limited to the SteamVR® (SteamVR is a registered trademark of the Valve Corporation, Seattle WA, U.S.A) software development kit (SDK), the Oculus® VR SDK (Oculus is a registered trademark of Oculus VR LLC, Irvine CA, U.S.A.), the OSVR (Open Source VR) (OSVR is a registered trademark of Razer Asia Pacific Pte. Ltd. Singapore) SDK, and the Microsoft Windows Mixed Reality Computing Platform. Application software executing on the computing system 108 with the underlying platform software can be a customized rendering engine, or an off-the-shelf 3D rendering framework, such as Unity® Software (Unity Software is a registered trademark of Unity Technologies of San Francisco CA, U.S.A). The rendering framework can provide the basic building blocks of the virtualized environment for the holographic refractive eye test, including 3D objects and manipulation techniques to change the appearance of the 3D objects. The rendering framework can provide application programming interfaces (APIs) for the instantiation of 3D objects and well-defined interfaces for the manipulation of the 3D objects within the framework. Common software programming language bindings for rendering frameworks include but are not limited to C++, Java, and C#. Additionally, the application software can provide settings to allow a test administrator to adjust actions within the test, such as holographic object speed and object color.
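By way of illustration only, the following is a minimal sketch of how such a rendering framework's API could be exercised for the tests described herein, assuming the Unity engine named above; the component name TestObjectFactory, its fields, and the two-meter start distance are illustrative assumptions rather than part of the disclosed system.

using UnityEngine;

// Illustrative sketch only: instantiates a virtual 3D test object in front of
// the user and translates it toward the viewer, mirroring the administrator
// settings (object speed, object color) described above. Assumes Unity;
// all names are hypothetical.
public class TestObjectFactory : MonoBehaviour
{
    public GameObject testObjectPrefab;   // prefab supplied by the application
    public float objectSpeed = 0.05f;     // administrator-adjustable speed (virtual m/s)
    public Color objectColor = Color.white;

    private GameObject testObject;

    void Start()
    {
        // Instantiate the object two virtual meters in front of the headset.
        Transform cam = Camera.main.transform;
        testObject = Instantiate(testObjectPrefab,
                                 cam.position + cam.forward * 2.0f,
                                 Quaternion.identity);
        testObject.GetComponent<Renderer>().material.color = objectColor;
    }

    void Update()
    {
        // Translate the object toward the viewer at the configured speed.
        testObject.transform.position -=
            Camera.main.transform.forward * objectSpeed * Time.deltaTime;
    }
}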
[0028] The system 100 can be configured to perform a variety of eye tests, including, but not limited to, acuity testing (near and far), phorias, horizontal divergence, horizontal convergence, and refraction.
[0029] FIG. 2 is a block diagram illustrating a test for horizontal phoria with a holographic eye testing device according to an exemplary embodiment. In one embodiment, two virtual 3D objects 202A, 202B can be manipulated in a user’s field of view (FOV) 204A, 204B. The virtual 3D objects 202A, 202B can be translated within the same horizontal plane until convergence. The virtual 3D objects 202A, 202B can have a starting point within the user’s FOV 204A, 204B equidistant within the same horizontal plane from a mid-point of the FOV. Utilizing application software, the virtual 3D objects 202A, 202B are translated and projected on the combiner lenses 104A, 104B to give the appearance that the virtual 3D objects are a set distance from the view of the user’s eyes 106A, 106B. The application software can present the virtual 3D objects 202A, 202B via the combiner lenses 104A, 104B so that the virtual 3D objects can appear to be at different distances from the user’s eyes 106A, 106B. In some embodiments, the presentation of the virtual 3D objects 202A, 202B can correspond to projection of the virtual 3D objects at distances of 16 inches to 20 feet in front of the user’s eyes 106A, 106B. The range of distances allows phoria to be measured at different intervals of depth for better confidence in convergence results. As the virtual 3D objects 202A, 202B approach the mid-point of the user’s FOV, the user can provide input to the application software or platform software. The input can take the form of voice commands, gestures, or input from a “clicker.” As the virtual 3D objects 202A, 202B approach each other, they will begin to overlap and converge into a single virtual 3D object. At the point at which the convergence becomes clear to the user, the user can provide input to stop any motion or translation of the virtual 3D objects 202A, 202B. The application software evaluates a delta between the midpoint of the user’s FOV 204A, 204B and the point at which the virtual 3D objects 202A, 202B were located when the user provided input to stop the motion or translation. The delta can be represented as a deviation relative to the virtual distance of the virtual 3D objects 202A, 202B from the patient. A diopter is measured by the deviation of the image at a specific virtual distance (1 prism diopter = 1 virtual cm deviation of the image at a 1 virtual meter distance).
[0030] FIG. 3 is a block diagram illustrating a test for vertical phoria utilizing the holographic eye testing device according to an exemplary embodiment. In one embodiment, two virtual 3D objects 304A, 304B can be manipulated in a user’s FOV 302. The virtual 3D objects 304A, 304B can be translated within the same vertical plane until convergence. The virtual 3D objects 304A, 304B can have a starting point within the user’s FOV 302 equidistant within the same vertical plane from a mid-point of the FOV. Utilizing application software, the virtual 3D objects 304A, 304B are translated and projected on the combiner lenses 104A, 104B to give the appearance that the virtual 3D objects are a set distance from the view of the user’s eyes 106A, 106B. The application software can present the virtual 3D objects 304A, 304B via the combiner lenses 104A, 104B so that the virtual 3D objects can appear to be at different distances from the user’s eyes 106A, 106B. In some embodiments, the presentation of the virtual 3D objects 304A, 304B can correspond to projection of the virtual 3D objects at distances of 16 inches to 20 feet in front of the user’s eyes 106A, 106B. The range of distances allows phoria to be measured at different intervals of depth for better confidence in convergence results. As the virtual 3D objects 304A, 304B approach the midpoint of the user’s FOV 302, the user can provide input to the application software or platform software. The input can take the form of voice commands, gestures, or input from a “clicker.” As the virtual 3D objects 304A, 304B approach each other, they will begin to overlap and converge into a single visible virtual 3D object. At the point at which the convergence becomes clear to the user, the user can provide input to stop any motion or translation of the virtual 3D objects 304A, 304B. The application software evaluates a delta between the midpoint of the user’s FOV 302 and the point at which the virtual 3D objects 304A, 304B were located when the user provided input to stop the motion or translation. As mentioned above, the delta can be represented as a deviation relative to the virtual distance of the virtual 3D objects 304A, 304B from the patient. A diopter is measured by the deviation of the image at a specific virtual distance (1 prism diopter = 1 virtual centimeter deviation of the image at a 1 virtual meter distance).
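The deviation-to-diopter relationship recited in the preceding two paragraphs (1 prism diopter = 1 virtual centimeter of deviation at a 1 virtual meter distance) can be expressed directly in code; the following C# fragment is a minimal sketch with illustrative names, and applies equally to the horizontal and vertical phoria tests.

public static class PhoriaMath
{
    // 1 prism diopter = 1 cm of image deviation at a 1 m virtual distance, so
    // prism diopters = deviation (in virtual cm) / viewing distance (in virtual m).
    public static double PrismDiopters(double deviationMeters, double viewingDistanceMeters)
    {
        return (deviationMeters * 100.0) / viewingDistanceMeters;
    }
}

For example, a deviation of 0.02 virtual meters (2 cm) measured at a 2 virtual meter distance yields PrismDiopters(0.02, 2.0) = 1.0 prism diopter.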
[0031] FIG. 4A is a block diagram illustrating a test for astigmatism utilizing the holographic eye testing device according to an exemplary embodiment. In one embodiment, multiple virtual 3D objects 404A, 406A, 406B can be manipulated in a user’s FOV 402. The virtual 3D objects 404A, 406A, 406B start in different planes 408A, 408B parallel to the plane of the combiner lenses 408C. In one embodiment, the virtual 3D objects 404A can be a set of vertical lines (relative to the user’s eyes 106A, 106B) residing in a plane 408A. Additional virtual 3D objects 406A, 406B can be a set of horizontal lines (relative to the user’s eyes 106A, 106B) residing in a plane 408B. When viewed through the combiner lenses 104A, 104B in the FOV 402, the virtual 3D objects 404A, 406A, 406B can appear as a hash mark (#) where the virtual 3D objects appear to intersect; however, since they are in different planes 408A, 408B, they do not actually intersect.
[0032] In one embodiment, the user can start the test by providing input to the computing system 108. The input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.” In one embodiment, the user states the word “start” to begin the test. Control of the test can take the form of voice commands including “forward” and “backward.” A voice command of “forward” translates the plane 408A, and associated virtual 3D objects 404A, toward the combiner lenses 104A, 104B. A voice command of “backward” translates the plane 408A, and associated virtual 3D objects 404A, away from the combiner lenses 104A, 104B. Utilizing the voice commands and associated translations, a user can manipulate the virtual 3D objects 404A to where the user believes the respective planes 408A, 408B and associated virtual 3D objects 404A, 406A, 406B are coincidental. The user can provide a voice command to the computing system 108, such as stating the word “stop” to complete the manipulation portion of the test. Upon the receipt of the “stop” command, the computing system 108 disallows subsequent input commands, such as “forward” and “backward,” and determines a delta distance between the final location of the planes 408A, 408B. In the event the user manipulated the planes 408A, 408B to coincide, the delta would be zero.
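The “forward”/“backward”/“stop” interaction above could be implemented along the following lines; this is a minimal sketch assuming Unity, with the component name, the wiring of recognized keywords into OnVoiceCommand, and the speed value all being illustrative assumptions.

using UnityEngine;

// Illustrative sketch: translates plane 408A toward or away from the combiner
// lenses on "forward"/"backward", and computes the plane delta on "stop".
// Assumes some platform speech recognizer forwards the keyword to OnVoiceCommand.
public class AstigmatismPlaneController : MonoBehaviour
{
    public Transform movablePlane;     // plane 408A (vertical lines)
    public Transform referencePlane;   // plane 408B (horizontal lines)
    public float speed = 0.02f;        // translation speed, virtual m/s

    private int direction = 0;         // +1 forward, -1 backward, 0 stopped
    private bool testComplete = false;

    public void OnVoiceCommand(string command)
    {
        if (testComplete) return;      // "stop" disallows subsequent commands
        switch (command)
        {
            case "forward":  direction = +1; break;
            case "backward": direction = -1; break;
            case "stop":
                direction = 0;
                testComplete = true;
                // Delta distance between final plane locations; zero if the
                // user made the planes coincide.
                float delta = Vector3.Distance(movablePlane.position,
                                               referencePlane.position);
                Debug.Log($"Plane delta at stop: {delta} virtual meters");
                break;
        }
    }

    void Update()
    {
        // "forward" (+1) moves the plane toward the combiner lenses, i.e.
        // against the camera's forward vector.
        movablePlane.position -=
            Camera.main.transform.forward * (direction * speed * Time.deltaTime);
    }
}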
[0033] FIG. 4B is a block diagram illustrating a user’s perspective of the virtual 3D objects depicted in FIG. 4A. Virtual 3D objects 406A, 406B can be implemented as parallel lines residing in the same plane 408B. Virtual 3D objects 404A, 404B can be implemented as parallel lines residing in the same plane 408A.
[0034] FIG. 5A is a block diagram illustrating a test for astigmatism utilizing the holographic eye testing device according to an exemplary embodiment. FIG. 5B is a block diagram illustrating a user’s perspective of the virtual 3D objects depicted in FIG. 5A. In one embodiment, multiple virtual 3D objects 504A, 504B can be manipulated in a user’s FOV 402. The virtual 3D objects 504A, 504B correspond to concentric opaque rings across the surface of an invisible sphere 510, where the concentric opaque rings traverse the surface of the sphere perpendicular to the plane of the combiner lenses 104A, 104B. The virtual 3D objects 504A, 504B can be oriented along coaxial planes 506, 508. In other embodiments, the virtual 3D objects 504A, 504B can be distal or proximal portions of concentric opaque rings across the surface of an invisible sphere 510.
[0035] In one embodiment, the user can start the test by providing input to the computing system 108. The input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.” The user states the word “start” to begin the test. As the test begins, the invisible sphere 510 and accompanying virtual 3D objects are translated toward the combiner lenses 104A, 104B to give the user the appearance that the virtual 3D objects are coming directly at the user’s eyes 106A, 106B. When the user can see the virtual 3D objects 504A, 504B clearly, the user can provide input to stop the test in the form of a voice command of “stop.” The computing system 108 ceases translation of the invisible sphere 510 and calculates a delta distance from the starting point of the invisible sphere to the point where the invisible sphere resides at the end of the test. A constant point of reference on the invisible sphere 510 can be utilized to determine a consistent location to determine the delta distance.
[0036] In another embodiment, the user can start the test by providing input to the computing system 108. The input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.” The user states the word “start” to begin the test. The virtual 3D objects 504A, 504B begin the test in a plane parallel to or coincidental with a starting plane 506. As the test begins, the invisible sphere 510 and accompanying virtual 3D objects are rotated in a clockwise motion 512 from the user’s perspective. When the invisible sphere 510 and accompanying virtual 3D objects appear to have rotated ninety (90) degrees from the original starting position (parallel or coincidental to the horizontal plane 508), the user can provide input to stop the test in the form of a voice command of “stop.” The computing system 108 ceases rotation of the invisible sphere 510 and calculates a delta in degrees based on the rotation from the starting point of the invisible sphere to the orientation of the invisible sphere at the end of the test. The delta in degrees can be used to determine the axis of the astigmatism. This provides the amount of astigmatism measured in this eye and therefore the predicted amount of cylindrical correction needed to bring clarity.
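A corresponding sketch of the rotation stage follows, again assuming Unity and illustrative names; the accumulated rotation at the moment of the “stop” command is the delta in degrees from which the axis of the astigmatism is derived.

using UnityEngine;

// Illustrative sketch: rotates the ring group on the invisible sphere 510
// until the user stops the test, then reports the accumulated rotation.
public class AxisRotationTest : MonoBehaviour
{
    public Transform ringGroup;        // rings traversing the invisible sphere
    public float degreesPerSecond = 5f;

    private float accumulatedDegrees = 0f;
    private bool running = true;

    void Update()
    {
        if (!running) return;
        float step = degreesPerSecond * Time.deltaTime;
        // Rotate about the view axis; whether this reads as clockwise to the
        // user depends on the scene setup, which is assumed here.
        ringGroup.Rotate(0f, 0f, step);
        accumulatedDegrees += step;
    }

    public void OnStopCommand()
    {
        running = false;
        // Axis values are conventionally reported in the 0-180 degree range.
        Debug.Log($"Axis delta: {accumulatedDegrees % 180f} degrees");
    }
}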
[0037] FIG. 5C is a block diagram illustrating another user’s perspective of the virtual 3D objects depicted in FIG. 5A. In another embodiment, the virtual 3D objects 504A, 504B can be distal portions of concentric opaque rings across the surface of an invisible sphere 510. The virtual 3D objects 514A, 514B can be proximal portions of concentric opaque rings across the surface of an invisible sphere 510. The distal portions of the concentric opaque rings form a group and the proximal portions form a group. Groups are rotated and translated in the FOV 402 in unison. The distal portions can be offset from the proximal portions by a rotation of 45 degrees. The user can start the test by providing input to the computing system 108. The input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.” The user states the word “start” to begin the test. The computing system 108 translates the virtual 3D objects 514A, 514B corresponding to the proximal portions toward the combiner lenses 104A, 104B such that the virtual 3D objects 514A, 514B appear to be coming toward the user’s eyes 106A, 106B. When the user determines that the virtual 3D objects 514A, 514B are clear and distinct from the distal virtual 3D objects 504A, 504B, the user can provide input to stop the test in the form of a voice command of “stop.” The computing system 108 ceases translation of the virtual 3D objects 514A, 514B corresponding to the proximal portions and calculates a delta in distance based on the translation from the starting point to the position at the end of the test.
[0038] FIG. 6A is a block diagram illustrating a test for visual acuity utilizing the holographic eye testing device, according to an exemplary embodiment. In one embodiment, virtual 2D or 3D letters or images 604 can be displayed and/or manipulated in a user’s FOV 402. Utilizing application software, the virtual letters or images 604 are translated and projected on the combiner lenses 104A, 104B to give the appearance that the virtual letters or images 604 are a set distance from the view of the user’s eyes 106A, 106B. The application software can present the virtual letters or images 604 via the combiner lenses 104A, 104B so that the virtual letters or images 604 can appear to be at different distances from the user’s eyes 106A, 106B. In some embodiments, the presentation of the virtual letters or images 604 can correspond to projection of the virtual letters or images 604 at distances of 16 inches (near visual acuity) to 20 feet (distance visual acuity) in front of the user’s eyes 106A, 106B. The visual acuity test is used to determine the smallest letters or images a user can identify from the virtual letters or images 604 at a specified distance away (e.g., 6 meters). The range of distances allows visual acuity to be measured at different intervals of depth.
[0039] In one embodiment, the user can start the test by providing input to the computing system 108. The input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.” In one embodiment, the user states the word “start” to begin the test.
[0040] In some embodiments, the virtual letters or images 604 can be moved forward or backwards. Control of the test can take the form of voice commands including “forward” and “backward.” A voice command of “forward” translates the plane 608, and associated virtual letters or images 604, toward the combiner lenses 104A, 104B. A voice command of “backward” translates the plane 608, and associated virtual letters or images 604, away from the combiner lenses 104A, 104B. Utilizing the voice commands and associated translations, a user can manipulate the virtual letters or images 604 until the user can, or can no longer, identify the virtual letters or images 604. The user can provide a voice command to the computing system 108, such as stating the word “stop” to complete the manipulation portion of the test. Upon the receipt of the “stop” command, the computing system 108 disallows subsequent input commands, such as “forward” and “backward,” and determines a final distance of the virtual letters or images 604.
[0041] FIG. 6B is a block diagram illustrating a user’s perspective of the virtual letters depicted in FIG. 6A. The virtual letters 604 can be implemented as letters residing on the same plane 608. In other embodiments, one or more of the virtual letters 604 can be implemented as letters residing on different planes.
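The disclosure leaves the conversion of the final distance and letter size into an acuity score to the application software; one plausible approach for the visual acuity test described above with reference to FIGs. 6A-6B, sketched below under the standard Snellen assumption that a 20/20 letter subtends five arcminutes, is to compute the visual angle of the smallest identified letter. The names and the 20-foot notation are assumptions, not part of the disclosed system.

using System;

// Illustrative sketch based on standard Snellen geometry (an assumption;
// the disclosure does not specify the scoring formula).
public static class AcuityMath
{
    // Visual angle, in arcminutes, subtended by a letter of the given height
    // at the given virtual distance.
    public static double VisualAngleArcmin(double letterHeightMeters, double distanceMeters)
    {
        double radians = 2.0 * Math.Atan(letterHeightMeters / (2.0 * distanceMeters));
        return radians * (180.0 / Math.PI) * 60.0;
    }

    // Snellen denominator for a 20-foot notation: 20/20 corresponds to a
    // 5-arcminute letter, so the denominator scales with the measured angle.
    public static double SnellenDenominator(double angleArcmin)
    {
        return 20.0 * (angleArcmin / 5.0);
    }
}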
[0042] FIG. 7A is a block diagram illustrating a test for horizontal convergent and horizontal divergent, utilizing the holographic eye testing device according to an exemplary embodiment. In one embodiment, virtual 2D or 3D shapes 704 can be displayed and/or manipulated in a user’s FOV 402. The horizontal convergent and horizontal divergent tests are used to examine eye movement responses to symmetric stimuli.
[0043] In one embodiment, the virtual shapes 704 can be manipulated in a user’s field of view (FOV) 402. The virtual shapes 704 can have a starting point within the user’s FOV 402 equidistant within the same horizontal plane from a mid-point of the FOV 402. Utilizing application software, the virtual shapes 704 are translated and projected on the combiner lenses 104A, 104B to give the appearance that the virtual shapes 704 are a set distance from the view of the user’s eyes 106A, 106B. The application software can present the virtual shapes 704 via the combiner lenses 104A, 104B so that the virtual shapes 704 can appear to be at different distances from the user’s eyes 106A, 106B. In some embodiments, the presentation of the virtual shapes 704 can correspond to projection of the virtual shapes 704 at distances of 16 inches to 20 feet in front of the user’s eyes 106A, 106B. The range of distances allows the horizontal convergent and the horizontal divergent to be measured at different intervals of depth for better confidence in convergence and divergence results.
[0044] In one embodiment, the user can start the test by providing input to the computing system 108. The input can take the form of voice commands, including saying key words indicative of beginning the test, gestures or providing input from a “clicker.” In one embodiment, the user states the word “start” to begin the test.
[0045] FIGs. 7B-7C are block diagrams illustrating a horizontal convergent test, utilizing the holographic eye testing device according to an exemplary embodiment.
[0046] The horizontal convergent test is performed in two stages - a break stage and a recovery stage. Two objects (for example, 3D or 2D shapes, such as 3D cubes) are presented to a user at a given distance. The distance can be changed depending on needs of the user, such as whether the test is being performed for near-sightedness or far-sightedness. A first object 710 of the two objects is projected to a right eye and a second object 712 of the two objects is projected to the left eye.
[0047] For the break stage of the test, the first object 710 and the second object 712 begin overlaid on each other and appear as one object, as shown in FIG. 7B. The first object 710 and the second object 712 slowly move apart. The first object 710 shown to the right eye moves to the left from a center start point, and the second object 712 shown to the left eye moves to the right from the center start point, as shown in FIG. 7C. The user reports when the user notices that, instead of viewing the single object, the user now views two objects (the first object 710 and the second object 712).
[0048] During the break stage, as the first object 710 and the second object 712 move from the center start point of the user’s FOV 402, the user can provide input to the application software or platform software. The input can take the form of voice commands, gestures, or input from a “clicker.” As the first object 710 moves to the left from the center start point and the second object 712 moves to the right from the center start point, the objects will begin to diverge and appear as separate objects. At the point at which the divergence becomes clear to the user, the user can provide input to stop any motion or translation of the first object 710 and the second object 712. The application software evaluates a delta between the midpoint of the user’s FOV 402 and the point at which the first object 710 and the second object 712 were located when the user provided input to stop the motion or translation. The delta can be represented as a deviation relative to the virtual distance of the first object 710 and the second object 712 from the user. A diopter is measured by the deviation of the object at a specific virtual distance (1 prism diopter = 1 virtual cm deviation of the object at a 1 virtual meter distance).
[0049] For the recovery stage of the test, the first object 710 and the second object 712 start out offset from each other and slowly move together. The first object 710 shown to the right eye starts on the left side of the user's view and moves to the center start point, and the second object 712 shown to the left eye starts on the right side of the user's view and moves to the center start point, as shown in FIG. 7D. The user reports when the user no longer views two distinct objects, but instead only a single object.
[0050] During the recovery stage, as the first object 710 and the second object 712 approach the center start point of the user’s FOV 402, the user can provide input to the application software or platform software. The input can take the form of voice commands, gestures, or input from a “clicker.” As the first object 710 and the second object 712 approach each other, they will begin to overlap and converge into a single object. At the point at which the convergence becomes clear to the user, the user can provide input to stop any motion or translation of the first object 710 and the second object 712. The application software evaluates a delta between the midpoint of the user’s FOV 402 and the point at which the first object 710 and the second object 712 were located when the user provided input to stop the motion or translation. The delta can be represented as a deviation relative to the virtual distance of the first object 710 and the second object 712 from the user. A diopter is measured by the deviation of the object at a specific virtual distance (1 prism diopter = 1 virtual cm deviation of the object at a 1 virtual meter distance).
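The break and recovery stages described above amount to a small two-stage state machine; the following sketch assumes Unity, treats half the separation between the two objects as the deviation from the FOV midpoint, and reuses the stated 1 cm / 1 m prism-diopter relationship. The component name and the event wiring are illustrative assumptions.

using UnityEngine;

// Illustrative sketch of the two-stage convergent test. Per-eye rendering of
// the objects (layers, culling masks, etc.) is assumed to be set up elsewhere.
public class ConvergentTest : MonoBehaviour
{
    public Transform rightEyeObject;      // object 710, visible to the right eye only
    public Transform leftEyeObject;       // object 712, visible to the left eye only
    public float separationSpeed = 0.01f; // virtual m/s
    public float viewingDistance = 2.0f;  // virtual meters from the user

    private bool breakStage = true;       // break stage first, then recovery
    private bool moving = true;

    void Update()
    {
        if (!moving) return;
        float step = separationSpeed * Time.deltaTime;
        float sign = breakStage ? 1f : -1f;  // apart during break, together during recovery
        rightEyeObject.position += Vector3.left * (step * sign);
        leftEyeObject.position += Vector3.right * (step * sign);
    }

    // Called when the user reports two objects (break) or one object (recovery).
    public void OnUserInput()
    {
        moving = false;
        // Half the separation approximates each object's deviation from the midpoint.
        float deviationMeters = Vector3.Distance(rightEyeObject.position,
                                                 leftEyeObject.position) / 2f;
        float prismDiopters = (deviationMeters * 100f) / viewingDistance;
        if (breakStage)
        {
            Debug.Log($"Break point: {prismDiopters} prism diopters");
            breakStage = false;           // proceed to the recovery stage
            moving = true;
        }
        else
        {
            Debug.Log($"Recovery point: {prismDiopters} prism diopters");
        }
    }
}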
[0051] FIGs. 7E-7G are block diagrams illustrating a horizontal divergent test, utilizing the holographic eye testing device according to an exemplary embodiment.
[0052] The horizontal divergent test is performed in two stages - a break stage and a recovery stage. Two objects (for example, 3D or 2D shapes, such as 3D cubes) are presented to a user at a given distance. The distance can be changed depending on needs of the user, such as whether the test is being performed for near-sightedness or far-sightedness. A first object of the two objects is projected to a right eye and a second object of the two objects is projected to the left eye.
[0053] The difference between the horizontal divergent test and the horizontal convergent test has to do with which object is shown to which eye, and where the objects move. For the divergent test, the objects are shown to the opposite eye relative to the convergent test.
[0054] For the break stage of the test, the first object 710 and the second object 712 begin overlaid on each other and appear as one object, as shown in FIG. 7E. The first object 710 and the second object 712 slowly move apart. The first object 710 shown to the right eye moves to the right from the center start point, and the second object 712 shown to the left eye moves to the left from the center start point, as shown in FIG. 7F. The user reports when the user notices that, instead of viewing the single object, the user now views two objects (the first object 710 and the second object 712).
[0055] During the break stage, as the first object 710 and the second object 712 move from the center start point of the user’s FOV 402, the user can provide input to the application software or platform software. The input can take the form of voice commands, gestures, or input from a “clicker.” As the first object 710 moves to the right from the center start point and the second object 712 moves to the left from the center start point, the objects will begin to diverge and appear as separate objects. At the point at which the divergence becomes clear to the user, the user can provide input to stop any motion or translation of the first object 710 and the second object 712. The application software evaluates a delta between the midpoint of the user’s FOV 402 and the point at which the first object 710 and the second object 712 were located when the user provided input to stop the motion or translation. The delta can be represented as a deviation relative to the virtual distance of the first object 710 and the second object 712 from the user. A diopter is measured by the deviation of the object at a specific virtual distance (1 prism diopter = 1 virtual cm deviation of the object at a 1 virtual meter distance).
[0056] For the recovery stage of the test, the first object 710 and the second object 712 start out offset from each other and slowly move together, as shown in FIG. 7G. The first object 710 shown to the right eye starts on the right side of the user's view and moves to the center start point, and the second object 712 shown to the left eye starts on the left side of the user's view and moves to the center start point. The user reports when the user no longer views two distinct objects, but instead only a single object.
[0057] During the recovery stage, as the first object 710 and the second object 712 approach the center start point of the user’s FOV 402, the user can provide input to the application software or platform software. The input can take the form of voice commands, gestures, or input from a “clicker.” As the first object 710 and the second object 712 approach each other, the objects will begin to overlap and converge into a single object. At the point at which the convergence becomes clear to the user, the user can provide input to stop any motion or translation of the first object 710 and the second object 712. The application software evaluates a delta between the midpoint of the user’s FOV 402 and the point at which the first object 710 and the second object 712 were located when the user provided input to stop the motion or translation. The delta can be represented as a deviation relative to the virtual distance of the first object 710 and the second object 712 from the user. A diopter is measured by the deviation of the object at a specific virtual distance (1 prism diopter = 1 virtual cm deviation of the object at a 1 virtual meter distance).
[0058] For both the convergent test and the divergent test described above, the first object 710 is only shown to the right eye and the second object 712 is only shown to the left eye. This means at the start of the convergent test and/or the divergent test, a user will see two objects, but if the user closes one eye, only one object will be visible to the user (the one object projected for that eye). This is how a fusion effect is achieved, where two objects suddenly appear to be one, at the start of the divergent test and at the end of the convergent test.
[0059] In the above convergent test and/or the divergent test, and along with most of the other tests, eye tracking data is important because it allows the system to determine where the user is looking. For these tests, along with others, the program may request that the user look at a certain point in order to start the test, or to confirm that they are looking in the right location and see the shapes before the test is started. For the phorias testing, the system may request that a user attempt to gaze at a certain point while the shapes are in motion in order to ensure that the tests return accurate results. If they look away from the point, or directly at the shapes, the test may pause and wait for them to return their gaze to the requested object before continuing.
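Such gaze gating could be realized along the following lines; the sketch assumes Unity and an eye tracker that supplies a gaze origin and direction each frame, and the five-degree tolerance and all names are illustrative assumptions.

using UnityEngine;

// Illustrative sketch: pauses the active test whenever the gaze ray strays
// from the requested fixation point, and resumes when it returns.
public class GazeGate : MonoBehaviour
{
    public Transform fixationPoint;
    public float maxAngleDegrees = 5f;   // tolerance around the fixation point

    public bool TestPaused { get; private set; }

    // gazeOrigin and gazeDirection would be supplied by the platform's eye tracker.
    public void UpdateGaze(Vector3 gazeOrigin, Vector3 gazeDirection)
    {
        Vector3 toFixation = (fixationPoint.position - gazeOrigin).normalized;
        float angle = Vector3.Angle(gazeDirection, toFixation);
        TestPaused = angle > maxAngleDegrees;
    }
}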
[0060] FIGs. 8A-8C are diagrams illustrating refraction testing utilizing the holographic eye testing device according to an exemplary embodiment. As shown in FIG. 8A, in one embodiment, at least one virtual object 804 and/or a series of virtual lines 810 can be displayed and/or manipulated in a user’s FOV 402. Utilizing application software, the object 804 or the series of lines 810 are translated on a vertical plane 808 and projected on the combiner lenses 104A, 104B to give the appearance that the object 804 or the series of lines 810 is a set distance from the view of the user’s eyes 106A, 106B. The application software can present the object 804 or the series of lines 810 via the combiner lenses 104A, 104B so that the object 804 or the series of lines 810 can appear to be at different distances from the user’s eyes 106A, 106B. In some embodiments, the presentation of the object 804 or the series of lines 810 can correspond to projection of virtual letters, images, or lines at distances of 16 inches (near visual acuity) to 20 feet (distance visual acuity) in front of the user’s eyes 106A, 106B.
[0061] In one embodiment, the user can start the test by providing input to the computing system 108. The input can take the form of voice commands, including saying key words indicative of beginning the test, gestures, or providing input from a “clicker.” In one embodiment, the user states the word “start” to begin the test.
[0062] The described systems and methods use depth of field for testing refraction. The refraction testing determines the user’s level of hyperopia (farsightedness), myopia (nearsightedness), and astigmatism, and three associated numerical values for sphere, cylinder, and axis, typically needed for an eyeglass prescription.
[0063] In the first step, the computing system 108 measures the refractive state of the eye by having the user move the object 804 from an initial distance towards or away from the user until the resolution of the object 804 appears clear to the user. The distance at which the object 804 appears clear to the user is labeled as a final distance. The computing system 108 determines an initial measurement between the final position of the virtual object and an optimal virtual position. This initial measurement is at the focal length of the refractive spherical equivalent of the eye and is the sphere power. The sphere power indicates the amount of lens power, measured in diopters (D), prescribed to correct nearsightedness or farsightedness.
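The disclosure does not state the formula relating the final distance to the sphere power; under the standard reciprocal relationship between focal length and diopters, a sketch of the conversion (with an assumed myopic sign convention) is:

// Illustrative sketch (assumption): the distance at which the object first
// appears clear approximates the eye's far point, and the correcting sphere
// power in diopters is the negative reciprocal of that distance in meters.
public static class RefractionMath
{
    public static double SpherePowerDiopters(double clearDistanceMeters)
    {
        // Example: a far point at 0.5 m corresponds to -2.00 D of myopia.
        return -1.0 / clearDistanceMeters;
    }
}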
[0064] In the second step, a series of virtual lines 810 are presented at the final distance in a parallel or coincidental plane with the plane 808. In a first embodiment, the series of lines 810 correspond to concentric opaque rings across the surface of an invisible sphere 812 (shown in FIG. 8C), where the concentric opaque rings traverse the surface of the sphere 812 perpendicular to the plane of the combiner lenses 104A, 104B. In a second embodiment, the series of lines 810 includes a predefined number of lines (for example, three lines) in the axis plane.
[0065] In the first embodiment, as the test begins, the invisible sphere 812 and accompanying series of lines 810 are rotated in a clockwise motion from the user’s perspective. When the invisible sphere 812 and accompanying series of lines 810 appear to have rotated ninety (90) degrees about an axis from the final position (parallel or coincidental to the axis 814 shown in FIG. 8C), the user can provide input to stop the test, for example, in the form of a voice command of “stop.” The computing system 108 ceases rotation of the invisible sphere 812 and calculates a delta in degrees based on the rotation from the starting point of the invisible sphere 812 to the orientation of the invisible sphere 812 at the end of the test. The delta in degrees can be used to determine the axis value.
[0066] The second step provides the axis. This provides the amount of astigmatism measured in this eye and therefore the predicted amount of cylindrical correction needed to bring clarity. The cylinder value refers to the amount of astigmatism in the eyes.
[0067] In the third step, the lines 810 are shifted 90 degrees from the axis 814 and moved further away from the user to display a blurred side of the lines 810. The user moves the lines 810 closer to or farther from (plus cylinder or minus cylinder) the user until the lines appear clear to the user. This is the focal length of the cylinder power and corresponds to the difference in power from the sphere.
[0068] Based on the above, the sphere, cylinder, and axis values are determined. The above described sequence provides the amount of hyperopia (farsightedness), myopia (nearsightedness), and astigmatism measured in this eye and therefore the predicted amount of correction needed to bring clarity. If the user has both myopia or hyperopia and astigmatism, the object 804 would be simultaneously manipulated to determine the myopia or hyperopia while also evaluating the dioptric power of the astigmatism.
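The three measurements above could be assembled into an eyeglass prescription as sketched below; the type, the reciprocal conversions, and the sign conventions are illustrative assumptions rather than the disclosed method.

// Illustrative sketch assembling sphere, cylinder, and axis values from the
// three steps above. All names and sign conventions are assumptions.
public sealed class EyePrescription
{
    public double SphereDiopters;
    public double CylinderDiopters;
    public int AxisDegrees;          // conventionally 0-180
}

public static class PrescriptionBuilder
{
    public static EyePrescription Build(
        double sphereFocalMeters,    // step one: distance at which the object cleared
        double cylinderFocalMeters,  // step three: distance at which the shifted lines cleared
        int axisDegrees)             // step two: delta in degrees from the rotation
    {
        double sphere = -1.0 / sphereFocalMeters;
        // Cylinder is the difference in power between the two principal meridians.
        double cylinder = (-1.0 / cylinderFocalMeters) - sphere;
        return new EyePrescription
        {
            SphereDiopters = sphere,
            CylinderDiopters = cylinder,
            AxisDegrees = axisDegrees
        };
    }
}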
[0069] In some embodiments, when measuring hyperopia, plus (convex) lenses are included in the HMD 102 underneath the lenses 104A, 104B and in front of the user’s eyes 106A, 106B. The reason for this is that the hyperopic eye focuses beyond the fixation object. In order to measure the hyperopia, the hyperopic eye is mathematically made to become myopic. An algorithm is then used to subtract the values to determine the dioptric value of the hyperopia.
[0070] In some embodiments, a fourth step is included to refine the exact sphere power and then to binocularly balance the prescription for the two eyes. In the fourth step, the object 804 is shifted further away from the user to create +0.50 diopters in each of the user’s eyes 106A, 106B. The object 804 initially appears as one object to the user and then is disassociated or separated until the user can identify two separate objects. The user reports which object is clearer. The clearer object is then moved away from the user until both objects appear equally blurred. The objects are then merged and the user moves them closer until the resolution appears best to the user. The binocular balance has then been completed.
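The distance shift used to create the +0.50 diopters is not given a formula in the disclosure; under standard vergence optics it can be derived as sketched below, where moving the object from distance d to d' reduces its vergence demand by 1/d − 1/d' diopters. The names are illustrative assumptions.

// Illustrative sketch (assumption based on standard vergence optics): solves
// 1/d - 1/d' = 0.5 for the new distance d' that fogs each eye by +0.50 D.
public static class BinocularBalance
{
    public static double FoggedDistanceMeters(double currentDistanceMeters)
    {
        double currentVergenceDiopters = 1.0 / currentDistanceMeters;
        // Only meaningful while the current vergence demand exceeds 0.5 D.
        return 1.0 / (currentVergenceDiopters - 0.5);
    }
}

For example, an object at 1 virtual meter (a 1.00 D demand) would move to 2 virtual meters (a 0.50 D demand), a +0.50 D change.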
[0071] The object 804 and lines 810 can be moved forward or backwards or rotated. Control of the test can take the form of voice commands including “forward,” “backward,” and “rotate.” A voice command of “forward” translates the plane 808, and associated object 804 or lines 810, toward the combiner lenses 104A, 104B. A voice command of “backward” translates the plane 808, and associated object 804 or lines 810, away from the combiner lenses 104A, 104B. A voice command of “rotate” moves the plane 808, and associated object 804 or lines 810, in a rotational manner. Utilizing the voice commands and associated translations, a user can manipulate the object 804 until the user can, or can no longer, identify the object 804 or lines 810. The user can provide a voice command to the computing system 108, such as stating the word “stop” to complete the manipulation portion of the test.
[0072] FIG. 8B is a block diagram illustrating a user’s perspective of the virtual object 804 described in FIG. 8A. The virtual object 804 can be implemented as one or more virtual 2D or 3D letters or images, such as shapes, pictures, etc., residing on the plane 808.
[0073] FIG. 8C is a block diagram illustrating a user’s perspective of the series of lines 810 described in FIG. 8A. The series of lines 810 can be implemented as parallel lines (running vertical and/or horizontal) residing on the plane 808. The series of lines 810 correspond to concentric opaque rings across the surface of the invisible sphere 812, where the concentric opaque rings traverse the surface of the sphere 812 perpendicular to the plane of the combiner lenses 104A, 104B.
[0074] FIGs. 9A-9B are block diagrams illustrating convergence testing utilizing the holographic eye testing device according to an exemplary embodiment. The head mounted display (HMD) 102 incorporates eye tracking. Eye tracking enables the HMD 102 to track where the user’s eyes 106A, 106B are looking in real time. As shown in FIG. 9A, in one embodiment, at least one virtual object 902 can be displayed and/or manipulated in a user’s FOV 402. Utilizing application software, the object 902 is translated on a vertical plane 908 and projected on the combiner lenses 104A, 104B to give the appearance that the object 902 is a set distance from the view of the user’s eyes 106A, 106B. The application software can present the object 902 via the combiner lenses 104A, 104B so that the object 902 can appear to be at different distances from the user’s eyes 106A, 106B. In some embodiments, the presentation of the object 902 can correspond to projection of virtual letters, images, or lines at distances of 16 inches (near visual acuity) to 20 feet (distance visual acuity) in front of the user’s eyes 106A, 106B.
[0075] In one embodiment, the user can start the test by providing input to the computing system 108. The input can take the form of voice commands, including saying key words indicative of beginning the test, gestures, or providing input from a “clicker.” In one embodiment, the user states the word “start” to begin the test.
[0076] During the convergence testing, the object 902 is presented to each eye 106A, 106B and is moved across the user’s FOV 402. The user follows the object 902 from left to right and right to left. The object 902 is then moved in a circle clockwise and counter-clockwise. The HMD 102, via eye tracking, monitors fixation loss and quality of movement of the user’s eyes 106A, 106B. Points of fixation loss and the quality of movement are recorded.
[0077] Convergence near point will be assessed by rendering a first virtual object 910 displayed to a right eye and a second virtual object 912 displayed to a left eye within the holographic display device, wherein the rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to the user. The first virtual object 910 and the second virtual object 912 appear at a distance of 40 centimeters in front of the user’s eyes 106A, 106B. The user moves the first virtual object 910 and the second virtual object 912 presented to both eyes 106A, 106B toward the user’s nose. The HMD 102 monitors eye alignment and records the distance (in centimeters or inches) when the eyes 106A, 106B lose alignment on the first virtual object 910 and the second virtual object 912 and the first virtual object 910 and the second virtual object 912 appear as two separate objects. This is recorded as the break point. The first virtual object 910 and the second virtual object 912 are then moved away from the user and the HMD 102 records the distance when the eyes 106A, 106B realign and the user fuses the first virtual object 910 and the second virtual object 912 back to appear as one virtual object to the user. This is recorded as the realignment point.
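Recording of the break and realignment points could take the following form; the sketch assumes Unity and an eye tracker that reports, each frame, whether the two eyes remain aligned on the target, and all names are illustrative assumptions.

using UnityEngine;

// Illustrative sketch of near-point-of-convergence recording. The isAligned
// verdict is assumed to come from the platform's eye tracker.
public class ConvergenceNearPoint : MonoBehaviour
{
    public Transform target;           // fused objects 910/912
    public Transform head;             // user's head position

    public float BreakPointCm { get; private set; }
    public float RealignmentPointCm { get; private set; }

    private bool broken = false;

    // Called once per eye-tracking sample.
    public void OnEyeTrackingSample(bool isAligned)
    {
        float distanceCm = Vector3.Distance(target.position, head.position) * 100f;
        if (!broken && !isAligned)
        {
            BreakPointCm = distanceCm;        // eyes lost alignment: break point
            broken = true;
        }
        else if (broken && isAligned)
        {
            RealignmentPointCm = distanceCm;  // eyes re-fused: realignment point
        }
    }
}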
[0078] FIG. 9B is a block diagram illustrating a user’s perspective of the virtual object 902 and/or the first virtual object 910 and the second virtual object 912 as viewed aligned, as described in FIG. 9A. The virtual object 902 can be implemented as one or more virtual 2D or 3D letters or images, such as shapes, pictures, etc., residing on the plane 908.
[0079] FIGs. 10A-10D illustrate methods for diagnosis and/or prescription of remedies for visual impairment in accordance with exemplary embodiments.
[0080] FIG. 10A illustrates a method for diagnosis and prescription of remedies for visual impairment in accordance with an exemplary embodiment.
[0081] At step 1002, the holographic display device renders one or more three dimensional objects within the holographic display device. The rendering corresponds to a virtual level of depth viewable by a user.
[0082] At step 1004, the holographic display device updates the rendering of the one or more three dimensional objects within the holographic display device. The updated rendering includes a virtual movement of the one or more three dimensional objects within the virtual level of depth. The virtual movement includes moving the one or more three dimensional objects laterally in the field of view of the user. Alternatively, the virtual movement includes moving the one or more three dimensional objects vertically in the field of view of the user. Additionally, the virtual movement includes moving the one or more three dimensional objects from a distal position to a proximal position within the field of view of the user. The virtual level of depth corresponds to a simulated distance away from the user. The simulated distance can range from sixteen (16) inches to twenty (20) feet from the user.
[0083] At step 1006, the holographic display device receives input from a user. The input can include an indication of alignment of the one or more three dimensional objects based on the virtual movement. The indication of alignment can include a relative virtual position between the one or more three dimensional objects. The input from the user can include hand gestures and voice commands.
[0084] At step 1008, the holographic display device determines a delta between the relative virtual position of the one or more three dimensional objects and an optimal virtual position.
[0085] At step 1010, the holographic display device generates a prescriptive remedy based on the delta between the relative virtual position of the one or more three dimensional objects and the optimal virtual position.
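By way of illustration only, one conventional way to turn the positional delta of step 1008 into a prism value at step 1010 is the prism-diopter definition (1 prism diopter equals 1 cm of displacement at 1 m). The patent does not recite this conversion; the sketch below, with hypothetical names, is shown only to make the step concrete:

```python
# Assumed conversion: lateral misalignment (cm) observed at a known
# simulated distance (m) expressed in prism diopters.

def prism_diopters(delta_cm: float, simulated_distance_m: float) -> float:
    return delta_cm / simulated_distance_m

print(prism_diopters(0.5, 2.0))  # 0.5 cm misalignment at 2 m -> 0.25 prism diopters
```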
[0086] FIG. 10B illustrates a method for diagnosis and prescription of remedies for visual impairment in accordance with an exemplary embodiment.
[0087] At step 1012, a diagnostic module configured to execute on a computing device communicatively coupled to a head mounted holographic display device renders a first virtual object displayed to the right eye and a second virtual object displayed to the left eye within the head mounted holographic display device. The rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to a user.
[0088] At step 1014, the diagnostic module updates the rendering of the first virtual object and the second virtual object within the holographic display device. The update includes a virtual movement of the first virtual object in a first direction and the second virtual object in a second direction opposite the first direction.
[0089] At step 1016, the diagnostic module receives input from a user. The input includes an indication of separation of the first virtual object and the second virtual object based on the virtual movement. The indication of separation comprises a relative virtual position between the first virtual object and the second virtual object where the user views the first virtual object and the second virtual object as separate objects.
[0090] At step 1018, the diagnostic module determines a first delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
[0091] At step 1020, the diagnostic module updates the rendering of the first virtual object and the second virtual object within the holographic display device. The update includes a virtual movement of the first virtual object in the second direction and the second virtual object in the first direction.
[0092] At step 1022, the diagnostic module receives a second input from the user. The input includes an indication of alignment of the first virtual object and the second virtual object based on the virtual movement. The indication of alignment comprises a relative virtual position between the first virtual object and the second virtual object where the user views the first virtual object and the second virtual object as aligned to appear as one virtual object.
[0093] At step 1024, the diagnostic module determines a second delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
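As a hypothetical illustration of pairing this method's two outputs, the first delta (where the diverging objects are first seen as two) and the second delta (where the re-converging objects fuse again) could be combined into a single deviation estimate. Averaging them, as sketched below, is an assumption made here for illustration, not a step recited in the method:

```python
# Assumed combination of the two measured deltas; both are measured
# against the optimal virtual position, in the same units.

def deviation_estimate(first_delta: float, second_delta: float) -> float:
    return (first_delta + second_delta) / 2.0

print(deviation_estimate(1.2, 0.8))  # 1.0
```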
[0094] FIG. 10C illustrates a method for diagnosis and prescription of remedies for visual impairment in accordance with an exemplary embodiment.
[0095] At step 1030, the diagnostic module renders at least one virtual object within the holographic display device at an initial position. The rendering corresponds to a virtual level of depth corresponding to an initial simulated distance away from a user.
[0096] At step 1032, the diagnostic module receives at least one first input from the user. The at least one first input comprises an indication to move the at least one virtual object virtually towards or away from the user.

[0097] At step 1034, the diagnostic module updates the rendering of the at least one virtual object within the holographic display device. The update includes a virtual movement of the at least one virtual object in a direction towards or away from the user.
[0098] At step 1036, the diagnostic module receives a second input from the user. The second input includes an indication that the at least one virtual object appears clear to the user at a final position. The rendering corresponds to a virtual level of depth corresponding to a final simulated distance away from the user.
[0099] At step 1038, the diagnostic module determines a measurement between the final position of the virtual object and an optimal virtual position.
[00100] At step 1040, the diagnostic module renders at least one line within the holographic display device at the final position.
[00101] At step 1042, the diagnostic module rotates the at least one line about an axis from the final position.
[00102] At step 1044, the diagnostic module receives a third input from the user. The third input comprises an indication that the at least one line appears to the user to have rotated ninety degrees about the axis from the final position.
[00103] At step 1046, the diagnostic module determines a delta in degrees based on the rotation of the at least one line from the final point to an orientation of the at least one line at a time of receiving the third input.
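A sketch of the standard optics that the surrounding steps (and claims 10-11) gesture at: the reciprocal of the farthest clear distance, in meters, approximates the needed sphere power, and the line-rotation delta maps onto a cylinder axis. The minus-sign convention and the 0-180 degree axis fold are assumptions; the patent only recites "determining" these values:

```python
# Assumed optometric conventions, shown for illustration only.

def sphere_power_d(clear_distance_m: float) -> float:
    # A myopic far point at d meters suggests roughly -1/d diopters.
    return -1.0 / clear_distance_m

def cylinder_axis_deg(rotation_delta_deg: float) -> float:
    # Fold any measured rotation into the conventional 0-180 degree range.
    return rotation_delta_deg % 180.0

print(sphere_power_d(0.5))       # -2.0
print(cylinder_axis_deg(265.0))  # 85.0
```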
[00104] FIG. 10D illustrates a method for diagnosis and prescription of remedies for visual impairment in accordance with an exemplary embodiment.
[00105] At step 1050, the diagnostic module renders at least one virtual object within the holographic display device. The rendering corresponds to a virtual level of depth viewable by the user.
[00106] At step 1052, the diagnostic module updates the rendering of the at least one virtual object within the holographic display device. The update includes a virtual movement of the at least one virtual object within the virtual level of depth.

[00107] At step 1054, the diagnostic module monitors, via eye tracking, at least one of fixation loss or quality of movement of the eyes of the user.
[00108] At step 1056, the diagnostic module renders a first virtual object displayed to a right eye and a second virtual object displayed to a left eye within the holographic display device. The rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to a user.
[00109] At step 1058, the diagnostic module updates the rendering of the first virtual object and the second virtual object within the holographic display device. The update includes a virtual movement of the first virtual object and the second virtual object towards the user.
[00110] At step 1060, the diagnostic module monitors, via the eye tracking, eye alignment as the eyes of the user track the first virtual object and the second virtual object moving towards the user.
[00111] At step 1062, the diagnostic module identifies a distance when the user views the first virtual object and the second virtual object as separate objects.
[00112] At step 1064, the diagnostic module determines a delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
[00113] At step 1066, the diagnostic module generates a prescriptive remedy based on the delta between the relative virtual position of the first virtual object and the second virtual object and the optimal virtual position.
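Purely as illustration, the distance identified at step 1062 could be thresholded to flag a receded near point of convergence; a cutoff near 10 cm is a common clinical rule of thumb, but neither the cutoff nor the function name below is taken from the patent:

```python
# Assumed clinical cutoff, shown for illustration only.

def receded_near_point(break_distance_cm: float, cutoff_cm: float = 10.0) -> bool:
    """True when the convergence break point lies farther than the cutoff."""
    return break_distance_cm > cutoff_cm

print(receded_near_point(12.0))  # True
```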
[00114] FIG. 11 depicts a block diagram of an exemplary computing device in accordance with an exemplary embodiment. Embodiments of the computing device 1100 can implement embodiments of the system including the holographic eye testing device. For example, the computing device can be embodied as a portion of the holographic eye testing device and supporting computing devices. The computing device 1100 includes one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing exemplary embodiments. The non-transitory computer-readable media may include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives, one or more solid state disks), and the like. For example, memory 1106 included in the computing system 108 may store computer-readable and computer-executable instructions or software (e.g., applications 1130 such as a rendering application) for implementing exemplary operations of the computing device 1100. The computing system 108 also includes configurable and/or programmable processor 1102 and associated core(s) 1104, and optionally, one or more additional configurable and/or programmable processor(s) 1102’ and associated core(s) 1104’ (for example, in the case of computer systems having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 1106 and other programs for implementing exemplary embodiments of the present disclosure. Processor 1102 and processor(s) 1102’ may each be a single core processor or multiple core (1104 and 1104’) processor. Either or both of processor 1102 and processor(s) 1102’ may be configured to execute one or more of the instructions described in connection with computing system 108.
[00115] Virtualization may be employed in the computing system 108 so that infrastructure and resources in the computing system 108 may be shared dynamically. A virtual machine 1112 may be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines may also be used with one processor.
[00116] Memory 1106 may include a computer system memory or random access memory, such as DRAM, SRAM, EDO RAM, and the like. Memory 1106 may include other types of memory as well, or combinations thereof. The computing system 108 can receive data from input/output devices. A user may interact with the computing system 108 through a visual display device 1114, such as combiner lenses 1116, which may display one or more virtual graphical user interfaces, as well as through a microphone 1120 and one or more cameras 1118.
[00117] The computing system 108 may also include one or more storage devices 1126, such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software that implement exemplary embodiments of the present disclosure. For example, exemplary storage device 1126 can include storing information associated with platform software and the application software.
[00118] The computing system 108 can include a network interface 1108 configured to interface via one or more network devices 1124 with one or more networks, for example, Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. In exemplary embodiments, the computing system can include one or more antennas 1122 to facilitate wireless communication (e.g., via the network interface) between the computing system 108 and a network and/or between the computing system 108 and other computing devices. The network interface 1108 may include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing system 108 to any type of network capable of communication and performing the operations described herein.
[00119] The computing system 108 may run any operating system 1110, such as any of the versions of the Microsoft® Windows® operating systems, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of running on the computing system 108 and performing the operations described herein. In exemplary embodiments, the operating system 1110 may be run in native mode or emulated mode. In an exemplary embodiment, the operating system 1110 may be run on one or more cloud machine instances.
[00120] In describing exemplary embodiments, specific terminology is used for the sake of clarity. For purposes of description, each specific term is intended to at least include all technical and functional equivalents that operate in a similar manner to accomplish a similar purpose. Additionally, in some instances where a particular exemplary embodiment includes multiple system elements, device components, or method steps, those elements, components, or steps can be replaced with a single element, component, or step. Likewise, a single element, component, or step can be replaced with multiple elements, components, or steps that serve the same purpose. Moreover, while exemplary embodiments have been shown and described with references to particular embodiments thereof, those of ordinary skill in the art will understand that various substitutions and alterations in form and detail can be made therein without departing from the scope of the present disclosure. Further, still, other aspects, functions, and advantages are also within the scope of the present disclosure.
[00121] Exemplary flowcharts are provided herein for illustrative purposes and are non-limiting examples of methods. One of ordinary skill in the art will recognize that exemplary methods can include more or fewer steps than those illustrated in the exemplary flowcharts and that the steps in the exemplary flowcharts can be performed in a different order than the order shown in the illustrative flowcharts.

Claims

We claim:
1. A method for diagnosis and prescription of remedies for visual impairment, comprising: rendering, via a diagnostic module configured to execute on a computing device communicatively coupled to a head mounted holographic display device, a first virtual object displayed to the right eye and a second virtual object displayed to the left eye within the head mounted holographic display device, wherein the rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to a user; updating, via the diagnostic module, the rendering of the first virtual object and the second virtual object within the holographic display device, wherein the updates comprise a virtual movement of the first virtual object in a first direction and the second virtual object in a second direction opposite the first direction; receiving, via the diagnostic module, input from the user, wherein the input comprises an indication of separation of the first virtual object and the second virtual object based on the virtual movement, wherein the indication of separation comprises a relative virtual position between the first virtual object and the second virtual object where the user views the first virtual object and the second virtual object as separate objects; and determining, via the diagnostic module, a first delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
2. The method of claim 1, further comprising: updating, via the diagnostic module, the rendering of the first virtual object and the second virtual object within the holographic display device, wherein the updates comprise a virtual movement of the first virtual object in the second direction and the second virtual object in the first direction; receiving, via the diagnostic module, a second input from the user, wherein the input comprises an indication of alignment of the first virtual object and the second virtual object based on the virtual movement, wherein the indication of alignment comprises a relative virtual position between the first virtual object and the second virtual object where the user views the first virtual object and the second virtual object as aligned to appear as one virtual object; and determining, via the diagnostic module, a second delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
3. The method of claim 2, further comprising generating, via the diagnostic module, a prescriptive remedy based on at least one of the first delta or the second delta.
4. The method of claim 1, wherein the input from the user comprises hand gestures and voice commands.
5. The method of claim 1, wherein the head mounted display device comprises a pair of transparent combiner lenses calibrated to the interpupillary distance.
6. The method of claim 1, wherein the first direction and the second direction are horizontal directions.
7. An apparatus for diagnosis and prescription of remedies for visual impairment, comprising: a head mounted holographic display device; a computing device communicatively coupled to the head mounted holographic display device; a diagnostic module configured to execute on the computing device, the diagnostic module when executed: renders a first virtual object displayed to the right eye and a second virtual object displayed to the left eye within the holographic display device, wherein the rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to a user; updates the rendering of the first virtual object and the second virtual object within the holographic display device, wherein the updates comprise a virtual movement of the first virtual object in a first direction and the second virtual object in a second direction opposite the first direction; receives input from the user, wherein the input comprises an indication of separation of the first virtual object and the second virtual object based on the virtual movement, wherein the indication of separation comprises a relative virtual position between the first virtual object and the second virtual object where the user views the first virtual object and the second virtual object as separate objects; and determines a first delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position.
8. A method for diagnosis and prescription of remedies for visual impairment, comprising: rendering, via a diagnostic module configured to execute on a computing device communicatively coupled to a head mounted holographic display device, at least one virtual object within the holographic display device at an initial position, wherein the rendering corresponds to a virtual level of depth corresponding to an initial simulated distance away from a user; receiving, via the diagnostic module, at least one first input from the user, wherein the at least one first input comprises an indication to move the at least one virtual object virtually towards or away from the user; updating, via the diagnostic module, the rendering of the at least one virtual object within the holographic display device, wherein the updates comprise a virtual movement of the at least one virtual object in a direction towards or away from the user; receiving, via the diagnostic module, a second input from the user, wherein the second input comprises an indication that the at least one virtual object appears clear to the user at a final position, wherein the rendering corresponds to a virtual level of depth corresponding to a final simulated distance away from the user; and determining, via the diagnostic module, a measurement between the final position of the virtual object and an optimal virtual position.
9. The method of claim 8, further comprising: rendering, via the diagnostic module, at least one line within the holographic display device at the final position; rotating, via the diagnostic module, the at least one line about an axis from the final position; receiving, via the diagnostic module, a third input from the user, wherein the third input comprises an indication that the at least one line appears to the user to have rotated ninety degrees about the axis from the final position; and determining, via the diagnostic module, a delta in degrees based on the rotation of the at least one line from the final point to an orientation of the at least one line at a time of receiving the third input.
10. The method of claim 8, further comprising determining, via the diagnostic module, a sphere power based on the measurement.
11. The method of claim 9, further comprising determining, via the diagnostic module, an axis and a cylinder power based on the delta.
12. The method of claim 9, wherein the at least one line corresponds to at least one concentric opaque ring across a surface of an invisible sphere.
13. The method of claim 9, further comprising: shifting, via the diagnostic module, the at least one line 90 degrees from the axis to show an opposite side of the at least one line; receiving, via the diagnostic module, at least one fourth input from the user, wherein the at least one fourth input comprises an indication to move the at least one line virtually towards or away from the user; updating, via the diagnostic module, the rendering of the at least one line within the holographic display device, wherein the updates comprise a virtual movement of the at least one line in a direction towards or away from the user; and receiving, via the diagnostic module, a fifth input from the user, wherein the fifth input comprises an indication that the at least one line appears clear to the user.
14. The method of claim 8, wherein the head mounted display device comprises a pair of transparent combiner lenses calibrated to the interpupillary distance.
15. An apparatus for diagnosis and prescription of remedies for visual impairment, comprising: a head mounted holographic display device; a computing device communicatively coupled to the head mounted holographic display device; a diagnostic module configured to execute on the computing device, the diagnostic module when executed: renders at least one virtual object within the holographic display device at an initial position, wherein the rendering corresponds to a virtual level of depth corresponding to an initial simulated distance away from a user; receives at least one first input from the user, wherein the at least one first input comprises an indication to move the at least one virtual object virtually towards or away from the user; updates the rendering of the at least one virtual object within the holographic display device, wherein the updates comprise a virtual movement of the at least one virtual object in a direction towards or away from the user; receives a second input from the user, wherein the second input comprises an indication that the at least one virtual object appears clear to the user at a final position, wherein the rendering corresponds to a virtual level of depth corresponding to a final simulated distance away from the user; and determines a measurement between the final position of the virtual object and an optimal virtual position.
16. A method for diagnosis and prescription of remedies for visual impairment, comprising: rendering, via a diagnostic module configured to execute on a computing device communicatively coupled to a head mounted holographic display device enabled for eye tracking of eyes of a user, at least one virtual object within the holographic display device, wherein the rendering corresponds to a virtual level of depth viewable by the user; updating, via the diagnostic module, the rendering of the at least one virtual object within the holographic display device, wherein the updates comprise a virtual movement of the at least one virtual object within the virtual level of depth; and monitoring, via the diagnostic module, via the eye tracking, at least one of fixation loss or quality of movement of the eyes of the user.
17. The method of claim 16, further comprising: rendering, via the diagnostic module, a first virtual object displayed to a right eye and a second virtual object displayed to a left eye within the holographic display device, wherein the rendering corresponds to the first virtual object and the second virtual object aligned to appear as one virtual object to a user; updating, via the diagnostic module, the rendering of the first virtual object and the second virtual object within the holographic display device, wherein the updates comprise a virtual movement of the first virtual object and the second virtual object towards the user; monitoring, via the diagnostic module, via the eye tracking, eye alignment as the eyes of the user track the first virtual object and the second virtual object moving towards the user; identifying, via the diagnostic module, a distance when the user views the first virtual object and the second virtual object as separate objects; determining, via the diagnostic module, a delta between the relative virtual position of the first virtual object and the second virtual object and an optimal virtual position; and generating, via the diagnostic module, a prescriptive remedy based on the delta between the relative virtual position of the first virtual object and the second virtual object and the optimal virtual position.
18. The method of claim 16, wherein the rendering of the at least one virtual object within the holographic display device comprises virtually moving the at least one virtual object from left to right, right to left, in a circle clockwise, and in a circle counter-clockwise.
19. The method of claim 16, wherein the head mounted display device comprises a pair of transparent combiner lenses calibrated to the interpupillary distance.
20. An apparatus for diagnosis and prescription of remedies for visual impairment, comprising: a head mounted holographic display device enabled for eye tracking of eyes of a user; a computing device communicatively coupled to the head mounted holographic display device; a diagnostic module configured to execute on the computing device, the diagnostic module when executed: renders at least one virtual object within the holographic display device, wherein the rendering corresponds to a virtual level of depth viewable by the user; updates the rendering of the at least one virtual object within the holographic display device, wherein the updates comprise a virtual movement of the at least one virtual object within the virtual level of depth; and monitors, via the eye tracking, at least one of fixation loss or quality of movement of the eyes of the user.