WO2018200717A1 - Wearable image control and manipulation system with correction for vision defects and augmentation of vision and sensing - Google Patents

Wearable image control and manipulation system with correction for vision defects and augmentation of vision and sensing

Info

Publication number
WO2018200717A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
eye
display
camera
vision
Prior art date
Application number
PCT/US2018/029428
Other languages
English (en)
Inventor
Michael Hayes Freeman
Mitchael C. Freeman
Jordan BOSS
Richard C. Freeman
Chad Boss
Original Assignee
Raytrx, Llc
Priority date
Filing date
Publication date
Application filed by Raytrx, Llc filed Critical Raytrx, Llc
Priority to AU2018258242A priority Critical patent/AU2018258242A1/en
Priority to CA3060309A priority patent/CA3060309A1/fr
Priority to EP18790963.5A priority patent/EP3615986A4/fr
Priority to CN201880041696.9A priority patent/CN110770636B/zh
Priority claimed from US15/962,661 external-priority patent/US11956414B2/en
Publication of WO2018200717A1 publication Critical patent/WO2018200717A1/fr
Priority to AU2023285715A priority patent/AU2023285715A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C2202/00Generic optical aspects applicable to one or more of the subgroups of G02C7/00
    • G02C2202/10Optical elements and systems for visual disorders other than refractive errors, low vision

Definitions

  • The present invention relates generally to improvements in Augmented Reality (AR) glasses, including the use of such glasses for medical purposes in the correction of vision defects, and more particularly to a system and methods for compensating for visual defects: detecting the vision defects, capturing an image, modifying the image to correct the visual defect, and displaying the modified image for that correction, as well as providing the correction that prescription glasses would otherwise provide.
  • AR Augmented Reality
  • The present invention also incorporates novel hardware and software applications related to the invention, including the application of smart contact lenses.
  • AMD Age-related macular degeneration
  • macular hole and other FOV (Field of Vision) related blindness or vision defect conditions, such as central macular scar, histoplasmosis, end-stage glaucoma, Stargardt's disease, central serous retinopathy, myopic macular degeneration, diabetic macular edema, cystoid macular edema, macular holes, macular atrophy, anterior ischemic optic neuropathy, and retinitis pigmentosa, are often irreversible.
  • Because the peripheral receptors in the retina are usually still functioning, it is the purpose of this invention, in one embodiment for the medical application of AR glasses, to stretch, skew, and manipulate the image being projected on the eye so that it avoids the macula and is directed to the retina's peripheral receptors. In this way, the entire image is projected onto the functioning retinal receptors, and any involvement of the macula is avoided.
  • the method taught in this invention is to create a matrix distortion of the entire image and project it onto the periphery of the eye while avoiding the macula.
  • the patient, by using "see through" glasses or lenses that provide a wide field of vision upon which an augmented image can also be displayed, can have delivered to the eyes both real-world visual information and augmented visual information that corrects for the vision defect suffered.
  • This is an improvement to the existing art and a new "Mixed Reality" wearable invention.
  • the visually impaired patient can be presented with both real-world visual information and augmented information at the same time, such that together the two separate inputs provide "mixed reality" vision.
  • This can be accomplished, as taught herein, with virtually no latency, such that the augmented imagery enhances the user/patient's remaining real-world eyesight.
  • the patient can still see some real-world visual information with his or her peripheral eyesight, so that the patient can move, walk, and navigate his or her immediate surroundings with as much surety and safety as the patient would otherwise have, while at the same time relying on the augmented reality of a pixel/image-shifted video feed.
  • the present invention is aimed at one or more of the problems identified above.
  • the invention in general, in a first aspect, relates to a vision corrective wearable device which, in its preferred embodiment, uses Mixed Reality type of glasses/lenses together with new software and hardware to achieve the desired effect.
  • This patent teaches to manipulate an image or video to avoid unsighted areas, such as the damaged areas that result with macular degeneration or macular hole, and project the image on the glasses lenses where it can be viewed by the next nearest sighted areas of the eye. It also teaches to merge such augmented video back into real world images which can be viewed alongside the real-world images received without video by, typically, the periphery of the naked eye. It also teaches to correct for nearsightedness and farsightedness at the same time as the correction of the central vision.
  • the entire retina is the light and color sensitive tissue that lines the inside of the eye.
  • the retina functions in a manner similar to film in a camera; hence this invention supplements the retina's camera effect by providing an augmented, Mixed Reality duality of vision to the patient using both external camera(s) and display, as well as the eye's natural vision. Because it is important to make the augmented video or image hit as many cones as possible, the higher the resolution, the better.
  • the preferred embodiment of the invention would cover at least 50 degrees of the Field of Vision (FOV) or greater, although the invention will also work with a lesser FOV.
  • FOV Field of Vision
  • the image to be displayed covers the entire 120 degrees of normal eye vision, while in another aspect of the invention the image is displayed over a 90 degree, 80 degree, or 50 degree FOV.
  • the image to be displayed is intended to be displayed on all or a portion of the lenses of Mixed Reality glasses, goggles, or other display techniques, where both video and normal vision are extant.
  • Part of the duality of the vision is the real-world vision that the patient sees where there is no augmented modified video, typically on the periphery of the lenses of the glasses and beyond that, simply the user's own unrestricted vision.
  • the other portion of the duality of vision is the augmented, modified video or picture, which is typically, in the case of macular degeneration, focused on the portion of the eye closest to the central vision, concentrating manipulated pixels and images onto areas that are still sighted and avoiding areas that are unsighted. Together, these make up a Mixed Reality augmented reality vision which helps correct for the defect of eye diseases like macular degeneration (all of which eye diseases are sometimes referred to herein as "defects" or "deficits").
  • the optical elements in the eye focus an image onto the retina of the eye, using the lens, initiating a series of chemical and electrical events within the retina. Nerve fibers within the retina receive these signals and send electrical signals to the brain, which then interprets these signals as visual images.
  • all of us "see" an image upside down, since the eye bends the image through the lens, and the brain has the unique ability to "upright" the image through a brain-implemented natural simulation.
  • This invention uses this natural "simulation" created by the brain to "see" a whole picture or video, without any part missing, when in actuality there is a portion of the lens which does not display an image.
  • this invention also employs the "brain-stitching" theory behind the natural blind spot, scotoma, or punctum caecum, which naturally exist in every human's eye.
  • This naturally occurring "hole" is the place in the visual field that corresponds to the lack of light-detecting photoreceptor cells on the optic disc of the retina, where the optic nerve passes through the retina. Because there are no cells to detect light on the optic disc, this part of the eye's Field of Vision (FOV) is naturally unsighted and invisible to the human eye, as no visual information can be captured there.
  • FOV Field of Vision
  • This invention teaches that by removing and displacing pixels or images of pictures or video from a non-sighted portion of a defective macula to the area just surrounding the damaged portion of the macula, the brain will interpret the image as a whole, and dismiss the actual hole that is cut into the picture or video.
  • Computing software and chips create a modified camera-generated display image which corrects for the missing macular portion of the retina by not projecting any video or picture on the unsighted areas, and instead displaying the entire image or video on all remaining sighted areas.
  • This invention has discovered a new concept for the correction of defects like macular degeneration which supposes and enables the brain-stitching/natural brain simulation theory. It has been proven on one notable patient, Brig. Gen. Richard C. "Dick" Freeman (U.S.A.F. Ret.), who is one of the inventors here and one of the inventors who first invented streaming mobile video. General Freeman had macular degeneration, and upon wearing a device using the invention and its augmentations, could instantly "see" a nose on a face which, due to the macular degeneration, had not been visible for years. The brain-stitching was, in his case, instant, and did not need to be "learned" by the brain.
  • the First Phase of the Image Manipulation Techniques is the "hole," of diverse shapes and sizes and resembling as closely as possible the user/patient's own defect, which is virtually "cut" into the picture or video through software techniques before the picture or video is displayed on the lenses for the eyes to view.
  • In this First Phase area there is no video or image display, except what the user might see with the naked eye and with the existing defect.
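  • As a rough illustration of this First Phase, a defect-shaped region can simply be masked out of the camera frame so that nothing is projected there. The following Python/NumPy sketch assumes a circular defect and an arbitrary frame size purely for clarity; a real defect map would be irregular and would come from the patient's diagnostic FOV data.

```python
import numpy as np

def cut_defect_hole(frame: np.ndarray, center: tuple, radius: int) -> np.ndarray:
    """Zero out (i.e. do not display) the pixels inside a circular defect region.

    A real defect map would be irregular and derived from the patient's
    diagnostic data; a circle is used here only for illustration.
    """
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    hole = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = frame.copy()
    out[hole] = 0          # nothing is projected onto the unsighted area
    return out

# Example: a 480x640 RGB camera frame with a simulated central defect
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
masked = cut_defect_hole(frame, center=(240, 320), radius=60)
```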
  • the Second Phase Image Manipulation Technique is the augmented reality video display, which contains the Pixel Mapping, Interpolation, and Synthesis. This is the area where the pixels which have been "cut out" of the video or image are repositioned to the nearest adjacent sighted area of the eye. These pixels and subpixels are repositioned on the area directly around the defective area of the eye, and the brain, as in the case of the punctum caecum, fills in the "hole" with the visual information added to the surrounding area.
  • the image is displayed directly onto the eyes through techniques like retinal projection.
  • the display is directly on the eye by virtue of Smart Contact Lenses, which can create a display on a contact lens covering the eye.
  • This pixel mapping and replacement occurs after the camera has acquired the image or video and the buffering begins.
  • This manipulation typically takes place in the Central Processing Unit (CPU) of a microcircuit, and more specifically in the Graphics Processing Unit (GPU), occasionally called the Visual Processing Unit (VPU).
  • CPU Central Processing Unit
  • GPU Graphics Processing Unit
  • VPU Visual Processing Unit
  • These GPU “chips” are specialized electronic circuits designed to rapidly manipulate and compress/decompress video and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Speed is key here, as any latency will be evident in the display to the eye.
  • most modern GPUs can be configured to have only a 1 millisecond delay between acquisition of the image or video, manipulation of the pixels, and display of the video, which the eye can easily accommodate and absorb in the display with little or no effect.
  • both the CPU and the GPU may need to be used with their functions separated, and an ASIC, which is an Application Specific Integrated Circuit, may be used to help combine the necessary CPU and GPU functions.
  • the CPU and the GPU work together, however, to accomplish the task and may need other parts on a circuit or circuit board to fully perform, such as capacitors, resistors, input/output connectors, circuitry, and the like.
  • two pixels or parts of an image which were originally exactly adjacent to one another on any axis (up/down, sideways, or transverse) may be moved together one way, or, if one pixel or part of an image is closer to one border than to the other, the pixels may be split, with each pixel or image portion going to its closest border; this is the essence of corrective subpixel mapping and modification.
  • the cutting of the "hole” and repositioning of the video or image may be accomplished primarily by stretching the pixels to a larger overall area than the original captured image (i.e. 100° stretches to 120° overall space, but the center 10° is cut out). In this method all the pixels are still there, in relatively the same size and shape, as originally captured and buffered by the camera(s), except either the far edge boundary has been extended or cropped. This method works well with Virtual Reality goggles, but not as well with Mixed Reality improvements in the technique. Thus, the preferred method in Mixed Reality Corrective Glasses (MRCG) is to use Pixel Mapping, Interpolation, and Synthesis (PMIS).
  • MRCG Mixed Reality Corrective Glasses
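  • The stretching approach described above can be sketched as a radial remapping of display angles back to source angles. The Python sketch below is only an illustration under simplifying assumptions (a radially symmetric stretch using the 100°/120°/10° figures from the example above); it is not the patented implementation.

```python
import numpy as np

def stretch_with_central_cut(r_out_deg: np.ndarray,
                             src_half_fov: float = 50.0,   # captured 100 deg total
                             dst_half_fov: float = 60.0,   # displayed 120 deg total
                             hole_half_deg: float = 5.0):  # central 10 deg left blank
    """Map an output (display) angular radius back to a source radius.

    The captured field (0..src_half_fov) is stretched over the annulus from
    hole_half_deg to dst_half_fov, so nothing lands in the central hole.
    Returns NaN for output angles inside the hole (no pixel is drawn there).
    """
    r = np.asarray(r_out_deg, dtype=float)
    scale = src_half_fov / (dst_half_fov - hole_half_deg)
    r_src = (r - hole_half_deg) * scale
    r_src[r < hole_half_deg] = np.nan   # inside the cut-out: draw nothing
    return r_src

# e.g. a display ray at 30 deg eccentricity samples the source at ~22.7 deg
print(stretch_with_central_cut(np.array([2.0, 5.0, 30.0, 60.0])))
```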
  • the pixels in the area of the display to be avoided are mapped, in real or near real time, within or without a buffer, and software algorithms keep the same shape of the image but reduce the size of the pixels to subpixels, such that an image which was, for instance, shown on four pixels is now shown on three, two, or just one.
  • the resulting display has all the visual information, just displayed using a fewer number of pixels and subpixels.
  • pixels have been reduced to subpixels, which have been moved in the video according to the software implementation and the shape of the defect.
  • the pixels and the image that are moved do not necessarily have to have a specific "boundary" like an oval or a circle, but the pixels can be removed from any defect area, no matter how irregular and repositioned to a sighted area just adjacent.
  • the idea is not just one where boundaries are created, but where the image or video pixels are moved one by one out of the non-seeing, defect area to another location as close to that unsighted area as possible with the remaining image being likewise transposed to make room for the removed and replaced pixels and image.
  • the area to be avoided may be very irregular and complex, which makes no difference, as once it is mapped, pixels are removed from the space where no sight is and placed as closely adjacent to the place on the pixel map as possible, which is described herein as subpixel mapping and placement.
  • Pixels as used herein are perceived spaces; subpixel mapping is a recently developed technology involving algorithms to obtain and map the spatial distribution information of the area covered within mixed pixels and then reposition them on a smaller or different scale. See Figure 25. Algorithms can be applied to pixel-mapped video or image content, and images moved from one location in the video to another, wherein the shape may not be a homogenous shape like a circle or oval. In some instances, the pixels or subpixels must be "distorted" in order to have 100% of the image included in 100% of the display space. In this case the pixels or image take on a shape which is not a typical pixel square, but can be something besides a square, often more like a tetrahedron or polyhedron, or shapes like triangles or parallelograms.
  • the classification on a per pixel basis is established and then reconstituted in a pixel/subpixel format to achieve subpixel mapping for modification.
  • an image or video can be displayed with augmented pixel/subpixel manipulation and stitching so that a whole image exists, just not in the original place as the camera input originally assigned.
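  • A toy, one-dimensional sketch of this pixel-to-subpixel compression is shown below: the content that fell on the unsighted slice of a scan line is resampled onto the remaining sighted pixels. The single-row treatment and linear interpolation are simplifying assumptions; the described system works on whole frames in the GPU.

```python
import numpy as np

def compress_row_segment(row: np.ndarray, defect_start: int, defect_end: int) -> np.ndarray:
    """Redisplay one scan line so that the content falling on the defect
    [defect_start:defect_end) is squeezed onto the sighted pixels around it.

    All of the original visual information is kept, just shown on fewer
    pixels ("subpixels"), and nothing is drawn on the defect itself.
    """
    sighted = np.concatenate([np.arange(0, defect_start),
                              np.arange(defect_end, row.shape[0])])
    # resample the whole original row onto the sighted positions only
    src_positions = np.linspace(0, row.shape[0] - 1, num=sighted.size)
    out = np.zeros_like(row)
    out[sighted] = np.interp(src_positions, np.arange(row.shape[0]), row)
    return out

row = np.arange(16, dtype=float)          # toy scan line
print(compress_row_segment(row, 6, 10))   # content squeezed around pixels 6-9
```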
  • the Third Phase is where video is faded back into reality video through "stitching" or similar techniques, which are used to merge the Second Phase with the Third Phase in steps so that the Second Phase is "phased out" and the Third Phase of real-world captured video dominates.
  • direct camera input is a phased-in re-engagement of the real world projected image.
  • the Second Phase Image Manipulation Technique merges with the Third Image Manipulation Technique to phase out the 100% pixel manipulation.
  • This Third Phase works the other way, to reintroduce the image or video back to 100% of what the camera actually acquires as an image.
  • the video may still be manipulated so as to correct for line-of-sight (to correct for what the eye sees versus the camera-captured images) and to correct for the epipolar geometry effect of the eyes moving inward and outward/straight.
  • This Third Phase software/hardware stitching is akin to the techniques commonly utilized in 3D video stitching software. It is in Phase Three where the augmented video is then returned to an un-modified video of what the user would actually "see” if the cameras were projecting and displaying raw, unmodified video or images.
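  • A minimal sketch of this phase-out is an alpha blend between the manipulated video and the raw camera video across a transition band, so that the edge of the displayed region matches the unmodified view. The radial band, frame size, and linear ramp below are illustrative assumptions only.

```python
import numpy as np

def blend_to_raw(manipulated: np.ndarray, raw: np.ndarray,
                 center: tuple, inner_r: float, outer_r: float) -> np.ndarray:
    """Phase the augmented (manipulated) video back into raw camera video.

    Inside inner_r the manipulated image is shown; outside outer_r the raw
    camera image dominates; in between, a linear alpha ramp stitches the two.
    """
    h, w = raw.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    alpha = np.clip((outer_r - dist) / (outer_r - inner_r), 0.0, 1.0)
    alpha = alpha[..., None]                       # broadcast over RGB channels
    return (alpha * manipulated + (1 - alpha) * raw).astype(raw.dtype)

raw = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
manip = raw[::-1].copy()                           # stand-in for Phase Two output
out = blend_to_raw(manip, raw, center=(240, 320), inner_r=120, outer_r=200)
```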
  • This "raw" video is projected or displayed on the retina, contacts, or lenses of glasses where only a portion of the Field of Vision is used for Phases One through Three and the rest of the display area is reserved for Phase Four video, where it can be merged by the eye and brain with the real-world vision which is external to Phase Four.
  • Phase Four is where the user sees with his or her peripheral vision the real world and upon which either the sight through the lenses or beyond the lenses, no video is displayed.
  • This Phase also includes any extra peripheral vision that is extant outside of the glasses, lenses, contacts, or retinal projection, and provides the user with additional real-world cues and images.
  • a user experiences four distinct image sets, all of which merge through the brain's natural simulations to create one Mixed Reality view of the world, which is corrected for the defect.
  • an augmented video which could be as large as 30-50 degrees Field of Vision or more. This could be greater or smaller depending on the type of defect and the amount of correction.
  • in the augmented video display area of the lens, a video of what the eyes would ordinarily see is displayed, but augmented, with a phase-in/phase-out of the augmented video.
  • an implanted lens, or lenses, akin to an implanted intraocular lens performs some or all of the pixel manipulation by diverting pixels away from the damaged areas of the macula.
  • IOLVIP Intraocular Lens for Visually Impaired Patients
  • the IOLVIP procedure involves the surgical implantation of a pair of lenses that magnify and divert the image using the principles of the Galilean telescope. By arranging the lenses, it is possible to direct the image to a different part of the eye than the fovea.
  • the glasses, frame and headgear (GFH) and external display would be calculated to coordinate with the implanted lenses to cut out the image ordinarily displayed where the defect exists and project the full image on the display, which is then diverted by the implanted lenses and becomes a full image.
  • GFH glasses, frame and headgear
  • this invention comprises a system having a database, a CPU, a model controller, a camera intake, a display controller, and a display unit.
  • the model controller, which may be hardware, firmware, software, memory, a microcontroller, a state machine, or a combination of any of the foregoing, is coupled to the database and is configured to establish a reference to a visual model associated with a patient's visual defect; then the camera(s), one or more, take a picture or video of the actual image, the software makes corrections for the patient's visual defect, and the corrected/modified image, which has been corrected for the patient's visual defect, is displayed.
  • one or more cameras and lenses are enabled to assist the patient in identifying one or more of his or her visual impairment boundaries, and then transferring this information into the Visual Modification Program, which augments the displayed video by displacing the part of the image within the vision-impaired boundaries and repositioning it to the nearest sighted area.
  • the Visual Modification Program also re-introduces real-world images captured by a Camera Input System (CIS) so that an augmented video segment is displayed on the lenses, wherein the augmented video segment is phased back to a real-world, un-modified video, so that the "edges" of the displayed system are in sync or near sync with the real-world vision seen by the eyes.
  • CIS Camera Input System
  • the invention also includes a method to store the modified visual model in the database and to project it on a display.
  • the invention also includes a Diagnostic Impairment Mapping (DIM) System and method to capture information about the area and location of the eye defect. An example of this would be mapping an area where macular degeneration has occurred and little or no sight or vision remains.
  • the corrected visual model includes data related to the quality of the patient's vision and the manipulation of images and/or pixels or other visual portions of a video or recorded image or images which correct for that patient's visual defect.
  • the corrected image is not a manipulation of pixels, but a mapping of pixels in software/firmware including a step of correction for the patient's visual defect through repositioning of the image onto other pixels or subset of pixels which are then projected onto the sighted areas of the eye, such that a whole picture or video is shown, but the portion of the eye that is defective is left with no image/video projection.
  • FIG. 1 is a block diagram of a system to augment a patient's vision, according to an embodiment of the present invention
  • FIG. 2 is a diagrammatic illustration of a patient's vision without a defect
  • FIG. 3 is a diagrammatic illustration of a patient's vision with a defect
  • FIG. 4A is an illustration of a sample visual model, according to an embodiment of the present invention
  • FIG. 4B is an alternative view of the sample visual model of FIG. 4A;
  • FIG. 4C is an illustration of first and second boundaries, according to an embodiment of the present invention.
  • FIG. 4D is an illustration of first and second boundaries, according to another embodiment of the present invention.
  • FIG. 5 is an illustration of a complex boundary, according to an embodiment of the present invention.
  • FIG. 6 is an illustration of a simple boundary composed of one of a plurality of predefined shapes;
  • FIG. 7 is an illustration of a patient's vision with a more complex defect
  • FIG. 8 is an illustration of a boundary associated with the illustration of FIG. 7;
  • FIG. 9 is a diagrammatic illustration used in establishing a retinal map, according to an embodiment of the present invention.
  • FIG. 10 is a diagrammatic illustration used in establishing a retinal map, according to an embodiment of the present invention.
  • FIG. 11 is a diagrammatic illustration used in establishing a retinal map, according to another embodiment of the present invention.
  • FIG. 12 is a diagrammatic illustration of a head mounted display unit, according to an embodiment of the present invention.
  • FIG. 13 is a second diagrammatic illustration of the head mounted display unit of FIG. 12;
  • FIG. 14 is a diagrammatic illustration of a heads up display unit, according to an embodiment of the present invention.
  • FIG. 15 is a flow diagram of a method for augmenting the vision of a patient, according to an embodiment of the present invention.
  • FIG. 16 is a graphical illustration of a first example of a manipulation of prescribed retinal interface, according to an embodiment of the present invention.
  • FIG. 17 is a graphical illustration of a second example of a manipulation of prescribed retinal interface, according to an embodiment of the present invention.
  • FIG. 18 is a flow diagram of a process for establishing a digital field of vision map, according to an embodiment of the present invention.
  • FIG. 19 is a graphical illustration of a first portion of the process of FIG. 18;
  • FIG. 20 is a graphical illustration of a second portion of the process of FIG. 18;
  • FIG. 21 is a graphical illustration of a third portion of the process of FIG. 18;
  • FIG. 22 is a graphical illustration of an Amsler map of a patient with normal vision and an Amsler map of a patient with AMD;
  • FIG. 23 is an illustration of a smart contact lens
  • FIG. 24 is an illustration of the patient's macula
  • FIG. 25 is an illustration of subpixel mapping
  • FIG. 26 is a graphical illustration of the corrected field of vision, showing the area of pixel manipulation
  • FIG. 27 is a further illustration of the corrected field of vision, showing the area of pixel manipulation
  • FIG. 28 is an illustration of the system with remote camera (top) and contact lens camera (bottom);
  • FIG. 29 is a flow chart of the process
  • FIG. 30 is an illustration demonstrating dynamic opacity
  • FIG. 31 is an illustration of lens layers
  • FIG. 32 is an illustration of a micro display configuration.
  • Embodiments in accordance with the present invention may be embodied as an apparatus, method, or computer program product. All of the Systems and Subsystems may exist or portions of the Systems and Subsystems may exist to form the invention. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "unit", "module” or "system.” Furthermore, the present invention may take the form of a computer program product embodied in any tangible media of expression having computer-usable program code embodied in the media.
  • any combination of one or more computer-usable or computer-readable media may be utilized.
  • Examples of computer-usable or computer-readable media include a random-access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device.
  • Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages.
  • the intelligence in the main circuitry may be software, firmware or hardware, and can be a microcontroller based or included in a state machine.
  • the invention may be a combination of the above intelligence and memory and this can exist in a Central Processing Unit or a multiple of chips including a central graphics chip.
  • the computer portion of the invention typically also includes a Model View Controller.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • These computer program instructions may also be stored in a computer-readable media that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable media produce an article of manufacture, including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the disclosure particularly describes a system, a method and computer program instructions stored in media, that augment the sight of an individual or patient whose sight has been damaged or is otherwise defective.
  • the present invention provides techniques that may be implemented in systems, methods, and/or computer-executable instructions that (1) map the defective areas of the patient's sight, (2) establish one or more boundaries that delineate between the effective and defective areas of the patient's eye(s), (3) capture an image (or series of images) using a camera associated with the patient, (4) map the captured image (or series of images) and generate a corrected image (or series of images), and (5) present the corrected image(s) to the patient's eye(s).
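  • The five steps above can be summarized in a short skeleton. The class and method names below are illustrative assumptions, not the actual product API; the sketch only shows how the components listed in this disclosure would hand data to one another.

```python
# Skeleton of the five steps: map defects, establish boundaries, capture,
# correct, present.  Component objects are injected; their internals are
# outside the scope of this sketch.

class VisionCorrectionPipeline:
    def __init__(self, database, model_controller, display_controller, display_unit):
        self.db = database
        self.model = model_controller
        self.display_ctrl = display_controller
        self.display = display_unit

    def run_once(self, camera, patient_id):
        # (1) map the defective areas of the patient's sight
        visual_model = self.model.load_visual_model(self.db, patient_id)
        # (2) establish boundaries between sighted and unsighted areas
        boundary = self.model.establish_boundary(visual_model)
        retinal_map = self.model.establish_retinal_map(boundary)
        # (3) capture an image from the patient-worn camera
        frame = camera.capture()
        # (4) generate a corrected image from the retinal map
        corrected = self.display_ctrl.apply_corrections(frame, retinal_map)
        # (5) present the corrected image to the patient's eye
        self.display.present(corrected)
```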
  • the system 10 includes a database 12, a model controller 14, a display controller 16, and a display unit 18.
  • a data gathering unit 20 is used to gather data that may be used to develop a visual model of the patient's eyesight.
  • The data used to establish the visual model, the visual model itself, and other data are stored in the database 12. Since the peripheral receptors in the retina are usually still functioning in the macular degeneration case, the present invention stretches, skews, and/or otherwise manipulates the image(s) presented to the eye(s) of the patient to avoid the macula or the damaged portions of the macula.
  • the entire image is presented to, or onto, the functioning retinal receptors.
  • the present invention creates a distortion map of the image and displays it, or projects it onto the periphery of the eye(s), while avoiding the (damaged portion of the) macula.
  • the distorted image is presented to, or projected onto, the eye using (high definition) goggles, glasses, a "smart" contact lens, or a photon projection (using a virtual retina display) of the image directly onto the periphery of the eye.
  • the model controller 14 is coupled to the database 12 and is configured to establish the visual model associated with a patient and to store the visual model in the database.
  • the visual model includes data related to a quality of the patient's vision.
  • the model controller 14 is further configured to establish a boundary as a function of data associated with the visual model. This process is discussed in further detail below.
  • the boundary is indicative of an area to be corrected within the patient's vision.
  • the model controller is further configured to establish a retinal map as a function of the boundary and to store the retinal map in the database.
  • the display controller 16 is configured to receive and to store the retinal map.
  • the display controller 16 is further configured to receive an image (or series of images) from a camera, such as a video camera, (see below) associated with the patient and to apply corrections to the image(s) based on the retinal map and responsively generate corrected image(s).
  • a camera such as a video camera
  • one or more macular or retinal maps may be generated. These maps may be associated with predefined settings, for example, daytime, nighttime, reading, or watching television. The correct retinal map may be automatically selected for specific conditions and/or may be user selectable to fit changing conditions.
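  • A small sketch of such condition-based selection is given below. The map names, lux threshold, and override mechanism are assumptions made for illustration; the disclosure only states that maps may be auto-selected or user-selected.

```python
from typing import Optional

# Hypothetical stored retinal maps keyed by viewing condition
RETINAL_MAPS = {
    "day": "retinal_map_day.bin",
    "night": "retinal_map_night.bin",
    "reading": "retinal_map_reading.bin",
    "television": "retinal_map_tv.bin",
}

def select_retinal_map(ambient_lux: float, user_override: Optional[str] = None) -> str:
    """Pick a stored retinal map automatically from ambient light, unless the
    wearer has cycled to a specific map manually."""
    if user_override in RETINAL_MAPS:
        return RETINAL_MAPS[user_override]
    return RETINAL_MAPS["day"] if ambient_lux > 50.0 else RETINAL_MAPS["night"]

print(select_retinal_map(ambient_lux=200.0))                         # -> day map
print(select_retinal_map(ambient_lux=5.0, user_override="reading"))  # -> reading map
```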
  • the display unit 18 is coupled to the display controller 16 and is configured to receive the corrected image(s) and to present the corrected image(s) to the eye of the patient. It should be noted that the present invention may be configured to present corrected video, as a series of images, to the eye of the patient.
  • the model controller 14 and database 12 may be embodied in a computer, in specific or specifically designed hardware or apparatus, in an application-specific integrated circuit (ASIC), or in a server or servers operating independently or in a networked environment.
  • the data gathering unit 20 (described in further detail below) may be linked, at least temporarily, or its data may be transferred over a network, electronically, or through physical media.
  • the retinal map may be established automatically and adjusted (with or without the patient's specific update permission) at or by the model controller and then transferred electronically to the display controller.
  • the model controller 14 may establish a plurality of retinal maps that vary in either the parameters used to generate the retinal map and/or the method used to generate the retinal map.
  • the plurality of retinal maps may be stored at the display controller 16. The patient may then cycle through the retinal maps and select, for use, the one that works best. For instance, a particular retinal map may work best for the instant conditions; thus, the patient may select a retinal map that works best for the conditions which currently exist.
  • the display controller 16 and the display unit 18 may be embodied in a head mounted display, goggles, or glasses that are mounted to, or worn by the patient.
  • the display controller 16 and display unit 18 may be embodied in a unit that is separated from, i.e., not worn by, the patient.
  • One or more sensors may be utilized to find the location and distance of the patient relative to the display unit 18 such that the image may be displayed properly.
  • Each eye of the patient is different and typically has a unique defect.
  • one eye of the patient may have a specific defect (having a specific shape, size and location), while the other eye of the patient may not have a defect or may have a defect having a different shape and size.
  • each eye of the patient will generally be mapped and a respective visual model of each eye established.
  • a border of the defect of each eye will be generated and an associated retinal map generated.
  • separate cameras will generate a separate set of images for each eye and the display controller 16 will generate a respective series of images to be presented to each eye. Cameras should be of very high quality; 4K or 8K cameras and projection will provide the best results.
  • a graphic 22A representing the vision of a patient's eye without a defect is shown for purposes of comparison.
  • a graphic 22B representing the vision of a patient's eye with a defect is shown.
  • the defect is represented by the dark shape 24 shown in the center of the graphic 22B.
  • the visual model may be established using the data gathering unit 20.
  • the data gathering unit 20 may include at least one of (1) a field of vision ophthalmological instrument, (2) a portable mobile field of vision test apparatus, and (3) a computer-based system. The process of gathering data using the data gathering unit 20 is discussed in more detail below.
  • the FOV data 26 is used to create the visual model.
  • the FOV data 26 includes a plurality of cells 28 arranged in a grid 30.
  • Each cell 28 has an associated value associated with the quality of the patient's vision.
  • the values may be based on an absolute or representative scale that is indicative of the quality of vision. Alternatively, the values may be a deviation from a standard value, or a value of an associated cell.
  • the values in the grid utilize a scale of 0-9, where 0 represents no defect, 9 represents a defect, and the values 1-8 represent intermediate qualities of vision.
  • a scale of 0-9 is for discussion purposes only.
  • the scale utilized may be any suitable scale, for example, 0-99, 0-255, or -30 to 30.
  • the illustrated grid has 12 rows and 20 columns.
  • the shape of the grid may be used to approximate the shape of an eye and may be different between the left and the right eye.
  • the size and the shape of the grid may be based on a 12 x 20 grid, however, any size grid may be utilized.
  • the size of the grid may be dependent upon the data gathering process, or data gathering unit 20 and/or the display unit 18.
  • the FOV data may be represented by a contour, polygon or morphological operator.
  • the boundary may be established as a function of the values associated with the cells in the grid.
  • the values in the grid are compared with a threshold to establish the boundary.
  • the threshold may be set to 7.
  • any cell 28 having a value of 7 or greater is within the boundary and any cell 28 having a value of 0 is outside of the boundary.
  • A modified view of the FOV data 26 is shown in FIG. 4B, in which the cells 28 meeting the above threshold are highlighted.
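  • In code, extracting the boundary cells from such a grid is a simple threshold comparison. The sketch below uses random placeholder values on the 12 x 20, 0-9 scale described above; real values would come from the data gathering unit 20.

```python
import numpy as np

# Toy 12 x 20 field-of-vision grid on a 0-9 scale (0 = no defect, 9 = defect).
fov = np.random.randint(0, 10, size=(12, 20))

THRESHOLD = 7
inside_boundary = fov >= THRESHOLD      # cells treated as defective
print(np.argwhere(inside_boundary))     # grid coordinates inside the boundary
```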
  • the FOV data 26 could be used to create a contour.
  • the visual model emerges from interpreting the raw data and is not necessarily a point-by-point transformation of the raw data.
  • the intent is to put the removed pixels as close to where they ordinarily would have been; thus, the algorithms in the software determine exactly where to move such pixels/rays based on (i) the whole of the defect, (ii) the distance of the specific pixel or ray from the border of the defect, (iii) whether a pixel is a new image or a part of an existing image (meaning whether the pixel is within an image or on the border of an image change), (iv) the other options for moving the pixel another way, and (v) where the adjacent pixels to be adjusted are being moved.
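  • A brute-force approximation of criterion (ii), moving each removed pixel toward the nearest sighted location, is sketched below. It ignores criteria (i) and (iii)-(v) (image content, alternative directions, and neighboring moves), which the full algorithm also weighs.

```python
import numpy as np

def nearest_sighted_targets(defect_mask: np.ndarray) -> dict:
    """For every defective cell, find the nearest sighted cell by Euclidean
    distance, approximating the rule of placing removed pixels as close to
    their original position as possible.  A brute-force sketch only.
    """
    defect = np.argwhere(defect_mask)
    sighted = np.argwhere(~defect_mask)
    targets = {}
    for cell in defect:
        d2 = np.sum((sighted - cell) ** 2, axis=1)
        targets[tuple(cell)] = tuple(sighted[np.argmin(d2)])
    return targets

mask = np.zeros((12, 20), dtype=bool)
mask[4:8, 8:13] = True                            # toy central defect
print(nearest_sighted_targets(mask)[(5, 10)])     # -> nearest sighted cell
```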
  • vector images are used.
  • vector images and pixels are used interchangeably.
  • digital images which are made up of (usually) millions of tiny squares or other shapes known as pixels
  • vector images are made from mathematical points connected together by lines and curves to create different shapes. Since they are based on math and algorithms, not merely pre-placed pixels, vector shapes are extremely flexible and do not suffer from the same limitations as pixels.
  • the first major system is the glasses, frame and headgear ("GFH"), which typically is worn on the head of a user and positioned over the eyes and nose like typical glasses.
  • the GFH houses the cameras, the microcontrollers, the connectors, and Subsystems which are comprised of sensors, such as motion sensors, six or nine Degrees of Freedom sensors (up/down; back/forward; left/right; pitch/roll/yaw), gesture recognition sensors, fiducial marker sensors, accelerometer sensors, infrared sensors, alert sensors (which would alert a user to a danger), gyroscope technology and related sensors, positional tracking sensors (including Wi-Fi location systems, mobile location systems, and RFID location-based systems), sound sensors, and optical sensor technologies.
  • the sensor array also can include mechanical linkages, magnetic sensors, optical sensors, acoustic sensors, and inertial sensors. This list is not exhaustive, but illustrative of the type of sensors located on the GFH.
  • the GFH also houses virtual environment (VE) Subsystems such as: (1) head and eye tracking for augmenting visual displays; (2) hand and arm tracking for haptic interfaces to control virtual objects and aid in the diagnostic tools; (3) body tracking for locomotion and visual displays; (4) environment mapping interfaces to build a digitized geometrical model for interaction with sensors, diagnostics, and simulations.
  • VE virtual environment
  • Other sensor technologies typically housed on the GFH are the digital buttons, which would include the power buttons and a D Pad or Control Pad for accessing and controlling functions by the user.
  • the sensors listed above include their operating systems and output.
  • the GFH also houses the connectors, such as a power connection for recharging a battery or for direct connection to an AC source, as well as other connectors for HDMI, sound, and other input/outputs, such as an additional image overlay display, or for a diagnostics protocol for upgrading the system.
  • the GFH also houses the Microprocessor(s) Control Circuits (MCC) which are described below.
  • the GFH may also include a strap and counterweight or other headgear to balance the GFH and maintain its position on the head.
  • the GFH may include a "dongle" whereby one or more of the Systems or Subsystems are connected via wire or wireless to another device, such as could be worn on a belt or carried in a pocket to reduce the overall weight of the GFH.
  • the GFH is connected to another device which is providing power, while in an alternative embodiment, the GFH has its own power from the Mains or from wireless power transmission or from a battery. Further, in another embodiment, the GFH houses the cameras, the microcontrollers, the connectors, Central Processing Unit, Graphics Processing Unit, software, firmware, microphone, speakers, and subsystems.
  • the GFH contains an RFID reader to read signals from RFID tags.
  • the GFH contains optical character recognition/reader sensors to read information from the real world.
  • some parts of the system mentioned herein are in a dongle attached to the GFH via wire or wireless connection.
  • some portions of the system mentioned herein are contained in a connected device, like a laptop, smart phone, or WiFi router.
  • some parts of the system mentioned herein are contained in a remote location and accessed by the GFH via Radio Frequency (i.e. cellular bands) or other wireless frequencies or via wireline.
  • Radio Frequency i.e. cellular bands
  • multiple heads-up displays on the same headgear or on the headgear of multiple wearers are connected through a wire or wireless network in order to develop or control information which can be shared with the other users.
  • the GFH that gathers information from the cameras or sensors, processes the information through preset filters, and distributes the information would have the ability to control the information or share it with all the other GFH units connected to the network.
  • the information could be gathered from a remote location or library and shared with other HDC through an intermediate source like a smart phone or laptop.
  • the GFH also contains the battery and recharging DC Subsystem or, alternatively, an AC input and converter to connect directly to an AC source, as well as the wired and wireless Subsystems to connect or pair the device to other systems, such as sound, alert systems, fall monitoring systems, heart monitoring, other vital sign monitoring, and various app programs, cloud computing, and data storage.
  • Other Subsystems in the GFH are a microphone/speaker and amplifier system, an integrated Inertial Measuring Unit (IMU) containing a Three Axis Accelerometer, a Three Axis Gyroscope, and a Three Axis Magnetometer, as well as an Auxiliary port for custom sensors such as a range finder, thermal camera, etc.
  • IMU Inertial Measuring Unit
  • Subsystems like Bluetooth for near connectivity to cell phones, tablets, automobiles, and the like can be included as well as Global Positioning Systems or interior tracking systems like RFID, Wi-Fi, or Cellular tracking location based directional travel.
  • Other communication systems can also be included based on either wire or wireless connectivity of the GFH.
  • the GFH can also be connected wired or wirelessly to a main monitoring data system which would track the health, whereabouts, and condition of the user to be displayed to another person such as a caretaker or a health care provider.
  • an AR headset which provides a computer mediated video shown on a display screen such that the wearer sees both the real world and the augmented video at the same time.
  • features such as voice/speech recognition, gesture recognition, obstacle avoidance, an accelerometer, a magnetometer, a gyroscope, GPS, spatial mapping (as used in simultaneous localization and mapping (SLAM)), cellular radio frequencies, WiFi frequencies, Bluetooth and Bluetooth Low Energy connections, infrared cameras, and other light, sound, movement, and temperature sensors are employed, as well as infrared lighting, eye-tracking, and Dynamic Opacity, as set out in the following.
  • the GFH uses a bright display, typically for the highest resolution it could be a Quad HD AMOLED display, which is reflected onto the surface of a lens for the user to see the "virtual" portion of the display.
  • the brightness can be adjusted up or down depending on ambient light.
  • the adjustment can be made in the system controller and occur automatically depending on what the sensors report the brightness of the ambient light to be; the display would typically be brighter in brighter exterior light.
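  • A trivial sketch of this automatic adjustment is a mapping from the ambient-light sensor reading to display brightness. The lux range and nit limits below are assumptions, not device specifications.

```python
def display_brightness(ambient_lux: float,
                       min_nits: float = 100.0,
                       max_nits: float = 1000.0) -> float:
    """Map ambient light to display brightness: brighter surroundings get a
    brighter reflected display.  Ranges are illustrative only.
    """
    lux = max(0.0, min(ambient_lux, 10_000.0))   # clamp to an assumed range
    return min_nits + (max_nits - min_nits) * (lux / 10_000.0)

print(display_brightness(50.0))      # dim room   -> near minimum brightness
print(display_brightness(10_000.0))  # bright sun -> full brightness
```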
  • the AMOLED, OLED, or similar display can be one display or two displays, one for each eye as reflected on the lens.
  • a reflective coating is applied to the clear lens to enhance the reflectivity of the virtually displayed image.
  • the reflective coating is not necessary because of the operation of the Dynamic Opacity subsystem.
  • the clear lens upon which the high-resolution display is reflected, which may be a plastic like Lexan or another clear polycarbonate, glass, or any other clear material, may or may not have a reflector integrated into the lens to improve visibility of the reflected display.
  • the outside of the lens would also be bonded to a layer containing a Liquid Crystal Display (LCD) or Transparent OLED display which operates to obscure the outside light to provide greater acuity for the wearer viewing the virtual information displayed in high lighting conditions (Dynamic Opacity Display or DOD).
  • LCD Liquid Crystal Display
  • DOD Dynamic Opacity Display
  • An OLED transparent display can be quite clear, which makes reading fine details or text on objects behind the display possible until something is displayed on the screen in "virtual mode," meaning something from the streaming video reflected display is shown on the display/lens.
  • a transparent/translucent LCD can be used as an outer layer or middle layer of the otherwise clear lenses and bonded together with the clear lens upon which the reflected display is to be projected, to create the Dynamic Opacity. Dynamic Opacity senses where the image is being projected on the interior of the lens and obscures from one percent or less up to 100 percent of the otherwise clear lens.
  • the clear lens may or may not be also coated with a reflective layer. See Figure 30.
  • the clear lenses can also have reflective material on the inside to increase reflectivity of the projected image, such that the base lens is not exactly clear, but is some percentage of obscured by the reflective film, paint, or other embedded reflectivity. See Figure 31.
  • the Dynamic Opacity subsystem is controlled by the display controller and works in tandem with the information displayed.
  • the display controller creates an image buffer for the projected virtual display, and this information is shared with the Dynamic Opacity controller, which then activates the pixels that correspond with the exact or near-exact location where the display controller is projecting the virtual image. The portion of the reflective lens upon which the image is being projected is thereby made opaque on the exterior of the reflective lens, so that the displayed image appears brighter due to the backlighting or light filtering provided by the Dynamic Opacity.
  • the Dynamic Opacity subsystem works because the transparent LCD or translucent OLED contain some resolution of pixels, which in the instance of Dynamic Opacity can be a lower resolution than the projected display, and each pixel is controllable by the Dynamic Opacity controller, which gets its information of which pixels to activate from the display controller.
  • the activation of the pixels would be turning on the individual OLED RGB pixels in order to achieve the correct level of opacity to compensate for existing brightness for the condition experienced by the user.
  • the RGB pixels can be activated to create a "shadow" effect or depending on the type of light which is extant, an emphasis on either Red, Green, or Blue, or combinations of the three.
  • the Dynamic Opacity subsystem can be pre-programmed to provide a user with various options from warm color to cold (amber to green) for a sunglass effect on the exterior of the reflective lens.
  • the activation of the pixels is one or more phases and changing the polarization of the pixels to achieve opacity on the exterior of the glasses for the same effect.
  • an LCD unit would be employed which does not include an RGB component, as only outside ray blocking is needed.
  • any other transparent material which provides electronic control of pixels or areas inside the transparency to create an opaqueness can be used.
  • the outer layer would typically be transparent to the user providing a "see through" lens to the real world, until some virtual information was displayed on the Head Mounted Display Unit reflective lens, such as a hologram, a 2D image like a movie, or other 3D image or information, including text.
  • a controller like Model View Controller (MVC) would control the Dynamic Opacity Display through corresponding data input information about where the reflective display is projecting information.
  • MVC Model View Controller
  • the MVC would identify in the buffer or elsewhere, in digital format, where the images are going to be displayed on the reflective display, and the MVC would anticipate these locations and turn on pixels, including RGB pixels in the transparent LCD or OLED, and "cloud," or rather make more opaque, the portions of the lens corresponding to the areas of the lens where the virtual image is being displayed on the inside or other layers of the reflective display.
  • the Dynamic Opacity provides a "backdrop” or “background” display corresponding to the pixels where the virtual image is displayed making the contrast of the virtual display greater to the eye, so that brightness like natural sunlight can be minimized, which would otherwise compete with the reflected display and cause it to be hard to see.
  • the reflected display has a buffer between it and exterior light, which gives the reflected display greater brightness to the eye.
  • the Dynamic Opacity could be in either a coarse or fine mode, meaning that the opacity from the Transparent OLED or LCD would either appear in the general area of the virtual display or, for fine applications, would appear in almost or exactly the same pixels which correspond to the image pixels being displayed or reflected on the interior of the lens.
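  • The following sketch shows one way the opacity layer could be driven from the display buffer, including a crude coarse mode that also darkens neighboring cells. The block mapping, resolutions, and dilation are assumptions; the disclosure only states that the controller activates the LCD/OLED pixels corresponding to where the virtual image is projected.

```python
import numpy as np

def opacity_mask(display_frame: np.ndarray, lcd_shape: tuple,
                 coarse: bool = False) -> np.ndarray:
    """Decide which cells of the (possibly lower-resolution) Dynamic Opacity
    layer to darken, based on where the reflected display is drawing content.
    """
    lit = display_frame.max(axis=2) > 0          # pixels carrying virtual content
    h, w = lit.shape
    gh, gw = lcd_shape
    mask = np.zeros(lcd_shape, dtype=bool)
    ys, xs = np.nonzero(lit)
    mask[ys * gh // h, xs * gw // w] = True      # map lit pixels to opacity cells
    if coarse:                                   # coarse mode: darken neighbors too
        grown = mask.copy()
        grown[:-1, :] |= mask[1:, :]; grown[1:, :] |= mask[:-1, :]
        grown[:, :-1] |= mask[:, 1:]; grown[:, 1:] |= mask[:, :-1]
        mask = grown
    return mask

frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[100:200, 250:400] = 255                    # a virtual image being projected
print(opacity_mask(frame, lcd_shape=(48, 64)).sum(), "opacity cells darkened")
```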
  • the Dynamic Opacity can work with wave guide displays or prism type displays with equal effect.
  • the Dynamic Opacity described here can be used with a micro-mirror type display with equal effect.
  • the transparent OLED or LCD overlay or layer of the lens included in the Dynamic Opacity can also act as "sunglasses" for the display and "tint" the entire display to compensate for bright light, as on a sunny day.
  • a light valve can be used with the same effect in a similar manner.
  • a light valve (LV) is a known device for varying the quantity of light from a source, which reaches a target. Examples of targets are computer screen surfaces, or a wall screen or in this case the coarse or fine coverage of the virtual display on the glasses lens.
  • the MVC can be pre-programmed or programmed to automatically compensate for external brightness and act as instant "transition" lenses, and can be either used on the AR glasses display or, with computer intelligence, used on typical corrective lenses.
  • the entire exterior layer of Transparent OLED or LCD would tint much like a light valve to balance the bright external light, and still provide additional opaqueness on the portion of the lens where the virtual video or picture or image is being displayed.
  • the display can be a small display like OLED- on-Silicon micro-displays.
  • a display device consists of two key elements: the silicon backplane that contains circuitry to drive the OLED pixels, and the OLED emissive frontplane layer.
  • with a micro-display which is only 1 inch by 1 inch but contains 2.5K by 2.5K resolution, and with as bright a display as possible (1,000 nits), one can use two displays, one for each eye, to serve as the projectors onto a reflective or semi-reflective lens.
  • the micro- displays can serve as the projector for a reflected display which the eyes of the wearer would see.
  • the correction or fine tuning is offered by keystone corrections contained within or on the GFH and the correction for projection of the reflected display.
  • one or more small micro-displays, like those offered by TSMC (a 1 inch by 1 inch, 2.5K by 2.5K resolution display), can be used to project an image onto a clear lens connected to a head mounted display that contains computer intelligence through a CPU and can be known as a Smart Head Mounted Display (SmartHMD) or GFH.
  • SmartHMD Smart Head Mounted Display
  • a corrective lens or lenses can be affixed to very small micro-displays, which are bright enough to provide a reflected image onto the reflective lens.
  • in order to correct and fine-tune the image for display over an ultra-short throw between the display and the inside of the reflective lens, the micro-displays can utilize one or more image-correcting lenses and can even be combined with a middle layer of a wave guide or polarization, which provides enhanced image resolution and guides the image's rays to exactly where they are to be displayed on the reflective lens.
  • two corrective lenses sandwich a wave guide or polarization layer.
  • the image projection source is a small display, as shown in Figure 32, that is rotated to achieve the greatest clarity and field of view.
  • the image source (OLED) is then passed through a circular polarizer.
  • the circular polarized image is then passed through a lens with a positive diopter to focus the light through a linear polarizer.
  • This linearly polarized light is then passed through a negative diopter lens, and possibly multiple negative diopter lenses to achieve the necessary projection size required.
  • the purpose of the polarizing films used either in combination with other correcting lenses or not, is to retard the light that may be reflected back onto the micro-display and to focus the light rays on the specific part of the reflective lens as is desired.
  • the image is then reflected into the eye using a spherical lens, possibly coated with a semi-reflective or reflective surface.
  • the angle of the display and lens combination to the angle of the spherical reflection surface will be adjustable to provide focus for eye location, which can be monitored using eye-tracking technologies combined with the control of the projected image.
  • the Eye-Tracking subsystem works through hardware and software.
• the software is connected to the system's GPU working in connection with the system's model controller.
• the eye-tracking is captured by infrared (IR) light being projected onto the eye, which creates a glint or reflection, which is then captured by an IR sensitive camera.
  • an eye-tracking system captures the glint from the eye from 30 frames per second to 500 frames per second.
• This information is stored in real-time in the Model Controller, which can be an MVC, which then processes this information into a virtual space represented by XY or Cartesian coordinates. These coordinates provide the system with the information about where the user's gaze is in relation to the reflective lens.
• the eye-tracking information is correlated with the buffered information about the person's eye visual defect such that, when the manipulated image is displayed, it is in sync with the user's gaze. This is necessary because eye scanning and eye movement require that the buffered and manipulated area of the video be moved to correspond to the user's eye gaze, so that the buffered "hole" and the user's defect align and remain in sync. All of this processing happens in real-time and keeps up with the movement of the user's eye. Latency is important, and keeping the latency to less than 10 milliseconds will aid in preventing the user from feeling dizzy and in preventing "whirr."
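The gaze-synchronization step described above can be illustrated with a short sketch. The Python fragment below is only a simplified illustration of the idea, not the disclosed implementation; the display resolution, the normalized glint coordinates, the mask shape, and the helper names (gaze_to_display_px, shift_defect_mask) are assumptions introduced for the example.

```python
# A simplified, hypothetical sketch of keeping the remapped "hole" aligned with the
# user's gaze. All constants and helper names are assumptions for illustration.
import time
import numpy as np

DISPLAY_W, DISPLAY_H = 2560, 2560          # assumed 2.5K x 2.5K micro-display
LATENCY_BUDGET_S = 0.010                   # the <10 ms target mentioned above

def gaze_to_display_px(glint_xy_norm):
    """Map normalized gaze coordinates (0..1) from the IR eye tracker
    to pixel coordinates on the reflected display."""
    gx, gy = glint_xy_norm
    return int(gx * DISPLAY_W), int(gy * DISPLAY_H)

def shift_defect_mask(defect_mask, gaze_px, calib_px):
    """Translate the buffered defect ("hole") mask so it follows the gaze.
    calib_px is the gaze position recorded when the defect was mapped."""
    dx, dy = gaze_px[0] - calib_px[0], gaze_px[1] - calib_px[1]
    return np.roll(np.roll(defect_mask, dy, axis=0), dx, axis=1)

# Example: one tracker sample processed against the latency budget
defect_mask = np.zeros((DISPLAY_H, DISPLAY_W), dtype=bool)
defect_mask[1100:1460, 1100:1460] = True   # defect as mapped at the calibration gaze
calib_px = gaze_to_display_px((0.5, 0.5))

t0 = time.perf_counter()
aligned_mask = shift_defect_mask(defect_mask, gaze_to_display_px((0.52, 0.49)), calib_px)
elapsed = time.perf_counter() - t0
print(f"shift took {elapsed * 1000:.2f} ms (budget {LATENCY_BUDGET_S * 1000:.0f} ms)")
```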
• a computerized worm gear or drive is used, or a non-computerized mechanical device such as a worm gear or gear can be used, to move the micro-displays on the GFH so that the displays can be aligned with a person's own Inter Pupillary Distance or IPD.
• this gear can get its information about how far to move in one to four directions from the eye-tracking subsystem, which can measure the distance between the gleam detected in each of the person's eyes and translate that measurement data into movement data, so that the worm drive aligns the micro-display in the GFH to the correct position for the person's own IPD and relative height vis-a-vis the way the GFH is worn, so that side-to-side and up-and-down alignment is accomplished. Alignment to the user's eyes on four axes is necessary because this ensures the sharpest reflected image for each individual user in combination with how the user wears the GFH.
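For illustration, the conversion from glint-based eye measurements to worm-drive movement might look like the sketch below. All constants (pixels-per-millimeter scale, travel per step, nominal IPD) are hypothetical values chosen for the example and are not taken from the disclosure.

```python
# Illustrative sketch only: converting the measured inter-pupillary distance (IPD)
# into worm-drive steps. MM_PER_STEP, PIXELS_PER_MM, and NOMINAL_IPD_MM are assumed.
PIXELS_PER_MM = 12.0        # assumed IR-camera scale factor from calibration
MM_PER_STEP = 0.05          # assumed linear travel per worm-gear step
NOMINAL_IPD_MM = 63.0       # display separation the frame is assumed to ship with

def measured_ipd_mm(left_glint_x_px, right_glint_x_px):
    """Estimate IPD from the horizontal distance between the two glints."""
    return abs(right_glint_x_px - left_glint_x_px) / PIXELS_PER_MM

def worm_drive_steps(ipd_mm):
    """Signed steps each display carriage must move; travel is split per side."""
    total_travel_mm = ipd_mm - NOMINAL_IPD_MM
    return round((total_travel_mm / 2.0) / MM_PER_STEP)

steps = worm_drive_steps(measured_ipd_mm(310.0, 1090.0))
print(f"move each micro-display {steps:+d} steps")   # -> +20 steps in this example
```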
  • the GFH can be made where it is locked on a user so that in institutional environments it cannot be easily removed.
• people, such as inmates of some type, could be required to wear such GFH headgear, so that in the event of trouble or an emergency a manager could cut off the video feed, leaving the user with only limited sight resources with which to navigate. This may reduce the desire to become aggressive, or provide information for an emergency exit.
  • the display screen is subject to the command of an outside operator, and could display, for instance, peaceful pictures, and soothing music to calm the user experiencing a fit. Or the display could become opaque and deny the user the ability to see.
  • the display could be used to heighten awareness with magnification, color enhancements and sharper contrasts of images and sound.
• the GFH could also be used to dispense smells to either enhance a pleasurable experience, permit a focus on identification of a person or thing, or serve training purposes, such as giving a user an artificial experience like one that would exist in a simulation or in a real-world situation that does not currently exist.
  • the GFH is more like a helmet or the display more like a face shield than lenses.
  • the GFH is more like a band and the reflective display is like two partial spherical clear lenses, one partial sphere over each eye.
• the real world is not displayed, but videos, television shows, emails, or other online or prepackaged information is displayed, either with or without the macular degeneration type pixel manipulation, so that a user could experience other forms of entertainment, training, learning, or task accomplishment with the Mixed Reality Glasses beyond just a real-world projection onto the display.
  • the GFH can also be fitted with night-vision, infrared, or other types of cameras so that the experience is hyper real world. Thus, any kind of camera can be used to make a display.
  • the GFH can be programmed to act as a host for other devices utilizing technologies like Apple Airplay, which permits the GFH to be "paired" with other devices, like a phone or smart watch.
  • the GFH is connected to the internet via cellular or WIFI or other radio frequencies or wireline or wireless frequencies and acts like a router with other devices which can attach themselves to the GFH, much like computers acquire and connect to a typical internet router. This provides the GFH with the ability to access the internet.
• the GFH is loaded with Artificial Intelligence, like the Google virtual assistant, Siri, or Alexa.
• the GFH can be programmed with a virtual assistant virtual image and be able to show a visual virtual assistant (VVA), not just a voice like Siri or Alexa.
• AI software neural nets trained to change a video of a speaking mouth to speak other words can be used to create a VVA with a minimum of actual videos taken of the live subject who is to be the VVA.
• the GFH can include either controllable speakers or earbud type speakers incorporated into its sound system, which are attached either via wire or via a wireless link in the GFH like Bluetooth Low Energy.
  • the cameras can be used to not only display an image in real time to the user, but to record the image that a camera captures for replay later.
• a user, if sleepy, could activate a "record" button, causing the CPU and GPU to record the real-world images, for instance from a football game, and the user, when awakened, could then enable the recorded display to show on the lenses of the GFH.
  • This feature could also be used to recall real world experiences, for instance to record a university lecture for playback and contemplation at another time.
  • the playback can be in real time, slow motion, freeze framed, stopped, and fast forwarded or reversed.
  • the GFH has a Subsystem which permits storing data and replaying data and menus to identify the stored information, or to recall an instruction previously given.
• the user could activate the record function when taking medication and the CPU would log such information and be able to respond to visual, text-input, or auditory requests, like "did I take my medicine today," to which the GFH would respond yes, no, or not known, depending on whether the recorded information was available.
• the record function can be configured to automatically record certain events, for example using image recognition software which could activate the recording of taking medicine, convert that to database information, and be able to play back the correct information to the user.
• the GFH could also become Bluetooth enabled when in the proximity of other devices, like a pulse oximeter or blood pressure cuff, and automatically record this information and store it in the database to be replayed, recorded for later use, or sent to a third party, which might be a caretaker or health care provider, or stored for recall by the user.
• other meaningful information can be displayed along with either the real-world information or non-real-world information (such as TV or a movie), where a user can be alerted or prompted by text information or sound to conduct a certain time-based task, like, for instance, an alert to take medicine, check on a pet, or answer a phone call or email.
• the GFH would permit the user to use the D-Pad, Fiducial Marker, or other controller to switch from a real-world or non-real-world experience on the display to a task-based experience, such as an email or phone call or video phone call.
  • the GFH would be akin to a wearable computer, and permit a change in the user's environment and display to correspond with the task or undertaking necessary at the time, whether to see the real world, to see the non-real world, or use the GFH as a wearable computer, online device, Wi-Fi device, RFID device, Near Field Communication device, or other communication device, learning device, or a smart device, like one that would clock elapsed time.
  • the GFH acting as a wearable computing device could process a credit card payment or undertake some other task that the physical limitations of the user would otherwise prohibit or would enhance.
• the GFH does not provide specific correction for eye diseases like macular degeneration, which requires repositioning pixels or vectored images, but does contain all of these Subsystems, which exist to inform the user and show a user how to reach a certain waypoint or prioritize travel, all displayed on the lens display of the GFH.
  • the pixel manipulation is used, but not to correct for eye defects like macular degeneration, but to reposition a display onto a certain portion of the lenses, so that a user can see both the display and the real time world at the same time.
  • the GFH can contain other wearables technology to monitor, report, and track or direct the user. This can be done by audio, or within the display or as a separate display, where, for instance, the real-world environment is displayed, and a text is also shown of directions, or alerts or any kind of useful information to the user. Alerts could also be signaled by vibrations from the GFH.
  • the GFH can also signal messages to people external to the GFH, and, for instance, to alert third parties that an impaired sighted person is passing. Or alert third parties that the person has some sort of authority, like a siren, or flashing light, in the case of police officers or emergency personnel.
  • the GFH also contains the Image Projection and Lenses (IPL) System which is the combination of the projector and lenses upon which the image or corrected image is to be displayed, along with their connectors and integration with the other Systems and Subsystems.
  • the GFH also contains connectors for a patient diagnostics programming, and computer interface, for wearable computing functions and other Subsystems, explained herein.
  • the examples above are designated herein as "subsystems" or “Subsystems” of the invention which also is understood to include all powering, connectivity, computing, display, and integration of the Subsystems.
  • the computing and patient diagnostic programming can be resident in the system or external through a connector.
  • the patient diagnostics programming can be in the circuitry and intelligence of the system, the GFH or accessed externally through wire or wireless connections to a device like a tablet, laptop, computer, or mainframe.
  • the GFH may all be worn on the head, or be like a helmet, or be dispersed on other parts of the body as auxiliary wearables.
  • the second major System is the Camera Input System (CIS), which typically includes one or more cameras and their lenses, connectors, and operating systems.
• the cameras can be typical video or still cameras or can be of a specialized nature like night vision, infrared, 360-degree, thermal imaging, magnification, color, black and white, or 3D cameras, each with its own distinctive display.
• the GFH would contain one or more cameras and camera systems for capturing the real-world visuals that the user would ordinarily see; it can also contain one or more cameras which monitor eye movement, so that corrective software can receive this eye positioning information, approximate the epipolar geometry of the eyes (eyes moving inwards or outwards, left or right, transversely), and correct for the same, as well as for the offset of the line of sight of the cameras versus the actual eye position, so that the display shows nearly what the user's eye would ordinarily see.
  • the CIS may be partially or completely embedded on smart contact lenses, where the cameras, in the instance of macular degeneration, are positioned on the smart contact lens (SCL) in the exact location where no sight exists, being typically in the most central 15% of the eye.
  • the GFH provides the energy to be harvested by the SCL and the communication network and protocols, for wireless communication, all of which are a Subsystem of the GFH.
  • the GFH System provides the necessary energy and communication link and are synced together.
  • one or more cameras per eye are used to create monocular or binocular vision.
• the GFH System would also have a method to monitor the movement of at least one eye, like a camera in the GFH facing back towards one or more eyes to monitor the eye movement, for line of sight augmentations to the projected image, and for epipolar geometry corrections for the movement of the eyes focusing on far away versus close items.
• One Subsystem and method for monitoring the eye uses one or more additional cameras directed at at least one eye.
• This camera would utilize eye tracking software to provide to the FMP the information necessary for an adjustment in the display, so that the image displayed represents, as nearly as possible, the real-world images; thus, there would be correction for epipolar geometry and line of sight at least in the software.
• one camera is used, creating monocular vision to be displayed to one or both eyes.
  • the monocular vision can be corrected per eye, so that the "cut outs" are different for each eye, such that the correction best suits each eye differently.
• if two cameras per eye are used, it is recommended that they be offset towards each other, so that each camera's FOV intersects the other's. This is because, when capturing a wide Field of Vision, the cameras themselves introduce a certain amount of distortion.
• a typical camera lens that does not introduce a great degree of distortion covers only up to about 75 degrees FOV.
• two cameras are recommended to avoid wide-angle lenses, which introduce distortion, and to avoid the greater distortion from camera lenses that attempt a wide FOV.
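As a rough illustration of the geometry behind this recommendation, the sketch below models two cameras of moderate FOV whose optical axes are separated by a relative angle so that their fields overlap for seaming. It ignores the baseline separation between the cameras and is not taken from the disclosure; the 75-degree figure is the value mentioned above and the 45-degree separation is an assumption.

```python
# Simplified geometric model (not from the disclosure): two cameras with moderate FOV
# whose optical axes are separated by a relative angle, leaving an overlap for seaming.
def combined_fov_deg(per_camera_fov_deg, axis_separation_deg):
    """Total horizontal FOV and the overlap available for blending, in degrees."""
    total = per_camera_fov_deg + axis_separation_deg
    overlap = per_camera_fov_deg - axis_separation_deg
    return total, overlap

total, overlap = combined_fov_deg(per_camera_fov_deg=75.0, axis_separation_deg=45.0)
print(f"combined FOV ~{total:.0f} deg with ~{overlap:.0f} deg of overlap for seaming")
# -> combined FOV ~120 deg with ~30 deg of overlap for seaming
```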
  • the third major System is the Microcontroller Control Circuits.
  • This group of chips, parts, circuits and circuit boards include one or more microprocessors, its circuit board and parts, and typically a specialized Application Specific Integrated Circuit (ASIC) which may be a separate chip or housed in one of the other chips in the microprocessor circuit board.
• the MCC performs the main functions of the invention: it receives the input from the CIS and sensors, runs the routines and programs for collecting sensor data and visual images, corrects for the macular defect of the user, and controls the display. Portions of the MCC System are controllable by the user, especially those related to the Macular Degeneration Diagnostic Program (MDDP) Subsystem.
  • This MDDP Subsystem contains the software and firmware for the patient application defect mapping program which establishes the boundaries, one or more, per eye, of the defect area, as well as the boundaries of the area of projection.
• the MCC also houses the Video Manipulation Programs (VMP), which collect the camera input and reposition the image and pixels for corrected vision display.
  • the MCC also houses the Application Program Interfaces as well as the Graphic User Interfaces (GUI) and routines.
  • the MCC also houses the controllers for all of the sensors, inputs and outputs and user control.
• the VMP may be of any number of kinds as described previously, and could be a Pixel Manipulation Scheme or vector math applied to the image taken from the real world, such as Pixel Interpolation and Simulation, Image Stretching, or another software video distorting application.
• the flat picture, as sent to the buffer by the camera, is turned into a "fisheye" or "barrel" distortion where the middle is larger and the image is squeezed at the edges.
  • the central image which is as near as possible to the deficit of the person's disease, is removed and the image is stretched and displayed.
  • the edge is not critical, and may simply be "cropped” to permit the central portion of the video to be displayed without the edges, which have been pushed out by cutting the central portion out.
  • the edges are important, like in the case of the Mixed Reality macular degeneration glasses where Phase Two distorted images must be remerged into Phase Three video images.
  • this invention teaches that one camera can be used for monoscopic image capture and display.
• this invention teaches that two cameras can be used to simulate true stereoscopic vision on the goggle/glasses display, wherein the IMD model includes factor correction for epipolar curves, guided by the epipolar geometry, so that stereo vision, generated by two or more cameras, can be employed and be displayed, and seen, as one PRI image.
• the invention uses computer-aided video images which are skewed and stretched in a matrix distortion or other similar fashion to put most or the entirety of the image onto the peripheral vision of the patient by opening up the center of the image and manipulating it to the peripheral cones of the eyes, as seen by the patient in the projected image, in order to project the video-captured images onto the peripheries of the cones in the eyes where vision is still active.
  • One of the benefits of this invention is that no invasive procedures are necessary and as the patient's macular degeneration changes the software can be adjusted so that the image is now correctly skewed.
  • the spreading and/or multi-lateral skewing of the image reflects the corrected image onto 3D or High Definition goggles and/or glasses worn by the patient.
  • the image is skewed via the FMD module to avoid projection to the area of the eye which involves the macula, but still has all the image information.
• To picture this process, think of a picture which is printed onto a stretchable and compressible substance. A hole is cut into the middle of the image and stretched open. The image compresses into the sides of the picture. Thus, all of the information of the picture is still there; it is just rearranged so that a hole is in the middle and the image is moved each way to the side, top, and bottom.
  • This "hole-cutting" is done via algorithms and computer software/firmware technology, for instance, using a technology like Matrix Distortion as above mentioned.
• Matrix Distortion of a camera and Matrix Calibration, which is the correction of the distortion, are commonly known areas of camera calibration and have been used for a long time. Oftentimes cameras display significant distortion. However, the distortion is constant, like on a matrix, and with a calibration and some remapping the distortion can be corrected. Typical distortion correction takes into account the radial and tangential factors. For the radial factor one uses the following formula: x_corrected = x(1 + k1*r^2 + k2*r^4 + k3*r^6) and y_corrected = y(1 + k1*r^2 + k2*r^4 + k3*r^6), where r is the distance of the pixel from the image center and k1, k2, k3 are the radial distortion coefficients.
• the IMD model stretches a center pixel to the points at which an individual cannot see, and compresses everything else to fit in the remaining peripheral portion of the goggles. In this fashion a "hole" is artificially cut into the image by computer and software/firmware aided manipulation, such that a pixel which was formerly in the center of an image is squeezed to the outside so that the entire image is projected around the "hole" in the center which is artificially created.
• the FMD distortion model is passed as a value to WebGL, which can be used with a renderingContext.
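The "hole-cutting" remap can also be sketched outside of WebGL. The Python fragment below is a minimal illustration assuming OpenCV and NumPy are available; the radius fraction and the simple linear radial mapping are simplifications introduced for the example and do not reproduce the patented distortion model.

```python
# A minimal sketch (not the patented implementation) of the "hole-cutting" remap:
# content near the center is pushed outward past a boundary radius and the rest of
# the image is compressed toward the edge, so no image information is lost.
import cv2
import numpy as np

def cut_hole_remap(image, hole_radius_frac=0.25):
    h, w = image.shape[:2]
    cy, cx = h / 2.0, w / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xx - cx, yy - cy
    r = np.sqrt(dx * dx + dy * dy)
    r_max = min(cx, cy)
    r_hole = hole_radius_frac * r_max

    # Output radius r is fed from source radius r_src: the output annulus
    # [r_hole, r_max] receives the full source disk [0, r_max].
    with np.errstate(divide="ignore", invalid="ignore"):
        r_src = np.where(
            r <= r_hole,
            0.0,                                      # inside the hole: collapses to the center pixel
            (r - r_hole) * r_max / (r_max - r_hole)   # rest of the image, compressed
        )
        scale = np.where(r > 0, r_src / r, 0.0)
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Usage (hypothetical file name): remapped = cut_hole_remap(cv2.imread("frame.png"), 0.3)
```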
  • the fourth major System is the Image Projection and Lenses System.
  • the IPL projector and lenses may employ such technologies for display such as wave guides, mirrors, prisms or other technologies, such as transparent rear projection film, to correctly display the image on the glasses (lenses) or on a portion of the lenses.
  • a "heads-up" type display may be used, such as a transparent shield or facemask.
  • the lenses may be one of any of a number of types of see-through displays, like Augmented Reality or Mixed Reality glasses, or can be immersive, and not transparent like Virtual Reality goggles.
• the display may be built from organic light emitting diode (OLED) technology in any of its known variants, including passive-matrix OLED, active-matrix OLED (AMOLED), transparent OLED (including transparent AMOLED), top-emitting OLED, foldable OLED, Lucius prism OLED, white OLED, and quantum dot light emitting diode (QLED) displays.
• ULED or Ultra HD displays with 3840x2160 pixel resolution, also called 4K, which is twice the resolution of Full HD and has four times the number of pixels, may also be used.
• lenses such as Corning's transparent display technology, featuring Corning® Gorilla® Glass, could be used.
  • the application of a special functional film on the thin, durable Gorilla Glass surface creates a transparent display that is acceptable for displaying real time augmented video onto the GFH lenses.
• technologies such as LG Display's N Pixel technology can assist the invention by making the pixels clearer when viewed by the eyes.
  • technologies such as retinal projection can be used, and would be housed in the GFH.
• the fifth major System is the Diagnostics Impairment Mapping (DEVI) System and tools, which include virtual simulations and tools: a user-manipulated method of viewing a grid and using hand gesture sensors, or tools like fiducial markers or a connected mouse, to identify the area and boundaries where no vision exists, so that this mapping can be obtained from the real "analog" world and transferred into digital coordinates for correction by the Video Manipulation Program.
• the user would select the "Diagnostics" setting, and an Amsler grid would appear on the lenses one at a time; while one lens was being evaluated, the other lens would be made opaque so as not to let the user be distracted by "see through."
• the user would trace where the edges of the border of sight are, which is then transposed by the MCC into specific mathematical coordinates that create a border where the image is to be removed and replaced elsewhere.
  • the Diagnostic Test could be employed as often as the user desires to refine and re-correct for the advance of the disease.
  • the display screen on the GFH is curved slightly, so as to reduce the reflections of ambient light from the display, thus improving image contrast, and focusing more of the image on the eye peripheries.
• the slight curvature also reduces the optical distortion (keystone) in the screen image geometry, especially farther away from the central portion of the display, where little or no image is displayed in the case of macular degeneration.
  • normal corrective glasses/lenses are used and a film, like 3M translucent rear projection film is used and simply affixed to the corrective lenses, or the corrective glasses are affixed to the OLED material so that the patient has both his correction and the pixel manipulation in the same set of lenses.
  • the correction for typical non-retinal problems of the eye like astigmatisms, myopia, hyperopia, or presbyopia is done in the MCC.
• Pixel corrections can be combined with the pixel manipulation techniques so that the displayed video image corrects and compensates for that person's other native visual impairments, by using algorithms that adjust for the myopia or hyperopia through techniques like increased focus, increased contrast, and enlargement of the video, with known techniques like fixed parallax barriers, lenticular lenses, pre-filtered light display, switchable liquid crystal barrier or display, multilayer display, diopter adjustment with independent eye focus, or pre-filtered light field display, and the deployment of self-illuminating pixel technologies in the display and specialized lenses on the camera, to correct for the non-macular problems of the eye: astigmatisms, myopia, hyperopia, or presbyopia.
  • the invention replaces corrective optics to correct vision, with computations within the software and other aids.
• alternatively, the camera lenses can themselves carry the correction that is needed or that works.
  • the image correction is made in the software, firmware, or hardware, so that the device corrects for both the loss of sight, like in macular degeneration, and also for problems like myopia.
  • a person wearing the GFH system would obtain two types of correction in the same display, (i) one for the macular degeneration, and (ii) another for the nearsightedness or farsightedness.
  • the invention teaches that by pre-filtering, the video on the display computes a pre-filtered light field, or uses other similar technologies, which results in a desired projection of the displayed image on the retina of a user or patient which corrects for their exact eye problem.
  • the correction which is computed into the video can be adjusted on the fly, or in real time by the user via a fiducial marker, D Pad, or Control Pad ("focus controller").
  • An adjustment on the control pad would automatically correspond with a change in the filtering so that a more precise image is displayed on the lens and on the retina of the patient's eye.
  • This correction can be done for each eye, so that the display on one eye is different than the display on the other eye and each eye display can be adjusted independently by the focus controller.
  • the problem of scanning or eye-tracking is solved by having the cameras needed for the correction on the smart contact lenses, which then permits the cameras input and displayed images to match that of the movement of the eyes.
  • the augmented video may be displayed on the lenses and include the central 10 to 60 degrees FOV, for example, or any other desired FOV.
  • This displayed video would encompass Phases One and Two.
• the stitching techniques would be employed on the "edges" of Phase Two, the augmented video; here, in this example, beginning at 60 degrees FOV, another 20 degrees of FOV, for example, would be projected/displayed to re-interpolate and phase back into real-world, non-adjusted video.
• Pixel mapping techniques can help retain image edge features better and produce higher accuracy of integration of a real-world image projection.
  • a user would have his or her central most vision augmented via the projected video, while the video further from the central vision is reintegrated into the real world non-adjusted video, and then there is no video on the outermost peripheral areas where actual vision is used.
  • the data comprising the visual model may be filtered or transformed to eliminate noise or other undesirable effects within the data prior to the boundary (or boundaries) being established.
  • This process may be performed automatically using a set of predefined operations, or may be performed under the control of an operator of the model controller 14.
  • the data may be filtered using one or more morphological transformations. Possible morphological transformations or operations may include, but are not limited to: erosion, dilation, opening, morphological gradient, top hat, and/or black hat.
  • An initial boundary may be established using pre-filtered data and a secondary boundary may be established after the data has been filtered or transformed. The initial and secondary boundary may be compared automatically or by the operator to optimize the boundary used.
• Boolean operations may be used to filter the visual model and/or to combine boundaries.
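As an illustration of this filtering step, the sketch below applies two of the named morphological operations (an opening followed by a closing) to a binary defect mask using OpenCV and then extracts the outer boundary. The kernel size, the particular sequence of operations, and the toy mask are assumptions for the example, not requirements of the system.

```python
# Hedged sketch: cleaning a binary visual-defect mask with standard OpenCV
# morphological operations before a boundary is extracted.
import cv2
import numpy as np

def clean_defect_mask(defect_mask, kernel_size=5):
    """defect_mask: uint8 array, 255 where the patient reports no/poor vision."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Opening (erosion then dilation) removes isolated noisy cells;
    # closing (dilation then erosion) fills small holes inside the defect area.
    opened = cv2.morphologyEx(defect_mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

def defect_boundary(mask):
    """Return the outer contour(s) of the filtered defect area."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours

noisy = np.zeros((200, 200), np.uint8)
cv2.circle(noisy, (100, 100), 40, 255, -1)      # the defect region
noisy[30, 30] = 255                              # a stray noisy sample to be removed
boundary = defect_boundary(clean_defect_mask(noisy))
```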
• the pre-filtering can also include the pixel manipulation which, by using a parallax filter or other filter, permits only those pixels whose rays are at such an angle as to miss the area of defect to be projected.
  • the threshold is adjustable, either at the model controller 14 or at the display controller 16. If performed at the model controller 14, this would provide control to the operator. In adjusting the threshold, the operator could optimize the boundary. If performed at the display controller 16, control would be provided to the patient. This would allow the patient to adjust the boundary to optimize the boundary for current conditions.
  • a fiducial marker is connected to the diagnostic system resident in the GFH.
  • a fiducial marker is an object placed in the field of view of an imaging system which appears in the image produced, for use as a point of reference or a measure merging the analog world with the digital world. Its applications are often seen in commercial products like virtual games. It may be either something placed into or on the imaging subject, or a mark or set of marks, as is preferable in this instance, in the reticle of an optical instrument, which is the measured camera and display.
• This diagnostic system is combined with the pixel manipulation system such that the input of the diagnostic system causes the pixels identified by the user as non-sighted or defective to be moved to a different location, as is more fully explained below.
• An Amsler Grid has been included in the software to be projected onto the lenses.
  • a sample Amsler grid of a person with normal vision and a sample Amsler grid of a person with AMD are shown in FIG. 22.
• the fiducial marker, mouse, or other similar device is connected to the software so that a location on the visual grid the user sees corresponds to the virtual grid resident in the software. The user then looks through the glasses at the grid and utilizes the fiducial marker to identify the exact edges of the non-sighted space, which is then converted or identified by the fiducial marker software or firmware as the space from which pixels and images must be moved and manipulated.
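The conversion from a traced Amsler-grid outline to digital coordinates might be sketched as follows. The grid size, display resolution, and the polygon-fill mask construction are illustrative assumptions, not the disclosed firmware.

```python
# Illustrative only: turning the points the user traces on the projected Amsler grid
# (via fiducial marker or mouse) into a defect mask in display-pixel coordinates.
import cv2
import numpy as np

GRID_CELLS = 20                       # assumed Amsler grid of 20 x 20 cells
DISPLAY_W, DISPLAY_H = 1280, 1280     # assumed display resolution

def traced_points_to_mask(grid_points):
    """grid_points: ordered list of (col, row) grid coordinates traced around the
    non-sighted area. Returns a uint8 mask in display pixels (255 = defect)."""
    px = [(int(c * DISPLAY_W / GRID_CELLS), int(r * DISPLAY_H / GRID_CELLS))
          for c, r in grid_points]
    mask = np.zeros((DISPLAY_H, DISPLAY_W), np.uint8)
    cv2.fillPoly(mask, [np.array(px, dtype=np.int32)], 255)
    return mask

# Example trace roughly around a central scotoma
trace = [(8, 8), (12, 8), (13, 10), (12, 12), (8, 12), (7, 10)]
defect_mask = traced_points_to_mask(trace)
```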
  • the output of a wearable FOV test is used.
• the embodiment may use an automated program embedded in the wearable HMD/HUD display device 50, 60.
  • An initial start-up and mapping routine would be performed by observation, such as looking at an Amsler grid or moving objects to check the UFOV, or both, utilizing an existing FOV map to modify and optimize.
• Eye tracking technology may be used to ensure more accurate FOV mapping and to validate fixation. Since eye movements can be as fast as 600 deg/s, the smallest time constant for saccades is around 50 ms, and the smallest saccades can be completed in 60 milliseconds, it is possible for the "reverse cameras," which are a part of the CIS System looking at the eyes, to sample eye movements at a rate of 1 kHz, which allows sufficient precision of eye tracking to let the system know how to modify the output in near real time for epipolar geometry and line-of-sight offsets. This result is immediately usable directly as the digital input of the UFOV for the Matrix Mapping Technology.
  • the boundary 32 may be adjusted or replaced with a simpler form (boundary 32', see FIG 6).
  • the boundary 32 may be replaced with a boundary established as a function of one or more predesigned shapes and the visual model.
• the model controller 14 may utilize a set of predefined shapes, for example rectangles, triangles, or ovals, that are sized to include the affected area.
  • the model controller 14 may select one or more shapes automatically, or the process may be performed by, or with the assistance of, the operator.
  • the shape of the defect or damaged area 24' may be more complex.
  • a complex boundary may be established using the threshold process identified above, or by some other method.
  • the initial boundary may be replaced automatically, or with operator input using one or more of the predefined shapes, sized to cover the defect or with the results of the user using the fiducial marker.
  • two shapes 34A, 34B are used.
  • the boundary may be formed by the outer edge of the joined shapes.
  • the image data inside the boundary 32 is shifted outside of the boundary 32.
  • a center point 36 is established.
  • the center point 36 may be an actual center of the boundary if the shape of the boundary is regular, or it may be defined by finding or estimating the center of the shape defined by the boundary or the center point is ignored and the other items as described above are used to determine how a pixel is moved.
  • image data along a plurality of rays 37 starting at the center point and extending outward is shifted outside of the boundary. It should be noted that in the above examples, the areas inside the boundary or boundaries are defective. However, in some situations, for example, where peripheral vision is affected, the area inside a boundary may be associated with good vision and the areas outside of a boundary may be associated with poor vision.
  • the retinal map includes a series of data points which overlay the digital model.
  • the data points are laid out in a grid in a regular pattern approximating the Amsler Grid.
  • Each data point is defined by a set of X, Y coordinates relative to the image data.
• each data point is assigned a set of coordinate transformation values (Δx, Δy), which is used to transform the image data.
• Each data point lies on a single ray, made up of one or more pixels, which extends outward from the center point 36.
• the associated ray is found and a set of coordinate transformation values (Δx, Δy) is established based on a set of predetermined rules.
• the coordinate transformation values (Δx, Δy) are used as coefficient values in the transformation equations below.
  • visual information in the image from the camera is radially shifted from a central point.
  • the image data from the center point 36 to the edge of the image 38 is compressed (in the corrected image) from the boundary 32 to the edge of the image 38.
• the coordinate transformation values (Δx, Δy) for any data point lying on the ray may be calculated based on the length of the distance from the center point 36 to the boundary 32, and the length from the center point 36 to the respective edge of the image 38. This works better in an immersive environment where the concern for the moved "edges" is non-existent.
• the coordinate transformation value (Δx, Δy) is calculated such that the visual information is disproportionally shifted from the center point.
  • visual information from the center point 36 to the boundary 32 may be shifted to a segment of the ray defined by the boundary 32 and a point 32' .
  • the length between the boundary 32 and point 32' may be equal to or different than the length between the center point and the boundary 32.
• the visual information between the boundary and the edge of the image 38 may be compressed between point 32' and the edge of the image 38. Not only can the visual information be shifted out towards the periphery; the reverse can also be accomplished, and the visual information can be shifted inward as well.
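One simple way to compute coordinate transformation values for points lying inside the boundary is sketched below. It implements only a proportional variant along a single ray and treats the boundary and image edge as circles around the center point; the disproportional and inward-shifting variants described above would change the mapping of radii. The function name and units are assumptions for the example.

```python
# A sketch, under simplifying assumptions, of deriving (dx, dy) for a data point on a
# ray from the distances center-to-boundary and center-to-edge: points inside the
# boundary are pushed proportionally into the region outside the boundary.
import math

def transform_values(point, center, r_boundary, r_edge):
    """Return (dx, dy) that moves `point` (inside the boundary) along its own ray
    into the annulus between the boundary and the image edge."""
    px, py = point
    cx, cy = center
    dx, dy = px - cx, py - cy
    r = math.hypot(dx, dy)
    if r == 0 or r >= r_boundary:
        return (0.0, 0.0)                    # only points inside the boundary are moved
    # Proportional mapping of radii: [0, r_boundary] -> [r_boundary, r_edge]
    r_new = r_boundary + (r / r_boundary) * (r_edge - r_boundary)
    scale = r_new / r
    return ((scale - 1.0) * dx, (scale - 1.0) * dy)

# Data point 10 px right of center, boundary radius 50 px, image edge at 200 px
print(transform_values((110, 100), (100, 100), r_boundary=50, r_edge=200))
# -> (70.0, 0.0): the point is shifted 70 px further out along its ray
```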
  • the retinal map is stored in the database 12 and transferred to the display controller 16. In use, the retinal map is then used to transform the image(s) received from the camera and generate the corrected image(s). The corrected image(s) may then be displayed in real-time via the display unit 18.
  • the visual information is transformed (or moved) at each data point.
  • the visual information between the data points may be transformed using a spline function, e.g., a B spline function, to interpolate the visual information between the data points.
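A sketch of this interpolation step, assuming SciPy is available and using a cubic B-spline surface over a regular grid of data points, is shown below; the grid dimensions and the placeholder shift values are invented for the example.

```python
# Hedged sketch: interpolating per-data-point shifts between grid points with a spline
# so that every pixel receives a smooth displacement. SciPy's RectBivariateSpline is
# used for illustration; the disclosure only requires "a spline function, e.g., a B spline".
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Assumed 21 x 21 grid of data points over a 1280 x 1280 image, with the x-shift
# (in pixels) known at each data point; the y-shift would be handled the same way.
grid = np.linspace(0, 1280, 21)
dx_at_points = np.random.default_rng(0).normal(0, 5, size=(21, 21))  # placeholder values

spline_dx = RectBivariateSpline(grid, grid, dx_at_points, kx=3, ky=3)  # cubic B-spline

# Dense displacement field: an x-shift for every pixel of the display
coords = np.arange(1280)
dx_full = spline_dx(coords, coords)      # shape (1280, 1280)
```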
  • the pixels relating to the data portion of the image which is moved are reduced to smaller pixels, such that the moved pixels and the preexisting pixels occupy the same space on the display.
  • the removed and replaced pixels may be interlaced into a video frame consisting of two sub-fields taken in sequence, each sequentially scanned at odd then even lines of the image sensor.
  • the pixels may be manipulated by fixed parallax barriers, pre-filtered light display, or switchable liquid crystal barrier or display.
  • the parallax barrier will cancel out the pixels which have an undesirable angle and permit the ray bearing pixels which do have the correct angle of projection onto the retina to pass.
  • the other technologies will only let certain rays through to the retina, which can be used for the cut-out and repositioning of the pixels.
• the prescription for the user is included in each camera lens, so that the correction is done at the lens stage with lenticular lenses, progressive lenses, bifocal or trifocal lenses, and the like, before or at the same time as the other modifications identified in this patent.
• the display controller, in generating the corrected image, shifts visual information within the corrected image from a first area inside the boundary to a second area outside of the boundary as a function of the series of data points.
  • the coordinate transformation values are used to shift image data that exists inside the boundary to an area outside of the boundary.
  • the second area is defined as any area in the image that is outside of the boundary.
  • the second area may be defined based on the data in the visual model. For example, a second boundary may be established as a function of the data in the visual model. In one example, the second boundary may be established based on the visual model that meets predefined criteria.
• an area within the visual model may be established from cells 28 in the grid 30 that have a value that meets predefined criteria.
  • the second boundary may encompass an area of the grid 30 in which the cells 28 have a value of 3 (or some other threshold) or less.
  • the information inside the first boundary 32 is shifted (proportionally or disproportionally) into the area defined by the second boundary. Examples of an area defined by a first area 32A and an area defined by a second area 32C are shown in FIGS. 4C and 4D. In both examples, visual information in one of the areas 32A or 32C may be shifted towards or into the other one of the areas 32A, 32C.
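A minimal sketch of selecting such a second area from the grid of cells is shown below. The grid values, the meaning of the threshold, and whether low values denote better or worse vision are assumptions for illustration only.

```python
# Sketch only: deriving a "second boundary" region from the grid of visual-model cells
# by keeping cells whose score meets a threshold (value <= 3, as in the example above).
import numpy as np

grid_values = np.array([       # hypothetical scores; assume lower = better vision here
    [5, 5, 4, 4, 4],
    [5, 2, 1, 2, 4],
    [4, 1, 0, 1, 3],
    [4, 2, 1, 2, 3],
    [4, 4, 3, 3, 3],
])

THRESHOLD = 3
second_area_cells = grid_values <= THRESHOLD      # boolean mask of candidate cells
rows, cols = np.where(second_area_cells)
print(f"{rows.size} cells form the area into which visual information may be shifted")
```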
  • the second boundary in FIG. 4C has been replaced with a simpler shape/form in FIG. 4D.
  • the display controller 16 and the display unit 18 may be implemented in a suitable user wearable device, such as smart glasses or head mounted displays (HMDs).
• these hardware wearable platforms all contain wearable glasses that contain one or two forward mounted cameras, an onboard microprocessor, and display technologies for viewing by the eye.
  • these are usually battery powered, as well as able to plug into a PC in order to upload information via a USB cable etc. and/or for charging.
  • This may also include HUD (Heads Up Displays), for example, the offering from Meta can be worn over a patient's existing glasses with prescription lenses 62 in order to facilitate moving between the two modes of normal vision and the augmented IDM (Image Distortion Map) vision.
• a virtual retina display may be used to project photons directly onto the retina, or a "smart" contact lens worn on the eye can project the image.
  • Any suitable method or device to present the correction image or images to or onto the eye(s) may be used.
  • the image or images presented to the patient may be otherwise opaque such that the outside world is not visible.
  • the display controller 16 and the display unit 18 are embodied in an exemplary head mountable display (HMD) device 50 that is worn by the patient.
  • the HMD device 50 includes a set of wearable glasses 52 that contains one or two forward mounted cameras 54.
  • the display controller 16 may be mounted to an HMD frame 58 and include an onboard microprocessor.
  • the display unit 18 includes a suitable display technology for viewing by the eye.
• One or more input or control buttons may be provided that work in conjunction with suitable menus and software controls displayed on the display unit 18 to allow the patient/user to change options.
  • the HMD device 50 may be battery powered and may include a USB cable or suitable port 62 to connect to, e.g., a computer to transfer data and software and/or for charging the battery.
  • the display controller 16 and the display unit 18 may also be embodied in a Heads Up Displays (HUD) display device 60, for example, the offering from Meta Vision, that can be worn over a patient's existing glasses with prescription lenses in order to facilitate moving between the two modes of normal vision and augmented IMD vision.
• the HUD display device 60 is head mountable and may include different display technology, such as a separate LCD or LED type of display.
  • the HUD display device 60 may embed a display on the actual lenses of the glasses themselves that overlay the image to view the augmented display in conjunction with the outside world.
  • a method M10 according to one embodiment of the present invention is provided.
  • a visual model associated with a patient is established, by the model controller 14 and stored in the database 12.
  • the visual model includes data related to a quality of the patient's vision.
  • at least one boundary is established, by the model controller 14, as a function of data associated with the visual model. At least one boundary is indicative of an area to be corrected within the patient's vision.
  • the model controller 14 establishes a retinal map as a function of the boundary and stores the retinal map in the database 12.
  • the database may be incorporated into a semiconductor chip, which may also be existing space in a camera chip.
• In a fourth step S40, an image from one or more cameras associated with the patient is received by a display controller 16. Corrections based on the retinal map are applied to the image and a corrected image is generated in a fifth step S50. In a sixth step S60, the corrected image is received at the display unit 18 and presented to the eye of the patient.
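Steps S40 through S60 amount to a per-frame loop in which a stored displacement field is applied to each camera frame before presentation. The following Python sketch is a schematic stand-in: the DisplayController class, the dense dx/dy displacement fields, and the nearest-pixel lookup are assumptions, not the system's actual remapping code.

```python
# Schematic per-frame loop for S40 (receive frame), S50 (apply retinal map), S60 (present).
import numpy as np

class DisplayController:
    def __init__(self, retinal_map_dx, retinal_map_dy):
        # Dense displacement fields (pixels), e.g., interpolated from the data points
        self.dx = retinal_map_dx
        self.dy = retinal_map_dy

    def correct(self, frame):
        h, w = frame.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        src_x = np.clip(xx + self.dx, 0, w - 1).astype(int)
        src_y = np.clip(yy + self.dy, 0, h - 1).astype(int)
        return frame[src_y, src_x]               # S50: corrected image

def run_once(camera_frame, controller, display):
    corrected = controller.correct(camera_frame)  # S40 + S50
    display(corrected)                            # S60: present to the eye

frame = np.zeros((480, 640, 3), np.uint8)
ctrl = DisplayController(np.zeros((480, 640), int), np.zeros((480, 640), int))
run_once(frame, ctrl, display=lambda img: None)
```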
• the system 10 and method M10, in general, remap portions of the image(s) captured by the camera(s) which would be viewed by the affected portions of the patient's eye(s) to the periphery or unaffected portions of the patient's vision, or alternatively to another portion of the patient's retina. With this mapping correctly executed, the patient's brain adapts quickly and effective central (or peripheral) vision is mimicked. This is accomplished with the forward-looking cameras as the sensors that capture the real-world image.
  • the system 10 and method M10 of the present invention shift the pixels to form a corrected image or series of images which are displayed on the micro-displays on a head mounted device, such as readily available augmented reality and virtual reality glasses.
• the display device utilized may be implemented in head mounted devices, suitable examples of which are those offered by companies such as Sony, Epson, Facebook, Google, etc., utilizing a variety of display technologies, such as LED, LCD, OLED, Photon Retinal Display, Virtual Retinal Displays, and Heads Up Displays.
• the initial mapping of the UFOV (Usable Field of Vision) proceeds as described below.
  • the present invention is not limited to mapping from a center area to a peripheral area. In some cases, peripheral vision is affected and the mapping may be from the peripheral area to the center. There are a multitude of methods to accomplish this task. In all cases the initial examination, mapping and calibration must be converted to a digital file. This digital file is then used to construct the boundaries of the UFOV. The UFOV is treated as a sharp outline where peripheral or useable vision is clear, and not degraded.
  • this boundary may be a result of evaluation and determination of the gradation of the partial vision, then interpreted to construct the UFOV boundary.
  • This UFOV border is then utilized as the baseline for the FMA (Image Mapping Algorithm) to determine the area where the effective central vision can be mapped into, along with the existing effective peripheral vision.
  • the FOV test may be administered by a trained medical professional such as an optometrist or ophthalmologist in the doctor's office.
• an automated FOV test may be self-administered with the proper digital technology.
  • a trained professional can manually administer an FOV mapping test to generate the UFOV. Any, and all, of these cases can be utilized to generate the UFOV as outlined.
  • the wearable GFH is placed on the patient's head and would be put into "Diagnostic" mode for FOV mapping.
• the wearable GFH is connected (via external cable or wireless communication mode) to a patient feedback device, such as a PC with a mouse, a tablet, or a mobile phone.
• in Step S80, voice recognition technology may be used, where the patient gives verbal feedback to the system, which recognizes commands, cues, and instructions and accomplishes the FOV mapping automatically.
  • the FOV mapping test is administered first for the left eye (or right eye) through use of visually moving along an Amsler grid to see where images are warped or straight.
• in Steps S100 and S110, a flashing object is generated to appear at different points in the patient's vision in order to determine visual acuity through the feedback device. This is performed at different intensity levels to verify the level of degradation of vision. See FIGS. 19 and 20.
• an object is moved through a series of sequences and, with feedback, it is determined when the object changes from clear to blurry to unviewable, effectively creating gradations of the sight map. See FIG. 21.
  • a constantly expanding sphere is displayed until the edges become clearly visible to the patient.
  • the edges are manipulated through the feedback device until the edge of the UFOV is determined.
• the latter two cases offer the advantage of a faster approach to FOV mapping for later utilization with the wearable. With a quicker mapping procedure, the system is less likely to cause fixation errors due to lack of concentration from the patient. This also offers quicker calibration for more frequent tweaks to the UFOV map to optimize performance.
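The expanding-object approach can be illustrated with a small loop: a circle grows along each tested direction until the patient reports seeing its edge, yielding one UFOV radius sample per direction. Everything in this sketch (the callback, step size, normalized units, simulated scotoma) is hypothetical and not taken from the disclosure.

```python
# Hypothetical sketch of the "expanding sphere/circle" calibration described above.
import math

def expanding_circle_calibration(patient_sees_edge, directions=8, max_radius=1.0, step=0.02):
    """patient_sees_edge(angle_rad, radius) -> bool, True once the edge is visible.
    Returns the UFOV radius found per direction (normalized 0..1), keyed by degrees."""
    ufov = {}
    for i in range(directions):
        angle = 2 * math.pi * i / directions
        r = 0.0
        while r < max_radius and not patient_sees_edge(angle, r):
            r += step
        ufov[round(math.degrees(angle))] = r
    return ufov

# Simulated patient whose central scotoma has a radius of 0.3 in normalized units
ufov_map = expanding_circle_calibration(lambda angle, r: r >= 0.3)
print(ufov_map)
```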
  • the further advantage that can be realized with the patient's ability to manipulate the FOV edge is to better personalize the calibration to their particular affliction (Step S120).
• the Digital FOV map is then generated (Step S160).
• the auto-mapping and Digital FOV map can be created using voice recognition technology where the patient gives verbal feedback to the system, which recognizes commands, cues, and instructions and accomplishes the FOV mapping automatically.
• This invention teaches the use of one or more cameras to capture the approximate line of sight of the user and display a corrected, pixel-manipulated version of the real world onto see-through glasses or lenses through which the user looks.
  • software is used to realign the picture or video so that it most closely approximates the actual line of sight of the eyes.
• Smart Contact Lenses are worn with the cameras placed in the center of the lenses.
• software is used for the epipolar geometry correction, so that the image is corrected for when the eye is looking at long distances versus looking at something close.
  • a camera looking at the eyes or one eye tracks the position of the eye and sends information to the control subsystem.
  • smart contact lenses are used in connection with glasses.
  • the smart contact lenses (Fig. 23, 26) have the camera placed in the area where the vision has been impaired or is non-existent.
• the image which is to be displayed on the lenses has the same or a nearly similar aspect as the rest of the normal vision because the cameras move with each eyeball and, when projected with a corrected image, can approximate the real-world vision.
  • more than two cameras may be used.
  • the two or more cameras may be used to create stereoscopic vision or to simply project the same corrected image to both eyes.
• the reason that more than one camera per eye may be used is because each camera introduces its own distortion, and the larger the FOV that the camera captures, the more distortion.
• less distortion may be introduced in the example of one corrected image displayed for both eyes, captured by two cameras that together create an entire FOV ranging from less than 100 degrees to over 200 degrees. This is because it is easier to use simple existing programs for "blending" or "seaming" the images from two cameras together than to use one camera that must originally capture an image spanning up to 220 degrees FOV and then correct for the lens distortion.
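A minimal sketch of this "blending" or "seaming" of two overlapping camera frames is given below. It only linearly cross-fades the shared strip of columns and omits the warping to a common projection that a real stitching program would perform; the frame sizes and overlap width are assumptions for the example.

```python
# Minimal sketch (not the product pipeline): seam two frames whose rightmost/leftmost
# columns show the same scene region, by linearly blending the shared strip.
import numpy as np

def seam_two_frames(left, right, overlap_px):
    """left/right: HxWx3 uint8 frames sharing `overlap_px` columns of scene content."""
    h, w = left.shape[:2]
    alpha = np.linspace(1.0, 0.0, overlap_px)[None, :, None]      # blend weights
    blended = (left[:, w - overlap_px:] * alpha +
               right[:, :overlap_px] * (1.0 - alpha)).astype(np.uint8)
    return np.hstack([left[:, :w - overlap_px], blended, right[:, overlap_px:]])

wide = seam_two_frames(np.zeros((480, 640, 3), np.uint8),
                       np.zeros((480, 640, 3), np.uint8),
                       overlap_px=120)
print(wide.shape)   # (480, 1160, 3): two 640-wide frames joined over a 120 px seam
```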
  • This method may also be employed with the method described below for the employment of smart contact lenses, where the smart contact lenses may use one camera for a corrected display to both eyes, or may utilize one camera for each eye for a dual corrected display, or more than one camera for each eye/contact lens for a display to each eye or to both eyes.
  • the invention teaches that software/firmware can be used to correct the projected image for eye view aspect ratio, meaning to make the projected image look as though it was captured in the line of sight of the eyes.
  • the use of smart contact lenses with camera(s) placed in the central vision non-sighted portion of the patient's vision also corrects the displayed image for triangulation and Epipolar Geometry so that a mono or stereoscopic image can be accurately displayed on the glasses/lenses or directly into the retina and be in aspect with the patient's own vision.
• the image of the real world is captured, then modified in accordance with the corrective modification software/hardware, and is then displayed on the glasses or a portion of the Field of Vision of the glasses. This can be done on one lens or on both lenses. In this fashion, the user is looking at the real-world vision through the glasses while, simultaneously, an augmented, manipulated, and corrected (for that patient/user) version is also displayed onto a portion of the glasses or lenses, where only the portion of the Field of View which needs to be adjusted is modified.
• the goal of the new inventions in this patent is to ensure that there remains some peripheral vision where real-world images are reintroduced to the patient's FOV, which is unmodified looking through the glasses and around the glasses/lenses, so that a person can use this peripheral vision to avoid hazards, ensure near navigation, and be able to manage steps or other obstacles or see hazards.
  • the corrected display onto the glasses, lenses or retina can be accomplished with glasses or lenses using such technology as transparent OLED material, or such as Apple's Retina® HiDPI mode display, where the user interface image is doubled in width and height to compensate for the smaller pixels.
• the word pixels also means a subpart of an image and light-emitted rays of information which are to be broadcast to the eye and retina.
  • see-through technologies which project opaque images via the use of wave guided images upon lenses, or the use of mirrors to project an image upon clear lenses, or technology such as clear rear projection film affixed to a person's prescription lenses are also suitable.
  • technologies which project images directly into the retina can also be employed. The goal of all of this is to remove the image from the non-sighted portion of the patient's vision within the damaged macula, as shown on Fig.
• the augmented video is the video which has had the pixels manipulated to show more FOV information than would otherwise exist in the real world.
  • the augmented video is merged with real world visual information to create a "mixed reality" display, so that the patient sees augmented video with the manipulated images on the display of the glasses, lenses, or retina, which is then slowly merged back into a real world video matched as closely as possible with the real world unmodified vision of the patient, all of which combine in the mind to create one homogeneous corrected image.
  • the glasses or lenses are not used and the image is displayed upon smart contact lenses which receive the video from a remote source which has received the video, manipulated the image and re-projected the modified image onto the smart contact lenses for the patient to see.
  • the lenses such as Wave Guide projected lenses, Mirror projected lenses, transparent OLED lenses, or film applied to lenses, such as 3M reverse projection transparent film, upon which the video or images are to be displayed may be glued or similarly affixed to the patient's corrective lenses, such that the patient sees both the prescription corrected real world images along with the video projected augmented images, all of which combine in the mind to create one homogeneous image.
• pixel algorithms are used at the outer boundary of the projected FOV to intersperse augmented visual information by skipping some, but not all, pixels, permitting real-world information to be viewed through the see-through glasses or lenses; a merging "mixed reality" effect is created which merges the real-world images seen by the eye with the augmented video.
• the prescriptive corrective lenses may be worn together with the "mixed reality" see-through lenses, without the same being glued or directly affixed. In this case the corrective lenses would have a mechanism to "snap in" or otherwise hold the corrective lenses within a close proximity to the augmented "mixed reality" lenses.
  • contact lenses upon which augmented images can be viewed can be used together with the patient's own prescription glasses and/or lenses.
• this manipulated video of the real world would be displayed on see-through glasses, an improvement over the enclosed goggles which previously existed, in order to merge manipulated video information with real-world visuals.
  • the model controller is further configured to establish a border somewhere in the FOV as a function of data associated with the augmented visual model.
  • the boundary is indicative of an area to be corrected within the patient's vision, wherein the area to be corrected includes more visual information than would originally exist in that same FOV in the real world.
• the image or pixels from the area where the patient cannot see are included in the FOV where the patient can see.
• the pixels are the same size but are managed pixel by pixel to include additional visual information.
  • the model controller is further configured to establish a retinal map as a function of the boundary and to store the retinal map in the database.
  • the display controller is configured to receive and to store the retinal map.
• the display controller is further configured to receive an image from a camera or cameras associated with the patient, to apply corrections to the image based on the retinal map, and to responsively generate a corrected image.
• the display unit is coupled to the display controller and is configured to receive the corrected image and present it to the eye of the patient.
  • a method includes the steps of establishing, by a model controller, a visual model associated with a patient and storing the visual model in the database.
  • the visual model includes data related to a quality of the patient's vision.
  • the method further includes the step of establishing, by the model controller, a boundary as a function of data associated with the visual model, the boundary being indicative of an area to be corrected within the patient's vision, i.e., the corrected FOV into which the additional pixels removed from the non-visual area of the patient's FOV are added.
  • the method also includes the steps of establishing, by the model controller, a retinal map as a function of the boundary and storing the retinal map in the database, receiving, at a display controller, an image from a camera or cameras associated with the patient, applying corrections to the image based on the retinal map, and responsively generating a corrected image. Further, the method includes the steps of receiving, at a display unit, the corrected image and presenting the corrected image to the eye of the patient (a minimal pipeline sketch appears after this list).
  • one or more non-transitory computer-readable storage media have computer-executable instructions embodied thereon.
  • when executed by at least one processor, the computer-executable instructions cause the at least one processor to establish, by a model controller, a visual model associated with a patient and to store the visual model in the database.
  • the visual model includes data related to a quality of the patient's vision.
  • a boundary is established as a function of data associated with the visual model, the boundary being indicative of an area to be corrected within the patient's vision.
  • a retinal map is established as a function of the boundary.
  • An image from a camera or cameras associated with the patient is received at a display controller. Corrections are applied to the image based on the retinal map, and a corrected image is generated. The corrected image is presented to the eye of the patient.
  • the present invention provides systems, and methods to stretch, skew, and manipulate the image being projected on the eye to avoid the vision impaired or unsighted portions of the macula, and be directed to the remaining central vision, sighted macular vision, and the near peripheral vision.
  • the findings of the inventors are that the displaced pixels or images should be removed but replaced as near to the original position as possible.
  • the Central Vision area typically is said to comprise the central 5 degrees FOV of the eye, with the Paracentral area being the most central 8 degrees of the eye's vision and the Macular Vision being the central 18 degrees of the eye's vision.
  • the eye defect lies within these areas.
  • the Near Peripheral area of the eye comprises the next 30 degrees of the FOV of the eye. If possible, since its receptors are the most similar to those of the central portion of the eye, the displacement of the pixels or image should be to the nearest possible Near Peripheral field of vision of the eye.
  • the whole foveal area including foveal pit, foveal slope, parafovea, and perifovea is considered the macula of the human eye. This is what is destroyed with macular degeneration. Familiar to ophthalmologists is a yellow pigmentation to the macular area known as the macula lutea. The macula lutea is thought to act as a short wavelength filter, additional to that provided by the lens.
  • the fovea is the most essential part of the retina for human vision and contains short-wavelength receptor cells, medium-wavelength receptor cells, and long-wavelength receptor cells.
  • approximately the central 10 degrees of the eye's FOV projects onto approximately the central 3 mm of retina, or a region within a 1.5 mm radius of the fovea centralis positioned at 0° eccentricity. This is a slightly larger area than the region that contains the yellow macular pigments, which is 4-6° in diameter (macula lutea), or the Macula.
  • the foveola approximately coincides with the area of peak cone density in the photoreceptor layer, and in general is centered within a small region devoid of retinal vessels - the 'foveal avascular zone' (FAZ).
  • the repositioning of pixels or images must be concentrated onto the remaining non-defect areas of this region, as much as possible, as the cones in this region are so densely packed that they look almost like rods.
  • the relationship to the cellular structure and ganglia is on a more nearly one-to-one basis than in any other area of the eye, so that simply making a "hole" bigger, if it ignores sighted portions of the fovea centralis, makes a far less crisp picture.
  • Figure 25 depicts how this is to be accomplished. In this way the remaining sighted portions of the fovea centralis and macula are used to project the modified image to make the best use of this specialized region of the eye.
  • an important aspect of this invention is to displace the pixels or image to as similar an area of the eye as possible, so that perception of the image by the eye is projected onto an area which is as close to the same as the damaged area, in terms of rods and cones, as possible.
  • the image must be moved to the next best place which is the Near Periphery and the retina's peripheral receptors.
  • the image can be skewed to immediately adjacent portions of the retina in an irregular fashion that best approximates the area of defect. In this way, the entire image is projected on the functioning retinal receptors, and any involvement of the macula is avoided.
  • the systems and methods create a distortion map of the entire image and project it onto the periphery of the eye, while avoiding the macula. This can be done by the use of computer aided 90-degree 3D or similar High Definition goggles or glasses, or by photon projection with a virtual retina display of the image directly onto the retina of the eye.
  • the method and manner of the skewed projection rely on external lenses with up to 2 million pixels, a resolution otherwise seen only on ultra-high-definition TVs and tablet computers, which provide the resolution needed to put the entire image on the peripheral retina receptors in sufficient detail to be analyzed by the optic nerve and brain.
  • the goggles and/or glasses could be used to house a technology like virtual retina display, retina scan display projection, and/or a retinal projector technology which all use photon on retina projection, which in this case would be modulated by the IDM (Image Distortion Map) to the person's specific Retinal Map so that an intentionally distorted image would be projected onto the areas of the eye which have the best visual reception.
  • the person's specific retinal map, modulated by the image distortion map, would be displayed by the technology which draws a raster display (like a television) directly onto the retina of the eye, and in this case on to the usable portions of the retina of the eye.
  • the patient user sees what appears to be a conventional display floating in space in front of them, which is corrected for the loss of macula, but still provides the patient with the ability to see other peripheral obstacles, such as steps in front of the patient which the camera is not yet focused on.
  • the goggles and/or glasses could be used to house a technology like virtual retina display, retina scan display projection, and/or a retinal projector technology, which all use photon-on-retina projection, which in this case would be modulated by the pixel manipulation according to the person's specific loss of sight.
  • you can scan the manipulated image directly into the portion of the peripheral retina which is still active in a macular degeneration patient via photons.
  • These photons may be projected by cameras in the glasses or by Smart Contact Lenses, which may or may not receive its information, energy, and connection from the GFH.
  • Another advantage is that these types of wide field-of-vision goggles or glasses can be used in conjunction with one or more cameras, which are typically head mounted.
  • Another advantage of these types of glasses is that they can be combined with proximity sensors, motion sensors, and head and eye tracking, features which are advantageous for understanding a user's specific field of vision for adjustments, and for measuring distance through triangulation. For instance, in human eyes there is a convergence of the image when it comes closer to the face, meaning that the image captured by each eye begins to overlap the other eye's image.
  • the sensors can also be used to automatically change the field of view presented to the retina, i.e., a virtual zoom to determine facial features when in proximate distance of another person.
  • the zoom, skew or other manipulation features can be selected in a straightforward method chosen by the user to gain visual acuity in various environments.
  • a differential adjustment may also be chosen with regard to each eye.
  • software derived proximity and motion sensing can be employed by utilizing comparative techniques on sequential camera images.
  • this invention teaches that one camera can be used for monoscopic image capture and display.
  • this invention teaches that two cameras can be used to simulate true stereoscopic vision on the goggles/glasses display, wherein the IDM (Image Distortion Map) model includes factor correction for epipolar curves, guided by the epipolar geometry, so that stereoscopic vision generated by two or more cameras can be employed, displayed, and seen.
  • the invention uses computer-aided video images which are skewed and stretched in a matrix distortion or other similar fashion to put most or all of the image onto the peripheral vision of the patient, by opening up the center of the image and redirecting it to the peripheral cones of the eyes, as seen by the patient in the projected image, so that the video-captured images are projected onto the peripheral cones where vision is still active.
  • the benefits of this invention are that no invasive procedures are necessary and as the MD changes, the software can be adjusted so that the image is now correctly skewed. It is an additional advantage of this invention that live feedback can be provided.
  • the viewed experience makes it nearly impossible for the user to distinguish between what is actually seen and the image that is created by the distortion map.
  • the spreading and/or multi-lateral skewing of the image which reflects the corrected image onto 3D or High-Definition goggles and/or glasses worn by the patient.
  • the image is skewed via the IDM (Image Distortion Map) module to avoid projection to the area of the eye which involves the macula, but still has all the image information.
  • To visualize this process, think of a picture printed onto a stretchable and compressible substance. A hole is cut into the middle of the image and stretched open, which compresses the image into the sides of the picture. Thus, all of the information of the picture is still there; it is merely rearranged so that a hole is in the middle and the image is moved outward to each side, top, and bottom (a toy pixel-remapping sketch follows this list).
  • This "hole-cutting" is done via algorithms and computer software/firmware technology, for instance, using a technology like Image Distortion Mapping as above mentioned.
  • the process takes each pixel in the two-dimensional image (or video) from the camera(s) and maps it to a new pixel location on the display.
  • only the data points are remapped.
  • the other image data is transformed using a predefined function that interpolates the data between the data points.
  • the IDM model takes vector values (numbers) that describe the lens center of the goggle device (per eye, on the Oculus Rift), called "lCr", as well as the field of view of the display, and returns the vector object that defines how to distort the image to make it more viewable by someone with macular degeneration.
  • the key element is to define the mapping between image (pixel) coordinates and 3D rays in the camera(s) coordinates as a linear combination of nonlinear functions of the image coordinates.
  • This Image Distortion Map (“IDM”) model thus becomes that person's Prescribed Retinal Interface (“PRI").
  • This invention has great benefits in that it is non-invasive, can be worn or not worn, and is easier to adjust and keep fine-tuned because it is external; the image and the algorithms which stretch and skew the image to the PRI can be adjusted in real time based on MD patient feedback during adjustments.
  • the active retinal receptors are identified through evaluation with the system or by known prescription, whereby the lowest number of receptors in the retina required to produce the desired mental and visual impression of the image is used to increase the apparent refresh rate, by actually increasing the refresh rate through displaying the image on fewer than all of the receptors.
  • various FOV maps are stored and/or analyzed or tracked in a database.
  • the database could be stored in the cloud.
  • a knowledge base and a decision-tree-based formula can be used to analyze the FOV maps, and one or more of the FOV maps could be used as a starting point for a patient.
  • the selected FOV map could be fine-tuned using one or more of the methods described above.
  • a FOV map from the database may be chosen as a starting point based on patient visual models, common trends, and outliers within the data (see the FOV-map selection sketch after this list).
  • the FOV models could be sorted and/or chosen based on identified common boundaries.
  • the output of the different FOV maps, i.e., the resultant corrected images, could be analyzed, with patient input, utilizing a process of comparison and elimination while viewing desired real world images, e.g., a face chart, text chart, or the like.
  • a controller, computing device, server or computer, such as described herein, includes one or more processors or processing units and a system memory, and may be embodied in a personal computer, server, or other computing device.
  • the controller typically also includes at least some form of computer-readable media.
  • computer-readable media may include computer storage media and communication media.
  • Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology that enables storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism and include any information delivery media.
  • a processor or controller includes any programmable system, including systems using microcontrollers, reduced instruction set circuits (RISC), application-specific integrated circuits (ASIC), programmable logic circuits (PLC), and any other circuit or processor capable of executing the functions described herein.
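
The bullets above describe, in prose, a pipeline in which a model controller builds a visual model, a boundary, and a retinal map, and a display controller then applies that map to each camera frame before it reaches the display unit. The following Python sketch is only a minimal illustration of that flow under simplifying assumptions; the class names, the two-radius RetinalMap structure, and the injected remap_fn are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class RetinalMap:
    """Hypothetical per-patient map: a central non-seeing region and the outer
    boundary of the corrected FOV, both as fractions of the display radius."""
    scotoma_radius: float   # e.g. 0.15
    boundary_radius: float  # e.g. 0.60

@dataclass
class ModelController:
    """Builds the patient's retinal map and stores it in a (toy) database."""
    database: Dict[str, RetinalMap] = field(default_factory=dict)

    def establish_retinal_map(self, patient_id: str,
                              scotoma_radius: float,
                              boundary_radius: float) -> RetinalMap:
        retinal_map = RetinalMap(scotoma_radius, boundary_radius)
        self.database[patient_id] = retinal_map  # "store the retinal map in the database"
        return retinal_map

class DisplayController:
    """Receives camera frames, applies the retinal map, and returns corrected frames."""
    def __init__(self, retinal_map: RetinalMap, remap_fn: Callable):
        self.retinal_map = retinal_map
        self.remap_fn = remap_fn  # e.g. the radial remap sketched next

    def correct(self, frame):
        return self.remap_fn(frame,
                             self.retinal_map.scotoma_radius,
                             self.retinal_map.boundary_radius)
```

In use, establish_retinal_map would be driven by the patient evaluation or a known prescription, and correct would be called once per captured frame before the corrected image is handed to the display unit.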
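The "hole-cutting" bullets explain that pixels which would fall on the blind central region are pushed outward so the whole picture survives around a central hole. Below is a toy pixel-remapping sketch of one simple way to do that for a circular central defect, using NumPy and nearest-neighbour sampling; the radial profile, array layout, and function name are assumptions for illustration, not the patent's actual IDM or PRI computation (which interpolates between mapped data points and accounts for the lens center "lCr" and the display FOV).

```python
import numpy as np

def radial_hole_remap(frame, scotoma_radius, boundary_radius):
    """Remap `frame` (H x W x 3) so that everything inside `boundary_radius`
    is squeezed into the annulus between `scotoma_radius` and
    `boundary_radius` (both fractions of the half-image), leaving a central
    'hole' that lands on the non-seeing macular region."""
    h, w = frame.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r_dst = np.sqrt(dx ** 2 + dy ** 2)
    r_max = min(cx, cy)

    s = scotoma_radius * r_max
    b = boundary_radius * r_max

    # Inverse mapping: a destination radius in [s, b] samples a source radius
    # in [0, b]; pixels beyond b are left untouched.
    r_src = np.where((r_dst >= s) & (r_dst <= b),
                     (r_dst - s) / (b - s) * b,
                     r_dst)
    scale = np.where(r_dst > 0, r_src / np.maximum(r_dst, 1e-6), 0.0)
    src_x = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)

    out = frame[src_y, src_x]
    out[r_dst < s] = 0  # the hole itself falls on the blind region
    return out
```

The destination pixels outside the boundary radius are unchanged, and the blanked center costs nothing visually because it is projected onto retina that cannot see it.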
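One bullet above describes interspersing augmented pixels at the outer boundary of the projected FOV, skipping some but not all pixels so that real-world light still reaches the eye through the see-through optics. The checkerboard mask below is a minimal sketch of that idea; the 50% interleave, the inner threshold, and the cell size are illustrative assumptions rather than the patent's specific algorithm.

```python
import numpy as np

def boundary_interleave_mask(h, w, inner=0.7, block=2):
    """Boolean H x W mask: True -> draw the augmented pixel, False -> skip it
    so the see-through optics show the real world at that pixel.  Inside the
    radius fraction `inner` every pixel is drawn; outside it, every other
    `block` x `block` cell is skipped in a checkerboard pattern, so roughly
    half of the real-world light passes through near the FOV boundary."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) / min(cx, cy)
    checker = ((xs // block + ys // block) % 2).astype(bool)
    return ~(checker & (r > inner))

# Compositing on the display (augmented where mask is True):
# displayed = np.where(mask[..., None], augmented_frame, 0)
```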
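For the database of stored FOV maps used as starting points, a simple nearest-neighbour selection over a small feature vector is one plausible reading of the knowledge-base approach described above. The sketch below is hypothetical; the feature encoding and the Euclidean metric are assumptions, and the decision-tree-based formula mentioned in the bullets could replace the distance function.

```python
import math

def choose_starting_fov_map(patient_features, fov_map_database):
    """Pick the stored FOV map whose feature vector is closest to the new
    patient's visual-model features.  `fov_map_database` is assumed to be a
    list of (features, fov_map) pairs; the chosen map is then fine-tuned with
    the patient in the loop as described above."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best_features, best_map = min(
        fov_map_database, key=lambda entry: distance(entry[0], patient_features)
    )
    return best_map

# Example (hypothetical features: scotoma radius, x-offset, y-offset):
# database = [([0.15, 0.0, 0.0], map_a), ([0.25, 0.1, -0.05], map_b)]
# start = choose_starting_fov_map([0.2, 0.05, 0.0], database)
```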

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a wearable image manipulation system comprising a camera input system, an image projection system, the image projection system being wearable by a user, and a processor in communication with the camera input system and the image projection system such that the processor is able to receive an image from the camera input system, modify the image to produce a modified image, and display the modified image on the image projection system. The camera input system may comprise a contact lens on which a camera is mounted. Additionally or alternatively, the system may be able to track the movement of a user's eye in order to accurately capture where the user is looking with the camera input system.
PCT/US2018/029428 2017-04-25 2018-04-25 Système portable de commande et de manipulation d'images à correction des défauts de vision et augmentation de la vision et de la détection WO2018200717A1 (fr)

Priority Applications (5)

Application Number Priority Date Filing Date Title
AU2018258242A AU2018258242A1 (en) 2017-04-25 2018-04-25 Wearable image manipulation and control system with correction for vision defects and augmentation of vision and sensing
CA3060309A CA3060309A1 (fr) 2017-04-25 2018-04-25 Systeme portable de commande et de manipulation d'images a correction des defauts de vision et augmentation de la vision et de la detection
EP18790963.5A EP3615986A4 (fr) 2017-04-25 2018-04-25 Système portable de commande et de manipulation d'images à correction des défauts de vision et augmentation de la vision et de la détection
CN201880041696.9A CN110770636B (zh) 2017-04-25 2018-04-25 具有矫正视力缺陷、增强视力和感知能力的可穿戴图像处理和控制系统
AU2023285715A AU2023285715A1 (en) 2017-04-25 2023-12-18 Wearable image manipulation and control system with correction for vision defects and augmentation of vision and sensing

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762489801P 2017-04-25 2017-04-25
US62/489,801 2017-04-25
US15/962,661 US11956414B2 (en) 2015-03-17 2018-04-25 Wearable image manipulation and control system with correction for vision defects and augmentation of vision and sensing
US15/962,661 2018-04-25

Publications (1)

Publication Number Publication Date
WO2018200717A1 true WO2018200717A1 (fr) 2018-11-01

Family

ID=63920380

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/029428 WO2018200717A1 (fr) 2017-04-25 2018-04-25 Système portable de commande et de manipulation d'images à correction des défauts de vision et augmentation de la vision et de la détection

Country Status (3)

Country Link
AU (2) AU2018258242A1 (fr)
CA (1) CA3060309A1 (fr)
WO (1) WO2018200717A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114460805B (zh) * 2020-10-21 2024-05-28 中国科学院国家空间科学中心 一种基于高通滤波的遮挡物散射成像系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120200595A1 (en) * 2007-04-02 2012-08-09 Esight Corporation Apparatus and method for augmenting sight
US20130215147A1 (en) 2012-02-17 2013-08-22 Esight Corp. Apparatus and Method for Enhancing Human Visual Performance in a Head Worn Video System
US20130335543A1 (en) * 2012-06-13 2013-12-19 Esight Corp. Apparatus and Method for Enhancing Human Visual Performance in a Head Worn Video System
US20150355481A1 (en) * 2012-12-31 2015-12-10 Esight Corp. Apparatus and method for fitting head mounted vision augmentation systems
US20150362733A1 (en) * 2014-06-13 2015-12-17 Zambala Lllp Wearable head-mounted display and camera system with multiple modes
US20160270648A1 (en) 2015-03-17 2016-09-22 Ocutrx Vision Technologies, LLC System, method, and non-transitory computer-readable storage media related to correction of vision defects using a visual display

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10872472B2 (en) 2016-11-18 2020-12-22 Eyedaptic, Inc. Systems for augmented reality visual aids and tools
US11282284B2 (en) 2016-11-18 2022-03-22 Eyedaptic, Inc. Systems for augmented reality visual aids and tools
US12033291B2 (en) 2016-11-18 2024-07-09 Eyedaptic, Inc. Systems for augmented reality visual aids and tools
US11676352B2 (en) 2016-11-18 2023-06-13 Eyedaptic, Inc. Systems for augmented reality visual aids and tools
US11043036B2 (en) 2017-07-09 2021-06-22 Eyedaptic, Inc. Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US11935204B2 (en) 2017-07-09 2024-03-19 Eyedaptic, Inc. Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US11521360B2 (en) 2017-07-09 2022-12-06 Eyedaptic, Inc. Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US10984508B2 (en) 2017-10-31 2021-04-20 Eyedaptic, Inc. Demonstration devices and methods for enhancement for low vision users and systems improvements
US11756168B2 (en) 2017-10-31 2023-09-12 Eyedaptic, Inc. Demonstration devices and methods for enhancement for low vision users and systems improvements
US11563885B2 (en) 2018-03-06 2023-01-24 Eyedaptic, Inc. Adaptive system for autonomous machine learning and control in wearable augmented reality and virtual reality visual aids
US11187906B2 (en) 2018-05-29 2021-11-30 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users
US11385468B2 (en) 2018-05-29 2022-07-12 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users
US11726561B2 (en) 2018-09-24 2023-08-15 Eyedaptic, Inc. Enhanced autonomous hands-free control in electronic visual aids
CN114846518A (zh) * 2019-11-05 2022-08-02 阿尔斯佩特拉有限公司 用于医疗成像的增强现实头戴设备
US11963868B2 (en) 2020-06-01 2024-04-23 Ast Products, Inc. Double-sided aspheric diffractive multifocal lens, manufacture, and uses thereof
CN114615484B (zh) * 2022-03-08 2022-11-01 常山县亿思达电子有限公司 一种基于视网膜监视的视域跟踪定位系统
CN114615484A (zh) * 2022-03-08 2022-06-10 常山县亿思达电子有限公司 一种基于视网膜监视的视域跟踪定位系统

Also Published As

Publication number Publication date
CA3060309A1 (fr) 2018-11-01
AU2018258242A1 (en) 2019-11-07
AU2023285715A1 (en) 2024-01-18

Similar Documents

Publication Publication Date Title
US11956414B2 (en) Wearable image manipulation and control system with correction for vision defects and augmentation of vision and sensing
US11461936B2 (en) Wearable image manipulation and control system with micro-displays and augmentation of vision and sensing in augmented reality glasses
AU2023285715A1 (en) Wearable image manipulation and control system with correction for vision defects and augmentation of vision and sensing
US10874297B1 (en) System, method, and non-transitory computer-readable storage media related to correction of vision defects using a visual display
US12013536B2 (en) Wearable image manipulation and control system with high resolution micro-displays and dynamic opacity augmentation in augmented reality glasses
WO2020014705A1 (fr) Système de manipulation et de commande d'image pouvant être porté avec des micro-affichages et augmentation de la vision et de la détection dans des lunettes à réalité augmentée
US11204641B2 (en) Light management for image and data control
US10275024B1 (en) Light management for image and data control
CN110770636B (zh) 具有矫正视力缺陷、增强视力和感知能力的可穿戴图像处理和控制系统
US20170092007A1 (en) Methods and Devices for Providing Enhanced Visual Acuity
US12062430B2 (en) Surgery visualization theatre
EP2621169B1 (fr) Appareil et procédé pour augmenter la vision
CA2781064C (fr) Grossissement d'image sur un visiocasque
US20210389590A1 (en) Wearable image manipulation and control system with high resolution micro-displays and dynamic opacity augmentation in augmented reality glasses
JP2021502130A (ja) デジタル治療用矯正眼鏡
CN111683629A (zh) 抑制眼睛的屈光不正的进展的方法、装置和系统
US11031120B1 (en) System, method, and non-transitory computer-readable storage media related to correction of vision defects using a visual display
WO2020014707A1 (fr) Système de manipulation et de commande d'image pouvant être porté doté de micro-dispositifs d'affichage haute-résolution et d'une augmentation d'opacité dynamique dans des lunettes de réalité augmentée
WO2021226134A1 (fr) Salle de visualisation chirurgicale
US20240266033A1 (en) Surgery visualization theatre
EP3830630A1 (fr) Système de manipulation et de commande d'image pouvant être porté doté de micro-dispositifs d'affichage haute-résolution et d'une augmentation d'opacité dynamique dans des lunettes de réalité augmentée
CN115280219A (zh) 强化视力的系统与方法
EP4146115A1 (fr) Salle de visualisation chirurgicale
WO2022146514A1 (fr) Système, procédé et supports de stockage lisibles par ordinateur non transitoires liés à la correction de défauts de vision à l'aide d'un affichage visuel

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18790963

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3060309

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018258242

Country of ref document: AU

Date of ref document: 20180425

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2018790963

Country of ref document: EP

Effective date: 20191125