WO2019017861A1 - See through display enabling the correction of visual deficits - Google Patents


Info

Publication number
WO2019017861A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display system
visual field
display
visual
Prior art date
Application number
PCT/US2017/042289
Other languages
French (fr)
Inventor
Victor Stone
Fusao Ishii
Original Assignee
Iron City Microdisplay, Incorporated
Priority date
Filing date
Publication date
Application filed by Iron City Microdisplay, Incorporated filed Critical Iron City Microdisplay, Incorporated
Priority to PCT/US2017/042289 priority Critical patent/WO2019017861A1/en
Priority to US15/659,619 priority patent/US10416462B2/en
Publication of WO2019017861A1 publication Critical patent/WO2019017861A1/en


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20: Control arrangements or circuits for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/2003: Display of colours
    • G09G3/001: Control arrangements or circuits using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003: Control arrangements or circuits using specific devices to produce spatial visual effects
    • G09G2320/00: Control of display operating conditions
    • G09G2320/02: Improving the quality of display appearance
    • G09G2320/0242: Compensation of deficiencies in the appearance of colours
    • G09G2320/06: Adjustment of display parameters
    • G09G2320/0626: Adjustment of display parameters for control of overall brightness
    • G09G2360/00: Aspects of the architecture of display systems
    • G09G2360/14: Detecting light within display terminals, e.g. using a single or a plurality of photosensors
    • G09G2380/00: Specific applications
    • G09G2380/08: Biomedical applications
    • G09G3/2007: Display of intermediate tones
    • G09G3/2018: Display of intermediate tones by time modulation using two or more time intervals

Definitions

  • This invention relates to a wearable visual sensor-modulator-display system that (1) is worn on a user's head or face, (2) has a single or multiple visual sensors, (3) has a video image modulator, and (4) projects a video image via a microdisplay to the user's visual field. More particularly, this invention relates to the application of the aforementioned display system to acquire visual imagery, modulate the video image in a manner that corrects for visual deficits, and display the visual imagery in a manner that the user can perceive and differentiate.
  • APPLICATION FOR PRESBYOPIA As people age, they commonly lose the ability to focus on both near and far objects. This phenomenon is commonly called senior vision, or presbyopia in medical language. Presbyopia happens because of the natural hardening of the lens in a viewer's eye. The hardening results in a decreased ability of the eye muscles to contract and change the shape of the lens. The loss of near vision causes the most obstacles in daily life, and viewers augment their vision with low-power magnification in the form of reading glasses. However, the natural progression of presbyopia is not limited to near vision; it also affects far vision because of the progressive hardening of the lens as described above.
  • Fig-2 shows the natural physiological mechanism of focusing on an object.
  • the eye perceives a character (201) at some distance.
  • the eye muscles (203) modulate the shape of the lens (204) so that the lens focuses the character onto the retina (206).
  • with presbyopia, the hardening of the lens leaves the eye muscles unable to change the lens form sufficiently, and the object cannot be focused onto the retina.
  • Fig-3 shows the character (301-304) in focus as perceived by a young viewer, while Fig-4 shows the character (401-404) as perceived by a viewer with presbyopia who is unable to focus the object onto the retina.
  • Fig-5 shows the usage of a concave lens (501) to correct for myopia (inability to focus on far objects), and Fig-6 shows the usage of a convex lens (601) to correct for the near distances in presbyopia.
  • Fig-7 shows an example of a progressive lens (701) simultaneously correcting for near (703) and far (702) distances, although near and far focus are restricted to the lower and upper visual fields, respectively.
  • a conventional bifocal lens enables a viewer to focus on near and far by separating the lens areas. For example, the lower portion of the lens cannot be used to perceive objects at a far or even normal distance.
  • One example of this challenge is a viewer wearing bifocal lenses descending a flight of stairs.
  • the bifocal lens enables a viewer to read objects at 30-45 cm but, in turn, obstructs the viewer from perceiving objects at a distance of 1-2 m, including the viewer's own feet and the next step in the staircase. This causes significant concern for viewers trying to descend a flight of stairs.
  • A bifocal lens enables a user to read a scorecard, but prevents the viewer from focusing on a golf ball (801) when taking a shot as in Fig-8. After taking a shot, the viewer can only see the ball in flight with the upper half of the visual field, because the lower half can only focus on near objects.
  • APPLICATION FOR COLOR BLINDNESS The eye perceives light through photoreceptor cells called Rods and Cones located in the retina of the eye (Fig-11). Light energy elicits a cellular reaction whereby the ionic composition internal to the photoreceptor cells triggers a nerve impulse which is transmitted to the brain as a light signal. Rods (1102 and 1104) and Cones (1103) are found on an array and the selective triggering of these photoreceptors translates light images into a visual image perceived by the brain.
  • Each red, green, and blue Cone photoreceptor has protein structures that react to light energy with wavelengths correlating to red, green, and blue light.
  • the gene that codes for these protein structures is X-linked (found on the non-redundant arm of the X-Chromosome), and therefore males have a propensity to have genetic deficits associated with Cone photoreceptors.
  • the sensitivity of these Cone photoreceptors is shown in Fig-12.
  • the first type of Cone has the sensitivity shown as the curve marked (1201) or L (long wavelength)
  • the second type of Cone has the sensitivity of the curve marked (1202) or M (middle wavelength)
  • the third type of Cone has the sensitivity of the curve marked (1203) or S (short wavelength).
  • the horizontal axis is wavelength and the vertical axis is the normalized sensitivity to each peak.
  • the impact of having a genetic deficit on the cone photoreceptor is the inability to differentiate that specific color.
  • a genetic mutation located on the second type of Cone photoreceptor (whose sensitivity is the curve marked M or 1202 and hereafter called as Green Cone) will be used as it is the most prevalent. Green light may enter the eye and strike the photoreceptor layer of the retina, however, no or little of Green Cone photoreceptor reacts to the light because of the genetic deficit. Green light fails to trigger a nerve response and therefore the brain does not perceive this wavelength of light. The brain is still able to perceive red and blue light and therefore, this patient will see the world in two colors, red and blue. This is the mechanism of color blindness.
  • Fig-13 shows the populations of normal vision and color blindness.
  • weakened sensitivity of the first type of Cone photoreceptor (Red) is called protanomaly, and loss of its function is called protanopia.
  • the corresponding deficits of the second type of Cone photoreceptor (Green) are deuteranomaly and deuteranopia, respectively.
  • the corresponding deficits of the third type of Cone photoreceptor (Blue) are called tritanomaly and tritanopia, respectively.
  • the second-type Cone deficit has the largest population among color blindness, at 2.7% (deuteranomaly) and 0.56% (deuteranopia).
  • Complete color blindness (Achromatopsia) is very rare, at less than 0.0001%, as shown in Fig-13.
  • the color bars in Fig-13 show how each type perceives the color spectrum.
  • Fig-14 shows the patterns used for color blindness tests. Normal vision sees the pattern (1401), which has a red character "6" over a background of yellow, green and blue, and the pattern (1405), which has a green character "74" over a red and yellow background.
  • Fig- 15 shows another example to show how images are perceived by each type of color blindness.
  • the image (1501) is by Normal Vision.
  • the image (1504) is by Protanopic Vision which loses red and a large part of green, because the sensitivity of the first type of Cone photoreceptor is overlapping from red to green.
  • the image (1508) is by Deuteranopic Vision which loses green and a large part of red.
  • the image (1510) is by Tritanopic Vision which loses blue.
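The cone-loss descriptions above can be sketched computationally. A common way to simulate a missing Green Cone (Deuteranopic vision) is to map RGB into an LMS cone-response space, replace the M component, and map back. The matrix values and the simple 50/50 projection below are illustrative assumptions, not a calibrated colorimetric model.

```python
import numpy as np

# Illustrative (not colorimetrically exact) RGB -> LMS matrix; real
# implementations use calibrated matrices derived from cone fundamentals.
RGB_TO_LMS = np.array([[0.3139, 0.6395, 0.0466],
                       [0.1552, 0.7579, 0.0869],
                       [0.0178, 0.1095, 0.8727]])
LMS_TO_RGB = np.linalg.inv(RGB_TO_LMS)

def simulate_deuteranopia(rgb):
    """Replace the M (green-cone) response with a mix of L and S,
    mimicking a non-functional Green Cone, then convert back to RGB."""
    lms = rgb @ RGB_TO_LMS.T
    l, m, s = lms[..., 0], lms[..., 1], lms[..., 2]
    m_sim = 0.5 * l + 0.5 * s          # assumed projection, for illustration
    lms_sim = np.stack([l, m_sim, s], axis=-1)
    return np.clip(lms_sim @ LMS_TO_RGB.T, 0.0, 1.0)
```

Neutral grays pass through unchanged (all three cone responses are equal), while saturated greens collapse toward the colors a Deuteranopic viewer actually perceives.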
  • APPLICATION FOR POOR NIGHT VISION Over time, humans progressively lose night vision, or the ability to distinguish objects in darkness. The cause of this visual deficit can be multi-faceted with underlying conditions including, but not exclusive to, early cataracts, vitamin A deficiency, retinitis pigmentosa, and diabetes. Any progressive visual deficit warrants medical attention; however, not all conditions have immediately reversible treatments.
  • SAFETY FEATURE FOR ALL APPLICATIONS A safety factor that should not be missed is the importance of peripheral vision. Many people focus on the central vision, or macular vision, where vision is perceived in color and the resolution is the highest. In contrast, peripheral vision has very low visual acuity and generally perceives in black and white. However, the brain receives many cues from the peripheral field which ultimately contribute to spatial awareness, motion detection, and depth perception. One good example is to wear a pair of goggles that restricts vision in the periphery; such viewers will find many activities of daily living become restricted. Therefore, it is desirable for corrective glasses to correct a wide field of view yet ultimately leave a peripheral margin unobstructed, to provide the viewer with nascent visual cues from the periphery.
  • Glass by Google as shown in Fig-24, and MEG 4.0 by Olympus, are both examples of wearable displays that cover a minor area of the visual field.
  • the displays are meant to be worn while conducting activities of daily living, however, the majority of the visual field is unobstructed and therefore the users will have no issues in perceiving peripheral cues while using these products.
  • wearable displays will cover a 'full field of view' and be designed for simultaneous wear with activities of daily living.
  • This invention seeks to be such a product whereby people with visual deficits such as presbyopia, color blindness, or poor night vision can enjoy life with a visual field that is corrected for the deficit.
  • This type of product becomes useful when the display can project more than 13 degrees of field of view from center and has a transparency exceeding 60%.
  • the rationale for the field of view (13 degrees from center) is that it covers central vision (macular vision). Projection beyond that range enters into peripheral vision.
  • 60% transparency means that 60% of external light is able to pass through the image-capture and display apparatus lens and enter the user's eye.
  • the user must be able to see through the apparatus and see the visual field naturally, and we believe 60% transparency is the threshold below which the reduced light would be considered obstructive for natural activities.
  • for comparison, sunglasses likewise diminish light transparency.
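The two thresholds stated above (more than 13 degrees of field of view from center, and transparency exceeding 60%) can be checked with simple pinhole geometry. The display half-width and eye-relief figures in the usage example are hypothetical parameters, not values from this patent.

```python
import math

def meets_usability_criteria(half_width_mm, eye_relief_mm, transparency):
    """Check the two usability thresholds, assuming simple pinhole
    geometry: half-FOV = atan(display half-width / distance to the eye)."""
    half_fov_deg = math.degrees(math.atan(half_width_mm / eye_relief_mm))
    return half_fov_deg > 13.0 and transparency > 0.60
```

For example, a display extending 7 mm to either side of the line of sight at 25 mm from the eye subtends roughly 15.6 degrees from center, clearing the 13-degree macular-vision threshold; a 4 mm half-width at the same distance (about 9.1 degrees) does not.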
  • This invention aims to resolve this issue by fashioning a wearable display with a mounted optical sensor system that senses the user's visual field, modulates the image, and then displays that image in real time into the user's visual field.
  • the image modulation enables image data of objects at multiple focal distances to be reconstructed into an image with objects at a focal distance that the user can perceive and differentiate.
  • Fig-9 and 10 illustrate this concept.
  • the larger rectangular frame (900) represents a hypothetical visual field. In said field, four objects are in view, two near and two far.
  • This invention seeks to create a visual display system whereby camera inputs detect image data of the visual field and image data for individual objects are modulated and displayed to the viewer at a focal distance that the viewer can readily see.
  • This invention further intends to accomplish this by a data circuit loop whereby the visual field is captured by the image sensors, the video data is modulated by a processing unit to suit the user's specific needs such as presbyopia (inability to focus on near and far objects because of hardening of lens), and then this modified visual field is projected onto a display positioned in front of the user's eyes.
  • the processor may be single or multi-part and communicate with each other through wired or wireless means.
  • the image processing is expected to consume significant computational resources, in both power and processing capacity; therefore, wireless communication among the processing units enables calculations to be outsourced to a unit positioned outside the actual image-capture and display apparatus.
  • the display lens of the apparatus is created in such a way that maximizes the transmission of light so that the user has a natural view of the outside field of view when the projector is not displaying an image.
  • This apparatus specifically incorporates a safety feature whereby the outer margin of the user's visual field is left intentionally intact without obstruction by the display lens or the display projection area. This enables the user to maintain visual cues from the peripheral vision which is useful for depth perception, motion detection in the periphery, and other spatial awareness cues that enable natural walking and activities of daily living.
  • the sound sensor may be single or multiple with audio capture apparatus on the surface of the apparatus, or positioned in a tube.
  • the tube may or may not be pointed in-line with the user's visual field.
  • the purpose of this orientation includes the optimization and differentiation of audio inputs to the user's attention.
  • Audio data will flow from the sound sensor(s) to a sound processor system which will then transmit the audio data to the user's ears directly or through bone-conduction mechanisms.
  • the sound processor system may be single or multi-part and communicate with other processor components through wired or wireless means.
  • the presence of multiple sound sensors enables different audio signals to flow into the processor.
  • the processor can compare the audio inputs and distinguish sound of interest while modulating ambient sound or noise. For example, consider a head-mounted apparatus with 4 total audio sensors, two positioned in forward facing tubes, and two others on the surface of the apparatus. Surface audio sensors will detect the most sound; however, the sensor cannot distinguish between ambient noise from an air conditioner and a person speaking in front of the user.
  • the audio sensor-processor-speaker system may increase the absolute level of the audio inputs, but the contrast between the ambient noise and the forward speaker will not change, and the user will have difficulty discerning the words spoken in front. Audio sensors positioned within a tube pointed forward will selectively sense sound from the front. With both types of inputs, surface and tubular, the processor can compare the signals and identify what is noise and what is sound from the front. If there is a significant difference,
  • the processor can selectively amplify the forward sounds and diminish the surface sounds, thereby enabling the user to better distinguish sounds from the front from ambient noise.
  • This invention is not restricted to this four-sensor system; however, it intends to capture the merits of sound spatial selectivity as described here.
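The surface-versus-tube comparison described above can be sketched as a least-squares separation: treat the tube (forward-facing) signal as the sound of interest, estimate the ambient component as whatever part of the surface signal it does not explain, then remix with different gains. The single-gain model and the boost/cut values are assumptions for illustration, not the patent's specified processing.

```python
import numpy as np

def emphasize_forward(surface, tube, boost=4.0, cut=0.25):
    """Remix a surface-microphone signal so the forward (tube) sound
    dominates the ambient residual."""
    # Least-squares gain of the tube signal within the surface signal
    g = np.dot(surface, tube) / (np.dot(tube, tube) + 1e-12)
    ambient = surface - g * tube       # residual ~ ambient noise estimate
    return boost * g * tube + cut * ambient
```

With a speech-like forward tone buried in an equally loud ambient tone, the output's energy shifts strongly toward the forward component, which is exactly the contrast improvement the four-sensor example aims for.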
  • Wearable displays have received significant attention in recent years. Wearable displays, especially those with high resolution, are expected to augment or perhaps replace the smartphone as the mobile interface to the internet. Many inventors have developed wearable displays, but many are opaque; users can see the display, but cannot see through it. This prevents viewers from walking freely or from comparing the projected image with the external view. This situation encouraged inventors to create see-through displays, so that viewers can walk freely as well as compare projected images with the see-through view.
  • Kasai et al. disclose in Patent US7460286 an eyeglass-type display system that implements see-through capability with a holographic optical element. About 85% of external light can pass through the lens and reach the viewer's eyes. This means that background brightness can be very high in a bright room or outdoors. A bright background washes out the superimposed image, and a black object cannot appear black, but gray or even white. This system will not be able to correct for multiple objects of varying focal distances.
  • the embodiment apparatus of Sako et al., if successful, may be helpful to a viewer with presbyopia; however, the apparatus will ultimately (1) capture an entire visual field at a set focal distance, or (2) magnify a given field of view through telescopic means and display it in a screen-within-a-screen format.
  • Our claim is distinct because we seek to create a visual modulation system whereby multiple objects at different focal distances are corrected to a distance that the viewer can perceive. The spatial relationships of varying objects will be kept the same; however the visual will be projected to the viewer as if that object is at a distance where detail can be resolved.
  • Sako et al. do not claim an image capture-display apparatus that simultaneously corrects for multiple objects at different focal distances.
  • Fig-1 shows an example of this invention.
  • (116) is a transparent plate functioning as a wave guide having a hologram layer to enable see-through display.
  • (111) is a camera lens and (112) is a CMOS image sensor module.
  • (115) is a mirror to reflect projected light into the wave guide (116).
  • (118) is a light source,
  • (114) is a projection lens,
  • (113) is the controller electronics and
  • (117) is an eye-glass frame containing a battery.
  • Fig-2 illustrates that the object (201) is projected to the retina (205).
  • Light 208 is projected from the object (201) and is led to cornea (202) and lens (204).
  • the ciliary muscle (203) adjusts the lens (204) to focus the light beam (207) onto the retina (205) and fovea (206).
  • Fig-3 shows how a viewer with normal vision sees the images.
  • the large characters (301, 302, 303) can be seen, while the small character (304) becomes difficult to read.
  • Fig-4 shows how a viewer with presbyopia sees the images. Even the large character (401) is not focused on the retina.
  • Fig-5 shows the usage of concave lens (501) to correct for myopia (inability to focus on far objects). Both far object (502) and near object (503) can be focused.
  • Fig-6 shows the usage of convex lens (601) to correct for the near distances in presbyopia.
  • the near object (603) can be focused, but the far object (602) cannot be focused.
  • Fig-7 shows that more sophisticated optics were introduced by the bifocal lens, whereby the upper half of the lens is constructed to assist viewers for far distance view (702), while the lower half of the lens is constructed to assist viewers for near distance view (703). This enables a user with presbyopia to view both near and far with a single pair of glasses.
  • Fig-7 shows an example of progressive lens (701) simultaneously correcting for near (703) and far (702) distances, albeit near and far distance focus is restricted to lower and upper visual fields, respectively.
  • Fig-8 shows that a bifocal lens enables a user to read a scorecard, but prohibits the viewer from focusing on a golf ball (801) when taking a shot.
  • the larger rectangular frame (900) represents a hypothetical visual field. In said field, four objects are in view, two near and two far. Conventional bifocal lens restrict the focal distances of objects to the upper and lower fields, and therefore objects 1 (901) and 2 (902) can be seen, but object 3 (903) and 4 (904) are out of focus.
  • the larger rectangular frame (1000) represents a displayed field wherein all the images are captured by the camera (111 and 112) attached to the wearable display in Fig-1, and all the captured images are individually focused and displayed at the same distance for the viewer, so that the viewer can see all images in focus.
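The all-in-focus composition described above can be sketched as a per-pixel sharpness pick across a stack of frames focused at different distances. The Laplacian contrast measure is a common focus-stacking heuristic; the winner-take-all selection is a simplifying assumption (a real system would also align frames and blend at seams).

```python
import numpy as np

def all_in_focus(stack):
    """Combine grayscale frames focused at different distances into one
    image by picking, per pixel, the frame with the strongest local
    contrast (absolute Laplacian) - a standard focus-stacking heuristic."""
    stack = np.asarray(stack, dtype=float)          # shape (n, H, W)
    lap = np.zeros_like(stack)
    lap[:, 1:-1, 1:-1] = np.abs(
        4 * stack[:, 1:-1, 1:-1]
        - stack[:, :-2, 1:-1] - stack[:, 2:, 1:-1]
        - stack[:, 1:-1, :-2] - stack[:, 1:-1, 2:])
    best = np.argmax(lap, axis=0)                   # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Given one frame sharp on near objects and another sharp on far objects, the composite keeps the sharp region of each, which is the effect the displayed field (1000) illustrates.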
  • Fig-11 illustrates the structure of human eye, wherein (1101) is a lens, (1102) and (1104) are Rods which sense brightness and (1103) is Cones which sense three colors.
  • Fig-11 A shows a microscopic image of Rods and Cones.
  • Cones have three different types. The first type of Cones is to sense long wavelength of light (red) and the second is to sense middle wavelength of light (green) and the third is to sense short wavelength of light (blue).
  • Fig-12 shows the sensitivity curves (1201, 1202 and 1203) of each type of Cones to the wavelength of light.
  • the first type of Cones absorbs light energy with the sensitivity curve of L (1201), having wavelengths between about 500 nm and 650 nm with its peak at 560 nm, converts the photon energy to chemical energy, and transfers it to the brain through the nervous system.
  • the second type of Cones absorbs light energy with the sensitivity curve of M (1202) and converts photon energy around 530 nm (green).
  • the third type of Cones does the same with the curve of S (1203, blue). This means that the function of the first type of Cones is to sense primarily red light and the second is green and the third is blue.
  • if the first type of Cones has a deficit, the viewer will have color blindness of red, called Protanomaly or Protanopia depending on the extent. If the second type of Cones has a deficit, it will cause color blindness of green, called Deuteranomaly or Deuteranopia depending on the extent.
  • the third type is color blindness of blue or Tritanomaly or Tritanopia.
  • Fig-13 shows the population of color blindness. 92% of people are normal. The largest number of color-blind patients is Deuteranomaly (2.7%) and Deuteranopia (0.59%), then Protanomaly (0.66%) and Protanopia (0.59%), followed by Tritanopia (0.016%) and Tritanomaly (0.01%). The color bars show how patients in each category will see the colors.
  • Complete color blindness is less than 0.0001%. The majority of color blindness, except complete color blindness, can be corrected by an enhanced vision system.
  • Fig-14 shows the patterns used for color blindness tests. Normal vision sees the pattern (1401), which has a red character "6" over a background of yellow, green and blue, and the pattern (1405), which has a green character "74" over a red and yellow background.
  • Protanopic and Deuteranopic vision cannot discriminate red and green and therefore cannot see these characters, as shown in (1402, 1403, 1406 and 1407), although Tritanopic vision can read these as shown in (1404 and 1408).
  • Fig-15 shows another example to show how images are perceived by each type of color blindness.
  • the image (1501) is by Normal Vision.
  • the image (1504) is by Protanopic Vision which loses red and a large part of green, because the sensitivity of the first type of Cone photoreceptor is overlapping from red to green.
  • the image (1508) is by Deuteranopic Vision which loses green and a large part of red.
  • the image (1510) is by Tritanopic Vision which loses blue.
  • Fig-16 shows the Field of View (or FOV) of human eyes.
  • Human eyes can see an image in high resolution and in color only in the central area of the field of view, as shown in the green area (1605), but eyes can see a very wide-angle view in lower resolution and without color, as shown in the blue area, which is as wide as 180 degrees horizontally from +90° (1607) to -90° (1604) and 120 degrees vertically from +50° (1606) to -70° (1608).
  • Fig-17 shows an example of this invention with a hypothetical visual field with multiple objects at varying focal distances.
  • the camera (1701) captures the objects (901, 902, 903 and 904 in Fig-9) at various distances, auto-focuses at each object and captures the focused images.
  • the display will show all focused images (1001, 1002, 1003 and 1004 in Fig-10) in the field of the display (1702).
  • Fig-18 illustrates an example of this invention wherein the video signal is modulated to enhance the video image for a viewer 1) who needs the images of individually focused objects regardless of distance, with adjusted size and brightness of image (presbyopia, myopia or hyperopia), or 2) who needs strengthened color to correct color blindness, or 3) who needs visualized images in darkness (night vision).
  • (1801) is a visual sensor such as a camera with a CMOS image sensor, and (1802) is a processor to modulate the images from the camera to provide a viewer of the above 1) and/or 3) with modulated image signals of individually focused objects regardless of distance, with adjusted size and brightness of images, and to provide a viewer of the above 2) with strengthened color to correct color blindness.
  • the display system (1803) shows said modulated images to the viewer.
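A minimal sketch of the "strengthened color" modulation for a red-green deficit, assuming a simple opponent-axis remap: the red-green difference that a Deuteranopic viewer cannot see is partly re-encoded into brightness and the blue channel. The gain and the re-encoding choice are illustrative assumptions, not the patent's specified algorithm.

```python
import numpy as np

def strengthen_red_green(rgb, gain=0.7):
    """Re-encode the hard-to-see red-green difference so that confusable
    colors become distinguishable through channels the viewer retains."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                          # the opponent axis lost in deuteranopia
    out = np.stack([r + gain * rg,      # push reds further from greens
                    g - gain * rg,
                    b + gain * rg],     # re-encode the difference in blue
                   axis=-1)
    return np.clip(out, 0.0, 1.0)
```

After this remap, a pure red and a pure green pixel, which a Deuteranopic viewer would confuse, differ strongly in the blue channel that the viewer can still perceive.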
  • Fig-19 illustrates an example of this invention wherein the video signal from the camera (1901) to the processor (A) (1902) is analyzed for focus and brightness and fed back to the camera (1901), so that the images of individually focused objects are captured with adjusted brightness.
  • the processor (A) (1902) transmits the image data to the external processor (B) (1904). The external processor (B) processes the data and returns it to another processor (C) (1906) in the wearable display through wireless transmission (1905), and Processor (C) transfers the data to the display (1907) in the wearable display.
  • Fig-20 illustrates an example of this invention wherein some or all of the chips on a wearable display are packed in a single SOC (system on chip) or single scale package or single die package.
  • Fig-21 shows an example of a face mount display made by Olympus, "Eye-Trek”. This completely obstructs view from the viewer.
  • Fig-22 shows a head mount display, HMZ-T2 by Sony which is a wearable display that is completely opaque.
  • Fig-23 shows an example of wearable display with see-through optics with half mirrors. The light transmission is less than 50% and the image becomes dark.
  • Fig-24 shows an example of wearable glasses with display and camera.
  • Glass by Google as shown in Fig-24, and MEG 4.0 by Olympus are both examples of wearable displays that cover a minor area of the visual field.
  • the displays are meant to be worn while conducting activities of daily living, however, the majority of the visual field is unobstructed and therefore the users will have no issues in perceiving peripheral cues while using these products.
  • Fig-25A shows an example of digital Pulse- Width-Modulation (PWM) of brightness.
  • Analog brightness control used to be popular for analog display devices such as CRT and LCD.
  • Analog brightness control uses analog control of driving voltage or current of display devices to control brightness.
  • precise control of brightness is difficult with analog control; digital brightness control is more accurate, in other words, higher-grayscale brightness control is possible.
  • binary PWM as shown in Fig-25A is becoming more popular, because the digital video signal can be used directly, with "1" as an ON pulse and "0" as an OFF pulse.
  • Fig-25A shows an example of 8-bit binary PWM wherein the entire frame time is divided into 8 pulses whose widths halve from 1/2 of the frame time for D0 (Most Significant Bit or MSB, 2501) and 1/4 of the frame time for the next bit, down to 1/256 of the frame time for the Least Significant Bit (LSB).
  • Fig-25B shows an example of 8-bit binary PWM with the data 10101001 in binary, which is 169 in decimal; it represents a brightness of 169 out of 255 gray levels.
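The binary PWM scheme above can be checked numerically: with MSB-first pulse widths of 1/2, 1/4, ..., 1/256 of the frame time, an 8-bit value v leaves the pixel ON for exactly v/256 of the frame. This sketch just sums the bit-plane durations.

```python
def binary_pwm_on_time(value, bits=8):
    """Fraction of the frame time a pixel is ON under binary PWM:
    bit k (counting from the MSB) contributes 1/2**(k+1) of the frame."""
    assert 0 <= value < 2 ** bits
    on = 0.0
    for k in range(bits):                      # k = 0 is the MSB (D0)
        if value & (1 << (bits - 1 - k)):
            on += 1.0 / 2 ** (k + 1)           # pulse widths: 1/2, 1/4, ...
    return on
```

For the Fig-25B data, 0b10101001 (169 decimal) turns the pixel ON for 1/2 + 1/8 + 1/32 + 1/256 = 169/256 of the frame time.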
  • This invention seeks to create such a visual sensor and display system via a visual image data flow as depicted in Fig-17 through Fig-20.
  • Cameras are mounted onto a set of glasses pointed in-line with the user's visual field.
  • the cameras convert visual images into image data, which is then sent to a modulation system where the image data is divided into specific focal distances.
  • the modulation system may relay this information back to the camera to recapture the image through an optical focusing system, or the modulator may focus the object through digital algorithms.
  • the modulator will ultimately output digital image data with objects with focal distances for multiple objects recalibrated to a distance that the viewer can readily perceive.
  • Fig-17 shows an example of the embodiments of this invention with a hypothetical visual field containing multiple objects at varying focal distances.
  • The camera (1701) captures the objects (901, 902, 903 and 904 in Fig-9) at various distances, auto-focuses on each object, and captures the focused images.
  • The display will show all focused images (1001, 1002, 1003 and 1004 in Fig-10) in the field of the display (1702).
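A minimal sketch of this all-in-focus idea, assuming hypothetical region names and per-region sharpness scores rather than any real camera API (the data layout and function name are ours):

```python
def all_in_focus(frames, regions):
    """Build a composite in which every object appears in focus.

    `frames` is a list of (focal_distance, data) pairs, one per
    autofocused capture, where `data` maps a region of the visual
    field to (sharpness_score, pixels). For each region we keep the
    pixels from the capture in which that region was sharpest.
    """
    composite = {}
    for region in regions:
        _, best = max(frames, key=lambda f: f[1][region][0])
        composite[region] = best[region][1]
    return composite

# Hypothetical captures: one autofocused on a near object, one on a far one.
frames = [
    (0.5, {"near": (0.9, "near-sharp"), "far": (0.2, "far-blurred")}),
    (3.0, {"near": (0.3, "near-blurred"), "far": (0.8, "far-sharp")}),
]
composite = all_in_focus(frames, ["near", "far"])
```

The composite keeps the sharp version of each region, which is the effect the display (1702) presents to the viewer.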
  • Fig-18 illustrates an example of the embodiments of this invention wherein the video signal is modulated to enhance the video image for a viewer who 1) needs the images of individually focused objects regardless of distance, with adjusted size and brightness (presbyopia, myopia or hyperopia), or 2) needs strengthened color to correct color blindness, or 3) needs visualized images in darkness (night vision).
  • (1801) is a visual sensor such as a camera with a CMOS image sensor.
  • (1802) is a processor that modulates the images from the camera to provide a viewer of the above 1) and/or 3) with modulated image signals of individually focused objects regardless of distance, with adjusted size and brightness, and to provide a viewer of the above 2) with strengthened color to correct color blindness.
  • The display system (1803) shows said modulated images to the viewer.
  • This invention seeks to create the aforementioned visual sensory and display system in the shape of common glasses (lens(es), nose piece, and ear brace(s)) that is lightweight and comfortable to wear. To accomplish this, it may become necessary to divide the modulation component depicted in Fig-18 into three sections, Processor (A), (B), and (C), as depicted in Fig-19. The purpose of this division is to allow the superior computing power of Processor (B) to be made external to the glasses, while the camera(s) and display(s) are still fitted into the glasses.
  • Fig-19 illustrates an example of the embodiments of this invention wherein the video signal from the camera (1901) to Processor (A) (1902) is analyzed for focus and brightness and fed back to the camera (1901) so that the images of individually focused objects are captured with adjusted brightness.
  • Processor (A) (1902) transmits the image data to Processor (B) (1904) of an external unit, such as a cellphone, which has a more powerful processor than that of the wearable display. Video data processing often requires heavy computation and consumes more energy than the battery of a wearable display can support.
  • The external Processor (B) (1904) processes the data and returns it to another processor, (C) (1906), in the wearable display, and Processor (C) transfers the data to the display (1907) in the wearable display.
  • The data transmissions between Processor (A) and Processor (B) (1903) and between Processor (B) and Processor (C) (1905) are selected from the group of wireless, wired and fiber-optic links.
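The three-stage split might be sketched as follows. The function names and packet format are hypothetical, and `link` merely stands in for whichever wireless, wired, or fiber-optic transport connects the stages:

```python
# Hypothetical sketch of the Processor (A) -> (B) -> (C) data path of
# Fig-19: (A) pre-processes on the glasses, (B) performs the heavy
# computation externally (e.g. on a phone), (C) drives the display.

def processor_a(raw_frame):
    # Analyze focus/brightness; in the real device this result is also
    # fed back to the camera to adjust capture settings.
    return {"frame": raw_frame, "brightness_ok": True}

def processor_b(packet):
    # Computation-heavy modulation, offloaded because the wearable's
    # battery cannot support it.
    packet["modulated"] = True
    return packet

def processor_c(packet):
    # Final hand-off to the display in the wearable.
    return packet["frame"], packet["modulated"]

def pipeline(raw_frame, link=lambda p: p):
    # `link` is a placeholder for the wireless/wired/fiber transport
    # (1903 and 1905 in Fig-19); here it is the identity function.
    return processor_c(link(processor_b(link(processor_a(raw_frame)))))
```

Splitting the work this way keeps the glasses lightweight while letting an external unit do the expensive processing, exactly as the division into Processors (A), (B), and (C) intends.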
  • Fig-20 illustrates an example of an embodiment of this invention wherein some or all of the chips on a wearable display are packed in a single SOC (system on chip), chip-scale package, or single-die package.
  • Processor (B) (1904 in Fig-19 or 2009 in Fig-20) is connected to the internet to allow internet data to be displayed on the glasses.
  • Processor (A) (2002) and Processor (C) (2004 in Fig-20) communicate directly.
  • Another example of the embodiments of this invention is that the communications between processors ((A) and (B), (B) and (C), and (A) and (C)) in Fig-19 and Fig-20 are unidirectional or bidirectional.
  • The image capture and display apparatus is battery powered, or receives power from an external source via wired or wireless power transfer.
  • The image capture and display apparatus has single or multiple audio input(s) and output(s) to allow user instructions to Processors (A), (B), and (C) in Fig-19 or Fig-20, and also transfer of information from Processors (A), (B), and (C) to the user.
  • The image capture and display apparatus has a safety feature which comprises a design that allows a margin outside the projected visual field if the projected visual field exceeds 13 degrees from center with a front-of-eye lens apparatus of more than 60% transparency.
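The 13-degree/60% safety rule can be expressed as a simple predicate. This is only a sketch; the thresholds are the ones stated above, while the function name and argument units (degrees, fractional transparency) are our assumptions:

```python
def needs_peripheral_margin(fov_from_center_deg, transparency):
    """Safety rule from the text: if the projected visual field exceeds
    13 degrees from center AND the front-of-eye lens passes more than
    60% of external light, the design must keep an unobstructed
    peripheral margin outside the projected image."""
    return fov_from_center_deg > 13 and transparency > 0.60
```

A personal-theater display (opaque, low transparency) would not trigger the rule, whereas a see-through display covering the full central field would.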
  • An optical element, such as a lens with a holographic optical element (HOE) or diffractive optical element (DOE), is shown at (116).
  • A camera is shown at (111).
  • A Free-Form Prism/Mirror is shown at (115).
  • A microdisplay is shown at (114) and a light source is shown at (118).
  • A set of batteries is shown at (117).
  • Controller circuitry is shown at (113).
  • Color blindness is defined as the inability to differentiate discrete areas of the visual field by varying wavelengths of light in three ranges: approximately 564-580nm, approximately 534-545nm, and approximately 420-440nm. These ranges are approximate, as shown in Fig-12.
  • Fig- 14 illustrates an example of a test apparatus for color blindness.
  • The Ishihara Color Blindness Test is an internationally accepted form of testing for color blindness; the standard viewer is able to score 100%, while any deviation is considered a form of color blindness.
  • The apparatus shall modulate the cumulative amount and mixture of light emitted from the display to increase or maximize (100% is maximum) the score on the Ishihara Color Blindness Test, or increase the ability to differentiate colors in the three ranges of wavelength described here (approximately 564-580nm, approximately 534-545nm, and approximately 420-440nm).
  • The algorithm to modulate the displayed image shall vary the total light emission from the display and the mixture of colors (wavelengths of light) emitted.
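A minimal sketch of such color strengthening, assuming 8-bit RGB pixels and a hypothetical fixed gain on the weakened channel (the gain value, channel index, and function name are illustrative, not part of the patent):

```python
def strengthen_color(pixel, gain=2.0, channel=1):
    """Boost one color channel for a viewer whose corresponding Cone
    photoreceptor reacts weakly (e.g. channel=1, green, for
    deuteranomaly), clipping at the display maximum of 255.

    `pixel` is an (r, g, b) tuple of 8-bit values.
    """
    boosted = list(pixel)
    boosted[channel] = min(255, round(boosted[channel] * gain))
    return tuple(boosted)
```

For example, a mid-level green of 80 would be emitted at 160, making it intense enough to trigger a weakened Green Cone while leaving red and blue unchanged.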
  • Visual Acuity is defined as the ability to differentiate objects at a distance.
  • Acuity = 1 / (gap size [arc min])
  • The standard viewer has a visual acuity of 1.0, and therefore is able to differentiate objects at 1 arc min (1/60 of a degree). Visual acuity less than 1.0 is considered a deficiency in visual acuity.
  • A comparison of 304 and 404 demonstrates a loss of visual acuity: in 304, the horizontal lines of the letter E can be differentiated, while in 404 they cannot.
  • The apparatus shall enable an individual with a deficiency in visual acuity to perceive 304.
  • The apparatus shall enable a viewer to increase visual acuity, as described as 1 / (gap size [arc min]).
  • The algorithm to modulate the image shown on the apparatus shall combine two elements: (1) magnification of the object in question and (2) an increase in contrast.
  • The apparatus shall provide an option to invert black and white in the field of view. Although the mathematical difference in contrast remains unchanged with the inversion of dark and light areas of the visual field, the eye is trained to detect small areas of light on a dark background far better than a small area of dark on a light background.
  • The apparatus shall increase visual acuity (defined as 1/gap size [arc min]) in an individual with a deficiency in visual acuity (defined as visual acuity less than 1.0) by an algorithm using at least one of (1) increasing the magnification of the object in question (defined as an increase in the horizontal and vertical arc lengths of an object in the visual field) and (2) increasing the contrast (K, defined as (Lh-Ll)/Lh).
  • The apparatus shall provide an option to invert light and dark (black and white) areas depending on the preference of the user.
  • The apparatus shall enable a viewer with a deficiency of visual acuity for near objects to perceive them in a manner similar to the area inside the circle in Fig-6, while a viewer with a deficiency of visual acuity for far objects perceives them in a manner similar to the area inside the circle in Fig-5.
  • Given a deficiency in visual acuity that depends on the distance from viewer to object, the apparatus employs an algorithm that varies (1) the focal length of the camera depending on the distance from the viewer to the object, (2) the magnification of the object in question, and (3) the contrast of the emitted display image, to maximize visual acuity.
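Combining the elements above, a toy version of the acuity modulation might look like the following. All parameter values are illustrative: nearest-neighbour magnification stands in for optical/digital zoom, and a linear stretch about mid-gray stands in for raising K = (Lh - Ll)/Lh:

```python
def modulate_for_acuity(image, magnify=2, contrast_gain=1.5, invert=False):
    """Sketch of the acuity algorithm: (1) magnify the object
    (increasing its horizontal and vertical arc lengths), (2) increase
    contrast, and optionally (3) invert light and dark.

    `image` is a 2D list of luminance values in 0..255.
    """
    # (1) nearest-neighbour magnification: repeat each pixel and row.
    out = [[px for px in row for _ in range(magnify)]
           for row in image for _ in range(magnify)]
    # (2) stretch luminance about mid-gray, clamping to 0..255; this
    # raises Lh and lowers Ll, increasing K = (Lh - Ll) / Lh.
    out = [[min(255, max(0, round(128 + (px - 128) * contrast_gain)))
            for px in row] for row in out]
    # (3) optional black/white inversion, per the user's preference.
    if invert:
        out = [[255 - px for px in row] for row in out]
    return out
```

A 1x2 patch [128, 200] becomes a 2x4 patch with the bright pixel stretched to 236, i.e. larger and higher-contrast, which is exactly the combination the bullets above call for.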

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A See-Through Display System with the ability to correct visual deficits such as presbyopia, color blindness and poor night vision is disclosed. This invention enables the correction of visual deficits using camera(s), microdisplay(s), controlling circuit(s) with digital grayscale control, and see-through optics such as free-form lens/mirror, half-mirror, diffractive and/or holographic optical element(s).

Description

Title: See Through Display enabling the correction of visual deficits
Inventor: Victor Stone & Fusao Ishii
Cross Reference to Related Applications
[0001] This application is a Continuation in Part Application of US Application 14/121,588 filed on September 21, 2014 and Application 14/121,588 is a Non-Provisional Application and claims the Priority Date of previously filed Provisional Application 61/960,537 filed on September 21, 2013.
TECHNICAL FIELD
[0002] This invention relates to a wearable visual sensor-modulator-display system that (1) is worn on a user's head or face, (2) has a single or multiple visual sensors, (3) has a video image modulator, and (4) projects a video image via a microdisplay to the user's visual field. More particularly, this invention relates to the application of aforementioned display system to acquire visual imagery, modulate the video image in a manner that corrects for visual deficits and displays the visual imagery in a manner that the user can perceive and
differentiate.
BACKGROUND OF THE INVENTION
[0003] Humans have various medical conditions that alter their ability to see. The so-called standard viewer is able to differentiate between multiple colors, resolve specific shapes at a standard distance, and see under specific lighting conditions. The so-called standard viewer also has the ability to maintain psychological stability despite ambient darkness. The deviation of the physiological and mental state from that of the standard viewer is considered a medical condition when the deviation obstructs daily living.
[0004] APPLICATION FOR PRESBYOPIA: As people age, they commonly lose the ability to focus on both near and far objects. This phenomenon is commonly called senior vision or presbyopia in medical language. Presbyopia happens because of the natural hardening of the lens in a viewer's eye. The hardening results in a decreased ability for the muscles to contract and expand the shape of the lens. The loss of near vision causes the most obstacles for daily life and viewers augment their vision with a low power magnification in the form of reading glasses. However, the natural progression of presbyopia is not limited to near vision, but also far vision because of the progressive hardening of the lens as described above.
[0005] Fig-2 shows the natural physiological mechanism of focusing on an object. The eye perceives the letter 'E' (201) at some distance. To focus the object into the viewer's central vision (206), the eye muscles (203) modulate the shape of the lens (204) so that the lens will focus the object 'E' onto the retina (206). The hardening of the lens leaves the eye muscles unable to sufficiently change the lens form, and the object cannot be focused onto the retina. Fig-3 shows the object 'E' (301-304) in focus as perceived by a young viewer, while Fig-4 shows the object 'E' (401-404) as perceived by a viewer with presbyopia who is unable to focus the object onto the retina.
[0006] Viewers with presbyopia can still see clearly, but their visual focus is limited to a narrow range of distances. For the purposes of this patent, it is useful to know that such viewers can see objects clearly at a distance of 1-2m.
[0007] Common optics can correct for this visual deficit; however, conventional optics can only correct for a single focal point. For example, reading glasses often seen in drug stores can enable a viewer with presbyopia to see near objects clearly, but the same lens cannot be used for objects far away. Fig-5 shows the usage of a concave lens (501) to correct for myopia (inability to focus on far objects) and Fig-6 shows the usage of a convex lens (601) to correct for the near distances in presbyopia.
[0008] More sophisticated optics were introduced by the bifocal lens, whereby the upper half of the lens is constructed to assist viewers with far distance view (702), while the lower half of the lens is constructed to assist viewers with near distance view (703). This enables a user with presbyopia to view both near and far with a single pair of glasses. Fig-7 shows an example of a progressive lens (701) simultaneously correcting for near (703) and far (702) distances, albeit with near and far distance focus restricted to the lower and upper visual fields, respectively.
[0009] However, a conventional bifocal lens enables a viewer to focus on near and far only by separating the lens areas. For example, the lower portion of the lens cannot be used to perceive objects at a far or even normal distance. One example of this challenge is a viewer wearing bifocal lenses descending a flight of stairs. The bifocal lens enables the viewer to read objects at 30-45cm, but in turn obstructs the viewer from perceiving objects at a distance of 1-2m, including the viewer's own feet and the next step in the staircase. This causes significant concern for viewers trying to descend a flight of stairs.
[00010] Another commonly mentioned challenge is a viewer with presbyopia trying to enjoy a round of golf. Bifocal lens enables a user to read a scorecard, but prohibits the viewer from focusing on a golf ball (801) when taking a shot as in Fig-8. After taking a shot, the viewer can only see the ball in flight with the upper half of the visual field because the lower half can only focus on near objects.
[00011] Such societal needs call for a pair of glasses that is comfortable to wear, enables a wide field of view, and enables a viewer to simultaneously see objects at near and far distances in focus without restriction on upper and lower visual fields as seen in bifocal glasses.
[00012] APPLICATION FOR COLOR BLINDNESS: The eye perceives light through photoreceptor cells called Rods and Cones located in the retina of the eye (Fig-11). Light energy elicits a cellular reaction whereby the ionic composition internal to the photoreceptor cells triggers a nerve impulse which is transmitted to the brain as a light signal. Rods (1102 and 1104) and Cones (1103) are found on an array and the selective triggering of these photoreceptors translates light images into a visual image perceived by the brain.
[00013] Color is perceived by Cones. There are three types of Cones, each with
photoreceptors that enable selectivity for the three primary colors, red, green, and blue. Each red, green, and blue Cone photoreceptor has protein structures that react to light energy with wavelengths correlating to red, green, and blue light. The gene that codes for these protein structures is X-linked (found on the non-redundant arm of the X-Chromosome), and therefore males have a propensity to have genetic deficits associated with Cone photoreceptors. The sensitivity of these Cone photoreceptors is shown in Fig-12. The first type of Cone has the sensitivity shown as the curve marked (1201) or L (long wavelength), the second type of Cone has the sensitivity of the curve marked (1202) or M (middle wavelength), and the third type of Cone has the sensitivity of the curve marked (1203) or S (short wavelength). The horizontal axis is wavelength and the vertical axis is the sensitivity normalized to each peak.
[00014] The impact of having a genetic deficit on a Cone photoreceptor is the inability to differentiate that specific color. A genetic mutation located on the second type of Cone photoreceptor (whose sensitivity is the curve marked M or 1202, hereafter called the Green Cone) will be used as an example, as it is the most prevalent. Green light may enter the eye and strike the photoreceptor layer of the retina; however, little or none of the Green Cone photoreceptor reacts to the light because of the genetic deficit. Green light fails to trigger a nerve response, and therefore the brain does not perceive this wavelength of light. The brain is still able to perceive red and blue light, and therefore this patient will see the world in two colors, red and blue. This is the mechanism of color blindness.
[00015] Fig-13 shows the populations of normal vision and color blindness. Weak sensitivity of the first type of Cone photoreceptor (Red) is called protanomaly, and its complete loss is called protanopia. Similarly, the second type of Cone photoreceptor (Green) deficits are called deuteranomaly and deuteranopia, and the third type of Cone photoreceptor (Blue) deficits are called tritanomaly and tritanopia, respectively. The second-type Cone deficit has the largest population among color blindness, at 2.7% (deuteranomaly) and 0.56% (deuteranopia). Complete color blindness (Achromatopsia) is very rare, at less than 0.0001%, as shown in Fig-13. The color bars in Fig-13 show how each type perceives the color spectrum.
[00016] Fig-14 shows the patterns used for the color blindness test. Normal vision sees the pattern (1401), which has a red character "6" over a background of yellow, green and blue, and the pattern (1405), which has a green character "74" over a red and yellow background.
Protanopic and Deuteranopic vision cannot discriminate red and green, and therefore cannot see these characters, as shown in (1402, 1403, 1406 and 1407), although Tritanopic vision can read them, as shown in (1404 and 1408). Fig-15 shows another example of how images are perceived by each type of color blindness. The image (1501) is by Normal Vision. The image (1504) is by Protanopic Vision, which loses red and a large part of green, because the sensitivity of the first type of Cone photoreceptor overlaps from red to green. The image (1508) is by Deuteranopic Vision, which loses green and a large part of red. The image (1510) is by Tritanopic Vision, which loses blue.
[00017] It is important to note the mechanism of color blindness. A genetic deficit results in a change in photoreceptor protein shape, and in the majority of patients, this weakens the photochemical reaction. In other words, if the incoming light for the color in question is strengthened, a photochemical reaction can occur triggering a nerve response and the brain can perceive the color. In the above mentioned example of a green color blind patient, if a three color image were presented where the green color has significantly increased intensity, then this patient can differentiate between the three colors and perceive the world in red, green, and blue.
[00018] Such societal needs call for an apparatus that can capture the images of a patient's visual field, modulate the image by increasing the intensity of a specific color, and display this modified image to the patient. If images can be captured, modified, and displayed to the patient in real time, the patient can effectively enjoy daily life in three colors rather than two.
[00019] It is noteworthy that such an apparatus can help patients with genetic deficits that weaken the photoreceptor reaction to a specific wavelength of light. If the genetic deficit rendered the photoreceptor completely unreactive to the assigned wavelength, increasing the intensity would not enable correction of the deficit. Fortunately, the majority of patients with color blindness have a weakness in perceiving green, and the apparatus of this invention will benefit the vast majority of color blind patients, as shown in Fig-13.
[00020] APPLICATION FOR POOR NIGHT VISION: Over time, humans progressively lose night vision, or the ability to distinguish objects in darkness. The cause of this visual deficit can be multi-faceted with underlying conditions including, but not exclusive to, early cataracts, vitamin A deficiency, retinitis pigmentosa, and diabetes. Any progressive visual deficit warrants medical attention; however, not all conditions have immediately reversible treatments.
[00021] Such societal needs call for an apparatus that can capture the images of a patient's visual field in darkness, modulate the image to increase the brightness or render the image in such a way that objects can be distinguished, and display this modified image to the viewer. If such visual fields can be captured by image data, modified, and projected in real time, people can greatly enhance their ability to see in darkness.
[00022] SAFETY FEATURE FOR ALL APPLICATIONS: A safety factor that should not be missed is the importance of peripheral vision. Many people focus on the central vision or macular vision, where vision is perceived in color and the resolution is the highest. In contrast, peripheral vision has very low visual acuity and generally perceives in black and white. However, the brain receives many cues from the peripheral field which ultimately contribute to spatial awareness, motion detection, and depth perception. One good example is to wear a pair of goggles that restricts vision in the periphery; such viewers will find many activities of daily living become restricted. Therefore, it is desirable for corrective glasses to correct a wide field of view yet ultimately leave a peripheral margin unobstructed, providing the viewer with nascent visual cues from the periphery.
[00023] Human eyes can see an image in high resolution and in color only in the central area of field of view as shown in (1605) of Fig-16, but eyes can see very wide angle view in lower resolution and without color as wide as 180 degrees horizontally (from 1607 to 1604) and 120 degrees vertically (1606 to 1608) in Fig-16.
[00024] In past years, preservation of the peripheral field for wearable displays was less of a concern. This is because wearable displays were either (1) completely opaque, or (2) covered only a minor aspect of the visual field. Eye-Trek by Olympus, as shown in Fig-21, and HMZ-T2 by Sony, as shown in Fig-22, are both wearable displays that are completely opaque. The peripheral vision is completely cut off by light shields and the visual field is meant to be as dark as possible except for the projected image. The designers of these products intentionally created them in such a way as to decrease the entrance of ambient light, which in turn increased the contrast ratio of the display, thus creating a better visual experience. Such products were not meant for wear during activities of daily living, but as personal theaters for viewers who wanted to concentrate on viewing the display. Such products do not need this safety feature because users will likely be seated, not moving about or conducting other operations simultaneously.
[00025] On the other hand, Glass by Google, as shown in Fig-24, and MEG 4.0 by Olympus are both examples of wearable displays that cover a minor area of the visual field. The displays are meant to be worn while conducting activities of daily living; however, the majority of the visual field is unobstructed, and therefore users will have no issues perceiving peripheral cues while using these products.
[00026] However, as wearable displays advance, it is expected that wearable displays will cover a 'full field of view' and be designed for simultaneous wear with activities of daily living. This invention seeks to be such a product whereby people with visual deficits such as presbyopia, color blindness, or poor night vision can enjoy life with a visual field that is corrected for the deficit. We expect this type of product to become useful when the display can project more than 13 degrees field of view from center and have a transparency exceeding 60%. The rationale for the field of view (13 degrees from center) is that it covers central vision (macular vision). Projection beyond that range enters into peripheral vision. 60% transparency refers to 60% of light being able to pass through the image-capture and display apparatus lens and enter the user's eye. For a visual apparatus to be useful in daily living, the user must be able to see through the apparatus and see the visual field naturally, and we believe 60% transparency is the threshold below which the reduced light would be considered obstructive for natural activities. For example, sunglasses diminish light transparency
(transparency is under 60%), and although it is possible to conduct activities of daily living while wearing sunglasses, it is not considered natural. Another example is a standard pair of glasses for myopia (near-sightedness). The field of view clearly exceeds 13 degrees from center and the transparency exceeds 60%. With myopia glasses, the user considers the visual field to be natural and wears them while simultaneously conducting activities of daily living.
[00027] When a user views through an image capture-display apparatus that can project more than 13 degrees from center with transparency exceeding 60%, we believe the user will require less cognitive thought. For example, when looking through 'personal theater' goggles such as Sony's HMZ-T1, the viewer clearly understands that the field of view is not natural and takes appropriate measures to prevent disorientation such as sitting down to view the image. However, if the image-capture display device is sufficiently transparent (more than 60%) and has a field of view that covers the entire central field and extends into the peripheral field (exceeds 13 degrees from center), the viewer will consider the visual field to be natural much the same way one considers the visual field when wearing myopia glasses.
[00028] When peripheral view is completely lost, the viewer loses visual cues such as motion and direction, which becomes disorienting. This disorientation can result in falls or accidents while conducting activities of daily living. Ideally, an image-capture and display apparatus will capture the entire visual field and enable a user with a full field of peripheral vision. However, we believe that there is utility to maintaining a margin in the peripheral visual field that is unobstructed by the projected image, because it creates a safety mechanism whereby the viewer maintains the ability to detect peripheral cues even in the event of failure by the apparatus. We believe this safety feature is critical to this invention and claim the design of an image-capture and display apparatus such that the projected image leaves an unobstructed margin of the peripheral visual field.
SUMMARY OF THE INVENTION
[00029] This invention aims to resolve this issue by fashioning a wearable display with a mounted optical sensor system that senses the user's visual field, modulates the image, and then displays that image in real time into the user's visual field. The image modulation enables image data of objects at multiple focal distances to be reconstructed into an image with objects at a focal distance that the user can perceive and differentiate. Fig-9 and 10 illustrate this concept. In Fig-9, the larger rectangular frame (900) represents a hypothetical visual field. In said field, four objects are in view, two near and two far. Conventional bifocal lens restrict the focal distances of objects to the upper and lower fields, and therefore objects 1 (901) and 2 (902) can be seen, but object 3 (903) and 4 (904) are out of focus. This invention seeks to create a visual display system whereby camera inputs detect image data of the visual field and image data for individual objects are modulated and displayed to the viewer at a focal distance that the viewer can readily see.
[00030] This invention further intends to accomplish this by a data circuit loop whereby the visual field is captured by the image sensors, the video data is modulated by a processing unit to suit the user's specific needs, such as presbyopia (inability to focus on near and far objects because of hardening of the lens), and then this modified visual field is projected onto a display positioned in front of the user's eyes. The processor may be single or multi-part, and the parts communicate with each other through wired or wireless means. The image processing is expected to consume significant resources in both power and computation, and therefore wireless communication among the processing units enables calculations to be outsourced to a unit positioned outside the actual image capture and display apparatus.
[00031] To enable usage in daily life, the display lens of the apparatus is created in such a way that maximizes the transmission of light so that the user has a natural view of the outside field of view when the projector is not displaying an image.
[00032] This apparatus specifically incorporates a safety feature whereby the outer margin of the user's visual field is left intentionally intact without obstruction by the display lens or the display projection area. This enables the user to maintain visual cues from the peripheral vision which is useful for depth perception, motion detection in the periphery, and other spatial awareness cues that enable natural walking and activities of daily living.
[00033] The sound sensor may be single or multiple, with audio capture apparatus on the surface of the apparatus or positioned in a tube. The tube may or may not be pointed in-line with the user's visual field. The purpose of this orientation includes the optimization and differentiation of audio inputs to the user's attention. By orienting a tube in front of the audio sensor in-line with the user's visual field, sound inputs coming from the front of the user will be selectively captured, thereby increasing the level of sound differentiation.
[00034] Audio data will flow from the sound sensor(s) to a sound processor system which will then transmit the audio data to the user's ears directly or through bone-conduction mechanisms.
[00035] The sound processor system may be single or multi-part and communicate with other processor components through wired or wireless means.
[00036] The presence of multiple sound sensors enables different audio signals to flow into the processor. The processor can compare the audio inputs and distinguish sound of interest while modulating ambient sound or noise. For example, consider a head-mounted apparatus with four audio sensors in total: two positioned in forward-facing tubes and two on the surface of the apparatus. The surface audio sensors will detect the most sound; however, they cannot distinguish between ambient noise from an air conditioner and a person speaking in front of the user. The audio sensor-processor-speaker system may increase the absolute level of the audio inputs, but the contrast between the ambient noise and the forward speaker will not change, and the user will have difficulty discerning the words spoken in front. Audio sensors positioned within forward-pointing tubes will selectively sense sound from the front. With both types of inputs, surface and tubular, the processor can compare the signals and identify what is noise and what is sound from the front. If there is a significant discrepancy, the processor can selectively amplify the forward sounds and attenuate the surface sounds, thereby enabling the user to better distinguish forward sounds from ambient noise. This invention is not restricted to this four-sensor system; rather, it intends to capture the merits of the sound spatial selectivity described here.

PRIOR ART
[00037] Wearable displays have received significant attention in recent years. Wearable displays, especially those with high resolution, are expected to augment or perhaps replace the smartphone as the mobile interface to the internet. Many inventors have developed wearable displays, but many are opaque: users can see the display, but cannot see through it. This prevents viewers from walking freely or from comparing the projected image with the external view. This situation encouraged inventors to develop see-through displays, so that viewers can walk freely as well as compare projected images with the see-through view.
[00038] Levola, in SID 2006 Digest, ISSN 0006-0966X/06/3701-0064, "Novel Diffractive Optical Components for Near to Eye Displays", discloses an example implementation of a see-through display, locating the LCD device in the middle of the two eyes, but this still does not correct for visual deficits of focal distance.
[00039] Mukawa et al., in SID 2008 Digest, ISSN 0008-0966X/08/3901-0089, "A Full Color Eyewear Display using Holographic Planar Waveguides", disclose an eyeglass display system that implements see-through capability with two plates of holographic optical elements. This system has the same configuration as the above prior art and cannot correct for multiple objects at varying focal distances.
[00040] Kasai et al. disclose in patent US7460286 an eyeglass-type display system that implements see-through capability with a holographic optical element. About 85% of external light can pass through the lens and reach the viewer's eyes. This means that background brightness can be very high in a bright room or bright outdoors. A bright background washes out the superimposed image, and a black object cannot appear black, but gray or even white. This system is not able to correct for multiple objects at varying focal distances.
[00041] US patent US7369317, Kuo Yuin Li et al., "Head-Mounted Display utilizing an LCOS panel with a color filter attached thereon", discloses a compact example of a see-through eyeglass display using LCOS and a PBS (polarized beam splitter). This invention does not include any mechanism to correct for visual deficits in the focal distance of multiple objects.
[00042] US patent US7855743, Sako et al., "Image Capturing and Displaying Apparatus and Image Capturing and Displaying Method", discloses an image capture and display apparatus that deals with visual deficits of focal distance, including presbyopia; however, the fundamental invention and the claims relate to the adjustment of the focal distance of the original image capture device. The embodiment apparatus of Sako et al., if successful, may be helpful to a viewer with presbyopia, but the apparatus will ultimately (1) capture an entire visual field with a set focal distance, or (2) magnify a given field of view through telescopic means and display it in a screen-within-a-screen format. Our claim is distinct because we seek to create a visual modulation system whereby multiple objects at different focal distances are corrected to a distance that the viewer can perceive. The spatial relationships of the various objects are kept the same; however, each object is projected to the viewer as if it were at a distance where detail can be resolved. Simply put, Sako et al. do not claim an image capture-display apparatus that simultaneously corrects for multiple objects at different focal distances.
[00043] US patent US854149, Sako et al., "Imaging Display Apparatus and Method", further extrapolates on the aforementioned US7855743 by claiming various forms of the screen-within-a-screen theme. Our patent is distinct because we seek to create an image capture-display apparatus that modulates the natural field of view in such a way that object detail can be resolved by the viewer without resorting to a screen-within-a-screen format.
[00044] The above prior inventions propose see-through display apparatus that can be worn on the head and enable digital image data to be displayed. Some combine image sensing and display into a single apparatus. However, none of these prior arts seeks to create an image sensing and display apparatus that modulates captured image data by modifying the focal distance of multiple objects and displays them for the viewer.
[00045] In recent years, registered patents such as US7145571 by Jones et al., "Technique for enabling color blind persons to distinguish between various colors", have sought to create solutions that enable people with color blindness to distinguish between objects by means other than color, such as hue and patterns. Fig-13 shows an image from the aforementioned patent. Patterns are matched to colors, and image data is modified to show these patterns in lieu of colors, thereby circumventing the patient's color vision deficit and leveraging the ability to distinguish black and white. Our invention is fundamentally different because we seek to harness the weakened but present ability of a patient with color blindness, who is not truly blind but can sense the color if its intensity is significantly increased.
[00046] Other patents, still in the application stage, seek to create electronic apparatus that uses captured image data with data modification schemes to enable a color blind person to distinguish those objects. Once again, our invention is distinct because we seek to enhance object and color differentiation among color blind people by increasing the amount of photoreceptor reaction in the cones. The primary method to achieve this capability is not the modulation of the video image data, but the intensity of the light source and the timing of the microdisplay.
[00047] Enhancement of night vision, or the ability to distinguish objects in darkness, has significant commercial value, as well as benefits to patients with medical conditions such as diabetes and cataracts. As seen in US7755831, Filipovich et al. demonstrate an optical system with an image intensifier that enhances vision in muted ambient light. Our invention is distinct because we utilize an image capturing device creating digital image data and the projection of modified image data by a microdisplay, neither of which is a primary claim of Filipovich et al.
[00048] As seen in US7855743, Sako et al., "Image capturing and display apparatus and image capturing and display method", suggest an image capture and display apparatus whereby users can visualize enhanced night vision as well as receive aid for presbyopia. However, their invention makes a primary claim whereby the apparatus has sensors of the viewer's physiologic state and state of motion, which are distinct from the image sensor and provide modulatory inputs to the controller mechanism. Our invention is fundamentally distinct because it has no such need; an embodiment of our invention does not include the direct communication of the viewer's physiologic and motion sensors for the purpose of image modulation.
[00049] As seen in US8294766, Sako et al., "Imaging apparatus and Imaging Method", suggest an image capture and display apparatus whereby users can visualize enhanced night vision as well as receive aid for presbyopia. However, their invention makes a primary claim whereby the apparatus has environmental sensors, which are distinct from the image sensor and provide modulatory inputs to the controller mechanism. Our invention is fundamentally distinct because it has no such need; an embodiment of our invention does not include the direct communication of environmental sensors for the purpose of image modulation.

BRIEF DESCRIPTION OF THE DRAWINGS
[00050] Fig-1 shows an example of this invention. (116) is a transparent plate functioning as a waveguide, having a hologram layer to enable a see-through display. (111) is a camera lens and (112) is a CMOS image sensor module. (115) is a mirror to reflect projected light into the waveguide (116). (118) is a light source, (114) is a projection lens, (113) is the controller electronics, and (117) is an eyeglass frame containing a battery.
[00051] Fig-2 illustrates that the object (201) is projected onto the retina (205). Light (208) is projected from the object (201) and led to the cornea (202) and lens (204). The ciliary muscle (203) adjusts the lens (204) to focus the light beam (207) onto the retina (205) and fovea (206).
[00052] Fig-3 shows how a viewer with normal vision sees the images. The large characters (301, 302, 303) can be seen, while the small character (304) becomes difficult to read.
[00053] Fig-4 shows how a viewer with presbyopia sees the images. Even the large character (401) is not focused on the retina.
[00054] Fig-5 shows the use of a concave lens (501) to correct for myopia (the inability to focus on far objects). Both the far object (502) and the near object (503) can be focused.
[00055] Fig-6 shows the use of a convex lens (601) to correct for near distances in presbyopia. The near object (603) can be focused, but the far object (602) cannot.
[00056] Fig-7 shows the more sophisticated optics introduced by the bifocal lens, whereby the upper half of the lens is constructed to assist the viewer with far-distance viewing (702), while the lower half is constructed to assist with near-distance viewing (703). This enables a user with presbyopia to view both near and far with a single pair of glasses. Fig-7 shows an example of a progressive lens (701) simultaneously correcting for near (703) and far (702) distances, albeit with near- and far-distance focus restricted to the lower and upper visual fields, respectively.
[00057] Fig-8 shows that a bifocal lens enables a user to read a scorecard, but prevents the viewer from focusing on a golf ball (801) when taking a shot.
[00058] In Fig-9, the larger rectangular frame (900) represents a hypothetical visual field. In said field, four objects are in view, two near and two far. A conventional bifocal lens restricts the focal distances of objects to the upper and lower fields, and therefore objects 1 (901) and 2 (902) can be seen, but objects 3 (903) and 4 (904) are out of focus.
[00059] In Fig-10, the larger rectangular frame (1000) represents a displayed field wherein all the images are captured by the camera (111 and 112) attached to the wearable display in Fig-1, and all the captured images are individually focused and displayed at the same distance for the viewer, so that the viewer can see all images in focus.
[00060] Fig-11 illustrates the structure of the human eye, wherein (1101) is the lens, (1102) and (1104) indicate rods, which sense brightness, and (1103) indicates cones, which sense three colors.
[00061] Fig-11A shows a microscopic image of rods and cones. There are three different types of cones: the first type senses long wavelengths of light (red), the second senses middle wavelengths (green), and the third senses short wavelengths (blue).
[00062] Fig-12 shows the sensitivity curves (1201, 1202 and 1203) of each type of cone with respect to the wavelength of light. For example, the first type of cone absorbs light energy with the sensitivity curve L (1201), having wavelengths between about 500nm and 650nm with its peak at 560nm, converts the photon energy to chemical energy, and transfers it to the brain through the nervous system. The second type of cone absorbs light energy with the sensitivity curve M (1202), converting photon energy around 530nm (green). The third type does the same with curve S (1203, blue). This means that the function of the first type of cone is to sense primarily red light, the second green, and the third blue. If the first type of cone is unable to function, the viewer will have red color blindness, Protanomaly or Protanopia, depending on the extent. If the second type of cone has a deficit, it causes green color blindness, Deuteranomaly or Deuteranopia, depending on the extent. A deficit in the third type causes blue color blindness, Tritanomaly or Tritanopia.
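To make the cone description concrete, the following Python sketch models the three cone types as Gaussian sensitivity curves and maps a non-functioning cone to the resulting dichromacy. The peak wavelengths follow the text; the Gaussian widths are illustrative assumptions, not measured physiologic values:

```python
import math

# Peak sensitivities (nm) of the three cone types per Fig-12.
# SIGMA is an assumed curve width for illustration only.
CONES = {"L": 560.0, "M": 530.0, "S": 420.0}
SIGMA = 40.0

def sensitivity(cone, wavelength_nm):
    """Relative response of one cone type, modeled as a Gaussian."""
    peak = CONES[cone]
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * SIGMA ** 2))

# A non-functioning cone type maps to the corresponding dichromacy.
DEFICIT = {"L": "Protanopia", "M": "Deuteranopia", "S": "Tritanopia"}
```

For example, at 530nm the L-cone response is much larger than the S-cone response, reflecting the red-green overlap discussed for Protanopic vision in paragraph [00065].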
[00063] Fig-13 shows the population distribution of color blindness. 92% of people have normal color vision. The largest group of color blind patients has Deuteranomaly (2.7%) and Deuteranopia (0.59%); then come Protanomaly (0.66%) and Protanopia (0.59%), followed by Tritanopia (0.016%) and Tritanomaly (0.01%). The color bars show how patients in each category will see the colors. Complete color blindness is less than 0.0001%. The majority of color blindness, except complete color blindness, can be corrected by an enhanced vision system.
[00064] Fig-14 shows the patterns used for a color blindness test. Normal vision sees the pattern (1401), which has a red character "6" over a background of yellow, green and blue, and the pattern (1405), which has a green character "74" over a red and yellow background. Protanopic and Deuteranopic vision cannot discriminate red and green, and therefore cannot see these characters, as shown in (1402, 1403, 1406 and 1407), although Tritanopic vision can read them, as shown in (1404 and 1408).
[00065] Fig-15 shows another example of how images are perceived by each type of color blindness. The image (1501) is seen by normal vision. The image (1504) is seen by Protanopic vision, which loses red and a large part of green, because the sensitivity of the first type of cone photoreceptor overlaps from red to green. The image (1508) is seen by Deuteranopic vision, which loses green and a large part of red. The image (1510) is seen by Tritanopic vision, which loses blue.
[00066] Fig-16 shows the field of view (FOV) of human eyes. Human eyes can see an image in high resolution and in color only in the central area of the field of view, as shown in the green area (1605), but they can see a very wide-angle view in lower resolution and without color, as shown in the blue area, which is as wide as 180 degrees horizontally, from +90° (1607) to -90° (1604), and 120 degrees vertically, from +50° (1606) to -70° (1608).
[00067] Fig-17 shows an example of this invention with a hypothetical visual field containing multiple objects at varying focal distances. The camera (1701) captures the objects (901, 902, 903 and 904 in Fig-9) at various distances, auto-focuses on each object, and captures the focused images. The display shows all focused images (1001, 1002, 1003 and 1004 in Fig-10) in the field of the display (1702).
[00068] Fig-18 illustrates an example of this invention wherein the video signal is modulated to enhance the video image for a viewer 1) who needs images of individually focused objects regardless of distance, with adjusted size and brightness of the image (presbyopia, myopia or hyperopia), or 2) who needs strengthened color to correct color blindness, or 3) who needs visualized images in darkness (night vision). (1801) is a visual sensor such as a camera with a CMOS image sensor, and (1802) is a processor that modulates the images from the camera to provide a viewer of the above 1) and/or 3) with modulated image signals of individually focused objects regardless of distance, with adjusted size and brightness, and to provide a viewer of the above 2) with strengthened color to correct color blindness. The display system (1803) shows said modulated images to the viewer.
[00069] Fig-19 illustrates an example of this invention wherein the video signal from the camera (1901) to Processor (A) (1902) is analyzed for focus and brightness and fed back to the camera (1901) so that the images of individually focused objects are captured with adjusted brightness. Processor (A) (1902) transmits the image data to Processor (B) (1904) of an external unit, such as a cellphone, which has a more powerful processor than that of the wearable display, through wireless transmission (1903) such as electromagnetic waves or modulated light. Video data processing often requires high computation and consumes more energy than the battery of the wearable display can support. The external Processor (B) (1904) processes the data and returns it to another processor, (C) (1906), in the wearable display, and Processor (C) transfers the data to the display (1907) in the wearable display through wireless transmission (1905).
[00070] Fig-20 illustrates an example of this invention wherein some or all of the chips of a wearable display are packed in a single SOC (system on chip), single-scale package or single-die package.
[00071] Fig-21 shows an example of a face-mounted display made by Olympus, "Eye-Trek". This completely obstructs the viewer's view.
[00072] Fig-22 shows a head-mounted display, HMZ-T2 by Sony, which is a wearable display that is completely opaque.
[00073] Fig-23 shows an example of a wearable display with see-through optics using half mirrors. The light transmission is less than 50%, and the image becomes dark.
[00074] Fig-24 shows an example of wearable glasses with display and camera. Glass by Google, as shown in Fig-24, and MEG 4.0 by Olympus are both examples of wearable displays that cover a minor area of the visual field. The displays are meant to be worn while conducting activities of daily living; however, the majority of the visual field is unobstructed, and therefore users will have no issues perceiving peripheral cues while using these products.
[00075] Fig-25A shows an example of digital pulse-width modulation (PWM) of brightness. Analog brightness control used to be popular for analog display devices such as CRTs and LCDs. Analog brightness control uses analog control of the driving voltage or current of the display device to control brightness. However, precise control of brightness is difficult with analog control, and digital brightness control is more accurate; in other words, higher-grayscale brightness control is possible. Instead of changing the duty ratio of the pulse width, binary PWM as shown in Fig-25A is becoming more popular, because the digital video signal can be used directly, with "1" as an ON pulse and "0" as an OFF pulse. Fig-25A shows an example of 8-bit binary PWM wherein the entire frame time is divided into 8 pulses whose pulse widths are 1/2 of the frame time for D0 (Most Significant Bit or MSB, 2501), 1/4 of the frame time for D1 (2502), 1/8 for D2, 1/16 for D3, 1/32 for D4, 1/64 for D5, 1/128 for D6, and 1/256 for D7 (Least Significant Bit or LSB, 2503). Fig-25B shows an example of 8-bit binary PWM with the data 10101001 in binary, which is 169 in decimal and represents a brightness of 169/256 = 66% of peak brightness. The first "1" corresponds to D0, the MSB (2504), meaning that 1/2 of the frame time must be ON at peak brightness. The "0" at D1 (2505) means the next 1/4 of the frame time must be OFF, meaning zero brightness. This process continues to D7 (Least Significant Bit or LSB, 2506). Thus any brightness that is an integer multiple of the LSB (=1/256) from 0 to 1 can be shown with 8-bit binary PWM. However, sequential order from MSB to LSB requires very high bandwidth in the signal transfer lines. Fig-25C shows an example of a non-sequential order of data transfer, which reduces the bandwidth requirement of the signal transfer. The details of non-sequential data transfer are described in US patent US8228595, Ishii et al.
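The binary PWM scheme of Fig-25A and Fig-25B can be sketched in Python as follows; this is an illustrative calculation of the bit-plane timing and resulting brightness, not a description of any claimed circuit:

```python
def pwm_plan(value, bits=8):
    """Bit-plane durations for binary PWM of an n-bit brightness value.

    Returns (on, fraction_of_frame) pairs, MSB first: the MSB plane
    lasts 1/2 of the frame time, the next 1/4, down to the LSB at 1/256
    for 8 bits.
    """
    plan = []
    for i in range(bits):                      # i = 0 is D0 (the MSB)
        on = (value >> (bits - 1 - i)) & 1     # extract bit Di
        plan.append((on, 1.0 / 2 ** (i + 1)))  # plane duration halves each step
    return plan

def brightness(value, bits=8):
    """Fraction of peak brightness produced by the PWM plan."""
    return sum(frac for on, frac in pwm_plan(value, bits) if on)
```

For the example in the text, `brightness(0b10101001)` sums the ON planes D0, D2, D4 and D7, i.e. 128/256 + 32/256 + 8/256 + 1/256 = 169/256.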
DESCRIPTION OF PREFERRED EMBODIMENTS
[00076] This invention seeks to create such a visual sensory and display system via the visual image data flow depicted in Fig-17 through Fig-20. Cameras are mounted onto a set of glasses pointed in line with the user's visual field. The cameras convert visual images into image data, which is then sent to a modulation system where the image data is divided by focal distance. The modulation system may relay this information back to the camera to recapture the image through an optical focusing system, or the modulator may focus the objects through digital algorithms. The modulator will ultimately output digital image data in which the focal distances of multiple objects are recalibrated to a distance that the viewer can readily perceive.
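The recalibration step above can be sketched as follows. The object representation (a dict per detected object) and the target display distance are hypothetical illustrations, not claimed data structures:

```python
def recalibrate(objects, display_distance_m=1.0):
    """Re-present each detected object at a single focal distance.

    Each object keeps its identity, position and apparent size; only its
    focal distance is replaced by one the viewer can resolve, and the
    object is marked as individually focused.
    """
    return [dict(o, focal_distance_m=display_distance_m, in_focus=True)
            for o in objects]
```

Objects originally at 0.3m and 10m, for instance, would both be presented as if at 1m, with their spatial relationship in the visual field unchanged.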
[00077] Fig-17 shows an example of the embodiments of this invention with a hypothetical visual field containing multiple objects at varying focal distances. The camera (1701) captures the objects (901, 902, 903 and 904 in Fig-9) at various distances, auto-focuses on each object, and captures the focused images. The display shows all focused images (1001, 1002, 1003 and 1004 in Fig-10) in the field of the display (1702).
[00078] Fig-18 illustrates an example of the embodiments of this invention wherein the video signal is modulated to enhance the video image for a viewer 1) who needs images of individually focused objects regardless of distance, with adjusted size and brightness of the image (presbyopia, myopia or hyperopia), or 2) who needs strengthened color to correct color blindness, or 3) who needs visualized images in darkness (night vision). (1801) is a visual sensor such as a camera with a CMOS image sensor, and (1802) is a processor that modulates the images from the camera to provide a viewer of the above 1) and/or 3) with modulated image signals of individually focused objects regardless of distance, with adjusted size and brightness, and to provide a viewer of the above 2) with strengthened color to correct color blindness. The display system (1803) shows said modulated images to the viewer.
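The three enhancement paths of Fig-18 can be sketched as a simple dispatch. The frame fields and gain values here are illustrative assumptions, chosen only to show how one processor could serve the three viewer needs:

```python
def modulate(frame, mode):
    """Route a captured frame to one enhancement path (cf. Fig-18).

    `frame` is a hypothetical dict produced by the visual sensor (1801);
    each branch returns a new, modulated frame for the display (1803).
    """
    if mode == "focus":        # 1) presbyopia / myopia / hyperopia
        frame = dict(frame,
                     objects=[dict(o, in_focus=True) for o in frame["objects"]])
    elif mode == "color":      # 2) color blindness: strengthen a weak color
        frame = dict(frame, gain={"red": 1.5, "green": 1.0, "blue": 1.0})
    elif mode == "night":      # 3) night vision: raise overall brightness
        frame = dict(frame, brightness=frame["brightness"] * 4)
    return frame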
[00079] This invention seeks to create the aforementioned visual sensory and display system in the shape of common glasses (lens(es), nose piece, and ear brace(s)) that are lightweight and comfortable to wear. To accomplish this, it may become necessary to divide the modulation component depicted in Fig-18 into three sections, Processors (A), (B), and (C), as depicted in Fig-19. The purpose of this division is to allow the superior computing power of Processor (B) to be external to the glasses, while the camera(s) and display(s) are still fitted into the glasses.
[00080] Fig-19 illustrates an example of the embodiments of this invention wherein the video signal from the camera (1901) to Processor (A) (1902) is analyzed for focus and brightness and fed back to the camera (1901) so that the images of individually focused objects are captured with adjusted brightness. Processor (A) (1902) transmits the image data to Processor (B) (1904) of an external unit, such as a cellphone, which has a more powerful processor than that of the wearable display. Video data processing often requires high computation and consumes more energy than the battery of the wearable display can support. The external Processor (B) (1904) processes the data and returns it to another processor, (C) (1906), in the wearable display, and Processor (C) transfers the data to the display (1907) in the wearable display. The data transmissions between Processor (A) and Processor (B) (1903) and between Processor (B) and Processor (C) (1905) are selected from a group consisting of wireless, wired and fiber-optic links.
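The three-processor split of Fig-19 can be sketched as three cooperating functions. The frame fields, thresholds and feedback keys are hypothetical; the point is only the division of labor between the glasses (A and C) and the external unit (B):

```python
def processor_a(raw_frame):
    """On-glasses pre-processing: analyze focus and brightness and build
    feedback for the camera, as described for Processor (A) (1902)."""
    feedback = {"refocus": not raw_frame["in_focus"],
                "exposure": +1 if raw_frame["brightness"] < 50 else 0}
    return raw_frame, feedback

def processor_b(frame):
    """External unit (e.g. a cellphone): the computation-heavy image
    modulation runs here, off the wearable display's battery."""
    return dict(frame, modulated=True)

def processor_c(frame):
    """On-glasses post-processing: hand the modulated frame to the
    display (1907)."""
    return ("display", frame)
```

A dim, out-of-focus frame thus yields feedback asking the camera to refocus and raise exposure, while the modulated result travels A → B → C → display.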
[00081] Fig-20 illustrates an example of the embodiments of this invention wherein some or all of the chips of a wearable display are packed in a single SOC (system on chip), single-scale package or single-die package.
[00082] Another example of the embodiments of this invention is that Processor (B) (1904 in Fig-19 or 2009 in Fig-20) is connected to the internet to allow internet data to be displayed on the glasses.
[00083] Another example of the embodiments of this invention is that Processor (A) (2002) and Processor (C) (2004 in Fig-20) communicate directly.
[00084] Another example of the embodiments of this invention is that the communications between processors ((A) and (B), (B) and (C), and (A) and (C)) in Fig- 19 and Fig-20 are unidirectional or bidirectional.
[00085] Another example of the embodiments of this invention is that the image capture and display apparatus are battery powered, or receive power from an external source via wired or wireless power transfer.
[00086] Another example of the embodiments of this invention is that the image capture and display apparatus have a single or multiple audio input(s) and output(s) to allow for user instructions to Processor (A), (B), and (C) in Fig- 19 or Fig-20, and also for transfer of information from the Processor (A), (B), and (C) to the user.
[00087] Another example of the embodiments of this invention is that the image capture and display apparatus has a safety feature comprising a design that leaves a margin outside the projected visual field if the projected visual field exceeds 13 degrees from center, with a front-of-eye lens apparatus of more than 60% transparency.
[00088] An example of the embodiments of this invention is shown in Fig-1. An optical element, such as a lens with a holographic optical element (HOE) or diffractive optical element (DOE), is shown at (116). A camera is shown at (111). A free-form prism/mirror is shown at (115). A microdisplay is shown at (114) and a light source is shown at (118). A set of batteries is shown at (117). Controller circuitry is shown at (113).
[00089] Color blindness is defined as a reduced ability to differentiate discrete areas of the visual field by wavelength of light in three ranges: approximately 564-580nm, approximately 534-545nm, and approximately 420-440nm. These ranges are approximate, as shown in Fig-12; the physiologic sensitivities of the cone cells have distributions that exceed these wavelengths. Fig-14 illustrates an example of a test apparatus for color blindness. The Ishihara Color Blindness Test is an internationally accepted form of testing for color blindness; the standard viewer is able to score 100%, while any deviation is considered a form of color blindness. The apparatus shall modulate the cumulative amount and mixture of light emitted from the display to increase or maximize (100% is maximum) the score on the Ishihara Color Blindness Test, or to increase the ability to differentiate colors in the three ranges of wavelength described here (approximately 564-580nm, approximately 534-545nm, and approximately 420-440nm). The algorithm to modulate the displayed image shall vary the total light emission from the display and the mixture of colors (wavelengths of light) emitted.
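A minimal sketch of the band-based modulation described above follows. The mapping of the three wavelength ranges to RGB display channels, and the gain value, are illustrative assumptions rather than a claimed algorithm:

```python
# The three wavelength ranges (nm) named in the text.
BANDS = [("long", 564, 580), ("middle", 534, 545), ("short", 420, 440)]

def band(wavelength_nm):
    """Classify a wavelength into one of the three named ranges, or None."""
    for name, lo, hi in BANDS:
        if lo <= wavelength_nm <= hi:
            return name
    return None

def boost_band(rgb, deficient_band, gain=1.5):
    """Strengthen the display channel matching the viewer's weak band.

    Assumes long -> red, middle -> green, short -> blue; output is
    clipped to the 8-bit channel maximum.
    """
    channel = {"long": 0, "middle": 1, "short": 2}[deficient_band]
    out = list(rgb)
    out[channel] = min(255, round(out[channel] * gain))
    return out
```

In a feedback loop, the gain could be raised stepwise until the viewer's Ishihara score stops improving, matching the "increase or maximize the score" goal stated above.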
[00090] Visual Acuity is defined as the ability to differentiate objects at a distance.
Acuity = 1 / (gap size [arc min])
The standard viewer has a visual acuity of 1.0, and is therefore able to differentiate objects at 1 arc min (1/60 of a degree). Visual acuity less than 1.0 is considered a deficiency in visual acuity. A comparison of 304 and 404 demonstrates a loss of visual acuity: in 304 the horizontal lines of the letter E can be differentiated, while in 404 they cannot. To provide a conceptual description: given a situation whereby the standard viewer perceives 304, and an individual with a deficiency in visual acuity as described above perceives 404, the apparatus shall enable that individual to perceive 304. More formally, the apparatus shall enable a viewer to increase visual acuity, defined as 1/(gap size [arc min]).
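The acuity formula above can be expressed directly; the `required_magnification` helper is an illustrative corollary (not a claimed formula): a viewer who resolves only gaps of g arc min needs a 1 arc-min detail enlarged roughly g times.

```python
def visual_acuity(gap_arc_min):
    """Acuity = 1 / (gap size in arc minutes); 1.0 is standard vision."""
    return 1.0 / gap_arc_min

def required_magnification(gap_arc_min):
    """Illustrative: factor by which a 1 arc-min detail must be enlarged
    so a viewer who resolves only gap_arc_min can differentiate it."""
    return max(1.0, gap_arc_min)
```

A viewer resolving only 2 arc-min gaps has acuity 0.5 and would need roughly 2x magnification of standard-size detail.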
[00091] The algorithm to modulate the image shown on the apparatus shall combine two elements: (1) magnification of the object in question and (2) an increase in contrast. Magnification is defined as an increase in the horizontal and vertical visual arc subtended by the object in question. Contrast (K) is the difference in luminescence between bright (Lh) and dark (Ll) visual regions, defined as K = (Lh - Ll)/Lh, with 0 ≤ K ≤ 1.
K = 0 means there is no contrast, while Kmax = 1.
[00092] The apparatus shall provide an option to invert black and white in the field of view. Although the mathematical differences in contrast remain unchanged when dark and light areas of the visual field are inverted, the eye is trained to detect a small area of light on a dark background far better than a small area of dark on a light background.
[00093] The apparatus shall increase visual acuity (defined as 1/(gap size [arc min])) in an individual with a deficiency in visual acuity (defined as visual acuity less than 1.0) by an algorithm using at least one of (1) increasing the magnification of the object in question (defined as an increase in the horizontal and vertical arc lengths of the object in the visual field) and (2) increasing the contrast (K, defined as (Lh-Ll)/Lh). The apparatus shall provide an option to invert light and dark (black and white) areas depending on the preference of the user.
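The contrast definition and the inversion option can be sketched together; the 8-bit brightness scale is an illustrative assumption. Note that inversion preserves the absolute brightness difference between any two regions, which is the sense in which the mathematical differences are unchanged:

```python
def contrast(lh, ll):
    """K = (Lh - Ll) / Lh, with 0 <= K <= 1 for 0 <= Ll <= Lh."""
    return (lh - ll) / lh

def invert(levels, peak=255):
    """Invert light and dark areas of a list of brightness levels.

    The brightness difference between any two regions is preserved,
    while small bright details now sit on a dark background.
    """
    return [peak - l for l in levels]
```

For example, regions at 200 and 50 invert to 55 and 205; their difference of 150 is unchanged, but the darker region has become the brighter one.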
[00094] Conditions exist whereby visual acuity (1/(gap size [arc min])) is deficient for objects at a near focal length (defined here as a distance between the object and the viewer of less than 1m) or a far focal length (defined here as a distance of more than 1m). The area outside the circle in Fig-5 illustrates deficient visual acuity at far focal length, corrected with a concave lens (area inside the circle). The area outside the circle in Fig-6 illustrates deficient visual acuity at near focal length, corrected with a convex lens (area inside the circle). Conceptually, the apparatus shall enable a viewer with deficient visual acuity for near objects to perceive them in a manner similar to the area inside the circle in Fig-6, and a viewer with deficient visual acuity for far objects to perceive them in a manner similar to the area inside the circle in Fig-5.
[00095] Given a deficiency in visual acuity that depends on the distance from viewer to object, the apparatus shall employ an algorithm that varies (1) the focal length of the camera depending on the distance from the viewer to the object, (2) the magnification of the object in question, and (3) the contrast of the emitted display image, to maximize visual acuity.


CLAIMS

We claim:
1. A display system comprising:
A microdisplay device having
A grayscale control system using pulse-width-modulation and
A set of solid state light sources having at least two colors and
A control system to drive said microdisplay and said light sources and
A set of optical elements including at least one of free-form mirror, half-mirror, Fresnel mirror, HOE and DOE and
A set of optics enabling see-through capability whereby a user simultaneously can see both the visual field in front of the display system and the projected image by said microdisplay and
Image capturing sensor(s) and
Video processing unit(s) with algorithms designed to modulate still and moving video images, capable of, but not limited to, the treatment of genetic, physiological, and psychological conditions involving the visual field including, but not limited to, presbyopia, myopia, hyperopia, cataract, retinitis pigmentosa and color blindness, wherein said algorithm corrects the weakness of the vision of a viewer by enhancing the video images from said microdisplay with at least one of the following capabilities: increasing the brightness of selected color(s), changing the focal length of the objects captured by said image sensor(s), changing the size of the objects, changing the brightness of the objects, and changing the contrast by changing the brightness gap between two visual areas.
2. The display system of claim 1 wherein:
Said system increases visual acuity, defined as 1/(gap size [arc min]), by more than 1%.
3. The display system of claim 1 wherein:
Said system improves color deficiency, defined by at least one of the Ishihara Color Blindness Test score and the ability to differentiate objects with wavelengths 564-580nm, 534-545nm, and 420-440nm, by more than 1%.
4. The algorithm of claim 1 wherein:
Said algorithm modulates the visual field by increasing the horizontal and vertical visual field arc length of an object in view
5. The algorithm of claim 1 wherein:
Said algorithm modulates the visual field by increasing the contrast, defined as the difference in brightness of discrete areas in the visual field and defined as K = (Lh-Ll)/Lh, whereby 0 ≤ K ≤ 1; K = 0 means there is no contrast while Kmax = 1; Lh is the brightness of a discrete area with high luminescence, and Ll is the brightness of a discrete area with low luminescence.
6. The algorithm of claim 1 wherein:
Said algorithm modulates the visual field by changing the focal length of the camera.
7. The display system of claim 1 wherein:
Said system modulates the visual field by inverting the brightness of light and dark areas of the visual field.
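For a normalized grayscale frame, the brightness inversion of claim 7 reduces to a single complement operation; this sketch assumes brightness values in [0, 1], which the application does not specify:

```python
import numpy as np

def invert_brightness(image):
    """Invert light and dark areas of a normalized grayscale image.

    A pixel at brightness b maps to 1 - b, so bright regions become
    dark and vice versa.
    """
    return 1.0 - image

field = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
inverted = invert_brightness(field)
```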
8. The display system of claim 1 wherein:
Said microdisplay is one of a group of Spatial Light Modulators (SLMs), including, but not limited to, LCD, LCOS, micromirror, and MEMS displays, and OLED.
9. The image-capture and display system of claim 8 wherein:
Said system includes a modulator system having a video data processing circuit wherein image data, including at least one of still and moving images, flows from (1) one of said image sensor(s) and an external source to (2) said video processing unit to (3) said control system to (4) said microdisplay, which converts the video data into an image that is projected through (5) said see-through optics into (6) the user's visual field, and Said control system is capable of transferring data both unilaterally and bilaterally.
10. The image-capture and display system of claim 9 wherein:
Said modulator increases color differentiation in an image captured by an image sensor or received from an external video source by modulating the color content of the image data.
11. The image-capture and display system of claim 9 wherein:
Said visual display system increases color differentiation by selectively increasing the brightness of at least one color within said light source.
12. The image-capture and display system of claim 9 wherein:
Said visual display system increases color differentiation by selectively increasing the time composition of at least one color within at least one of said light source and microdisplay.
13. The image-capture and display system of claim 9 wherein:
Said image sensor is able to sense infrared light and said processor is able to modulate the video image data in a manner that the user can differentiate between objects in the absence of light in the visible wavelengths.
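The infrared capability of claim 13 amounts to remapping raw IR sensor intensities into the visible range; a minimal normalization sketch (the sensor-count units and min-max mapping are illustrative assumptions) is:

```python
import numpy as np

def ir_to_visible(ir_frame):
    """Map a raw infrared intensity frame to a visible grayscale image.

    ir_frame: 2-D array of sensor counts (arbitrary units).
    The frame is normalized to [0, 1] so the strongest IR return is
    rendered brightest, letting objects be told apart without visible light.
    """
    lo, hi = ir_frame.min(), ir_frame.max()
    if hi == lo:  # uniform frame: no structure to display
        return np.zeros_like(ir_frame, dtype=float)
    return (ir_frame - lo) / (hi - lo)

raw = np.array([[100.0, 300.0],
                [200.0, 500.0]])
visible = ir_to_visible(raw)
```

A deployed system would likely substitute a fixed radiometric calibration or a false-color palette for this per-frame min-max normalization.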
14. The image-capture and display system of claim 9 wherein:
Said image sensor can modulate the video image data by increasing the brightness of at least one of the entire visual field and specific objects within the visual field.
15. The image-capture and display system of claim 9 wherein:
Said image modulation system identifies objects in the captured image at varying focal distances and modulates the object image area to appear at a different focal distance.
16. The image-capture and display system of claim 9 wherein:
Said image modulation system recognizes specific objects, including, but not restricted to, computer monitors and reading material, and modulates the image in those object areas to a different focal distance.
17. The image-capture and display system of claim 9 wherein:
Said video processing unit consists of multiple components which communicate via at least one of wired and wireless means.
18. The image-capture and display system of claim 17 wherein:
At least one of the components of the video processing unit can communicate with an external unit that is separate from the system, exchanging data through wired or wireless means.
19. The display system of claim 17 wherein:
The display has an array of pixels with memory(s) in each pixel; the memories are written line by line in the array by the control system, and the sequence in which the lines are written is non-sequential.
20. The display system of claim 17 wherein:
The memories in the pixel array of the display are one of SRAM, DRAM, flip-flop, and cascode circuits.
PCT/US2017/042289 2013-09-21 2017-07-16 See through display enabling the correction of visual deficits WO2019017861A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/US2017/042289 WO2019017861A1 (en) 2017-07-16 2017-07-16 See through display enabling the correction of visual deficits
US15/659,619 US10416462B2 (en) 2013-09-21 2017-07-26 See through display enabling the correction of visual deficits

Publications (1)

Publication Number Publication Date
WO2019017861A1 true WO2019017861A1 (en) 2019-01-24

Family

ID=65015236

Country Status (1)

Country Link
WO (1) WO2019017861A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050254116A1 (en) * 2003-11-01 2005-11-17 Silicon Quest Kabushiki-Kaisha Sequence and timing control of writing and rewriting pixel memories for achieving higher number of gray scales
US20090189845A1 (en) * 2008-01-28 2009-07-30 Seiko Epson Corporation Image display device and electronic apparatus
US20140285429A1 (en) * 2013-03-15 2014-09-25 John Castle Simmons Light Management for Image and Data Control
US20150302773A1 (en) * 2013-07-29 2015-10-22 Fusao Ishii See Through Display enabling the correction of visual deficits

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024186199A1 (en) 2023-03-08 2024-09-12 Angstone B.V. Optical see-through device and method for enhanced visual perception
NL2034292B1 (en) 2023-03-08 2024-09-20 Angstone B V See-through device and method for enhanced visual perception

Legal Events

Code Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17918331; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17918331; Country of ref document: EP; Kind code of ref document: A1)